I recently had a discussion as to whether PsExec, the legitimate Microsoft Sysinternals tool often abused by malicious actors for remote code execution, should be included on a list of indicators related to a recent intrusion event. My overall opinion of indicators of compromise (IOCs) as they are actually used (as opposed to their underlying idea) is that they are useful, but far less so than most think; still, the question matters because nearly all organizations demand a list of IOCs to accompany threat reports. So in this particular case, the question becomes: at what point should a “legitimate” tool, abused for malicious purposes, be included in a list of IOCs related to a security event?

First, a digression. This entire conversation would be relatively easy if IOCs were created and used as originally intended: as indicators that a compromise has occurred, or is occurring. An IOC as originally intended is a construct with multiple parts, such that a combination of observables (file name, file hash, IP address, domain name, registry key, etc.) taken together corresponds to an indicator of compromise. In this formulation, an IOC provides a compound structure identifying a potential attack based upon past experience – when applied in this way, IOCs can be very useful and informative in identifying and enriching available data to detect and disposition a potential intrusion event.
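The compound construct described above can be sketched in a few lines. This is a minimal illustration, not any real IOC schema: the indicator name, field names, and values (including the documentation-range IP address) are all hypothetical placeholders.

```python
# A minimal sketch of a *compound* IOC as originally intended: an indicator
# that only fires when ALL of its constituent observables co-occur.
from dataclasses import dataclass

@dataclass
class CompoundIOC:
    name: str
    observables: dict  # e.g. {"file_hash": "...", "dest_ip": "..."}

    def matches(self, event: dict) -> bool:
        # Every observable must be present in the event for the IOC to match;
        # no single field on its own is treated as an indicator.
        return all(event.get(k) == v for k, v in self.observables.items())

ioc = CompoundIOC(
    name="example-intrusion-pattern",  # hypothetical
    observables={
        "file_name": "psexec.exe",
        "dest_ip": "203.0.113.5",      # documentation-range IP, not a real C2
        "registry_key": r"HKLM\SYSTEM\CurrentControlSet\Services\PSEXESVC",
    },
)

# PsExec alone does not match ...
print(ioc.matches({"file_name": "psexec.exe"}))        # False
# ... but the full combination of observables does.
print(ioc.matches({
    "file_name": "psexec.exe",
    "dest_ip": "203.0.113.5",
    "registry_key": r"HKLM\SYSTEM\CurrentControlSet\Services\PSEXESVC",
}))                                                    # True
```

The point of the structure is that the neutral observable (a PsExec file name or hash) only becomes an indicator in combination with the rest of the pattern.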

Yet in reality, IOCs rarely (if ever) meet the criteria of their initial formulation. Instead of being compound items indicating a potential compromise, IOCs in practice are atomic observables designed to identify high-confidence, known-bad behavior. Thus the lists of IP addresses, file hashes, domain names, and other items, each standing on its own with little or no enriching context, comprise the modern (devalued) list of IOCs.

So, knowing that the reality of IOCs, rather than their theoretical underpinning, is what holds for information security practitioners, what should be done with a hash value for a legitimate piece of software maliciously used, such as PsExec?

While an ‘enriched’ IOC might enable some degree of fine-tuning, the hash alone leaves few options for response. Furthermore, how even the debased, atomic indicator is used has bearing on its value (or cost) when included on a list of indicators. For example, many shops (especially smaller, poorly funded security shops) operate largely on a blacklist approach for any suspect or ‘malicious’ indicator – passing a hash for PsExec to this environment might identify some potentially malicious activity, but it could also generate false positives that waste resources. On the other hand, a mature security organization can determine, based on the hash value and its corresponding version of PsExec, whether that version aligns with internal activity, and can then disposition events relative to everyday operations. In the former case, including PsExec in the list of IOCs may be actively harmful; in the latter, somewhat useful.
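The contrast between the two ingestion styles can be sketched as follows. The hash values and the IT-approved set are hypothetical placeholders, not real PsExec hashes; the sketch only illustrates the dispositioning logic.

```python
# Contrast of naive blacklist ingestion vs. context-aware dispositioning.
PSEXEC_IOC_HASH = "aaaa1111"              # hash shipped in an indicator list
IT_APPROVED_PSEXEC_HASHES = {"aaaa1111"}  # version IT legitimately deploys

def blacklist_disposition(observed_hash: str) -> str:
    # Lowest-common-denominator approach: any listed hash becomes an alert,
    # so every legitimate admin use of PsExec is a false positive.
    return "alert" if observed_hash == PSEXEC_IOC_HASH else "ignore"

def contextual_disposition(observed_hash: str, run_by_it: bool) -> str:
    # A mature shop checks the hash against the version its own IT staff
    # use, and who executed it, before raising an alert.
    if observed_hash == PSEXEC_IOC_HASH:
        if observed_hash in IT_APPROVED_PSEXEC_HASHES and run_by_it:
            return "expected-admin-activity"
        return "investigate"
    return "ignore"

print(blacklist_disposition("aaaa1111"))                    # alert (even for IT)
print(contextual_disposition("aaaa1111", run_by_it=True))   # expected-admin-activity
print(contextual_disposition("aaaa1111", run_by_it=False))  # investigate
```

The blacklist shop pays the false-positive cost on every legitimate run; the mature shop pays a one-time cost to encode its environmental knowledge.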

Key to proper assessment and inclusion is the aspect of contextuality: PsExec in isolation is a neutral observation. However, when either aligned with follow-on observations (e.g., execution of newly observed, unsigned, or suspicious binaries) or environmental awareness (e.g., if PsExec is legitimately used in the environment, and if so what version is used by IT personnel), the item becomes far more interesting. Yet, given the “lowest common denominator” approach for indicator distribution, enrichment, and processing, almost no indicator ‘feed’ in existence will enable any of these items short of an incredibly well-curated STIX feed – which, at least within my experience, is akin to a unicorn: I’ve heard a lot about it, but have yet to see one in practice.
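The follow-on-observation side of this enrichment can be sketched briefly. The event shape and labels are illustrative assumptions, not a real SIEM or EDR schema: the idea is simply that what PsExec launches next, rather than its presence, drives the verdict.

```python
# Sketch of enriching a neutral PsExec observation with follow-on events.
def enrich(psexec_event: dict, child_events: list) -> str:
    """PsExec alone stays neutral; what it spawns decides the verdict."""
    for child in child_events:
        # Unsigned or newly observed child binaries make the event interesting.
        if not child.get("signed", True) or child.get("first_seen", False):
            return "suspicious"
    return "neutral"

print(enrich({"tool": "psexec"}, []))                         # neutral
print(enrich({"tool": "psexec"},
             [{"binary": "a.exe", "signed": False}]))         # suspicious
```

This is exactly the kind of compound judgment that an atomic hash on a feed cannot carry.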

Thus, the ability to properly utilize reporting that identifies the use of legitimate tools for malicious purposes seems to hinge on the ingesting organization’s ability to properly disposition and enrich this information within its environment. In an information security landscape where many organizations still rely on simple indicator blacklist approaches, including such items in IOC lists therefore seems not only unhelpful, but potentially damaging in terms of resources wasted pursuing detections. Presumably, mature and well-resourced shops already have the expertise to read contextual information (whether from the originating threat intelligence report, or from performing malware or incident analysis themselves) and thus enrich the individual item.

Overall, information security practitioners cannot simply ignore items such as PsExec, ProcMon, IRC software, etc. But listing such items as malicious absent context or enrichment – the industry default for indicator sharing at this time – seems less than helpful, if not outright harmful, given the capabilities of many organizations whose security programs rely on blind ingestion of indicator lists. Given the “state of the now”, it would seem the best approach for the so-called “benign indicator” is to exclude it from indicator lists, given the potential for misuse or misunderstanding, but to include and describe the use of such tools in malware analysis, incident reports, and similar narrative structures. While this is an inefficient approach that does not lend itself well to “machine speed” sharing and detection, until the vast majority of organizations mature in capability and processing to enable contextual IOC (as opposed to atomic indicator) ingestion – AND threat intelligence and indicator feed companies align to support such efforts – description absent indicator listing seems the best approach.