Recently Juan Andrés Guerrero-Saade and Silas Cutler presented new research on the cluster of activity encompassing Stuxnet, Duqu, and Flame at the Kaspersky Lab-sponsored Security Analyst Summit. (Note for those reading this from US, Canadian, and related government networks: accessing the linked research may display potentially leaked, non-public information which could be construed as a spillage event, so click with caution depending on where you are.) The technical analysis accompanying this work is quite interesting, engaging, and as far as I can tell without fault – spicy images of intelligence community slide decks aside, it is well worth diving into both the blog and the family-specific reports accompanying it. But along with the detailed analysis of these threat entities comes a threat intelligence neologism: the concept of a “Supra Threat Actor”.

The idea comes with an initial caveat and appears to be motivated by the observation that “private sector threat intelligence has largely shied away from abstract methodological discussions” for grasping interrelated activity among cooperating or coordinating entities. Yet this statement rests on a false premise and sets the subsequent argument on ill footing from the start. Multiple frameworks – rooted in abstract conceptions of what constitutes a specific threat actor or cluster of activity – already exist, stretching from the US ODNI’s Cyber Threat Framework to intrusion analysis approaches to the behavior-centric Diamond Model. While none of these methods is perfect and all leave something to be desired, each can readily be extended to consider the sort of overarching adversary for which “Supra Threat Actor” was invented. If there is a fault, it lies in poor application of models capable of handling such a concept, rather than in any gap requiring new and seemingly unnecessary terminology.
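To illustrate that extensibility, consider a minimal sketch (in Python, with field names of my own invention rather than part of any formal specification) in which a standard Diamond Model event simply carries an optional meta-feature for a shared directing authority, allowing otherwise distinct activity groups to be rolled up without any new taxonomy:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DiamondEvent:
    """One intrusion event described by the Diamond Model's four core vertices."""
    adversary: str        # operator or customer behind the event
    capability: str       # tooling or technique employed
    infrastructure: str   # delivery or C2 infrastructure observed
    victim: str           # targeted organization or asset
    # Hypothetical meta-feature (not part of the formal model): a higher-level
    # decision authority shared across otherwise distinct activity groups.
    directing_authority: Optional[str] = None

def group_by_authority(events):
    """Roll events up under any shared directing authority -- the kind of
    aggregation for which 'Supra Threat Actor' was coined."""
    clusters = {}
    for event in events:
        key = event.directing_authority or event.adversary
        clusters.setdefault(key, []).append(event)
    return clusters
```

Nothing about such an extension requires a new term of art; it is a question of applying existing models, not a modeling gap.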

Moving beyond the initial problematic statement, the introduced concept of a “Supra Threat Actor” aims to resolve an inability to break free of “one-to-one equivalence of a ‘threat actor’ to an institution or organization” which supposedly leaves practitioners blind to “multi-institution, multi-country, or multi-group orchestration.” Prima facie, this seems reasonable enough, as we are used to cyber threat attribution assigning a specific activity (or group of activities) to a specific, concrete entity: this is the NSA, that is the GRU, etc. Yet delving a bit deeper, as expressed in various legal proceedings within the US and in private-sector reports delineating specific units and organizations operating under state sponsorship, professional analysis and attribution have long taken into consideration multiple entities operating under a broader authority. Examples include sharing access, information, and tooling, as well as the identification of subgroups, contractors, and assets operating under the temporary control of larger entities with the possibility (if not actuality) of taking their services elsewhere. While these observations carry no unique label in threat intelligence taxonomy, they clearly demonstrate an awareness (if not an explicit naming convention) that state-sponsored activity more often than not involves multiple entities under the same (or even shifting) decision authority, drawing bits and pieces from commonly-held toolkits and potentially infrastructure as well.

Where things get interesting, though – and where the specific analysis linking Stuxnet, Duqu, Flame, and related entities may represent a very distinct outlier – is in the overlap and commonality among technical artifacts. First and foremost, the very basis on which this argument and the original analysis rest – malware analysis and compiled code review – focuses on the tools leveraged by an adversary and, as a result, has no visibility into the motivations, objectives, or operational methodologies of the adversary (or adversaries) in question. We are analyzing the means to some end, and those means need not be specific or unique to the motivation behind the particular end in question. To take an example, many adversaries – from red teams to state-sponsored groups – leverage the Cobalt Strike framework for operations. We can link these adversaries together through technical commonality, but no one would seriously claim that they must all derive from some common point or command authority, or share the same objectives and goals. Correlation through tooling based on extensive malware analysis can therefore reveal links between entities, but the nature of those links must be understood as potentially limited in meaning depending on circumstances.
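As a toy illustration of that limitation (the toolsets below are invented for the example), a simple set-similarity measure over observed tooling will happily score a red team and a state-sponsored operator as closely related when both lean on commodity frameworks:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two observed toolsets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Toolsets invented purely for illustration.
red_team    = {"Cobalt Strike", "Mimikatz", "PowerShell Empire"}
state_actor = {"Cobalt Strike", "Mimikatz", "custom_loader"}

# High overlap (2 of 4 distinct tools shared) despite no shared command
# authority: tooling correlation alone cannot separate common origin
# from commodity reuse.
print(jaccard(red_team, state_actor))  # 0.5
```

The number measures shared means, not shared authority or intent.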

Noting this limitation of technically-focused analysis, the cases described by Messrs. Guerrero-Saade and Cutler diverge significantly from the simplistic Cobalt Strike example in that the entities in question appear to leverage bits and pieces of an overarching “toolkit” of source code and functionality. Instead of simply reusing a tool as entities might with Metasploit or Mimikatz, operations over time are linked through deep technical integration and overlap extending to coding practices and source code functionality. Code reuse analysis over time as a means of tracking an adversary is not novel (as shown in Micah Yates’ 2017 RECON presentation), but it also does not necessarily indicate an overarching authority guiding operations. Within the “modern cyber intelligence environment”, where practitioners range from state-controlled entities (such as militaries or intelligence agencies) to state-directed contractors (Booz-Allen or Lockheed Martin) to pure mercenaries (HackingTeam and DarkMatter), capability and technical fragmentation are not mere possibilities but realities. Thus, while it is certainly plausible that the reuse of specific functionality and the code overlaps across Stuxnet, Duqu, and Flame indicate that these entities emerged from some overarching organizational collaboration and direction, pure code-based analysis does not allow an analyst to dismiss the possibility that these entities employed the same contractor (or even the same developer) at different times, producing the perceived commonalities through external sourcing. There may well be some “Supra” basis to these connections, but the epistemic limitations of the mode of analysis allow this to remain only a possibility, not a certainty.
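A hedged sketch may help make that epistemic point concrete (sample names, years, and function hashes below are entirely invented): overlap scores of this kind can establish shared source or developers over time, but nothing in the scores themselves distinguishes a single directing authority from a common contractor serving different customers.

```python
from itertools import combinations

# Hypothetical samples: a compile year plus a set of normalized function
# hashes extracted during analysis (all values invented).
samples = {
    "sample_a": {"year": 2009, "funcs": {"f1", "f2", "f3", "f4"}},
    "sample_b": {"year": 2011, "funcs": {"f2", "f3", "f4", "f5"}},
    "sample_c": {"year": 2012, "funcs": {"f3", "f4", "f6", "f7"}},
}

def code_overlap(x: set, y: set) -> float:
    """Fraction of the smaller sample's functions shared with the other."""
    return len(x & y) / min(len(x), len(y))

for (name_a, a), (name_b, b) in combinations(samples.items(), 2):
    score = code_overlap(a["funcs"], b["funcs"])
    print(f"{name_a} ({a['year']}) <-> {name_b} ({b['year']}): {score:.2f}")

# The scores establish shared source or developers over time; they say
# nothing about whether that sharing reflects one directing authority or
# the same contractor serving different customers.
```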

To be clear, this critique is not meant to refute links between Stuxnet, Duqu, and Flame, but rather to identify this particular instance as unique (not least because these campaigns were among the first publicly-disclosed complex CNE operations) with additional considerations (and possibilities) in play. Given these concerns and the unique nature of these events (both technically and historically), the Stuxnet-Duqu-Flame combination represents an outlier for analysis and potentially a poor foundation for deriving a new conception of adversary attribution and categorization applicable to other events. Moving forward from the early 2010s, computer network operations become only more muddied, both through the example set by the Stuxnet-Duqu-Flame events and through an ever-increasing “division of labor” within the computer network espionage (CNE) and offensive cyber operations (OCO) spheres.

In particular, imagining a given “APT” or “threat actor” as a monolithic entity housing dedicated developers and operators seems misguided; clustering instead occurs at higher levels of coordination. Just as an entity such as “The US Army” or the “People’s Liberation Army Navy” consists of multiple bodies under an overarching name, with external support parties enabling operations, extensive cyber-nexus operations have similar components and dependencies. Bureaucratic structures and economic efficiencies make it much more likely (without engaging in too much mirror imaging) that an “APT” already represents a cluster of distinct organizations working together (or contributing to a joint project) to develop tooling, deploy capabilities, and execute actions on objectives – some elements of which can be diverted or repurposed to other operations and thereby gain the appearance of being distinct entities in their own right. Breaking this out further, we increasingly observe operations divided among specialist parties – dedicated tooling developers, initial access providers, post-exploitation operators – all guided by targeteers and analysts taking in “the big picture”.
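A brief, hypothetical sketch (all organization names and roles invented) of this division of labor shows how shared specialist contributors across operations can generate exactly the technical overlaps analysts then interpret either as one monolithic actor or as an entirely new one:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contributor:
    """A specialist party contributing to an operation (names invented)."""
    name: str   # e.g. a development shop, access broker, or operator unit
    role: str   # "tooling", "initial access", "post-exploitation", "targeting"

# One public "APT" label may cover several distinct organizations
# contributing to a joint project.
operation_alpha = {
    Contributor("dev_shop_1", "tooling"),
    Contributor("access_broker_2", "initial access"),
    Contributor("unit_3", "post-exploitation"),
    Contributor("analysis_cell_4", "targeting"),
}

operation_beta = {
    Contributor("dev_shop_1", "tooling"),   # same developer, different operation
    Contributor("unit_5", "post-exploitation"),
}

# Shared contributors across operations produce technical overlap that can
# read as one monolithic actor -- or as an entirely new, distinct entity.
print(operation_alpha & operation_beta)
```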

Thus the concept of a “Supra Threat Actor” is not wrong per se, but it muddies the situation unnecessarily by layering new terminology on top of an already shifting landscape that can be described in current terms. The relationships among Stuxnet, Duqu, and Flame may represent a unique combination of the above phenomena (especially when considering potential intergovernmental cooperation and coordination far beyond established relationships such as FVEY) necessitating special care and analysis, but developing an entirely new category with a population of one, when the totality of existing operations seems decently served by existing frameworks, is excessive. Rather than inventing a new categorization for interrelationships among threat actors or adversaries, analysts are better served working through existing observations and conventions to identify links and overlaps in activity and to define these within current terminology. Not only will this produce less confusion, but by keeping with current practice instead of adopting the “Supra” concept, analysts retain the flexibility to capture shifting relationships among parties over time (as organizations change and relationships shift) rather than cementing a “Supra” relationship which may be very specific to a given target or objective and irrelevant for all other matters.

Overall, Messrs. Guerrero-Saade and Cutler provide an excellent and engaging analysis, and the above critique is not meant to take away from or diminish their accomplishment. Yet framing the “takeaway” from their technical work as the uncovering of unique, previously undescribed threat relationships seems disingenuous given past work, and it distracts from otherwise faultless research.

