Few arguments are more frustrating to deal with than those that marshal compelling, accurate observations to plaster over selective examples and willful misrepresentation. Such is the case with a recent video posted by Ralph Langner concerning ICS-specific cyber threat intelligence (CTI). Langner makes some astute observations about the industrial threat landscape that are worth repeating and recognizing, but then pairs these items with a selective representation of publicly available observations to push a specific, predetermined agenda.

First, a disclosure – I previously worked in CTI focused on industrial operations for Dragos. As of 26 October 2020, I no longer work there, but am obviously sympathetic to the discipline, so I have a bias. However, I no longer have a “dog in the fight” of ICS threat intelligence, so I feel free to comment rather widely on the subject to explore Langner’s argument.

Langner first argues that the paucity of ICS-specific attacks limits the usefulness of threat intelligence. On the raw numbers, he is correct – there have been only four known deliberate disruptive attacks against industrial systems that proceeded from initial actions to final objectives. If we view the corpus of “cyber-physical attacks” as consisting of only those completed, (somewhat) successful events, his point stands.

Yet this view is myopic and ignores significant activity by multiple entities to gain initial access to industrial environments. As previously described by myself and, seven months later, by Dale Peterson, ICS-focused attacks are not simple matters – Langner himself notes this by indicating the poor success rate of attackers in their efforts. Speaking from personal experience in non-defender roles, I can attest that future success in operations is ensured by continuous research, target development, and access acquisition to enable an effect at a later time of one’s choosing. Thus we should anticipate multiple access and information gathering operations targeting critical infrastructure verticals – something which has been documented in government releases and various press reports – as preliminary steps toward some future event.

Langner cites the lack of actual incidents as a priori proof of the limited value of threat intelligence, yet this view distorts and misunderstands the basics of threat intelligence and its support of operations. In the scenarios described, we should anticipate – given the likely fallout in terms of repercussions and responses – that a deliberate “cyber-physical” attack on infrastructure in the US, Germany, Russia, China, or similar would be met with consequences. Precisely what those consequences would be is unfortunately unknown, as deterrence strategies are not transparent enough for analysis, but we can reasonably expect that outside relatively permissive conflict areas (such as Ukraine or the Persian Gulf region, where such attacks have taken place) actual effects on objectives would manifest only in the course of war.

In this scenario, effects may not appear except in emergent situations, but the precursor actions necessary to execute them have already taken place. We could “wave our hands” and say that the lack of (current) attacker intention to deliver attacks means we are “safe”, but this seems both foolhardy and negligent. Instead, an understanding of adversary intrusion activity and potential targeting focus (e.g., 2017-2020 intrusions in North America focusing on smaller electric entities instead of larger utilities) can enable meaningful defense, response, and investigation prior to an incident. Commentators like Langner may opine that the lack of attacks shows no need for CTI – while actual asset operators and defenders recognize the ability to identify attacks before they manifest in disruptive events, showing clear, useful value.

Langner then moves on to note that ICS threats are always exaggerated – and on this, I will admit he is largely correct. Media and vendor reporting repeatedly conflates items ranging from scanning to initial phishing with ICS-specific threats, when determining such intentionality is difficult in most cases absent follow-on activity. In this respect, the industry can and should do better – but this is not an argument against ICS-centric CTI so much as against media and public reporting on ICS threats. While the two may overlap, they are not the same – and a corpus of nuanced understanding and thorough reporting of ICS-targeting events – from FireEye’s reporting on the Triton attack methodology to my own work detailing the subtleties of the 2016 Ukraine event – shows how significant, meaningful lessons can be learned from available data to inform ICS defense.

Even in cases of failure – which, as I’ve previously argued, can be said of all ICS-related incidents to some degree – we nonetheless gain insight into intrusion, lateral movement, and persistence mechanisms within ICS networks that can meaningfully benefit defense. Just because the lights remain on, water continues to flow, or other noticeable effects fail to materialize does not mean networks are “safe” – it may only mean an attacker failed to adequately execute their operation. We could rest on our laurels due to adversary incompetence, as Langner appears to suggest in his video, but doing so would represent negligence of the highest order. Instead we should, through ICS-focused CTI, attempt to learn all we can to identify, mitigate, or disrupt attacker operations through all phases of activity, irrespective of that attacker’s maturity or capability. Even the 2017 Triton/TRISIS incident resulted in multiple plant shutdowns and lost production, even though the attackers were unable to execute a truly disastrous safety-focused attack – such impacts cannot simply be shrugged off, and we as defenders should be embarrassed if we are not working to prevent even ham-fisted attacks such as this.

Langner makes a further point on adversary capability, citing the Bowman Dam event as a sign of adversary immaturity, with a specific focus on Iran. While I recognize a level of immaturity in operations associated with Iranian entities (as described by government-linked reporting), such descriptions ignore the rather potent effects Iranian-nexus actors have deployed in their near-abroad: various rounds of SHAMOON, Dustman, ZeroCleare, and possibly a recent Thanos ransomware variant. While largely IT-based, these wiper events linked to Iran have overwhelmingly targeted critical infrastructure and industrial entities. Furthermore, slight changes in access or further penetration into victim environments could unleash a scenario Langner himself visualizes: deploying a disruptive capability within a production environment absent any industrial-specific capability. Such an attack would be as effective in the US as it is in the UAE – the main difference here is not attacker immaturity so much as likely fear of the repercussions of such an event.

As in the examples described above, asset owners and defenders would likely experience nearly all the steps just prior to an attack rather than an outright attack itself, due to adversary restraint and fear of (potentially kinetic) consequences. Yet this restraint does not let defenders off the hook for protecting and monitoring their networks – work that CTI can inform based on both observed adversary activity and an understanding of general tradecraft in network intrusions.

Langner concludes by emphasizing “basic” security controls in industrial networks. While I agree with Langner’s emphasis, I disagree with his dismissive attitude toward actually executing such basics. Overall, asset owners desire to run safe, reliable networks – but must do so while balancing multiple considerations, from operational availability to budgeting to prioritizing maintenance actions during planned downtime. While persons like myself and Langner may overemphasize cyber as a point of investment and improvement, plant managers face many potential items for action. To concentrate minds and demonstrate the need for even supposedly basic steps such as network segmentation (and if you’ve ever dealt with a “flat” network, you know that segmenting it is hardly a “basic” task), case studies and examples drawn from CTI and intrusion analysis allow us to make the case for these efforts. Exhortations and finger-waving as in Langner’s video may seem fine for those of us in a cyber bubble, but plant operators and their managerial and financial masters need more to justify a response. Nuanced, accurate, and hype-free CTI provides this justification, which can help allocate investment and move security forward.

Overall, Ralph Langner – whose team performed analysis of Stuxnet in parallel with other researchers – has made many contributions to the field of ICS network operations and security. But whether out of ignorance or bias rooted in his own commercial offering, he has not merely failed to understand the appropriate applications of CTI to industrial defense, but in many cases has misrepresented the field as well. This is unfortunate, as he retains a position of authority in the field – but hopefully correctives such as this can add further detail and analysis to his opinion piece.