When reporting on cyber-attacks, articles and media frequently (if not exclusively) focus on the damage or immediate result: how many machines were impacted, how much data was compromised, or what (if any) physical consequences emerged from the event. The latter is especially the case with ICS-focused attacks, from Stuxnet to CRASHOVERRIDE to TRISIS. While this emphasis is understandable, it also obscures or ignores an important aspect that serves as either a significant secondary or, in some cases, the primary goal of such attacks: strategic messaging to victim or observing audiences. If we view “cyber” in the state-sponsored context as a form of information operations, aligning strategic communication goals and impacts to cyber events is not merely reasonable, but necessary.

For this discussion, I am adopting the US Department of Defense Joint Chiefs of Staff definition for “strategic communications”:

“This strategic communication concept deals with the challenge of convincing others to think and act in ways compatible with our objectives, whether this means causing them to adopt a specific course of action or to simply understand us better and accept us more.”

Within this definition, strategic communications fit into overall concepts similar to Military Information Support Operations (MISO, the new term for “PSYOPS”), which aim to influence or alter hostile (or suboptimal) attitudes and opinions of foreign entities, with the ultimate goal of shifting behaviors. While such actions are normally carried out through traditional forms of communication, kinetic action can also fall within the bounds of influence operations when the action itself serves as a means of communicating intent, resilience, deterrence, or some other message to either the target of that action or observers of it. The clearest examples are military deception (MILDEC) operations, another pillar of information operations, carried out through feints, demonstrations, ruses, and displays. While MILDEC focuses on altering the behavior or decision-making of opposing force leadership and agents, strategic communication shifts the focus to populations and non-military decision-makers. In this context, a kinetic operation designed to influence a population might take the form of a show of force demonstrating overwhelming capability, thereby undermining popular or public support for resistance.

Shifting to cyber operations (oddly enough, yet another pillar of information operations), cyber impacts can be designed in such a fashion that the true or most significant impact is the perception created by the attack, rather than the damage or immediate result of the attack itself. The examples that most readily come to mind consist of ongoing Russian operations against Ukrainian resources: from the 2015 electric grid attack to CRASHOVERRIDE to “NotPetya” to VPNFilter, all sought (and in all cases except VPNFilter, achieved) disruptive impacts to various aspects of Ukrainian society. The power events are obvious in impact, but the NotPetya attack resulted in significant disruption to Ukrainian society as a result of its initial infection vector: a trojanized update to popular accounting software essentially required for Ukrainian businesses. VPNFilter, while caught before the attack could “go live”, similarly presented the possibility, through the sheer number of infected hosts, of creating disruption via large-scale denial of service (DoS) operations and similar events.

While each of the above is notable for immediate disruption (NotPetya especially so, given how the malware spread to impact entities far beyond the primary target of Ukraine), there is also a more subtle and insidious aspect to these operations. To understand this aspect of events, one must first understand the conflict simmering between Russia and Ukraine since 2013-2014. This conflict included the transfer of (predominantly ethnic Russian) territory from Ukraine (Russia’s annexation of Crimea) and ongoing, low-level conflict between separatists in Ukraine’s east (with extensive Russian assistance) and Ukrainian interests. The conflict emerged from a “people power” movement beginning in late 2013, which forced the pro-Russian (or Russian-backed) President, Viktor Yanukovych, from power in early 2014 and replaced his government with a pro-Western one. Thus, Russia lost a client state – long associated with the Russian nation ethnically, religiously, and geographically going back centuries – to potentially hostile influence. A Ukraine within NATO would represent a far more serious blow to Russian interests than the (relatively tiny) Baltics.

Central to this conflict is a change of authority in Ukraine from one government (Russian-oriented) to another (Western-aligned in outlook and interest) following several years of similar (albeit mostly domestic) political conflict between these interests. From a Russian perspective, actions weakening or delegitimizing Ukrainian authority serve a vital strategic interest in undermining a potentially hostile (or at least non-cooperative) government and replacing it with one more amenable to Russian interests.

Traditional ways of achieving this effect include “classic” pillars of information operations such as MISO/PSYOPS, propaganda, and to a lesser extent MILDEC operations. New to the arsenal are cyber operations that can undermine a population’s confidence and faith in its government’s ability to provide vital (or basic) services and security. Looked at in this fashion, the 2015 and 2016 Ukrainian power events take on a much more significant role beyond their rather limited impact on electric transmission or distribution; such events become signifying items in an ongoing operation to undermine confidence in Ukrainian authorities’ ability to provide or ensure basic public services. In this light, the timing of the operations, at roughly the same time of year in both 2015 and 2016, serves as an explicit message: even suspecting (or knowing) in 2016 that such an attack was likely, Ukrainian authorities were unable to protect civilian infrastructure.

NotPetya expands upon this by moving beyond specific portions of the Ukrainian electric grid to demonstrate an ability (and willingness) to disrupt Ukrainian commercial activity by targeting accounting software used across the country. While most reporting on NotPetya focuses on its impacts beyond Ukraine, impacts within Ukraine itself extended quite widely, touching services ranging from radiation monitoring at Chernobyl to ATMs on the streets of Kiev. In this fashion, NotPetya affected the daily life of Ukrainians at a very noticeable level, again serving to undermine confidence in the current Ukrainian administration’s ability to secure basic resources and infrastructure against attack.

Examined in isolation, each of these events (and similar items since 2014) appears to be a discrete disruptive incident, but viewed together as part of a cohesive, enduring campaign, they represent a sustained effort to undermine confidence in the Ukrainian government. While many have looked to actions in Ukraine as a “test lab” for cyber operations in a permissive environment, such views overlook the significance of these attacks as events “in themselves” sustaining information operations against Ukrainian authorities and influencing the Ukrainian people.

On a related note, suspected Iranian (or other) operations against Saudi Arabia – and especially Saudi Aramco – since 2012 have resulted in multiple disruptive, borderline destructive attacks, from the original Shamoon attack through StoneDrill and potentially the TRISIS safety system attack. In these cases, much emphasis can be placed on the immediate nature of these events: disrupting IT operations or destabilizing ICS operations to produce a noticeable effect on the target organization. But in the background throughout this entire period, Saudi Arabia has attempted to modernize, diversify, or otherwise improve its oil-focused economy through multiple initiatives, including the long-rumored public offering of Aramco shares. When viewed in light of these initiatives, the various rounds of disruptive events fit a more insidious pattern: one of undermining confidence in the Saudi government’s ability to foster, promote, and most especially protect private investment given the external threats facing Saudi society. Thus, while organizations in Saudi Arabia are immediately impacted, the ultimate cost comes in investment, engagement, or other efforts foregone as a result of perceived risk in the Saudi environment.

Broadening our view of the impact of cyber-attacks to encompass strategic communication ensures that the total cost of such events is appreciated. Moving to a US-centric example, the mere hint of operations against the US election system and infrastructure serves to cast doubt on the security and integrity of that system, setting the stage for internal conflict and discord. While immediate impacts may be slight (or even non-existent), the ability to influence populations and politicians by fostering fear and uncertainty can produce powerful, long-lasting effects. From a defensive standpoint, even a successful interdiction and remediation of an attack may still result in lasting anxiety: an attack that “failed” or was limited in its initial effects may nonetheless produce lasting dread that ultimately fulfills the adversary’s interest.

What we as network defenders can do to counter the strategic implications of even unsuccessful attacks is difficult to discern. Certainly, maintaining vigilance and ensuring that cyber defense continues to be effectively conducted and resourced are obvious items. But it may also be necessary to expand our own ability to communicate the precise nature, impact, and implications of attacks to wide and diverse audiences, to fight back against the sort of “fear, uncertainty, and doubt” (FUD) produced through inaccurate or simply misunderstood discussions surrounding events. Greater transparency around defense – including acknowledging the near inevitability of a breach – may thus instill in broader populations a more realistic understanding of how these events play out, and therefore limit their potential secondary, communicative effects. In this sense, greater transparency and outreach to audiences beyond the security community can build a more informed and resilient public in the face of influence operations conducted through cyber means.

Greater transparency surrounding cyber-attacks may seem counter to many organizations’ perceived immediate best interest: to recover from an event while minimizing loss (in money, assets, reputation, or other qualities). Yet when less information is provided about an impact, imaginations can run wild and narratives can be shaped in an information vacuum to take on far more dangerous forms. In the case of attacks on critical but privately-owned infrastructure – from the financial system to power grids – greater transparency into how and why an event occurred, along with reasonable detail on how it was detected and remediated, may work to restore public faith in targeted institutions in the face of sustained attacks designed to sow doubt in the minds of populations.

Selected Additional Resources:

US Governmental Information Operations and Strategic Communications: A Discredited Tool or User Failure? Implications for Future Conflict – Steve Tatham

Information Operations in Strategic, Operational, and Tactical Levels of War: A Balanced Systematic Approach – Bunyamin Tuner

Influence Cyber Operations: The use of cyberattacks in support of Influence Operations – Pascal Brangetto and Matthijs Veenendaal

