The most frustrating type of bad argument to refute is one that features or rests upon a kernel of truth. In the worst, most annoying scenario, one must deal with a counterparty that simply reasserts its position without hesitation, resembling the chess-playing pigeon of Internet fame. More worrying still is the circumstance where the counterparty – for reasons of bias, conflict of interest, or incentive – responds in bad faith, identifying and amplifying the correct components while dissembling around the weaker (or incorrect) positions.

The above interlude accurately describes a niche – but at times heated – discussion surrounding the security of sensors, whether in general security applications or, more significantly, in industrial control system (ICS) networks. The basic argument can be distilled as follows: all security solutions (and, for ICS operations, all effective and integrity-assured processes) rely upon receiving accurate information from observed systems. If operations or security cannot guarantee or verify the integrity of sensor output (or of the sensors themselves), then one cannot guarantee the fundamental integrity of the network or process itself.

The overall situation presents itself as a reductio ad absurdum: because one cannot definitively or completely prove the security and integrity of system sensors, one must therefore accept the premise that this lowest level of device is a fundamental weakness and a near-unsolvable attack vector. Organizations and defenders are thus left in a situation where first principles – those fundamental items that form the bedrock of all subsequent observations and conclusions – are not merely called into question, but identified as unreliable and insecure.

If we accept that sensor security is the foundational and most pressing issue in security (especially for the ICS space), then what should one actually do about the situation? As one would expect, several companies have been founded purely to “solve” this problem. But if fundamental sensor security is the defining issue – the very foundation upon which all subsequent analysis, processing, and identification is built – can any product or methodology actually solve it? Approached from this perspective, we have walked into another reductio ad absurdum: an untenable (at least for realistic operations) stance where no data can be trusted and all platforms (down to Layer 0 of the Purdue Model) are suspect and perceived as fundamentally insecure.

Based on such difficulties, available solutions within the product space typically involve installing a secondary, separate sensor network on top of the existing process telemetry network. This duplication would appear to solve the problem (so long as the duplicated network is not also compromised), but it does so at immense cost in both resources and overhead. Combined with the impossibility of ensuring that even this secondary network is never compromised or maliciously modified, such solutions become thoroughly impractical.
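To make concrete what such a duplicated network ultimately amounts to, the following minimal sketch (with a purely illustrative tolerance and hypothetical readings) simply compares paired readings from a primary and a reference stream and flags divergence; note that a consistent manipulation of both streams would pass undetected.

```python
# Minimal sketch of what a duplicated "reference" sensor network reduces to:
# comparing two streams and flagging divergence. The tolerance and readings
# below are illustrative assumptions, not real process data.
TOLERANCE = 0.05  # assumed relative tolerance for disagreement

def flag_divergence(primary_readings, reference_readings, tolerance=TOLERANCE):
    """Compare paired readings from two sensor networks and flag any pair
    whose relative difference exceeds the tolerance."""
    alerts = []
    for i, (primary, reference) in enumerate(zip(primary_readings, reference_readings)):
        baseline = max(abs(reference), 1e-9)  # avoid division by zero
        if abs(primary - reference) / baseline > tolerance:
            alerts.append((i, primary, reference))
    return alerts

# If both streams are manipulated consistently, nothing is flagged, which is
# exactly the limitation noted above.
print(flag_divergence([10.1, 10.2, 14.7, 10.0], [10.0, 10.2, 10.1, 10.0]))
# -> [(2, 14.7, 10.1)]
```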

Yet the above dilemma of sensor security is not only misleading in its implications, but false in how it characterizes the problem. Importantly, “sensors” is a catch-all term for a complex system of platforms, communications, and interactions linking foundational physics to a human operator. The “sensor” viewed narrowly as a Layer 0, discrete physical device capable of subversion is perhaps the least concerning issue when considering the flow of information (and the multiple touchpoints for subversion or manipulation) from physical process to operator. Essentially, “sensors” represent just one link in an overall chain of information gathering and exchange, starting with translating physical observations into electronic data and ending with the processing of that data by a supervising device or human being. Given this whole-of-information view of sensors as an interlocking system rather than a discrete device, there are in fact multiple possibilities for manipulation and subversion – many of which are far easier (and more probable) to execute than the nightmare of Layer 0 device compromise.

“Sensors” serve as the fundamental observation system through which physical processes or directly observable states are recorded and translated into actionable forms. Such action can be taken by automated systems (e.g., a PLC operating within pre-programmed ladder logic) or by a human operator, with the resulting data format (and speed of transmission and processing) dependent on the actor. Furthermore, this link is not direct, but relies upon multiple interconnecting pieces: the physical process or action, which is observed by the sensor, which then transmits translated process data via a data link to a receiving or processing entity, followed by final delivery to and action by a supervising machine or human being. While modifications to this chain at the lowest possible level (the physical process or its immediate observer) represent the most potent and difficult-to-detect attack vectors, they also remain the most expensive, unlikely, and difficult attacks to execute, given the multiple steps required to achieve success and the need to modify multiple collection points consistently. Essentially, an adversary can either attempt to modify or subvert multiple discrete endpoints at the sensor level, or manipulate the information at the single point where it is ultimately received or acted upon.
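As a rough illustration of the chain described above – and of the point that lower stages are costlier to subvert than higher ones – the sketch below models each stage as a potential touchpoint for manipulation. The stage names, cost labels, and example tampering actions are illustrative assumptions, not a formal taxonomy.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str            # where in the chain the data sits
    relative_cost: str   # rough, assumed cost for an attacker to subvert this stage
    example_tamper: str  # illustrative manipulation at this touchpoint

# Illustrative chain from physical process to supervising operator or machine.
SENSOR_CHAIN = [
    Stage("physical process", "very high", "physically alter the process itself"),
    Stage("sensor (Layer 0 device)", "high", "firmware- or physics-level subversion"),
    Stage("data link / fieldbus", "medium", "inject or replay process telemetry"),
    Stage("receiver / processor (e.g., PLC, historian)", "lower", "modify values in software"),
    Stage("display / HMI to operator", "lowest", "present a false state to the human"),
]

for stage in SENSOR_CHAIN:
    print(f"{stage.name:45} cost={stage.relative_cost:10} e.g. {stage.example_tamper}")
```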

While a true sensor or (unobserved, undetected) process alteration remains the most concerning attack vector, it is also the most difficult, the most expensive, and the hardest to scale. Thus, looking at the history of attacks (especially in ICS environments) and continuing trends, external-party attacks and modifications have come at “higher levels” of the overall sensor stack, typically focusing either on the processing of data in software or on the display of data to the receiver (human or machine). In many respects, this is analogous to signal jamming in electronic warfare. When jamming a signal, you do not jam the transmitter (the source of the signal), for reasons ranging from extreme difficulty to sheer uneconomy. Instead, you jam the receiver, injecting into, overpowering, or otherwise modifying the transmitted signal or the receiver's capability so that the communication chain – from origin to link to endpoint – never completes.

Given these practicalities, difficulties, and adversaries' economization of effort, focusing on sensor-level security or sensor-level manipulation is not only misplaced given the existing environment, but such an emphasis actively draws resources away from efforts more likely to result in improved security. For example, devoting resources and capability to building out a duplicated, “secure” sensor network on top of existing devices diverts significant investment away from securing information processing and display. As a result, while one may be able to assure the integrity of the fundamental data, its subsequent transport, processing, and display remain subject to manipulation or subversion.

Actual, effective security focuses not on duplicating data to identify deviations or manipulations, but on building out robust and overlapping visibility into the overall network and process environment to ensure monitoring of entire attack and communication pathways. Using ICS as our reference point, even something as profoundly concerning as sensor manipulation still requires some degree of adversary control, direction, and delivery of actions on objectives to make such a compromise valuable. All of these activities create observable touchpoints across multiple data sources that can be used for security purposes to identify, diagnose, or detect malicious activity.
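As a rough sketch of what “overlapping visibility” can look like in practice, the snippet below correlates anomalous process readings with independent network or engineering events observed in the same time window. The event fields, feeds, and time window are hypothetical; the point is the cross-check across data sources, not any particular schema.

```python
# Minimal sketch, assuming hypothetical event feeds: an anomalous process value
# alone is low confidence, but the same value coinciding with unexpected
# engineering or network activity is far more actionable.

def correlate(process_events, network_events, window_seconds=300):
    """Pair anomalous process readings with network/engineering events
    seen within the same time window (timestamps in epoch seconds)."""
    findings = []
    for p in process_events:
        if not p.get("anomalous"):
            continue
        nearby = [n for n in network_events
                  if abs(n["time"] - p["time"]) <= window_seconds]
        if nearby:
            findings.append({"process": p, "related": nearby})
    return findings

process_events = [
    {"time": 1000, "tag": "PT-101", "value": 412.0, "anomalous": True},
]
network_events = [
    {"time": 940, "source": "eng-workstation", "action": "logic download to PLC-3"},
]
print(correlate(process_events, network_events))
```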

Ultimately, security against even highly concerning events such as sensor manipulation or supply chain interdiction still relies on the ability to observe, monitor, and detect an adversary attempting to interact with or inject into the compromised environment to make that subversion valuable and actionable. As a result, defenders still have a number of opportunities to identify and defeat such techniques. By focusing not on the supposed sophistication or power of low-level attacks or injections, but rather on the multiple dependencies necessary to make them valuable, controllable, or actionable, defenders remain in a position to control the engagement and defeat adversaries – so long as they are collecting relevant data, correlating it with events, and responding in a reasonable amount of time.