Recently I engaged in a conversation with Dale Peterson concerning the gas explosion events in Massachusetts. For background: following the event in question, multiple unfounded claims of a “cyber” cause circulated, followed by significant pushback from various ICS security experts. Where Dale and I enter the picture, and disagree, concerns comments attributed to the American Gas Association (AGA), as reported by Blake Sobczak:
“…the information we have seen reported in the media is inconsistent with a cyber attack.”
Dale’s comment questioned whether this “speculation” was any better than the initial unfounded claims of a cyber nexus that started the controversy – my response was that the comment represents not speculation, but judgment based on (admittedly) limited evidence. Ultimately, we agreed to disagree, but I think this discussion provides a nice window into an important question concerning intelligence, and cyber threat intelligence in particular: at what point does one make claims, and how does one frame claims based upon known incomplete evidence?
In the Massachusetts case, I support the AGA claim because, as reported, it is framed within the available information: the information observed is not consistent with a cyber attack. That leaves open the possibility that the event had a cyber cause, but notes that the evidence as currently reported does not support such a claim – which is vastly different from the unfounded, tangential claims put forward by others emphasizing a likely cyber nexus. As such, AGA’s statement does not represent speculation – “the forming of a theory or conjecture without firm evidence” – so much as a limited claim based upon limited available information.
From a threat intelligence perspective, this is important because of one simple, seemingly obvious point: we (cyber threat intelligence professionals) will almost never have a “complete” view of the events under consideration. Short of an absolutely perfect forensic investigation – with all log, host, and network data analyzed without error or gap, and with no other, unanalyzed events in existence – the idea of a “perfect” intelligence picture for a cyber event (or really any event) is an absolute fantasy. Yet at the same time, consumers – whether customers, stakeholders, or supported defenders – demand (and need) intelligence to frame and support decision-making.
The solution to this dilemma is framing conclusions based on available evidence while recognizing the limits that evidence imposes. Brief as it is, this is what I find in the AGA’s claim – not speculation, but informed commentary on available facts. From an intelligence professional’s perspective, this is the same as forming an assessment of activity when the analyst knows they do not have the complete picture – since such a state is likely impossible and unattainable. Thus some expression of judgment and confidence (or, to put it derogatorily, “weasel words”) is necessary not only to provide an assessment, but also to convey how, and to what confidence level, such an assessment was made.
This may seem pedantic to some and elementary to others, but ensuring mutual comprehension between intelligence professionals and intelligence consumers on this point is necessary, for failing to do so can lead to disagreements such as that between Dale and myself – where claims based upon limited information (since that is all that is available) are dismissed out of hand as “speculation” when in fact they represent the best available assessment in an imperfect information environment.
In many ways, this reminds me of my own place in the overall intelligence picture when I was in the Navy: providing time-sensitive support to tactical units with best-available information, also known as threat “indications and warning” (I&W). In the classic intelligence distillation picture, where one starts with raw data and ends (after several weeks or months) with finished intelligence, I&W resides somewhere in the middle: trying to provide clear, actionable data as quickly as possible to support immediate- or near-term decision-making. As such, one must make claims based upon an incomplete set of data to ensure valuable (and potentially vital) information is pushed to consumers while still actionable. At the same time, framing such notifications and indicating confidence and potential gaps is equally important – knowing how the information was obtained (to the extent possible), or what it may leave out, is vital to the consumer in assessing the value and accuracy of the data with respect to the current situation.
As a result, you could say I have lived this discussion intensely for quite some time, and therefore get somewhat sensitive when speculation is conflated with measured judgment on (acknowledged) incomplete data – because the two are dramatically different.
To close this (mercifully brief) post: one should certainly refrain from making claims or judgments in the absence of evidence. But necessity and reality demand that professionals regularly assess matters based upon incomplete, patchy data, because complete data will likely never materialize and consumers require some assessment as quickly as possible while retaining some degree of accuracy. Thus defenders and decision-makers in nearly all walks of life must accept and learn to work with assessments based upon incomplete data (and recognize that the assessments they already receive are founded upon incomplete information) – and factor this into their own evaluation of an intelligence product. Similarly, intelligence professionals must identify and communicate the limits of accuracy and confidence within their reports so that decision-makers can make appropriate calls on what they receive.
Overall, it would be nice to live in a world where we had regular, sustained access to complete and accurate information on any and every event. We do not live in that world. But to dismiss narrowly framed statements that acknowledge their limited evidentiary basis – quite different from speculation – is to raise an ever-higher barrier to the evaluation and timely dissemination of information as situations develop. Recognizing that we do not live in a perfect world and never will, while keeping constant reminders of our limitations, is vital to success in this business.