The School of Athens is one of the most famous images of Renaissance painting, blending Classical historicism with an increasing appreciation for the intellectual heritage passed down from Greece to the Western world. The figures at the center of the image represent divergent views of how we reason about and understand the world: the elderly Plato, pointing up to the world of Forms from which all of reality springs, and Aristotle, pointing toward the Earth (level, but not necessarily ‘down’), emphasizing an evidence-based approach of observation from which general principles arise. As an aside, Aristotelian logic, centered on the syllogism, concerns the movement from general principles to specific observations – but Aristotelian metaphysics, and Aristotle’s influence on science, stress the formation of general principles from particular observations. Essentially, the School of Athens depicts the tension between a deductive and an inductive approach to understanding reality.
All of the above may seem irrelevant to the field of information security, but I find the underlying lessons of (and tensions between) these views to be incredibly informative in understanding how we, as security professionals, process and understand observations in executing network defense. To focus the conversation, I find the following general alignment to hold: ‘bottom-up’ approaches to network security – aggregating multiple data points to form a complete whole, as in anomaly detection – correspond to an inductive approach to security; while ‘top-down’ pursuits, which move from general conceptions of adversary activity to specific actions, align with threat-centric analytics. Individual observables – in many cases corresponding to the much-lauded but insufficiently robust concept of ‘indicators of compromise’ – simply form the background for these two more fundamental conceptions of observation.
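To make the distinction concrete, a minimal sketch of the two postures follows. The field names, process names, and thresholds are purely illustrative assumptions, not a reference to any particular product, telemetry model, or real detection rule.

```python
# A minimal sketch contrasting an anomaly-based (inductive) check with a
# threat-centric (deductive) one. All fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Event:
    host: str
    process: str
    parent: str
    dest_port: int
    bytes_out: int

def anomaly_alert(event: Event, baseline_bytes: float) -> bool:
    """Inductive / bottom-up: flag any single observable that deviates from a
    learned baseline, with no reference to adversary intent."""
    return event.bytes_out > 10 * baseline_bytes

def threat_centric_alert(events: list[Event]) -> bool:
    """Deductive / top-down: start from a hypothesis about adversary behavior
    (a script host spawned by an Office parent, followed by a large outbound
    transfer) and look for that composition of observables."""
    staged = any(
        e.process in {"powershell.exe", "wscript.exe"}
        and e.parent in {"winword.exe", "excel.exe"}
        for e in events
    )
    exfil = any(e.dest_port in {443, 8443} and e.bytes_out > 5_000_000
                for e in events)
    return staged and exfil
```

The first function asks “is this unusual?”; the second asks “does this match a hypothesized adversary behavior?” – which is the distinction the rest of this piece turns on.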
Taking the School of Athens as our guide toward greater understanding of information security practices, both approaches certainly have their advantages: an inductive approach, building up from individual data points, is forever anchored in actual data (or, ‘reality’), ensuring its relevance to actual operations; meanwhile, deductive pursuits center on fundamental goals for adversary action, thus orienting observation toward the general trends and requirements of network intrusions. Yet both also have their drawbacks. Inductive reasoning in security planning risks getting lost in individual details while ignoring (or missing) the larger picture those details tell – or, even worse, wrongly interpreting evidence from very specific events and arriving at a false general conclusion. Deductive approaches risk missing potential attack methodologies or techniques through failures of imagination or scope, thus failing to capture entire classes of attack progression – irrespective of the available evidence – because of how detection and alerting were established.
While each has its pitfalls – and arguably the consequences of a deductive failure are far greater – experience and reasoning guide me toward a deductive, threat-focused perspective for designing security detections and alarms. Some of this thought is already captured in previous discussion on threat analytics, but in this instance I wish to explore some of the deeper theoretical underpinnings behind a threat-centric (deductive) approach relative to an anomaly-focused (inductive) mechanism.
First, the scope and range of possible observables (the ‘world of the possible’) within the field of information security is, surprisingly, circumscribed. While modern technology enables a great variety of actions, from a security practitioner’s perspective (or, for that matter, an adversary’s), the realm of relevant items is somewhat finite: mechanisms for communication, for persistence, for information gathering, and for movement. Many specific examples exist built on these more fundamental principles, but overall the scope of the possible is far narrower than what one finds in the physical universe when conducting more formal scientific exploration. As a result, while an inductive approach can ensure potentially overlooked phenomena are captured via observation toward building more general principles, within the information security realm a finite set of potential events (or event types) means observable phenomena are amenable to grouping and characterization in light of general, theoretical principles. Viewed in light of threat behavior, far more variety exists in how specific implementation actions are grouped and deployed than in the corpus of discrete phenomena themselves (e.g., watering hole attacks, credential capture, or exploit deployment).
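One hedged way to picture this finite vocabulary is a small mapping of many concrete techniques onto a few fundamental mechanisms. The category and technique names below are illustrative assumptions (loosely resembling the tactic groupings popularized by frameworks such as MITRE ATT&CK), not an exhaustive taxonomy.

```python
# Illustrative only: many concrete techniques collapse onto a handful of
# fundamental mechanisms, which is what makes deductive grouping tractable.
MECHANISMS = {
    "communication": ["dns_tunneling", "https_beaconing", "cloud_service_c2"],
    "persistence": ["scheduled_task", "registry_run_key", "service_install"],
    "information_gathering": ["credential_capture", "directory_enumeration",
                              "screen_capture"],
    "movement": ["remote_service_execution", "admin_share_copy",
                 "stolen_session_reuse"],
}

# The 'world of the possible' stays small even as technique variety grows:
# four mechanisms here versus a dozen concrete implementations.
print(len(MECHANISMS), sum(len(v) for v in MECHANISMS.values()))  # 4 12
```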
Thus, a security professional can view multiple paths – using the same general ‘building blocks’ – toward operational goals. Defining and bounding behavior for purposes of detection and response thus depends on identifying those collections of behaviors that align with an overall malicious concept. Building up from individual observables – a collection of atomic indicators or anomalies – can develop a picture of maliciousness, but because this ‘bottom-up’ approach is unbounded it can never be definitively completed. The observer, or security practitioner, must instead continue collecting and observing against a continually evolving picture of the broader trend suggested by individual observables. The result is increased analyst workload, along with greater time and effort required to coalesce individual observables into a general picture of the active malicious intrusion.
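A minimal sketch of what ‘bounding’ behavior around a malicious concept might look like follows. The goal definition, stage names, and behavior labels are assumptions chosen for illustration; the point is only that a deductive analytic has a defined completion state, whereas bottom-up accumulation does not.

```python
# A hypothetical 'lateral movement' goal expressed as a closed set of stages.
LATERAL_MOVEMENT_GOAL = {
    "credential_access",      # e.g., credential material read from memory or disk
    "remote_authentication",  # e.g., a new privileged logon from a workstation
    "remote_execution",       # e.g., service creation or scheduled task on the target
}

def goal_satisfied(observed_behaviors: set) -> bool:
    """The analytic completes once every stage of the hypothesized goal has
    been observed -- a closed set, unlike open-ended bottom-up accumulation."""
    return LATERAL_MOVEMENT_GOAL <= observed_behaviors

# As telemetry is mapped to behavior categories, the analyst checks coverage
# of the hypothesis rather than amassing raw indicators indefinitely.
print(goal_satisfied({"credential_access", "remote_authentication"}))  # False
print(goal_satisfied({"credential_access", "remote_authentication",
                      "remote_execution"}))                            # True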
Second, and related to the above, a threat-centric, deductive approach to security alert development brings focus to observation and analysis. By postulating general conceptions of malicious activity and purpose, the analyst can collect individual observables into a conceptual framework explaining why they occurred and to what end, focusing investigation and remediation. Admittedly, this process can lead to cognitive bias when not rigorously applied, engendering false or misleading conclusions as analysts attempt to fit data to a pre-existing picture. However, so long as this significant trap is avoided, the deductive approach provides speed and efficiency in evaluation.
The inductive approach relies on the meticulous gathering of substantial amounts of evidence in order to build a more complete picture of reality for an analyst to react to. If early reaction is prioritized, an inductive approach will produce significant false positives and analyst fatigue, as individuals are tasked with building out the more general picture from scarce pieces of evidence. Waiting until sufficient evidence is gathered to completely (or even adequately) support an inductive conclusion about a possible security event builds certainty and avoids the traps of overreaction, but at the very significant cost of dramatically increasing the amount of information required (and the time over which such information must be gathered). The result is delayed response, as the approach must observe multiple malicious activities before supporting an actionable, firm conclusion.
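This trade-off can be sketched as a single evidence-threshold parameter: a hypothetical inductive pipeline that alerts only after k corroborating observables trades false positives for delay. The scores, threshold, and values of k below are arbitrary assumptions used solely to show the shape of the trade-off.

```python
from typing import List, Optional

def inductive_verdict(observable_scores: List[float], k: int,
                      min_score: float) -> Optional[int]:
    """Return the time step at which at least k observables exceeding
    min_score have accumulated, or None if the evidence bar is never met."""
    corroborating = 0
    for t, score in enumerate(observable_scores):
        if score >= min_score:
            corroborating += 1
        if corroborating >= k:
            return t
    return None

# Arbitrary example scores: a low bar (k=1) fires on the first noisy event,
# while a high bar (k=4) waits until step 9 -- certainty bought with delay.
scores = [0.7, 0.2, 0.1, 0.8, 0.3, 0.2, 0.9, 0.1, 0.2, 0.75]
print(inductive_verdict(scores, k=1, min_score=0.6))  # 0
print(inductive_verdict(scores, k=4, min_score=0.6))  # 9
```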
Lastly, and related to the concept of threat profiling, a deductive approach enables an organization to frame defense from the perspective of likely events and risks. While this approach will fail to capture true outliers and ‘black swans’, it should – if properly executed – hold up against the most relevant threats facing the organization. The deductive approach from first principles – in this case, threat behaviors and objectives – enables the allocation of scarce resources (time, money, and personnel) against the most relevant threats based upon a general conception of their operation.
Conversely, an inductive approach, while arguably more complete and capable of capturing true outliers that fall outside preconceived notions of attack, does so at great cost. Evaluating and attempting to formulate a broader narrative around individual observables – outside of context – means security analysts, facing atomic detections, must attempt to mold these into a coherent whole. Generally, the data are insufficient to do so without significant inference, and absent guiding principles for such inferences, analysts are prone to mistakes or to simply failing to grasp the potential implications of otherwise harmless-seeming data.
Overall, humans think in stories. While building up general theories of the world from observation aligns most accurately with our idealized conception of science, in practice individuals formulate judgments and assessments of phenomena within pre-conceived constructs. As a result, people take more general conceptions of behavior or phenomena and attempt to ‘fit’ observables within this preconception. Given this proclivity, an inductive-focused approach risks forcing a counter-intuitive way of reasoning onto practitioners while still failing to challenge or shape their preconceptions of reality – resulting in the worst of both worlds. When sufficient intellectual and judgmental rigor is available – along with the necessary amount of time – an inductive approach can certainly capture and categorize a far wider variety of activity.
Unfortunately, we live in reality and are bounded by the limits of our own psychology. Therefore, rather than attempting to make human cognition into something it is not, let us embrace and take advantage of our way of thinking and arm it with proper first principles. Building on a deductive model of information security analysis and detection, we as practitioners must place great emphasis on defining and determining ‘first principles’ to fuel subsequent judgment. This requires substantial investment in understanding adversary actions, intent, and operational possibilities. While not easy, the payoff is substantial given the advantages described above, and the approach works within – rather than against – our own proclivities as human beings.
Ultimately, we have arrived at a rather odd conclusion: in the effort to better appreciate and understand observable reality within the security space – bounded as we are by our own cognitive predilections and capabilities as human beings – we must take the counter-intuitive action of embracing a Platonic approach to framing reality rather than the Aristotelian focus on evidence gathering toward a bigger picture. The approach is imperfect, as are our minds. But to ensure the greatest possible accuracy within an approach that will forever be bounded by our own limitations as individual reasoning entities, we as security professionals must apply the greatest possible rigor in formulating the general principles of adversary behavior and intent that inform our investigations. Failure to do so – an inability to question assumptions or pursue second-order questions – dooms this otherwise most effective approach to security monitoring and response to failure.