A common statement heard in information security circles these days is “the perimeter is dead.” The concept behind the statement is simple and seemingly obvious. Historically, security professionals only dealt with two networks: the “home” network (which was managed, safe, and trusted) and the “outside” or “external” network (regarded as risky, if not outright dangerous, and uncontrolled). Separating these two was the “perimeter” – the classic example being a firewall governing what traffic is permitted to flow between these networks. From a security standpoint, operations simply meant updating the firewall or other network boundary rules and controls, with the assumption that the internal environment was “safe”. This view resulted in the “M&M Theory of Security” – a crunchy outer shell protecting a soft, more vulnerable interior, the product of robust external network controls paired with little to no internal visibility or monitoring.

Move forward in time, and the adoption of cloud-based models, vendor access to sensitive networks, and “bring your own device” cultures produced an increasingly porous perimeter – to the point where a well-defined and -regulated network boundary ceases to exist. In this sense, the perimeter really is dead – and convenience- and service-based models killed it. To meet this “new” landscape, organizations are pushed to adopt “zero trust” models – where no device is deemed inherently “good” or “safe”, but rather the threat landscape effectively extends to the interior. Whereas a perimeter-based defense focused on classical network security monitoring, zero trust dives down to the host level and emphasizes host-based monitoring and endpoint detection and response (EDR) products. While this is a powerful concept with much to recommend it, at its most zealous it represents a pivot to another extreme on the security continuum.

Essentially, both concepts – “perimeter” and “zero trust” – are deeply flawed oversimplifications of far more complex issues. I would argue that the traditional “perimeter” was hardly the hard-and-fast boundary it was held to be, while at the same time networks feature (or are capable of) sufficient diversity and control that a complete “zero trust” model can quickly appear excessive. The reason for this is that well-designed and -implemented networks are (ideally) not flat, undifferentiated objects, but rather complex entities that, like onions and ogres, have layers. In this sense, a network can contain varying levels of trust, and multiple perimeters with varying degrees of enforcement.

When looked at from this perspective, the idea of a single, definitive perimeter between external and internal has long been deprecated (arguably as soon as the first DMZ was introduced in network models) and has little value or meaning. However, it would be pedantic to simply state this as an observation that undermines two somewhat extreme viewpoints of security models. Rather, recognizing and embracing the fact that networks can (and should) contain multiple enclaves – hence, multiple perimeters – offers a host of options for security architecture, detection, monitoring, and response. The issue is not that network security monitoring is impossible because of BYOD, encryption, cloud architectures, or other reasons, but rather that the classical implementation and model for network security monitoring must change to follow operations: moving down into subnets and LANs for inter-network monitoring, and moving out into the cloud and similar environments to capture external dependencies.

In this sense, network defenders and security architects can begin envisioning multiple enclaves instead of one single, monolithic network – some services may be shared across all enclaves (such as Active Directory (AD) for the typical Windows domain environment), but traffic between enclaves, and between enclaves and the outside world, can face varying degrees of restriction and monitoring. Applied in this fashion, the “zero trust” model that seems so tempting in light of the current state of IT environments becomes instead a “variable trust” model. When adopted in this sense, devices (such as BYOD or contractor equipment) can be placed on dedicated enclaves featuring lower levels of trust than the network’s AD servers, which hopefully are quite trusted and safe. Between these enclaves, network traffic can be blocked outright, limited to only necessary functions, or some combination of the two, while being monitored through internal taps and visibility. Within this model, an untrusted enclave (the “BYOD LAN”) can coexist with a highly trusted enclave (AD servers and similar items). Hosts should still feature significant monitoring and visibility, but different controls and restrictions can be placed on items within enclaves, and the traffic between them monitored and controlled.
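To make the idea concrete, the following is a minimal sketch (in Python, purely for illustration) of what a “variable trust” policy between enclaves might look like as data. The enclave names, trust values, and allowed-flow table are hypothetical, and real enforcement would live in firewalls, switch ACLs, or segmentation gateways rather than application code.

```python
"""
Minimal sketch of a "variable trust" enclave policy. Enclave names, trust
levels, and the allowed-flow table are hypothetical illustrations, not a
reference architecture. Real enforcement would sit in firewalls, switch
ACLs, or segmentation gateways, with violations fed into monitoring.
"""
from dataclasses import dataclass

@dataclass(frozen=True)
class Enclave:
    name: str
    trust: int  # 0 = untrusted (e.g. BYOD), 10 = highly trusted (e.g. AD)

BYOD = Enclave("byod-lan", trust=1)
USER = Enclave("user-lan", trust=5)
AD = Enclave("ad-servers", trust=9)

# Allowed flows between enclaves: (source, destination, dest_port).
# Example: BYOD devices may authenticate against AD (Kerberos/LDAP) but
# nothing else; user workstations get a broader, still explicit, set.
ALLOWED_FLOWS = {
    (BYOD.name, AD.name, 88),    # Kerberos
    (BYOD.name, AD.name, 389),   # LDAP
    (USER.name, AD.name, 88),
    (USER.name, AD.name, 389),
    (USER.name, AD.name, 445),   # SMB to domain shares
}

def flow_permitted(src: Enclave, dst: Enclave, dst_port: int) -> bool:
    """Deny by default; permit only explicitly listed inter-enclave flows."""
    if src.name == dst.name:
        return True  # intra-enclave traffic governed by host-level controls
    return (src.name, dst.name, dst_port) in ALLOWED_FLOWS

print(flow_permitted(BYOD, AD, 88))    # True  - authentication is allowed
print(flow_permitted(BYOD, AD, 3389))  # False - RDP from BYOD to AD is not
```

The deny-by-default stance is the important part: anything not explicitly permitted between enclaves is both blocked and, ideally, logged through those internal taps for review.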

The concept of “the perimeter is dead” lives alongside another thought that “network security monitoring is dead” – the latter because ever-increasing amounts of encryption in network traffic essentially make communication content invisible. While it is certainly true that the amount of encrypted traffic is increasing to the point where the majority of external traffic flows leverage some type of encryption, this in and of itself does not invalidate the network security monitoring approach, nor does it necessitate a purely host-based endeavor. Rather, network-focused monitoring will continue to add significant value (even for encrypted communications) in any of the concepts discussed thus far – perimeter-based, zero trust, or enclave-based architectures.

Even when traffic is not completely visible or transparent for analysis, value still exists in monitoring network communication as a correlating element with host-based observables, or to fill in gaps where host-centric monitoring may be impossible (such as in an industrial control system network). In this case, even when the underlying traffic is securely encrypted, traffic flows and metadata (even as simple as source and destination hosts and ports combined with traffic volume) still provide value in framing a potential situation, determining if suspected malware contacted a command and control (C2) node, or identifying if exfiltration is taking place. Complete visibility may not be possible, but network artifacts or flows can be aligned with process execution and host-based artifacts to develop a holistic vision of an event.
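As a rough illustration of how much metadata alone can reveal, the sketch below triages NetFlow/IPFIX-style records using nothing but source, destination, port, and byte counts. The record fields, watchlist address, and byte threshold are assumptions made for the example; a real pipeline would baseline per-host behavior rather than rely on fixed numbers.

```python
"""
Sketch of flow-metadata triage over NetFlow/IPFIX-style records, assuming a
simple dict per flow (src, dst, dst_port, bytes_out). The watchlist, byte
threshold, and field names are illustrative placeholders only.
"""
from collections import defaultdict

SUSPECTED_C2 = {"203.0.113.7"}          # documentation-range example address
EXFIL_BYTES_THRESHOLD = 500 * 1024**2   # 500 MB outbound from a single host

flows = [
    {"src": "10.1.20.15", "dst": "203.0.113.7",  "dst_port": 443, "bytes_out": 4_096},
    {"src": "10.1.20.15", "dst": "198.51.100.9", "dst_port": 443, "bytes_out": 650_000_000},
    {"src": "10.1.30.22", "dst": "192.0.2.10",   "dst_port": 443, "bytes_out": 12_000},
]

alerts = []
outbound_totals = defaultdict(int)

for f in flows:
    # Even with TLS hiding content, the (src, dst, port, volume) tuple alone
    # can indicate contact with a suspected C2 node...
    if f["dst"] in SUSPECTED_C2:
        alerts.append(f"possible C2 contact: {f['src']} -> {f['dst']}:{f['dst_port']}")
    # ...or an unusually large outbound transfer suggestive of exfiltration.
    outbound_totals[f["src"]] += f["bytes_out"]

for host, total in outbound_totals.items():
    if total > EXFIL_BYTES_THRESHOLD:
        alerts.append(f"possible exfiltration: {host} sent {total / 1024**2:.0f} MB outbound")

for a in alerts:
    print(a)
```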

More importantly, within an enclave model where there are multiple, semi-permeable perimeters within the overall network, the combination of host and network visibility enables robust monitoring for and detection of lateral movement techniques. Even in cases where the latest-and-greatest credential harvesting software captures credentials and an attacker begins to leverage RDP or other legitimate means to pivot through the network, host-based artifacts (such as the steps leading to or during credential capture, or post-access steps such as system survey activity) can be combined with network flows (RDP or RPC connections in the network) to build a more complete picture of an event, identify what techniques the attacker deploys, and then frame out a remediation and response plan.
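A simplified example of that correlation follows. The event shapes, field names, and five-minute window are assumptions made for illustration, but the pattern of pairing a host-based credential-access artifact with a subsequent RDP flow from the same host captures the idea.

```python
"""
Sketch of correlating host-based artifacts with network flows to surface
possible lateral movement. Event shapes, field names, and the 5-minute
correlation window are assumptions for illustration; real telemetry (EDR
process events, flow records) would be far richer.
"""
from datetime import datetime, timedelta

host_events = [
    # e.g. EDR-style record of suspected credential-access tooling
    {"host": "WKSTN-042", "time": datetime(2020, 5, 1, 10, 2), "process": "mimikatz.exe"},
]

network_flows = [
    # e.g. flow record showing outbound RDP shortly afterwards
    {"src_host": "WKSTN-042", "dst_host": "FILESRV-01", "dst_port": 3389,
     "time": datetime(2020, 5, 1, 10, 5)},
]

WINDOW = timedelta(minutes=5)
RDP_PORT = 3389

def correlate(events, flows, window=WINDOW):
    """Pair credential-access events with RDP flows from the same host
    inside the time window; each pair is a lateral-movement lead."""
    leads = []
    for ev in events:
        for fl in flows:
            if (fl["src_host"] == ev["host"]
                    and fl["dst_port"] == RDP_PORT
                    and timedelta(0) <= fl["time"] - ev["time"] <= window):
                leads.append((ev, fl))
    return leads

for ev, fl in correlate(host_events, network_flows):
    print(f"{ev['host']}: {ev['process']} followed by RDP to {fl['dst_host']} "
          f"within {WINDOW} - investigate for lateral movement")
```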

To recenter this conversation on the original idea discussed, “the death of the perimeter,” envisioning network security as an all-or-nothing approach where one either possesses a firm external boundary (the perimeter exists) or no such thing is present (no perimeter, adopt zero trust) seems intellectually lazy and disingenuous. This is not to say that zero trust is a bad idea – in fact, it is a really good idea when used in coordination with other detection mechanisms and methodologies. The problem is that, when combined with the artificial premise of a single “perimeter”, the approach bakes in security gaps where they need not exist. By understanding and embracing a view where networks consist of multiple, semi-distinct enclaves possessing varying degrees of trust and required functionality, robust network security monitoring and traffic control techniques can re-enter the fray alongside host-centric zero trust models to build truly fantastic security programs.

While all of this sounds easy, and in the network case we are really dealing with technical concepts and technologies that have existed for years, implementation still requires work. Too many networks, as a result of legacy decision-making or inefficiencies, remain “flat” with little differentiation or variation. Potential physical boundaries may go unexploited (e.g., a physical device is in place separating networks, but separate LANs are not implemented and traffic is not captured), and policy decisions (separating “user land” from infrastructure or production equipment) may go unenforced. This type of decision-making must end, for these failures to embrace latent defensive and administrative advantages facilitate an attacker’s mission while making life ever more difficult for dedicated network defenders.
