As of this writing (late December 2019), an argument has continued across multiple social media formats on the value (or harm) created by “offensive security tools” (OSTs). As with most discussions taking place online, the discourse – rooted in an originating, absolutist blog post – has rapidly devolved into rival camps amounting to “yay” or “boo” choruses with respect to security testing tools and software. While such crowing from the rooftops afforded by Twitter may make adherents of either side feel better about themselves, shrill repetition of increasingly maximalist positions does nothing to resolve the underlying problem or produce growth from it.
Essentially, rival sides in the information security ecosystem – pure “blue teamers” and die-hard “red teamers” – have staked out increasingly extreme views in response to a provocative initial argument. To distill the originating premise: offensive security tools, by marrying opportunity (vulnerabilities) with capabilities (exploits or similar mechanisms), significantly lower the bar to intrusion operations. When such tools are publicly released, they proliferate capability to various actors, accelerating attacker evolution while disadvantaging defenders (especially those outside of well-funded, well-equipped organizations). This is a very real and concerning problem given the spread of such tools – from publicly available PowerShell frameworks to obfuscation mechanisms to game-changing utilities such as Mimikatz – across contexts ranging from sanctioned red team exercises to state-sponsored hacking. Yet in decrying the situation, do we come any closer to fixing it?
First, we must look at the present situation through two discrete lenses: a functional, practical one, and an ethical one. In the case of the former, sane discussion can be had about the operational net loss or gain of introducing new offensive capabilities into the public (or even commercial) space. In the case of the latter, conversation rapidly devolves into assignments of blame and individual culpability for enabling “bad things” to happen. The former enables productive, reasonable conversation to solve problems; the latter leads to name-calling and increasingly shrill communication. Sadly, and likely because of the communication medium involved (Twitter, blog posts [yes, like this one], etc.), the ethical and, more importantly, personalized discussion appears to have gained supremacy. Thus a discussion that could hinge on nuanced conceptions of disclosure, defensive detections, and responsible release has instead degraded into so much name-calling and sanctimonious posturing.
If we truly wish to take a heated and increasingly emotional discussion and turn it into something reasonably productive – and, more importantly, actionable within the bounds of reality – what would this look like? First, for all the hand-wringing over the public release of offensive tooling, such disclosures simply are not going away. Attempts at legal regulation, licensing, or prohibition would drive activity now available on GitHub into underground markets and forums – just as such things existed in the 90s and 00s. While this would raise “barriers to entry” for parties seeking to adopt the latest and greatest in offensive tooling, such an approach cannot effectively eliminate development and dissemination; it merely shifts security toward defense via capability obscurity, something most security professionals rightly recognize as a cheaply gained (and easily evaded) posture offering little long-term benefit.
Second, accepting that such releases are inevitable, reactions to them are best couched in terms enabling detection and response rather than knee-jerk attempts to ban or shame the activity. Calls for prohibition or blanket restraint ignore not only the pressures (both commercial and reputational) to publish, but also the reality of a marketplace where even the best-intentioned individuals have a legitimate desire to share and demonstrate the results of their research. Simply declaring such activity “wrong” and something that “should not occur” undermines and calls into question the very existence of researchers, developers, and practitioners focused on “breaking” things.
Instead of engaging in an absolutist argument over the value of penetration testing tools (and their possible reuse by unethical, malicious actors), a far better conversation would concern how “red” and “blue” can work toward common ground. This would embrace continual development of mechanisms to subvert security controls (so as to keep pushing defense forward) while doing so in a manner that empowers even the most basic security teams to detect the reuse or application of such capabilities. Note that resourceful, dedicated adversaries will acquire, use, and modify whatever tools they have at hand (whether publicly, commercially, or privately available) to achieve their goals – so expecting the developers of an open-source tool to anticipate some criminal or state-sponsored entity’s work to obfuscate a capability is not only unrealistic but profoundly unproductive to the conversation. Instead of dwelling on “edge cases” surrounding the evolution and modification of tools, discussions should focus on bidirectional information sharing and capability improvement.
When “red” and “blue” are viewed as entities working in concert (e.g., “purple teaming”) rather than in opposition, different possibilities come to light. However, such a reinterpretation of relationships also entails modifying behavior, expectations, and deliverables on both sides. As an aside, I have often despised red teams in my career for attempting to emulate actual offensive actors without ever having “been” the APT, while remaining dismissive of users and defenders as technically inferior. Similarly, I’ve seen red teamers and related entities (with some justification) hold blue teamers in disdain as sluggish, uncreative, and unresponsive beyond what some blinking box or other tool tells them. Obviously, both perspectives are skewed and, in their own ways, wrong – yet they persist, between the “glamor” of penetration testing and the workingman “honesty” of blue team defense operations. The first and most important step, not just for the penetration testing tool discussion but for the overall security picture, is to abandon such preconceptions and embrace a mutually reinforcing security model.
Such a model is less “penetration testing” of networks and more exercising capabilities to train for the fights to come. Looking at both sides (and potentially exchanging personnel between them) will help break down such barriers. When it comes to specific tools and capabilities, the same mindset should apply: ask how a given development, bypass, or exploit can be used to benefit defense, rather than treating the capability as an end in itself. Offensive tooling produced absent such examination is at best frictionless spinning in a void of irrelevance, and at worst an irresponsible release of capabilities that could be used for harm.
The obvious follow-up to this observation is an idea of “responsible disclosure”, similar to what is increasingly expected in vulnerability research. In this model, the release of an offensive security tool is accompanied not only by documentation and explanation of the tool’s function and capability, but also by mechanisms to detect, identify, or mitigate it. An example of this is GentilKiwi’s publishing of YARA rules to accompany Mimikatz releases. Just as responsible vulnerability disclosure includes an expectation that a researcher will work with the impacted party to develop a patch (so long as the impacted party acts in good faith), offensive security researchers should be held to a similar standard – if not through legal sanction, then at least through community policing and understanding – to provide defensive recommendations and detections alongside a new capability.
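To make that concrete, here is a minimal sketch of the shape such release-accompanying detection content might take: a toy YARA rule compiled and run via the yara-python library. The rule and its strings are illustrative placeholders of my own invention, not GentilKiwi’s actual published rules.

```python
# A toy example of detection content shipped alongside an offensive tool
# release. The rule below is illustrative only -- NOT GentilKiwi's actual
# Mimikatz rule -- and simply shows the form such defender-facing content
# can take.
import sys

import yara  # pip install yara-python

# Hypothetical strings a tool author might flag in their own release.
EXAMPLE_RULE = """
rule Example_OST_Release
{
    meta:
        description = "Illustrative rule accompanying a tool release"
    strings:
        $s1 = "sekurlsa::logonpasswords" ascii wide
        $s2 = "example-tool-banner" ascii
    condition:
        any of them
}
"""


def scan(path: str) -> None:
    """Compile the bundled rule and report any matches in the given file."""
    rules = yara.compile(source=EXAMPLE_RULE)
    for match in rules.match(path):
        print(f"{path}: matched {match.rule}")


if __name__ == "__main__":
    for target in sys.argv[1:]:
        scan(target)
```

Even something this small gives under-resourced defenders a tunable starting point, rather than forcing them to reverse the tool from scratch.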
This mindset seems uncontroversial, and is reflected among some of the more flexible adherents of the “offensive tools are bad” camp – yet it covers only part of the problem. The other side is a lack of willingness, if not outright laziness, on the part of defenders to understand, research, and build capabilities against newly identified offensive techniques. As a (mostly) lifelong blue teamer, I understand and appreciate how many things are going on at once, and the lack of resources for capability research beyond what is of immediate need. At the same time, this mindset will only hold defensive operations back against continuous offensive evolution. We as defenders may decry the public release of a new offensive framework, but personally I would much prefer such a release (where I can examine, acquire, and experiment with the code and techniques in question) over driving such development and release into spaces that are inaccessible, or at least difficult to view.
Essentially, defenders should view the public release of offensive capabilities as an opportunity rather than a burden, so long as such release is made with defender needs and lifecycles in mind. Instead of only learning of a new attack technique or methodology once it hits the defended network, public release (and in-network testing through red teaming) allows defenders to grasp such methodologies in advance of hostile actor use. For example, a defender who studies a newly released PowerShell framework can enable and alert on PowerShell script block logging (Event ID 4104) before that tradecraft ever appears in hostile hands. In this fashion, networks can evolve to incorporate needed visibility, implement required controls, or close off potential attack vectors before other parties take advantage of them.
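As a minimal sketch of what that advance preparation might look like in practice – assuming script block logs have already been exported to text – the following scans such logs for strings tied to a newly released framework. The indicator strings here are hypothetical placeholders, not real detection content.

```python
# A toy illustration of building visibility in advance: scan exported
# PowerShell script block logs (Event ID 4104, exported to text) for
# strings associated with a newly released framework. The indicators
# below are hypothetical placeholders, not real detection content.
import re
import sys

INDICATORS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"Invoke-Example",    # hypothetical cmdlet name from a new release
        r"FromBase64String",  # common loader pattern worth reviewing
    )
]


def scan_log(path: str) -> None:
    """Print every line of an exported log that matches an indicator."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            for pattern in INDICATORS:
                if pattern.search(line):
                    print(f"{path}:{lineno}: hit for {pattern.pattern}")


if __name__ == "__main__":
    for target in sys.argv[1:]:
        scan_log(target)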
One corollary to the above is that many (in fact, most) security teams are neither resourced nor staffed well enough to actually execute such a model. In this case, I think we have identified a separate, even more concerning issue than anything intrinsic to the development and release of penetration testing tools. That many organizations – including some vital to societal functions – lack the resources, personnel, and tooling necessary to combat increasing offensive capability development is not the fault of those pushing offensive boundaries forward, but rather the result of a social unwillingness to appropriately resource and invest in critical infrastructure functionality. We may declare that such operations would be easier absent PowerShell Empire or Cobalt Strike, but the fact remains that the organizations in question must ensure security and continuity of operations. That the field continues to shift and advance is something operators, consumers, investors, and regulators must take into consideration, ensuring such organizations are given the flexibility to adequately invest in these capabilities, or the opportunity to share and disseminate knowledge with peers to build a “herd-like” immunity.
Ultimately, hackers gonna hack, and defenders will need to deal with the threat landscape irrespective of community opinions on what is “just”, “appropriate”, or “desirable” when it comes to vulnerability or exploit release. One of the greatest problems in this discussion is the inability of both sides – the one favoring offensive tool development, and the one holding defense paramount above all other concerns – to understand how each perspective can inform, benefit, and improve the other. In doing so, each must make a sacrifice: red must be willing to slow or modify release schedules to ensure adequate blue understanding and possible defensive capabilities, while blue must admit that the field of information security is not static and that defensive operations must continually evolve and adapt. Where these trends leave gaps – smaller, poorly resourced organizations, for example – either public entities (“government”) must be ready to step in, or community practitioners must recognize such externalities and work to resolve them where possible.
This is a short blog post, meant only to highlight issues and offer broad suggestions – all of the ideas above can (and should) be fleshed out in greater detail to enable actual execution. The main thrust of this argument, however, is that so much debate on the subject of offensive tooling takes exceptionally maximalist positions, enabling little if any discussion of how the two sides can (or should) coordinate to ensure the greatest possible security benefit. Retreating into tribalist camps of “blue team” or “pentester” not only makes for shrill discussion, but does little to resolve the debate – in fact, such braying probably sets the field back years while actual adversaries continue to build, refine, and evolve their own tradecraft.