The term “sources and methods” provokes passionate, sometimes pained reactions within the information security community. On one side are those engaged in traditional intelligence operations, for whom “sources and methods” are vital resources to be maintained and preserved at almost any cost to ensure continuous collection. On the other are those engaged in operations and active, day-to-day defense, who find “sources and methods” immaterial if they prevent or inhibit sharing information that could improve defenses right now. Usually (but unfortunately not always) sitting between these warring (or at least opposed) parties are decision-makers responsible for overall strategic direction and the management of resources, who must determine what balance to strike between preserving “sources and methods” and burning such resources for immediate operational needs.

While I certainly encounter this problem within the cyber security realm, my first glimpse of the “sources and methods” conundrum came as an information warfare officer in the US Navy, balancing signals intelligence resources against operational needs – both for units afloat in various theaters and for troops on the ground in Central Asia and the greater Middle East. My role in these circumstances was unusual: I was neither an intelligence officer nor a day-to-day operations or line officer, but instead sat “in the middle” between the information gathered and the person or entity forward who could use it. Observing (and occasionally deciding) matters from this position, I felt a keen and sometimes painful awareness of the contradictory needs of intelligence “sources and methods” preservation and operational requirements. For example, a unit afloat would greatly benefit from knowing that potentially hostile, long-range attack aircraft have taken off from bases deep in an adversary’s interior – but if the only way to gather this information is through sensitive signals intelligence means, sharing it and prompting an “own-unit” response effectively tips the adversary off that some collection capability exists. Similarly, and more painfully, a ground unit in Afghanistan would certainly benefit from knowing the day-to-day operations of bombmakers and similar entities – but if such persons find their materials constantly discovered or evaded, a vital source of information will likely dry up, preventing further exploitation.

Undergirding all of this is a vital if macabre cost-benefit evaluation: does the benefit of “burning” a capability outweigh the cost? Relevant questions include: does the resulting action further strategic goals more than continued collection would? Will a failure to act on the information result in loss of life? How easily can the burned source be replaced – and is a replacement even reasonable to expect, especially for “close access” resources? Answering these questions – preferably in advance – is vital to making decisions of far greater scope than those immediately affected might realize. For example, if a certain detection capability against adversary units afloat is revealed, a unit commander in peacetime (or in a time of growing tensions) may be relieved and better able to manage immediate risks, but at the cost of a capability that would be profoundly useful (at least initially) in wartime. Similarly, stopping a single improvised explosive device (IED) seems a great success – unless doing so spoils the chance to stop the source of such devices, jeopardizing the ability to mitigate one supply of IEDs in theater. In the moment such harsh calculations can really suck, but against the “big picture” their value (even with certain, sometimes deeply saddening costs) becomes clear.
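To make the trade-off concrete, the questions above can be caricatured as a crude scoring sketch. This is purely illustrative – every field name, weight, and threshold here is a hypothetical invention, not an actual doctrine or formula:

```python
from dataclasses import dataclass

@dataclass
class BurnDecision:
    """Hypothetical inputs to the burn-or-preserve trade-off."""
    strategic_gain: float   # 0-1: how much acting now furthers strategic goals
    lives_at_risk: bool     # does failing to act plausibly cost lives?
    replaceability: float   # 0-1: how easily the source could be rebuilt
    future_value: float     # 0-1: expected value of the source if preserved

def should_burn(d: BurnDecision, threshold: float = 0.0) -> bool:
    """Return True if burning the capability looks justified.

    A toy model: risk to life overrides everything; otherwise weigh the
    immediate gain (plus how replaceable the source is) against the
    source's expected future value.
    """
    if d.lives_at_risk:
        return True
    benefit = d.strategic_gain + d.replaceability
    cost = d.future_value
    return (benefit - cost) > threshold
```

Even this toy version captures the essay’s point: a hard-to-replace capability with high wartime value survives a modest immediate gain, while an easily rebuilt source with little future value gets burned freely.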

How does this relate to cyber? First, I find that the “sources and methods” debate within cyber breaks into two categories of concern: strategic action and operational incident response. Most practitioners (if not all threat intelligence professionals) are familiar with the latter, which can illuminate the trade-offs inherent in the former. Exploring what we already do intuitively in day-to-day operations may thus help explain the bigger decisions underlying “sources and methods”.

The sources and methods debate within operational incident response (IR) is simple and intuitive to anyone who has participated in or led an IR operation. When scoping and analyzing an intrusion in the lead-up to remediation, a vital question is: “Did we identify everything?” The risk here is simple and critical: if an alternate backdoor or other command and control (C2) channel exists that was not discovered in initial scoping, the adversary will be able to resume access over a “backup” channel the IR team lacks visibility into or knowledge of, resulting in repeat compromise. Essentially, the IR team must weigh a sources and methods question before taking action: it makes no sense (and can be self-defeating) to respond to an intrusion immediately if the result is burning sources (current visibility into the adversary) while additional intrusion or communication vectors still exist. Once the IR team can reasonably answer the question “do we see everything?” – then and only then – can remediation efforts (blocking observed C2 channels, removing implants) occur with any reasonable assurance of preventing re-infection.
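The “do we see everything?” gate can be sketched as a minimal scoping check. All names and data here are hypothetical, invented only to show the shape of the comparison an IR team performs between what was observed and what the remediation plan actually covers:

```python
def remediation_gaps(observed: dict[str, set[str]],
                     planned: dict[str, set[str]]) -> dict[str, set[str]]:
    """Compare every C2 channel observed per host during scoping against
    the remediation plan, and return whatever would be left behind.

    An empty result is the green light to remediate; any non-empty entry
    means acting now would burn current visibility while leaving the
    adversary a backup path.
    """
    return {
        host: channels - planned.get(host, set())
        for host, channels in observed.items()
        if channels - planned.get(host, set())
    }

# Hypothetical scoping data: one host has an ssh tunnel the plan misses.
observed = {"hr-srv": {"beacon.example", "ssh-tunnel"},
            "dev-box": {"beacon.example"}}
planned = {"hr-srv": {"beacon.example"},
           "dev-box": {"beacon.example"}}
```

Here `remediation_gaps(observed, planned)` flags the uncovered `ssh-tunnel` on `hr-srv`; only once the plan covers every observed channel does the check come back empty.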

Moving up a level to the “strategic” sense of sources and methods disclosure and its impact on operational cyber defense: operations personnel typically (and not at all unjustifiably) find great frustration in intelligence non-disclosure, with the reason often given being protection of “sources and methods”. Failing to disclose visibility into active intrusions at all, or disclosing only months or years after the event in question, is incredibly frustrating for defenders: it feels as though vital operational information that could be profoundly useful right now is being withheld for some “hand-wavey” excuse of protecting “sources and methods”. Yet simply writing off this reasoning is dangerous, and reflects a lack of vision and scope on the part of operational defenders.

Looking at our immediate-term IR example, strategic disclosure (which burns the sources and methods behind the information at hand) faces the same potential pitfalls as reacting too soon to an intrusion. Burning a source to enable operations may provide immediate satisfaction and a short-term improvement to defense – but at the cost of long-term visibility into a given activity. One failure mode: disclosing information on a “minor” (in the “greater scheme of things” sense) intrusion event causes follow-on intrusions against more sensitive or vital targets to shift to some new, yet-to-be-discovered methodology – essentially, the original source “goes dark”. While the first organization benefits profoundly from the sharing of information, the overall strategic balance of accounts is worse off. This may be hard for defenders at the initially impacted organization to swallow, but especially for nation-state-led efforts at intelligence collection (and dissemination), such a calculus aligns with their needs and desires in both collecting and sharing data.

Moving away from the perspective issue described in the previous paragraph, an evaluation of the goals and responsibilities of the respective organizations sheds another light on matters. First, the following argument deals solely with nation-state intelligence capabilities and their civilian sharing counterparts (e.g., NCCIC or NCSC). Second, we will assume that commercial threat intelligence providers – beholden as they are to customer requirements and near-fiduciary duties to their clients – operate under a different set of responsibilities and duties. From this distinction, we can also assert that while nation-state cyber intelligence organizations are not necessarily the best, their resources and capabilities mean they very often have unique (and more extensive) visibility into events – but also far more sensitive capabilities that could cause embarrassment or more severe repercussions if disclosed. Based on all of this, such organizations are entrusted with managing the cost-benefit balance described earlier: will sharing information in the immediate term bring sufficient value to outweigh the loss of information in the long term? The answer, as with many things, is “it depends”.

We can (and should) take issue with such answers and challenge the reasoning behind these decisions – so long as we (in the broader commercial space of cyber defense and intelligence) also understand that these state-centric organizations operate under a different value proposition than we do. Principally, national-level organizations entrusted with sensitive source data must weigh not only the value of disclosing today’s intrusion against tomorrow’s (potential) intrusion, but also the value of disclosing a capability today at the cost of forgoing that capability for later actions – especially actions or requirements in time of crisis or even war. Commercial threat intelligence providers rarely face this question – and when they do, it is as contractors to the very organizations mentioned above. So something that may seem trivial to us within the commercial or private-sector space may in fact be rather vital from a public or state-centric perspective.

For example, disclosing intrusions into seemingly vital enterprises – such as cleared defense contractors – may seem an obvious strategic interest, especially to the breached contractors. Yet depending on how the information was gathered, disclosure may tip off the adversary to rather unique visibility into their operations. If the only plausible means of identifying the adversary’s assets and activity is access to the adversary’s own networks, that access is incredibly valuable and worth preserving even at the price of seemingly catastrophic near-term losses. Simply put, such access would be vital in time of war: either for tracking the adversary’s actions for more critical purposes, or for enabling offensive actions to cripple the adversary’s ability to undertake further operations.

I’ve spent significant time justifying the “sources and methods” rationale for withholding information, because I feel there is a lot to be said for it. But the above explains why this justification matters in a general sense; it is not a defense of its implementation in all cases. When weighing the decision to disclose or not, those in possession of information must truly embrace and accept the costs and benefits on both sides of the argument – and not privilege one over the other. All too often, the persons controlling dissemination also “own” one side of the debate: the intelligence collection responsibility. As a result, a frequent (and in many cases justified) criticism is that intelligence collectors favor their own interests over operational need, despite evidence or “greater good” considerations. Once this perception takes root, it is hard to dispel or eliminate, breeding continued distrust of future decisions regarding “equities” or “sources and methods” preservation.

Thus, “sources and methods” is a very legitimate defense for not sharing or disseminating information – but because of the inherent secrecy surrounding the term, it is easily abused, and once that perception of abuse sinks in, it is difficult to erase. To deal with this, my first recommendation is that those engaged in cyber operations – defenders, SOC analysts, IR personnel, and the like – understand and appreciate that “sources and methods” really is an important and critical consideration that, because it may work against one’s own immediate interests, can be difficult to swallow at times. My second recommendation is potentially more vital: those privileged to such information and making dissemination decisions must realize the power in their hands and the consequences of abuse. Once the perception sets in that “sources and methods” is simply a self-preservation tactic for vault-dwelling intel flunkies, it becomes ever more difficult to eliminate. If intelligence professionals want operations personnel to understand and accept the significance of “sources and methods” decisions, then intelligence professionals must apply the cost-benefit evaluation impartially when determining whether to share or withhold. Such decisions should also be made as transparently as possible, to build trust within the realm of operations and ensure future decisions are accepted, even if not particularly enjoyed.

Ultimately, cyber intelligence – like any other form of intelligence – is only valuable insofar as it leads to operational results. Some results are more valuable than others, so intelligence must in some cases be withheld for the sake of “bigger wins” down the road. But when intelligence begins viewing itself as an end in itself rather than a means to an end, it confuses and subverts its purpose while hindering actual operations from doing their job. Thus while operations personnel must come to terms with the reasons for withholding information, intelligence professionals must also earn and keep the trust of those personnel, never losing sight of the ultimate purpose of their own activity.