Kaspersky recently released a new public report on a group they refer to as ‘Slingshot’ (https://securelist.com/apt-slingshot/84312/). Aside from describing a fairly complex adversary, the report contained one thing that immediately struck me in its first paragraph:

“This turned out to be a malicious loader internally named ‘Slingshot’, part of a new, and highly sophisticated attack platform that rivals Project Sauron and Regin in complexity.”

For those watching at home, Project Sauron and Regin – also previously disclosed by Kaspersky – are threat groups allegedly tied to US computer network operations. For background on these, I recommend Kaspersky’s original reporting on each.

In the Kaspersky report’s own words, “Slingshot is very complex and the developers behind it have clearly spent a great deal of time and money on its creation.” Framing this complex, unique malware alongside Regin and Project Sauron thus implicitly ties the families together – something that resurfaces near the end of the blog post when the malware’s victims are described: a focus on Middle Eastern and African countries, which happen to overlap with various targets of the ‘War on Terror’.

Therefore, it came as little surprise that the public disclosure happened to cover a US government (USG) operation – in this case, operations linked to counter-terrorism activities, as reported by CyberScoop. The resulting conversation seems to divide into two distinct positions:

  1. Kaspersky has it out for the USG – especially in light of the recent issues with the USG banning Kaspersky products from government machines!
  2. Kaspersky is just doing its job protecting customers and flagging malware!

Without confirming or otherwise supporting any of the claimed attribution in this case, I would offer that the truth lies somewhere between these extremes – the situation is not one of binary classification.

First, Kaspersky’s ‘duty’ as an organization should be to satisfy whatever agreements it has forged with customers. This spans from ensuring its product detects anything the company deems to be malware to ensuring its intelligence offerings detail any activity that may be relevant to its clients. I think this is rather straightforward and obvious – if Kaspersky comes across something it can identify as malicious, its product should reflect this judgment. To do otherwise would be to violate a (borderline) fiduciary agreement with those who trust it to keep them safe.

BUT – the manner in which Kaspersky disclosed Slingshot went beyond supporting its customers. Instead of simply updating AV signatures or publishing an internal threat intelligence report for paying customers, Kaspersky went a step further – and published a very public blog post on the subject. At this stage, Kaspersky is no longer supporting customers (presumably they had already received all the benefits of this work), but instead making a deliberate, very vocal public pronouncement calling out such activity.

As one might note, I’m very torn about this entire series of events. On the one hand, I am incredibly supportive of identifying, defeating, and mitigating malware whenever and wherever you might find it, because from a technical perspective you might never know exactly what purpose it serves or where it might next be used. On the other hand, getting involved in this space of network security will, by necessity, entail tangling with nation-state operations of varying levels of legitimacy – the disclosure of which carries obvious consequences in terms of intelligence losses.

To divert into an example momentarily, let’s discuss MS17-010. The Shadow Brokers-leaked exploits targeting these vulnerabilities represented potent – but aged – capabilities for remote code execution on targets, and they may have still been in active use at the time of disclosure. BUT – once disclosed, they were open to abuse and reuse by any number of adversaries. As a result, once this attack path was identified, all security firms owed it to their customers to incorporate this information into detections. The impact may be the potential thwarting of (legitimate) nation-state activity, but the fact that any entity ‘in the wild’ could utilize this technique would – in my opinion – oblige any security provider to begin defending against it.

Now, let’s turn to Slingshot. In this case, the activity was, presumably, discovered via internal telemetry and not some public announcement – if some Shadow Brokers-style release accompanied this, I and many others in the community have completely missed it. Based upon that, Kaspersky seems to owe it to its customers, given knowledge of the activity in question, to update AV detections and threat intelligence to follow suit. Not doing so could presumably be called negligence and a dereliction of duty.
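
(As a purely illustrative aside: the ‘update detections’ step above can be as mundane as folding newly identified indicators into an existing scanning routine. The sketch below is a minimal, hypothetical Python example of that idea – the hash values, names, and logic are all invented for illustration and have no connection to the actual Slingshot samples or to how Kaspersky’s products actually work.)

```python
import hashlib
from pathlib import Path

# Hypothetical indicators of compromise (IoCs) derived from telemetry.
# These SHA-256 values are placeholders, NOT real Slingshot hashes.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder hash for an imaginary loader module
    "f" * 64,  # placeholder hash for an imaginary payload
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root: Path) -> list:
    """Flag any file whose hash matches a known-bad indicator."""
    hits = []
    for candidate in root.rglob("*"):
        if candidate.is_file() and sha256_of(candidate) in KNOWN_BAD_SHA256:
            hits.append(candidate)
    return hits

if __name__ == "__main__":
    for hit in scan_directory(Path(".")):
        print(f"Known-bad indicator matched: {hit}")
```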

But in disclosing the activity it identified, Kaspersky did not just update AV signatures and (likely) provide a private threat report to customers. Instead, it published a very public blog post on the subject. This action goes well beyond any conception of duty to customers and moves into the public broadcasting of a discovery. And this is where things get interesting.

On the one hand, the malware and its use are very interesting from a purely academic perspective. It reflects a fairly novel capability, and thus appears worthy of sharing with the broader community of information security researchers. Yet at the same time, Kaspersky – as detailed at the very start of this meandering article – fairly deliberately calls out a relationship to previously disclosed FVEY capabilities. So in disclosing publicly in this manner, was the organization pursuing a deeper agenda than simply doing the community some measure of service?

As for that last question – I cannot answer it, and I do not know for sure how this all played out. The capability and targeting evident in the telemetry information disclosed by Kaspersky (a MikroTik-focused capability in the MENA region) certainly hint at counter-terrorism operations (as opposed to ransomware or the theft of valuable intellectual property), but this is an analytical judgment – not a statement of fact (from my position). Yet in providing a very public disclosure of what appears to be a fairly narrowly-tailored infection chain, Kaspersky seems to be casting significant light on a relatively specific threat. And this seems odd.

Within a tight and competitive marketplace, security firms are incentivized to burnish their credentials and call attention to potentially headline-grabbing discoveries. For the many entities in the world engaging in computer network espionage or related activities, this is obviously a bad thing – but that there are bodies out there willing to find and defeat such activity is, for most people, an unvarnished ‘good thing’. When such activity happens to thwart long-running counter-terrorism operations, that may seem a very unfortunate development – and one may legitimately wonder how Kaspersky (or any similarly-situated entity) could have known this potential impact in advance, at least based upon available information. However, the very public and unfiltered disclosure of attacks and their attribution – even if thinly veiled, as in the Slingshot case – may present its own problems. While I like to think I generally champion transparency in security research, letting the whole world know (as opposed to just stakeholders and potential victims) can have repercussions of its own – and in this particular case, Kaspersky opted for the full, public disclosure route.

Ultimately, I’m not sure where I stand in judging this action. On the one hand, I lustily agree that outing malware whenever and wherever possible is the goal of any dedicated security researcher. But revealing such things in so public a manner seems to move beyond simply ensuring defense and toward making a statement. One could argue that publicity is necessary to ensure maximal community sharing, but information security is already full of trusted, vetted sharing groups where such information could easily be disseminated for maximal impact without excessive publicity. Overall, the situation is quite tricky, not least because of the existing antagonisms between the organizations involved. But if there’s a lesson for the rest of us here, it is quite simple: informing the community is always laudable, but making a (potential) spectacle of a security announcement can lead to some rather interesting side effects.