Cyber-threat activity has become an increasingly common topic of discussion in the media, and in society in general. Why is this? Because of public disclosures, which have helped to shine a light on a hitherto murky underworld of malware and hacking across the internet. But until now, little has been considered about the potential impact of these disclosures on the attackers themselves.
Saher Naumaan, Threat Intelligence Analyst at BAE Systems Applied Intelligence.
That’s why BAE Systems recently compiled a new report, analysing motivations for public disclosure of threat activity and attacker responses that appear to have been influenced by it.
Who discloses publicly and why?
Disclosures come from a variety of sources, from government agencies to threat intelligence teams in private security companies to researchers from human rights groups, or even individuals or groups who prefer to remain anonymous.
Why do they typically want to disclose?
There’s really a range of motivations from the various parties. Some researchers or companies may want to publicly attribute an attack to a particular country or entity, or at least to reveal more information about the attackers, in order to boost their organisation’s reputation while offering useful intelligence to the wider community to improve collective defences.
Often research like this can help to drive a more effective industry-wide response to a particular threat, and could even disrupt adversary operations. Sometimes details about a group are publicly revealed primarily to name and shame them, as when governments indict specific individuals and announce that they're using certain TTPs and indicators ("burning"). Others may want to "dox" individuals involved in attacks and publicly reveal their personal details.
How do attackers respond?
It’s important to remember that no two attackers are the same. Whether we’re talking about state-sponsored operatives, financially motivated criminal actors, hacktivists, or even private investigators, they all have their own distinct motivations and characteristics that inform their decisions. As such, it’s impossible to know without asking them directly what precipitated a certain course of action or response.
That said, there’s considerable evidence that they do take action in response to public disclosure of their activities, and sometimes hackers even monitor what is being reported about them. TRITON group actors have been observed monitoring open source coverage of their activities, for example. Responses fall into three basic categories: they may “go quiet” and cease current activity; “change up” and alter aspects of their operations; or “get angry” and retaliate. Some actors, of course, simply continue their operations unchanged.
Why might an attacker go quiet?
Ceasing activity is a natural response to public disclosure. Attackers know that researchers and perhaps governments are onto them, and don’t want to continue being tracked or risk attribution to their country or agency. One of the first incidents of this kind was Mandiant’s 2013 report on APT1/Comment Crew, a state-sponsored group that was consistently active but whose C&C servers fell silent immediately following the disclosure.
However, going quiet doesn’t mean this is the last we’ll hear of these hackers. Especially in the case of state-sponsored operatives, it might mean they’ll re-emerge under a different guise. This happened when Middle Eastern APT Operation Cleaver was called out in 2014. Its members are thought to have reappeared the following year in the form of the OilRig group.
What is the risk of hackers changing their tactics?
Most threat groups that are burned in public disclosure go quiet before subsequently changing their approach and tooling. The danger in drawing attention to the fact that we’re onto them is that they’ll disappear altogether or change tactics to avoid being found again. This happened most notably with an East Asian state-backed threat group, which was burned in the landmark CloudHopper report, in which BAE Systems and PwC revealed large-scale attacks on managed service providers (MSPs).
Following that report, the group shifted to new techniques and tools, swapping PlugX for Quasar RAT and a new custom tool called RedLeaves. It became more careful and siloed its campaigns based on tools, targets, and IT infrastructure, which made it more difficult to map out its operations. In some cases, there’s a risk that the group in question decides to fight back.
Security vendor ClearSky’s investigations into the Charming Kitten group led the attackers to launch a spoofed version of the ClearSky website designed to harvest credentials from the firm’s employees and customers. In a similar petty and vindictive fashion, Kaspersky’s 2018 report uncovering the operations of MuddyWater led the group to upload a YouTube video demonstrating how to “disable” the Russian security firm’s antivirus product.
Are there any unintended consequences of disclosure?
You never know how attackers are going to react. But along with the responses outlined above, there can also be some more serious consequences of public disclosure.
The first is replication. Once tools and techniques are made public knowledge, other threat groups could use them in their own attacks. This happened when a novel SMB watering hole technique used by Eastern European group Dragonfly was subsequently used by Middle Eastern threat groups.
Attackers could also use copied tools and techniques to plant false flags in their own operations, in order to mislead researchers. The destructive Olympic Destroyer malware targeting the Pyeongchang Olympic Games in 2018 was found to contain elements linking it to numerous groups based around the world; however, it was finally discovered to be the work of an Eastern European group.
Finally, it goes without saying that an attack group will probably factor the disclosing organisation into future operations, whether this means retaliating or, more likely, adapting their TTPs to avoid detection in the next campaign.