Last autumn, the scandal surrounding the resignation of Tory MP Brooks Newmark stirred controversy over the use of social media to ensnare. The unfortunate Mr Newmark, who thought he was conversing on Twitter with a 21-year-old called Sophie, was in fact sharing lewd images with a male Sunday Mirror reporter.
Was he a victim of entrapment or simply a man caught acting duplicitously? The jury is still out on that one. But one thing is certain: social media is now being used in myriad ways.
Take, for instance, information security. Many businesses monitor network activity and scan email for malicious attachments. But attacks continue to slip through the net, with phishing emails being one of the most common methods of attack.
Phishing scams lure users into clicking a suspicious link or opening an obscurely named file attached to an email. The user does what the attacker hoped, opens the file or clicks the link, and the kill chain starts, establishing a backdoor into the company network. Voilà: the attacker now has access to the internal network and can begin escalating privileges to reach really sensitive data.
But what happens when you reverse the psychology of this attack? Now, the email account has been deliberately created and set up on your domain solely for the purpose of monitoring attackers. The email 'user' is a bogus entity and you know that any communication sent to that email address should be regarded as either junk mail or a malicious attack. Instead of being compromised, you have now captured a malware sample and can immediately start looking for other instances of similar content sent to others in your business. Your incident detection and response stance improves enormously.
By using bogus email accounts, it's possible to create a source of DIY threat intelligence which is able to monitor for suspect emails in real time. But we need to give the email user a convincing identity that will appeal to the hacker.
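As a hedged sketch of what that monitoring might look like (the decoy account and mailbox plumbing here are assumptions, not a prescribed setup), everything arriving at the bogus address can be treated as suspect by definition and reduced to searchable indicators such as sender, subject and attachment hashes:

```python
# Minimal sketch: parse a message delivered to a decoy mailbox and
# extract indicators of compromise for correlation against real
# mailboxes. Anything sent to the bogus account is suspect by design.
import email
import hashlib


def extract_indicators(raw_message: bytes) -> dict:
    """Parse a raw RFC 822 message and return sender, subject and
    SHA-256 hashes of every attachment."""
    msg = email.message_from_bytes(raw_message)
    indicators = {
        "from": msg.get("From", ""),
        "subject": msg.get("Subject", ""),
        "attachment_hashes": [],
    }
    for part in msg.walk():
        # only parts explicitly marked as attachments are hashed
        if part.get_content_disposition() == "attachment":
            payload = part.get_payload(decode=True) or b""
            indicators["attachment_hashes"].append(
                hashlib.sha256(payload).hexdigest()
            )
    return indicators
```

In practice you would poll the decoy mailbox (for example over IMAP with Python's `imaplib`), feed each raw message through `extract_indicators()`, then search the rest of your mail logs for the same sender or attachment hashes.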
The majority of targeted attacks start with a spear phishing attack. Such attacks involve varying degrees of research and profiling, which the attacker uses to find suitable candidates to target within the organisation. Facebook, Google, LinkedIn and other media are trawled for information, and maybe even a little social engineering is employed, with the attacker snooping around or phoning the receptionist to determine which employees are worth targeting. (It's for this reason that you should instruct administrative staff never to divulge names or contact details.)
By manually seeding social networks with regularly updated profiles, it's possible to give our bogus staff members real identities. This technique allows us to create what is known in security circles as a 'honeynet'. The idea is based upon a concept first put into practice by Clifford Stoll in the mid-1980s and documented in 'The Cuckoo's Egg'. Stoll was among the first to entrap and document a hacker, work that led to the conviction of Markus Hess, who had been selling stolen US military intelligence to the KGB.
When creating a honeynet, think about content that would be attractive to the attacker. What do you do? What intellectual property do you have? What about unreleased business performance data? Customer databases? Credit card data? Make those fake roles relevant to the content. New starters are perfect cannon fodder for a spear phishing campaign as they aren't familiar with internal processes, probably haven't had security inductions yet and feel nervous about speaking up or getting fired in the event of doing something silly on their desktop.
Staff with access to other resources, possibly with raised privileges, but who may not be suspicious or aware of attacks, also make ideal bogus identities. The more genuine connections they have, the more plausible they are as real people. Hence, the more likely they are to be the recipients of targeted malware and the more useful the threat intelligence we receive.
Maintaining several distinct social media profiles and making them seem real can be taxing. Twitter bots would be the ideal route to keep the bogus profiles topped up with fresh-looking content, but automation is always a risk: if the posting is too mechanical, it becomes obvious that the profile is fake.
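One mitigation, sketched below, is to avoid posting on a fixed interval, which is a classic bot tell. The probabilities and time ranges are illustrative assumptions, not tuned values, and the actual posting is left to whatever Twitter client you use:

```python
# Hedged sketch: draw inter-post gaps from a bursty distribution
# rather than a fixed interval, so the account's timing looks less
# machine-like. All thresholds below are illustrative assumptions.
import random


def humanlike_delays(n_posts, seed=None):
    """Return n_posts gaps (in seconds) mimicking bursty human activity."""
    rng = random.Random(seed)
    delays = []
    for _ in range(n_posts):
        if rng.random() < 0.3:
            # occasional quick follow-up, one to five minutes later
            delays.append(rng.uniform(60, 300))
        else:
            # otherwise long stretches of silence, one to six hours
            delays.append(rng.uniform(3600, 6 * 3600))
    return delays
```

A scheduler would sleep for each gap in turn before posting the next queued item, keeping the profile active without the metronomic regularity that gives bots away.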
So that's where an interesting paper that popped up last year can help. The authors of an algorithm called Bot or Not? have released a tool that attempts to determine whether a Twitter handle is genuine. By using the tool to work out if the bot content we are using to populate a profile is detectable or not, we can determine if it passes for a bona fide employee.
Using this honeynet, it then becomes possible to check for similar patterns in the mail logs. It is even possible to reverse engineer the malware to find out where it connects back to. Obtain a sample and the destination IP address, upload the sample to a site such as VirusTotal or similar, and you might just save someone else from being compromised too.
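A sketch of that last step, assuming VirusTotal's public v3 REST API (the API key below is a placeholder you must replace with your own): hash the captured sample, then look it up by SHA-256.

```python
# Sketch of a VirusTotal lookup for a captured sample. Only the
# hashing and request construction are shown; VT_API_KEY is a
# placeholder, not a real credential.
import hashlib
import urllib.request

VT_API_KEY = "YOUR_API_KEY_HERE"  # placeholder: supply your own key


def sha256_of(path):
    """Hash the sample in chunks; VirusTotal indexes files by SHA-256."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def vt_report_request(sha256):
    """Build the v3 file-report request; send it with urlopen()."""
    return urllib.request.Request(
        "https://www.virustotal.com/api/v3/files/" + sha256,
        headers={"x-apikey": VT_API_KEY},
    )
```

Sending the request with `urllib.request.urlopen()` returns a JSON report if the sample is already known; if it isn't, you can upload it, which is exactly where you might save someone else from the same attack.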
So is social media ensnarement ethical? I think that depends upon the motive and what the facade is intending to prove. If the honeynet prevents an attack on the business, makes public a possible exploit, and deters hackers, it seems both ethical and advisable to me. And I'm pretty sure Clifford Stoll would approve.
- Ken Munro is a Senior Partner at Pen Test Partners LLP