Fake LinkedIn profiles are using AI-generated headshots to impersonate companies
Fake LinkedIn accounts with authentic-looking, AI-generated images are proving hard to detect
Creating fake social media accounts to trick people is hardly a new tactic, but there’s something sinister about this new campaign that makes it stand out from the crowd.
An in-depth analysis posted to the KrebsOnSecurity blog claims cybercriminals have been using artificial intelligence (AI) to create profile pictures of non-existent people, pairing those images with job descriptions stolen from real people on LinkedIn.
The result is fake profiles that, for most people, are almost impossible to identify as such.
Numerous use cases
Users have spotted a growing trend of suspicious accounts attempting to join invite-only LinkedIn groups. Group owners and administrators typically only notice what is going on when dozens of such requests arrive at once and almost all of the profile pictures look alike (same angle, same face size, similar smile, and so on).
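Because the giveaway the administrators describe is purely visual and highly repetitive, it can in principle be checked automatically. Below is a minimal sketch of that idea, assuming the suspicious headshots have already been downloaded to a local folder (the profile_photos folder name and the tolerance value are illustrative) and using the open-source face_recognition Python library. It flags images whose eyes sit at near-identical normalized positions, a common trait of batch-generated faces.

```python
import os
import face_recognition


def eye_centers(image_path):
    """Return normalized (x, y) centers of the left and right eye, or None if no face is found."""
    image = face_recognition.load_image_file(image_path)
    faces = face_recognition.face_landmarks(image)
    if not faces:
        return None
    height, width = image.shape[:2]
    centers = []
    for eye in ("left_eye", "right_eye"):
        points = faces[0][eye]
        cx = sum(x for x, _ in points) / len(points) / width
        cy = sum(y for _, y in points) / len(points) / height
        centers.append((cx, cy))
    return centers


def flag_similar_headshots(photo_dir, tolerance=0.02):
    """Return filenames whose eye positions match the first photo within `tolerance` (normalized units)."""
    positions = {}
    for name in sorted(os.listdir(photo_dir)):
        centers = eye_centers(os.path.join(photo_dir, name))
        if centers:
            positions[name] = centers
    names = list(positions)
    if not names:
        return []
    reference = positions[names[0]]
    flagged = []
    for name in names:
        close = all(
            abs(a - b) < tolerance
            for eye, ref_eye in zip(positions[name], reference)
            for a, b in zip(eye, ref_eye)
        )
        if close:
            flagged.append(name)
    return flagged


if __name__ == "__main__":
    # "profile_photos" is a hypothetical folder of downloaded headshots.
    print(flag_similar_headshots("profile_photos"))
```

A real workflow would compare every pair of images rather than a single reference photo, but the underlying point is the same: machine-generated headshots tend to cluster tightly on these simple measurements, while genuine photos do not.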
The researchers say they have reached out to LinkedIn’s customer support, but so far the platform hasn’t found a silver bullet. One approach it is taking is to ask certain companies for a full list of their employees, then ban any account that claims to work there but doesn’t appear on the list.
Besides not being able to determine who is behind this onslaught of fake professionals, the researchers are also struggling to understand what the point of it all is. Most of the accounts don’t appear to be actively operated: they aren’t posting content and don’t respond to messages.
Cybersecurity firm Mandiant believes hackers are using these accounts to try to land roles at cryptocurrency firms, as the first stage of a multi-stage attack aimed at draining those companies’ funds.
Others think this is part of a classic romance scam, where victims are lured by attractive pictures into investing in fake crypto projects and trading platforms.
Furthermore, there is evidence of groups such as Lazarus using fake LinkedIn profiles to distribute infostealers and other malware among job seekers, especially in the cryptocurrency industry. Finally, some believe the bots could be used in the future to amplify fake news.
Responding to KrebsOnSecurity’s research, LinkedIn said it was considering domain verification as one way to tackle the growing problem. “This is an ongoing challenge and we’re constantly improving our systems to stop fakes before they come online,” the company said in a written statement.
“We do stop the vast majority of fraudulent activity we detect in our community – around 96% of fake accounts and around 99.1% of spam and scams. We’re also exploring new ways to protect our members such as expanding email domain verification. Our community is all about authentic people having meaningful conversations and to always increase the legitimacy and quality of our community.”
Via: KrebsOnSecurity