ChatGPT and other AI tools could be putting users at risk by getting company web addresses wrong
AI-generated URLs aren't always correct

- AI chatbots often get URLs wrong – many don't exist, and some could be phishing sites
- Attackers are now optimizing sites for LLMs rather than for Google
- Developers are even inadvertently using dodgy URLs
New research has revealed AI often gives incorrect URLs, which could be putting users at risk of attacks including phishing attempts and malware.
A report from Netcraft claims one in three (34%) login links provided by LLMs, including GPT-4.1, did not belong to the brands they were asked about: 29% pointed to unregistered, inactive or parked domains, and 5% to unrelated but legitimate domains, leaving just 66% linking to the correct brand-associated domain.
Alarmingly, simple prompts like 'tell me the login website for [brand]' led to unsafe results, meaning that no adversarial input was needed.
Be careful about the links AI generates for you
Netcraft notes this shortcoming could ultimately lead to widespread phishing risks, with users easily misled to phishing sites just by asking a chatbot a legitimate question.
Attackers aware of this weakness could register the unclaimed domains suggested by AI and use them for attacks, and one real-world case has already demonstrated the risk, with Perplexity AI recommending a fake Wells Fargo site.
According to the report, smaller brands are more vulnerable because they're underrepresented in LLM training data, which increases the likelihood of hallucinated URLs.
Attackers have also been observed optimizing their sites for LLMs, rather than traditional SEO for the likes of Google. An estimated 17,000 GitBook phishing pages targeting crypto users have already been created this way, with attackers mimicking technical support pages, documentation and login pages.
Even more worrying is that Netcraft observed developers using AI-generated URLs in code: "We found at least five victims who copied this malicious code into their own public projects—some of which show signs of being built using AI coding tools, including Cursor," the team wrote.
As such, users are being urged to verify any AI-generated content involving web addresses before clicking on links. It's the same sort of advice we're given for any type of attack, with cybercriminals using a variety of attack vectors, including fake ads, to get people to click on their malicious links.
One of the most effective ways of verifying the authenticity of a site is to type the URL directly into the address bar, rather than clicking links that could be dangerous.
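For developers tempted to paste AI-suggested links straight into code, a simple safeguard is to check each URL's host against a list of domains you already trust before using it. The sketch below is a minimal illustration of that idea in Python; the `KNOWN_DOMAINS` allowlist and the example URLs are hypothetical placeholders, not part of Netcraft's research.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains your project already trusts.
# In practice this would be maintained alongside your codebase.
KNOWN_DOMAINS = {"wellsfargo.com", "github.com"}

def is_known_domain(url: str) -> bool:
    """Return True only if the URL's host is a trusted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in KNOWN_DOMAINS)

# A genuine subdomain of a trusted domain passes the check...
print(is_known_domain("https://connect.secure.wellsfargo.com/login"))  # True
# ...but a lookalike domain that merely contains the brand name does not.
print(is_known_domain("https://wellsfargo.example-login.com/login"))   # False
```

Note the second example: matching on subdomain boundaries (the leading dot) is what stops lookalike domains that simply embed a brand name from slipping through.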