OpenAI admits new models likely to pose 'high' cybersecurity risk

  • OpenAI warns future LLMs could aid zero‑day development or advanced cyber‑espionage
  • Company is investing in defensive tooling, access controls, and a tiered cybersecurity program
  • New Frontier Risk Council will guide safeguards and responsible capability across frontier models

Future OpenAI large language models (LLMs) could pose higher cybersecurity risks since, in theory, they could develop working zero-day remote exploits against well-defended systems, or meaningfully assist with complex and stealthy cyber-espionage campaigns.

This is according to OpenAI itself, which said in a recent blog post that the cyber capabilities of its AI models are “advancing rapidly”.

While this might sound sinister, OpenAI views it from a positive perspective, saying the advancements also bring “meaningful benefits for cyberdefense”.

Strengthening the defenses

To prepare for future models that might be abused this way, OpenAI said it is “investing in strengthening models for defensive cybersecurity tasks and creating tools that enable defenders to more easily perform workflows such as auditing code and patching vulnerabilities”.
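
As an illustration of what such a defender workflow might look like, here is a minimal Python sketch that asks a model to review a deliberately vulnerable snippet through the OpenAI SDK. The model name, prompt, and snippet are assumptions made for this example, not OpenAI's actual tooling:

    # A minimal, illustrative sketch of a defender workflow: asking a model
    # to audit a deliberately vulnerable snippet. The model name, prompt,
    # and snippet are assumptions for this example, not OpenAI's tooling.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SNIPPET = """
    import sqlite3

    def get_user(db, username):
        cur = db.cursor()
        # String interpolation into SQL is a classic injection risk
        cur.execute(f"SELECT * FROM users WHERE name = '{username}'")
        return cur.fetchone()
    """

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model would do
        messages=[
            {"role": "system", "content": "You are a security code reviewer. "
             "List concrete vulnerabilities and suggest minimal patches."},
            {"role": "user", "content": f"Audit this code:\n{SNIPPET}"},
        ],
    )

    print(response.choices[0].message.content)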

The best way to manage the risk, according to the blog post, is a combination of access controls, infrastructure hardening, egress controls, and monitoring.
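
To make the egress-control and monitoring elements of that combination concrete, here is an illustrative Python sketch of an outbound allowlist check with logging. The hostnames and policy are invented for the example and are not drawn from the blog post:

    # A hedged sketch of the egress-control and monitoring idea: outbound
    # requests are only allowed to an explicit allowlist of hosts, and every
    # decision is logged. Hostnames and policy are invented for the example.
    import logging
    from urllib.parse import urlparse

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("egress")

    ALLOWED_HOSTS = {"api.openai.com", "mirror.internal.example.com"}

    def egress_allowed(url: str) -> bool:
        host = urlparse(url).hostname or ""
        allowed = host in ALLOWED_HOSTS
        # Monitoring: record allowed and blocked attempts alike
        log.info("egress %s -> %s", "ALLOW" if allowed else "BLOCK", host)
        return allowed

    assert egress_allowed("https://api.openai.com/v1/models")
    assert not egress_allowed("https://attacker.example.net/exfil")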

Furthermore, OpenAI announced it would soon introduce a tiered program giving users and customers working on cybersecurity tasks access to improved capabilities.

Finally, the Microsoft-backed AI giant said it plans to establish an advisory group called the Frontier Risk Council. The group will consist of seasoned cybersecurity experts and practitioners and, after an initial focus on cybersecurity, is expected to expand its remit to other areas.

“Members will advise on the boundary between useful, responsible capability and potential misuse, and these learnings will directly inform our evaluations and safeguards. We will share more on the council soon,” the blog reads.

OpenAI also said that cyber misuse could be viable “from any frontier model in the industry”, which is why it is part of the Frontier Model Forum, where it shares knowledge and best practices with industry partners.

“In this context, threat modeling helps mitigate risk by identifying how AI capabilities could be weaponized, where critical bottlenecks exist for different threat actors, and how frontier models might provide meaningful uplift.”
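
As a rough, invented illustration of that framing, the sketch below represents a threat-model entry as structured data linking a capability to an actor, a bottleneck, and an uplift estimate. None of this data comes from OpenAI's blog:

    # An invented illustration of that threat-modeling framing: each entry
    # links a capability to the actor it could uplift and the bottleneck it
    # removes. None of this data comes from OpenAI's blog.
    from dataclasses import dataclass

    @dataclass
    class ThreatModelEntry:
        capability: str  # what a frontier model can do
        actor: str       # which threat actor benefits
        bottleneck: str  # the step it unblocks for that actor
        uplift: str      # rough estimate of the uplift

    ENTRIES = [
        ThreatModelEntry(
            capability="automated vulnerability discovery",
            actor="low-skill criminal group",
            bottleneck="finding exploitable bugs without expert staff",
            uplift="high",
        ),
        ThreatModelEntry(
            capability="fluent, targeted phishing drafts",
            actor="espionage operator",
            bottleneck="scaling convincing lures to many targets",
            uplift="medium",
        ),
    ]

    for e in ENTRIES:
        print(f"{e.capability} -> {e.actor}: removes '{e.bottleneck}' ({e.uplift})")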

Via Reuters

Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.
