'AI will also present new threats to society' — Sam Altman issues stark warning as $1 billion plan is revealed

Sam Altman
(Image credit: Getty Images/Anna Moneymaker / Staff)

  • Sam Altman says AI could accelerate breakthroughs like disease cures but warns it will also introduce serious new societal risks
  • Altman admits no single company can manage those dangers alone, calling for a coordinated global response
  • OpenAI’s non-profit arm is committing $1 billion to areas like healthcare, economic impact, and AI resilience, including biosecurity

Today, Sam Altman announced that the OpenAI Foundation, OpenAI's non-profit arm, will spend at least $1 billion over the next year on discovering cures for disease.

But alongside that announcement came a stark warning about the new threats AI could introduce and the fact that no single company can deal with them alone.

“AI will help discover new science, such as cures for diseases, which is perhaps the most important way to increase quality of life long-term,” Altman wrote in a post on X.


He continued: “AI will also present new threats to society that we have to address. No company can sufficiently mitigate these on their own; we will need a society-wide response to things like novel bio threats, a massive and fast change to the economy, extremely capable models causing complex emergent effects across society, and more.”

While he remained vague on what those “complex emergent effects” might look like, concerns about advanced AI systems are not new. Recently, science communicator Neil deGrasse Tyson even suggested that forms of AI development leading to superintelligence are too “lethal” to pursue without limits.

No company can handle this alone

What stands out most here is Altman’s admission that “no company can sufficiently mitigate these on their own.”

That marks a shift from his usual messaging around AI progress, and it reads like a warning.

Altman has often spoken and written about society needing to adapt to AI. But this goes further. It suggests the risks may be too large, too fast-moving, and too unpredictable for even OpenAI to manage on its own.

With that phrasing, Altman is reframing the issue of AI safety from a tech problem into a societal one.

Sam Altman, CEO of OpenAI

(Image credit: Getty Images/Bloomberg)

Where the $1 billion is going

So where is that $1 billion actually going?

While OpenAI now operates with a for-profit structure, the OpenAI Foundation continues to focus on long-term societal impact. Its stated mission is to “ensure artificial general intelligence benefits all of humanity,” and that mission is guiding where the money goes.

According to the Foundation, it expects to invest at least $1 billion over the next year across:

  • Life sciences and curing diseases
  • Jobs and economic impact
  • AI resilience and community programs

This forms part of a broader $25 billion long-term commitment.

In healthcare, the initial focus includes Alzheimer’s research, public health data, and accelerating progress on high-burden diseases.

On the economic side, the Foundation says it is already working with small business owners, unions, and policymakers to explore how AI will reshape jobs and how to respond to the changing landscape.

AI resilience

AI resilience is one of the most revealing, and potentially unsettling, priorities of the OpenAI Foundation this year.

It includes biosecurity, with OpenAI aiming to “strengthen how society prepares for potential biological threats — both naturally occurring and AI-enabled outbreaks.”

That phrase, “AI-enabled outbreaks,” is mildly concerning. It lines up directly with Altman’s warning about “novel bio threats” and hints at a future where AI doesn’t just accelerate progress, but also lowers the barrier to dangerous capabilities.

Spending $1 billion on AI safety and medical progress is, on paper, a positive step. But what makes this announcement interesting is the tension at its core. Altman is talking about curing diseases and improving quality of life while also warning that the same technology could introduce risks we don’t yet fully understand.

That raises a bigger question: if even the companies building AI are saying they can’t control what’s coming next, who can?



Graham Barlow
Senior Editor, AI

Graham is the Senior Editor for AI at TechRadar. With over 25 years of experience in both online and print journalism, Graham has worked for various market-leading tech brands including Computeractive, PC Pro, iMore, MacFormat, Mac|Life, Maximum PC, and more. He specializes in reporting on everything to do with AI and has appeared on BBC TV shows like BBC One Breakfast and on Radio 4 commenting on the latest trends in tech. Graham has an honors degree in Computer Science and spends his spare time podcasting and blogging.
