Forget AI, Microsoft is working on "superintelligence" - with medical diagnosis the first area of interest


  • Microsoft’s medical AI already outperforms experts on complex diagnoses
  • Human oversight remains Microsoft’s answer to fears of machine autonomy
  • The promise of safer superintelligence depends on untested control mechanisms

Microsoft is turning its attention from the race to build general-purpose AI to something it calls Humanist Superintelligence (HSI).

In a new blog post, the company outlined how its concept aims to create systems that serve human interests rather than pursue open-ended autonomy.

Unlike “artificial general intelligence,” which some see as potentially uncontrollable, Microsoft’s model seeks a balance between innovation and human oversight.

A new focus on medicine and education

Microsoft says HSI is a controllable and purpose-driven form of advanced intelligence that focuses on solving defined societal problems.

One of the first areas where the company hopes to prove the value of HSI is medical diagnosis, with its diagnostic system, MAI-DxO, reportedly achieving an 85% success rate in complex medical challenges - surpassing human performance.

Microsoft argues that such systems could expand access to expert-level healthcare knowledge worldwide.

The company also sees potential in education, envisioning AI companions that adjust to each student’s learning style, working alongside teachers to build customized lessons and exercises.

It sounds promising, but it raises familiar questions about privacy, dependence, and the long-term effects of replacing parts of human interaction with algorithmic systems. It also remains unclear how these AI tools will be validated, regulated, and integrated into real-world clinical environments without creating new risks.

Behind the scenes, superintelligence relies on heavy computational power.

Microsoft’s HSI ambitions will depend on large-scale data centers packed with GPU-intensive hardware to process massive amounts of information.

The company acknowledges that electricity consumption could rise by more than 30% by 2050, driven in part by expanding AI infrastructure.

Ironically, the same technology expected to optimize renewable energy production is also driving up demand for electricity.

Microsoft insists AI will help design more efficient batteries, reduce carbon emissions, and manage energy grids, but the net environmental impact remains uncertain.

Mustafa Suleyman, Microsoft’s AI chief, notes “superintelligent AI” must never be allowed full autonomy, self-improvement, or self-direction.

He calls the project a “humanist” one, explicitly designed to avoid the risks of systems that evolve beyond human control.

His statements suggest a growing unease within the tech world about how to manage increasingly powerful models. The idea of containment sounds reassuring, but there is no consensus on how such limits could be enforced once a system becomes capable of modifying itself.

Microsoft’s vision for Humanist Superintelligence is intriguing but still untested, and whether it can deliver on its promises remains uncertain.



Efosa Udinmwen
Freelance Journalist

Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking.
