“Open innovation is the foundation of AI progress,” says Jensen Huang as Nvidia reveals big open source AI push - major acquisition sealed and new models on the way
Nvidia acquires SchedMD to strengthen open-source software
- Nvidia acquires SchedMD to maintain Slurm as open source workload management software
- Slurm manages scheduling and resources for large clusters running parallel AI tasks
- Nvidia has also launched the Nemotron 3 family of open models in Nano, Super, and Ultra sizes
Nvidia has announced a major expansion of its open source efforts, combining a software acquisition with new open AI models.
The company has acquired SchedMD, the developer of Slurm, an open source workload management system widely used in high-performance computing and AI.
Nvidia will continue to operate Slurm as vendor-neutral software, ensuring compatibility with diverse hardware and maintaining support for existing HPC and AI customers.
Slurm and more
Slurm manages scheduling, queuing, and resource allocation across large computing clusters that run parallel tasks.
More than half of the top 10 and top 100 supercomputers in the TOP500 rankings rely on Slurm, and the software is used across industries by enterprises, cloud providers, research labs, and AI companies, including organizations in autonomous driving, healthcare, energy, financial services, manufacturing, and government.
Slurm works with Nvidia’s latest hardware, and its developers continue to adapt it for high-performance AI workloads.
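For readers unfamiliar with how Slurm is used day to day, the sketch below shows one common pattern: writing a batch script with Slurm's `#SBATCH` directives and handing it to the `sbatch` command for scheduling. The partition name, GPU counts, and training script are placeholders for illustration, not details from Nvidia's announcement.

```python
# Hypothetical illustration of submitting a GPU training job to a Slurm cluster.
# Partition name, resource counts, and script paths are placeholders.
import subprocess
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=train-llm          # name shown in the queue
    #SBATCH --partition=gpu               # placeholder partition name
    #SBATCH --nodes=2                     # spread the job across two nodes
    #SBATCH --ntasks-per-node=4           # one task per GPU
    #SBATCH --gres=gpu:4                  # request four GPUs per node
    #SBATCH --time=04:00:00               # wall-clock limit
    #SBATCH --output=train_%j.log         # %j expands to the Slurm job ID

    srun python train.py --config config.yaml
    """)

# Write the script to disk, then ask Slurm to queue it.
with open("train_job.sh", "w") as f:
    f.write(job_script)

result = subprocess.run(["sbatch", "train_job.sh"], capture_output=True, text=True)
print(result.stdout)  # e.g. "Submitted batch job 12345"
```

Slurm then decides when and where the job runs based on the requested resources, queue priorities, and what is currently free on the cluster.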
Alongside the acquisition, Nvidia also introduced the Nemotron 3 family of open models, including Nano, Super, and Ultra sizes.
The models use a hybrid mixture-of-experts architecture to support multi-agent AI systems.
Nemotron 3 Nano focuses on efficient task execution, Nemotron 3 Super supports collaboration across multiple AI agents, and Nemotron 3 Ultra handles complex reasoning workflows.
Nvidia provides these models with associated datasets, reinforcement learning libraries, and NeMo Gym training environments.
Nemotron 3 models run on Nvidia accelerated computing platforms, including workstations and large AI clusters.
Developers can combine open models with proprietary systems in multi-agent workflows, using public clouds or enterprise platforms.
Nvidia provides tools, libraries, and datasets to support training, evaluation, and deployment across varied computing environments.
Nvidia has released three trillion tokens of pre-training, post-training, and reinforcement learning data for Nemotron 3 models.
Additional AI tools, including NeMo RL and NeMo Evaluator, offer model evaluation and safety assessment.
Early adopters integrating Nemotron 3 include companies in software, cybersecurity, media, manufacturing, and cloud services.
Nvidia has made open source models, AI tools, and datasets available on GitHub and Hugging Face for developers building agentic AI applications.
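As a rough sketch of what that developer workflow can look like, the snippet below loads an open model from Hugging Face with the `transformers` library. The repository identifier is a placeholder, not a confirmed Nemotron 3 model name; the actual IDs are listed on Nvidia's Hugging Face organization page.

```python
# Hypothetical sketch of loading an open Nvidia model from Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository ID; substitute the real Nemotron 3 model name.
model_id = "nvidia/nemotron-3-placeholder"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Outline a plan for a multi-agent research assistant."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```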
“Open innovation is the foundation of AI progress,” Jensen Huang, founder and CEO of Nvidia, wrote in the company’s press release.
“With Nemotron, we’re transforming advanced AI into an open platform that gives developers the transparency and efficiency they need to build agentic systems at scale.”