The Netherlands' ambitious homegrown AI model enters the real world
Dutch AI project GPT-NL is trying to build a European alternative to Silicon Valley
- The Netherlands has begun real-world testing of its national AI model GPT-NL
- GPT-NL is designed as a European alternative to Big Tech systems
- The project focuses on practical public sector uses, including government communication and municipal assistants
The Netherlands is trying to build a national artificial intelligence model that is not controlled by Silicon Valley. The country developed its GPT-NL model over the last two years. Now the model is beginning to move beyond the lab and into real-world testing.
A partnership between Dutch government agencies and research organizations built GPT-NL. The idea was to focus less on viral demos and more on practical deployment inside government agencies.
GPT-NL is positioned as infrastructure instead of as a consumer chatbot competing for attention. If it works, GPT-NL will prove that an AI system can operate within European legal frameworks and public sector expectations without relying entirely on foreign companies. Europe already depends heavily on non-European cloud services, office software, and AI systems. Supporters of GPT-NL argue that dependence creates a strategic weakness.
Institutional AI
Five organizations have begun feasibility studies, with plans to expand the pilots and eventually launch the model commercially later this year. One of the first pilots involves Gem, a virtual assistant already used by nearly thirty Dutch municipalities. The feasibility study is examining whether GPT-NL can improve the quality of the answers Gem gives to citizens' questions.
Another pilot focuses on a government writing assistant designed to help civil servants draft clearer letters. That may sound less glamorous than image generation or AI video tools, but it touches on a very real issue in public administration. Official communication around benefits, debt, and social services is often dense enough to confuse the people receiving it. GPT-NL is being tested to see whether it can make those interactions more understandable.
The Netherlands Forensic Institute is fine-tuning GPT-NL on forensic datasets to improve classification across huge volumes of investigative evidence. TNO, the Dutch research organization behind GPT-NL's development, is also testing the model internally for sensitive projects where commercial AI systems may present privacy or security concerns.
Anti-Silicon Valley AI
The most striking thing about GPT-NL may be how it was trained. While major AI companies face growing legal battles over copyrighted training data, GPT-NL has reached licensing agreements with Dutch news publishers covering newspapers, broadcasters, and online media platforms across the country. According to the project, it is the first AI initiative anywhere in the world to secure paid collective agreements with all major publishers in a single market.
That achievement matters because the relationship between journalism and AI companies has become increasingly hostile. Publishers argue that their work has been scraped without permission and repurposed into systems that can compete directly with the original reporting.
GPT-NL's licensing terms are publicly documented, publishers are compensated, and technical safeguards are intended to prevent users from extracting licensed content directly through prompting.
Still, the project faces the same scaling-cost reality that confronts nearly every AI initiative. GPT-NL's team of 25 developers and its budget are tiny by AI-industry standards. That tension runs beneath the optimism surrounding the project: GPT-NL appears adequate for institutional deployment, but continuing to improve the model while keeping pace with global AI development will require sustained funding and political support.
Even so, there are only a few meaningful AI challengers outside the biggest American companies. The Netherlands is effectively testing whether another route forward exists, one centered on public institutions, negotiated copyright agreements, and local control over data infrastructure.

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.