Should everyone have access to ChatGPT? AI for all raises some important, yet difficult, questions


Earlier this year, reports surfaced that Sam Altman, OpenAI’s CEO, floated the idea of giving every UK citizen free access to ChatGPT Plus.

According to The Guardian, the proposal came up in discussions with UK technology secretary Peter Kyle in San Francisco, and it could have cost as much as £2 billion. That's money the government, thankfully, seems to have decided would be better spent elsewhere.

But the rumor raises an important question: should everyone have access to ChatGPT Plus?

The case for universal access

Love them or hate them, AI tools like ChatGPT are already changing how people study, work, and create, from summarizing dense documents to generating presentation slides in minutes. They can be genuinely useful.

James Wilson, an AI ethicist and author of Artificial Negligence, tells me: “On the positive side, from a productivity and thought partner perspective, sure, why not have a sparring partner for brainstorming or to make slides look pretty? Just as long as it doesn’t stop people from doing their own critical thinking.”

There’s also a fairness angle. If powerful AI tools are locked behind premium subscriptions, only the wealthy will benefit. Universal access could help bridge, rather than widen, the digital divide. “Democratizing AI is going to be vital to avoid exacerbating the wealth divide,” Wilson adds.

In other words, if AI becomes as essential as the internet or email, as many predict, shouldn’t it be available to all?

The risks and red flags

But, of course, there are some significant caveats to rolling out AI to everyone.

Wilson’s first reaction to the universal access proposal was blunt: “My first thought is – isn’t that sort of how drug dealers work? They give you the hits for free until you are dependent on them, then they gradually put the prices up.”

Even without pricing and dependence problems, there are deeper issues here. We know that ChatGPT and other large language models can be highly persuasive, even when they’re wrong. “The way these LLMs have anthropomorphized themselves into our lives means that we tend to become too trusting of them,” Wilson says. “Hence the risk that we blindly accept their hallucinations as truth.”

People are already turning to AI for therapy and even romance, with outcomes ranging from surprisingly useful to seriously problematic.

These risks multiply if AI tools are used across all industries, especially in government and business. A civil servant or corporate analyst could unknowingly copy misinformation into a report, which then travels up the decision-making chain. “From a professional/government perspective, this error could get lost in the decision-making chain because the person using the LLM will undoubtedly pass the misinformation onto others as their own work,” Wilson explains.

Then there’s bias. All AI models reflect the worldviews of the people and institutions that build them. “The providers (OpenAI, Meta, Deepseek, etc.) created these models using the training data they chose (or stole) and trained them based upon their biases and ideologies,” Wilson says.

That matters when geopolitics seeps in. “We are already seeing this with Deepseek denying anything bad ever happened in Tiananmen Square. How long before ChatGPT starts spouting anti-abortion propaganda, etc?”

When this filters into classrooms or shapes culture more broadly, the dangers become impossible to ignore. “Put this in the hands of everyone and you have a very powerful way of rewriting history Orwell-style and nudging social behavior and culture,” Wilson warns.

Public good or private product?

It’s also worth considering that if governments subsidize universal access, they’re effectively endorsing one company’s worldview. Is it really wise to outsource public knowledge infrastructure to OpenAI or any other private firm?

We’ve faced similar debates before around search engines, social media platforms, and broadband rollout. The difference here is that ChatGPT doesn’t just connect people to information; it can also reshape it, which makes the stakes even higher.

Finding the middle ground

One answer is to roll out AI tools to everyone, but in a smart and deliberate way.

Everyone could have some level of free access to the premium tiers of ChatGPT, but that would need to be paired with digital literacy education, transparency about model limitations, and safeguards against monopolization. Governments could support open-source alternatives or fund AI systems designed as true public utilities.

The idea of free ChatGPT Plus for all may never have been a serious plan, but it sparks a serious debate. Universal access sounds progressive, even inevitable.

But as Wilson points out, it also risks dependency, misinformation, and subtle cultural manipulation on a mass scale.

AI might be the next public good, but it’s also the next public hazard. The question isn’t just whether everyone should have access to ChatGPT. It’s whether we’re ready for what that would mean.

Becca Caddy

Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality. 
