Everyone is wrong about AI regulation, and the history of the Internet proves it
If the early days of AI were marked by a laissez-faire approach, 2026 will be the year of intense battles over regulation and who, if anyone, gets to control it.
I think back to the early Internet, specifically the 1992-1994 period, when high-speed connections were sparse and the web still mostly described something spiders built.
The Information Superhighway that the World Wide Web spawned (built on the Internet's framework) was like a vast, rapidly growing city that anyone with a connection could visit; no passports or border security stood guard, and no one checked who built a business or information repository, or what any of them might contain. It was the Wild West, and for years, government officials at virtually every level took a decidedly hands-off approach. As with other tech epochs, the vast majority of officials barely understood the Web, let alone what controls and regulations might be necessary.
In the US, a small army of congresspeople and officials (then-Senator Larry Pressler and then-Vice President Al Gore among them) eventually led the regulatory charge, culminating in the Telecommunications Act of 1996. That law, however, was less about regulating the content of the Web than about handing oversight of the Internet to the FCC so that it could be managed, to some extent, like a utility.
A byproduct of that law (specifically the Communications Decency Act folded into it) was the now-infamous Section 230, which shielded internet providers and platforms from responsibility for any of the content that appears on their services and networks.
Does hands off work?
It's the kind of get-out-of-the-way regulation that the current US Administration is proposing, though the reasoning is less about unfettered access and more about global competition.
Federal AI policy is not necessarily about the people. The long-term concern likely revolves around the coming AI arms race: the country with the most powerful AI may control information, as well as attack vectors that affect access to information and critical infrastructure (not to mention who controls the robot army).
Caught in this mix are, in the US at least, 50 state governments that are probably more interested in regulating AI to protect their citizens from things like AI bias, AI misinformation, and even AI access to systems that would be better handled by humans. There's also some concern about protecting jobs from AI.
Consider the timeline: the Internet started functioning in the late 1980s, the World Wide Web arrived in (arguably) 1993, and yet it still took years for even the milquetoast Telecom Act to appear, and decades more for anyone to realize that we might also need to look at information regulation. While the EU has moved quickly to enact regulations that protect users and their data, the US has, aside from the aging Children's Online Privacy Protection Act (COPPA), virtually no federal Internet regulations and few toothy state-level controls.
The rapid, unprecedented growth of AI (see "AI Time") and its spread into every sector of our lives has, though, inspired some panicky work at the US state level and, perhaps, equally panicky pushback at the US federal level. To wit, the White House's latest executive order seeks to eliminate and block any and all state-level AI regulation.
Executive inaction
On some level, though it pains me to say this, I agree with the White House: the US cannot afford to fall behind its chief global adversary, China, in the AI race. We're currently ahead, but that's changing fast. One report noted that China's open-source LLMs' global share had, in just a year, grown from just 1.2% to 30%.
The threat of Western tools being overrun by Chinese ones is not spin; it's a real possibility. An equally real possibility, though, is that no one fully understands the potential harm of vast, global use of AI. Having no regulatory framework means we're basically throwing up our hands and seeing what happens. The White House's light-touch approach is not going to be enough.
Worse yet, its fixation on anti-bias efforts in AI will prove a harmful distraction. Withholding funds from states that don't comply will only serve to slow down AI development and create even less reliable models.
AI is bigger than all of us
What's needed is compromise regulation that works consistently across states and even reaches our global partners, say, in the EU. It requires decisions about what's best for humanity and about how to maintain a balance of control between us and China. It means using our recent Information Age past, and the growth of the Internet, to inform our approach to a new information and intelligence technology that is moving at triple time. It means avoiding past mistakes and getting ahead of the unintended consequences of an AI future.
The squabbling between state and federal officials is unhelpful. State and local governments trying to regulate with a patchwork of their own legislation are sure to fail, not because the regulation is bad, but because it won't hold up against technology that effortlessly crosses every known border.
The lack of rational, coherent discussion on this topic is not just frustrating; it's dangerous. AI development will continue apace, no matter what we do. If the world doesn't wrap its arms around thoughtful regulation, including checks, balances, and control, AI will end up no less dangerous than unregulated nuclear power.

A 38-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and "on line" meant "waiting." He's a former Lifewire Editor-in-Chief, Mashable Editor-in-Chief, and, before that, Editor-in-Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular, weekly tech column for Medium called The Upgrade.
Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Mark, the Today Show, Good Morning America, CNBC, CNN, and the BBC.