An AI executive's dire warnings about the future are chilling – but his solution is worse than the problem
Trusting profit-driven tech companies to reshape society is a nightmare in the making
AI is making for a fraught future, one whose problems DeepSeek senior researcher Chen Deli believes tech companies are best suited to solve. DeepSeek is one of China's hottest AI upstarts, albeit one facing political and technical headwinds. For a startup that jolted global markets with a low-cost AI model and spurred a wave of open-sourcing from competitors like OpenAI, DeepSeek has been unusually quiet. So when one of its leaders warns that AI could eliminate most jobs over the next two decades and cause major disruptions that society is not ready for, people pay attention.
Chen predicts the "honeymoon phase" we are in now will end, and people will face a wave of layoffs vast enough to reshape social contracts and institutions. He made it sound like a less immediately deadly Black Plague, rewriting people's lives on a similar scale. It's certainly not the most outlandish claim. But Chen's proposal for corporate saviors sounds as nonsensical as any AI hallucination.
"Tech companies should play the role of guardians of humanity, at the very least, protecting human safety, then helping to reshape societal order," he said, setting off every warning bell imparted by the entire history of dystopian science fiction, not to mention actual history.
The word "reshape" alone ought to chill the bones. He's effectively saying the corporations building the tools that might upend society should also be in charge of designing what comes next. It's as if Oppenheimer had asked the Manhattan Project to write the postwar constitution, but only after nuclear reactors had an IPO on Wall Street. The suggestion isn't just naïve. It's deeply dangerous.
The changes wrought by AI go well beyond who gets replaced by a chatbot that's sometimes adequate at the job. Chen's not wrong to point out that AI systems will increasingly outperform humans. But what kind of world are we building when those jobs are gone?
AI already sets the tone for what we see online, what we buy, and how we behave, with the tech companies monetizing every bit of us and our data they can. The idea of these same companies, insulated from meaningful oversight and beholden only to profit margins, serving as the selfless custodians of a chaotic society, is laughable. If anything, they’ve made it abundantly clear that they’ll prioritize growth, revenue, and everything else above humans and the broader project of civilization, even when the collateral damage is obvious.
Every week, there seems to be another embarrassing or outrageous story born from the flaws and foibles of AI, and plenty more about how people are misunderstanding and misusing the technology. Yet the response is almost never more than a shrug and a promise to fix it eventually, right after they complete their next crucial investor call.
Human intelligence regulating the artificial kind
To be fair, public regulators haven't exactly dazzled us with their speed or savvy. The EU's AI Act is a good step, but not enough on its own, and U.S. regulatory frameworks are fragmented and mostly reactive. The average congressional hearing on AI is a grim parade of buzzwords and tech executives politely nodding at lawmakers who don't understand what they're talking about. China, where DeepSeek is based, has been more aggressive in some areas, but it's hard to argue that centralized authoritarian control is the better model for tech governance. Surveillance concerns and speech limitations don't get easier to swallow just because they have a human signing the rules.
The current state of regulation is uneven, inconsistent, and often too slow. But that doesn't mean the answer is to hand over the reins to the developers like they are benevolently neutral. They are not your friends or your representatives. They are certainly not suited to be physical and civilizational caretakers of humanity. They are commercial actors with products to sell and quarterly metrics to hit. When push comes to shove, they’ll sand down any ethical qualms until they fit neatly inside a slide deck.
You can’t mitigate harm when the very act of mitigation threatens your business model. If an AI-powered hiring system turns out to be discriminatory, fixing it costs money. If an automated content generator floods the web with low-quality sludge, turning it off affects revenue. There’s no incentive to do the right thing unless someone forces their hand, and by that point, it’s usually too late.
The tech industry has shown repeatedly that it’s not equipped to self-regulate in a way that prioritizes the public good over private gain. In fact, the mere idea that the architects of disruption should also be in charge of constructing what replaces the old order should terrify anyone who’s ever been on the wrong side of a platform’s algorithm.
It's not anti-progress, it's pro-humanity
None of this is to say that AI doesn’t have incredible potential for good or that demanding safeguards means you're anti-technology. Despite confusion over the term, it's worth remembering that the Luddites weren't against technology either; they were anti-exploitation. Their protests weren't about looms, but about factory owners who used those looms to undercut skilled labor and impose miserable working conditions.
Chen Deli is right to ring the alarm, but wrong about who should hold the bell. Whistleblowers don’t tend to emerge from boardrooms. We don’t yet have a coherent framework for what responsible AI governance looks like. We have pieces, but no connective tissue to make those ideas stick, and we lack the political courage to impose them on the people with the most power.
Still, I’m not entirely pessimistic. The frameworks we need could exist. They could be built by coalitions of governments, civil society, independent researchers, and yes, even some principled voices from within the tech world. But they’ll only come into being if enough people demand them.
If the next decade really does bring the kind of transformation Chen predicts, we'll need more than corporate promises. We'll need rules with teeth to preserve the safety and dignity of humanity without trying to make it a product for sale.
Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.