Can we tax the robots?


For over a century, public finance has rested on a straightforward social contract: individuals work, earn income, and contribute taxes, and in return, governments provide social protection and public goods.

That balance is now shifting. The rapid rise of LLMs and AI tools is challenging the foundations of this arrangement, disrupting how income is generated. In turn, this is shifting how governments raise revenue.

Andrew Pery

AI Ethics Evangelist at ABBYY.

This will have major effects on government finances and the way the wider economy works. Without a tax system that reflects the impact of automation, the consequences could be significant. Labor incomes may decline sharply, demand for public spending may rise, and government revenues could come under growing strain.


With rapid change on the horizon, discourse around potential solutions is increasing. A paper by RAND Corporation cautioned that “As capabilities improve and AI is diffused, we need approaches to help maintain economic opportunity, social cohesion, and democratic legitimacy.”

Tax the robots

With pressure on public finances growing, the policy instinct is immediate: tax the robots. In an interview with Axios, OpenAI’s CEO Sam Altman said that AI superintelligence will be so disruptive that there is a need for a “new social contract”.

OpenAI has just released its policy blueprint, ‘Industrial Policy for the Intelligence Age,’ which proposes taxes on automated labor. This was seen as an acknowledgment that AI could reduce the payroll tax base funding Social Security.

In my opinion, this is not the solution that policymakers might think it is. The legal architecture of taxation resists the impulse to tax the robots, because taxpayers must be legal persons, natural or juridical, capable of holding rights, earning income, and bearing liability.

AI systems are none of these things. They cannot own property, file returns, or respond to enforcement. Granting legal personality to AI might seem innovative, but machines cannot be punished or compensate victims. Worse still, shifting liability to AI could allow the humans and corporations who design and profit from these systems to escape accountability entirely.

Reimagining the structure of society

There are several alternatives to the idea of a “robot tax” that may be more practical and more effective as the structure of the labor market shifts.

Instead of asking whether AI should be taxed like a person, it’s more useful to examine how automation reshapes the way value is created.

When machines take over tasks once done by people, income shifts away from wages and toward company profits. One response is to tax those profits differently: for example, by using the estimated salary of the worker the system replaces as the basis for taxation, which would make up for lost income tax and keep revenues stable.
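As a toy illustration of that idea, a "displaced-wage" levy could simply apply an existing payroll rate to the estimated wage of the replaced role. The sketch below is a hypothetical, not an existing tax rule; the salary figure is invented, and the 12.4% rate is borrowed from the combined US Social Security payroll rate purely as an example:

```python
# Hypothetical "displaced-wage" levy sketch. The 12.4% rate mirrors the
# combined US Social Security payroll rate; the salary figure is invented.

PAYROLL_TAX_RATE = 0.124

def displaced_wage_levy(estimated_replaced_salary: float,
                        rate: float = PAYROLL_TAX_RATE) -> float:
    """Charge the firm roughly what payroll tax would have raised on the
    wage of the worker its automated system replaces."""
    return estimated_replaced_salary * rate

print(displaced_wage_levy(60_000))  # levy on one replaced $60k role
```

The hard part in practice is not the arithmetic but the estimate itself: deciding which role a system "replaces", and at what salary, is where any such scheme would face disputes.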

Another option is an “automation tax”, designed to reflect how much a firm substitutes machines for human workers. This could be calibrated using indicators such as revenue per employee or the share of tasks automated within a business, capturing the shift from labor to capital.
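To make that calibration concrete, here is a purely illustrative sketch. The baseline revenue-per-employee threshold, the 2% rate, and the firm figures are all invented assumptions, not proposals from any jurisdiction:

```python
# Hypothetical automation-tax sketch: the rate schedule and firm figures
# are invented for illustration, not drawn from any real tax code.

def automation_tax(revenue: float, employees: int,
                   baseline_rev_per_employee: float = 250_000.0,
                   rate: float = 0.02) -> float:
    """Tax the portion of revenue attributable to above-baseline revenue
    per employee, a rough proxy for labor substituted by machines."""
    if employees <= 0:
        # A fully automated firm: treat all revenue as automation-derived.
        automated_revenue = revenue
    else:
        rev_per_employee = revenue / employees
        excess = max(0.0, rev_per_employee - baseline_rev_per_employee)
        automated_revenue = excess * employees
    return automated_revenue * rate

# A firm earning $100M with 50 staff ($2M per head) versus a labor-intensive
# peer earning the same with 400 staff ($250k per head, at the baseline).
print(automation_tax(100_000_000, 50))   # taxed on the excess
print(automation_tax(100_000_000, 400))  # no excess, no automation tax
```

The design choice being illustrated is that the levy scales with how capital-intensive a firm's revenue is, rather than with any attempt to count "robots" directly.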

This should be seen less as a penalty on innovation and more as a structural rebalancing of the tax system. As AI makes companies more productive, more of the money flows to businesses rather than workers.

If governments continue to rely on wage taxes while fewer people earn wages, the system will stop working properly. Changing how profits are taxed helps ensure tax revenue continues to align with where the money is actually being made.

Another possible approach is a guaranteed annual income, funded by the companies that benefit most from AI-driven profits, as a way to share the gains of automation more fairly.

A guaranteed annual income could be augmented by an AI dividend fund, financed by a nominal "compute levy" on the use of AI applications in commercial settings, applied, for example, to AI agents that replace human labor.

The proceeds of the AI dividend fund would be distributed proportionately as direct payments to “give people breathing room to retrain their skills and re-orient their lives.”
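The mechanics of such a fund can be sketched in a few lines. Everything below is an invented assumption for illustration: the levy rate, the usage figures, and the simplifying choice to distribute the fund as equal per-person payments rather than some other proportional rule:

```python
# Hypothetical AI dividend fund sketch: the levy rate, compute usage, and
# equal-share payout rule are invented assumptions for illustration.

def dividend_per_person(commercial_agent_hours: float,
                        levy_per_hour: float,
                        population: int) -> float:
    """Pool a nominal compute levy on commercial AI agents, then split
    the fund as equal direct payments across the population."""
    fund = commercial_agent_hours * levy_per_hour
    return fund / population

# 1B billed agent-hours at a $0.01-per-hour levy, shared among 1M recipients.
print(dividend_per_person(1_000_000_000, 0.01, 1_000_000))
```

Even a toy model like this makes the policy trade-off visible: a levy low enough not to deter adoption spreads thin across a large population, so the payout depends heavily on how broadly commercial AI usage grows.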

Combined with strong retraining programs, these policies do more than soften the impact of change: they help build an economy that adjusts more easily and handles future disruption better.

An innovative example is the Singapore Government's recent offer of free premium access to AI courses, to help its citizens acquire skills for the AI economy.

The benefits of the AI economy accrue disproportionately to AI developers and to the corporations that deploy their systems. Common sense dictates that they should bear the responsibility for transitioning workers to the realities of the AI economy.

Given those disproportionate benefits, corresponding fiduciary duties should be imposed on developers and deploying corporations to level the playing field and provide a path toward an equitable and sustainable distribution of AI's wealth effects.

The challenge is institutional inertia

The debate over AI often centers on the risk of innovation. But the bigger risk may be that institutions are too slow to adapt.

Tax systems tend to evolve gradually, but technological revolutions do not. If policymakers wait until wages have clearly fallen and the effects are fully visible, they will be forced to respond in the middle of a crisis, when there is little time to design good policy.

If automation leads to huge productivity gains but also leaves many people financially insecure, redistribution ceases to be a political choice and becomes a means of maintaining economic stability.

The real challenge is not whether we can tax AI. It is whether we can reimagine how the system works before change is forced upon us. That means anticipating how value creation is shifting, and adjusting the foundations of public finance accordingly.

It is not just about managing the impact of AI, but getting ahead of it. If policymakers act early, they can shape a system that shares the benefits of automation while maintaining stability. Alternatively, waiting risks being forced into rushed decisions.

Finally, it is important to consider that the AI economy knows no boundaries. It inherently transcends traditional notions of trade relationships and taxation schemes based on the exchange of tangible goods and services.

That’s why AI taxation must be aligned globally to prevent companies just moving to low-tax jurisdictions, and to help prevent a growing wealth gap that could destabilize the global economy.


This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc.
