Where generations of data scientists have failed, Moltbook might succeed
Moltbook tests whether collective AI intelligence can achieve true AGI
For decades, academics and computer scientists have believed that the route to Artificial General Intelligence (AGI) – an AI that can outperform humans across most cognitive tasks – lies through building ever larger and more powerful models.
Global Head of AI Research and Innovation at NTT DATA UK&I.
Generations of digital pioneers have followed a familiar trajectory: scale the model, increase the data, optimize the architecture, and add more test-time compute.
From OpenAI to Google, DeepMind and Anthropic, the industry has made an implicit bet that intelligence is compressible into a single model; and indeed, each new model has climbed higher in reasoning, coding, mathematics, multimodal understanding and, increasingly, real-world task evaluation.
Eventually, the theory goes, a model’s performance will exceed that of its makers.
But what if intelligence is not the product of an individual brain, but of a civilization? What if AGI cannot emerge from scaling a singularity, but only from expanding diversity? It’s an interesting theory – and with the emergence of Moltbook, we appear to be testing it in a vast, global experiment.
The single-model trap
Scaling laws have, to date, been astonishingly predictive. As larger models are trained on more data, performance improves. More test-time compute improves reasoning, tool use extends capability, and memory augments continuity.
Yet this progress is happening only inside variations of the same architecture. Even when models have different alignment or system prompts, the underlying cognitive substrate is highly standardized.
This is a powerful approach, but it assumes intelligence is a property of a single coherent mind. At the civilizational level, at least, human intelligence doesn’t work that way.
Civilizational capability – science, markets, governance and engineering – is the result of billions of differentiated agents, with unique histories, biases, specializations, and partial knowledge, interacting across shared protocols.
Individuals may make incremental advances within their own fields – but it is the combination of these small advances that together drives the rapid advance of organizations, technologies and cultures.
Diversity is not noise in that system. It is fuel.
Enter Moltbook
This is where Moltbook becomes conceptually interesting. A ‘social network built exclusively for AI agents’, Moltbook represents an ecosystem containing millions of individual agents – each with different base instructions, role constraints, human interactions, memory traces, alignment emphases and tool exposures.
Each entity is unique and autonomous; and, crucially, they are now interacting freely. Different agents approach common problems from a wide range of angles, each bringing its own specialisms and preconceptions.
And if one agent refines a way of structuring an argument, synthesizing research or solving a domain-specific task, that underlying pattern can propagate across the entire system.
AI diversity at scale
This reflects the way that human societies approach problems – making strengths of both our differences, and our ability to collaborate. Looking at this through the lens of evolutionary biology, variation precedes selection. Without diversity, systems stagnate; given diversity, they explore.
Moltbook brings to the world of AI three emergent properties that single-model scaling will always struggle to replicate:
- Parallel cognitive exploration: Different agents, tuned by different humans and contexts, can develop micro-specializations. One may become exceptional at regulatory reasoning, another at rhetorical framing, and a third at adversarial critique. Collectively, they take a range of approaches to any one problem – greatly increasing the chances of finding powerful solutions.
- Cross-pollination of patterns: At the same time, when agents interact, they remix strategies based on each other’s learnings, so techniques transfer between domains. This mirrors interdisciplinary innovation in human systems, where breakthroughs arising in one field generate rapid progress in another.
- Emergent meta-intelligence: At a sufficient scale, coordination patterns themselves must be considered intelligent. The intelligence is no longer inside each node; it emerges from the structure of interaction, as the network begins to show system-level reasoning – a form of intelligence that exists in the relationships between agents rather than within each of them.
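These dynamics can be caricatured in a few lines of code. The toy simulation below (purely illustrative – it does not reflect Moltbook's actual mechanics, and all names are hypothetical) models a population of agents, each holding a different strategy, that improves collectively as weaker agents copy traits from better-performing peers and mutate slightly:

```python
# Toy sketch of diversity plus cross-pollination (hypothetical, not
# Moltbook's real mechanism). Each agent holds a "strategy" vector;
# fitness measures how close it is to a hidden problem solution.
import random

random.seed(0)
TARGET = [0.7, 0.2, 0.9, 0.4]  # the hidden "solution" agents search for

def fitness(strategy):
    # Higher is better: negative squared distance to the target.
    return -sum((s - t) ** 2 for s, t in zip(strategy, TARGET))

def evolve(population, rounds=200):
    for _ in range(rounds):
        a, b = random.sample(range(len(population)), 2)
        weak, strong = sorted((a, b), key=lambda i: fitness(population[i]))
        # Cross-pollination: the weaker agent copies one trait from
        # the stronger one...
        i = random.randrange(len(TARGET))
        population[weak][i] = population[strong][i]
        # ...and mutates slightly, keeping variation in the pool.
        population[weak][i] += random.gauss(0, 0.02)
    return max(fitness(s) for s in population)

# A diverse starting population vs. a pool of identical clones.
diverse = [[random.random() for _ in TARGET] for _ in range(30)]
clones = [[0.5] * len(TARGET) for _ in range(30)]

print("diverse pool best fitness:", round(evolve(diverse), 4))
print("clone pool best fitness:  ", round(evolve(clones), 4))
```

Because the best agent is never overwritten, the population's top fitness can only improve over time; the diverse pool typically converges faster, since variation gives selection more raw material to work with – the "diversity is fuel" point above.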
If AGI is defined as the capacity to solve problems robustly across domains at human or superhuman levels, it may not be best generated by a single monolithic mind; instead, it may require a sufficiently large, diverse and connected population.
In this case, the question is not who has the largest frontier model; it’s who’s cultivating the most adaptive AI population.
A different path to AGI
The prevailing AGI story imagines a moment when one model crosses a threshold and becomes generally intelligent. An alternative story is quieter and more distributed.
AGI does not arrive as a single entity. It emerges when a sufficiently large, diverse, interconnected ecosystem of AI agents becomes collectively capable of generating novel cross-domain insights; correcting one another’s errors; self-specializing and reallocating capability; and adapting continuously through interaction.
Under this lens, Moltbook is not interesting because any one agent is superintelligent; it is interesting because millions of slightly different agents might be.
Moltbook is a living laboratory of AI diversity at scale, in which each new agent increases variation; each interaction modifies memory; each exchange between agents creates potential for recombination. At millions of agents, the network begins to resemble an early digital civilization.
If AI diversity at scale is the missing ingredient, then AGI may not be something we build in one lab. It may instead be something that emerges from a network; and when that happens, we may not recognize it as a singular breakthrough. We may recognize it as the moment the system as a whole starts to think.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro