Why did Meta invest in Scale AI – and how will it change the AI models you use?
The shift in how AI is evaluated will demand better human data

Meta’s move to take a significant stake in Scale AI isn’t just another strategic investment. It’s an admission: human data is the critical infrastructure needed to build better AI, faster.
For years, model architecture and compute have dominated the conversation. But we’re entering a new era, one where the differentiator isn’t how novel your transformer is, but how well your model reflects and responds to real human experience. That demands high-quality, diverse, and continuous human input throughout the development lifecycle.
CEO and Co-Founder of Prolific.
A vote of confidence in human data
Scale’s primary service—labeling data outputs using human annotators—has long been essential to AI. But it hasn’t always been glamorous. Data preparation was often seen as a backroom task, while shiny model architectures stole the limelight.
Meta’s investment sends a clear message. The training and evaluation of AI models depend on data that is not just abundant, but accurate, representative, and human-validated. It’s a strategic move that gives Meta both privileged access to Scale’s data infrastructure and a highly influential stake in a key player in the data annotation space.
But therein lies a broader concern: when a major tech company takes a significant stake in a service provider, potential conflicts of interest arise. For organizations in the same competitive landscape, this can raise doubts about alignment, priorities, and incentives, making continued reliance on that provider increasingly difficult to justify.
One thing’s for certain: your data partner has never mattered more. We’re entering a period of market shake-up, where diversification of suppliers and specialization in services will become increasingly valuable to AI builders.
Enter the experience era
Beyond the boardroom maneuvers, something much more fundamental is happening in AI development. We’ve entered the era of experience. It’s not enough for models to be technically sophisticated or capable of passing abstract benchmark tests. What matters now is how models perform in the real world, across diverse user groups and tasks. Are they trustworthy? Are they usable? Do they meet people’s expectations?
This shift is being driven by an awakening among model developers: in a competitive landscape, it’s not just about who can build the most advanced model, but whose model people choose to use. The new frontier isn’t measured solely in benchmark scores or inference speed—it’s measured in experience quality.
That means the success of an AI model is increasingly dependent on human input throughout its lifecycle. We’re seeing a surge in demand for real-time, continuous human evaluations across multiple demographics and use cases.
Evaluating models in the lab is no longer enough. The real world, with all its complexity and nuance, is now the benchmark.
Why synthetic data isn’t the answer—at least, not yet
Some may argue that synthetic data will eventually replace the need for human annotators. While synthetic data has a role to play, particularly in cost-efficient scalability or simulating rare edge cases, it falls short in one critical area: representing human experience. Human values, cultural nuances, and unpredictable behavior patterns cannot be easily simulated.
As we grapple with AI safety, bias, and alignment, we need human perspectives to guide us. Human intelligence, in all its diversity, is the only way to meaningfully test whether AI systems behave appropriately in real-world contexts.
That’s why the demand for real-world, high-fidelity human data is accelerating. It’s not a nice-to-have. It’s essential infrastructure for the next wave of AI.
The humans behind AI
If human feedback is the engine powering better AI, then the workforce behind that feedback is its beating heart. The industry must recognize the people providing this essential input as co-creators of AI.
This begins with diversity. If AI is going to serve the world, it must be evaluated by people who reflect the world—the best and the breadth of humanity. That means including people from different cultures, socioeconomic backgrounds, and educational levels. It also means ensuring geographic diversity so models don’t just perform well in Silicon Valley but also in Nairobi, Jakarta, or Birmingham.
Equally important is expertise. As AI becomes more specialized, so too must its human evaluators. Educational AI systems should be evaluated by experienced teachers. Financial tools require scrutiny by economists or accountants. Subject matter experts bring context and domain-specific insight that generic crowd work can’t replicate.
But building this kind of human intelligence layer doesn’t just happen. It requires thoughtful infrastructure, ethical foundations, and a commitment to the people behind the data.
That means fair pay, transparency, and a smooth user experience that gives people easy access to interesting and engaging tasks. When contributors feel respected and empowered, the quality of insight they provide is deeper, richer, and ultimately more valuable. Treating evaluators well leads to better data—and better AI.
A turning point for the market
Meta’s investment in Scale may appear like another play in a long series of tech consolidations, but it’s something more: a signal that the era of human data as critical infrastructure for AI has truly begun.
For model developers, this is a call to action. Relying on one provider—or one type of data—no longer cuts it. Specialization and trust in your human data partners will define the winners in this next phase of AI development.
For the broader industry, this moment is an invitation to rethink how we build and evaluate AI. The technical challenges are no longer the only obstacle. Now we must consider the social contract: How do people experience AI? Do they feel heard, understood, and respected by the systems we build?
And for many, this moment validates the belief that human intelligence is not a constraint on AI progress, but one of its greatest enablers.
Looking ahead
The Meta/Scale deal will likely catalyze further consolidation in the human data space. But it also opens the door for more specialized and transparent providers to shine. We anticipate a surge in demand for high-integrity, experience-focused data partners—those who can provide rich, real-world feedback loops without compromising trust.
Ultimately, this isn’t just about who builds the most powerful model. It’s about who builds the most useful, trusted, and human-centric model. The future of AI is intuitive, inclusive, and deeply human. And that future is already taking shape.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro