When machines remember us: Rethinking privacy in the age of humanoids

As policymakers race to regulate AI, a more intimate form of artificial intelligence is emerging quietly yet profoundly. The next revolution in technology will not arrive as an app or an algorithm. It will walk toward us, look us in the eye, and ask how it can help.

Dr. Najwa Aaraj, CEO of the Technology Innovation Institute

Goldman Sachs forecasts that consumer sales of humanoid robots will surpass one million units by 2035, a signal that this future is not speculative but rapidly approaching. As their forms become familiar, their presence will test one of humanity’s oldest instincts: the desire for privacy.

New era of trust

Until now, our digital existence has unfolded through screens and sensors we could switch off. A phone slips into a pocket; a smart speaker rests quietly on a shelf. But a humanoid is different. It observes, learns, reasons and acts continuously.

It can read tone, posture, and emotion, capturing data far beyond what a microphone or camera could record. In the age of humanoids, privacy will no longer mean simply protecting what we say. It will mean defining what machines are allowed to know about who we are.

This shift demands a new kind of trust. For decades, technology companies have asked for our “consent” through lengthy forms and hidden clauses. Yet no checkbox can capture the complexity of interacting with a learning, adaptive robot.

When a humanoid helps an elderly patient stand, it must analyze posture, predict balance, and detect hesitation. Every gesture produces intimate data. But who owns those fleeting moments: the patient, the hospital, or the robot’s creator? And how can we ensure such data serves human dignity rather than convenience alone?

Privacy-preserving technologies

Existing privacy laws were built for files, not faces; for static storage, not dynamic interaction. With humanoids, privacy becomes fluid, negotiated in real time through movement, proximity and context.

Policymakers will need adaptive regulatory frameworks that evolve as quickly as these systems do, incorporating continuous risk assessments and ethical design principles from the very start. This is privacy by architecture: engineering discretion so that it is not optional but automatic.

At the core of this architecture lie cryptography and cryptographic protocols, the science and tools that make privacy enforceable by design.

They enable humanoids to learn and respond to human needs without revealing the underlying data. Rather than trusting that sensitive information won't be misused, cryptographic techniques ensure it cannot be accessed in the first place. This is the difference between policy promises and mathematical guarantees.

In a world where humanoids continuously observe, interpret, and act, such guarantees are essential. Encryption and privacy-preserving technologies can transform ethical intentions into operational safeguards, anchoring trust in the code itself.
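
To make that idea concrete, here is a minimal sketch of what "trust in the code" can look like at the smallest scale: a sensor reading encrypted at the moment of capture, readable only to whoever holds the device key. It uses Python and the open-source cryptography library's Fernet API; the sensor reading and the key-handling model are simplified illustrations, not a production design.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical illustration: encrypt a sensor reading at capture time,
# so anything leaving the sensor pipeline is ciphertext, not raw data.
device_key = Fernet.generate_key()   # in practice, held in the robot's secure store
cipher = Fernet(device_key)

reading = b"posture: unsteady; gait speed: 0.4 m/s"  # made-up sensor data
token = cipher.encrypt(reading)      # this token is all that gets stored or sent

print(token[:20])                    # unreadable without the key
print(cipher.decrypt(token))         # only the key holder recovers the reading
```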

Modern privacy engineering already offers tools for this vision. Techniques such as federated learning, homomorphic encryption, and secure multi-party computation allow AI systems to learn from local data without exposing it.
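
To give a flavor of one of these techniques, the toy sketch below simulates federated averaging: three robots each train a small model on private local data and transmit only their weight updates, which a coordinator averages. The names, data, and model here are hypothetical illustrations, and production systems typically layer safeguards such as secure aggregation and differential privacy on top.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """On-device training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w  # only these weights are shared, never X or y

def federated_average(weight_list):
    """Coordinator averages weights; it never sees any robot's raw data."""
    return np.mean(weight_list, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])   # the pattern all robots try to learn
global_w = np.zeros(3)

for _ in range(20):                    # communication rounds
    updates = []
    for _ in range(3):                 # three robots, each with private data
        X = rng.normal(size=(32, 3))   # private sensor features, kept on-device
        y = X @ true_w                 # private labels, kept on-device
        updates.append(local_update(global_w, X, y))
    global_w = federated_average(updates)

print(global_w)  # close to [1.0, -2.0, 0.5], learned without pooling raw data
```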

A humanoid can thus improve its assistance over time while keeping sensitive information within its own encrypted domain. Privacy, in this sense, is not just a social value; it is a scientific discipline advancing in parallel with robotics.

Yet the code behind humanoids must reflect more than technical function; it must embody social norms. In many cultures, cues like posture, gaze, and proximity signal respect or intrusion.

Robots that move among us must be attuned not only to our privacy but to our customs, boundaries, and emotional comfort. Trust will depend not just on what machines can do, but on how gracefully and respectfully they do it.

If we embed privacy and dignity at the heart of humanoid systems, through both code and conduct, these machines can help us reclaim control over data that today flows unchecked through digital platforms.

A care humanoid can allow elderly individuals to live independently without constant human oversight. A humanoid tutor can keep a child’s learning data safer than a cloud-based platform by processing it locally. The goal is not to reject these technologies, but to guide them toward humane, transparent, and ethical ends.

Respect, discretion and care

As a scientist and researcher, I see robotics as a mirror, reflecting not only our engineering ambition but our ethical imagination. At the Technology Innovation Institute, we are building physical artificial intelligence that must engage with the world in all its complexity.

This means designing not only for function, but for respect, discretion, and care. As we teach machines to perceive us, we are also redefining, with intention, what it means to be truly seen.

The task before policymakers, scientists, and citizens is to move from reaction to anticipation, to write the rules of coexistence before machines arrive at our doorsteps. Privacy, once a personal concern, must now become a shared design principle.

Humanoids are arriving at a defining moment for society. Their emergence will test our ability to govern technology with foresight, ethics, and compassion.

If we succeed, we will build a future where physical artificial intelligence safeguards, rather than sacrifices, human potential, proving that innovation and integrity can coexist by design.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
