Why are so many AI assistants female by default — and should we be worried about that?

Female robot/AI
(Image credit: Getty Images/NurPhoto)

Open ChatGPT and you can choose between different voices. Some feel obviously feminine, others masculine, and several are more neutral. They have fairly neutral names too, like Ember, Sol and Juniper.

But it wasn’t always that way. For years, many AI-powered assistants arrived with a default setting: female. Although ChatGPT isn’t exactly the same kind of system as the early voice assistants that first entered our homes, think back to their names: Siri, Alexa, Cortana. Even when they weren’t explicitly gendered, the voices often were.

People don’t just ask these systems for the weather. They confide in them and rely on them for work. Some even abuse them. At the other end of the spectrum, some form deep emotional attachments to them. When conversational AI that can mean so much to us is designed to sound human, and often specifically feminine, that choice can shape expectations about who serves, who assists and who holds authority.

Why did early AI assistants have female voices?

There isn’t one neat answer. Early voice assistants were developed at a time when much of the available speech data, including customer service recordings and telecommunications archives, was dominated by women’s voices. That skew influenced early design and training decisions.

But helping roles were feminized long before they were digitized. Think telephone operators, secretaries and receptionists. Positions associated with assistance and emotional labor were historically performed by women, and those associations have proven remarkably durable, both in how tech companies design these products and in what we expect from them.

This is partly why companies have often justified defaulting to a female voice by citing research suggesting people find female voices more pleasant, more trustworthy or easier to engage with. What I find fascinating is that yes, there is research that supports aspects of this, alongside the broader cultural context. But the findings are not definitive. Preferences are shaped by social norms, expectations about authority and care, and ideas about which voices “fit” particular roles in particular contexts.

There’s also a widely repeated claim that humans prefer female voices from infancy. Babies hear their mother’s voice in the womb, the argument goes, so we’re wired to respond positively to female voices.

But Kate Devlin, Professor of AI and Society in the Department of Digital Humanities at King's College London, challenges that narrative. In her book Turned On: Science, Sex and Robots, she writes:

“The idea behind this is that babies respond to their mother’s voice in the womb over all other voices. But isn’t that because, well, they’re inside their mother? I asked my friend, baby scientist Caspar Addyman, if this might be the case. ‘Babies do prefer female voices and faces,’ he told me. ‘But only in the first eight months or so. I’m not aware of any evidence for this beyond that period.’”

In other words, even if early preference exists, it may not explain adult behavior or how our preferences evolve over time.

More recent research further complicates the assumption that users strongly favor female assistants because they’re perceived as more trustworthy. A 2021 study found that while stereotyping can occur with gendered voice assistants, there were no significant differences in trust formed towards a gender-ambiguous voice versus a gendered voice. If trust doesn’t reliably hinge on femininity, the rationale for defaulting to it becomes harder to defend.

Media has played a role too. I’ve written before about how sci-fi influences how we treat AI today, and many of our favorite sci-fi stories have long imagined AI in feminized forms. Think seductive operating systems, compliant digital companions, subservient robotic helpers. Male robots and AIs exist, of course, but the archetype of the “helpful female machine” persists.

If these defaults are rooted in older labor roles, inherited stereotypes and research that may no longer hold up across the board, why perpetuate them? Tech is rarely shy about reinvention. If we’re building the future, we could choose to build it differently and more equitably.

AI Assistant
(Image credit: Getty Images/RICCARDO MILANI)

Why does it matter?

This might sound trivial to some people. It’s just a voice and users can change it now anyway, right? But the issue here isn’t just how AI sounds. It’s what that sound symbolizes and reinforces, and the feedback loops it creates.

A 2024 study titled The feminization of AI-powered voice assistants explains: “This bias can manifest in several ways and at different levels, such as training data bias, inclusive design challenges, stereotyped responses that reinforce gender prejudice, female voice default, passive or submissive tone, poor handling of harassment, and insufficient range of diverse voice options.”

Research increasingly suggests that gendered technology doesn’t just mirror stereotypes; it can entrench them, shaping expectations about who serves, who assists and who holds authority.

Today, users have more choice. Many assistants and chatbots still default to a female voice, but male and gender-neutral options are increasingly available. However, at the time of writing, there are still no clear regulatory standards addressing gender stereotyping in AI design.

There’s more at stake than voice settings alone. Expanding genuinely neutral options is one step. Increasing gender diversity within AI development teams is another. Design decisions often reflect who is in the room. And despite the fact that more people than ever are using, and being shaped by, AI systems, women remain underrepresented in AI development. Recent estimates suggest they hold roughly 22–26% of AI-related roles worldwide, and under 15% of senior AI leadership roles.

So maybe more than anything, this is a reminder that technology shapes culture, and culture shapes technology in return. If we want more equitable systems, in AI and beyond, that loop is worth interrupting.



Becca Caddy

Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality. 
