The problem with Alexa: what’s the solution to sexist voice assistants?
Are our voice assistants perpetuating gender stereotypes?
If you have a smart speaker in your home, you probably interact with an AI-enabled voice assistant fairly regularly – and chances are you’re speaking to what sounds like a woman.
Your voice assistant may even have been given a woman’s or feminine-sounding name, like Alexa, Cortana, or Siri, depending on which brand of smart speaker you bought. Sure, some of these voice assistants, including Google Assistant and Siri, can be configured with a male-sounding speaking voice, but most smart speaker users are interacting with virtual women.
At face value that may not sound like a problem – but equating women with voice assistants could have some worrying social implications.
In May 2019, a groundbreaking report by UNESCO suggested that the default use of female-sounding voice assistants in our smart home gadgets and smartphones perpetuates sexist attitudes towards women.
The report, titled I'd Blush if I Could, takes its name from Siri's former default response to being called a ‘bitch’ by users – and criticizes the fact that Apple's Siri, Amazon's Alexa, Google Assistant, and Microsoft's Cortana are "exclusively female or female by default, both in name and in sound of voice".
Sympathetic and agreeable
So, why do voice assistants sound like women? Julia Kanouse, CEO of the Illinois Technology Association, explains that the companies behind these voice assistants based their choices on consumer feedback.
She explains: “Research shows that women’s voices tend to be better received by consumers, and that from an early age we prefer listening to female voices”.
Indeed, in an interview with Business Insider, the head of Amazon’s Smart Home division, Daniel Rausch, explained that his team “carried out research and found that a woman’s voice is more sympathetic”.
So far, so plausible – and as Kanouse concedes, the use of female-sounding voice assistants is clearly grounded in research.
However, the choices made by voice assistant creators could have far-reaching consequences for women at home, and in the workplace.
“The use of female voice assistants may reinforce the stereotype that we prefer to tell a woman what to do, rather than a man,” says Kanouse.
“Only recently have we started to see men move into what were traditionally viewed as female roles, and, conversely, see women fight to ensure these roles (such as flight attendants, nurses, paralegals, executive administrators) are seen as more than ‘just an assistant’.”
This progress could potentially be undone by the proliferation of female voice assistants, according to UNESCO. Its report claims that the default use of female-sounding voice assistants sends a signal to users that women are "obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like ‘hey’ or ‘OK’".
It’s also worrying that these voice assistants have "no power of agency beyond what the commander asks of it" and respond to queries "regardless of [the user's] tone or hostility". These may be desirable traits in an AI voice assistant, but what if the way we talk to Alexa and Siri ends up influencing the way we talk to women in our everyday lives?
One of UNESCO’s main criticisms of companies like Amazon, Google, Apple and Microsoft is that the docile nature of our voice assistants has the unintended effect of reinforcing "commonly held gender biases that women are subservient and tolerant of poor treatment".
This subservience is particularly worrying when these female-sounding voice assistants give "deflecting, lackluster or apologetic responses to verbal sexual harassment".
While Kanouse doesn’t think this has led to overt cases of sexual discrimination, she does believe it creates “a level of unconscious bias”, adding that “the prevalence of female voice assistants may feed into subconscious biases against women in the workplace and home, making it more difficult for women to overcome these obstacles”.
Should voice assistants be gender-neutral?
One solution could be to make voice assistants sound gender-neutral – and it's something that’s entirely possible, as demonstrated by the makers of Q, the world’s first gender-neutral voice assistant.
Speaking to NPR, Julia Carpenter, an expert on human behavior and emerging technologies who worked on the project, explained that one of the team’s goals was to “contribute to a global conversation about gender, technology, and ethics, and how to be inclusive for people that identify in all sorts of different ways”.
To create the voice of Q, the team recorded “dozens of people”, including those who identify as male, female, transgender, and nonbinary, although in the end they chose just one voice, and pitch-altered it until it sounded neither male nor female.
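To get a feel for what that pitch alteration involves, here’s a minimal sketch in Python using the open-source librosa library. To be clear, this is not the Q team’s actual pipeline: the file names are placeholders, and the roughly 145–175Hz target band (160Hz is used as a midpoint here) is simply the range that listeners tend to perceive as neither clearly male nor clearly female.

```python
# Minimal sketch: nudge a recorded voice toward a gender-neutral pitch range.
# Not the Q team's actual pipeline; file names and the ~160Hz target are
# illustrative assumptions.
import librosa
import numpy as np
import soundfile as sf

# Load a hypothetical source recording at its native sample rate.
y, sr = librosa.load("speaker.wav", sr=None)

# Estimate the speaker's fundamental frequency (F0) frame by frame with pYIN,
# then take the median over voiced frames (unvoiced frames come back as NaN).
f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
median_f0 = np.nanmedian(f0)

# Compute the semitone offset that moves the median F0 to ~160Hz, the middle
# of the band often heard as neither clearly male nor clearly female.
target_f0 = 160.0
n_steps = 12 * np.log2(target_f0 / median_f0)

# Apply the shift and write the result out.
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
sf.write("speaker_neutral.wav", y_shifted, sr)
```

Pitch alone doesn’t determine how a voice is gendered – formants and timbre matter too – which is presumably why Q’s creators tested their candidate voices on listeners rather than relying on a single pitch shift.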
The result, while perhaps a little more synthetic-sounding than Alexa or Siri, is a truly inclusive voice assistant for everyone – and the goal is to convince tech giants to adopt Q as a third option for their assistants.
Sadly, this isn’t likely – after all, brands like Apple, Google and Amazon are notoriously rigid when it comes to the design of their products, and we can’t see them agreeing to use the same voice as their rivals.
Diversity is key
So, instead of making voice assistants sound homogenous, could the answer lie in making them super-diverse?
This diversity doesn’t have to be focused on gender, either; why can’t our voice assistants have regional accents? Why couldn’t they sound young or old, use slang, or speak pidgin English?
The news that the BBC is working on a voice assistant called Beeb, which will understand all the diverse regional accents of the UK, has stoked hopes that it will also speak with some of these accents.
Dr Matthew Aylett, Chief Scientific Officer at speech technology company Cereproc, thinks this could set Beeb apart from the other voice assistants on the market.
“No other organization could boast of the resonance and importance of voice compared to the BBC,” he says, explaining that choosing a synthetic voice to represent the organization is “a big challenge”.
Discussing brands like Apple, Google, and Amazon, he explains that “in many cases decision-makers are choosing a default, neutral, well-spoken female voice without even considering that this is a major design decision”.
And the BBC could be in the perfect position to challenge this. Because the broadcaster actively encourages participation from its vast audience, Aylett believes the use of a diverse voice for Beeb “could lead to some ground-breaking new perspectives on voice interaction”.
Aylett thinks the BBC could even call on this audience to select well-loved BBC presenters and create an amalgamated voice from the results – imagine how soothing a David Attenborough / Joanna Lumley hybrid could be.
However, Aylett doesn't think that global voice assistant developers will support third-party diversity from the likes of the BBC, or be “courageous enough to offer much diversity themselves”.
Why? Well, the teams behind our most popular voice assistants just aren’t that diverse themselves.
Women to the front
According to UNESCO, Alexa’s sexism problem is largely down to the lack of women in the room when tech companies are designing their voice assistants.
This is an issue that affects the entire industry: across G20 countries, just 7% of ICT (Information and Communication Technology) patents are generated by women. UNESCO says the overuse of female-sounding voice assistants is “a powerful illustration of gender biases coded into technology products, pervasive in the technology sector and apparent in digital skills education”.
The solution? We need more women in the STEM (Science, Technology, Engineering, and Maths) industries, and that, says UNESCO, requires “recruiting, retaining, and promoting women in the technology sector” – after all, how can our voice assistants effectively represent their users if a huge percentage of those users have no say in their development?
Whatever the answer is, it’s clear that we need more choice when it comes to the voices in our smart speakers. As Kanouse says, “whether it’s a male voice, or gender-neutral, or a mimicked recording of someone famous like Morgan Freeman for example, there are creative solutions that these companies could implement that would ensure we aren’t reinforcing gender stereotypes”.
She adds: “Making that switch could be a very powerful statement from these influential companies”.
“And wouldn’t it be fun to tell Morgan Freeman what to do every day?”