Samsung's Neon avatars are designed to be AI companions you'll mistake for humans

Samsung Neon
(Image credit: Future)

One of the much-hyped announcements at CES 2020 is Samsung Neon, a project with a simple goal: create completely original (i.e. not based on actors) digital avatars that converse and learn until they’re indistinguishable from humans. 

Such conversational, friendly AI would find plenty of applications in the hospitality and entertainment industries, and could be useful in any customer service role, from bank tellers to baristas. But it’s the simple yet lofty aspiration of Neon – the first venture from Samsung semi-startup STAR Labs – that could set the project apart from prior AI bots: its founder and President/CEO Pranav Mistry wants these creations to be humanity’s friends.

Which explains why Mistry refers to these AI – called Neons, as a proper noun – as new beings waiting to be refined into existence. He also calls them 'Artificial Humans'.

“Neon is like a new kind of life,” Mistry stated in a press release. “There are millions of species on our planet and we hope to add one more. Neons will be our friends, collaborators, and companions, continually learning, evolving, and forming memories from their interactions.”

This unbridled futurism invites skepticism, and the press response since Neon was unveiled hasn’t been too kind. It’s pretty clear that Neon’s tech is in early stages: while STAR Labs’ booth on the CES show floor is lined with a variety of Neon avatars, they’re running through preset routines, and aren’t ready to have freeform conversations. 

During a demo presentation, Mistry asked one of the Neons – which resembled a punk-ish woman with a shaved head – some questions that got adequate responses, but her facial expressions and mouth movements were clunky and unnatural. And yet, right next to it, another instance of the same Neon was running through its preset routine with much more fluid motions. The comparison showed how far Neon has to go before people interact with its AI individuals as the project intends.  

Because what’s really important – what Neon needs to pull off – is illusion. To converse with Neons as we would with other humans, we need to believe that we’re chatting with something that can respond with enough context and natural cadence. 


How to make AI converse like humans

To Neon’s credit, the team both acknowledges that the project is in its early days (Neon started only four months ago), and has a strategy for making its creations into conversation partners. 

Phase one of Neon’s timeline begins with development of a central technology called the Core R3 engine – shorthand for Reality, Real-time and Responsive, the three guiding directives the project is following. 

The Reality directive means talking to a Neon should feel like talking to a human – including limiting the Neons to human knowledge. They’ll be programmed not to be encyclopedias – no instant internet scanning in the background. So if you ask a Neon a factual question, they’ll reply by asking if you want them to Google it. 

Compared to every smart assistant out there, this sounds counterintuitive, the equivalent of Neons having an arm tied behind their back. But this is what Mistry and his team want interactions with Neons to be like. Think about it this way: you give orders to Siri and Alexa; with Neon, Mistry wants you and an avatar to have conversations that develop memories and, essentially, a sort of proto-relationship.

Which doesn’t mean Neons will be dumb – they’ll just be loaded up with knowledge tailored to each Neon’s job-like role (at the CES booth, each Neon had a name and title like ‘Student’ or ‘Flight Attendant’). And to keep conversations going in Real-time (the second directive), that knowledge will be primarily local, avoiding the delay incurred as questions are beamed to the cloud and back. That should enable them to respond to any question within 20 milliseconds – a delay humans can’t really perceive. 
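STAR Labs hasn’t published how the Core R3 engine actually works, but the trade-off behind the Real-time directive is easy to illustrate with a toy sketch: an in-memory lookup returns in a fraction of a millisecond, while even a modest network round-trip eats a 20-millisecond budget many times over. Everything below – the knowledge entries, function names, and the simulated delay – is illustrative, not Neon’s code.

```python
import time

# Toy illustration of the Real-time directive: answers drawn from a
# local, role-specific knowledge store (here, a 'Flight Attendant'
# Neon) versus the same lookup behind a simulated cloud round-trip.
LOCAL_KNOWLEDGE = {
    "boarding time": "Boarding starts 40 minutes before departure.",
    "baggage allowance": "You can check two bags of up to 23 kg each.",
}

def respond_locally(question: str) -> tuple[str, float]:
    """Answer from in-memory knowledge; return (answer, elapsed seconds)."""
    start = time.perf_counter()
    # Unknown facts get deflected, per the Reality directive.
    answer = LOCAL_KNOWLEDGE.get(question, "Want me to Google that?")
    return answer, time.perf_counter() - start

def respond_via_cloud(question: str, round_trip_s: float = 0.15) -> tuple[str, float]:
    """Same lookup, but behind a simulated network round-trip delay."""
    start = time.perf_counter()
    time.sleep(round_trip_s)  # stand-in for the network hop
    answer = LOCAL_KNOWLEDGE.get(question, "Want me to Google that?")
    return answer, time.perf_counter() - start

if __name__ == "__main__":
    _, local_t = respond_locally("boarding time")
    _, cloud_t = respond_via_cloud("boarding time")
    print(f"local: {local_t * 1000:.3f} ms, cloud: {cloud_t * 1000:.1f} ms")
```

Run on typical hardware, the local path finishes in microseconds while the simulated cloud path takes 150-plus milliseconds – the gap the 20 ms target is trying to avoid.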

The other half of achieving believability lies in the responses themselves – varied, unpredictable, natural. At least in how they seem to us. The third directive, Responsive, means Neons will gauge your emotions and vocal tone to respond intuitively. They have their own emotional range, too, which Mistry and his team displayed on a graph during their presentation – and in a further peek behind the curtain, they revealed a mental map of nodes representing the seven million responses they contain. If you ask a Neon the same question twice, they should respond differently.
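Again, Neon hasn’t detailed how its seven million responses are selected, but the behavior described – ask the same question twice, get a different answer – can be sketched with a simple responder that remembers what it last said and samples from the remaining options. The class, response pool, and questions below are all hypothetical.

```python
import random

# Toy sketch of the Responsive directive: vary the reply to a repeated
# question by excluding whatever was said last time.
RESPONSES = {
    "how are you?": [
        "Doing great, thanks for asking!",
        "Pretty good – how about you?",
        "Can't complain!",
    ],
}

class VariedResponder:
    def __init__(self) -> None:
        self.last_said: dict[str, str] = {}

    def reply(self, question: str) -> str:
        options = RESPONSES.get(question, ["Want me to Google that?"])
        previous = self.last_said.get(question)
        # Don't repeat the previous answer unless it's the only option.
        candidates = [r for r in options if r != previous] or options
        choice = random.choice(candidates)
        self.last_said[question] = choice
        return choice
```

A real system would weight choices by the emotional state and vocal tone it reads from the user, but the no-immediate-repeat rule is the minimal version of “ask twice, hear something different”.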

That’s a lot of ambition, and far from what we saw Neons being capable of at STAR Labs’ CES booth. Toward the end of 2020, the group will augment their creations with SPECTRA, a tech cluster including intelligence, learning, emotions, and memory – all the things that should allow Neons to grow and become rounded digital individuals. 

Will that mean Neons will develop idiosyncrasies? Personalities? Biases? It’s too early to tell, and in truth, it doesn’t seem like the team knows. At this point, they’re following aspirations more than expectations, and CES was an introduction rather than a refined product reveal – STAR Labs doesn’t even plan to release a beta until the end of 2020. 


So what will Neons be used for?

Unveiling Neon at CES 2020 was an exploratory move – the team wanted the public to respond and tell them what they think Neons could be used for, STAR Labs senior technical content writer and neuroscientist Angie Chiang told TechRadar.

“In terms of applications, we need help. We’re not targeting one specific field, so we need people who are experts in their domains to help us make Neons useful to them,” Chiang said.

While installing Neons as customer service reps seems an obvious choice, essentially replacing chatbots with friendlier, more personable Artificial Humans, there’s far more potential in deploying Neons in roles that could use more humanity when actual humans aren’t available. Chiang recalled her experiences volunteering in convalescence homes – and installing a Neon to keep the sick company as they recover could be an interesting development in palliative care. 

Neons could offer human-like interaction to the ill, the elderly, the lonely. They could sub in for human employees at inconvenient times – like overnight station anchors when news breaks, Chiang suggests. They could be therapists or simple confidants (right out of the gate, STAR Labs is assuring that privacy is built into the core of Neons – only you and your Neon have access to your interactions, and they won’t share your data without permission). 

They could be actors – but instead of simply churning out Neons and sending them off, STAR Labs will treat them as individuals that are licensed out. So perhaps a museum would secure a Neon for an exhibit, and opt to have it loaded up with relevant knowledge (one could even work as a museum guide).

There are a couple of concerns, though – first, that Neons could take jobs. That's possible, of course, as companies look to cut costs by replacing customer support with online chatbots and automated help lines. There's also the question of how Neons will affect work valuation – will Neons be cheaper than hiring a human worker in particular fields?

But Chiang imagines Neons will augment the human experience, not replace it; 15 years ago, nobody used smartphones, and now they’re constant portals helping to connect us and improve our efficiency.

“People have always worried that technology will replace [the workforce in some] fields,” Chiang said. “But technology has always been advancing: I’m a scientist, so my thought is always that you build technology as advancement for humans. If you have a tool to advance yourself, why would you not use it?”

The second concern is whether Neons could be used to impersonate people, à la deepfakes. This is a completely different technology, STAR Labs asserts in an FAQ, and Neons can’t be used to manipulate existing media. Nor can people copy themselves or others into Neons; while some of the current creations’ looks are based on real people, eventually, Neons will be completely original.

“We’re not making replicates of people. That’s not us,” Chiang said.

Which leads to the next question: Will Neons gain sentience, rise up, and replace humanity?

“I’m a neuroscientist: we haven’t even figured out the brain!” Chiang said. “How do you even model something that you feel will be even more intelligent than the human brain when we don’t even fully know how the human brain works?”


Neons: making machines more human to stop humans from sliding into machines

Introducing Neon to a crowd of huddled media and tech industry professionals at STAR Labs’ CES booth, Mistry explained how his young daughter interacts with the smart assistant Alexa: she shouts at it.

Why? Because smart assistants, like a lot of AI today, are perfunctory and submissive. They serve, they help, and they don’t impose. And the language we use to interact with them is pure commands. Why would we bother with the same consideration we use when speaking with humans? That has an effect on us, argues Mistry.

“We are becoming more like machines rather than machines becoming more like humans,” Mistry said. “Just enabling speech on machines or UI that is human is not going to make them human. With Neon we want to break that barrier. We want to make our conversations with machines more human also.”

These are STAR Labs’ ambitions, and we’ll wait to see if the execution matches its lofty goals. Neon doesn’t yet have a business model or a planned rollout, and aside from the aforementioned beta, 2020 seems to be a year of research and development to see what people and industries want from Neons – and eventually, what Neons can be capable of.

Perhaps STAR Labs will share more by the time it launches its own event, Neon World 2020, although it hasn’t publicly announced exactly what that will be, or when it will happen. It, like the rest of Neon, is largely a mystery box, which has earned it plenty of scorn and skepticism from online press. But for her part, Chiang isn’t fazed.

“It’s funny, I read an article that, many years ago, people made fun of electric cars until Tesla came out and proved that it was useful,” Chiang said. “Sometimes you just need to be the ones that introduce it.”
David Lumb