Mark Zuckerberg says we’re ‘close’ to controlling our AR glasses with brain signals
Meta's EMG wristband could arrive "in the next few years"
Move over eye tracking and handheld controllers for VR headsets and AR glasses: according to Mark Zuckerberg, Meta's CEO, the company is “close” to selling a device that can be controlled by your brain signals.
Speaking on the Morning Brew Daily podcast (shown below), Zuckerberg was asked to give examples of AI’s most impressive use cases. Ever keen to hype up the products Meta makes – he also recently took to Instagram to explain why the Meta Quest 3 is better than the Apple Vision Pro – he began discussing the Ray-Ban Meta Smart Glasses, which use AI and their camera to answer questions about what you’re looking at (though, annoyingly, this feature is still only available to a lucky few users in beta form).
He then went on to discuss “one of the wilder things we’re working on”: a neural interface in the form of a wristband. Zuckerberg also took a moment to poke fun at Elon Musk’s Neuralink, saying he wouldn’t want to put a chip in his brain until the tech is mature – unlike the first human subject to receive one of its implants.
Meta’s EMG wristband can read the nervous system signals your brain sends to your hands and arms. According to Zuckerberg, this tech would let you simply think about how you want to move your hand, and that movement would happen in the virtual world without requiring big real-world motions.
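To make that idea a little more concrete, here’s a minimal, purely illustrative sketch of the general approach such a system might take – sampling electrical activity from electrodes on the wrist and mapping each short window of signals to an interface action. Meta hasn’t shared how its decoder actually works, so the channel count, thresholds, and gesture names below are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical illustration: turn a short window of wrist EMG samples into an
# interface action. The electrode count, thresholds, and gesture labels are
# invented for this sketch and are not Meta's actual pipeline.

CHANNELS = 8   # assumed number of electrodes around the wrist
WINDOW = 200   # assumed samples per decision window

def classify_gesture(window: np.ndarray) -> str:
    """Rough stand-in for a trained decoder: use per-channel signal
    energy (RMS) to pick a gesture label."""
    energy = np.sqrt(np.mean(window ** 2, axis=1))  # RMS per channel
    if energy.max() < 0.1:
        return "rest"  # barely any muscle activation detected
    # Pretend the lower-index electrodes correspond to finger flexion
    return "pinch" if energy.argmax() < CHANNELS // 2 else "swipe"

# Simulated EMG window (real hardware would stream this from the wristband)
rng = np.random.default_rng(0)
fake_window = 0.05 * rng.standard_normal((CHANNELS, WINDOW))
fake_window[2] += 0.5 * np.sin(np.linspace(0, 20, WINDOW))  # fake activation

print(classify_gesture(fake_window))  # prints "pinch" for this fake data
```

In a real product the hand-rolled energy check would be replaced by a machine-learning model trained on labeled muscle-signal recordings, but the overall loop – sample, decode, act – is the same idea Zuckerberg is describing.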
Zuckerberg has shown off Meta’s prototype EMG wristband before in a video (shown below) – though not the headset it works with – but what’s interesting about his podcast statement is that he goes on to say he feels Meta is close to having a “product in the next few years” that people can buy and use.
Understandably, he gives only a vague release window and, unfortunately, there’s no mention of how much something like this would cost – though we’re bracing for it to cost as much as one of the best smartwatches – but this system could be a major leap forward for privacy, utility, and accessibility in Meta’s AR and VR tech.
The next next-gen XR advancement?
Currently, if you want to communicate with the Ray-Ban Meta Smart Glasses via their Look and Ask feature, or respond to a text message without getting your phone out, you have to talk to them. This is fine most of the time, but there might be questions you want to ask or replies you want to send that you’d rather keep private.
The EMG wristband would allow you to type out these messages using subtle hand gestures, so you could maintain a higher level of privacy – though, as the podcast hosts note, this has issues of its own, not least that schools would have a harder time stopping students from cheating in tests. Gone are the days of sneaking in notes; it’s all about secretly bringing AI into your exam.
Then there are utility advantages. While this kind of wristband would also be useful in VR, Zuckerberg has mostly talked about it being used with AR smart glasses. The big success of the Ray-Ban Meta Smart Glasses, at least, is that they’re sleek and lightweight – if you glance at them, they’re not noticeably different from a regular pair of Ray-Bans.
Adding cameras, sensors, and a chipset for tracking hand gestures could compromise that slim design – unless, that is, you offload some of that functionality and processing power into a separate device like the wristband.
Some changes would still need to be made to the specs themselves – chiefly, they’d need built-in displays, perhaps like the Xreal Air 2 Pro’s screens – but we’ll just have to wait and see what the next Meta smart glasses have in store for us.
Lastly, there’s accessibility. By their very nature, AR and VR are physical – you have to move your arms around, make hand gestures, and push buttons – which can make them inaccessible for people with disabilities that affect mobility and dexterity.
These kinds of brain signal sensors start to address this issue. Rather than having to physically perform an action, someone could simply think about doing it, and the virtual interface would interpret those signals accordingly.
Based on the demos shown so far, some movement is still required to use Meta’s neural interface, so it’s far from a perfect solution, but it’s a first step toward making this tech more accessible, and we're excited to see where it goes next.
Hamish is a Senior Staff Writer for TechRadar, and you’ll see his name on articles across nearly every topic on the site, from smart home deals to speaker reviews to graphics card news and everything in between. He uses his broad range of knowledge to explain the latest gadgets and whether they’re a must-buy or a fad fueled by hype, though his specialty is writing about everything going on in the world of virtual reality and augmented reality.