Apple wants to connect thoughts to iPhone control – and there's a very good reason for it
Brain-controlled devices plus AI can connect people with disabilities to the world

- Apple announced plans to support Switch Control for brain-computer interfaces (BCIs)
- The tool would make devices like iPhones and Vision Pro headsets accessible to people with conditions like ALS
- Combined with Apple’s AI-powered Personal Voice feature, brain-computer interfaces could allow people to think words and hear them spoken in a synthetic version of their voice
Our smartphones and other devices are key to so many personal and professional tasks throughout the day. Using these devices can be difficult or outright impossible for people with ALS and other conditions. Apple thinks it has a possible solution: thinking. Specifically, a brain-computer interface (BCI) built with Australian-founded neurotech startup Synchron that could provide hands-free, thought-driven control of iPhones, iPads, and the Vision Pro headset.
A brain implant for controlling your phone may sound extreme, but it could be the key for people with severe spinal cord injuries or related conditions to engage with the world. Apple will support Switch Control for those with Synchron's implant, which sits near the brain's motor cortex. The implant picks up the brain's electrical signals when a person thinks about moving, then translates that activity into input Apple's Switch Control software can read, turning intent into digital actions like selecting icons on a screen or navigating a virtual environment.
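Apple and Synchron haven't published a developer API for any of this, so the sketch below is purely illustrative: every type in it (BCISample, SwitchAction, IntentDecoder) is hypothetical. It simply makes the pipeline in the previous paragraph concrete in Swift, mapping a decoded motor-intent signal to the two discrete inputs Switch Control scanning ultimately needs: advance the highlight, or select the highlighted item.

```swift
import Foundation

// Hypothetical sketch of the BCI-to-Switch-Control pipeline described above.
// None of these types are real Apple or Synchron APIs; they only illustrate
// the flow: motor-cortex signal -> decoded intent -> discrete switch action.

/// One window of decoded electrical activity from the implant (hypothetical).
struct BCISample {
    let motorIntentStrength: Double  // 0.0 ... 1.0, "how hard the user is thinking about moving"
}

/// The discrete inputs Switch Control works with: in the real feature,
/// a switch press drives scanning and selection of on-screen items.
enum SwitchAction {
    case advanceFocus   // let the scanning highlight move to the next item
    case select         // activate the currently highlighted item
}

/// Turns a stream of decoded samples into switch presses. Motor intent
/// above the threshold is treated as "select"; anything weaker lets the
/// scanner keep advancing on its own timer.
struct IntentDecoder {
    let selectThreshold: Double

    func action(for sample: BCISample) -> SwitchAction {
        sample.motorIntentStrength >= selectThreshold ? .select : .advanceFocus
    }
}

// Simulated stream: the user "thinks about moving" on the third window.
let samples = [0.1, 0.2, 0.9, 0.15].map(BCISample.init(motorIntentStrength:))
let decoder = IntentDecoder(selectThreshold: 0.8)

for (index, sample) in samples.enumerated() {
    print("window \(index): \(decoder.action(for: sample))")
}
// window 0: advanceFocus, window 1: advanceFocus, window 2: select, window 3: advanceFocus
```

The real decoding is far more involved, of course; the point is only that by the time signals reach Switch Control, they have been reduced to simple, discrete switch presses.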
Brain implants, AI voices
Of course, it's still early days for the system. It can be slow compared to tapping, and it will take time for developers to build better BCI tools. But speed isn’t the point right now. The point is that people could use the brain implant and an iPhone to interact with a world they were otherwise locked out of.
The possibilities are even greater when looking at how it might mesh with AI-generated personal voice clones. Apple's Personal Voice feature lets users record a sample of their own speech so that, if they lose their ability to speak, they can generate synthetic speech that still sounds like them. It’s not quite indistinguishable from the real thing, but it’s close, and much more human than the robotic imitation familiar from old movies and TV shows.
Right now, those voices are triggered by touch, eye tracking, or other assistive tech. But with BCI integration, those same people could “think” their voice into existence. They could speak just by intending to speak, and the system would do the rest. Imagine someone with ALS not only navigating their iPhone with their thoughts but also speaking again through the same device by "typing" statements for their synthetic voice clone to say.
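The speech half of that scenario already has a real, public API: since iOS 17, AVFoundation lets an app ask permission to synthesize speech with a user's Personal Voice. The sketch below uses that actual API (requestPersonalVoiceAuthorization, voiceTraits, .isPersonalVoice); the stand-in is the hard-coded string, which in the imagined setup would arrive from the BCI-driven interface rather than a keyboard, and it assumes the user has already trained a Personal Voice in Settings and grants access.

```swift
import AVFoundation

// Sketch of the speaking half of the pipeline, using Apple's real
// Personal Voice API (iOS 17+). Intended to run inside an app; the
// synthesizer is held at top level so it isn't deallocated mid-utterance.

let synthesizer = AVSpeechSynthesizer()

func speakWithPersonalVoice(_ text: String) {
    // The voice is the user's own, so access is gated behind authorization.
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else {
            print("Personal Voice not available: \(status)")
            return
        }
        // Find a voice the user trained in Settings > Accessibility.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }

        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice  // nil falls back to the default system voice
        synthesizer.speak(utterance)
    }
}

// In the scenario above, this string would come from the BCI, not a keyboard.
speakWithPersonalVoice("Good morning. I'd like a coffee, please.")
```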
While it's incredible that a brain implant can let someone control a computer with their mind, AI could take it to another level. It wouldn't just help people use tech; it would help them be themselves in a digital world.
