We need ethical safeguards to stop our brains from getting hacked, say the experts
Because of course you do
Brain-computer interfaces are pretty incredible. The idea that a chip inserted into your brain could let you control a computer has far-reaching potential applications, ranging from typing a text message with your mind to potentially ending paralysis.
If that last example feels like an overstatement, do I have a treat for you. One of the world's leading neuroscientists, Dr Niels Birbaumer, has recently demonstrated that, using brain-machine interfaces, quadriplegic patients (paralyzed in all four limbs) were able to eat using a brain-controlled robotic limb.
The emerging technology has not gone unnoticed by the consumer side of the tech world. Both Facebook and Elon Musk have expressed an intention to develop some kind of brain-computer interface (BCI).
Facebook is talking about using a BCI to let people type with their minds, while Elon Musk’s Neuralink looks set to be a way of integrating human and machine intelligence, improving our memories and mental capabilities.
Obviously, we are at a very early stage with these technologies, and a group of the world’s leading neuroscientists, including Dr Birbaumer, is calling for ethical guidelines to be implemented now. According to Jens Clausen from the Center for Ethics in the Sciences at the University of Tübingen:
"Technological advances in the BMI field are currently developing at such rapid rate that it is high time to define a legal and ethical framework"
Did you say brainjacking?
The scientists, based at the University of Tübingen in Germany, published their paper, entitled ‘Help, Hope and Hype’, in the respected journal Science. In it, they call for security measures they believe will be essential as the technology moves forward.
One of the primary things the paper calls for is a rule guaranteeing the ability to ‘veto’ an action with a signal that doesn't come from the brain, for example an eye movement. If the interface misreads signals from your brain, a misinterpreted command could otherwise send a robotic limb out of control; a rough sketch of how such a veto might work is below.
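To make the idea concrete, here's a minimal, purely illustrative Python sketch of how a veto could sit between a brain-signal decoder and a robotic limb. Every name in it (DecodedCommand, eye_movement_veto_active, dispatch) is hypothetical and isn't drawn from the paper or any real BCI system; the only point it demonstrates is that a non-brain signal from the user always overrides whatever the decoder thinks it read.

```python
from dataclasses import dataclass

@dataclass
class DecodedCommand:
    action: str        # e.g. "raise_arm"
    confidence: float  # decoder's confidence in its own reading, 0.0-1.0

def eye_movement_veto_active() -> bool:
    """Stand-in for an eye tracker or physical switch the user controls directly."""
    return False  # in practice this would read a real, non-brain sensor

def execute(command: DecodedCommand) -> None:
    print(f"Executing: {command.action}")

def dispatch(command: DecodedCommand, min_confidence: float = 0.9) -> None:
    # Two safeguards: the user's explicit veto always wins, and
    # low-confidence decodings are dropped rather than acted on.
    if eye_movement_veto_active():
        print("Vetoed by user - command ignored")
        return
    if command.confidence < min_confidence:
        print("Decoder unsure - command ignored")
        return
    execute(command)

if __name__ == "__main__":
    dispatch(DecodedCommand(action="raise_arm", confidence=0.95))
```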
The more alarming part of the paper deals with the need for encryption to protect against hacking. Having your robotic limbs hacked is a terrifying enough prospect, but the paper goes one step further and talks about “brainjacking”, where the user’s brain itself could be taken over.
The paper also makes one point that definitely applies to us: the need to temper how excited we get about this technology's potential. “It would also be an ethical question to manage expectations and hopes of possible end-users and their relatives, because spectacular demonstrations may lead to exaggerated expectations.”
It's hard not to get too excited about an emerging technology with such diverse and impactful potential applications, but the team clearly thinks caution is advisable across the board. And to be fair, it's difficult to disagree.
- Want to read about chip designer ARM's developments in the field? Check out: Brain implant could end paralysis for some sufferers
Andrew London is a writer at Velocity Partners. Prior to Velocity Partners, he was a staff writer at Future plc.