Brain Machine Interfaces
This was originally published in June’17 issue of MCUG newsletter
Remember The Matrix? Neo had these plugs connected to him which helped him enter the Matrix, where he could move, feel, fly or even stop bullets in mid-air.
First, what is a Brain Machine Interface?
Think of the brain as a remote control for your body. When you think of moving your arm, the brain sends a signal through the nerves passing through your spinal cord. The spine and the intermediate nerves can be thought of as a channel. Now imagine that you tap the connections in between to send and receive artificial signals. You could project a movie directly onto your visual cortex without having to wear a VR headset. Or, better still, send all motor functions to a computer and all sensory functions from the computer to the brain, and you would be able to physically enter a game (in a way, since everything the brain perceives of the physical world comes from the senses). This sounds sci-fi-ish, but early uses of this technology have already shown results. The technology is known as a Brain Machine Interface (BMI) or Brain Computer Interface (BCI).
A BMI is a direct connection pathway between the brain and an external device. It performs two functions: recording neurons and stimulating neurons. It is not always necessary to have bidirectional communication between the machine and the brain; cochlear implants, for example, only need to stimulate neurons. BMI tools can be compared on three broad criteria:
- Scale: number of neurons that can be simultaneously recorded.
- Resolution: how detailed the information the tool receives is. There are two types: spatial (how closely the recordings come to telling you how individual neurons are firing) and temporal (how well you can determine when the recorded activity happened).
- Invasiveness: whether surgery is needed and, if so, how extensive it is.
To understand how to interface the brain with any device, we have to know how the brain works, so here is an oversimplified explanation. The brain is a densely packed, complex structure made up of tens of billions of neurons. Each neuron has a negative “resting potential”, which means that when it is at rest, its electrical charge is slightly negative. Its dendrites are connected to many other neurons, whose axon terminals release neurotransmitters that raise or lower the neuron’s charge. If the charge rises over a certain threshold, the neuron fires. A network of such neurons forms a neural network, and our brain contains many such networks, each responsible for various tasks. Let’s call them modules for now.
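The threshold behaviour described above can be sketched in a few lines of code. This is a toy model, not a physiological one: the voltage numbers are illustrative, though -70 mV and -55 mV are typical textbook values for the resting potential and firing threshold.

```python
# Toy model of a threshold neuron: it sits at a negative resting potential,
# inputs from other neurons raise (+) or lower (-) its charge, and it fires
# only when the summed charge crosses the threshold.

RESTING_POTENTIAL = -70.0   # millivolts, typical resting value
THRESHOLD = -55.0           # typical firing threshold

def neuron_fires(inputs):
    """Sum excitatory (+) and inhibitory (-) inputs onto the resting
    potential and report whether the neuron triggers."""
    potential = RESTING_POTENTIAL + sum(inputs)
    return potential >= THRESHOLD

# A few excitatory inputs alone are not enough...
print(neuron_fires([5.0, 4.0]))             # -61 mV -> False
# ...but enough combined input pushes the neuron over threshold.
print(neuron_fires([5.0, 4.0, 8.0]))        # -53 mV -> True
# Inhibitory input can cancel out excitation.
print(neuron_fires([5.0, 4.0, 8.0, -6.0]))  # -59 mV -> False
```

Wiring many such units together, with one neuron's output feeding the inputs of others, gives the networks ("modules") described above.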
This is how Ray Kurzweil explains it in his 2014 TED Talk: “Consider a simple example. We’ve got a bunch of modules that can recognize the crossbar to a capital A, and that’s all they care about. They don’t care if a song is playing or you are seeing a movie, but when they see a crossbar to a capital A, they get very excited and they say ‘crossbar.’ That goes to the next level. Each is more abstract than the next one, so the next one might say ‘capital A.’ That goes up to a higher level that might say ‘Apple.’ Go up another five levels, and you’re now at a pretty high level of this hierarchy, and stretch down into the different senses, and you may have a module that sees a certain fabric, hears a certain voice quality, smells a certain perfume, and will say, ‘My friend has entered the room.’ Go up another 10 levels, and now you’re at a very high level. You’ll have modules that say, ‘That was ironic. That’s funny. She is pretty.’ You might think that those are more sophisticated, but actually what’s more complicated is the hierarchy beneath them.”
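Kurzweil's hierarchy can be mimicked with ordinary functions, where each level only "fires" when the lower-level patterns it cares about have fired. The module names and inputs below are invented purely for illustration:

```python
# A minimal sketch of a hierarchy of pattern modules, following Kurzweil's
# crossbar -> capital A -> "Apple" example. Real cortical modules are learned
# networks of neurons, not hand-written predicates.

def crossbar_module(strokes):
    # Lowest level: fires on the horizontal crossbar stroke, nothing else.
    return "crossbar" in strokes

def capital_a_module(strokes):
    # Next level up: needs the crossbar plus two slanted strokes.
    return (crossbar_module(strokes)
            and "left_slant" in strokes
            and "right_slant" in strokes)

def apple_module(letters):
    # Higher still: fires when the letter-level modules spell the word.
    return letters == ["A", "p", "p", "l", "e"]

strokes = {"left_slant", "right_slant", "crossbar"}
print(capital_a_module(strokes))                # True: the 'A' module fires
print(apple_module(["A", "p", "p", "l", "e"]))  # True: the word module fires
```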
So now you have an idea of how complicated the functioning of the brain is. The challenge lies in interfacing such a complex biological machine with a silicon-based one. Researchers have found a few means to interface the two, and some of them are already in use.
How do we read and stimulate neurons?
Consider the task of moving an artificial arm. You need to read the motor signals from the brain and send them to a computer, which tells the arm to move. This alone does not solve the entire problem: the patient does not feel the artificial limb as his/her own arm, the way you are aware of your arm's position when you lift it. This sense of the position of one's limbs is called 'proprioception', and restoring it requires feedback from the limb back to the brain.
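The closed loop described above can be sketched as three stages: decode motor intent from recorded activity, drive the arm, and feed the arm's sensors back as stimulation. Every function, threshold, and number below is hypothetical; real decoders are trained statistical models, not a simple average-and-threshold:

```python
# Hedged sketch of a closed-loop prosthetic arm: record -> decode -> actuate
# -> sense -> stimulate. All values are invented for illustration.

def decode_motor_intent(firing_rates):
    """Map recorded firing rates (spikes/sec per electrode) to an arm
    command. A real decoder would be, e.g., a trained Kalman filter."""
    return "flex" if sum(firing_rates) / len(firing_rates) > 50 else "rest"

def move_arm(command):
    """Stand-in for the prosthetic controller; returns a joint angle."""
    return 30.0 if command == "flex" else 0.0

def encode_feedback(joint_angle):
    """Turn the arm's position sensor into a stimulation intensity for the
    sensory cortex -- this return path is what restores proprioception."""
    return joint_angle / 30.0   # normalised stimulation level

rates = [80, 60, 75, 40]        # hypothetical electrode readings
command = decode_motor_intent(rates)
angle = move_arm(command)
stimulation = encode_feedback(angle)
print(command, angle, stimulation)   # flex 30.0 1.0
```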
A commonly used but invasive technique (it requires opening the skull) is to implant electrodes in the brain, which can stimulate neurons or record their activity. Non-invasive techniques, however, can only record.
A few examples of current-generation BMI tools are fMRI (functional magnetic resonance imaging), EEG (electroencephalography), ECoG (electrocorticography), LFP (local field potential) recording, and single-unit recording. A technology under research is 'neural lace': an ultra-thin mesh that can be implanted through the skull, forming a collection of electrodes capable of monitoring brain function. To insert neural lace, a tiny needle containing the rolled-up mesh is placed inside the skull and the mesh is injected; as the mesh leaves the needle it unravels, spanning the brain. Another idea is ferrous nano-dust suspended in a fluid, which reaches the brain through the bloodstream and, under a changing magnetic field, heats up due to hysteresis, triggering nearby neurons.
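To make the recording side concrete, a tool like EEG gives you a voltage time series, from which a BCI extracts frequency-band features. Below is an illustrative sketch (not any specific system's pipeline): we synthesise a signal with a strong 10 Hz rhythm, which falls in the alpha band, plus a weaker 25 Hz component, and measure the power at each frequency with a naive single-bin DFT. Real pipelines use FFTs and proper filtering.

```python
import math

# Fake "EEG" analysis: build a signal with a 10 Hz rhythm and a weaker
# 25 Hz component, then compare the power at the two frequencies.

SAMPLE_RATE = 128  # samples per second; consumer EEG is of this order

def synth_eeg():
    """Synthetic signal: strong 10 Hz sine plus a weaker 25 Hz sine."""
    return [math.sin(2 * math.pi * 10 * t / SAMPLE_RATE)
            + 0.3 * math.sin(2 * math.pi * 25 * t / SAMPLE_RATE)
            for t in range(SAMPLE_RATE)]   # one second of data

def power_at(signal, freq_hz):
    """Naive single-frequency DFT: power of the signal at freq_hz."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq_hz * i / SAMPLE_RATE)
             for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
             for i, x in enumerate(signal))
    return (re * re + im * im) / n

sig = synth_eeg()
print(power_at(sig, 10) > power_at(sig, 25))  # True: the 10 Hz rhythm dominates
```

Decoding a user's intent from such band-power features is the core of many non-invasive BCIs.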
BMI technology is still in its infancy, but the future holds endless possibilities. We may be able to restore the full motor functions of paraplegics or quadriplegics. Imagine having your brain connected to the internet: you would have access to all the world's information just by thinking. To take it a little further, imagine being connected to an AI which helps you think when you need it; you could offload heavy thinking tasks to the AI without even realising it. Even communication between individuals would improve, as we would not be limited to words and could send emotions directly. You could store, share and re-experience your experiences from the cloud. Live streaming could happen from a person's point of view, and you could hear and smell whatever they are hearing or smelling. Imagine having the ability to tweak yourself internally, like removing addiction, depression and anxiety, or remapping something uncomfortable like pain to something neutral like an auditory bell. Perhaps schools and colleges will not exist in the future, and people will be experts in every field, able to make themselves smarter just by downloading the required skills. You could then say, like Neo, "I know kung-fu". And who knows? You might have subscription services like "On Demand Intelligence".