Blurring the lines between man and machine
Professor Kevin Warwick is pushing the boundaries of artificial intelligence and cyborg technologies
How can artificial intelligence (AI) improve healthcare?
AI can be used to learn what is going on in different parts of the body and to predict problems. This gives us the power to prevent problems before they arise or to counteract malfunctions which are detected by sensors.
Could you give us an example that will be part of the near future?
One immediate application is in the use of deep brain stimulation or DBS. This technology is already used in people with Parkinson’s disease, epilepsy or depression to stimulate the nervous system with electrical pulses in order to alleviate symptoms. AI allows us to take it a step further by predicting when stimulation is needed. This means we could apply DBS before the patient experiences symptoms.
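The idea of stimulating before symptoms appear amounts to a simple closed loop: monitor a neural signal, score the risk of an episode, and trigger stimulation when the risk crosses a threshold. A minimal sketch of that loop in Python, where the risk score, window size and threshold are all illustrative assumptions rather than any clinical algorithm:

```python
# Minimal sketch of a predictive deep brain stimulation (DBS) loop.
# The feature and the threshold below are illustrative assumptions,
# not a clinical algorithm.

def tremor_risk(window):
    """Crude risk score: mean absolute amplitude of the recent signal window."""
    return sum(abs(x) for x in window) / len(window)

def should_stimulate(signal, window_size=5, threshold=0.8):
    """Trigger stimulation when the rolling risk score crosses the threshold,
    i.e. before overt symptoms rather than in response to them."""
    if len(signal) < window_size:
        return False
    return tremor_risk(signal[-window_size:]) > threshold

# Example: amplitude building up toward a (hypothetical) tremor episode.
readings = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1, 1.2, 1.3]
print(should_stimulate(readings))  # prints True: recent risk exceeds 0.8
```

In a real device the risk score would be a learned predictor rather than a raw amplitude average, but the control structure is the same.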
What areas of future research are most exciting?
An interesting area is the use of cultured neural networks. Typically, we use neurons (brain cells) taken from rat embryos and connect them to a robot. Sensors from the robot stimulate the culture and we have observed different pathways in the cell culture changing the direction of the robot.
How do you do this?
Firstly, we separate the brain cells using enzymes and then lay them out on a multi-electrode array (essentially a small dish). Very quickly the neurons start connecting with each other. We have to feed the brain cells with minerals and nutrients. The growing brain, consisting of approximately 150,000 cells, has to be kept in an incubator at a controlled temperature of 37°C. After about 10 days the brain has lots of connections, so we give it a body.
The brain is connected to its body, bi-directionally, via a Bluetooth link. Sensory signals from the robot body are sent to the brain for processing; we then take output signals from the brain and use these to change the robot's direction of travel. What we are interested in is how pathways form in the brain as the robot brain learns to move around without bumping into the wall - a simple task. We can also use human neurons, but these take a lot longer to learn than those of rats.
Basically, therefore, we have a robot with a biological brain.
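The closed loop just described - sense, stimulate, read the culture's response, steer - can be sketched in miniature. In this Python sketch a stub function stands in for the living culture on its multi-electrode array; the response function, gains and steering rule are all illustrative assumptions:

```python
# Minimal sketch of the sense -> stimulate -> respond -> steer loop.
# culture_response is a stand-in stub for the biological culture;
# the real experiment reads spikes from a multi-electrode array.

def culture_response(stimulus):
    """Stub for the culture: fires more strongly on the side receiving
    the stronger stimulus (a hypothetical, asymmetric simplification)."""
    left, right = stimulus
    return [left * 1.2, right * 0.8]  # assumed response gains

def drive_step(sonar_left, sonar_right):
    """One pass around the loop: sense -> stimulate -> read spikes -> steer."""
    # Closer obstacles (smaller distance) produce stronger stimulation.
    stimulus = (1.0 / sonar_left, 1.0 / sonar_right)
    spikes = culture_response(stimulus)
    # Steer away from the side whose electrodes fired more.
    return "turn_right" if spikes[0] > spikes[1] else "turn_left"

print(drive_step(sonar_left=0.5, sonar_right=2.0))  # prints turn_right
```

The learning described in the interview happens inside the real culture, as pathways strengthen over repeated passes around this loop; the stub above only shows where the biology sits in the control cycle.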
What’s really exciting is that when we observed how the robot and the cell culture interacted over a month or two, the performance of the robot in navigating certain tasks improved.
From a medical point of view we can add chemicals (medicines) to the culture to kill off certain parts of the mini-brain we’ve created and then study how it responds. This could be useful for studying strokes and other conditions where areas of the brain shut down. We can also study what happens when fresh neurons are applied or when the culture is connected to a computer network that overcomes its injury. Adding new cells to areas of the brain that are not working properly could restore its function, but for ethical reasons this is not the kind of research you could do in live human subjects - the risk would be too great.
Could this accelerate neuroscience research?
Yes, the culture is like a small model brain. Of course, it’s an enormous step to go from that to a fully working brain, but it gives us the freedom to try all sorts of things that scientists could never try on live human subjects – it’s a fantastic experiment base. We can add drugs or cells and see what happens without crossing ethical barriers.
What is the future of the man/machine relationship?
Well, we hope it’s a friendly working relationship! I think the major development we’re seeing is that robots – which can already do a lot of physical things that we cannot do, such as flying – will be doing what we regard as intellectual things. They will be better able to analyse, decide and communicate.
What are the ethical implications of blurring the lines between humans and technology?
All the experiments I do have ethical approval – which is only right and proper. However, often the ethical checks are not in the right place. If I want to implant a chip in my arm, the approval of health authorities and a willing surgeon will be required. But in terms of the big-picture implications of what it means to enhance someone’s physical capacity, there’s no ethical body that considers that.
What kind of physical enhancement do you foresee?
Extrasensory input is an intriguing area. Take someone who is blind: we can give them an implant in their brain or nervous system that adds a new ultrasonic sense. They would then be able to sense in the same way as a bat. And if you can do it for someone with a sensory problem you can also do it for someone without. My eyesight is normal for my age but I could have ultrasonic, infrared or x-ray senses.
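An ultrasonic sense of the kind described could be delivered by encoding measured distance as a nerve-stimulation pulse rate, so that nearer objects feel more intense. A minimal Python sketch of that mapping, where the range and rate figures are illustrative assumptions:

```python
# Sketch of encoding an ultrasonic "extra sense": map measured distance to
# a stimulation pulse rate. The range and rate values are assumptions.

def distance_to_pulse_rate(distance_m, max_range_m=3.0, max_rate_hz=50.0):
    """Nearer objects -> higher pulse rate; beyond max range -> no pulses."""
    if distance_m >= max_range_m:
        return 0.0
    # Linear mapping: 0 m -> max_rate_hz, max_range_m -> 0 Hz.
    return max_rate_hz * (1.0 - distance_m / max_range_m)

print(distance_to_pulse_rate(0.0))  # prints 50.0: object at touching distance
print(distance_to_pulse_rate(1.5))  # prints 25.0: halfway across the range
print(distance_to_pulse_rate(4.0))  # prints 0.0: out of range, no sensation
```

As the interview notes, the remarkable part is on the biological side: the brain learns to interpret whatever consistent encoding it is given.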
It’s akin to cosmetic surgery. In some cases, the intervention is to overcome a problem but often it is a case of someone – who can afford it – seeking to enhance themselves. It will get more interesting when we are dealing with the brain; giving people intellectual powers that others don’t have. There are many potential benefits but it will also prompt important ethical discussions about how the technology should be used.
Tell us about Project Cyborg
An array was implanted into my arm which effectively connected me to a mechanical hand over the internet. I was in one country – the United States – and was controlling a robotic hand in England. I moved the hand and I felt what it was feeling. This showed that you can extend your nervous system over the internet.
When I had my implant, and was able to control the robot hand in another country using only signals from my brain, I felt extremely powerful. It means that, for example, a soldier could be safe at home but have body parts in another place on the battlefield. If the body parts get destroyed, okay it might be a little traumatic, but they are still alive.
It also means, for example, that astronauts don’t need to travel in space. Remember that as my brain was controlling the robot hand, sensory signals were sent back from the robot hand’s fingers to stimulate my brain – so I had a touch sense on a different continent. It’s amazing how quickly the human brain adapts and accepts such new inputs.