10.17

The Real Cyborgs

Arthur House

© Clara Molden/Telegraph Media Group Limited 2013

Arthur House is features editor of the Calvert Journal and founding editor of the online quarterly The Junket. He was previously a journalist at the Telegraph, a British newspaper in which this article originally appeared in 2014.

Ian Burkhart concentrated hard. A thick cable protruded from the crown of his shaven head. A sleeve sprouting wires enveloped his right arm. The 23-year-old had been paralysed from the neck down since a diving accident four years ago. But, in June this year, in a crowded room in the Wexner Medical Centre at Ohio State University, Burkhart’s hand spasmed into life.

At first it opened slowly and shakily, as though uncertain who its owner was. But when Burkhart engaged his wrist muscles, its upward movement was sudden and decisive. You could hear the joints — unused for years — cracking. The scientists and medical staff gathered in the room burst into applause.

The technology that made this possible, Neurobridge, had successfully reconnected Burkhart’s brain with his body. It was probably the most advanced intertwining of man and machine that had so far been achieved.

But such milestones are coming thick and fast. Quietly, almost without anyone really noticing, we have entered the age of the cyborg, or cybernetic organism: a living thing both natural and artificial. Artificial retinas and cochlear implants (which connect directly to the brain through the auditory nerve system) restore sight to the blind and hearing to the deaf. Deep-brain implants, known as “brain pacemakers,” alleviate the symptoms of 30,000 Parkinson’s sufferers worldwide. The Wellcome Trust is now trialling a silicon chip that sits directly on the brains of Alzheimer’s patients, stimulating them and warning of dangerous episodes.


5 A growing cadre of innovators is taking things further, using replacement organs, robotic prosthetics and implants not to restore bodily functions but to alter or enhance them. When he lost his right eye in a shotgun accident in 2005, the Canadian filmmaker Rob Spence replaced it with a wireless video camera that transmits what he’s seeing in real time to his computer. Last year, the electronic engineer Brian McEvoy, who is based in Minnesota, made himself a kind of internal satnav by fitting himself with a subdermal compass.

Canadian filmmaker Rob Spence lost his eye in a shooting accident. He has had his prosthetic eye fitted with a camera and is now producing a documentary called the Eyeborg Project.
In what ways is Spence’s use of technology similar to and different from that of Ian Burkhart (described at the beginning of this article)?
Fairfax Media via Getty Images

“This is the frontline of the Human Enhancement Revolution,” wrote the technology author and philosopher Patrick Lin last year. “We now know enough about biology, neuroscience, computing, robotics, and materials to hack the human body.”

The US military is pouring millions of dollars into projects such as Ekso Bionics’ Human Universal Load Carrier (HULC), an “Iron Man”–style wearable exoskeleton that gives soldiers superhuman strength. Its Defense Advanced Research Projects Agency (DARPA) is also working on thought-controlled killer robots, “thought helmets” to enable telepathic communication and brain-computer interfaces (BCIs) to give soldiers extra senses, such as night vision and the ability to “see” magnetic fields caused by landmines.

Ever since the earliest humans made stone tools, we have tried to extend our powers. The bicycle, the telescope and the gun all arose from this same impulse. Today, we carry smartphones — supercomputers, really — in our pockets, giving us infinite information and unlimited communication at our fingertips. Our relationship with technology is becoming increasingly intimate, as wearable devices such as Google Glass, Samsung Gear Fit (a smartwatch-cum-fitness tracker) and the Apple Watch show. And wearable is already becoming implantable.

In America, a dedicated amateur community — the “biohackers” or “grinders” — has been experimenting with implantable technology for several years. Amal Graafstra, a 38-year-old programmer and self-styled “adventure technologist,” has been inserting various types of radio-frequency identification (RFID) chips into the soft flesh between his thumbs and index fingers since 2005. The chips can be read by scanners that Graafstra has installed on the doors of his house, and also on his laptop, which gives him access with a swipe of his hand without the need for keys or passwords. He sells the chips to a growing crowd of “geeky, hacker-type software developers,” he tells me, direct from his website, Dangerous Things, having used crowdfunding to pay for the manufacturing (he raised almost five times his target amount).


Here is an X-ray showing the chips implanted in Amal Graafstra’s hands.
What are some uses right now for this technology, and what future uses can you imagine?
Amal Graafstra, founder of DangerousThings.com

10 Graafstra, a hyper-articulate teddy bear of a man, is unimpressed by wearable devices. “A wearable device is just one more thing to manage during the day. I don’t think people will want to deck themselves out with all that in the future,” he says, dismissing Samsung Gear Fit as “large, cumbersome and not exactly fashionable.” Instead, he envisages an implant that would monitor general health and scan for medical conditions, sending the information to the user’s smartphone or directly to a doctor. This would be always there, always on, and never in the way — and it could potentially save a lot of doctors’ time and money as fewer checkups would be necessary and health conditions could be recognised before they became serious.

Graafstra defines biohackers as “DIY cyborgs who are upgrading their bodies with hardware without waiting for corporate development cycles or authorities to say it’s OK.” But, he concedes, “Samsung and Apple aren’t blind to what we’re doing. Somewhere in the bowels of these companies are people thinking about implantables.” He mentions Motorola’s experiments with the “password pill,” which sends signals to devices from the stomach. (The same company has filed a patent for an “electronic throat tattoo” which fixes a minuscule microphone on the skin so users can communicate with their devices via voice commands.)

As robotics and brain-computer interfaces continue to improve, and with them the likelihood that advanced cybernetic enhancement will become widely available, several worrying questions emerge. Will those with the resources to access enhancements become a cyborg super-class that is healthier, smarter and more employable than the unenhanced? Will the unenhanced feel pressured into joining their ranks or face falling behind? And who will regulate these enhancements? In the wrong hands, cyborg technology could quickly become the stuff of dystopian science fiction. It’s all too easy to imagine totalitarian regimes (or unscrupulous health insurers) scraping information from our new, connected body parts and using it for their own gain.


. . .

Kevin Warwick can justifiably claim to be the world’s first cyborg. In the 1990s, Reading University’s visiting professor of cybernetics started implanting RFID chips into himself. In 2002, he underwent pioneering surgery to have an array of electrodes attached to the nerve fibres in his arm. This was the first time a human nervous system had been connected to a computer. Warwick’s “neural interface” allowed him to move a robotic hand by moving his own and to control a customised wheelchair with his thoughts. It also enabled him to experience electronic stimuli coming the other way. In one experiment he was able to sense ultrasound, which is beyond normal human capability. “I was born human,” Warwick has said, “but I believe it’s something we have the power to change.”

Cheerleaders for a cyborg future, like Prof. Warwick, call themselves “transhumanists.” Transhumanism aims to alter the human condition for the better by using technology (as well as genetic engineering, life extension science and synthetic biology) to make us more intelligent, healthier and longer-lived than has ever been possible — eventually transforming humanity so much it becomes “post-human.”

seeing connections

In science fiction films like Terminator and Blade Runner, technology has advanced so far that it is nearly impossible to distinguish between human and robot. In the real world, scientists and robotics engineers have been working to build computers that can pass what is called “The Turing Test,” which evaluates how convincingly a computer responds to a series of questions posed by a human interrogator.

Facebook has been developing a version of the Turing Test to distinguish between robots and humans on its site. The computers are given twenty questions, five of which we have included here. Take the test to see if you are more human than a computer:

  1. John is in the playground. Bob is in the office.

    Where is John?

    Answer: playground

  2. John is in the playground. Bob is in the office. John picked up the football. Bob went to the kitchen.

    Where is the football?

    Where was Bob before the kitchen?

    Answers: playground; office

  3. John picked up the apple. John went to the office. John went to the kitchen. John dropped the apple.

    Where was the apple before the kitchen?

    Answer: office

  4. The office is north of the bedroom. The bedroom is north of the bathroom.

    What is north of the bedroom?

    What is the bedroom north of?

    Answers: office; bathroom

  5. Mary gave the cake to Fred. Fred gave the cake to Bill. Jeff was given the milk by Bill.

    Who gave the cake to Fred?

    Who did Fred give the cake to?

    What did Jeff receive?

    Who gave the milk?

    Answers: Mary; Bill; milk; Bill

None of the AI systems tested achieved 100 percent correct answers, although two averaged 93 percent. Machines cannot yet fully replicate the way humans process and understand language, but they are getting closer.

Now that you have taken a portion of the quiz, think about how these questions are intended to distinguish between robot and human. What does AI still have trouble doing?


15 One of the most prominent transhumanists is the inventor and philosopher Ray Kurzweil, currently director of engineering at Google, and populariser of the concept of the technological “singularity” — a point he puts at around 2045, when artificial intelligence will outstrip human intelligence for the first time. The predicted consequences of such a scenario vary wildly, from the enslavement of humanity to a utopian world without war (or even, as a result of self-replicating nanotechnology, the transformation of the planet, or perhaps the entire universe, into something called “grey goo” — but that’s a whole other story).

Kurzweil, the award-winning creator of the flatbed scanner, also believes he has a shot at immortality and intends to resurrect the dead, including his own father. “We will transcend all of the limitations of our biology,” he has said. “That is what it means to be human — to extend who we are.”

Many transhumanists, particularly in Silicon Valley, where belief in the singularity has assumed the character of an eschatological religion, think that fusing with technology is our only hope of surviving the consequences of this great change.

“We’re not physically more competent than other species but in our intellectual capabilities we have something of an edge,” Warwick tells me. “But quite soon machines are going to have an intellectual power that we’ll have difficulty dealing with.” The only way to keep up with them, he believes, is to artificially enhance our poor organic bodies and brains. “If you can’t beat them, join them,” he says.

Professor James Lovelock, the veteran scientist and environmentalist, is considerably less alarmed than Warwick. “Artificial intelligence is never going to be able to intuit or invent things — all it can do is follow logical instructions. Perhaps in the future when computing systems operate like our brains, then there really would be a fight, but that’s an awful long way off.”

20 Many would disagree, however. IBM, Hewlett Packard and HRL Laboratories have all received many millions of dollars from DARPA to develop exactly what Lovelock fears: so-called “cognitive” or “neuromorphic” computing systems designed to learn, associate and intuit just like a mammalian brain. IBM brought out its first prototype in 2012.

Warwick may have been the first to experiment with cybernetics but the honour of being the world’s first government-recognised cyborg goes to the artist Neil Harbisson. Born with the rare condition of achromatopsia, or total colour blindness, Harbisson developed the “eyeborg” — a colour sensor on a head-mounted antenna that connects to a microchip implanted in his skull. It converts colours into sounds (electronic sine waves) which he hears via bone conduction.

Harbisson’s severe bowl cut and hard-to-place accent (his mother is Catalan and his father Northern Irish) only heighten the impression that he might have been beamed down from another planet.

Over time he has learned to associate every part of the spectrum with a different pitch until these associations have become second nature. “It was when I started to dream in colour that I felt the software and my brain had united,” he said in [a] TED talk in 2012.

Ten years ago, he won a battle with the British government to have the “eyeborg” recognized as a part of his body. It now appears in his passport photo.

25 He set up the Cyborg Foundation two years ago with his partner, Moon Ribas, a dancer and a fellow “cyborg activist” (she has a seismic sensor in her arm, which enables her to feel vibrations of varying intensity when an earthquake occurs anywhere in the world). She and Harbisson believe that everyone should have the right to become a cyborg. Like the biohackers, they propose that would-be cyborgs use open-source technology to design and make their own enhancements, rather than buying a finished product off the shelf.


It is hard, however, to see the majority of people adopting a DIY philosophy like this when state-of-the-art options become available commercially. In computer gaming, headsets using electroencephalogram (EEG) technology are being developed so that users can control games with their thoughts. “For example,” explains Zach Lynch, organizer of the first “Neurogaming” conference in San Francisco last year, “players can smash boulders by concentrating or scare away demons with angry facial expressions.” A British gaming company, Foc.us, is using technology first developed by DARPA to train snipers in order to boost playing performance. According to Lynch, its “transcranial direct current stimulation device literally zaps your head with a minuscule electric pulse [which you can’t feel] during training to help make your brain more susceptible to learning.”

Chad Bouton, the inventor of the Neurobridge technology at Battelle Innovations that is enabling Ian Burkhart to move his hand again, believes that invasive brain-computer interfaces could also one day cross over into the non-therapeutic field.

“Talking about this bionic age that we’re entering,” he says on the phone from Ohio, “you certainly can imagine brain implants that could augment your memory.” Or give you direct access to the internet. “You could think about a search you’d like to make and get the information streamed directly into your brain,” he says. “Maybe decades from now we’ll see something like that.”

Prof. James Lovelock, who is himself fitted with a wi-fi-controlled pacemaker, thinks these innovations come with dangers. He is chiefly “worried about the spam. If I had a cybernetic eye I wouldn’t want to wake up in the middle of the night with [an advert for] somebody’s used car flashing through my brain.”

30 Then there is the prospect of spying. Could insurance companies harvest biometric data from people’s enhancements, or paranoid governments use them to monitor their citizens? Amal Graafstra is adamant that his access-control chip is not at risk from such things, due to the close proximity (two inches or less) required to read it. “If the government was handing out these tags and requiring people to use them for banking, say, that would be pretty suspect,” he tells me. “But it doesn’t need to do that, because we have our phones on us all the time already” — a perfectly effective “tracking device,” as he puts it, should governments be interested in our movements.

Even assuming that cybernetic technology could be made safe from such dangers, opponents of transhumanism (sometimes termed “bioconservatives”) invoke the medical principle that technology should only restore human capabilities, not enhance them.

“The fascination with ‘enhancement’ is a way to convince healthy people that they are in need of treatment,” says Dr. David Albert Jones, director of the Anscombe Bioethics Centre in Oxford. “It is a wasteful distraction when we are failing to meet the basic needs of people with real health problems.”

He’s not against what he terms “human-technology interfaces” but, he says, they “should be developed to address the needs of people with disabilities, not to create a market for the self-regarding and the worried-well.” Many medical professionals would agree.

Yuval Noah Harari, an Israeli historian, worries about enhancements leading to unprecedented levels of inequality. “Medicine is moving towards trying to surpass the norm, to help [healthy] people live longer, to have stronger memories, to have better control of their emotions,” he said in a recent interview. “But upgrading like that is not an egalitarian project, it’s an elitist project. No matter what norm you reach, there is always another upgrade which is possible.” And the latest, most high-tech upgrades will always only be available to the rich.

35 But where does a case like Neil Harbisson’s fall? He couldn’t cure his colour blindness, so he developed an extra sense to make up for it. Is this restoration or augmentation? Where cyborg ethics are concerned, the lines are blurred.


Rich Lee was also drawn to biohacking in order to overcome a disability. Lee, a 35-year-old salesman from Utah whose wet-shave-and-goatee look is more 1990s nu-metal than cybernetic citizen of the future, is losing his vision, and last year was certified blind in one eye. He’s best known for having a pair of magnets implanted into his traguses (the nubs of cartilage in front of the ear-hole). They work with a copper coil worn around his neck, which he hooks up to his iPod, turning the magnets into internal headphones. But he can also attach other things to the coil, such as wi-fi and electro-magnetic sensors, enabling him to sense things normally outside of human capability. By attaching it to an ultrasonic rangefinder, he hopes to learn how to echolocate, like a bat, so that when he goes blind he will still be able to judge his distance from objects — essentially, to see in the dark. [. . .]

“It can flip very quickly,” says Kevin Warwick. “Take something like laser eye surgery. About 15 years ago people were saying ‘Don’t go blasting my eyes out’ and now they’re saying ‘Don’t bother with contact lenses.’ ”

In a sense, cyborg technology is nothing new — pacemakers, for example, have been around for decades. But recent advances have opened up new possibilities, and people are embracing them. Real cyborgs already walk among us. Soon, we may have to decide whether we want to join them.

Understanding and Interpreting

  1. Identify the uses of technology presented in this article that seem like something out of a science fiction story, and identify the technology that seems somewhat ordinary. What criteria did you use to judge the difference? If you had to guess, which “sci-fi” technology from this article is most likely to become ordinary to us in the future? Explain.

  2. In paragraph 8, Arthur House writes, “Ever since the earliest humans made stone tools, we have tried to extend our powers.” What technology described in this article would extend human powers?

  3. Based on the information in this article, write a definition of “transhumanism.” What are the movement’s chief aims? Explain how transhumanism has either a utopian or dystopian view of the future.

  4. Reread paragraphs 29–34 and identify the counterarguments to the merging of human and machine that House raises. To what extent does House seem to accept, reject, or qualify these counterarguments?

  5. Summarize the argument between the people who support only technology that is designed to restore human capabilities and those who are advocates of technology that would enhance these capabilities (pars. 31–38). What position, if any, does the author of this article take?

Analyzing Language, Style, and Structure

  1. What effect does the author achieve by starting the article with the demonstration of Ian Burkhart moving his hand? How might the effect have been different if House had begun with Amal Graafstra, the “biohacker”?

  2. What is House’s tone toward the merging of humans and machines? Focus on the evidence and the examples that he chooses to include to support your interpretation.

  3. Based solely on his word choice, do you think that House admires or is skeptical of the people involved in creating new connections between humans and machines?


Connecting, Arguing, and Extending

  1. House ends his article by stating, “Real cyborgs already walk among us. Soon, we may have to decide whether we want to join them.” Reading through the various levels of technology presented in the article, what would you be comfortable integrating with yourself? What would be a line you would not cross and why?

  2. Near the end of the piece, House asks about the distinction between restoration and augmentation, saying, “Where cyborg ethics are concerned, the lines are blurred” (par. 35). Write an argument identifying the ethical guidelines you would propose for regulation of machine-human integration, supporting your argument with evidence from this text and additional research.

  3. In 2005, Ray Kurzweil, who was mentioned in this article, wrote The Singularity Is Near: When Humans Transcend Biology. Research the “singularity,” a term for the point at which “artificial intelligence will outstrip human intelligence for the first time” (par. 15) and when humans will “transcend all of the limitations of our biology” (par. 16) to become indistinguishable from machines. According to your research, how likely is the singularity to occur, and what might be benefits and drawbacks?