Is It OK to Torture or Murder a Robot?
Richard Fisher
Richard Fisher is a journalist and editor, and is currently the Editor of BBC Future (www.bbc.com/future).
Kate Darling likes to ask you to do terrible things to cute robots. At a workshop she organised this year, Darling asked people to play with a Pleo robot, a child’s toy dinosaur. The soft green Pleo has trusting eyes and affectionate movements. When you take one out of the box, it acts like a helpless newborn puppy — it can’t walk and you have to teach it about the world.
Yet after an hour allowing people to tickle and cuddle these loveable dinosaurs, Darling turned executioner. She gave the participants knives, hatchets and other weapons, and ordered them to torture and dismember their toys. What happened next “was much more dramatic than we ever anticipated,” she says.
For Darling, a researcher at Massachusetts Institute of Technology, our reaction to robot cruelty is important because a new wave of machines is forcing us to reconsider our relationship with them. When Darling described her Pleo experiment in a talk in Boston this month, she made the case that mistreating certain kinds of robots could soon become unacceptable in the eyes of society. She even believes that we may need a set of “robot rights.” If so, in what circumstance would it be OK to torture or murder a robot? And what would it take to make you think twice before being cruel to a machine?
Until recently, the idea of robot rights had been left to the realms of science fiction. Perhaps that’s because the real machines surrounding us have been relatively unsophisticated. Nobody feels bad about chucking away a toaster or a remote-control car that has stopped working. Yet the arrival of social robots is changing that. They display autonomous behaviour, show intent and embody familiar forms, such as pets or humanoids, says Darling. In other words, they act as if they are alive, and that triggers our emotions whether we want it to or not.
5 For example, in a small experiment conducted for the radio show Radiolab in 2011, Freedom Baird of MIT asked children to hold upside down a Barbie doll, a hamster and a Furby robot for as long as they felt comfortable. While the children held the doll upside down until their arms got tired, they soon stopped torturing the wriggling hamster, and after a little while, the Furby too. They were old enough to know the Furby was a toy, but couldn’t stand the way it was programmed to cry and say “Me scared.”
It’s not just kids that form surprising bonds with these bundles of wires and circuits. Some people give names to their Roomba vacuum cleaners, says Darling. And soldiers honour their robots with “medals” or hold funerals for them. She cites one particularly striking example of a military robot that was designed to defuse landmines by stepping on them. In a test, the explosions ripped off most of the robot’s legs, and yet the crippled machine continued to limp along.
Watching the robot struggle, the colonel in charge called off the test because it was “inhumane.”
Killer Instinct
Some researchers are converging on the idea that if a robot looks like it is alive, with its own mind, the tiniest of simulated cues forces us to feel empathy with machines, even though we know they are artificial.
Earlier this year, researchers from the University of Duisburg-Essen in Germany measured people’s physiological and emotional responses while they watched videos of a Pleo dinosaur robot being either treated affectionately or abused. Viewers showed markedly more distress when the robot was mistreated, even though they knew it was only a machine.
Darling discovered the same when she asked people to torture the Pleo dinosaur at the Lift conference in Geneva in February. The workshop took a more uncomfortable turn than expected.
10 After an hour of play, the people refused to hurt their Pleo with the weapons they had been given. So then Darling started playing mind games, telling them they could save their own dinosaur by killing somebody else’s. Even then, they wouldn’t do it.
Finally, she told the group that unless one person stepped forward and killed just one Pleo, all the robots would be slaughtered. After much hand-wringing, one reluctant participant eventually stepped forward and brought a hatchet down on a Pleo.
After this brutal act, the room fell silent for a few seconds, Darling recalls. The strength of people’s emotional reaction seemed to have surprised them.
Given the possibility of such strong emotional reactions, a few years ago roboticists in Europe argued that we need a new set of ethical rules for building robots. The idea was to adapt author Isaac Asimov’s famous “laws of robotics” for the modern age. One of their five rules was that robots “should not be designed in a deceptive way . . . their machine nature must be transparent.” In other words, there needs to be a way to break the illusion of emotion and intent, and see a robot for what it is: wires, actuators and software.
Darling, however, believes that we could go further than a few ethical guidelines. We may need to protect “robot rights” in our legal systems, she says.
15 If this sounds absurd, Darling points out that there are precedents from animal cruelty laws. Why exactly do we have legal protection for animals? Is it simply because they can suffer? If that’s true, then Darling questions why we have strong laws to protect some animals, but not others. Many people are happy to eat animals kept in awful conditions on industrial farms or to crush an insect under their foot, yet would be aghast at mistreatment of their next-door neighbour’s cat.
The following is from a press release put out by the University of Melbourne describing the research of animal behavior scientist Jean-Loup Rault.
Robotic dogs are likely to replace the real thing in households worldwide in as little as a decade, as our infatuation with technology grows and more people migrate to high-density city living, according to Dr. Rault.
“Robots can, without a doubt, trigger human emotions,” Dr. Rault added. “If artificial pets can produce the same benefits we get from live pets, does that mean that our emotional bond with animals is really just an image that we project on to our pets?”
Write an argument for or against robo-pets replacing living animals as household companions.
The reason, says Darling, could be that we create laws when we recognise their suffering as similar to our own. Perhaps the main reason we created many of these laws is that we don’t like to see acts of cruelty. It’s less about the animal’s experience and more about our own emotional pain. So, even though robots are machines, Darling argues that there may be a point beyond which the performance of cruelty — rather than its consequences — is too uncomfortable to tolerate.
Feel Your Pain
Indeed, harm to a victim is not the only reason we decide to regulate a technology. Consider an altogether different kind of gadget: a few weeks ago the British Medical Association argued that smoking e-cigarettes should be banned in enclosed public places. The concern was not that the vapour demonstrably harms bystanders, but that the sight of people smoking, even artificially, normalises the act, especially for children.
To take another example: if a father is torturing a robot in front of his 4-year-old child, would that be acceptable? The child cannot be expected to have an adult’s understanding that the machine feels nothing; all the child sees is an act of cruelty, and watching it could teach the lesson that cruelty is acceptable.
Somewhere down the line there’s also the possibility of a nasty twist: that machines really could experience suffering — just not like our own. Already some researchers have begun making robots “feel” pain to navigate the world. Some are concerned that when machines eventually acquire a basic sense of their own existence, the consequences will not be pleasant. For this reason, the philosopher Thomas Metzinger argues that we should stop trying to create intelligent robots at all. The first conscious machines, says Metzinger, will be like confused, disabled infants — certainly not the sophisticated, malign AI of science fiction — and so treating them like typical machines would be cruel. If robots have a basic consciousness, then it doesn’t matter if it is simulated, he says. It believes it is alive; it can experience suffering. Metzinger puts it like this: “We should refrain from doing anything to increase the overall amount of suffering in the universe.”
20 What’s clear is that there is a spectrum of “aliveness” in robots, from basic simulations of cute animal behaviour, to future robots that acquire a sense of suffering. But as Darling’s Pleo dinosaur experiment suggested, it doesn’t take much to trigger an emotional response in us. The question is whether we can — or should — define the line beyond which cruelty to these machines is unacceptable. Where does the line lie for you? If a robot cries out in pain, or begs for mercy? If it believes it is hurting? If it bleeds?
After reading this article, how do you think the author, Richard Fisher, would answer the question posed in the title? What evidence do you have to support your response?
In paragraph 4, Fisher identifies a specific class of robots as “social robots.” According to the article, what are the distinctions between this class of robots and others?
Fisher identifies one of the proposed ethical rules for building robots, “robots ‘should not be designed in a deceptive way . . . their machine nature must be transparent’ ” (par. 13). What evidence in this article supports the need for this guideline?
Fisher includes a quote from philosopher Thomas Metzinger (par. 19). Why does Metzinger recommend that humans avoid building intelligent robots?
Notice how Fisher starts with the story of the Pleo robot, moves away from it, and then returns to finish that story about halfway through the piece. What is a likely effect of that structural choice, and how effective is it in building his argument?
Look back at the opening paragraph. What words and phrases does Fisher use to describe the Pleo robot, and what is the effect of these word choices?
In paragraph 6, Fisher includes a quote from a U.S. military officer who called a test that was destroying a landmine-defusing robot “inhumane.” What is the effect of including this quote, and of that particular word choice, on Fisher’s argument?
To support his argument about the emotional connection humans have with robots, Fisher includes the results of a number of research studies. Look back at one that you think best supports his claim, and explain why. Then, identify one research study that you think is less effective in supporting Fisher’s claim, and explain why.
In paragraph 15, Fisher includes an analogy that the researcher Kate Darling makes in comparing robot rights to the rules that prevent animal cruelty. To what extent is this analogy relevant to the argument about robots, and how is it effective (or ineffective)?
While there are a number of places where Fisher makes appeals to logos, the strength of his argument really rests on pathos. Locate two or three of the strongest examples of pathos and explain how Fisher uses them and for what effect.
Fisher chooses to end with a series of rhetorical questions. What does he accomplish by ending the piece in this manner?
The author cites Kate Darling, who suggests that we may need a set of “robot rights” (par. 3). Write an argument in which you propose one right that we ought to be prepared to offer to robots. Along with rights come responsibilities, so you might include in your argument a responsibility that robots will need to follow as a result of their newfound right.
Write about a time that you have entered into an emotional attachment with an inanimate object, such as a stuffed animal when you were young, a favorite hat, or a lucky pencil. What aspects of that object led to your emotional connection, and how similar is your experience to what the researchers quoted in this article explain?
Conduct research in order to determine the likely number of years from now we can expect to have the intelligent, social robots that Fisher describes here. What are the technological leaps that have already been made and what still needs to be accomplished?
Answer the main question that Fisher ends his piece with: “whether we can — or should — define the line beyond which cruelty to these machines is unacceptable” (par. 20). Where does that line lie for you, and why?