Ethics Robotics Essay

The question of robotic ethics is making everyone tense. We worry about the machine’s lack of empathy, how calculating machines are going to know how to do the right thing, and even how we are going to judge and punish beings of steel and silicon.

Personally, I do not have such worries.

I am less concerned about robots doing wrong, and far more concerned about the moment they look at us and are appalled at how often we fail to do right. I am convinced that they will not only be smarter than we are, but have truer moral compasses, as well.

Let’s be clear about what is and is not at issue here.

First, I am not talking about whether or not we should deploy robotic soldiers. That is an ethical decision that is in human hands. When we consider the question of automating war, we are considering the nature of ourselves, not our machines. Yes, there is a question of whether the capabilities of robotic soldiers and autonomous weapons are up to the task, but that has to do with how well they work rather than what their ethics are.

Second, I am not talking about the “ethics” of machines that are just badly designed. A self-driving car that plows into a crowd of people because its sensors fail to register them isn’t any more unethical than a vehicle that experiences unintended acceleration. It is broken or badly built. Certainly there is a tragedy here, and there is responsibility, but it is in the hands of the designers and manufacturers.

Third, while we need to look at responsibility, this is not about punishment. Our ability or inability to punish a device is a matter of how we respond to unethical behavior, not how to assess it. The question of whether a machine has done something wrong is very different than the issue of what we are going to do about it.

Finally, this is not about pathological examples such as hyperintelligent paper-clip factories that destroy all of humanity in single-minded efforts to optimize production at the expense of all other goals. I would put this kind of example in the category of “badly designed.” And given that most of the systems that manage printer queues in our offices are smarter than a system that would tend to do this, it is probably not something that should concern us.

These are examples of machines doing bad things because they are broken or because that’s how they are built. These are all examples of tools that might very well hurt us, but do not have to themselves deal with ethical dilemmas.

But “dilemma” is the important word here.

Situations that match up well against atomic rules of action are easy to deal with for both machines and people. Given a rule that states that you should never kill anyone, it is pretty easy for a machine (or person for that matter) to know that it is wrong to murder the owner of its local bodega, even if it means that it won’t have to pay for that bottle of Chardonnay. Human life trumps cost savings.

This is why Isaac Asimov’s “Three Laws of Robotics” seem so appealing to us. They provide a simple value ranking that — on the face of it, at least — seems to make sense:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The place where both robots and humans run into problems is situations in which adherence to a rule is impossible because every available choice violates it. The standard example used to explain this is the Trolley Dilemma.

The dilemma is as follows:

A trolley is out of control and moving at top speed down a track. At the end of the track, five people are tied down and will be killed in seconds. There is a switch that can divert the trolley to another track but, unfortunately, another person is tied down on that track and will be crushed if you pull the switch.

If you pull the switch, one person dies. If you don’t, five people die. Either way, your action or inaction is going to kill people. The question is, how many?

Most people actually agree that sacrificing the lone victim makes the most sense. You are trading one against five. Saving more lives is better than saving fewer lives. This is based on a fairly utilitarian calculus that one could easily hand to a machine.

Unfortunately, it is easy to change the details of this example in ways that shift our own intuitions.

Imagine that the single victim is a researcher who now has the cure for cancer in his or her head. Or our lone victim could be a genuinely noble person who has helped, and will continue to help, those in need. Likewise, the victims on the first track could all be terminally ill with only days to live, or could all be convicted murderers who were on their way to death row before being waylaid.

In each of these cases, we begin to consider different ways to evaluate the trade-offs, moving from a simple tallying up of survivors to more nuanced calculations that take into account some assessment of their “value.”

Even with these differences, the issue still remains one of a calculus of sorts.
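
To make that "calculus of sorts" concrete, here is a minimal sketch of the kind of comparison a machine could run. Every number and weight in it is an assumption invented for illustration, not a proposal for how the values should actually be set.

```python
# A toy utilitarian comparison for the trolley variants discussed above.
# The victims and their "value" weights are illustrative assumptions only.

def expected_loss(victims, value=lambda person: 1.0):
    """Sum the (assumed) value of everyone who would die."""
    return sum(value(person) for person in victims)

# Coarse-grained calculus: every life counts the same.
main_track = ["A", "B", "C", "D", "E"]  # five people tied to the main track
side_track = ["F"]                      # the lone person on the siding

print(expected_loss(side_track) < expected_loss(main_track))  # True: pull the switch

# Nuanced calculus: change the assumed weights and the answer can flip.
weights = {"A": 0.1, "B": 0.1, "C": 0.1, "D": 0.1, "E": 0.1,  # terminally ill
           "F": 50.0}                                         # cancer researcher
print(expected_loss(side_track, weights.get) < expected_loss(main_track, weights.get))  # False
```

The point is not the particular numbers, which are arbitrary here, but that once the weights are chosen the decision itself is mechanical.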

But one change wipes this all away.

There are five people tied down, and the trolley is out of control, but there is only one track. The only way to halt the trolley is to derail it by tossing something large onto the track. And the only large thing you have at hand is the somewhat large person standing next to you. There is no alternative.

You can’t just flip a switch. You have to push someone onto the track.

If you are like most people, the idea of doing this changes things completely. It has a disturbing intimacy that the earlier scenario lacks. As a result, although most people would pull the switch, those same people resist the idea of pushing their fellow commuter to his or her doom to serve the greater good.

But while they are emotionally different, from the point of view of ethics or morality they are the same. In both cases, we are taking one life to save others. The calculus is identical, but our feelings are different.

Of course, as we increase the number of people on the track, there is a point at which most of us think that we will overcome our horror and sacrifice the life of the lone commuter in order to save the five, ten, one hundred, or one thousand victims tied to the track.

And it is interesting to consider what we say about such people. What do we say to someone who is on their knees weeping because they have done a horrible thing in service of what was clearly the greater good? We tell them that they did what they had to do, and they did the right thing. We tell them that they were brave, courageous, and even heroic.

The Trolley Dilemma exposes an interesting problem. Sometimes our ethical and moral instincts are skewed by circumstance. Our determination of what is right or wrong becomes complex when we mix in emotional issues related to family, friends, tribal connections, and even the details of the actions that we take. The difficulty of doing the right thing does not arise from our not knowing what it is. It comes from our unwillingness to pay the price that the right action often demands.

And what of our robot friends?

I would argue that an ethical or moral sense for machines can be built on a utilitarian base. The metrics are ours to choose, and can be coarse-grained (save as many people as possible), nuanced (women, children and Nobel laureates first) or detailed (evaluate each individual by education, criminal history, social media mentions, etc.). The choice of the code is up to us.

Of course, there are special cases that will require modifications of the core rules based on the circumstances of their use. Doctors, for example, don’t euthanize patients in order to spread the wealth of their organs, even if it means that there is a net positive with regard to survivors. They have to conform to a separate code of ethics, designed around the needs and rights of patients, that restricts their actions. The same holds for lawyers, religious leaders and military personnel, who establish special relationships with individuals that are protected by specific ethical codes.

So the simple utilitarian model will certainly have overlays depending on the role that these robots and AIs will play. It would not seem unreasonable for a machine to respond to a request for personal information by saying, “I am sorry, but he is my patient and that information is protected.” In much the same way that Apple defended its encryption in the face of pressure from homeland security, robotic doctors will be expected to be HIPAA compliant.
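
A rough sketch of how such an overlay might sit on top of a utilitarian base appears below. The role names, rules, and the crude string-matching check are hypothetical simplifications, not a description of any real system.

```python
# Toy sketch: a utilitarian base wrapped in a role-specific ethical overlay.
# The roles, rules, and sample request are hypothetical.

ROLE_CODES = {
    "doctor": ["disclose patient information"],        # requests a doctor must refuse
    "lawyer": ["disclose privileged communications"],  # requests a lawyer must refuse
}

def respond(role, request, utilitarian_answer):
    """Check the role's code first; otherwise fall back to the utilitarian base."""
    for forbidden in ROLE_CODES.get(role, []):
        if forbidden in request:  # a real system would need far more than substring matching
            return "I am sorry, but that information is protected."
    return utilitarian_answer(request)

print(respond("doctor",
              "disclose patient information about Mr. Smith",
              utilitarian_answer=lambda r: "Here is the information."))
# -> "I am sorry, but that information is protected."
```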

Our machines need not hesitate when they see the Trolley coming. They will act in accord with whatever moral or ethical code we provide them and the value determinations that we set. They will run the numbers and do the right thing. In emergency situations, our autonomous cars will sacrifice the few to protect the many. When faced with dilemmas, they will seek the best outcomes independent of whether or not they themselves are comfortable with the actions. And while we may want to call such calculations cold, we will have to admit that they are also right.

But machine intelligence will be different from ours, and might very well do things that are at odds with what we expect. So, as with all other aspects of machine intelligence, it is crucial that these systems are able to explain their moral decisions to us. They will need to be able to reach into their silicon souls and explain the reasoning that supports their actions.

Of course, we will need them to be able to explain themselves in all aspects of their reasoning and actions. Their moral reasoning will be subject to the same explanatory requirements we demand of any other action they take. And my guess is that they will be able to explain themselves better than we do.

At the end of the movie “I, Robot,” Will Smith and his robot partner have to disable an AI that has just enslaved all of humanity. As they close in on their goal, Smith’s onscreen girlfriend slips, and is about to fall to her death. In response, Smith screams, “Save the girl!” and the robot, demonstrating its newly learned humanity, turns its back on the primary goal and focuses on saving the girl. While very “human,” this action is intensely selfish, and a huge moral lapse.

Every time I watch this scene, I just want the robot to say, “I’m sorry, I can’t. I have to save everyone else.” But then, I don’t want it to be human. I want it to be true to its code.


In addition to being chief scientist at Narrative Science, Kris Hammond is a professor of Computer Science and Journalism at Northwestern University. Prior to joining the faculty at Northwestern, Hammond founded the University of Chicago’s Artificial Intelligence Laboratory. His research has been primarily focused on artificial intelligence, machine-generated content and context-driven information systems. He currently sits on a United Nations policy committee run by the United Nations Institute for Disarmament Research (UNIDIR). Reach him @KJ_Hammond.

Good vs. bad. Right vs. wrong. Human beings begin to learn the difference before we learn to speak—and thankfully so. We owe much of our success as a species to our capacity for moral reasoning. It’s the glue that holds human social groups together, the key to our fraught but effective ability to cooperate. We are (most believe) the lone moral agents on planet Earth—but this may not last. The day may come soon when we are forced to share this status with a new kind of being, one whose intelligence is of our own design.

Robots are coming, that much is sure. They are coming to our streets as self-driving cars, to our military as automated drones, to our homes as elder-care robots—and that’s just to name a few on the horizon (Ten million households already enjoy cleaner floors thanks to a relatively dumb little robot called the Roomba). What we don’t know is how smart they will eventually become. Some believe human-level artificial intelligence is pure science fiction; others believe they will far surpass us in intelligence—and sooner rather than later. In either case, a growing number of experts from an array of academic fields contend that robots of any significant intelligence should have the ability to tell right from wrong, a safeguard to ensure that they help rather than harm humanity.

“As machines get smarter and smarter, it becomes more important that their goals, what they are trying to achieve with their decisions, are closely aligned with human values,” says UC Berkeley computer science professor Stuart Russell, co-author of the standard textbook on artificial intelligence.

He believes that the survival of our species may depend on instilling values in AI, but doing so could also ensure harmonious robo-relations in more prosaic settings. “A domestic robot, for example, will have to know that you value your cat,” he says, “and that the cat is not something that can be put in the oven for dinner just because the fridge is empty.”

But how, exactly, does one impart morals to a robot? Simply program rules into its brain? Send it to obedience class? Play it old episodes of Sesame Street?

While roboticists and engineers at Berkeley and elsewhere grapple with that challenge, others caution that doing so could be a double-edged sword. While it might mean better, safer machines, it may also introduce a slew of ethical and legal issues that humanity has never faced before—perhaps even triggering a crisis over what it means to be human.

The notion that human/robot relations might prove tricky is nothing new. Science fiction author Isaac Asimov introduced his Three Laws of Robotics in the 1940s, in the stories later collected as I, Robot: a simple set of guidelines for good robot behavior. 1) Don’t harm human beings, 2) Obey human orders, and 3) Protect your own existence. Asimov’s robots adhere strictly to the laws and yet, hampered by their rigid robot brains, become mired in seemingly unresolvable moral dilemmas. In one story, a robot tells a woman that a certain man loves her (he doesn’t), because the truth might hurt her feelings, which the robot understands as a violation of the first law. To avoid breaking her heart, the robot breaks her trust, traumatizing her in the process and thus violating the first law anyway.

The conundrum ultimately drives the robot insane.

Although a literary device, Asimov’s rules have remained a jumping off point for serious discussions about robot morality, serving as a reminder that even a clear, logical set of rules may fail when interpreted by minds different from our own.  

Recently, the question of how robots might navigate our world has drawn new interest, spurred in part by accelerating advances in AI technology. With so-called “strong AI” seemingly close at hand, robot morality has emerged as a growing field, attracting scholars from philosophy, human rights, ethics, psychology, law, and theology. Research institutes have sprung up focused on the topic. Elon Musk, founder of Tesla Motors, recently pledged $10 million toward research ensuring “friendly AI.” There’s been a flurry of books, numerous symposiums, and even a conference about autonomous weapons at the United Nations this April.

The public conversation took on a new urgency last December when Stephen Hawking announced that the development of super-intelligent AI “could spell the end of the human race.” An ever-growing list of experts, including Bill Gates, Steve Wozniak and Berkeley’s Russell, now warn that robots might threaten our existence.

Their concern has focused on “the singularity,” the theoretical moment when machine intelligence surpasses our own. Such machines could defy human control, the argument goes, and lacking morality, could use their superior intellects to extinguish humanity.

Ideally, robots with human-level intelligence will need human-level morality as a check against bad behavior.

However, as Russell’s example of the cat-cooking domestic robot illustrates, machines would not necessarily need to be brilliant to cause trouble. In the near term we are likely to interact with somewhat simpler machines, and those too, argues Colin Allen, will benefit from moral sensitivity. Professor Allen teaches cognitive science and history of philosophy of science at Indiana University at Bloomington. “The immediate issue,” he says, “is not perfectly replicating human morality, but rather making machines that are more sensitive to ethically important aspects of what they’re doing.”

And it’s not merely a matter of limiting bad robot behavior. Ethical sensitivity, Allen says, could make robots better, more effective tools. For example, imagine we programmed an automated car to never break the speed limit. “That might seem like a good idea,” he says, “until you’re in the back seat bleeding to death. You might be shouting, ‘Bloody well break the speed limit!’ but the car responds, ‘Sorry, I can’t do that.’ We might want the car to break the rules if something worse will happen if it doesn’t. We want machines to be more flexible.”

As machines get smarter and more autonomous, Allen and Russell agree that they will require increasingly sophisticated moral capabilities. The ultimate goal, Russell says, is to develop robots “that extend our will and our capability to realize whatever it is we dream.” But before machines can support the realization of our dreams, they must be able to understand our values, or at least act in accordance with them.

Which brings us to the first colossal hurdle: There is no agreed upon universal set of human morals. Morality is culturally specific, continually evolving, and eternally debated. If robots are to live by an ethical code, where will it come from? What will it consist of? Who decides? Leaving those mind-bending questions for philosophers and ethicists, roboticists must wrangle with an exceedingly complex challenge of their own: How to put human morals into the mind of a machine.

There are a few ways to tackle the problem, says Allen, co-author of the book Moral Machines: Teaching Robots Right From Wrong. The most direct method is to program explicit rules for behavior into the robot’s software—the top-down approach. The rules could be concrete, such as the Ten Commandments or Asimov’s Three Laws of Robotics; or they could be more theoretical, like Kant’s categorical imperative or utilitarian ethics. What is important is that the machine is given hard-coded guidelines upon which to base its decision-making.
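
As a sketch of what the top-down approach might look like in practice, consider the toy filter below, which ranks hard-coded rules loosely after Asimov’s laws. The rules, flags, and candidate actions are assumptions made up for illustration.

```python
# Toy top-down approach: hard-coded rules, ranked by priority
# (loosely modeled on Asimov's laws). All details are illustrative assumptions.

RULES = [  # highest priority first
    ("do not harm a human",          lambda a: not a["harms_human"]),
    ("do not disobey a human order", lambda a: not a["disobeys_order"]),
    ("do not destroy yourself",      lambda a: not a["self_destructive"]),
]

def first_violation(action):
    """Index of the highest-priority rule this action breaks (len(RULES) if none)."""
    for i, (_name, allowed) in enumerate(RULES):
        if not allowed(action):
            return i
    return len(RULES)

candidates = [
    {"name": "swerve into crowd", "harms_human": True,  "disobeys_order": False, "self_destructive": False},
    {"name": "brake hard",        "harms_human": False, "disobeys_order": True,  "self_destructive": True},
]

# Prefer the action whose worst violation is as low-priority as possible.
best = max(candidates, key=first_violation)
print(best["name"])  # 'brake hard': disobedience and self-harm rank below harming a human
```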

The appeal here is that the engineer retains control over what the robot knows and doesn’t know. But the top-down approach may have some serious weaknesses. Allen believes that a robot using such a system may face too great a computational burden when making quick decisions in the real world. Using Asimov’s first rule (don’t harm humans) as an example, Allen explains, “To compute whether or not a given action actually harms a human requires being able to compute all of the consequences of the action out into the distant future.”

So imagine an elder-care robot assigned the task of getting grandpa to take his meds. The trouble is, grandpa doesn’t want to. The robot has to determine what will cause greater harm: allowing him to skip a dose, or forcing him to take meds against his will. A true reckoning would require the robot to account for all the possible consequences of each choice and then the consequences of those consequences and so on, stretching off into the unknown.

Additionally, as the lying robot from Asimov’s story demonstrates, rigid adherence to ethical rules tends to lead to moral dilemmas. What’s a robot to do if every available course of action leads to human harm?

It’s great storytelling fodder, but a real-life headache for roboticists.

Stuart Russell sees another weakness. “I think trying to program in values directly is too likely to leave something out that would create a loophole,” he says. “And just like loopholes in tax law, everyone just jumps through and blows up your system.”

Since having our system blown up by robots is best left to Hollywood, an alternative called the bottom-up approach may be preferable. The machine is not spoon-fed a list of rules to follow, but rather learns from experience.

The idea is that the robot responds to a given situation with habituated actions, much like we do. When we meet a new person, we don’t stop and consult an internal rulebook in order to determine whether the appropriate greeting is a handshake or a punch to the face. We just smile and extend our hand, a reactive response based on years of practice and training. “Aristotle said that the way to become a good person is to practice doing good things,” says Allen, and this may be the best way to become a good robot too.

The bottom-up strategy puts far less computational strain on the robot because instead of computing all the possible “butterfly effect” repercussions of each action—whether or not a human might someday, somehow be harmed—the machine simply acts on its habituated responses. And this could lead to organic development of moral behavior.

But the bottom-up approach requires robots that can learn, and unlike humans they don’t start out that way. Thankfully the field of machine learning has taken great leaps forward of late, due in no small part to work being done at Berkeley. Roboticists have had success using reinforcement learning (think “good robot”/”bad robot”), but Russell invented another technique called inverse reinforcement learning, which takes things a step further. Using Russell’s method, a robot observes the behavior of some other entity (such as a human or even another robot), and rather than simply emulating the actions, it tries to figure out what the underlying objective is.

In this way the machine learns like a child. Imagine a child watching a baseball player swinging a bat, for example. Quickly she will decipher the intent behind the motions: the player is trying to hit the ball. Without intent, the motions are meaningless—just a guy waving a piece of wood.
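
The toy sketch below captures the flavor of that inference: watch which option an “expert” picks in each situation, then keep the candidate objective under which those picks look most rational. The features, menus, and candidate objectives are all invented for illustration, and real inverse reinforcement learning is considerably more sophisticated.

```python
# Toy flavor of inverse reinforcement learning: infer the objective that best
# explains an expert's observed choices. All data here is an illustrative assumption.

# Each option is a feature vector: (helps_a_person, saves_time, breaks_a_rule)
menus = [
    {"help grandpa up":   (1, 0, 0), "keep vacuuming":   (0, 1, 0)},
    {"speed to hospital": (1, 0, 1), "obey speed limit": (0, 0, 0)},
]
expert_picks = ["help grandpa up", "speed to hospital"]  # what we observed the expert do

# Candidate objectives: weights over the three features.
candidates = {
    "values people": (10, 1, -1),
    "values rules":  (0, 1, -10),
}

def reward(weights, features):
    return sum(w * f for w, f in zip(weights, features))

def explains(weights):
    """Count how many observed picks maximize reward under these weights."""
    return sum(
        pick == max(menu, key=lambda name: reward(weights, menu[name]))
        for menu, pick in zip(menus, expert_picks)
    )

best = max(candidates, key=lambda name: explains(candidates[name]))
print(best)  # 'values people': the objective that best explains the observed behavior
```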

In a lab down the hall from Russell’s office, Berkeley professor Pieter Abbeel has used “apprenticeship learning” (a form of inverse reinforcement learning) to give BRETT, the resident robot, the ability to learn how to tie knots, connect LEGO blocks, and twist the tops off water bottles. They are humble skills, to be sure, but the potential for more complex tasks is what excites Abbeel. He believes that one day robots may use apprenticeship learning to do most anything humans can.

Crucially, Russell thinks that this approach could allow robots to learn human morality. How? By gorging on human media. Movies, novels, news programs, TV shows—our entire collective creative output constitutes a massive treasure trove of information about what humans value. If robots were given the capability to access that information, Russell says, “they could learn what makes people happy, what makes them sad, what they do to get put in jail, what they do to win medals.”    

Get ready to install the robot filter on your TV.

He’s now trying to develop a way to allow machines to understand natural human language. With such a capability robots could read text and, more importantly, understand it.

The top-down and bottom-up techniques each have their advantages, and Allen believes that the best approach to creating a virtuous robot may turn out to be a combination of both.

Even though our best hope for friendly robots may be to instill in them our values, some worry about the ethical and legal implications of sharing our world with such machines.  

“We are entering a whole new kind of ecosystem,” says John Sullins, a Sonoma State philosophy professor specializing in computer ethics, AI, and robotics. “We will have to reboot our thinking on ethics, take the stuff from the past that still works, and remix it for this new world that’s going to include new kinds of agents.”

What about our human tendency to regard machines as if they possess human personalities?

“We already do it with our cars. We do it with our computers,” Sullins says. “We give agency inappropriately and promiscuously to lots of things in our environment, and robots will fool us even more.”

His concern is bolstered by a 2013 study out of the University of Washington showing that some soldiers working alongside bomb-disposal robots became emotionally attached to them, and even despaired when their robots were destroyed. The danger, Sullins says, is that our tendency to anthropomorphize could leave us vulnerable. Humans are likely to place too much trust in human-like machines, assuming higher moral capability than the machines actually have. This could provide for-profit robotics companies an “opportunity to manipulate users in new and nefarious ways.”

He offers the example of a charming companion robot that asks its owner to purchase things as a condition of its friendship. “So the person is now one-click buying a bunch of crap from Amazon just to maintain this sham friendship with a machine,” he says. “You get a lonely enough person and a clever enough robot and you’re off to the bank.”

Then there’s the question of how we would define such robots. Would they be things? Beings? “Every roboticist has a different answer,” Sullins says. “What we’re talking about is a new category that’s going to include a wide range of technologies from your intelligent thermostat to R2D2—and everything in between.”

Sullins believes the arrival of these new robotic beings is going to throw ethics for a loop. “For thousands of years we’ve been kind of sleep walking through what morality and ethics means because we just assumed that the world was all about us, all about human relationships,” he says. “The modern world is calling that into deep question.”

And the ramifications won’t just be ethical, but also legal.

Ryan Calo, a law professor at the University of Washington specializing in cyber law and robotics, believes that moral machines will have a deeply unsettling effect on our legal system.

With the Internet pervading every aspect of our lives, “we’ve grown accustomed to this promiscuous, loosey-goosey information ecosystem in which it can be difficult to establish liability,” says Calo. “That’s going to change when it’s bones and not bits on the line. We’re going to have to strike a different balance when software can touch you.”

Calo serves on the advisory committee of the new People and Robots Initiative of CITRIS, the University of California-wide technology research center. He believes that the ability of robots to physically impact the world is just one of several issues legal experts will have to grapple with.

For instance, the law will have to confront what he calls “emergent behavior,” meaning complex actions that are not easily predicted—even by a robot’s own developers. He gives the example of two Swiss artists who, last year, created an algorithm that purchased items at random from the Internet. The algorithm eventually bought a few tablets of the illegal drug Ecstasy, and Swiss police, uncertain how to react, “arrested” the algorithm.

Even if robots can one day make decisions based on ethical criteria, that does not guarantee their behavior will be predictable.

Another issue is what he calls “social valence”: the fact that robots feel like people to us. This raises numerous questions, for instance: “How should privacy law react when everything around us, in our homes and hospital rooms and offices and cars, has things that feel like they’re people? Will we ever really be alone? Will we ever experience solitude?”

This might also lead to the extension of certain rights to robots, Calo argues, and even the prosecution of those who abuse them. “Should we bring scrutiny to bear on people who do things like ‘Torture-Me Elmo’?” he asks, referring to a spate of YouTube videos depicting Tickle-Me Elmo dolls that are doused with gasoline and burned as they disturbingly writhe and giggle. “Nobody cares when you dump your old TV on the street. How will they feel when you dump your old robot?”

The effect on the law will be exponentially more dramatic, Calo says, if we ever do develop super-intelligent artificial moral agents.

“If we do that, it’s going to break everything,” he says. “It’s going to be a fundamental sea change in the way we think about human rights.”

Calo illustrates the sort of dilemma that could arise using a theoretical situation he calls the “copy-or-vote paradox.” Imagine that one day an artificially intelligent machine claims that it is a person, that it is sentient, has dignity, experiences joy and pain—and we can’t disprove it. It may be difficult to justify denying it all the human rights enjoyed by everyone else. What happens if that machine then claims entitlement to suffrage and procreation, both of which are considered fundamental human rights in our society? And what if the machine procreates by copying itself indefinitely? Our democracy would come undone if there were an entity that could both vote and make limitless copies of itself.

“Once you challenge the assumption that human beings are biological, that they live and they die, you get into this place where all kinds of assumptions that are deeply held by the law become unraveled,” Calo says. 

Despite their warnings, both Calo and Sullins believe there is reason to hope that, if enough thought is put into these problems, they can be solved.

“The best potential future is one in which we utilize the strengths of both humans and machines and integrate them in an ethical way,” Sullins says.

There is another potential future, imagined by some enthusiastic futurists, in which robots do not destroy us but rather surpass our wildest expectations. Not only are they more intelligent than us, but more ethical. They are like us—only much, much better. Humans perfected. Imagine a robot police officer that never racially profiles and a robot judge that takes fairness to its zenith. Imagine an elder-care robot that never allows grandpa to feel neglected (and somehow always convinces him to take his pills), or a robot friend who never tires of listening to your complaints. With their big brains and even bigger hearts, such robots could solve all the world’s problems while we stare at our belly buttons.

But where does that future leave us? Who are we if robots surpass us in every respect? What, then, are humans even for?

As roboticist Hans Moravec once wrote, “life may seem pointless if we are fated to spend it staring stupidly at our ultra-intelligent progeny as they try to describe their ever more spectacular discoveries in baby-talk we can understand.”

Sullins has another vision, one in which humans at least have an active role:

“These machines are going to need us as much as we need them because we have a kind of natural nuance that technology seems to lack. A friend of mine used to liken it to a bird and a 747. The plane can get you across the planet in hours, but it certainly can’t land on a treetop. These machines will be good at taking us places quickly, but once we get there, the nuanced interactions, the art of life, that’s going to take our kind of brains.”
