
Robots Programmed to Behave 'Morally'?

Programming robots to think and behave ethically has shifted from an Isaac Asimov fantasy to a thriving area of study.

Photo Credit: Ociacia
Can a robot love? Can it think? How about kill?

These questions have been endlessly explored in sci-fi novels, but lately they have become matters of international diplomacy. The United Nations probed present-day robot ethics last month at the four-day Convention on Certain Conventional Weapons meeting in Geneva.

The meeting brought together experts and government officials to talk about the opportunities and dangers of killer robots. No international agreement was reached, but the discussion made clear that autonomous robot technology moves much faster than the policies governing it.

Meanwhile, here in B.C., robotics experts are investigating the ethical implications inherent in firsthand interactions between humans and robots.

At the heart of the debate is the question of where to draw the line. Whether we're talking about killer, caregiving ("assistive") or industrial robots, the key issues are the same: How far are we willing to delegate human tasks to a machine? And is it possible to create machines that think and behave in ways that minimize harm to humans?

Within the last decade, programming robots to think and behave ethically has shifted from an Isaac Asimov fantasy to a thriving area of study. In 2010, Michael Anderson, a computer science professor at the University of Hartford, and Susan Anderson, a philosophy professor at the University of Connecticut, programmed what they called the first ethical robot.

Four years ago, the married couple fused Michael's expertise in robotics and programming with Susan's work in ethics to test the limits of what a machine could do. They used NAO, a programmable human-like robot launched by Aldebaran Robotics in 2008, to conduct their experiments. As a proof of concept, they wanted to show that it was possible to control a robot's behaviour using an ethical principle.

To do this, they examined a seemingly innocuous task: reminding a patient to take their medication. But that simple act has ethical implications. Is it ethical to have a robot remind a patient to take their medication? If so, how often should it occur? More importantly, how forceful should the robot be?

They then looked at different scenarios for the task. Say it was time to take the medication. NAO approaches its owner to remind them. The patient refuses. Should the robot insist? Should it come back later?

For each scenario, the Andersons determined an acceptable behaviour, or "right decision." Michael then created an algorithm that drew on all of these decisions to generate a general ethical principle, which was later encoded into NAO.

The researchers concluded that the ethical principle in this experiment was that "a health care robot should challenge a patient's decision -- violating the patient's autonomy -- whenever doing otherwise would fail to prevent harm or severely violate the duty of promoting patient welfare."
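To make the idea concrete, here is a minimal sketch of how a principle like that one might be expressed as a decision rule. It is not the Andersons' actual code or data; the duty names and the numeric scores are illustrative assumptions only.

```python
# Hypothetical sketch: encoding a medication-reminder principle as a decision rule.
# Duty names and scores are illustrative assumptions, not the Andersons' algorithm.

from dataclasses import dataclass

@dataclass
class Scenario:
    """Estimated effect of insisting on the reminder, on a -2..+2 scale."""
    prevent_harm: int      # how much insisting prevents harm to the patient
    promote_welfare: int   # how much insisting promotes the patient's welfare
    respect_autonomy: int  # negative values mean the patient's refusal is overridden

def should_challenge(s: Scenario) -> bool:
    """Challenge the patient's refusal only when backing off would fail to
    prevent harm or would severely violate the duty to promote welfare."""
    harm_if_we_back_off = s.prevent_harm >= 2      # e.g. a missed dose is dangerous
    severe_welfare_loss = s.promote_welfare >= 2   # e.g. treatment would be undermined
    return harm_if_we_back_off or severe_welfare_loss

# A low-stakes refusal: accept it and try again later.
print(should_challenge(Scenario(prevent_harm=0, promote_welfare=1, respect_autonomy=-1)))  # False

# A high-stakes refusal: the duty to prevent harm outweighs autonomy, so insist.
print(should_challenge(Scenario(prevent_harm=2, promote_welfare=2, respect_autonomy=-2)))  # True
```

In this toy version, respect for autonomy gives way only when the harm or welfare scores cross a threshold, which is one simple way to read the principle quoted above.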

It meant that NAO had the agency to decide whether a patient should be reminded to take their medication.

The Andersons recognize not everyone would support putting medical decisions in the hands of a machine. But they believe there are also ethical implications in not creating ethical robots that provide services society needs.

"Ethics is not only about what we, and robots, shouldn't do, but also what we, and robots, could and should do," said Michael. "If there is a need for certain machines and we can ensure that they will behave ethically, don't we have an obligation to create them?"

New research says no 'right' answer

Skepticism is not the only obstacle researchers like the Andersons face. Creating robots that know right from wrong implies that we, as humans, agree on what is ethical and what is not.

 