Wednesday, April 4, 2012

Roboethics: Three ways to make sure future robots have morals


As robots become increasingly intelligent and lifelike, it's not hard to imagine a future where they're completely autonomous. Once robots can do what they please, humans will have to figure out how to keep them from lying, cheating, stealing, and doing all the other nasty things that we carbon-based creatures do on a daily basis. Enter roboethics, a field of robotics research that aims to ensure robots adhere to certain moral standards.

In a recent paper (PDF), researchers at the Georgia Institute of Technology discuss how humans can make sure that robots don't get out of line.

Have ethical governors
The lethal robots used by the military today all have some sort of human component--lethal force won't be applied unless a person makes the final decision. But that could soon change, and when it does, these robots will need to know how to act humanely. What that means in the context of war is debatable, but some sort of ethical boundaries need to be set. Indiscriminate killing robots don't help anyone.

An ethical governor--a piece of the robot's architecture that decides whether a lethal response is warranted based on preset ethical boundaries--may be the answer. A military robot with an ethical governor might attack only if a target is inside a designated kill zone, and hold fire if the target is near a medical facility, for example. It could use a "collateral damage estimator" to make sure it takes out only the intended target and not the other people nearby.
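To make the idea concrete, here is a minimal Python sketch of how such a governor could sit between target selection and the decision to fire. The class names, fields, and the collateral-damage threshold are illustrative assumptions, not details from the Georgia Tech paper:

    from dataclasses import dataclass

    # Illustrative sketch only: the names, fields, and threshold below are
    # assumptions, not the actual Georgia Tech ethical-governor architecture.

    @dataclass
    class Target:
        in_kill_zone: bool          # inside a designated engagement area?
        near_protected_site: bool   # e.g. a hospital or school nearby
        estimated_collateral: int   # output of a collateral damage estimator

    class EthicalGovernor:
        """Vetoes lethal actions that violate preset ethical constraints."""

        MAX_COLLATERAL = 10  # arbitrary example threshold

        def permits_lethal_force(self, target: Target) -> bool:
            if not target.in_kill_zone:
                return False  # outside the designated kill zone
            if target.near_protected_site:
                return False  # protected site nearby, so hold fire
            if target.estimated_collateral > self.MAX_COLLATERAL:
                return False  # estimated collateral damage is too high
            return True  # all constraints satisfied

    # Example: a target inside the kill zone but next to a hospital is vetoed.
    governor = EthicalGovernor()
    print(governor.permits_lethal_force(
        Target(in_kill_zone=True, near_protected_site=True, estimated_collateral=3)))  # False

Note that the governor in this sketch can only veto an action, never initiate one, which reflects the idea of a constraint layer added on top of the rest of the robot's architecture.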

Establish emotions
Emotions can help ensure that robots don't do anything inappropriate--in a military context and elsewhere. A military robot could be made to feel an increasing amount of "guilt" if repeatedly chastised by its superiors. Pile on enough guilt, and the robot might forbid itself from completing any more lethal actions.
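One way to picture that mechanism is a simple accumulator with a cutoff. The increments and threshold in this Python sketch are made-up values for illustration, not parameters from the paper:

    class GuiltModel:
        """Accumulates 'guilt' each time the robot is chastised and blocks
        further lethal actions once a threshold is crossed. The numbers here
        are illustrative assumptions, not published parameters."""

        def __init__(self, threshold: float = 1.0):
            self.guilt = 0.0
            self.threshold = threshold

        def chastise(self, severity: float) -> None:
            # Each reprimand from a superior adds to the accumulated guilt.
            self.guilt += severity

        def lethal_action_allowed(self) -> bool:
            # Past the threshold, the robot forbids itself from any further
            # lethal actions.
            return self.guilt < self.threshold

    model = GuiltModel(threshold=1.0)
    model.chastise(0.4)
    model.chastise(0.7)
    print(model.lethal_action_allowed())  # False: guilt has exceeded the threshold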

Emotions can also be useful in non-military human-robot interactions. Deception could help a robot in a search-and-rescue operation by allowing it to calmly tell panicked victims that they will be fine, while a confused patient with Alzheimer's might need to be deceived by a nursing robot. But future programmers need to remember: It's a slippery slope from benign deception to having autonomous robots that compulsively lie to get what they want.

Respect humans
If robots don't respect humans, we're in trouble. That's why the researchers stress that autonomous robots will need to respect basic human rights, including privacy, identity, and autonomy. If we can't ensure that intelligent robots will do these things, we should refrain from unleashing them en masse.

Of course, humans don't always act ethically. Perhaps we could use an ethical governor as well for those times when our brains lead us astray. The researchers explain: "We anticipate as an outcome of these earlier research thrusts, the ability to generate an ethical advisor suitable for enhancing human performance, where instead of guiding an autonomous robot's ethical behavior, it instead will be able to provide a second opinion for human users operating in ethically challenging areas, such as handling physically and mentally challenged populations."

In the end, robots might make us more human.

Ariel Schwartz is a Senior Editor at Co.Exist. She has contributed to SF Weekly, Popular Science, Inhabitat, Greenbiz, NBC Bay Area, GOOD Magazine and more.

