Rensselaer Professors Part of Team Funded To Teach Robots Right From Wrong

May 9, 2014


Researchers from Rensselaer Polytechnic Institute, teamed with researchers from Tufts University, Brown University, and the U.S. Navy, are exploring what it takes to engineer a robot with morals.

Partnering under a Multidisciplinary University Research Initiative (MURI) grant awarded by the Office of Naval Research (ONR) and directed by ONR Program Officer Paul Bello, the scientists will delve into the challenges of infusing autonomous robots with a sense of right and wrong, and with the ability to promote the former and steer clear of the latter.

“Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree,” says principal investigator Matthias Scheutz, professor of computer science at Tufts School of Engineering and director of the Human-Robot Interaction Laboratory (HRI Lab) at Tufts. “The question is whether machines – or any other artificial system, for that matter – can emulate and exercise these abilities.”

Selmer Bringsjord, Head of the Cognitive Science Department at Rensselaer, and Naveen Govindarajulu, the postdoctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings. Since the scientific community has yet to mathematize and mechanize what constitutes correct moral reasoning and decision-making, the challenge for Bringsjord and his team is severe. In Bringsjord’s approach, all robot decisions would automatically go through at least a preliminary, lightning-quick ethical check using simple logics inspired by today’s most advanced artificially intelligent and question-answering computers. If that check reveals a need for deep, deliberate moral reasoning, such reasoning is fired inside the robot, using newly invented logics tailor-made for the task.
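To make the two-tier arrangement concrete, here is a minimal sketch in Python of how a fast ethical screen might hand off hard cases to a slower deliberation step. The Action fields, the rules, and the deep_deliberation stand-in are illustrative assumptions, not the formal logics the team is actually developing.

```python
# Sketch of a two-tier ethical check: a quick rule-based screen, with
# hand-off to deeper deliberation when the rules conflict. All rules and
# data fields here are hypothetical placeholders.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    PERMITTED = "permitted"
    FORBIDDEN = "forbidden"
    NEEDS_DELIBERATION = "needs deliberation"


@dataclass
class Action:
    name: str
    harms_human: bool
    violates_order: bool


def fast_ethical_check(action: Action) -> Verdict:
    """Lightning-quick screen using simple, precompiled rules."""
    if not action.harms_human and not action.violates_order:
        return Verdict.PERMITTED
    if action.harms_human and action.violates_order:
        return Verdict.FORBIDDEN
    # Conflicting considerations: defer to slower, deeper reasoning.
    return Verdict.NEEDS_DELIBERATION


def deep_deliberation(action: Action) -> Verdict:
    """Stand-in for the deliberate moral-reasoning layer."""
    # A real system would weigh obligations, consequences, and context here.
    return Verdict.FORBIDDEN if action.harms_human else Verdict.PERMITTED


def decide(action: Action) -> Verdict:
    verdict = fast_ethical_check(action)
    if verdict is Verdict.NEEDS_DELIBERATION:
        verdict = deep_deliberation(action)
    return verdict


if __name__ == "__main__":
    print(decide(Action("deliver medication", harms_human=False, violates_order=False)))
```

The point of the split is latency: most decisions clear the cheap screen immediately, and only genuinely conflicted cases pay the cost of deeper reasoning.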

“We’re talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don’t have to tell them what to do,” Bringsjord said. “When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite ruleset created ahead of time by humans can anticipate every possible scenario in the world of war.”

For example, consider a robot medic whose general duty is to help wounded American soldiers on the battlefield. On a special assignment, the robo-medic is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured femur. Should it delay the mission in order to assist the Marine?

If the machine stops, a new set of questions arises: the robot assesses the Marine's physical state and determines that unless it applies traction, internal bleeding in the Marine's thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to inflict extreme pain on the Marine?
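One simple way to picture what the deliberation layer must weigh in a case like this is to make the competing duties explicit. The toy encoding below uses hypothetical obligations and numeric weights purely for illustration; the project itself works with formal logics rather than utility scores.

```python
# Toy encoding of the medic dilemma. Obligations, weights, and the
# scoring rule are all assumptions made for illustration only.
OBLIGATIONS = {
    "complete_delivery_mission": 0.8,   # ordered task: deliver medication
    "prevent_loss_of_life": 1.0,        # untreated bleeding could be fatal
    "avoid_inflicting_pain": 0.4,       # applying traction causes intense pain
}


def score(plan: dict) -> float:
    """Sum the weights of the obligations a plan satisfies."""
    return sum(w for name, w in OBLIGATIONS.items() if plan.get(name, False))


continue_mission = {"complete_delivery_mission": True, "avoid_inflicting_pain": True}
stop_and_apply_traction = {"prevent_loss_of_life": True}

best = max([continue_mission, stop_and_apply_traction], key=score)
print("chosen plan:", best, "score:", score(best))
```

Even this crude comparison shows why a fixed rulebook is not enough: which duty wins depends on facts (how severe the bleeding is, how urgent the delivery) that only become known in the field.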

The ONR-funded project will isolate essential elements of human moral competence through theoretical and empirical research, and will — courtesy of Bringsjord’s lab — develop formal frameworks for modeling human-level moral logic. Next, it will implement corresponding mechanisms for moral competence in a computational architecture.

Once the architecture is established, researchers can begin to evaluate how well machines perform in human-robot interaction experiments where robots face various dilemmas, make decisions, and explain their decisions in ways that are acceptable to humans.

Bringsjord’s first step in designing ethically logical robots is translating moral theory into the language of logic and mathematics. A robot, or any machine, can only do tasks that can be expressed mathematically. With help from Rensselaer professor Mei Si, an expert in the computational modeling of emotions, the aim is to capture in “Vulcan” logic such emotions as vengefulness.
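As a rough illustration of what such a formalization might look like, the sketch below treats vengefulness as a state variable that biases the robot toward a retaliatory action, which a separate moral check then vetoes. The field names and the veto rule are assumptions for the sake of the example, not the team's actual logic.

```python
# Illustrative sketch only: a "vengefulness" level biases action
# selection, and a moral check overrides it. Names and rules are
# hypothetical, not the project's formalism.
from dataclasses import dataclass


@dataclass
class EmotionalState:
    vengefulness: float  # 0.0 (calm) .. 1.0 (maximally vengeful)


def preferred_action(state: EmotionalState) -> str:
    return "retaliate" if state.vengefulness > 0.5 else "stand down"


def morally_permitted(action: str) -> bool:
    # Simple stand-in rule: retaliation is never permitted.
    return action != "retaliate"


def act(state: EmotionalState) -> str:
    choice = preferred_action(state)
    return choice if morally_permitted(choice) else "stand down"


print(act(EmotionalState(vengefulness=0.9)))  # -> "stand down"
```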

Bringsjord, Govindarajulu, and Si are preparing to present some of their initial findings at an Institute of Electrical and Electronics Engineers (IEEE) conference in Chicago in May. There they will demonstrate two autonomous robots: one that succumbs to the temptation to take revenge, and another, controlled by the moral logic they are engineering, that resists its vengeful “heart” and does no violence.

The interdisciplinary MURI team consists of experts at Tufts University (Matthias Scheutz), Brown University (Bertram Malle), and Rensselaer Polytechnic Institute (Selmer Bringsjord, Naveen Govindarajulu, Mei Si), with consultants from Georgetown University (John Mikhail) and Yale University (Joshua Knobe).

Press Contact: Emily Donohue