Moral machines and applied philosophy

Artificial intelligence has been an active field of research and development for many years. While we have a long way to go before the capabilities of the human brain can be replicated in silicon (if ever), machines with varying degrees of independent decision-making ability are a reality today.

Machine morality, or machine ethics, is a branch of applied philosophy that is now crossing over into everyday computer systems engineering and software development.

“Morality no longer belongs only to the realm of philosophers. Recently, there has been a growing interest in understanding morality from the scientific point of view.”

So say Luís Moniz Pereira and Gonçalo Lopes of the Universidade Nova de Lisboa, and Ari Saptawijaya of the Universitas Indonesia in Depok, who are looking into artificial intelligence and the application of computational logic.

In their research, Pereira, Lopes and Saptawijaya have turned to a system known as prospective logic in order to model morality in a way that could ultimately be translated into functional computer code. Prospective logic can be used to simulate moral dilemmas, and determine the logical outcomes of possible decisions within a defined framework.

To test their approach, the researchers turned to a classic thought experiment introduced by the philosopher Philippa Foot in the 1960s: the trolley problem. In Foot’s scenario, a trolley is running out of control down a track to which five people are tied. You, the trolley driver, can flip a switch to send the trolley down a different path, but a single person is tied to that track. What do you do?
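In logic-programming terms, the first step is simply to enumerate the available actions and the consequences each one produces in the modelled scenario. The sketch below is an illustration in Python, not the researchers’ actual encoding; the action names and casualty counts are assumptions drawn from the dilemma as described above.

```python
# Hypothetical encoding of Foot's trolley dilemma: each available
# action is mapped to the consequence it produces in this scenario.
ACTIONS = {
    "do_nothing": {"deaths": 5, "description": "trolley continues, killing the five"},
    "flip_switch": {"deaths": 1, "description": "trolley is diverted, killing the one"},
}

def consequences(action):
    """Return the modelled outcome of taking the given action."""
    return ACTIONS[action]

# List every action together with its modelled consequence.
for action, outcome in ACTIONS.items():
    print(f"{action}: {outcome['deaths']} death(s) - {outcome['description']}")
```

Enumerating outcomes like this says nothing yet about which choice is right; it only makes the consequences of each decision explicit, which is the groundwork for the moral weighting described next.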

Prospective logic programming can be used to consider the possible outcomes of such thorny scenarios, and demonstrate in logical terms what the consequences of the decisions might be. The next step in the process is to assign each outcome a moral weight, so that the prototype moral machine may be further developed to make the best possible judgement in the circumstances.
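One way to picture that weighting step is as a cost function over outcomes, with the machine choosing the action of least moral cost. The following Python sketch is purely illustrative: the numeric weights, the intentional-harm penalty, and the outcome fields are all assumptions, not values taken from the researchers’ work.

```python
# Hypothetical moral weighting over the trolley dilemma's outcomes.
OUTCOMES = {
    "do_nothing": {"deaths": 5, "intentional_harm": False},
    "flip_switch": {"deaths": 1, "intentional_harm": True},
}

DEATH_WEIGHT = 10    # assumed cost per life lost
INTENT_PENALTY = 3   # assumed extra cost for actively causing harm

def moral_cost(outcome):
    """Score an outcome: higher cost means morally worse."""
    cost = outcome["deaths"] * DEATH_WEIGHT
    if outcome["intentional_harm"]:
        cost += INTENT_PENALTY
    return cost

def best_judgement(outcomes):
    """Choose the action whose modelled consequences carry the least moral cost."""
    return min(outcomes, key=lambda action: moral_cost(outcomes[action]))

print(best_judgement(OUTCOMES))  # under these assumed weights: flip_switch
```

Changing the weights changes the verdict, which is precisely the point: the logical machinery determines the consequences, while the moral weights encode the ethical framework being applied.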

To some this may seem a rather cold, utilitarian way of rationalising decisions that, for us, are emotionally charged. But one could argue that human beings act in such ways without realising it, and that our emotional reactions are a consequence of internal rationalisation.

Aside from its value in robotics, machine ethics is of interest to cognitive scientists looking for new ways to understand moral reasoning in humans and other animals. Pereira and Saptawijaya add that such an understanding might help in developing intelligent tutoring systems for teaching morality to children.

Further reading

Luís Moniz Pereira and Gonçalo Lopes, “Prospective logic agents”, International Journal of Reasoning-based Intelligent Systems 1, 200 (2009)

Luís Moniz Pereira and Ari Saptawijaya, “Modelling morality with prospective logic”, International Journal of Reasoning-based Intelligent Systems 1, 209 (2009)

Moral Machines blog