Designing an AI that Cares

01.12.2017
HKDI
Feature Story

The breakthrough that could change AI from being a plaything into a playmate with which humans can have meaningful interactions may be about to come from a seemingly unlikely source.

The trolley problem is an age-old ethical thought experiment that will be familiar to any undergraduate philosophy student. It goes like this: you are standing beside a railway line and notice a runaway train heading down the tracks. Ahead of the train you can see five people tied to the tracks; they are unable to escape, and there is no time to untie them. Luckily, you are standing next to a junction on the railway line where a lever controls a set of points. You can pull the lever, redirect the train down the other line and save the lives of the five people. But wait: tied to the other line is a sixth person. In saving the five, you will condemn that one person to be run over by the train.

So what does this have to do with AI?

As more car manufacturers join the race to produce self-driving cars, an ever-growing cohort of designers is engaged with a real-world variant of the trolley problem. Should the self-driving AI be taught to swerve to avoid a dog if doing so increases the chance of hitting a pedestrian? Should the AI swerve off the road, destroying itself and injuring the car's occupants, in order to avoid a collision with a school bus? And in that case, who bears moral and legal responsibility: the car's owner or the person who programmed the AI? Can the AI itself bear responsibility, or is it ridiculous to hold a set of algorithms morally culpable for its decisions? Before any AI can take to the road, all of these questions will need workable answers.

The implications of programming an AI to deal with ethical dilemmas are broader than they may at first appear. The question of giving an AI a set of morals reaches well beyond autonomous vehicles, as AI begins to take control of more and more areas of human life.

In the case of the trolley problem, what is the most ethical choice? If you do nothing, five people will die, but you will not be directly responsible, even though you could have prevented the deaths. If you pull the lever, you will save five lives, but you will be directly responsible for killing one person who would otherwise have lived. Is your decision altered if saving five lives will cost four? How can your decision be rationalised? There are countless variations on the thought experiment, all of varying degrees of absurdity. In reality, the whole scenario is absurd; it is extremely unlikely to occur in real life. But that does not matter, because the problem is intended not as a guide to moral conduct in itself but as a tool for thinking about the highly abstract field of ethics. The thought experiment is used to test the implications of moral theories, comparing theoretical models against gut feeling and human instinct about what is right and what is wrong.

Designers of AIs must provide solutions in ways that philosophers cannot. A philosopher looking at this problem will apply Kant's Categorical Imperative and Mill's Rule Utilitarianism. In desperation they may even court disaster by trying out Rand's Objectivism (which is, after all, a popular ethos among Silicon Valley tech entrepreneurs). But eventually the philosopher will leave the library knowing that the question is intractable. Designers, on the other hand, must find practical solutions, and they are using the aforementioned gut feeling and human instinct to find them. For example, a team at the MIT Media Lab has developed a website called Moral Machine that aims to crowdsource an ethical code for self-driving cars. Visitors to the website are presented with a series of variations on the trolley problem and click on their preferred solution. This turns the problem into something more like a game, but over time thousands of opinions have been canvassed. The work is ongoing, but the hope is that by crowdsourcing a solution, a version of AI ethics can be arrived at that, while not resting on solidly rational philosophical foundations, will at least tally with most people's intuitive notions of right and wrong.
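To make the crowdsourcing idea concrete, here is a minimal sketch, in Python, of how answers to such dilemmas might be aggregated into a simple decision rule. It is purely illustrative: the scenario names, the DilemmaResponse structure and the majority-vote rule are assumptions made for this example, not a description of how Moral Machine actually processes its data.

from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class DilemmaResponse:
    # One visitor's answer to a single dilemma (hypothetical structure).
    scenario: str  # e.g. "swerve_to_avoid_dog"
    choice: str    # e.g. "swerve" or "stay_on_course"

def majority_preferences(responses):
    # Naive majority vote per scenario: turn many individual intuitions
    # into a single crowdsourced policy.
    votes = {}
    for response in responses:
        votes.setdefault(response.scenario, Counter())[response.choice] += 1
    return {scenario: counts.most_common(1)[0][0]
            for scenario, counts in votes.items()}

# Illustrative usage with made-up data.
responses = [
    DilemmaResponse("swerve_to_avoid_dog", "stay_on_course"),
    DilemmaResponse("swerve_to_avoid_dog", "stay_on_course"),
    DilemmaResponse("swerve_to_avoid_dog", "swerve"),
    DilemmaResponse("swerve_off_road_for_school_bus", "swerve"),
]
print(majority_preferences(responses))
# {'swerve_to_avoid_dog': 'stay_on_course', 'swerve_off_road_for_school_bus': 'swerve'}

A simple majority vote like this glosses over the hard part, of course: any real aggregation would have to weigh how consistent, representative and context-dependent those crowd intuitions are.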

So, let us speculate for a moment. Imagine that in solving these design problems, the developers of AI programs produce an AI that has a genuine sense of right and wrong. What could be said of this AI? Even hyperintelligent sentient AIs of science fiction are rarely credited with such abilities. Such an AI would be able to transcend the role of robot helper and could act as a real companion, an equal peer to its human creator. Imagine playing games against such an AI. Allow the AI the ability to cheat but let it decide whether or not to do so. Now, that AI would cease to be a plaything, and become a genuine playmate.

