A runaway train is hurtling towards five people tied to a railway track ahead. You are in a signal box and can pull a lever to switch the train to a siding upon which one man is standing unawares. Do you pull the lever and save the five but choose to kill the one? Or do you do nothing and allow the five to be killed? This famous question was posed in 1967 by the philosopher Philippa Foot.
Who knew car manufacturers were in the vanguard of ethical theory? Driverless cars, projected to be the great new advance in motor manufacturing, and known as autonomous vehicles (AVs), face a similar choice. Imagine the following scenario: a pedestrian steps into the path of a moving AV, which (via its sensors) has to ‘decide’ whether to take immediate action that avoids the pedestrian but could kill its occupants. Should the AV be pre-programmed in this no-win situation to:
- Kill the pedestrian and save the passengers? Or
- Kill the passengers and save the pedestrian?
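To make the notion of a ‘pre-set’ concrete, here is a deliberately simplified, hypothetical sketch in Python. None of these names come from any real AV codebase, and real systems are vastly more complex; the point is only that a pre-programmed policy reduces, at bottom, to a choice made before the car ever leaves the showroom.

```python
from enum import Enum

class Preset(Enum):
    """A hypothetical factory setting chosen before sale."""
    PROTECT_PEDESTRIAN = 1   # the second option: swerve, risking the occupants
    PROTECT_OCCUPANTS = 2    # the first option: hold course

def choose_manoeuvre(preset: Preset) -> str:
    """Return the action the vehicle commits to in the no-win case."""
    if preset is Preset.PROTECT_PEDESTRIAN:
        return "swerve"
    return "continue"

print(choose_manoeuvre(Preset.PROTECT_PEDESTRIAN))  # swerve
```

The unsettling part is not the code, which is trivial, but the fact that someone must pick the default.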
A group of people was canvassed and the results have been published this week in the journal Science. 76% agreed that AVs should take the second option and risk the passengers whilst saving the pedestrian. They assert that it is for the greater good for AVs to be pre-programmed, prior to sale, in this way so as to minimise deaths. Pedestrians would undoubtedly prefer this second option, but what about potential AV purchasers? Which owner wants a car with an inbuilt capability to manoeuvre in such a way that it may harm the occupants? Manufacturing firms have a financial interest in preferring the pre-set to be the first option. In fact, the same people canvassed who wanted AVs to adopt the second option stated they would be less likely to buy AVs so programmed. Self-interest now weighed more heavily than the greater good. Perhaps in future I will be able to purchase the self-preservation pre-set as an optional extra, along with my tinted windows.
So what is the AV software engineering department to do? A website has been created where people can discuss robotic ethical dilemmas, and in particular what is (absurdly) being called algorithmic morality, a term dangerously close to being an oxymoron. What moral objectives will be written into these algorithms? Are the AVs to be programmed to run over as many people as possible and so control overpopulation and save the planet? I don’t remember Herbie and KITT facing these challenging dilemmas.
The AV engineers should instead read the science fiction stories of the Russian-born American writer Isaac Asimov. In his 1950 collection of short stories I, Robot he did all the necessary thinking on this topic. These are stories of robots gone mad, robots who secretly run the world, robots with a sense of humour, a robot that becomes a religious maniac logically deducing its superiority to non-rational humanity. In the story Liar! Asimov invokes his First Law of Robotics, a law that I propose should now be widely adopted: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The robot in Liar! develops telepathic abilities and, being subject to this First Law, deliberately lies to avoid hurting humans’ feelings. It then reaches a situation where it must speak but can neither lie nor tell the truth without breaking the First Law: faced with this irresolvable logical conflict, it becomes catatonic. Asimov was a writer and so understood the power of the stories we tell. Everything is a story we tell ourselves about ourselves. It is only humans, and certainly not robots, who are able to do our necessary thinking.
Asimov was famously maladroit. Owing to his poor physical dexterity he could neither swim nor ride a bicycle, and he described his own driving around Boston as ‘anarchy on wheels’.
Here is an extract from another story, Kenneth Grahame’s The Wind in the Willows. “He increased his pace, and as the car devoured the street and leapt forth on the high road through the open country, he was only conscious that he was Toad once more, Toad at his best and highest, Toad the terror, the traffic-queller, the Lord of the lone trail, before whom all must give way or be smitten into nothingness and everlasting night. He chanted as he flew, and the car responded with sonorous drone; the miles were eaten up under him as he sped he knew not whither, fulfilling his instincts, living his hour, reckless of what might come to him.”
More Asimov and fewer robots, please. More human anarchy and less scientific order, please.