Last year Professor Stephen Hawking said that “the development of full artificial intelligence could spell the end of the human race.” The A.I. community has responded to this impending apocalypse by proposing this week that robots be given stories to read. This will, according to associate professor Mark Riedl of the Georgia Institute of Technology, allow robots to take a first step towards moral reasoning, or “value alignment” as his scientific paper bafflingly calls it. “We believe story comprehension in robots can eliminate psychotic-appearing behaviour and reinforce choices that won’t harm humans,” he said. The project is called Quixote and, apparently, it works in this way:
- The robot gathers a correct sequence of actions by learning what is a normal or “correct” plot graph in the story.
- That data structure is converted into a “reward signal” that reinforces good behaviour and punishes bad behaviour.
- Quixote learns that it will be rewarded whenever it acts like the protagonist in a story, and punished if it acts randomly or like the antagonist.
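The three steps above amount to a simple reward-shaping scheme. As a minimal sketch (the plot graph here is reduced to a linear sequence, and all action names and values are illustrative assumptions, not taken from the paper):

```python
# Illustrative sketch of the idea described above: a "plot graph" of
# correct story actions is turned into a reward signal. Acting like the
# protagonist (following the plot) earns a reward; acting randomly or
# like the antagonist (deviating from the plot) earns a punishment.
# The action names below are invented for illustration.

PLOT_GRAPH = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]

def reward(action_history, next_action):
    """Return +1 if next_action is the next correct step in the plot,
    -1 for any deviation from the protagonist's behaviour."""
    expected_index = len(action_history)
    if expected_index < len(PLOT_GRAPH) and next_action == PLOT_GRAPH[expected_index]:
        return 1   # protagonist-like: the plot graph's next step
    return -1      # antagonist-like or random: off the plot graph

# A well-behaved first step is rewarded; grabbing the goods is not.
print(reward([], "enter_pharmacy"))                 # protagonist-like
print(reward(["enter_pharmacy"], "steal_medicine"))  # antagonist-like
```

In a full reinforcement-learning setup this signal would be fed back into the robot's policy over many episodes; the sketch only shows how a plot graph can be read as a reward function at all.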
An example of a story to be given to the robot Quixote is included in the paper (which can be read here: Quixote). It is the story of George Washington and the cherry tree. Aged six, little George fells a cherry tree in his garden with a hatchet. His father asks who has done it, and George, who cannot tell a lie, confesses all, and his father hugs him because his son’s love of truth is more important to him than any tree. I think we can all agree that no robot could read it without weeping. But what if our robot happened upon a copy of Macbeth? Or King Lear? Or Rumpelstiltskin? Or Hansel and Gretel? In Paradise Lost, who is the protagonist and who the antagonist: God or the Devil? And Waiting for Godot should be kept well away from the robot library, as randomness is everywhere (and nowhere). Come to think of it, given that young George is rewarded for his misbehaviour, will any cherry tree in the land be safe from robotic hatchets? I can’t help thinking that Professor Hawking won’t be easily convinced by this exciting ethical development.
Regular readers of this blog will know that I regard narrative as a crucial part of what it means to be human. (See, for example, my post: Where do stories come from?) Stories are an indispensable ingredient of any society. When haphazard things are transformed into a story, and thus made memorable over time, a social community can form. Narrative is a quintessentially communicative act. The art of storytelling is what gives humans a shareable world, and stories are how people explain themselves to themselves and to others. It is debatable whether a merely biological life could even be considered a human life without narrative. So to this extent I can see why Professor Riedl has embarked down this road. I’m just not sure he’s fully thought it through.
He has called the project Quixote, no doubt because in Cervantes’ novel Don Quixote’s world view is constructed entirely through the books he has read. But the Don goes off his head in paragraph two of the novel. The motor of his madness is his belief, engendered from excessive reading, that everything he has read in fiction is literally true. One suspects that robots may have equal difficulty distinguishing the literal from the figurative. In fact, the scientific paper states that the stories to be given to the robot are those that “simplify their language to make learning easier, avoiding similes, metaphorical language, complex grammar, and negations.” That rules out, I would suggest, all literary texts ever written. Don Quixote himself thinks that everything he encounters is from a chivalric romance: his world view is a dangerous delusion. Thus the things and people he meets take on forms in line with his obsession; windmills become giants, a barber’s basin a golden helmet. This is a realistic novel about a Spanish man, ill-suited by age, physique and economic circumstance, who ridiculously tries to turn his life into a romance of chivalry. He manages somehow to transcend the absurdity of his circumstances by preserving an essential dignity.
There is something inherently unstable about language that makes any search for moral lessons within stories problematic. On the one hand, stories can only be communicated if there is a shared language. A well-ordered community seeks solidity and continuity, and one way of achieving this is through sharing stories. This requires agreed definitions, settled meaning and linguistic constancy. But narratives are also congenial to ambiguity and plurality of meaning. Words are slippery customers: metaphor, ambiguity, simile, flamboyant punning, riddling, irony, poetic images, teasing word-play, troping and assonance betray the fact that narrative is woven with inherent linguistic instability. A story is undermined by the very language in which it is articulated, putting into jeopardy the stability and certainty that is sought.
Every human being is trapped within the prison-house of language. It is both a joy and a curse. Language may be a way out of our confusion, but it is also the cause of it. It may be how we understand the world, but also how we misunderstand it. A.I. robots may be welcome to enter this linguistic world if they are able, but let us not delude ourselves that this will “eliminate psychotic-appearing behaviour”.