Who was the most influential figure of the 20th century? Several names jump out: Albert Einstein, Mahatma Gandhi and Franklin D Roosevelt on the positive side of the ledger, and that trio of tyrants, Hitler, Stalin and Mao, who did incalculable harm.
But in Machines Behaving Badly, Toby Walsh makes a convincing case that 1,000 years in the future (assuming humanity survives that long) the answer will be perfectly clear: Alan Turing. As a pioneer of computing and the founder of artificial intelligence, Turing will be seen as the driving intellectual force behind the “pervasive and critical technology” that will then invisibly permeate every aspect of our lives. The mathematician IJ Good, a fellow codebreaker at Bletchley Park during the second world war, famously predicted that the invention of an “ultraintelligent machine”, as imagined by Turing, would lead to an “intelligence explosion.”
“Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” Good wrote in 1965. Good went on to advise Stanley Kubrick on the making of the film 2001: A Space Odyssey, which introduced viewers to the wonders and dangers of just such an ultraintelligent machine, named HAL 9000.
Riveting though it is to speculate how artificial intelligence will have changed the world by 3022, Walsh focuses most of his attention on the here and now. The computers we have today may not yet match the intelligence of a two-year-old child, he argues, but AI is already being used for an impressive and ever-expanding array of purposes: detecting malware, checking legal contracts for errors, identifying birdsong, discovering new materials and (controversially) predicting crime and scheduling police patrols. Walsh’s aim is to make us think about the unintended consequences of using such powerful technology in all these, and other, ways.
As a professor of AI at the University of New South Wales, Walsh is enthusiastic about the power and promise of the technology. Computers can help automate away dirty, difficult, dull and dangerous jobs unsuited for humans. Indian police have used facial recognition technology to identify 10,000 missing children. AI is also being used to combat the climate emergency by optimising the supply and demand of electricity, predicting weather patterns and maximising the capacity of wind and solar energy. But Walsh insists we need to think very carefully about allowing such technology to intrude into every corner of our lives. The Big Tech companies that are deploying AI are motivated by profit rather than societal good.
The most interesting, and original, section of the book concerns whether machines can operate in moral ways. One of the biggest, and most fascinating, experiments in this area is the Moral Machine project run by the Media Lab at the Massachusetts Institute of Technology. This digital platform has been used to crowdsource the moral choices of 40mn users, interrogating them about the decision-making processes of self-driving cars, for example.
How do users react to the moral dilemma known as the “trolley problem”, dreamt up by the English philosopher Philippa Foot in 1967? Would you switch the course of a runaway trolley to prevent it killing five people on one track at the cost of killing one other person on an alternative spur? In surveys, some 90 per cent of people say they would save the five lives at the cost of the one.
But, like many computer scientists, Walsh is sceptical about the applicability of such neat moral choices and whether they could ever be written into a machine’s operating system. First, we often say one thing and do another. Second, we do some things knowing we shouldn’t (ordering ice cream when we are on a diet). Third, moral crowdsourcing depends on the choices of a self-selecting group of internet users, who do not reflect the diversity of different societies and cultures. Finally, moral decisions made by machines cannot be the blurred average of what people tend to do. Morality changes: democratic societies no longer deny women the vote or enslave people, as they once did.
“We cannot today build moral machines, machines that capture our human values and that can be held accountable for their decisions. And there are many reasons why I suspect we will never be able to do so,” Walsh writes.
But that does not mean that companies that deploy AI should be left to run amok. Lawmakers have a responsibility to delineate where it is acceptable for algorithms to substitute for human decision-making and where it is not. Walsh is himself an active campaigner against the use of killer robots, or lethal autonomous weapons systems. To date, 30 countries have called on the UN to ban such weapons, although none of the world’s leading military powers are yet among them.
For a technologist, Walsh is refreshingly insistent on the primacy of human decision-making, even if it is so often flawed. “It might be better to have human empathy and human accountability despite human fallibility. And this might be preferable to the logic of cold, unaccountable and slightly less fallible machines,” he concludes.
Machines Behaving Badly: The Morality of AI by Toby Walsh, Flint £20, 288 pages
John Thornhill is the FT’s innovation editor