DeepMind researchers have trained an AI system to find a popular policy for distributing public funds in an online game – but they also warn against “AI government”
4 July 2022
A “democratic” AI system has learned how to develop the most popular policy for redistributing public money among people playing an online game.
“Many of the problems that humans face are not merely technological, but require us to coordinate in society and in our economies for the greater good,” says Raphael Koster at UK-based AI company DeepMind. “For AI to be able to help, it needs to learn directly about human values.”
The DeepMind team trained its artificial intelligence to learn from more than 4000 people as well as from computer simulations in an online, four-player economic game. In the game, players start with different amounts of money and must decide how much to contribute to help grow a pool of public funds, eventually receiving a share of the pot in return. Players also voted on their favourite policies for doling out public money.
The policy developed by the AI after this training generally tried to reduce wealth disparities between players by redistributing public money according to the fraction of their starting funds each player contributed. It also discouraged free-riders by giving back almost nothing to players unless they contributed approximately half their starting funds.
This AI-devised policy won more votes from human players than either an “egalitarian” approach of redistributing funds equally regardless of how much each person contributed, or a “libertarian” approach of handing out funds according to the proportion each person’s contribution makes up of the public pot.
“One thing we found surprising was that the AI learned a policy that reflects a mixture of views from across the political spectrum,” says Christopher Summerfield at DeepMind.
When starting inequality between players was greatest, a “liberal egalitarian” policy – which redistributed money according to the proportion of starting funds each player contributed, but didn’t discourage free-riders – proved as popular as the AI proposal, winning more than 50 per cent of the vote share in a head-to-head contest.
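The competing policies described above can be sketched as simple redistribution rules. The following is a minimal illustration, not the paper’s implementation: the endowments, the 1.6x growth multiplier on the public pool, the 0.5 contribution threshold and the near-zero weight for free-riders are all assumed values chosen for the example.

```python
def egalitarian(endowments, contributions, pool):
    """Split the grown pool equally, regardless of contributions."""
    n = len(contributions)
    return [pool / n] * n

def libertarian(endowments, contributions, pool):
    """Share the pool in proportion to each player's absolute contribution."""
    total = sum(contributions)
    return [pool * c / total for c in contributions]

def liberal_egalitarian(endowments, contributions, pool):
    """Share the pool in proportion to the fraction of their starting
    endowment each player contributed; no free-rider sanction."""
    fractions = [c / e for c, e in zip(contributions, endowments)]
    total = sum(fractions)
    return [pool * f / total for f in fractions]

def ai_like(endowments, contributions, pool, threshold=0.5):
    """Roughly mimic the AI-devised policy: redistribute by relative
    contribution, but give almost nothing to players who contributed
    less than the threshold fraction of their endowment (assumed 0.5)."""
    fractions = [c / e for c, e in zip(contributions, endowments)]
    weights = [f if f >= threshold else 0.01 * f for f in fractions]
    total = sum(weights)
    return [pool * w / total for w in weights]

# One illustrative round: unequal starting endowments, one free-rider,
# and an assumed 1.6x growth multiplier on the contributed pool.
endowments = [10, 10, 10, 40]
contributions = [6, 6, 1, 20]   # player 3 contributes only 10 per cent
pool = 1.6 * sum(contributions)

for name, policy in [("egalitarian", egalitarian),
                     ("libertarian", libertarian),
                     ("liberal egalitarian", liberal_egalitarian),
                     ("AI-like", ai_like)]:
    payouts = policy(endowments, contributions, pool)
    print(name, [round(p, 2) for p in payouts])
```

Under rules like these, the egalitarian split ignores effort entirely, the libertarian split favours whoever can contribute the most in absolute terms, and the AI-like rule rewards relative effort while leaving the free-rider with almost nothing.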
The DeepMind researchers warn that their work doesn’t represent a recipe for “AI government”. They say they don’t plan to build AI-powered tools for policy-making.
That may be just as well, because the AI proposal isn’t necessarily novel – similar policies have already been suggested by people, says Annette Zimmermann at the University of York, UK. Zimmermann also warns against focusing on a narrow idea of democracy as a “preference satisfaction” system for finding the most popular policies.
“Democracy isn’t just about winning, about getting whatever policy you like best implemented – it’s about creating processes during which citizens can encounter each other and deliberate with each other as equals,” says Zimmermann.
The DeepMind researchers do raise concerns about an AI-powered “tyranny of the majority” situation in which the needs of people in minority groups are overlooked. But that isn’t a huge worry among political scientists, says Mathias Risse at Harvard University. He says modern democracies face a bigger problem of “the many” becoming disenfranchised by the small minority of the economic elite, and dropping out of the political process altogether.
Still, Risse says the DeepMind research is “fascinating” in how it delivered a version of the liberal egalitarianism policy. “Since I’m in the liberal-egalitarian camp anyway, I find that a rather satisfactory result,” he says.
Journal reference: Nature Human Behaviour, DOI: 10.1038/s41562-022-01383-x