Oh, the AI-rony.
China on Tuesday signed off on a United Nations pledge to stop artificial intelligence from wreaking havoc on societies, including by banning the use of AI for “social scoring” systems — a practice Beijing itself has popularized in recent years and currently uses to score Chinese citizens based on their perceived trustworthiness.
There is growing global pressure to introduce binding rules for AI practices like social scoring and facial recognition in public places that are seen to endanger human rights and civil liberties.
But the U.N. Educational, Scientific and Cultural Organization (UNESCO) is the first international organization to get Beijing to sign up to principles that call for an end to pervasive mass surveillance using AI.
UNESCO’s 193 member countries approved a first-of-its-kind recommendation for AI ethics on Tuesday. At the heart of the text is a warning to governments to steer clear of dangerous use cases for the technology because they threaten civil rights.
“Whenever you are not certain that the development of certain technologies is going to have a negative impact but you assume that they might — don’t do it. It’s as simple as that,” UNESCO’s Assistant Director-General for Social and Human Sciences Gabriela Ramos, who has led the organization’s AI effort, told POLITICO’s AI: Decoded in an interview ahead of the deal.
The U.N. text calls on technologists to conduct ethical impact assessments and on governments to put in place “strong enforcement mechanisms and remedial actions” to protect human rights. It also nudges governments to dedicate public funds to promote diversity in tech, protect indigenous communities and monitor the carbon footprint of AI technologies.
The U.N. recommendations are voluntary. Ramos declined to comment on whether she thinks China, which built the world's best-known social scoring system, would actually abandon it in line with the recommendation.
The fact that Russia and China want to engage is a good sign, Ramos said: “At the end, we need to be [held] accountable. And sometimes it’s even difficult to look into accountability and responsibility in the digital world.”
The U.S., home to the world’s biggest AI companies, is not part of UNESCO and not a signatory of the new recommendation.
But Ramos argued that peer pressure is a powerful tool and the U.S. could end up in a similar fight as the one on taxing digital platforms if it doesn’t engage with global rulemaking on ethical AI.
“You can say, ‘I don’t care, because I don’t want to tax my platforms.’ But if the rest of the world taxes platforms, then you need to get into a discussion,” she said.
UNESCO’s Ramos expects her organization’s voluntary recommendation to influence negotiations on the EU’s draft Artificial Intelligence Act, which would be the world’s first binding law regulating AI.
The bill, proposed in April, creates product safety rules for “high risk” AI that is likely to cause harm to humans. It also bans certain “unacceptable” AI uses, such as social scoring and the use of remote biometric identification in public places by law enforcement, unless it is to fight serious crime, such as terrorism.
UNESCO’s recommendation is “the code to change the [AI sector’s] business model, more than anything,” Ramos said.
“It is time for the governments to reassert their role to have good quality regulations, and incentivize the good use of AI and diminish the bad use,” she said.