This story originally appeared in The Algorithm, our weekly AI newsletter. To get stories like this delivered to your inbox first, sign up here.
While the US and the EU may differ on how to regulate the technology, their lawmakers seem to agree on one thing: the West must ban AI-powered social scoring.
As they understand it, social scoring is a practice in which authoritarian governments, specifically China, rank people’s trustworthiness and punish them for undesirable behavior, such as stealing or defaulting on loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen.
The EU is currently negotiating a new law called the AI Act, which will ban member states, and perhaps even private companies, from implementing such a system.
The problem is that it amounts to “essentially banning thin air,” says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.
In 2014, China announced a six-year plan to build a system that would reward actions that build trust in society and penalize actions that do not. Eight years later, it has just released a draft law that attempts to codify past social credit pilots and guide future implementation.
There have been some controversial local experiments, such as one in the small town of Rongcheng in 2013, which gave each resident an initial personal credit score of 1,000 that could be raised or lowered depending on how their actions were judged. Residents can now opt out, and the local government has removed some of the more controversial criteria.
But these have not gained wider traction elsewhere and do not apply to the entire Chinese population. There is no all-seeing social credit system across the country with algorithms that rank people.
As my colleague Zeyi Yang explains, “The reality is that this terrifying system doesn’t exist, and the central government doesn’t seem very eager to build it.”
What has been implemented is mostly pretty low tech. It’s a “mix of attempts to regulate the financial credit industry, allow government agencies to share data with each other, and promote state-approved moral values,” Zeyi writes.
Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy that compiled a report on the issue for the US government, could not find a single case in which data collection in China led to automated sanctions without human intervention. The South China Morning Post found that in Rongcheng, human “information collectors” walked around the city and recorded people’s bad behavior with pen and paper.
The myth stems from a pilot program called Sesame Credit, developed by the Chinese technology company Alibaba. It was an attempt to assess people’s creditworthiness using customer data at a time when most Chinese people did not have a credit card, Brussee says. The effort came to be conflated with the social credit system as a whole in what Brussee describes as a “game of Chinese whispers.” And the misunderstanding took on a life of its own.
The irony is that while American and European politicians describe this as a problem arising from authoritarian regimes, systems that rank and penalize people already exist in the West. Algorithms designed to automate decisions are being rolled out en masse and used to deny people housing, jobs, and basic services.
For example, in Amsterdam, authorities have used an algorithm to rank young people from disadvantaged neighborhoods according to their likelihood of becoming criminals. They claim the aim is to prevent crime and help provide better and more targeted support.
But in reality, human rights groups argue, these efforts have increased stigmatization and discrimination. Young people who end up on the list face more police stops, home visits from authorities, and stricter supervision from schools and social workers.
It’s easy to take a stand against a dystopian algorithm that doesn’t really exist. But as policymakers in both the EU and the US strive to build a shared understanding of AI governance, they would do well to look closer to home. Americans don’t even have a federal privacy law that offers basic protections against algorithmic decision-making.
There is also a great need for governments to conduct honest and thorough audits of the way authorities and companies use AI to make decisions about our lives. They may not like what they find, but that makes it all the more important that they look.
Deeper learning
A bot that watched 70,000 hours of Minecraft could unlock the next big thing in AI
The AI research company OpenAI has built a bot that trained on 70,000 hours of videos of people playing Minecraft and can now play the game better than any AI before it. It’s a breakthrough for a powerful new technique, called imitation learning, that could be used to train machines to perform a wide range of tasks by watching humans do them first. It also raises the possibility that sites like YouTube could be a vast, untapped source of training data.
Why it’s a big deal: Imitation learning can be used to train AI to control robot arms, drive cars, or navigate websites. Some people, like Meta’s chief AI scientist Yann LeCun, think that watching videos will help us train an AI with human-level intelligence. Read Will Douglas Heaven’s story here.
Bits and bytes
Meta’s game AI can make and break alliances like a human
Diplomacy is a popular strategy game in which seven players compete for control of Europe by moving pieces around a map. The game requires players to talk to one another and spot when others are bluffing. Meta’s new AI, named Cicero, managed to trick humans in order to win.
It’s a big step forward for AI that can help with complex problems like planning routes around busy traffic and negotiating contracts. But I won’t lie; it’s also a disturbing thought that an AI could fool humans so successfully. (MIT Technology Review)
We could run out of data to train AI language programs
The trend of building ever larger AI models means that we need even larger datasets to train them on. The problem is that we could run out of adequate data by 2026, according to a paper by researchers at Epoch, an AI research and forecasting organization. This should encourage the AI community to find ways to do more with existing resources. (MIT Technology Review)
Stable Diffusion 2.0 is out
The open-source text-to-image AI Stable Diffusion has been given a big facelift, and its outputs look much sleeker and more realistic than before. It can even do hands. The pace of Stable Diffusion’s development is impressive: its first version was released only in August. We’ll likely see even more progress in generative AI well into next year.