While Biden has in the past called for stronger privacy protections and for tech companies to stop collecting data, the United States, home to some of the world’s biggest tech and AI companies, has so far been one of the only Western nations without clear guidance on how to protect its citizens from AI harm.
Today’s announcement is the White House’s vision for how the U.S. government, as well as technology companies and citizens, should work together to hold AI accountable. However, critics say the plan lacks teeth and that the U.S. needs even stronger regulation around AI.
In September, the administration announced core principles for accountability and technology reform, including stopping discriminatory algorithmic decision-making, promoting competition in the technology sector, and providing federal privacy protections.
The AI Bill of Rights, the vision for which was first presented a year ago by the Office of Science and Technology Policy (OSTP), a US government department that advises the president on science and technology, is a blueprint for how to achieve these goals. It provides practical guidance for government agencies and a call to action for technology companies, researchers, and civil society to build these protections.
“These technologies are causing real harm to the lives of Americans, harm that runs counter to our core democratic values, including the fundamental right to privacy, freedom from discrimination and our basic dignity,” a senior administration official told journalists at a press briefing.
AI is a powerful technology that is transforming our societies. It also has the potential to cause serious harm, which often falls disproportionately on minorities. Facial recognition technologies used in policing and algorithms that allocate benefits, for example, are less accurate for ethnic minorities.
The Bill of Rights aims to redress this balance. It says Americans should be protected from unsafe or ineffective systems; they should not face algorithmic discrimination, and systems should be used and designed equitably; they should be protected from abusive data practices through built-in safeguards and have agency over how their data is used. Citizens should also know when an automated system is being used on them and understand how it contributes to outcomes that affect them. Finally, people should always be able to opt out of AI systems in favor of a human alternative and have access to remedies when problems arise.
“We want to make sure that we are protecting people from the worst harms of this technology, regardless of the specific underlying technological process used,” said a second senior administration official.