The new bill, called the AI Liability Directive, will add teeth to the EU AI Act, which is set to become EU law around the same time. The AI Act will require additional controls for “high-risk” uses of AI with the greatest potential to harm people, including systems used in policing, recruitment, and health care.
The new liability bill would give individuals and businesses the right to sue for damages after being harmed by an AI system. The aim is to hold the developers, producers and users of the technologies accountable and require them to explain how their AI systems were created and trained. Tech companies that don’t comply risk class actions across the EU.
For example, job seekers who can show that an AI system for résumé screening has discriminated against them can ask a court to compel the AI company to give them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can sue.
The proposal still has to make its way through the EU legislative process, which will take at least a couple of years. Members of the European Parliament and EU governments will amend it and are likely to face intense lobbying from tech companies, which claim the rules could have a “chilling” effect on innovation.
Whether successful or not, this new EU legislation will have a ripple effect on how AI is regulated around the world.
In particular, the bill could have an adverse impact on software development, says Mathilde Adjutor, head of policy for Europe at technology lobby group CCIA, which represents companies such as Google, Amazon and Uber.
Under the new rules, “developers not only risk liability for software errors, but also for the software’s potential impact on users’ mental health,” she says.
Imogen Parker, associate director of policy at the Ada Lovelace Institute, an AI research institute, says the bill will shift power away from companies and back to consumers, a correction she sees as particularly important given AI’s potential to discriminate. And the bill will ensure that when an AI system causes harm, there is a common way to seek compensation across the EU, says Thomas Boué, head of European policy at tech lobby BSA, whose members include Microsoft and IBM.
However, some consumer rights organizations and campaigners say the proposals do not go far enough and will set the bar too high for consumers who want to file complaints.