With 2021 shaping up to be a decisive year for EU regulation of AI, the public debate on the issue is already well underway. Following the European Parliament’s recent report on a framework of ethical aspects of artificial intelligence, robotics and related technologies, Politico revealed in October that 14 EU Member States had alerted the Commission to the risks of over-regulating AI. The non-paper of the 14 States has been published here.
1/ AccessNow and EDRi highlight weaknesses in the Commission’s risk-based approach
In their recent non-paper, 14 Member States caution against over-regulating AI and “plead for an approach that puts innovation front and centre”. Among other key issues, these states worry that the European Commission’s proposed risk-based approach to regulating AI “will end up classifying too many AI systems as high-risk”. They therefore argue for an “objective methodology” for assessing the risks raised by such systems.
While AccessNow and EDRi agree on the importance of an objective methodology, the NGOs emphasise that its final aim “should not be to limit the number of AI systems classified as high risk”. In their view, the point of identifying risks “is to better protect our rights, not to make things easier for companies at any cost”. They recall that, in their submissions to the consultation on the Commission’s White Paper on AI (here for EDRi and here for AccessNow), they had already highlighted weaknesses in the Commission’s risk-based approach, pointing out that “the burden of proof to demonstrate that an AI system does not violate human rights should be on the entity that develops or deploys the system, and that such proof should be established through a mandatory human rights impact assessment”.
While they agree with the 14 Member States that AI applications can increase risks to individuals, the NGOs criticise the proposed solutions. They do not believe that companies will achieve trustworthy AI without a binding European legal framework, nor do they accept the idea that “unfettered innovation will always lead to societal benefits”. AccessNow and EDRi consider that soft-law instruments are insufficient, that “the idea that innovation is an unequivocal good in itself” should be abandoned, and that relying on tech companies to self-regulate is not a viable solution.
2/ AccessNow and EDRi regret that “the European Parliament’s framework for ethical AI fails to draw a single red line”
AccessNow and EDRi then criticise the European Parliament’s recent report on “A Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies”. They assert that the authors of the report “mistakenly assume that AI ethics principles will be sufficient to prevent harm and mitigate risks”, and argue that the report fails to propose a legislative framework addressing the most important threats raised by artificial intelligence. The NGOs further regret that the report does not contemplate any red lines, even for the applications presenting the highest risks.
On the issue of transparency, they call for a broad vision and the development of “public registers for AI systems used in the public sector”. On the fight against discrimination, AccessNow and EDRi do not want the problem of algorithmic bias to be reduced to technical answers, and encourage “strong regulations to prevent AI systems from reinforcing and amplifying structural discrimination”. Finally, the two NGOs criticise the European Parliament’s report for not properly addressing the proportionality of biometric surveillance applications. While they note that the report “emphasises the need for proportionality and substantial public interest to justify any biometric processing”, they regret that “it does not conclude that some uses of AI, such as use for indiscriminate or arbitrary surveillance of people’s sensitive biometric data in public spaces, are fundamentally and inherently disproportionate”.
The NGOs denounce what they call “the European Parliament’s laissez-faire approach to preventing the use of AI” which, according to them, infringes fundamental freedoms, including freedom of expression.
3/ AccessNow and EDRi propose three paths that regulators should follow
The article ends by suggesting three avenues the EU could follow to ensure effective protection of rights. The NGOs advocate that the upcoming regulation should “put in place a framework to prevent use of AI from violating our rights and harming our societies, with the protection of our rights taking priority over other considerations”.
According to the two NGOs, regulators should thus:
“- develop a legal framework which effectively prohibits systems that by their very nature will be used to infringe fundamental and collective rights;
– incorporate mandatory, publicly accessible, and contestable human rights impact assessments for all uses of AI to determine the appropriate safeguards, including the potential for prohibiting uses that infringe on fundamental rights; and
– complement these efforts with stronger enforcement of existing data protection and other fundamental rights laws”.