AI and Civil Liability: Welcomed but Perfectible Recommendations of the European Parliament

Artificial intelligence will be a major issue in the very near future, and Brussels has understood this. On 20 October 2020, the European Parliament adopted a series of three resolutions on how best to regulate artificial intelligence in order to boost innovation and confidence in the technology (Report 2020/2012(INL) – with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies; Report 2020/2015(INI) – on intellectual property rights for the development of artificial intelligence technologies; Report 2020/2014(INL) – with recommendations to the Commission on a civil liability regime for artificial intelligence).

Beyond the ethical and intellectual property law aspects, one of the challenges lies in determining who should be liable when harm is caused by artificial intelligence (since it is not desirable to confer legal personhood on AI, as the European Parliament has decided, notwithstanding some diverging views on this point). Of those who replied to the European Commission’s public consultation on the AI White Paper in June 2020, 63% were in favour of adapting national liability rules in order to ensure proper compensation in case of damage and a fair allocation of liability. A European regulation or directive would limit the risk of law shopping within the European countries, a risk which would be high, as it is with any new and ethically sensitive societal issue. The economic and geopolitical stakes underlying AI also argue in favour of avoiding fragmented regulatory approaches at national level. It is precisely the tension between this economic concern, on the one hand, and the protection of users, on the other, that drives this resolution. The European Parliament’s initiative is therefore to be welcomed. It does, however, raise a number of reservations, as regards both its form and its substance.

 

“AI-system” or “automated decision-making”?

Firstly, the European Parliament advised in its resolution that “using the term ‘automated decision-making’” rather than “AI” “could avoid the possible ambiguity of the term AI”. This substitution of the expression “automated decision-making” for “artificial intelligence” is not really convincing. It is certainly possible to argue that the term “AI” is ambiguous[1], just as it is hardly questionable that the term “intelligence” is rather unfortunate, since what it refers to is so far removed from human intelligence.[2] However, it is doubtful whether the expression “automated decision-making” will take hold in everyday and legal language. Moreover, the European Parliament itself does not always abide by its own suggestion; it begins, for example, its list of definitions with that of “AI-system”, a term which is then used in the following chapter headings.

Beyond this vocabulary issue, the Parliament proposes, as an annex to its resolution, a Proposal for a Regulation on liability for the operation of Artificial Intelligence-systems (hereafter “the Proposal”). It also urges the Commission to assess whether the directive on liability for defective products should be transformed into a regulation. Indeed, the Proposal applies without prejudice to any claim based on product liability rules – which, as interpreted by the ECJ, have become of almost exclusive application (ECJ, 24 April 2002, n° C-183/00) – as well as to any other contractual liability and consumer protection rules. While the choice of a regulation, directly applicable in every Member State, is understandable given the desire to establish uniform standards throughout the EU, was this choice really necessary in view of what is undoubtedly the main objective: encouraging companies to invest in innovation, in particular in AI? Equal protection of consumers calls for uniform rules, whereas forum shopping can create competition between legal systems to offer the most attractive set of rules for investors and companies.

 

High-risk and other AI-systems

The European Parliament’s resolution then gives rise to some reflections on its content. The proposed system revolves around a double regime, one devoted to “high-risk AI-systems” and the other to… “other AI-systems”. The definition of high-risk AI-systems, which is purely functional, refers to an autonomously operating AI-system with “significant potential to cause harm or damage to one or more persons in a manner that is random and goes beyond what can reasonably be expected”. The vagueness and subjectivity of this definition are counterbalanced by an exhaustive list that ought to be reviewed at least every six months. The MEPs did not fill in this list; in order to find some examples, one has to go back to the Committee’s draft report of 27 April 2020, which refers to unmanned aircraft, vehicles with automation levels 4 and 5, autonomous traffic management systems, autonomous robots and autonomous cleaning devices for public places. The European Commission’s White Paper on AI (19 February 2020) had already drawn the distinction between AI applications that are “high-risk” and those which are not, and put forward two criteria for identifying “high-risk AI”: (i) the AI application is employed in a sector where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur (e.g. healthcare, transport, energy and parts of the public sector), and (ii) the AI application is used in such a manner that significant risks are likely to arise. By way of exception, the European Parliament recommends[3] that an AI-system which has not yet been included in the list of high-risk AI-systems may nonetheless be subject to the high-risk regime if it has caused repeated incidents resulting in serious harm or damage; however, it is not specified who is to decide this – presumably the judge hearing the case – nor how the “repetition” of incidents is to be assessed. Will two incidents suffice to characterise repetition?

 

Operator and backend operator

For each of these regimes, liability falls upon the operator, which should be understood as covering either or both the “frontend operator” (the natural or legal person who exercises a degree of control over a risk connected with the operation and functioning of the AI-system and benefits from its operation) and the “backend operator” (the natural or legal person who, on a continuous basis, defines the features of the technology and provides data and an essential backend support service and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system). This second definition seems particularly problematic if its criteria are indeed cumulative, because in practice the algorithm designer sometimes relies on a data broker to supply the data. Could a person who has defined the features of the technology and has provided an essential backend support service, but has bought the data from another company, nonetheless be considered a “backend operator”?

The operators – whether frontend or backend – of high-risk AI-systems are subject to a regime that is intended to be stricter than that applicable to operators of other AI-systems. Their liability is strict: they cannot exonerate themselves by proving that they acted with due diligence. Operators of other AI-systems, on the other hand, are subject to fault-based liability with a presumption of fault; they escape liability only if they can prove that they did not commit a fault. So far, this is mere logic. As regards proof, the Proposal allows the victim to use the data generated by the AI-system; in practice, however, this method of proof is deeply asymmetrical: the victim – presumably, often a layman – will not be able to analyse the data for evidentiary purposes without recourse to an expert. Although the Proposal provides for the possibility for the victim – as well as for the operator – to request the producer’s cooperation in providing information in order to establish liability, this safeguard seems insufficient.

Compensation

The logic is further undermined by the rules on the amount and extent of compensation. As regards high-risk AI-systems, the Proposal provides for a specific regime limited to cases of death and harm caused to the health or physical integrity and “significant immaterial harm that results in a verifiable economic loss or of damage caused to property”. In other words, purely moral damage – such as loss of amenity (préjudice d’agrément in France, danno esistenziale in Italy), psychological distress (danno morale in Italy), or even harm caused by anxiety or permanent awareness of danger resulting from exposure to a risk of damage (recoverable in France and possibly in Italy as part of the moral damage for fear) – seems to be excluded from compensation. However, moral damage resulting from physical injury (pretium doloris, Schmerzensgeld, danno biologico, aesthetic damage, etc.) should hopefully be compensated as part of the damage to health or physical integrity. A first inconsistency lies in the proposed compensation ceilings (two million euros in the event of death or harm to a person’s health or physical integrity, and one million euros in the event of significant immaterial harm resulting in a verifiable economic loss or in the event of damage to property), which run counter to the principle of full compensation. Moreover, liability for defective products (the application of which is not prejudiced by the Proposal) provides for no such ceiling. Second inconsistency: the Proposal refers back to national law for the amount and extent of compensation in the event of harm caused by another AI-system. As a consequence, in cases of harm caused by AI-systems not considered high-risk, where the applicable national law is particularly favourable to victims (e.g. France or Belgium), the victim will be able to obtain compensation for losses that could not have been compensated had the damage been caused by a high-risk AI-system! For the operator, it is therefore less “risky”, in terms of liability, to control an AI-system that is considered “high-risk” for the user than another AI-system.

Much more could be said, in particular about the apportionment of liability, but there is not enough room in these columns. Let us briefly conclude with the observation that, in the search for a balance between protecting citizens and encouraging companies to invest in AI-systems, the Brussels scales tip very clearly towards the second objective.


[1] Are we talking of humanoids? Non-humanoid robots? Algorithms? A particular technology (machine learning, neural networks, …)? Is our good old-fashioned calculator, heir to the “Pascaline”, an AI?

[2] J.-L. Dessalles, Des intelligences très artificielles, Odile Jacob, 2019.

[3] The recommendation is not enacted in the Proposal.

These statements are attributable only to the author, and their publication here does not necessarily reflect the view of the other members of the AI-Regulation Chair or any partner organizations.
