European Parliament’s New Guidelines on Military and Non-Military use of AI & AI in Healthcare and Justice

On December 10, 2020, the Legal Affairs Committee of the European Parliament (EP) adopted new guidelines on the use of AI for military purposes and its use in the health and justice sectors.

It is not the first time that the EP has taken a stand on the use of AI for military purposes. In 2014, the EP adopted its first resolution aiming to “ban the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention”. This resolution “on the use of armed drones” aimed to ensure that Member States neither perpetrate unlawful targeted killings with such technologies nor facilitate such killings by other States.

In 2018, the EP recalled its position in a new resolution on autonomous weapon systems, arguing for “an international ban on the development, production and use of lethal autonomous weapon systems enabling strikes to be carried out without meaningful human control, and for a start to effective negotiations for their prohibition”. With this resolution, the EU also aimed “to work towards the start of international negotiations on a legally binding instrument prohibiting” lethal autonomous weapon systems (LAWS), in line with another international group working on this matter: the Group of Governmental Experts on LAWS (GGE).

New guidelines: International Public Law and Military Uses of AI

On April 29, 2020, a working document was published by the rapporteur, Gilles Lebreton, which focused on the questions of “interpretation and application of International law in so far as the EU is affected in the areas of civil and military uses and of state authority outside the scope of criminal justice”. 

It mainly focuses on current GGE discussions and the principal legal questions surrounding LAWS.

On July 14, 2020, upon publication of its draft report, the Committee on Legal Affairs (JURI) called on the EP to adopt a motion for a resolution. In the report, the rapporteur highlighted the considerations at stake in the development or deployment of LAWS and the need for human control of these technologies, especially as the “decision-making process must be traceable, so that the decision-maker can be identified and held responsible where necessary”. The general scope of the report is not limited to LAWS but concerns “all military uses of AI, whatever they may be, including those involving the processing of information for military purposes, military logistics, ‘collaborative combat’ and real-time support for decision-making, as well as defensive systems and all weapons that use AI”. Indeed, within the report the JURI considered that:

“LAWS are lawful only if subject to control sufficiently strict to enable a human to take over command at any time, and that systems without any human control (‘human out of the loop’) must be banned.”

Thus, according to the EP’s press release on the new guidelines adopted on December 10, 2020, MEPs once again “agreed that lethal autonomous weapon systems (LAWS) should only be used as a last resort and be deemed lawful only if subject to human control, since it must be humans that decide between life and death”.

These new guidelines emphasized the EU’s will to take a leading role in promoting a global framework on the military use of AI, alongside the UN and the GGE as well as the international community.

New guidelines on State authority: examples from the areas of Health and Justice 

The second focus of these new guidelines is “state authority”, which might be challenged by AI. Gilles Lebreton’s first working document offers no specific example of the kinds of situations in which state authority might be challenged by AI. Health and Justice were highlighted specifically in the draft report published in July 2020, as they may constitute the main sectors in which “AI is increasingly being used” for public services.

In its new guidelines, the EP noted that “the increased use of AI systems in public services, especially healthcare and justice, should not replace human contact or lead to discrimination” and insisted that “when AI is used in matters of public health, (e.g. robot-assisted surgery, smart prostheses, predictive medicine), patients’ personal data must be protected and the principle of equal treatment upheld”.

According to the EP, in the domain of Justice, judges increasingly use AI technology in decision-making and to speed up proceedings. As this technology becomes ever more present in our lives, the report urges that the public be made aware of its use. In fact, it argued for “the public to be informed of all such uses of AI in the field of justice and for those uses not to lead to discrimination resulting from programming and to uphold the right of every individual to have access to a judge”. Beyond this right to information, the EP highlighted that “safeguards need to be introduced to protect the interests of citizens. People should always be informed if they are subject to a decision based on AI and should have the right to see a public official. AI cannot replace humans to pass sentences. Final court decisions must be taken by humans, be strictly verified by a person and be subject to due process”.

As a last point, according to its press release, MEPs have also warned “of threats to fundamental human rights arising from the use of AI technologies in mass surveillance, both in the civil and military domains” and have called for a specific ban on “highly intrusive social scoring applications” (for monitoring and rating of citizens) by public authorities such as that currently used in China.


