On August 30th, 2022, the Conseil d’Etat (French Council of State) released a report, commissioned by former Prime Minister Jean Castex on June 24th, 2021, which maps the landscape of AI technologies deployed in the public sector and examines the technical, operational, ethical and legal aspects of the issue.
In this report, the Conseil d’Etat addresses how AI systems are defined. One of its main objectives is to expose, and dispose of, the myth of ‘singularity’, i.e. what the Conseil d’Etat sees as a sci-fi view of AI according to which “the intelligent machine surpasses and dominates humans”. Interestingly, the Conseil d’Etat claims that “[t]he risk is not that the machine will take over, but that the human will lose control by abdicating responsibility. This is why trusted public AI must be based on the principle of human primacy”.
From a more technical perspective, the Conseil d’Etat insists that the definition of AI systems should be harmonised at both the European and national levels. On this specific point, it argues that “[i]n order not to introduce terminological confusion into an already complex semantic universe, it is important that national law and internal reflection take up the main concepts and associated definitions as they will result from European Union law”. As regards the definition of an AI system, the Conseil d’Etat identifies three criteria: the nature of the system (it is software), how it is designed (using neural networks, for instance) and its function (e.g. to generate certain outputs from a given input).
The Conseil d’Etat then details the various purposes that AI systems fulfil in the public sector, identifying five categories of systems:
- Automation of repetitive tasks, such as the pseudonymisation of court rulings (a minimal sketch of this kind of tool follows this list)
- Automation of interactions between users and public-sector customer-facing agents: these systems include, for instance, the chatbots deployed by healthcare services.
- Systems that support public decision-making: for example, those used by fire departments to predict staffing needs based on the nature and number of forecast incidents.
- Monitoring systems: these are used, for instance, to prevent public-order incidents by means of biometric identification systems.
- Robotics applied to the public sector: this category covers, among other systems, the deployment of autonomous vehicles by public services.
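To make the first category more concrete, below is a minimal Python sketch of what a pseudonymisation step for court rulings could look like. The report does not describe any particular implementation, and real systems rely on trained named-entity recognition models rather than a fixed list of names; the party names and placeholder scheme here are assumptions for illustration only.

```python
import re

def pseudonymise(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each known party name with a stable placeholder.

    Illustrative sketch only: production pipelines for court rulings
    use named-entity recognition models, not a hand-maintained list.
    """
    mapping: dict[str, str] = {}
    result = text
    for i, name in enumerate(names):
        placeholder = f"[PARTY_{chr(ord('A') + i)}]"
        mapping[name] = placeholder
        # Word-boundary match so partial or embedded names are untouched.
        result = re.sub(rf"\b{re.escape(name)}\b", placeholder, result)
    return result, mapping

ruling = "Mr Jean Dupont appealed the decision concerning Ms Marie Martin."
redacted, mapping = pseudonymise(ruling, ["Jean Dupont", "Marie Martin"])
print(redacted)  # Mr [PARTY_A] appealed the decision concerning Ms [PARTY_B].
```

Keeping the mapping is what distinguishes pseudonymisation from full anonymisation: authorised parties can re-identify individuals where the law permits it.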
The report also addresses the legal aspects of the AI systems deployed in the public sector. In this regard, the Conseil d’Etat focuses on the need for public services to comply with the draft AI Act. In particular, the report highlights the dangers of deploying high-risk systems, emphasising the need to adopt robust legislation providing a framework for the automated analysis of images taken in public places. Such ‘smart video devices’ have already been the subject of a report by the CNIL.
The Conseil d’Etat also recommends establishing a proper legal framework for web scraping, which consists of analysing people’s publicly available online data. Such techniques have been used, for instance, to detect tax fraud, in accordance with law n°2019-1479.
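For readers unfamiliar with the technique, web scraping simply means programmatically fetching publicly available pages and extracting data from them. The short Python sketch below illustrates the principle using only the standard library; the URL is a placeholder, and any real deployment of this kind (notably for tax-fraud detection) operates under strict legal safeguards.

```python
import urllib.request
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text content of <title> tags from an HTML page."""

    def __init__(self) -> None:
        super().__init__()
        self._in_title = False
        self.titles: list[str] = []

    def handle_starttag(self, tag: str, attrs) -> None:
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag: str) -> None:
        if tag == "title":
            self._in_title = False

    def handle_data(self, data: str) -> None:
        if self._in_title and data.strip():
            self.titles.append(data.strip())

# Placeholder URL: a real scraper must respect the site's terms of
# service and the applicable data protection rules.
url = "https://example.com/"
with urllib.request.urlopen(url) as response:
    html = response.read().decode("utf-8", errors="replace")

parser = TitleExtractor()
parser.feed(html)
print(parser.titles)  # e.g. ['Example Domain']
```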
According to the Conseil d’Etat, it is essential to adopt guidelines for the public sector in order to anticipate the entry into force of the AI Act and to prepare public bodies for its enforcement.
Turning to a different topic, the Conseil d’Etat emphasises the need to create judicial procedures enabling the victims of a wrongful administrative decision made with the help of an AI system to obtain redress. The Conseil d’Etat states that the administration can be held liable before an administrative court if its decisions can be proven to have caused harm to citizens. From a criminal perspective, both the system’s designers and its users can be held responsible if the deployment of an AI system by a public organisation results in the commission of an offence or a crime. On the other hand, the Conseil d’Etat rejects the idea that an AI system itself could ever be held liable, since an error made by software is always attributable to a person.
The Conseil d’Etat also puts forward seven principles that it says will ensure trustworthiness in public sector AI. These principles are as follows:
- Human primacy: this principle promotes human-centric AI systems, and calls for human oversight mechanisms.
- Performance: the main goal of this principle is to ensure a minimum level of performance before an AI system is deployed in the public sphere.
- Equity and non-discrimination: the goal of this principle is to avoid discrimination and algorithmic biases.
- Transparency: individuals must be informed when an AI system is used and must have access to information about the system’s specifications.
- Cybersecurity: the main aim of this principle is to ensure an appropriate level of security in order to guard against data breaches and other threats.
- Environmental sustainability: this principle should be taken into account when drawing up the AI strategy for the public sector.
- Strategic autonomy: according to this principle, France should develop its own resources in the field of AI.
This comprehensive report produced by the Conseil d’Etat represents a very positive contribution to the debate about how public services should implement and use AI-driven systems and how the risks posed by such deployments should be mitigated.