In a post published last week, we presented the proposals regarding facial recognition advanced in the White Paper on internal security released on November 16, 2020 by the French Ministry of the Interior. The White Paper also includes several other important proposals for the use of AI for security purposes. This post will present them briefly, also recalling (as we did in relation to facial recognition) the framework sometimes set for such uses and experiments by the French Data Protection Authority (CNIL).
We will successively discuss the proposals related to the use of AI to facilitate law enforcement operations; the proposals related to biometric technologies other than FRT; and finally, a data policy adapted to the use of AI.
1/ AI used to facilitate law enforcement operations
According to the White Paper, AI could facilitate law enforcement operations in three different ways.
- Collection of information in open source and use of AI to detect situations of danger and urgency
In order to ensure greater responsiveness to security and emergency threats, the paper highlights the need to detect alert signals that would escape traditional reporting channels (such as phone calls to emergency services). The White Paper seems to indicate that the analysis of publicly available data from social networks would effectively complement these channels. In view of the amount of data at stake, the document proposes that experiments be carried out in collecting open data from social networks in an automated way (p. 220).
It should be recalled that the CNIL was asked to rule on a different but related mechanism in September 2019: an experiment to automatically collect data published on social networks in order to tackle tax fraud. The CNIL was consequently able to rule on the very principle of automatically capturing data published on the Internet. On this occasion, although the CNIL expressed reservations about such a system, it gave a favorable opinion on the experiment, while making it clear that the amount of data collected should be strictly proportionate to the aim pursued.
The White Paper goes on to suggest that the analysis of data available to public authorities could be used to manage their deployment, citing the example of fire departments that have already used such tools to improve their mode of intervention. According to the authors, this predictive analysis could be useful for allocating the different forces more efficiently depending on the operation involved, but also for detecting early warning signs that could reveal whether an operation is necessary (p. 221).
Finally, in order to respond to basic requests from people in a “peak of activity” situation, the paper suggests the use of automated virtual assistants. The expected aim is to multiply call handling capacity, with users being able to switch back to a human call handler at any time.
- Connected devices for officers in the field

The paper argues for a gradual transformation towards “digital nomadism”, allowing officers and firefighters in the field to benefit from the same tools as those available in offices, while increasing their safety. In this sense, connected devices are seen as “intelligent intervention gear”. According to the authors, connected jackets could be useful equipment: they could play a role in connecting the tools used by agents to the available support, and eventually in transferring data to the control centre, if and when this becomes legally possible.
The White Paper also refers, among other aspects, to augmented reality goggles that could be used to automate licence plate control or to implement facial recognition technologies (p.227).
Finally, in terms of connected objects, the paper foresees that police vehicles will no longer be mere means of transport but will become complete, autonomous workstations. These equipped vehicles would enable the deployment of sensors (such as cameras) and would additionally integrate on-board analysis software to facilitate data processing. The role of operational robotics is also discussed in the paper (especially in relation to dangerous operations, such as responding to terrorist attacks), but is not explored in great detail.
- AI as a tool to reduce repetitive tasks
The White Paper sees AI as a way to relieve agents of a number of repetitive tasks by automating them. The paper provides examples of the kind of tasks that could be facilitated by AI, including by developing transcription and translation tools or language detection.
According to the White Paper, AI could also be used to reduce the repetitive aspect of certain analytical tasks by using, for example, algorithms capable of filtering video images according to semantic criteria, or of modelling communication, attack or fraud patterns. Finally, the paper sees AI as a tool to anonymise documents by masking certain parts of them in order to enhance privacy and data protection.
2/ Biometric and AI technologies
The Ministry of the Interior’s White Paper looks at the development of biometric technology in law enforcement. The paper accordingly recalls how biometrics have been used, and the central role played by the judiciary and independent administrative authorities in controlling their implementation. In order to obtain useful tools, the paper considers that officers should not simply remain consumers of these technologies, but should become significant players in explaining operational difficulties and needs and in guiding research accordingly (p. 254).
The core principle developed by the paper regarding biometric data is a multi-biometric approach, enabling the various data to be cross-referenced so as not to multiply the number of records (p. 254). According to the authors of the White Paper, the principle of interoperability of biometric data with other data has been established and developed by neighbouring countries (Germany and the United Kingdom) but also by European Union mechanisms (notably the Schengen Information System). The paper argues for a similar approach within French law enforcement. To this end, it advocates for the interoperability of both national and European systems, so as to develop a single interface enabling an officer to query all the databases from an office or via tools used in the field.
In terms of fingerprints and palm prints, the White Paper envisages an algorithmic comparison of the probative points of the fingerprint in order to gradually achieve (under some conditions related to effectiveness and the absence of errors) automation of the comparison procedure.
According to the paper, in terms of genetics, AI could assist experts by helping distinguish DNA when several DNA strands are present in the same sample, or by determining the probative value of a result obtained from partial DNA or from the DNA of close relatives. The paper also explores the development of predictive DNA analysis as a possible tool for determining particular characteristics, such as an estimate of a person’s age.
The White Paper also calls for the development of research into voice biometrics. The aim is to rapidly master the theory and practice of such technology so that it can be made operational, first as an authentication (1:1) tool and then as an identification (1:N) tool. In order to achieve this goal, the paper calls for both the development of technical skills and the bringing together of a “capital” of varied vocal data.
3/ An AI-compatible data policy
The White Paper places the emergence of an ambitious data policy adapted to AI as a prerequisite for technological developments and calls to “better” structure and manage data “while respecting privacy”.
The paper advocates for a separation of the legal regimes that apply to data (p. 249). Indeed, according to the document, algorithms should be trained on data drawn from concrete operational situations in order to be efficient. The White Paper states that training data will not be stored once the training session is complete, which means that the data will not “have operational consequences for the persons concerned”, as “new data, independent from the training session” will be produced during the operational session. The paper therefore considers that, while common data protection law would continue to apply for operational purposes, algorithm training data should benefit from a specific legal regime. This regime would be based on the fact that the training data would not be retained and then used in an operational context, since “new and independent data” will be produced. The paper proposes safeguards to ensure some level of security, such as having such data kept by a trusted third party or pseudonymised.
The White Paper also calls for a complete reworking of the files, in particular those of the TAJ (Traitement des antécédents judiciaires, the French criminal record database) or the biometric databases, in order to better organise them and to avoid duplicate records for the same individual (p. 256).
A third element developed in the White Paper is data exchange. Whether acquiring data from different sources (public authorities, transport services, etc.) or from European partners, the paper advocates for the implementation of a data exchange strategy. Going further, the White Paper envisages cooperation with industrial actors, for some types of data, in order to help develop better tools (p. 250).
Furthermore, the White Paper notes the growing need for legal and technical expertise to assist in carrying out impact assessments on data processing operations.
To conclude, beyond the legal aspects, the White Paper highlights the role of ethics in the research and implementation of biometric technologies. It thus addresses the concerns raised by the European Union Agency for Fundamental Rights (FRA) and the CNIL (p. 265) regarding the risks linked to biometric technologies. The authors of the White Paper plead for an unequivocal use of these technologies in order to ensure the protection of the population and of sensitive sites. On this foundation, the White Paper sets out several guidelines intended to establish a protective framework before the technologies are deployed. It recalls the importance of the explainability of algorithmic decisions, as well as the central role of supervision and audits. The issue of bias is also addressed, with the paper calling for bias to be reduced technically through a robust algorithm training system. Finally, the place of human control is asserted by recalling that these technologies are only decision-support tools. The White Paper ultimately hopes that, despite a certain degree of autonomy, the technologies can always be managed by a human being.