Safe City Project in Nice: Testing Facial Recognition

Like Singapore, Atlanta or London, the city of Nice recently undertook a series of trials of artificial intelligence technologies. It aims to improve “security” and “tranquility” in its streets through the deployment of facial recognition technology relying on biometric data and deep learning. The city intends to become a model “Safe City” in France. However, these projects and trials have at times drawn criticism from the French Data Protection Authority, the CNIL.

A new centralized management tool

In 2010, the mayor of Nice said he wanted to make the city a laboratory for fighting crime. The city currently has nearly 2,200 CCTV cameras covering its streets, or 28 per square kilometer, making it the most video-monitored city in France, but it wants to go further.

In July 2017, as the local press reported, the mayor launched a collaboration with the company Engie Ineo to test a new centralized security management tool. This system was one of the first of its kind to be used in France. Its aim is to provide police authorities with a genuine control and remote-command center. The software gathers all of the city’s security information (location of incidents, number of patrols available in the area, feeds from the video-surveillance cameras deployed across the city and in the tramway…) and makes the data easier to use by displaying it on a simple touch screen, as a kind of interactive map.
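
At its core, such a tool aggregates heterogeneous feeds (incident locations, available patrols, camera streams) into a single structure that an interactive map can display. The Python sketch below is purely illustrative of that idea; the class names, fields and sample values are invented and do not describe Engie Ineo’s actual product.

```python
# Purely illustrative sketch of the kind of data aggregation such a dashboard performs.
# All names and values are invented; this is not Engie Ineo's software.
from dataclasses import dataclass, field

@dataclass
class Incident:
    kind: str            # e.g. "disturbance", "accident"
    lat: float
    lon: float

@dataclass
class CityDashboard:
    incidents: list = field(default_factory=list)
    patrols_by_district: dict = field(default_factory=dict)   # district -> patrols available
    camera_feeds: dict = field(default_factory=dict)          # camera id -> stream URL

    def map_layers(self) -> dict:
        """Bundle all feeds into one structure an interactive map could display."""
        return {
            "incidents": [(i.kind, i.lat, i.lon) for i in self.incidents],
            "patrols": self.patrols_by_district,
            "cameras": list(self.camera_feeds),
        }

dashboard = CityDashboard()
dashboard.incidents.append(Incident("disturbance", 43.7009, 7.2683))
dashboard.patrols_by_district["Vieux-Nice"] = 2
dashboard.camera_feeds["cam-042"] = "rtsp://example.invalid/cam-042"
print(dashboard.map_layers())
```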

Reporty, an application to report incivility

In early 2018, the municipality of Nice also announced the launch of a test phase for a smartphone application called “Reporty”. This application, developed in Israel by the start-up now known as “Carbyne” and chaired by former Prime Minister Ehud Barak, allows any citizen to alert the municipal police by sending a video and/or a geolocated sound recording to “report”, in real time, any incivilities, offenses or crimes. However, the French Data Protection Authority stated in March 2018 that the use of such a privacy-intrusive application was highly problematic, lacked a sufficient legal basis and could not meet the proportionality test.

The “Safe City” Convention

The city also launched other initiatives. In June 2018, the mayor, Christian Estrosi, announced the conclusion of a “Safe City” partnership agreement with Thales, a company at the head of a consortium of fifteen firms specializing in the analysis of social networks, geolocation, biometrics and crowd simulation. This three-year project will test new data-processing tools for police forces and other security stakeholders. This operational research project received €10.9 million in funding in the form of grants from BPI France (the French public investment bank) and is supported by the Committee for the Security Industries Branch (CoFIS), under the supervision of the Prime Minister.

The agreement stated that: 

“Going beyond massive data collection and management, the project aims to develop new algorithms, analyzing and correlating data in order to better understand situations and to develop predictive capabilities. In addition, the ‘Safe City’ project will integrate new sources of data: for example, from social networks, from citizens and from traffic video systems.”

A facial recognition experiment during the Carnival of Nice

In the framework of the “Safe City” agreement, the first facial recognition test was deployed during the famous Nice Carnival (from February 16th to March 2nd, 2019), with the technology operating on public streets during the event. On this occasion, CCTV cameras were used at one of the entrances of the event. The aim was to try out facial recognition technologies on a crowd under real conditions. Various scenarios were tested, such as finding a child lost in the crowd or locating a “person of interest”.

Since obtaining the consent of all participants in the Carnival, as required by the General Data Protection Regulation (GDPR), was too difficult, the system was only tried on fifty people taking part in the experiment on a voluntary basis. Shortly before the Carnival, they each provided pictures of their faces, or were photographed for the occasion. These pictures were integrated into a database that also included pictures of missing persons, wanted persons and persons banned from a given area. During the event, a facial recognition algorithm developed by the Israeli company Anyvision compared these pictures in real time with the faces recorded by the surveillance devices at the entrance to the Carnival. An automatic alert was triggered whenever a match was found.
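
The workflow described above follows a generic face-identification pattern: volunteers’ photos are enrolled in a watchlist, each face captured at the entrance is compared against that watchlist in real time, and an alert is raised when the similarity exceeds a threshold. The Python sketch below illustrates this generic pattern only; the embedding stub, threshold value and function names are invented for the example and do not reflect Anyvision’s actual software.

```python
# Illustrative sketch of the generic enrolment-and-matching workflow described above.
# The embedding model, threshold and data are invented; this is NOT the system used in Nice.
from typing import Optional
import numpy as np

EMBEDDING_DIM = 128          # typical size of a deep face-embedding vector (assumption)
MATCH_THRESHOLD = 0.75       # similarity above which an alert is raised (assumption)

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a deep-learning face-embedding model.
    Here we just hash the pixels into a fixed-length unit vector for illustration."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    v = rng.standard_normal(EMBEDDING_DIM)
    return v / np.linalg.norm(v)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b))   # vectors are already L2-normalised

# 1. Enrolment: volunteers' photos are turned into a "watchlist" of embeddings.
watchlist = {}                   # name -> embedding
def enrol(name: str, photo: np.ndarray) -> None:
    watchlist[name] = embed_face(photo)

# 2. Live matching: each face detected at the entrance is compared with the watchlist.
def check_face(live_face: np.ndarray) -> Optional[str]:
    probe = embed_face(live_face)
    best_name, best_score = None, 0.0
    for name, ref in watchlist.items():
        score = cosine_similarity(probe, ref)
        if score > best_score:
            best_name, best_score = name, score
    # 3. An alert is raised only when the similarity exceeds the threshold.
    return best_name if best_score >= MATCH_THRESHOLD else None

if __name__ == "__main__":
    photo = np.zeros((112, 112, 3), dtype=np.uint8)   # dummy enrolment photo
    enrol("volunteer_01", photo)
    alert = check_face(photo)                          # same image -> same embedding -> match
    if alert:
        print("ALERT: match found for", alert)
    else:
        print("no match")
```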

As the volunteers were not the only people whose faces were scanned and whose biometric data was processed by the software, the consent of the hundreds of other people passing through the entrance of the Carnival was, in principle, also required. To that end, information boards were displayed and leaflets were distributed in front of the entrance to the facial recognition test zone. In addition, employees informed visitors that an experiment was underway and handed out colored bracelets to those who gave their consent.

According to the city, the test was a success. In August 2019, city officials stated that “the 50 people in the sample were recognized by the algorithm”. The algorithm was even able to distinguish between two monozygotic twins.

In July 2019, the City of Nice sent a report to the CNIL to obtain an opinion on the Carnival experiment. However, the French Data Protection Authority (DPA) responded by email, according to the French newspaper Le Monde, that it lacked the information needed to complete its evaluation of the experiment. In particular, the French DPA pointed to the absence of “quantified elements on the effectiveness of the technical device or the concrete consequences of a possible bias (related to gender, skin color…) of the software”. This statement was in line with a study published by the MIT Media Lab (USA) in 2018, dubbed “Gender Shades”, which showed that facial recognition systems can exhibit significant differences in success rates depending on the gender and skin color of the people analyzed.
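
The “quantified elements” requested by the CNIL typically take the form of a per-group accuracy breakdown, i.e. measuring the system’s success rate separately for each demographic group. The short sketch below shows what such a breakdown looks like; the group labels follow the categories used in Gender Shades, but every number is invented for illustration and is not a result from the Nice experiment or from that study.

```python
# Hypothetical per-group accuracy breakdown, of the kind a bias audit would report.
# The records are invented for illustration; they are not measured results.
from collections import defaultdict

# Each record: (demographic group, did the system find the true match?)
evaluation_log = [
    ("lighter-skinned men", True),   ("lighter-skinned men", True),
    ("lighter-skinned women", True), ("lighter-skinned women", False),
    ("darker-skinned men", True),    ("darker-skinned men", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

totals, hits = defaultdict(int), defaultdict(int)
for group, matched in evaluation_log:
    totals[group] += 1
    hits[group] += int(matched)

for group in totals:
    rate = hits[group] / totals[group]
    print(f"{group:>22}: true-match rate = {rate:.0%}")
```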

The Nice Carnival test also raises another issue: the current lack of public information about the effectiveness of video surveillance and facial recognition tools. Cities rarely communicate much about this sensitive topic. According to Laurent Mucchielli, a well-known French sociologist established in Nice, video-surveillance systems are largely ineffective at improving public security.

Facial recognition tests in High Schools

Another facial recognition test was conducted by the Provence-Alpes-Côte d’Azur Region (where Nice is located) following a decision adopted in December 2018. More precisely, facial recognition technologies (FRTs) were deployed at the entrance of one high school located in Nice and of another located in Marseille. The FRTs were used at the gates of these two establishments to grant or refuse access to their students. The systems were put in place in February 2019 in order to “assist the personnel of the high schools and to fight against identity theft”.

In a notice published on October 29th, 2019, the French DPA expressed serious concerns about the implementation of such a system. The Commission stated that “all is not and will not be permitted with facial recognition”. In particular, the CNIL considered that this specific use of FRTs was contrary to the principles of proportionality and data minimization laid down by the General Data Protection Regulation (GDPR). According to the DPA, the objectives of increasing security and the fluidity of access could have been achieved by means less intrusive to privacy and individual freedoms (such as ID checks). On September 6, the President of the Provence-Alpes-Côte d’Azur Region announced the suspension of these high school tests.

The CNIL issues a report on the use of FRTs

Interestingly, the tests initiated in Nice, as well as the reactions and debates that followed them, pushed the CNIL to publish, on November 15th, 2019, a report entitled “Facial recognition: for a debate living up to the challenges”. In this report, the CNIL presented “the technical, legal and ethical aspects which must, in its view, be borne in mind when addressing” this complex issue.

It highlighted that, in the past, “it had the opportunity to recognise the legitimacy and proportionality of some uses”, and gave the example of the Nice Carnival, when it “allowed facial recognition technology to be tested under real conditions on a sample of volunteers, with no operational implications, to filter access to the carnival area”.

However, the CNIL also emphasized that certain FRT uses “are forbidden in our society”, recalling that: “It has recently made this clear with regard to implementing facial recognition authentication systems for children for the purpose of controlling access to schools – when the aims of securing and facilitating entry to schools can be achieved by equally effective but much less intrusive means in terms of privacy and individual freedoms, taking into account the special protection that children must be afforded”.

Conclusion

The “Safe City” project is strongly criticized by the French NGOs League for Human Rights and La Quadrature du Net, which fear the emergence of a mass surveillance society.

When the socialist Paul Cuturello denounced the project during the Municipal Council meeting of June 7, 2018, the mayor of Nice, Christian Estrosi, strongly defended it. He asserted that the project encouraged the development of the French security industry and pointed out that “so far, in these fields, we have sought Israeli, American and Asian technologies”.

The mayor also wanted to push the limits of the legal framework, asking for the 1978 French Data Protection Act (the “Informatique et Libertés” law) to be revised.

More recently, in a newspaper column published on December 24, 2019, the mayor of Nice, Christian Estrosi, and Bertrand Ringot, mayor of Gravelines, criticized the CNIL’s reactions. They wished to “warn the government of the CNIL’s permanent obstruction to the development of local digital experiments”. According to them, this situation entails a double risk: “not having the tools to respond to the challenges of our society and seriously disadvantaging our country in international competition”.

Despite these attacks on the French DPA, the example of the city of Nice shows how important it is for Smart and Safe City projects to comply with the European and French legal frameworks. In France, more and more cities are tempted by digital platforms organized around tools for monitoring and controlling public spaces; Marseille, Nîmes and Valenciennes are a few examples. The “Nice precedent” shows that they should take data protection, privacy and other human rights requirements seriously into consideration.

End Note

[1] The author would like to thank Professor Theodore Christakis for his comments on previous drafts of this article. All errors are the author’s.

These statements are attributable only to the author, and their publication here does not necessarily reflect the view of the other members of the AI-Regulation Chair or any partner organizations.

This work has been partially supported by MIAI @ Grenoble Alpes (ANR-19-P3IA-0003).
