CNIL’s position on “smart” cameras in public spaces

In response to the proliferation of “intelligent” video devices in public spaces, the French Data Protection Authority (CNIL) launched a public consultation on its draft position concerning the conditions for deploying so-called “smart” cameras in those spaces. Following several months of consideration and various contributions from public and private actors, the Commission published its opinion last July.

The CNIL began by defining “smart” cameras as cameras equipped with automated image-processing software that uses artificial intelligence to extract various types of information from video streams. These systems can detect and track objects, detect suspicious events, and characterise individuals with or without identifying them (e.g. age group, gender, behaviour), among other capabilities. The technology is becoming increasingly widespread in public spaces, as both public and private actors deploy it to improve safety, carry out targeted advertising, conduct statistical analyses of traffic flows, and so on. The DPA’s opinion focuses strictly on “smart” or “augmented” cameras deployed in public spaces and does not cover biometric recognition systems such as facial recognition, on which it has already published several opinions, including on its use in airports. The CNIL has also excluded from the scope of its report the use of “augmented video” in non-public spaces, its deferred (non-real-time) use, and its use for sound detection or scientific research purposes.

As the Commission points out, this technology marks a significant shift from conventional video protection devices: artificial intelligence enables analysis of the captured video stream, whereas classic video devices merely record images. These new capabilities raise new ethical and legal questions, and the opinion presents the DPA’s analysis from an ethical, technical, and legal point of view.

Firstly, the CNIL considers that because “smart” cameras can be deployed for numerous purposes (administrative, judicial, security, advertising, etc.) and on different infrastructures (mobile, fixed, portable, drone, etc.), they call for a case-by-case assessment. Each review should examine the possible impacts on human rights and the risks the system may entail for the individuals concerned.

Furthermore, the Commission highlights that video systems equipped with algorithmic processing enable automatic analysis of the captured images, greatly augmenting the information that can be extracted from the video stream. This automation can lead to personal data, which may in some cases include sensitive data, being processed on a massive scale. Such widespread analysis of individuals in public spaces may not only normalise and trivialise intrusive technologies, but also pave the way for widespread surveillance of the population. However, as the opinion explains, the risks posed by “augmented video” depend on where the systems are deployed, the categories of people exposed to them, and the purposes of the deployment.

For instance, in a scenario in which “smart” cameras are deployed for targeted advertising purposes in an area popular with minors, or where systems are used to make decisions involving a specific person, the risks and impacts would not be the same as those posed by systems deployed to produce aggregated information with anonymised data for statistical purposes.

The DPA then turns to the legal issues surrounding “augmented video” technology. The Commission notes that, to date, no provision of the French Internal Security Code (Code de la sécurité intérieure) governs the conditions for the use of these devices. Consequently, the lawfulness of algorithmic image processing must be analysed on a case-by-case basis. Nonetheless, its use must comply with data protection requirements such as privacy by design and the right to object. Equally, such systems must rely on an appropriate legal basis, fulfil an explicit and legitimate purpose in line with the necessity and proportionality principles, and undergo a data protection impact assessment. However, the Commission notes that, for “augmented video” to be implemented lawfully, most deployments would require a legislative or regulatory instrument authorising them or providing a framework for them.

Finally, the CNIL recommends that these technologies be deployed sparingly and used only where they offer the greatest benefit to the general public, accompanied by appropriate safeguards. It concludes that a specific legal framework is key to avoiding the uncontrolled proliferation of this kind of technology and the risks it may pose.

SCJ
