Council of Europe: New Resolution on the Role of AI in Policing and Criminal Justice Systems

On October 22, 2020, the Parliamentary Assembly of the Council of Europe adopted a provisional version of Resolution 2342 (2020), "Justice by algorithm – the role of artificial intelligence in policing and criminal justice systems", based on the work of the Committee on Legal Affairs and Human Rights (see Doc. 15156, the report of the Committee on Legal Affairs and Human Rights, and Recommendation 2182 (2020)).

Reflecting on the challenges of using artificial intelligence (AI) in policing and criminal justice systems, the Assembly makes several important observations before setting out its guidance.

The Assembly begins by highlighting the proliferation of AI applications across many areas of human activity, including policing and criminal justice systems. It warns that "the introduction of non-human elements into decision-making within the criminal justice system may thus create particular risks". The Assembly then recalls the main principles that must govern the use of AI, such as the "public's informed consent" and "effective, proportionate regulation". It places particular emphasis on "core ethical principles", recalling that "regulation of AI, whether voluntary self-regulation or mandatory legal regulation, should be based on universally accepted and applicable core ethical principles" (para. 4). Noting that "a large number of applications of AI for use by the police and criminal justice systems" are currently "being considered in Council of Europe member States" (para. 6), the Assembly highlights in its Resolution examples of situations where "the use of AI in policing and criminal justice systems may be inconsistent with the above-mentioned core ethical principles" (para. 7).

Finally, the Assembly "calls upon member States, in the context of policing and criminal justice systems, to:

  • adopt a national legal framework to regulate the use of AI, based on the core ethical principles mentioned above;
  • maintain a register of all AI applications in use in the public sector and refer to this when considering new applications, so as to identify and evaluate possible cumulative impacts;
  • ensure that AI serves overall policy goals, and that policy goals are not limited to areas where AI can be applied;
  • ensure that there is a sufficient legal basis for every AI application and for the processing of the relevant data;
  • ensure that all public bodies implementing AI applications have internal expertise able to evaluate and advise on the introduction, operation and impact of such systems;
  • meaningfully consult the public, including civil society organisations and community representatives, before introducing AI applications;
  • ensure that every new application of AI is justified, its purpose specified and its effectiveness confirmed before being brought into operation, taking into account the particular operational context;
  • conduct initial and periodic, transparent human rights impact assessments of AI applications, to assess, amongst other things, privacy and data protection issues, risks of bias / discrimination and the consequences for the individual of decisions based on the AI’s operation, with particular attention to the situation of minorities and vulnerable and disadvantaged groups;
  • ensure that the essential decision-making processes of AI applications are explicable to their users and those affected by their operation;
  • only implement AI applications that can be scrutinised and tested from within the place of operation;
  • carefully consider the possible consequences of adding AI-based elements to existing technologies;
  • establish effective, independent ethical oversight mechanisms for the introduction and operation of AI systems;
  • ensure that the introduction, operation and use of AI applications can be subject to effective judicial review”.

