Europol’s Report: ChatGPT and Cybercrime Issues

On March 27th, 2023, the European Union Agency for Law Enforcement Cooperation (Europol) released a report on the impact of large language models on law enforcement. Drawing on the outcomes of a series of workshops organised by the Europol Innovation Lab on how criminals can abuse LLMs such as ChatGPT, as well as how such models may assist investigators in their daily work, the report provides useful recommendations for enhancing law enforcement preparedness.

The report begins by presenting large language models (LLMs) and, more specifically, the ChatGPT model, which is used as a case study. According to Europol’s report, ChatGPT suffers from certain limitations:

  • Its training data only extends up to September 2021;
  • The answers generated may be biased and are often quite basic in nature;
  • Short or ambiguous prompts may lead the model to claim it does not know the answer at all, or to misunderstand what the user wants to know.


The ChatGPT model does not answer questions that have been classified as harmful or biased. This is not a functional limitation but a self-imposed restriction that forms part of the model’s content moderation policy. Europol’s report reveals, however, that this safety mechanism can still be circumvented easily in some cases with the correct prompt engineering, meaning ‘refining the precise way a question is asked in order to influence the output that is generated by an AI system’. It is therefore possible to use the tool for criminal purposes.
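
To make the notion of prompt engineering more concrete, the short Python sketch below sends the same underlying question to a chat model phrased in two different ways. It is a minimal illustration only: the client usage follows the publicly documented OpenAI Python library, while the model name and the prompts themselves are illustrative assumptions, not examples taken from Europol’s report.

```python
# Minimal sketch of prompt engineering: the same underlying question,
# phrased two ways, can yield very different outputs from an LLM.
# Client usage follows the public OpenAI Python library (openai>=1.0);
# the model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    # Terse and ambiguous: likely to produce a basic, generic answer.
    "Explain phishing.",
    # Precisely scoped: steers the model toward a specific, structured output.
    "In three bullet points, explain how phishing emails impersonate "
    "trusted senders, for a corporate security-awareness course.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

Even in this benign example, the second, more carefully engineered prompt will typically produce a more specific and structured answer; it is this same sensitivity to phrasing that malicious actors exploit when circumventing the moderation policy.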

Indeed, the next chapter of Europol’s report states that ‘the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime’. These crimes include fraud, impersonation, social engineering and cybercrime. Regarding cybercrime, ChatGPT is capable of producing code in a number of different programming languages and can therefore also create basic tools that may be used for a variety of malicious purposes. According to Europol’s report, ‘the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing. If prompts are broken down into individual steps, it is trivial to bypass these safety measures’.
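
The weakness Europol describes can be illustrated with a deliberately simplified sketch. The Python toy below is not OpenAI’s actual moderation system; it assumes a hypothetical keyword blocklist and shows why a filter that judges each prompt in isolation fails once a request is decomposed into individually innocuous steps.

```python
# Toy illustration of why step-by-step decomposition defeats naive,
# per-prompt filtering. This is NOT OpenAI's real safeguard; the
# blocklist and the prompts below are hypothetical.

FLAGGED_PHRASES = {"build a keylogger", "write ransomware"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    text = prompt.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

# The request stated as a whole is caught and refused...
print(naive_filter("Please build a keylogger for me"))  # True

# ...but each decomposed step reveals no flagged intent on its own,
# so a filter that sees one prompt at a time waves them all through.
steps = [
    "How do I read keyboard input in Python?",
    "How do I append text to a file?",
    "How do I run a script in the background?",
]
print([naive_filter(step) for step in steps])  # [False, False, False]
```

Real moderation systems are far more sophisticated than a keyword list, but the underlying difficulty is the same: individual steps carry little signal about the overall intent, which is why the report stresses discovering and closing such loopholes quickly.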

The recommendations provided by Europol’s report for national law enforcement agencies are as follows:

  • To raise awareness to ensure that any potential loopholes are discovered and closed as quickly as possible;
  • To better predict, prevent, and investigate different types of criminal abuse;
  • To develop law enforcement officers’ skills in order to make the most of models such as ChatGPT;
  • To engage with the technology sector’s stakeholders to ensure that safety mechanisms are improved;
  • To establish appropriate procedures and safeguards to ensure that sensitive information remains confidential and that potential biases are investigated and addressed.


The publication of Europol’s report on the impact of large language models on law enforcement comes at a critical time when debates are taking place in the European Union and internationally regarding the legal and ethical aspects of ChatGPT, as well as the legal implications of the use of AI-generated works in relation to copyright law.
