On November 19, 2020, the United Nations Interregional Crime and Justice Research Institute (UNICRI) Centre for AI and Robotics released its latest collaborative report, Malicious Uses and Abuses of Artificial Intelligence, with the aim of providing law enforcement agencies, policymakers and other organisations with information on existing and potential attacks involving and leveraging AI.
‘‘In line with the goal to contribute to the body of knowledge on AI, this report, a joint effort among Trend Micro, the United Nations Interregional Crime and Justice Research Institute (UNICRI), and Europol, seeks to provide a thorough and in-depth look into the present and possible future malicious uses and abuses of AI and related technologies’’.
According to the report, ‘‘building knowledge about the potential use of AI by criminals will improve the ability of the cybersecurity industry in general and law enforcement agencies in particular to anticipate possible malicious and criminal activities’’. It will also help ‘‘to prevent, respond to, or mitigate the effects of such attacks in a proactive manner’’.
The document also proposes recommendations on how to mitigate these risks and consists of four main sections:
- Current malicious or abusive use of AI (for which there are documented cases and research)
- Future malicious or abusive use of AI (for which there is no evidence or literature yet but the examination of current trends in underground forums ‘‘can provide insights about what the malicious abuse of the AI threat landscape might look like in the near future’’)
- Recommendations (this section proposes a non-exhaustive list of recommendations on how to enhance preparedness to address the current and potential evolution of malicious and abusive use of AI)
- A case study: A Deep Dive Into Deepfakes
From a regulatory perspective, the main recommendations proposed in the document are:
- AI for Good:
- Harness the potential of trustworthy AI technology as a technically robust crime-fighting tool.
- Promote responsible AI innovation and the exchange of best practice in public forums.
- Further research:
- Enhance cyber resilience to present and future malicious uses or abuses of AI by developing specific, in-depth, forward-looking threat assessments and by continuously mapping the landscape of AI threats.
- Adopt risk management approaches to classify the threats stemming from current and future uses and abuses of AI and prioritize the response measures accordingly.
- Secure AI design frameworks:
- Promote the development of robust AI systems that follow security-by-design and privacy-by-design principles.
- Develop specific data protection frameworks to enable the continuous development of, experimentation with, and training of AI systems in a secure and controlled environment (the sandbox concept), so as to ensure maximum accuracy of AI systems.
- Encourage the adoption of a human-centric approach.
- Set out technical standards to promote cybersecurity by design.
- De-escalate politically loaded rhetoric on the use of AI for cybersecurity purposes, as such rhetoric can obstruct the ability of the cybersecurity industry and law enforcement authorities to respond to and stay ahead of cybercriminal activity.
- Promote the development of internationally applicable and technologically agnostic policy response measures to prevent the potential malicious use of AI without impeding the innovative and positive applications of AI.
- Acknowledge that the implementation of AI for cybersecurity and crime-fighting purposes can affect individual rights; address these concerns by systematically fostering informed public debate in order to generate public consent and develop appropriate measures.
- Ensure that the use of AI for cybersecurity purposes, including the use of AI by law enforcement authorities, follows ethical guidelines and is subject to effective oversight to provide for long-term sustainability.