On November 8th, 2022, the Information Commissioner’s Office (the British DPA) published a document entitled ‘How to use AI and personal data appropriately and lawfully’, a guide to how data controllers should use AI systems in accordance with the law and, in particular, with people’s fundamental rights. The publication also contains a ‘frequently asked questions’ section which addresses specific issues that data controllers may have to deal with.
The ICO insists that data controllers should adopt a risk-based approach, which requires that a data protection impact assessment (DPIA) be carried out when the projected use of AI poses a high level of risk to people. The ICO also emphasises that the outputs of AI systems should be explainable to the people subject to them.
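To illustrate what a risk-based approach might look like at a practical level, the following is a minimal sketch of a pre-deployment screen that flags when a DPIA is likely to be needed. The risk factors, field names and the any-factor rule are illustrative assumptions for the sketch, not criteria taken from the ICO document.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Hypothetical description of a proposed AI deployment."""
    processes_special_category_data: bool
    makes_decisions_about_individuals: bool
    affects_vulnerable_people: bool
    is_large_scale: bool

def dpia_likely_required(use_case: AIUseCase) -> bool:
    """Illustrative screen: if any high-risk factor is present,
    treat the project as one needing a DPIA before deployment."""
    return any([
        use_case.processes_special_category_data,
        use_case.makes_decisions_about_individuals,
        use_case.affects_vulnerable_people,
        use_case.is_large_scale,
    ])

if __name__ == "__main__":
    credit_scoring = AIUseCase(
        processes_special_category_data=False,
        makes_decisions_about_individuals=True,
        affects_vulnerable_people=False,
        is_large_scale=True,
    )
    print(dpia_likely_required(credit_scoring))  # True -> carry out a DPIA
```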
The British DPA also underlines the need to comply with the data accuracy and data minimisation principles, stating that data controllers must ensure that “the data you use is accurate, adequate, relevant and limited”.
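At the code level, data minimisation might look like the sketch below, which keeps only the fields needed for a stated purpose before the data enters an AI pipeline. This assumes a pandas DataFrame; the column names and records are invented for illustration.

```python
import pandas as pd

# Hypothetical applicant records; all values are invented for illustration.
records = pd.DataFrame({
    "applicant_id": [1, 2, 3],
    "income": [32000, 48000, 27000],
    "postcode": ["AB1 2CD", "EF3 4GH", "IJ5 6KL"],
    "marital_status": ["single", "married", "single"],  # not needed here
})

# Data minimisation: keep only the fields that are adequate, relevant
# and limited to what the model actually needs for its stated purpose.
FIELDS_NEEDED_FOR_PURPOSE = ["applicant_id", "income", "postcode"]
minimised = records[FIELDS_NEEDED_FOR_PURPOSE]
print(minimised.columns.tolist())
```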
Another concern of the British DPA is mitigating the discrimination and bias that can result from the use of AI systems. This concern is closely related to data accuracy, since, according to the ICO, discrimination can stem from “imbalanced datasets and datasets reflecting past discrimination”.
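To make the link between imbalanced data and discrimination concrete, the sketch below checks how a training set is distributed across a protected attribute and compares outcome rates between groups. The dataset, attribute and alert threshold are assumptions for illustration, not figures from the ICO.

```python
import pandas as pd

# Hypothetical training data; values are invented for illustration.
training = pd.DataFrame({
    "sex": ["F", "M", "M", "M", "M", "M", "F", "F"],
    "label": [0, 1, 1, 0, 1, 1, 0, 0],
})

# Share of each group in the training set (checks for imbalance).
print(training["sex"].value_counts(normalize=True))

# Positive-outcome rate per group: a large gap here can signal that the
# data reflects past discrimination which the model would learn to repeat.
positive_rates = training.groupby("sex")["label"].mean()
print(positive_rates)

# Illustrative alert threshold (an assumption, not an ICO figure).
if positive_rates.max() - positive_rates.min() > 0.2:
    print("Warning: outcome rates differ markedly across groups; review the data.")
```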
In its fifth guideline, the ICO states that data controllers should “take time and dedicate resources to preparing the data appropriately”. This guideline, too, is closely related to the data accuracy principle, which requires, as the ICO puts it, that data remain “accurate, up-to-date, and relevant”.
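Two routine preparation steps that serve this principle are deduplication and the exclusion of stale records. The sketch below shows both, assuming pandas; the records, the verification field and the five-year freshness window are illustrative assumptions.

```python
import pandas as pd

# Hypothetical customer records; fields and dates are invented for illustration.
customers = pd.DataFrame({
    "customer_id": [101, 101, 102, 103],
    "email": ["a@example.com", "a@example.com", "b@example.com", "c@example.com"],
    "last_verified": pd.to_datetime(["2022-01-10", "2022-01-10",
                                     "2016-05-02", "2022-09-30"]),
})

# Remove exact duplicates so the same person is not over-represented.
customers = customers.drop_duplicates()

# Treat records not verified within an assumed five-year window as stale
# and exclude them until they are re-verified (the window is illustrative).
cutoff = pd.Timestamp("2022-11-08") - pd.DateOffset(years=5)
fresh = customers[customers["last_verified"] >= cutoff]
print(fresh)
```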
The British privacy watchdog also stresses that ensuring a proper level of security for AI systems is critical. This security principle requires that appropriate technical and organisational measures be taken, which may in turn require that a risk assessment be carried out.
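One common technical measure is pseudonymising direct identifiers before they reach the AI pipeline. The sketch below uses HMAC-SHA-256 from the Python standard library; the key handling is deliberately simplified and the identifier is invented for illustration.

```python
import hmac
import hashlib

# In a real system the key would come from a secrets manager,
# not a hard-coded constant; this value is illustrative only.
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash so the AI pipeline
    never sees the raw value, while the controller can still link records."""
    return hmac.new(PSEUDONYMISATION_KEY,
                    identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

print(pseudonymise("jane.doe@example.com"))
```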
In its document, the ICO emphasises the human dimension of AI systems. In particular, the ICO asserts that people have the right not to be subject to a solely automated decision when that decision may have legal or similarly significant consequences. Under this principle, it is critical that human reviewers be properly trained, so that they understand how the AI system works and are capable of overriding a decision made by the system when it appears to be incorrect.
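A minimal sketch of how meaningful human review might be wired into a decision flow appears below: low-confidence outputs are routed to a trained reviewer who can uphold or override them. The confidence threshold, the decision structure and the reviewer behaviour are all assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str       # e.g. "approve" or "reject"
    confidence: float  # model confidence in [0, 1]

# Illustrative threshold below which a human must review the decision.
REVIEW_THRESHOLD = 0.9

def decide(model_decision: Decision,
           human_review: Callable[[Decision], str]) -> str:
    """Route uncertain decisions to a trained reviewer, who may
    uphold or override the AI system's output."""
    if model_decision.confidence < REVIEW_THRESHOLD:
        return human_review(model_decision)
    return model_decision.outcome

# Hypothetical reviewer who overrides a rejection after inspecting the case.
def reviewer(decision: Decision) -> str:
    return "approve"

print(decide(Decision(outcome="reject", confidence=0.62), reviewer))  # approve
```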
Additionally, the ICO addresses the issue of accountability when an AI system is deployed with the involvement of an external supplier. Allocating responsibility between the parties is important, and can be a tricky issue when an international transfer of data is involved.
Furthermore, in the ‘FAQ’ part of the document, the ICO mainly revisits the principles outlined above, providing concrete examples and scenarios. It notes, for instance, that AI systems do not have to be perfectly accurate in order to be deployed, provided the data controller takes effective steps “to ensure that any incorrect outputs of the AI system can be quickly remedied”.
The ICO then addresses the issues of data bias and discrimination avoidance, data minimisation, and measures related to automated decision making which may have legal or similarly important consequences.
Interestingly, the ICO also underlines that AI is not regulated per se: the deployment of AI systems is neither authorised nor prohibited as such. Data controllers must instead adopt a risk-based approach: AI systems that pose too high a risk to people’s rights must not be deployed, while systems which do not pose a high risk may be.
The British watchdog notes, however, that data controllers do not necessarily have to ask people for their consent, since they can rely on legal bases other than the data subject’s consent. Moreover, publishing an AI policy document is not mandatory. Finally, as previously mentioned, a chain of responsibility must be delineated so that the respective roles of the AI user/deployer and the third-party system supplier are clear.
This document is a helpful step toward understanding precisely how data controllers must proceed to deploy an AI system lawfully. Other European DPAs have already issued similar guidelines on more specific issues, such as the CNIL’s report on smart camera devices.