On June 6, 2024, the European Center for Digital Rights (Noyb) filed complaints with 11 European Data Protection Authorities (Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland and Spain) about Meta’s intention to change its privacy policy regarding, among other things, the use of personal data to train its current and future Artificial Intelligence (AI) technologies.
Meta has begun informing European users about its planned privacy reforms, which will go into effect on June 26. As well as highlighting the fact that Facebook’s parent company provides “no indication of the purposes of such systems,” Noyb argues that Meta violates “at least Article 5(1) and (2), 6(1) and (4), 9(1), 12, 13, 17(1)(c), 18, 19, 21(1) and 25” of the General Data Protection Regulation (GDPR).
According to Noyb, “Meta has no legitimate interest under Article 6(1)(f) GDPR that would override the interest of the complainant (or any data subject) and no other legal basis to process such vast amounts of personal data for totally undefined purposes”. Applying the three-step test, Noyb observed the following:
- Meta neither claims – let alone proves – that it pursues any legitimate interest recognizable under Article 6(1)(f) GDPR
- The mere use of a broad category of various technologies constitutes so-called “means”, not a legitimate interest in itself.
- Compared to the legitimate interests named in the GDPR or accepted in case-law, the mere extraction of personal data to use for commercial gain is not a “legitimate interest”.
- Finally, Meta tries to process an enormous pool of personal data, which (at least partly) contains personal data that cannot be processed based on a “legitimate interest”.
- Meta attempts to process personal data far beyond anything that is “strictly necessary” for the (undisclosed) potential purposes.
- This can also be demonstrated by the many existing AI systems that were trained and run on a much smaller dataset.
- Meta fails the balancing test due to a) the initial unlawful collection of personal data, b) the exceptionally large and unlimited amount of personal data (including non-public data), c) the highly risky nature of the technology involved, d) the impossibility of objecting once one’s data has already been used, e) the disproportionate market power that Meta exercises over its users, f) the existence of further processing clearly unrelated to the original one, g) a scope of processing well beyond the expectations of the data subject and h) a lack of compliance with (minimum) industry standards.
Regarding the other violations of the GDPR provisions, Noyb argues that Meta has not revealed the specific objective that justifies the processing of personal data. Furthermore, it would appear that data subjects only have the right to object based on the new privacy policy. The required “concise, transparent, intelligible and easily accessible” information “using clear and plain language” is also missing. Finally, the data subject’s right to be forgotten cannot be upheld since, according to Meta, the process of “ingesting” personal data into “artificial intelligence technology” is irreversible.
In light of Meta’s plan to irreversibly process the complainant’s personal data from June 26th, 2024, Noyb has asked the 11 Data Protection Authorities (DPAs) to urgently issue a decision “to prevent the imminent processing of the complainant’s – and 300 million EU/EEA residents’ – personal data without the consent of these data subjects”; to “fully investigate the matter under Article 58(1) GDPR”; and to “prohibit the use of personal data for undefined ‘artificial intelligence technology’ without the opt-in consent of the complainant – and indeed other data subjects”.
Noyb’s complaint seems to have had an impact: on June 12th, 2024 the Hamburg Commissioner for Data Protection and Freedom of Information (HmbBfDI) published a press release addressing concerns about Meta’s new privacy policy. The press release invites Meta users to consider whether they wish to exercise their legal right to object, before the deadline of 26 June 2024, to the processing of their personal data to train AI systems. The HmbBfDI took this action because, once a large language model has been trained using personal data, the operation cannot easily be undone, which is why it is important that users give this matter careful consideration.
For others who may also be affected by Meta’s new privacy policy, the German DPA suggests that they object via Meta’s online form. Given that Meta also uses data purchased from third parties to train its AI models, it would be helpful to understand the extent of the impact on users. Furthermore, the HmbBfDI has expressed concern that Meta remains “vague about what constitutes AI technology”. It also notes that European DPAs are “currently clarifying whether the legitimate interest, as proposed by Meta, or the explicit consent of the data subject must be used as a legal basis”.
On June 10th, 2024, the French DPA (CNIL) published a second set of practical information sheets and a questionnaire on the development of AI systems, which are subject to public consultation until September 1, 2024. The use of the legitimate interest legal basis for the development of an AI system is one of the issues addressed in these documents. While the CNIL acknowledges that legitimate interest is often used as a legal basis for the development of AI systems, it stresses the need to comply with certain conditions and to implement sufficient safeguards.
For the time being, Meta has decided to delay the training of its LLMs “using public content shared by adults on Facebook and Instagram”. This decision has been welcomed by the Irish Data Protection Commission (DPC).
T.K.