The Politics of Regulation in the Age of AI

Artificial Intelligence (AI) is developing extremely fast. The technology has clear benefits: it improves computation, diagnosis, and prediction (medical diagnoses, weather forecasts, road traffic), optimizes production capabilities, and reinforces cybersecurity. However, the radical transformation of our societies that AI entails, as the attention economy hijacks our minds, also presents various risks: the development of private algorithms not subject to French or European regulation; the opacity of algorithms; invasions of privacy and breaches of confidentiality; data loss; discrimination; and the unresolved question of responsibility for machine errors. The need for AI regulation, which various actors are already calling for, therefore appears obvious and urgent. Such regulation should define a framework that establishes general principles for AI and sets rules for its development.

Viewed through French and European lenses, three considerations appear fundamental to the regulation of AI: investment and innovation; digital sovereignty; and ethical and human considerations.

The European Commission published its data strategy and a White Paper on AI in February 2020. In these documents, the Commission holds that investment is one of the keys to supporting the development of AI and that it should be targeted at strategic sectors such as the environment, health, transport, defense, and security. The EU therefore plans to increase public and private investment to at least €20 billion annually over the next decade. Investment in schools and professional training is also needed to enable a “re-skilling” of European citizens in AI. SMEs and startups should receive specific support as well, to limit the burden that AI regulation could impose on them. Finally, innovation capabilities should be reinforced to create an environment that fosters innovation and experimentation, with solid reference testing and experimentation facilities.

National sovereignty and independence are also at stake when it comes to AI, which has long been a major focus for tech leaders across industries. Large corporations in every sector, from retail to agriculture, are trying to integrate machine learning into their products, while AI talent remains in acutely short supply. This combination is fueling a heated race to acquire top AI startups, many of which are still at early stages of research and funding. In this context, developing our own AI applications, technologies, and infrastructure, and building and promoting worldwide a European model of regulation grounded in European values, is crucial to guaranteeing our digital sovereignty. To do so, we must look closely at data, which fuels AI: we need to evaluate how data is created and how it can be used to better serve our economy and our citizens. This implies sovereign cloud solutions and easier data transfers, which could be achieved through the creation of common data spaces; at the European scale, these could take the shape of a common market for data. In addition, Europe needs to grasp the potential of exploiting “non-personal”, or industrial, data.

Finally, as regards ethical and human considerations, the European Union, and France in particular, are very active. The High-Level Expert Group on AI appointed by the European Commission defined, in April 2019, seven requirements for trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. In the same vein, the Commission’s White Paper on AI calls for an “ecosystem of trust” ensuring respect for fundamental rights, security, and regulatory stability. France is particularly involved in this area, having launched with Canada, in 2019, the Global Partnership on AI (GPAI), which organizes independent global expertise on AI and focuses on its ethical regulation.

France is also one of the champions of the AI working group of the United Nations’ High-level Panel on Digital Cooperation (HLP), which seeks to mitigate the risks of AI, notably by requiring that the decisions and outcomes of autonomous intelligent systems be explainable, and by ensuring that humans remain ultimately accountable for their use of such technologies. In line with France’s vision, humans should bear primary responsibility for damage caused by the use of AI systems. The French data protection law of 1978 illustrates this principle: it provides that a court decision cannot be based solely on the automated processing of information.

Civil society will demand ever more transparency and accountability, especially regarding algorithms. Governments therefore need to pursue their efforts toward AI regulation. For that regulation to be effective, it must give priority to, and simultaneously address, the three considerations outlined above.

These statements are attributable only to the author, and their publication here does not necessarily reflect the view of the members of the AI-Regulation Chair or any partner organizations.
