-
29/03/2024
On March 21st, 2024, the United Nations General Assembly adopted its first resolution on artificial intelligence, a non-binding text that encourages the development of regulatory and governance approaches, frameworks and standards to ensure that safe, secure and trustworthy AI systems are created.
-
06/02/2023
The National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce released, on January 26, 2023, the AI Risk Management Framework (AI RMF 1.0), developed in collaboration with the private and public sectors.
-
14/10/2022
On September 28th, 2022, the European Commission released two proposals aimed at regulating civil liability in relation to AI-enabled systems, drawing on the considerations set out in the Commission’s White Paper on the use of such systems: a revised version of the Defective Product Liability Directive (PLD) and a Directive adapting non-contractual civil liability rules to Artificial Intelligence (AI Liability Directive). Combined with the proposal of April 21st, 2021, Laying Down Harmonised Rules on Artificial Intelligence (AI Act), these texts are intended to adapt national liability frameworks to the digital age, the circular economy and global value chains.
-
09/10/2020
According to an article by Politico on 8 October 2020, 14 EU Member States sent an unofficial document to the European Commission asking it not to over-regulate artificial intelligence.
-
24/08/2020
On August 21st, 2020, the High-Level Expert Group on Artificial Intelligence (AI HLEG) released its new deliverable on sectoral considerations regarding the Policy and Investment Recommendations for Trustworthy AI.
-
05/06/2020
The EIT Health Consultative Group, whose members include Professor Theodore Christakis (AI Regulation Chair at MIAI and co-director of the Grenoble Alpes Data Institute), submitted this report to provide informed views on the European Commission’s Data Strategy and AI White Paper.