On January 26, 2023, the National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce released the AI Risk Management Framework (AI RMF 1.0), developed in collaboration with the private and public sectors.
An AI risk management system is a continuous, iterative process planned and run throughout the entire lifecycle of an AI system, and it requires regular, systematic updates. The goal of the AI RMF is to improve the ability of organisations to manage the various risks that AI systems pose by promoting the trustworthy and responsible development and use of AI systems. The framework is intended to be used by AI actors across the AI lifecycle on a voluntary basis.
First, the AI RMF addresses the potential harms related to the use of AI systems and stresses the need to improve risk management processes, discussing how organisations can deal with AI-related risks. The AI RMF presents as the principal challenges for AI risk management: the inability to measure AI risks appropriately, the difficulty of determining risk tolerance, the suboptimal allocation of available resources, and the treatment of AI risks in isolation from other critical risks (e.g., cybersecurity and privacy).
Second, considering that enhancing AI trustworthiness can reduce negative AI risks, the AI RMF addresses the characteristics of trustworthy AI. Against this background, AI systems should be valid and reliable, safe, secure and resilient, transparent, explainable and interpretable, privacy-enhanced, and fair. Privacy-enhanced methods are of particular interest, since the AI RMF acknowledges that “privacy-enhancing technologies (“PETs”) for AI, as well as data minimizing methods such as de-identification and aggregation for certain model outputs, can support design for privacy-enhanced AI systems”. At the same time, such methods can involve trade-offs: data sparsity, for example, can affect privacy-enhanced techniques and “result in a loss in accuracy, affecting decisions about fairness and other values in certain domains”.
It is worth remembering that the Commission’s proposal for an EU AI Act, which has been under negotiation since April 2021 through the ordinary legislative procedure (co-decision) at both the Council of the EU and the European Parliament, requires that providers of high-risk AI systems establish, implement, document, and maintain a risk management system to ensure compliance with the requirements set out in Chapter 2 of the AI Act (Articles 8 and 9 of the AI Act proposal).