The Artificial Intelligence Risk Management Framework (AI RMF), issued by the National Institute of Standards and Technology (NIST) on 26 January 2023, is gaining significance in the governance of generative AI (GenAI). The framework is built on four core functions (Govern, Map, Measure, and Manage) that help organisations develop and evaluate trustworthy AI systems.
On 26 July 2024, NIST issued NIST-AI-600-1, the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, a companion resource focused on the risks unique to generative AI. The growing importance of the AI RMF is evident in the American laws and executive orders that incorporate or reference it, including the White House's October 2023 executive order on safe, secure, and trustworthy AI and California's September 2023 executive order, which directed the development of state guidelines based on the NIST framework. Colorado's new law, Consumer Protections for Artificial Intelligence, provides an affirmative defence for organisations that comply with the AI RMF. The GenAI profile identifies 12 risk categories specific to generative AI, including confabulation (commonly called hallucination) and data privacy risks. Jonathan Tam notes that the framework provides a solid starting point for organisations' compliance efforts.
The NIST framework defines 11 characteristics of trustworthy AI, including validity, reliability, safety, and transparency. The framework is voluntary; it helps organisations assess risks, develop responses to them, and establish appropriate governance structures. In 2024, NIST also launched the United States AI Safety Institute and the related AI Safety Institute Consortium to develop the AI RMF further.