The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
AI risk management is a key component of responsible development and use of AI systems. Responsible AI practices can help align the decisions about AI system design, development, and uses with intended aims and values. Core concepts in responsible AI emphasize human centricity, social responsibility, and sustainability. AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design, develop, and deploy AI to think more critically about context and potential or unexpected negative and positive impacts.
The profile can help organizations decide how to best manage AI risks in a manner that is well-aligned with their goals, considers legal/regulatory requirements and best practices, and reflects risk management priorities. Consistent with other AI RMF profiles, this profile offers insights into how risk can be managed across various stages of the AI lifecycle and for GAI as a technology.
March 17, 2022 · This initial draft of the Artificial Intelligence Risk Management Framework (AI RMF, or Framework) builds on the concept paper released in December 2021 and incorporates the feedback received.
July 8, 2022 · The Playbook is based on AI RMF 1.0 (released on January 26, 2023). It includes suggested actions, references, and related guidance to achieve the outcomes for the four functions in the AI RMF: Govern, Map, Measure, and Manage.
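The Playbook's structure, four functions each broken into outcomes with suggested actions, lends itself to simple internal tracking. The sketch below is a hypothetical illustration, not part of NIST's materials: the `Outcome` record, `Status` values, and `rmf_tracker` mapping are assumptions; only the four function names and the "GOVERN 1.1" subcategory identifier (its description paraphrased from AI RMF 1.0) come from the framework.

```python
# Hypothetical sketch of how an organization might track AI RMF Playbook
# outcomes per function. Record fields and status values are illustrative
# assumptions, not NIST-defined structures.
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    COMPLETE = "complete"


@dataclass
class Outcome:
    subcategory: str                # e.g. "GOVERN 1.1" (Playbook identifier format)
    description: str                # outcome text, paraphrased from AI RMF 1.0
    status: Status = Status.NOT_STARTED
    evidence: list[str] = field(default_factory=list)  # links to internal artifacts


# The four AI RMF functions, each mapped to the outcomes an organization tracks.
rmf_tracker: dict[str, list[Outcome]] = {
    "Govern": [],
    "Map": [],
    "Measure": [],
    "Manage": [],
}

rmf_tracker["Govern"].append(
    Outcome(
        subcategory="GOVERN 1.1",
        description="Legal and regulatory requirements involving AI are "
                    "understood, managed, and documented.",
    )
)
```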
December 14, 2021 · AI risk management is as much about offering a path to minimize anticipated negative impacts of AI systems, such as threats to civil liberties and rights, as it is about identifying opportunities to maximize positive impacts.
January 26, 2023 · The framework equips organizations to think about AI and risk differently. It promotes a change in institutional culture, encouraging organizations to approach AI with a new perspective, including how to think about, communicate, measure, and monitor AI risks and their potential positive and negative impacts.
August 18, 2022 · The AI Risk Management Framework (AI RMF) can help organizations enhance their understanding of how the contexts in which they build and deploy AI systems may interact with and affect individuals, groups, and communities.
January 24, 2023 · Guidance on human factors and human-AI teaming in the context of AI risk management. NIST intends to investigate how human-AI teams should best be configured to reduce the likelihood of negative impacts or harms to individuals, groups, communities, and society.
July 26, 2024 · This document is a cross-sectoral profile of and companion resource for the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden's Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence.