Implementing The NIST Artificial Intelligence Risk Management Framework – Govern


The National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, published in January 2023, was designed to equip organizations with an approach that increases the trustworthiness of AI systems and fosters the responsible design, development, deployment, and use of AI systems.

The NIST AI Risk Management Framework will serve as a foundational component when developing an AI regulatory compliance program that meets the requirements of emerging AI laws. We have previously written about the EU AI Act in an article titled “An Introduction to the EU AI Act,” where we focused on applicability, timing, and penalties related to the EU AI Act. We also wrote about the requirements of Chapter 2 of the EU AI Act, titled “Requirements for High-Risk AI Systems.”

Given the complexity of the NIST AI Risk Management Framework, we are publishing a series of articles focused on implementing the framework. This article focuses on the first of the four core functions, Govern. NIST defines the Govern function as a cross-cutting function that is infused throughout AI risk management and enables the other functions of the process.

The Govern function includes six categories and 19 subcategory controls as listed in Table 1 below.

Along with the NIST AI Risk Management Framework, NIST also provides an AI Risk Management Playbook (AI RM Playbook) which contains supporting actions and considerations for each subcategory control. The AI RM Playbook provides suggestions on what should be assessed and documented relative to the Govern function within the NIST AI Risk Management Framework.

Examples include:

The Govern function and the specific actions in the AI RM Playbook map to the requirements for high-risk AI systems under Chapter 2 of the EU AI Act, illustrating the value of using the NIST framework to assess your AI systems and processes.

After assessing and documenting AI system activities against the Govern function, organizations should identify the appropriate AI compliance management activities to remediate gaps and demonstrate AI compliance readiness and maturity. The AI RM Playbook provides suggested actions relative to the Govern function. Examples include:

Every organization's AI compliance risk profile is different, and assessing it requires expertise both in conducting privacy risk assessments and in the unique challenges that AI systems present. It is important to evaluate AI compliance risks and gaps against an accepted framework such as the NIST AI Risk Management Framework, and then prioritize which compliance activities to implement in order to comply with relevant regulations such as the EU AI Act.

In our next article, we will focus on implementing the Map function of the NIST AI Risk Management Framework.
