NIST pushes framework to foster trusted AI
The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has published new guidance to aid organizations in the design, development, deployment and use of artificial intelligence (AI).
As America’s leading authority on biometrics standards and benchmarking, NIST could have a major influence on the industry through its guidance on trustworthy AI.
According to the Artificial Intelligence Risk Management Framework (AI RMF 1.0), AI technologies can benefit society, but their potential harms cannot be ignored. Risks connected with AI include biases that can affect people’s lives in several ways, from negative experiences with chatbots to rejected job and loan applications.
“This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” comments Commerce Deputy Secretary Don Graves.
“It should accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all.”
More specifically, the document comprises two sections. The first discusses the aforementioned risks connected to AI and lists the characteristics of “trustworthy” AI.
The second highlights four core functions to help businesses address the risks of real-world AI applications: govern, map, measure and manage.
The “govern” function looks at establishing a “culture of risk management” that is “cultivated and present,” while the “map” function aims to recognize the context of specific AI applications and the risks associated with it.
The “measure” function ensures that identified risks are assessed, analyzed and tracked, while the “manage” function covers prioritizing risks and acting on them based on their projected impact.
“The AI Risk Management Framework can help companies and other organizations in any sector and any size to jump-start or enhance their AI risk management approaches,” explains NIST Director Laurie Locascio.
“It offers a new way to integrate responsible practices and actionable guidance to operationalize trustworthy and responsible AI. We expect the AI RMF to help drive development of best practices and standards.”
The AI RMF 1.0 was mandated in a Congressional directive from January 2021. Its development reflects around 400 sets of formal comments NIST received from some 240 organizations during the drafting stages of the framework.
NIST has also published an AI RMF Playbook, a series of guidelines to help organizations navigate and implement the framework.
The agency plans to collaborate with the AI community to update the framework regularly. NIST also said it would launch a resource center for trustworthy and responsible AI.
Meanwhile, in Europe, legislators are also working on an artificial intelligence-focused regulatory framework designed to foster the deployment of ethical AI.
Commonly known as the AI Act, the proposed legislation is being amended to define more precisely what counts as remote biometric identification and which criteria classify an AI system as high-risk.