Can NIST advance “trustworthy AI” with a new version of the AI Risk Management Framework?



Is your AI trustworthy or not? As the adoption of AI solutions increases at all levels, consumers and regulators expect greater transparency about how these systems work.

Organizations today not only need to be able to identify how AI systems process data and make decisions to ensure they are ethical and unbiased, but they also need to measure the level of risk posed by these solutions. The problem is that there is no universal standard for creating reliable or ethical AI.

However, last week the National Institute of Standards and Technology (NIST) released an expanded draft of its AI Risk Management Framework (RMF), which aims to “address risks in the design, development, use and evaluation of AI products, services and systems.”

The second draft builds on its initial March 2022 version of the RMF and a December 2021 concept paper. Feedback on the draft is due September 29.


The RMF defines trustworthy AI as “valid and reliable, safe, fair and biases are managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy is enhanced.”

NIST’s Evolution Towards “Trustworthy AI”

NIST’s new voluntary framework provides metrics organizations can use to assess the trustworthiness of the AI solutions they use every day.

The importance of this cannot be overstated, especially when regulations such as the EU’s General Data Protection Regulation (GDPR) give data subjects the right to ask why an organization made a particular decision. Failure to answer could result in a hefty fine.

Although the RMF does not mandate best practices for managing AI risk, it does begin to codify how an organization can begin to measure the risk of deploying AI.

The AI risk management framework provides a model for making this risk assessment, said Rick Holland, CISO at digital risk protection provider Digital Shadows.

“Security leaders can also leverage the six characteristics of trustworthy AI to assess purchases and incorporate them into request for proposal (RFP) templates,” Holland said, adding that the model could “help defenders better understand what has always been a ‘black box’ approach.”

Holland notes that Appendix B of the NIST framework, titled “How AI Risk Differs from Traditional Software Risk,” provides risk management professionals with practical guidance on how to conduct these AI-related risk assessments.

The limits of the RMF

While the risk management framework is a welcome addition to support enterprises’ internal controls, there is still a long way to go before the concept of risk in AI is universally understood.

“This AI risk framework is useful, but it only scratches the surface of actually managing AI data projects,” said Chuck Everette, director of cybersecurity advocacy at Deep Instinct. “The recommendations here are for a very basic framework that any experienced data scientist, engineer and architect would already be familiar with. It’s a good baseline for those just getting into building AI models and collecting data.”

In this sense, organizations using the framework should have realistic expectations about what it can and cannot achieve. At its core, it is a tool to identify which AI systems are deployed, how they work, and the level of risk they pose (i.e., whether they are trustworthy or not).

“The NIST RMF guidelines (and playbook) will help CISOs determine what to look for and what to question regarding vendor solutions that rely on AI,” said Sohrob Jazerounian, head of AI research at cybersecurity provider Vectra.

The draft RMF includes guidance on suggested actions, references, and documentation that will enable stakeholders to fulfill the “map” and “govern” functions of the AI RMF. The finalized version, which will include information on the two remaining RMF functions – “measure” and “manage” – is due to be released in January 2023.
