NIST has an AI Risk Management Framework

This is cool:

«Public bias bounties could be a standard part of algorithmic risk-assessment programs in companies. The National Institute of Standards and Technology, the U.S.-government entity that develops algorithmic-risk standards, has included validation exercises, such as bounties, as a part of its recommended algorithmic-ethics program in its latest AI Risk Management Framework. Bounty programs can be an informative way to incorporate structured public feedback into real-time algorithmic monitoring.»

Although I guess we’d have to take into account the cost of verifying every bounty claim.
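As a rough illustration of that cost constraint, here's a minimal sketch of triaging bounty reports against a fixed verification budget. Everything here (names, severities, hour estimates) is hypothetical, not anything from NIST's framework:

```python
# Hypothetical sketch: triaging bias-bounty reports under a fixed
# verification budget. Severity and cost figures are made up.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class BountyReport:
    priority: float  # lower sorts first; use negative severity
    description: str = field(compare=False)
    verify_cost: float = field(compare=False)  # est. hours to reproduce

def triage(reports, budget_hours):
    """Verify the highest-severity reports first until the budget runs out."""
    heap = list(reports)
    heapq.heapify(heap)
    verified, spent = [], 0.0
    while heap:
        r = heapq.heappop(heap)
        if spent + r.verify_cost > budget_hours:
            break  # remaining reports wait for the next review cycle
        spent += r.verify_cost
        verified.append(r.description)
    return verified, spent

reports = [
    BountyReport(-9.0, "loan model penalizes certain ZIP codes", 8.0),
    BountyReport(-5.0, "caption model misgenders users", 4.0),
    BountyReport(-2.0, "typo in model card", 0.5),
]
picked, hours = triage(reports, budget_hours=10.0)
```

Even this toy version shows the trade-off: with a 10-hour budget, only the most severe report gets verified this cycle, and the rest queue up.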

(NIST is involved? Neat!)
