A little-known federal agency, the National Institute of Standards and Technology, was tapped by the Biden administration to set testing parameters for ensuring generative AI systems are safe, secure, trustworthy and socially responsible.
BOSTON — No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it’s paramount AI systems are safe, secure, trustworthy and socially responsible.
But unlike the atom bomb, this paradigm shift has been almost completely driven by the private tech sector, which has been resistant to regulation, to say the least. Billions are at stake, making the Biden administration's task of setting standards for AI safety a major challenge.
To define the parameters, it has tapped a small federal agency, The National Institute of Standards and Technology. NIST’s tools and measures define products and services from atomic clocks to election security tech and nanomaterials.
At the helm of the agency's AI efforts is Elham Tabassi, NIST's chief AI advisor. She shepherded the AI Risk Management Framework published 12 months ago that laid the groundwork for Biden's Oct. 30 AI executive order. It catalogued such risks as bias against non-whites and threats to privacy.
Iranian-born, Tabassi came to the U.S. in 1994 for her master's in electrical engineering and joined NIST not long after. She is principal architect of a standard the FBI uses to measure fingerprint image quality.
This interview with Tabassi has been edited for length and clarity.
Q: Emergent AI technologies have capabilities their creators don’t even understand. There isn't even an agreed upon vocabulary, the technology is so new. You’ve stressed the importance of creating a lexicon on