Accountants to the AI rescue
In search of ideas for ensuring a safe future with increasingly powerful AI, people have looked to lawmakers, coders, scientists, philosophers and activists.
They may be overlooking the most important inspiration of all: accountants.
New polling shared first with DFD finds that a wonky policy idea enjoys surprising popularity among American adults: requiring safety audits of AI models before they can be released.
Audits as a way to control AI don’t literally involve accountants; they’re an evolving idea for independently assessing the risks of a new system. Like financial audits, they aren’t exactly sexy, especially when more dramatic responses like bans, nationalization and new Manhattan Projects are on the table. That may explain why audits have not played an especially prominent role in policy discourse.
“It’s under-represented, under-understood,” said Ryan Carrier, a chartered financial analyst who advocates for AI audits.
But the Artificial Intelligence Policy Institute, a new think tank focused on existential AI risk, found that when it pitted 11 potential AI policy responses against one another in head-to-head preference questions, respondents chose the AI safety audit idea over the alternatives two-thirds of the time, second only to the vaguer option of “Preventing dangerous and catastrophic outcomes.”
AI “audits” have a unique layer of complexity. Because even the designers of large language models don’t fully understand their inner workings, the models themselves cannot be audited directly, said Ben Shneiderman, a professor emeritus of computer science at the University of Maryland and the author of “Human-Centered AI.”