Harnessing Market Forces: UMD Team Incentivizes AI Companies to Prioritize Safety
Tech companies are racing to build the best artificial intelligence (AI) models, but amid the intense competition, safety issues—like user privacy and biased data—often take a back seat. Ramping up government regulation is one way to address these concerns, but regulators have struggled to keep up with the rapid pace of AI development.
Recognizing the urgency of the issue, a team of University of Maryland researchers is developing a system that motivates tech companies to compete not only on capability, but on responsibility as well.
The UMD team has proposed the first-ever auction-based AI regulation framework that incentivizes safety. Their innovative solution is based on a fundamental economics principle: companies respond to market incentives.
“We realized that we need a market-driven regulatory framework, one that aligns safety with AI companies’ business goals,” says Furong Huang, an associate professor of computer science who is leading the UMD team. “Instead of fighting AI companies, we let ‘market forces’ work for us.”
Here’s how it works: Companies submit AI models to a regulator for approval, along with a monetary bid representing what they have spent on compliance. The regulator sets a minimum compliance threshold but also rewards going beyond it: the regulator randomly pairs up the submitted models and rewards the more compliant model in each pair. As a result, instead of striving to merely clear the bar, AI developers compete to exceed it.
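A minimal sketch of this pairing mechanism may help make it concrete. The function below is an illustration only, assuming a single auction round; the company names, threshold, and reward amount are invented for the example and are not taken from the UMD team’s actual formulation:

```python
import random

def run_auction_round(bids, min_threshold, reward):
    """Toy sketch of a pairing-based compliance auction.

    bids: dict mapping company name -> compliance spending (its all-pay bid).
    Every company pays its bid regardless of outcome; only models meeting
    the regulator's minimum threshold are admitted, and the more compliant
    model in each random pair earns the reward.
    """
    # Admit only models that meet the regulator's minimum bar.
    admitted = [name for name in bids if bids[name] >= min_threshold]
    random.shuffle(admitted)

    # All-pay: every company pays its bid up front.
    payoffs = {name: -bid for name, bid in bids.items()}

    # Pair admitted models; the higher-compliance model in each pair wins the reward.
    for a, b in zip(admitted[::2], admitted[1::2]):
        winner = a if bids[a] >= bids[b] else b
        payoffs[winner] += reward
    return payoffs

# Example: company C misses the bar and pays anyway; B outspends A and wins.
payoffs = run_auction_round({"A": 5, "B": 8, "C": 2}, min_threshold=3, reward=10)
```

Because the losing company still pays its bid, underinvesting in compliance is costly, which is the incentive the framework relies on.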
The math proves it works. The UMD team modeled AI regulation as an all-pay auction, in which every participant pays its bid whether or not it wins. Their analysis proved that, under this scheme, AI developers will submit models that exceed the compliance standard. Their results show a 15% increase in participation rates and a 20% rise in compliance spending compared with simpler regulatory approaches that only set minimum standards.
The Department welcomes comments, suggestions and corrections. Send email to editor [-at-] cs [dot] umd [dot] edu.