TRAILS Faculty Launch New Study on Perception Bias and AI Systems

Researchers study AI-generated information and human perception bias with a $150K seed grant from TRAILS.

Perception bias is a cognitive bias that occurs when we subconsciously draw conclusions based on what we expect to see or experience. It has been studied extensively, particularly as it relates to health information, the workplace environment, and even social gatherings.

But what is the relationship between human perception bias and information generated by artificial intelligence (AI) algorithms?

Researchers from the Institute for Trustworthy AI in Law & Society (TRAILS) are exploring this topic, conducting a series of studies to determine how much bias users expect from AI systems, and how AI providers explain to users that their systems may rely on biased data.

The project, led by Adam Aviv, an associate professor of computer science at George Washington University, and Michelle Mazurek, an associate professor of computer science at the University of Maryland, is supported by a $150K seed grant from TRAILS.

It is one of eight projects that received funding in January when TRAILS unveiled its inaugural round of seed grants.

Mazurek and Aviv have a long track record of successful collaborations on security-related topics. Mazurek, who is the director of the Maryland Cybersecurity Center at UMD, says they’re both interested in how people make decisions related to their online security, privacy and safety.

She believes that decision-making based on AI-generated content—particularly how much trust is placed in that content—is a natural extension of the duo’s previous work.

