AI-Generated Misinformation Is Everywhere. ID’ing It May Be Harder Than You Think

UMD experts explain the emotional and cognitive challenges of spotting it, and offer strategies to avoid being misled.

Artificial though it may be, the concept of “intelligence” doesn’t seem to jibe with a computer-generated image of uniformed cats toting assault rifles.

Yet that visual slur, which supports a debunked story about immigrants in Ohio eating pets, has become a signature image from the 2024 U.S. presidential election. It was created using artificial intelligence (AI) by supporters of Republican nominee Donald Trump and circulated online by the former president himself.

In the first election to play out in the era of widespread access to generative AI, which can create seemingly original content, disreputable websites and our social media feeds have featured a deluge of fabricated and misleading text and visuals.

To learn more about the role of this technology in the contest, Maryland Today spoke to three University of Maryland faculty experts: College of Information Professor Jennifer Golbeck, who studies social media algorithms and the spread of extremist content; linguistics Professor Philip Resnik, who pursues computational approaches to language and psychology with a joint appointment in the Institute for Advanced Computer Studies; and journalism Assistant Professor Daniel Trielli M.Jour. ’16, who examines how search engines amplify misinformation and intentional disinformation, among other topics.

With AI technology rapidly developing, there are few controls on how it’s used, they said, leaving voters responsible for separating good information from bad.

What’s your marquee example of questionable AI use in the 2024 election?

Golbeck: Taylor Swift, for sure. It’s astonishing to me that Donald Trump thought he could get away with posting those “Swifties for Trump” images. I don’t think anyone thought they were real images of Taylor Swift, but that wasn’t the point. Whether or not it’s the reason she came out and endorsed Kamala Harris, it gave her the opportunity to say, “My image was used in AI, I’m concerned about this, and now I need to tell you who I’m really going to vote for.”

Resnik: The Biden robocall in New Hampshire (created by a political consultant associated with an obscure primary opponent’s campaign) that was designed to suppress turnout among Biden supporters in the Democratic primary was an indicator of what the future might bring.

Trielli: One of the biggest is the blatantly false charge that Haitian immigrants to Ohio were breaking a social norm by eating people’s pets. The point of this was not to have a local effect and flip Ohio, but to create a general, nationwide anti-immigrant sentiment for political gain. The AI-generated images that came out of that were not portrayed as being real, but they show how disinformation campaigns often work, by creating a negative emotional vibe against someone or something.

How are most of us encountering AI-generated mis- or disinformation about the election?

Trielli: The chances of encountering it, particularly on social media, are close to 100%. AI can generate emotionally appealing content with great speed, so there is a lot of it, and no matter who we are, someone we follow is going to share some of it. Beyond misinformation and disinformation, however, we are seeing a lot of generative AI content that might not be designed to make you believe something untrue happened, but to cause you to make a connection between a person and an idea, positive or negative.

Resnik: AI has the ability to tirelessly amplify false messages on social media, and there is work in political psychology and psychology more generally showing that even false messages break through and gain acceptance if they’re repeated enough and speak to people’s biases enough.

However, we don’t always encounter it as a blatant, whole-cloth fabrication. My former student Pranav Goel Ph.D. ’23 showed in his dissertation how you can take a legitimate news story and find something in it, maybe an oversimplified headline, that supports a narrative not present in the full piece but that can be amplified as misinformation. He showed in a very systematic way how this was happening with news sources associated with both the right wing and progressives.

Read the full article at Maryland Today.