How to Identify AI Imposters in Video, Audio and Text as Deepfakes Go Mainstream
Carl Froggett worked as one of Citibank’s chief information security officers for over two decades, protecting the bank’s infrastructure from increasingly sophisticated cyberattacks. And while criminal trickery, from low-tech paper forgery to rudimentary email scams, has long plagued banking and the wider world of business, deepfake technology powered by generative AI is something previously unseen.
“I am very worried about deepfakes being used in business,” said Froggett, who is now the CIO of Deep Instinct, a company that uses AI to combat cybercrime.
Industry experts say boardrooms and office cubicles are quickly becoming a battlefield where cybercriminals will routinely deploy deepfake technology in attempts to steal millions from companies. As a result, workplaces are a good testing ground for efforts to spot AI imposters before their scams succeed.
“The challenge we have is that generative AI is so realistic,” Froggett said.
Generative AI video and audio tools are being deployed, and improving, quickly. OpenAI unveiled its video generation tool Sora in February and, at the end of March, introduced an audio tool called Voice Engine that can realistically recreate an individual’s speech from a 15-second soundbite. OpenAI said it launched Voice Engine to only a small group of users given the dangers the technology poses.
Real-world examples of business deepfakes are increasing, said Nirupam Roy, an assistant professor of computer science at the University of Maryland, and it’s not just about criminal bank account transfers. “It is not difficult to imagine how such deepfakes can be used for targeted defamation to tarnish the reputation of a product or a company,” he said.
Roy and his team have developed a system called TalkLock that works to identify both deepfakes and shallowfakes, which he describes as relying “less on complex editing techniques and more on connecting partial truths to small lies.”