How a deep-fake experiment inspired Coalition's new AI offering

“I cloned a journalist’s voice in 20 minutes”

By Gia Snape

The rise of artificial intelligence (AI)-powered scams is rapidly shifting the cyber threat landscape. “Deep fakes” – voices, images, or videos manipulated to mimic an individual’s likeness – have become so realistic that many people would struggle to distinguish what’s real from what’s not.

That was the case for one voice-cloning experiment conducted by Tiago Henriques (pictured).

“I managed to successfully clone the voice of a journalist in just 20 minutes,” said Henriques, vice president of research at active cyber insurance provider Coalition.

On an NBC Nightly News segment last year, Henriques demonstrated the alarming ease with which publicly available AI programs can replicate voices that can then be exploited for malicious purposes.

He fed old audio clips of reporter Emilie Ikeda into a voice-cloning program, then used the cloned voice to convince one of Ikeda’s colleagues to share her credit card information during a phone call.

The experiment highlights the mounting dangers posed by the proliferation of deep-fake technologies. It also helped inspire Coalition to develop an affirmative AI endorsement for its US surplus and Canada cyber insurance policies.

“That’s what clicked for us,” Henriques said. “Because if we can do it even though we’re not really trying, people who do this full-time will be able to do it on a much bigger scale.”

Deep-fake scams and AI-driven cyber threats on the rise

Since the voice-cloning experiment, Henriques said, generative AI and similar technologies have advanced rapidly and grown more sophisticated. The landscape has become increasingly treacherous with the advent of large language models popularized by ChatGPT.

“Last year, I needed to gather about 10 minutes of audio to clone the journalist’s voice successfully. Today, you need three seconds,” he said. “I also had to collect different types of voices, like if she was angry, sad, or anxious. Now, you can generate all sorts of expressions in the software, and it can say whatever you want it to.”

From funds transfer fraud to phishing scams, the possibilities for exploiting these AI-generated voices are endless. Henriques stressed that the rapid advancement of AI technology underscores the urgency for robust risk mitigation strategies, especially employee training and vigilance.

“It’s important, but it’s also incredibly hard,” Henriques said. “We’ve had years and years of employee training, and we saw the number of phishing victims come down. But with the ultra-high-quality phishing campaigns, I don’t see things getting better.

“We need to work to teach employees that these things are happening and have better cybersecurity controls. This is a technology problem that needs to be solved by fighting fire with fire.”

‘No silver bullet’ against AI-driven cyber threats

Despite the looming specter of AI-driven cyber threats, Henriques remains cautiously optimistic about the future and calls for a balanced approach to addressing emerging threats.

“On certain fronts, I’m slightly more worried than others. I think people are overhyping it,” Henriques reflected. “I don’t think we’ll wake up tomorrow and have an AI that has found 1,000 new vulnerabilities for Microsoft. I think we’re far from that.”

What keeps Henriques up at night, however, is the increase in voice and email scams like the one he helped produce. But he also noted a silver lining: detection technologies are getting better at identifying synthetic content.

“The future of this is that we either get better at detecting these through technology or find other ways to fight this through information security behaviour,” he said.

Insurance carriers will also continue to innovate as cyber threats evolve. Coalition’s affirmative AI endorsement, for one, broadens the scope of what constitutes a security failure or data breach to cover incidents triggered by AI. This means that policies will acknowledge AI as a potential cause of security failures in computer systems.

Henriques stressed that this trend should be on brokers’ radars.

“It’s important that brokers are paying attention, asking clients if they are using AI technologies, and ensuring that they have some type of AI endorsement,” he said.
