'A tactic scammers are increasingly adopting': This new hoax is 'getting harder to detect'

Image: Shutterstock



Business Insider senior correspondent Amanda Hoover put the scary possibilities of AI to the test by using an AI voice generator and some entry-level personal information to scam her own bank.

It almost worked, and it often does work on others.

“It's a tactic scammers are increasingly adopting. They take advantage of cheap, widely available generative-AI tools to deepfake people and gain access to their bank accounts, or even open accounts in someone else's name,” Hoover writes. “These deepfakes are not only getting easier to make but also getting harder to detect. Last year, a financial worker in Hong Kong mistakenly paid out $25 million to scammers after they deepfaked the company's chief financial officer and other staff members in a video call.”


Hoover reached out to several of the largest U.S. banks, asking what they're doing to detect and shut down deepfake fraud. Most did not respond. Citi declined to share details of its fraud detection methods, while JPMorgan Chase officials told her the bank is "committed to staying ahead by continuously advancing our security protocols and investing in cutting-edge solutions to protect our customers."

“But spotting deepfakes is tricky,” she says. “Even OpenAI discontinued its AI-writing detector shortly after launching it in 2023, reasoning that its accuracy was too low to even reliably detect whether something was generated by its own ChatGPT. Image, video, and audio generation have all been rapidly improving over the past two years as tools become more sophisticated: If you remember how horrifying and unrealistic AI Will Smith eating spaghetti looked just two years ago, you'll be shocked to see what OpenAI's text-to-video generator, Sora, can do now. Generative AI has gotten leaps and bounds better at covering its tracks, which is great news for scammers.”

Hoover nabbed information from her own bank using an AI voice that read off details like her debit card number and the last four digits of her Social Security number, both often available online and sold on the dark web (in fact, your own information is likely already out there, waiting for a buyer). She used the robot to generate “friendly phrases” asking the bank to update her email address or change her PIN, common tactics in identity theft and account-takeover attacks.

The voice even piled on the charm, saying things like "I'm doing well today, how are you?" to the unsuspecting human on the other end of the line. And the software is already cheap and low-risk enough for scam organizations to launch massive numbers of attacks and probe the security of countless banks.


An official at Citi advises people to “be suspicious of urgent requests and unexpected calls — even if they're coming from someone who sounds like a friend or family member. Try to take time to verify the caller or contact the person in a different way. If the call seems to be from your bank, you may want to hang up and call the bank back using the phone number on your card to confirm it.”

An executive at Ferrari was able to catch a scammer deepfaking the company CEO's voice by asking the caller what book he had recommended just days earlier. Hoover advises “limiting what you share on social media and to whom” to reduce the public information that can make you a target. She also recommends tools like two-factor authentication and password managers that store complex, unique passwords.

Hoover’s own self-made AI hacks weren’t very successful. The robot failed to change her debit card PIN and email address, but it did obtain her account balance. Unlike Hoover’s bank, other banks might allow people to change personal information, such as email addresses, over the phone, giving scammers easy access to an account.

“Whether my bank caught on to my use of a generated voice, I'm not sure, but I do sleep a little bit better knowing there are some protections in place,” Hoover writes.


Read the full Business Insider article here.

@2025 - AlterNet Media Inc. All Rights Reserved.