New York, Sunday, September 8, 2024

AI’s Secret Weapon: How Visa Thwarted $40 Billion in Fraud


Visa Fights Back Against AI-Powered Fraud with AI of Its Own

The rise of artificial intelligence (AI) has driven advances across many fields, but it has also handed cybercriminals new tools for fraud. Payments giant Visa is countering the threat with AI and machine learning of its own, using sophisticated algorithms to detect and prevent malicious activity.

Key Takeaways

  • Visa uses AI to combat fraud, preventing $40 billion in fraudulent activity in the past year.
  • Criminals are using AI to generate fake credit card numbers and perform automated enumeration attacks.
  • Visa’s AI analyzes over 500 attributes of each transaction, assigning a real-time risk score to flag suspicious activity.
  • The rise of generative AI, particularly tools like ChatGPT, is fueling new forms of scams including convincing phishing messages, voice cloning, and deepfakes.
  • Experts warn that these AI-powered scams are becoming increasingly sophisticated and cost-effective for criminals.

AI-Driven Fraud: A Growing Threat

Cybercriminals are increasingly turning to generative AI, voice cloning, and deepfakes to create more convincing scams. James Mirfin, global head of risk and identity solutions at Visa, highlighted the use of AI in romance scams, investment scams, and pig butchering, where criminals manipulate victims into handing over their money.

"They’re using some level of artificial intelligence, whether it’s a voice cloning, whether it’s a deepfake, whether it’s social engineering. They’re using artificial intelligence to enact different types of that," Mirfin explained.

Generative AI tools like ChatGPT have made it easier for criminals to create authentic-looking phishing messages, deceiving victims into divulging sensitive information. Okta, a leading identity and access management company, has reported that scammers now need less than three seconds of audio to clone someone’s voice, enabling them to impersonate loved ones or even convince unsuspecting bank employees to transfer money.

Paul Fabara, chief risk and client services officer at Visa, stated in the firm’s biannual threats report, "With the use of Generative AI and other emerging technologies, scams are more convincing than ever, leading to unprecedented losses for consumers."

The Cost of AI-Enabled Scams

Deloitte’s Center for Financial Services predicts that AI-powered fraud could cause $40 billion in losses in the U.S. by 2027, up sharply from $12.3 billion in 2023. The growth is driven in part by the fact that criminals can target many victims at once with the same AI tools, making their schemes far more cost-effective.

High-profile cases have highlighted the potential impact of these scams. In Hong Kong, a firm lost $25 million to a fraudster who used a deepfake to impersonate the company’s CFO. A similar incident in Shanxi province, China, saw an employee tricked into transferring 1.86 million yuan ($262,000) after a deepfake video call with a fraudster posing as her boss.

Visa’s Fightback: AI Against AI

Recognizing the growing threat, Visa has invested $10 billion in technology over the past five years to counteract fraud and enhance network security. The company employs an AI-powered system that analyzes over 500 attributes of each transaction and assigns a risk score in real time. This helps identify suspicious activity, including enumeration attacks, in which criminals use automated tools to test stolen credit card details at scale.
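To make the idea concrete, here is a minimal, illustrative sketch of real-time risk scoring. Every attribute name, weight, and threshold below is hypothetical; Visa's production system evaluates more than 500 attributes with learned models rather than hand-tuned rules.

```python
# Toy sketch of real-time transaction risk scoring. All signals, weights,
# and thresholds are invented for illustration, not Visa's actual model.

from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float               # transaction amount in USD
    country_mismatch: bool      # card country differs from merchant country
    attempts_last_minute: int   # recent authorization attempts on this card
    new_merchant: bool          # cardholder has never used this merchant


def risk_score(tx: Transaction) -> float:
    """Combine a handful of signals into a 0-1 risk score (toy weights)."""
    score = 0.0
    if tx.amount > 1000:
        score += 0.3
    if tx.country_mismatch:
        score += 0.25
    if tx.new_merchant:
        score += 0.15
    # Rapid repeated attempts are a classic enumeration-attack signal.
    if tx.attempts_last_minute > 5:
        score += 0.4
    return min(score, 1.0)


def decide(tx: Transaction, threshold: float = 0.6) -> str:
    """Flag the transaction for the issuer if the score crosses a threshold."""
    return "flag" if risk_score(tx) >= threshold else "approve"
```

In this sketch, a small-value purchase with a country mismatch and nine attempts in the last minute scores 0.8 and is flagged, while an ordinary transaction sails through; the point is that the decision is driven by the combination of attributes, not any single one.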

Visa’s AI system is constantly learning and evolving, adapting to new fraud patterns as they emerge. As Mirfin emphasized, "If you see a new type of fraud happening, our model will see that, it will catch it, it will score those transactions as high risk and then our customers can decide not to approve those transactions."

Furthermore, Visa employs AI to assess the likelihood of fraud in token provisioning requests, targeting criminals who use social engineering techniques to obtain tokens for fraudulent transactions.

The Future of Fraud: A Constant Battle

The use of AI in fraud is likely to become even more sophisticated in the future, requiring continued investment and innovation from cybersecurity experts and payment processing companies. Visa’s commitment to leveraging AI to counter these threats demonstrates a proactive approach to protecting consumers and businesses from fraud.

However, the battle against AI-powered fraud is a constant one. As criminals continue to refine their techniques, so too must companies like Visa develop new solutions to stay ahead and mitigate the risk of significant financial losses. The future of cybersecurity will likely be marked by ongoing advancements in AI and a race to stay one step ahead of those who seek to exploit it for malicious purposes.

Article Reference

Sarah Thompson
Sarah Thompson is a seasoned journalist with over a decade of experience in breaking news and current affairs.

