The AI cybercrime wave has now reached 87% of global businesses

A new arms race is unfolding in cybersecurity, and artificial intelligence is at its core. Attackers are using AI to supercharge deception, making fraud more convincing, more scalable, and harder to detect. The numbers are staggering: 87% of global organizations faced an AI-powered cyberattack in the past year, according to SoSafe’s Cybercrime Trends 2025 report. And the threat is only accelerating.

“AI is dramatically scaling the sophistication and personalization of cyberattacks,” said Andrew Rose, Chief Security Officer at SoSafe. “While organizations seem to be aware of the threat, our data shows businesses are not confident in their ability to detect and react to these attacks.”

The Multichannel Attack Era

The financial sector has long been a prime target for cybercriminals, but AI is rewriting the playbook. Attacks no longer rely solely on phishing emails. Instead, they deploy a blend of deepfake voice calls, AI-generated video messages, and social engineering tactics across multiple communication platforms. The report found that 95% of cybersecurity professionals have observed an increase in these multichannel attacks over the last two years.

A case in point: attackers recently targeted WPP’s CEO in a sophisticated AI-powered assault. They used WhatsApp to establish initial trust and Microsoft Teams for further interaction, then culminated the attack with a deepfake voice call in an attempt to extract sensitive financial data.

“Targeting victims across a combination of communications platforms allows them to mimic normal communication patterns, appearing more legitimate,” Rose explained. “Simplistic email attacks are evolving into 3D phishing, seamlessly integrating voice, videos or text-based elements to create AI-powered, advanced scams.”

AI’s Double-Edged Sword

AI is not just an attack vector—it is also expanding the attack surface within organizations. As businesses race to implement AI-driven tools, they may be unknowingly opening themselves up to new vulnerabilities.

“Even the benevolent AI that organisations adopt for their own benefit can be abused by attackers to locate valuable information, key assets or bypass other controls,” said Rose.

“Many firms create AI chatbots to provide their staff with assistance, but few have thought through the scenario of their chatbot becoming an accomplice in an attack by aiding the attacker to collect sensitive data, identify key individuals and gather useful corporate insights.”

Despite this, 55% of businesses surveyed have yet to fully implement controls to mitigate the risks associated with their in-house AI solutions. The financial sector, which increasingly relies on AI-driven forecasting, risk analysis, and fraud detection, must be especially vigilant.

The Cost of Complacency

Business leaders are uniquely positioned to champion cybersecurity resilience. Their oversight of enterprise risk, compliance, and budget allocation means they must take a proactive stance on AI security. Key actions include:

  • Embedding AI security into risk management: AI-driven threats should be a standing item in enterprise risk discussions, with finance leaders collaborating closely with CISOs and CIOs to assess exposure.
  • Investing in multichannel threat detection: Given the rise of deepfake-enabled fraud, companies must go beyond traditional email security measures and deploy AI-powered defense tools capable of detecting synthetic media and behavioral anomalies.
  • Training employees on AI threats: SoSafe’s report underscores that even the most advanced security technology is ineffective without employee vigilance. Equipping teams with the knowledge to identify AI-driven deception is paramount.
  • Reassessing third-party risk: With supply chain vulnerabilities cited as one of the top emerging cyber threats, organizations must scrutinize vendor security practices, ensuring external partners are not inadvertently exposing them to AI-enabled attacks.

A Balancing Act

AI is both the problem and part of the solution. While cybercriminals leverage AI to launch more advanced attacks, AI-driven cybersecurity tools are also emerging as a crucial line of defense. However, as SoSafe’s CEO Niklas Hellemann warns, technology alone is not enough.

“While AI undoubtedly presents new challenges, it also remains one of our greatest allies in protecting organisations against ever-evolving threats,” Hellemann said.

“However, AI-driven security is only as strong as the people who use it. Cybersecurity awareness is critical. Without informed employees who can recognise and respond to AI-driven threats, even the best technology falls short.”
