There's no denying the impact that artificial intelligence (AI) is having on cyber risk. Thanks to hackers' increasing use of generative AI tools such as ChatGPT, phishing email attacks increased by 202% in the second half of 2024.1 Credential phishing attacks, which trick victims into entering their login information so attackers can access their accounts, increased by 703% over the same period. And 65% of phishing attacks now target organizations rather than individuals.2 AI is now used in as many as 82.6% of phishing emails,3 in part because generative AI and automation let hackers create and deploy phishing emails up to 40% faster.4

As the World Economic Forum says: "AI is changing the way we live, work and govern. It has immense potential to improve our world but also creates threats we cannot afford to ignore. Traditional methods of verifying identity — passwords, username and knowledge-based verification — are no longer sufficient. AI enables fraudsters to exploit vulnerabilities at scale in these outdated systems."5

Cyber risk now involves everything from AI-driven deepfakes that allow criminals to bypass facial recognition and verification systems, to AI-powered language models that enable phishing scams that are both more scalable and more convincing than ever before.

The Implications for Insurers

For Cyber insurers and their customers, the primary takeaway is that AI is enabling threat actors to execute their crimes faster and more efficiently, and that criminals no longer need to be technical experts to profit from these activities.

"AI is lowering the barrier to entry to becoming a cybercriminal," says Nick Yonce, RPS area assistant vice president. "From autonomous malware, to deepfakes, to ransomware-as-a-service, it's just easier to get into the game. Before, if you wanted to be a cybercriminal, you had to figure out how to do it all yourself. Now you can outsource it, which is causing the frequency of attacks to rise. While the total payout might be slightly less, it's happening more frequently because it's easier to get started."

For carriers, Yonce says, the challenge is in continually tailoring policies to address these constantly evolving AI-driven risks. For instance, data poisoning is a cyber attack in which a threat actor corrupts the data used to train an AI model in order to manipulate its output to the attacker's benefit (a simplified sketch appears below). As large companies increasingly set up internal, proprietary AI systems to help with research, transcription and other tasks, these types of attacks can have a profound impact on businesses as a whole.

"Aside from even a cyber claim, this could become a professional loss," Yonce says, "a failure of the management and the board. And that's just one example of a new type of attack. Just keeping a finger on the pulse of AI and how to be able to keep everything secure is going to become an ongoing challenge for insureds and carriers alike."

AI Coverage Gaps

However, the risks that businesses face when using, creating and implementing AI still aren't clearly addressed in today's Cyber policies. While attacks perpetrated via AI are generally covered, the liability arising from biased AI models or data hallucinations typically is not.

"Cyber policies are doing a good job of either implicitly covering or specifically stating that they're covering attacks that are perpetrated via AI means," says RPS National Cyber Practice Leader Steve Robinson. "But what hasn't really been clear, and still isn't, is what happens when an organization creates or changes an AI model and that model later has bias in it or hallucinations in the data. That isn't contemplated in a traditional Cyber insurance policy."

In some cases, an Errors and Omissions (E&O) policy could address those risks, though we're also starting to see more E&O policies with exclusions for AI risk. Robinson believes this will be an area of increased development in the years ahead. Will such coverage be added to Cyber insurance policies, written more specifically into E&O policies, or become its own niche product? That is still up for debate, as we're seeing pockets of development on all three fronts.

"There are several use cases," Robinson says. "There are the developers of AI technology itself and their E&O policies, so I think we're going to start to see AI-specific liability added to those. And then there are the users. While they might not be tech companies, to the extent that they're utilizing those models in their businesses, the liability that's created when the outputs don't do what they say they're going to do will be another area where businesses oftentimes won't have insurance coverage at all." He adds, "Maybe they don't need it, but if they do, what's going to fill that gap?"

Learn more about what's next for the Cyber market in the 2025 Cyber Market Outlook.

Sources

1"The 2024 Phishing Intelligence Report," SlashNext, 18 Dec 2024. PDF file

2Baker, Eliot. "Phishing Trends Report (Updated for 2025)," Hoxhunt, accessed 19 Aug 2025.

3"Phishing Threat Trends Report," KnowBe4, Mar 2025. PDF file.

4"One in Five People Click on AI-Generated Phishing Emails, SoSafe Data Reveals," SoSafe, 24 April 2023.

5Hall, Blake. "How AI-Driven Fraud Challenges the Global Economy — and Ways to Combat It," World Economic Forum, 16 Jan 2025.