Artificial intelligence (AI) isn't an emerging risk anymore; it's a present one. And unlike past technological shifts, AI doesn't fit neatly into a single coverage line. It's reshaping exposure across cyber, professional liability, employment practices and management liability, often all at once.
For insurers, brokers and insureds alike, that breadth is what makes AI different. Previous innovations tended to introduce risk in isolated pockets of the organization. AI, by contrast, is being embedded across business functions simultaneously — from hiring and HR to client service, data analysis and decision-making. As a result, its risk footprint is wider, more complex and more difficult to underwrite.
"We're going to start seeing AI-related risks across all lines of coverage," says Lindsey Dean, senior attorney and Claims director at RPS. "Right now, most policies are silent as to how they respond to AI."
That silence is becoming harder to ignore.
Technology Moving Faster Than Coverage
One of the defining challenges with AI risk is speed. Organizations are adopting AI tools rapidly, often faster than internal governance, risk management or insurance programs can adapt. In many cases, AI is being introduced incrementally, a chatbot here and an automated workflow there, without a holistic assessment of downstream exposure.
At the same time, policy language has not kept pace. Most executive lines policies were drafted before widespread AI adoption and don't explicitly address how AI-related claims should be treated. Coverage intent is often unclear, exclusions are inconsistent, and sublimits are just beginning to appear.
For now, many underwriters are watching rather than reacting. But claims activity is already emerging, and the legal system is beginning to test how traditional liability frameworks apply to AI-driven decisions.
Professional Liability in an AI-Driven World
As organizations embed AI into service delivery, professional liability exposures are changing in fundamental ways.
AI-generated outputs are increasingly being used to inform decisions, recommendations and strategies that clients rely on in real-world contexts. In some cases, AI is augmenting human judgment. In others, it's replacing parts of it altogether. Either way, the potential for mistakes, and for the errors and omissions (E&O) claims that follow, is evolving.
The challenge is not just whether the AI produces a wrong answer, but whether clients understand how that answer was generated, what its limitations are and who ultimately bears responsibility when something goes wrong.
Underwriters are paying close attention.
"A lot of the E&O underwriters are becoming aware of the issue," says Ron Kiefer, RPS area executive vice president and Professional Liability Product lead. "They know their clients are using AI to some degree now, and they're watching it."
At the same time, professional liability itself is changing, not just because of AI but because of who is being pulled into the market. Contractual requirements are driving entirely new classes of insureds to purchase E&O coverage, even when traditional professional exposure is minimal.
"I've received multiple requests for professional liability from trade contractors that would never buy this insurance otherwise," Kiefer says. "That's new for this market."
Risk managers are increasingly pushing liability downstream, extending professional liability requirements to vendors, subcontractors and service providers that historically operated without E&O coverage. When AI tools are layered on top of those contractual obligations, even informally, the risk profile becomes more complex. For insurers, this means underwriting portfolios that include insureds with limited loss history, limited understanding of professional liability and growing reliance on AI-enabled tools.
An Evolution in Claims
While some AI-related risks are still theoretical, employment practices liability is already producing real-world claims.
Organizations have experimented with AI-driven hiring tools designed to streamline recruitment, screen resumes or identify top candidates. In practice, some of those tools have produced biased outcomes, favoring certain demographic groups over others, leading companies to shelve them after internal reviews or external scrutiny.
But hiring algorithms are only one piece of the exposure.
Deepfake technology has introduced entirely new forms of workplace misconduct. AI-generated images, audio and video can be used to harass employees, impersonate executives or spread false information within an organization, often with devastating consequences.
"Employees may use deepfake technology to create fake images or leave offensive voicemails in the workplace," Dean explains. "This type of conduct can give rise to workplace harassment claims."
These claims are no longer hypothetical. RPS's 2026 Executive Lines Market Outlook Report cites a recent case in which a California jury awarded $4 million to an employee after explicit deepfake images were circulated in the workplace. That verdict underscores a broader trend — employment practices liability is expanding into areas that policies were never originally designed to contemplate.
Shadow AI and Unseen Risk
Perhaps the most insidious AI-related exposure in the executive lines is one many organizations don't even realize they have.
Even companies without formal AI strategies are exposed through unsanctioned employee use, commonly referred to as "shadow AI." Employees increasingly rely on publicly available AI tools to draft documents, analyze data, summarize information or compare contracts, often without understanding how those tools handle data.
"Shadow AI risk accompanies the increased usage of these AI tools by employees in the workplace," says Steve Robinson, National Cyber practice leader at RPS.
That risk spans multiple domains. Sensitive or proprietary information may be uploaded into public models, triggering data privacy violations. Client information may be mishandled, creating regulatory exposure or negligence claims. Intellectual property may be compromised without the organization realizing it until much later.
Robinson offers a simple example: An employee uploads a client's insurance policy into a public AI platform to compare forms or coverage. In that moment, a data breach may have already occurred.
These risks often sit at the intersection of cyber, professional liability and management liability, making them especially difficult to assign to a single policy or coverage line.
A Shift for Executive Lines
What makes AI fundamentally different from other emerging risks is its scope. AI isn't just adding incremental exposure to the executive suite. It's reshaping how decisions are made, how services are delivered, how employees interact and how data moves through organizations. That transformation cuts across nearly every executive line at once.
As a result, underwriting standards are evolving unevenly. Some carriers are introducing AI-related questionnaires. Others are experimenting with exclusions or sublimits. Many policies remain silent, leaving coverage intent unclear until a claim occurs. Claims, meanwhile, are emerging faster than coverage language.
AI-related risk is no longer confined to forward-looking discussions or theoretical scenarios. It's already influencing loss activity, litigation strategies and underwriting decisions, even if the market hasn't fully caught up.
AI isn't a future problem. It's already here. And, in quiet but meaningful ways, it's helping to rewrite the executive lines market in real time.
Get the full 2026 Executive Lines Market Outlook Report to explore the insights shaping the 2026 executive lines landscape.