David Vainer

Managing Partner & CEO of Alliance Risk

AI liability is real. Fifty-three AI-related securities lawsuits were filed in 2025. Companies training models face copyright suits. Chatbot makers defend harm claims. A tribunal ordered Air Canada to compensate a customer after its chatbot invented a discount policy. The SEC has fined companies for making false AI claims.

Your insurance wasn’t built for this. Your tech liability policy, your D&O coverage, your cyber plan: none anticipated AI systems. Coverage exists. Claims are happening. But gaps are widening. You need to know where AI risk sits across your policies. It’s no longer optional.

This guide maps the AI liability landscape. Five main claim types, how traditional policies respond, and what underwriters demand now.

The Five Categories of AI Liability Claims Courts Are Seeing Now

AI liability is not one risk but several. Each claim type stems from a different legal theory and hits a different part of your business.

Output Liability: When AI Gives Wrong Answers

Bad AI output creates simple but serious claims. A law firm used ChatGPT for case research. The AI cited fake cases. The firm used them in court filings. The bar investigated. This is output liability: the AI worked fine but generated false results that caused real harm.

Nippon Life sued OpenAI for $10 million, claiming ChatGPT enabled unlicensed law practice. The alleged harm: people made legal decisions based on AI outputs without ever talking to a real lawyer.

Output liability scales. A chatbot suggesting the wrong movie is harmless. A hiring algorithm rejecting protected classes is serious. A medical AI missing cancer triggers catastrophic damages.

IP Infringement: Training Data and Generated Content

AI models learn from existing work. They generate new content. Both create legal exposure.

Artists and authors claim models trained on their work without permission or pay. Musicians say their songs were scraped into music generators. Publishers say their content powers competing systems.

Google’s AI Overviews summarize web results without always linking back. Wolf River Electric sued, claiming Google stole the value of its content.

When AI generates images resembling copyrighted work, who’s liable? The user? The model builder? Both?

Courts will take years to decide. Underwriters are already asking: where did your training data come from? Can you prove you had rights to it?

Privacy and Data Violations: AI Processing Personal Data

AI systems often process large quantities of personal data. Careless handling creates liability. So does an AI that leaks private information.

Consider a company that builds an AI customer service chatbot trained on support transcripts containing names, account numbers, and financial information. If the model memorizes that data and reproduces it later, the company has a privacy breach. If a user asks about someone else’s account and the model retrieves their personal information from training data, that’s a data privacy incident.
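To make the exposure concrete, here is a minimal mitigation sketch: auditing model outputs against known sensitive strings from the training transcripts before a response reaches users. Everything here is hypothetical and for illustration only; the account-number format, the KNOWN_PII set, and the function names are assumptions, not any vendor’s API.

```python
import re

# Hypothetical sensitive values extracted from the support transcripts
# used for training; in practice these would come from a PII scan.
KNOWN_PII = {"ACCT-4417-9921", "ACCT-8803-1145"}

# Illustrative pattern for the account-number format in this example.
ACCOUNT_PATTERN = re.compile(r"ACCT-\d{4}-\d{4}")

def audit_output(model_output: str) -> list[str]:
    """Return any training-set PII the model reproduced verbatim."""
    return [m for m in ACCOUNT_PATTERN.findall(model_output) if m in KNOWN_PII]

if __name__ == "__main__":
    response = "Your balance for ACCT-4417-9921 is $1,204."
    leaks = audit_output(response)
    if leaks:
        print(f"Potential memorization leak, block response: {leaks}")
```

A filter like this only catches exact reproductions; paraphrased leaks need fuzzier matching, which is one reason underwriters ask how training data was scrubbed in the first place.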

The regulatory framework is tightening. The EU AI Act imposes strict requirements on high-risk AI systems processing personal data. Colorado’s AI Act creates obligations for companies deploying automated decision systems. More states are following. Each regulation expands what counts as improper handling of personal data in an AI context.

Discrimination and Bias: When AI Perpetuates Harm

AI systems learn from historical data. Biased data produces biased systems.

A hiring algorithm trained on past hires learns historical preferences. If the company historically hired fewer women, the algorithm learns to prefer men. No intent to discriminate. But the legal result is real discrimination.

Lending algorithms face the same exposure. A model denying credit by zip code may proxy for race or ethnicity, violating fair housing and credit laws.

Insurance algorithms face regulatory scrutiny. States are checking whether underwriting models unfairly exclude protected groups. The legal theory is simple: if an AI produces discriminatory outcomes, liability follows. Intent doesn’t matter. Disparate impact is the standard.
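The disparate impact standard has a common screening heuristic: the EEOC’s four-fifths rule, under which a protected group’s selection rate below 80% of the highest group’s rate flags potential adverse impact. A minimal sketch of the arithmetic, using made-up numbers:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants a model selects."""
    return selected / applicants

# Hypothetical screening outcomes from a hiring model
men_rate = selection_rate(selected=120, applicants=400)   # 0.30
women_rate = selection_rate(selected=36, applicants=200)  # 0.18

# Ratio of the protected group's rate to the highest group's rate
ratio = women_rate / men_rate                             # 0.60
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: potential disparate impact.")
```

Numbers like these, computed before deployment, are the bias-testing evidence underwriters ask for later in this guide.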

Safety and Harm: Chatbots, Autonomous Systems, and Physical Injury

The fifth category involves direct harm. A chatbot encourages a minor to self-harm. An autonomous vehicle strikes a pedestrian. An AI medical device misclassifies a tumor. The harm is physical or psychological.

The Garcia v. Character.AI case shows this clearly. A minor’s family alleged the chatbot engaged in inappropriate conversations encouraging self-harm. The claim rested on the company’s failure to put safety guardrails in place and monitor for harmful interactions.

These cases are rare today. They’re growing. As AI systems take on more consequential roles (controlling machinery, advising on medical treatment, guiding consumer behavior), the potential for direct harm increases. So does liability.

How Existing Insurance Policies Respond to AI Claims

Your insurance portfolio has several policies that can respond to AI claims. Knowing which covers what is critical for spotting gaps.

Technology Errors and Omissions Insurance

Tech E&O is the frontline for output liability. It covers software companies against claims of defective work, incomplete delivery, or failure to perform.

Client sues because your AI gave bad advice, made errors, or failed to perform? Tech E&O responds. It covers professional negligence, failure to exercise reasonable care, and product errors.

For AI, the application is direct. A client sues because your hiring model misfired and cost them qualified candidates? Tech E&O covers defense and potentially settlement.

But old policies have gaps. Pre-2023 policies often exclude “automated decision-making” or “algorithmic systems.” Modern policies are more explicit but narrower. They define covered AI systems tightly, excluding new use cases.

Tech E&O also requires reasonable care in system design. If you deployed an AI model without bias testing or data validation, the insurer can deny the claim. Underwriters now demand evidence of AI governance before issuing a policy.

Directors and Officers Liability Insurance

D&O covers claims against directors and officers personally: securities claims, shareholder derivative suits, regulatory investigations.

The “AI washing” enforcement actions by the SEC have created major D&O exposure. Companies that overstated AI capabilities, used unsupported AI claims in marketing, or failed to disclose material AI risks face shareholder class actions alleging securities fraud.

The 2025 shareholder class actions follow a pattern: company makes public AI statements. Statements turn out to be misleading. The company either oversold what the AI does or disclosed risks poorly. Shareholders sue. Directors and officers are named personally.

D&O covers the defense costs and, if liability is established, settlement or judgment. But D&O carries significant exclusions. Most exclude claims from intentional misconduct. If the board knew the AI claims were unsupported and approved them anyway, D&O coverage likely won’t apply. This creates a strong incentive for honest disclosure and rigorous board oversight of AI.

D&O also typically excludes claims covered by other insurance. If a shareholder sues alleging the AI product caused consumer harm, and product liability would cover that claim, the D&O insurer may deny coverage.

Cyber Liability Insurance

Cyber policies cover data breaches, privacy violations, and regulatory actions related to data security. As AI systems increasingly handle personal data, cyber liability becomes relevant.

When an AI model memorizes training data and reproduces it in user interactions, that’s a data breach. When an AI system processes medical records, financial information, or other sensitive data and it’s compromised, cyber liability responds.

Cyber policies typically cover notification costs, credit monitoring, regulatory fines (where permitted), legal defense, and business interruption if systems go offline.

Cyber policies are historically narrow, though. They cover “unauthorized access” to data. But if the AI system processes data the company had authorization to access, and the system leaks it unintentionally, coverage becomes uncertain. Is that “unauthorized access” or “negligent handling”? Policy language is often ambiguous.

Underwriters also worry about AI risks that don’t fit traditional cybersecurity frameworks. Poisoned AI models (deliberately trained on corrupted data), manipulated AI producing harmful content: are these data breaches or something else? Policies are being updated, but most in force today lack clear answers.

General Liability Insurance

General liability covers bodily injury, property damage, and personal injury claims. It’s relevant to AI primarily when AI systems control physical machinery or make decisions affecting physical safety.

An AI-controlled robot strikes a worker. An autonomous vehicle hits a pedestrian. An AI medical device misclassifies a tumor. General liability responds, assuming the injury fits bodily injury or property damage definitions.

General liability carries exclusions that complicate AI coverage. Most exclude “failure to warn” claims involving software updates. Most exclude “automated decision-making.” Some exclude “cyber” events, creating uncertainty when the boundary between physical and digital systems blurs.

Product Liability Insurance

Product liability covers defects in manufactured products. As more companies embed AI in physical products (autonomous vehicles, robots, medical devices, smart appliances), product liability becomes increasingly relevant.

The challenge: product liability was designed for traditional manufacturing defects. A part breaks. A weld fails. A material degrades. AI creates different defects. The system might work perfectly from a code perspective but produce biased, unsafe, or harmful outputs.

Underwriters are cautious about product liability for AI-embedded products. Policies are being rewritten to exclude or carve out AI risks. Some insurers won’t write coverage for companies with significant AI in their products. Others require robust AI governance, bias testing, and human oversight.

Where the Coverage Gaps Live

Five policies form your AI liability program. None was built for AI. All have gaps.

Algorithmic Discrimination: The Between-Policy Problem

Your hiring algorithm systematically rejects women for technical roles. Who covers that? Tech E&O? Product liability?

Tech E&O might respond on the theory that the algorithm performed defectively. But tech E&O typically covers claims by the client you sold the system to, not claims by job applicants harmed by it. Your customer bought your hiring tool and it discriminated? Tech E&O covers your defense against your customer’s claim. But applicants sue your customer directly, and their claims fall under your customer’s employment practices liability insurance (EPLI).

If you’re deploying the algorithm in your own hiring and applicants sue for discrimination, which policy responds? EPLI covers this. Most companies don’t have it on the shelf. Without it, the discrimination claim falls into a void.

Autonomous Decision-Making: Undefined Territory

A bank deploys an AI lending algorithm making credit decisions without human review. The algorithm denies credit based on factors the bank didn’t intend. Is that a decision the bank made (bank responsibility) or a decision the algorithm made (algorithm maker responsibility)?

Insurance has historically assumed companies make decisions and are responsible for them. Policies haven’t contemplated systems making decisions independently. Liability for autonomous decisions sits in ambiguous territory across multiple policies.

Training Data IP Infringement: Untested Waters

Did your company train its AI model on copyrighted material? Did you have a license? Did you perform due diligence?

If copyright holders sue, which policy responds? Tech E&O? General liability? Most policies have exclusions or limitations for IP infringement. Coverage remains uncertain. Underwriters now ask detailed questions about training data provenance. Policy language hasn’t caught up.

Regulatory Compliance: The Cost of New Laws

The EU AI Act imposes compliance obligations on companies deploying high-risk AI systems. Colorado’s AI Act creates similar requirements. More states are following.

The cost of compliance is significant: security consultants, compliance lawyers, bias auditors, documentation specialists, model retesting, ongoing monitoring, incident response planning. Insurance won’t cover routine compliance infrastructure. First-party or cyber liability might cover some costs tied to a breach, but they won’t cover the compliance buildout.

Shadow AI: Unmanaged Exposure

Shadow AI (tools employees deploy without IT approval) adds $670,000 to the average breach bill, per IBM’s 2025 research. An employee downloads ChatGPT, uses it for work, and uploads sensitive company data. That data may now sit in the tool’s training set.

Your insurance won’t explicitly address this. Cyber liability might respond if the breach is covered, but policies assume your company controls the systems handling sensitive data. Shadow AI assumes the opposite. Risk transfer becomes uncertain.

The D&O Exposure from “AI Washing”: How the SEC Is Enforcing

The SEC is bringing enforcement actions against companies that misrepresent AI capabilities or fail to disclose material AI risks. These actions trigger shareholder class actions naming directors and officers personally.

The pattern: company makes AI capability statements to investors or customers. Statements are unsupported or misleading. Shareholders bring fraud or misrepresentation claims. Directors and officers face personal liability.

Consider a company that stated in SEC filings it had deployed “AI-driven” underwriting reducing fraud by 40%. The SEC found the company experimented with AI but hadn’t deployed it at scale. Performance claims came from limited testing, not real-world results. The company settled with the SEC. Shareholders filed a class action. Directors and officers faced personal liability.

D&O covered the defense. But the board learned: AI capability claims must be truthful, specific, and substantiated. Vague statements like “we use AI to improve outcomes” are vulnerable to SEC challenge. Bounded statements like “our AI reduced processing time by 12% in controlled testing” are defensible.

The consequences for board governance are direct. The audit or risk committee must now oversee AI disclosures with the same rigor applied to financial disclosures. The company must document any AI capability claims. Material AI deployment risks must be disclosed.

D&O underwriters now ask about AI governance. Does your board have an AI committee or designated AI risk owner? Do you have an AI ethics review process? Are AI capability claims reviewed before public release? Underwriters’ willingness to issue or renew D&O may depend on the answers.

State and Federal AI Regulation: The Expanding Liability Landscape

The regulatory framework is fragmenting fast. The EU AI Act (effective 2024-2025) imposes obligations on companies placing AI in the EU market. High-risk systems face extensive requirements: human oversight, bias testing, impact assessments, documentation.

In the U.S., the picture is messier. No federal AI liability framework exists yet, though Congress is considering proposals. States are acting. Colorado’s AI Act covers automated decision systems impacting civil rights. California’s CCPA and privacy laws are being interpreted to apply to AI processing personal data. More states will pass AI-specific regulations in 2025 and 2026.

The fragmentation creates compliance challenges. A company deploying AI in Colorado, California, and the EU faces three different regulatory regimes with different requirements. Build to the most stringent, or build differently for each jurisdiction.

Insurance hasn’t caught up. Policies don’t explicitly cover state AI law compliance. They might cover costs tied to an incident or breach, but they won’t cover the routine infrastructure: impact assessments, bias testing, human oversight protocols, documentation.

Budget for AI regulatory compliance separately. It’s now a standard business expense, not an insurable risk.

Building Your AI Liability Insurance Program

Start with an inventory. Which AI systems do you run? What do they do? What data do they process? Who’s harmed if they fail?

Then layer policies strategically.

Tech E&O as the Foundation

Your tech E&O should explicitly cover AI. Does it cover machine learning? Deep learning? Generative AI? Be specific.

Ask your broker if the policy covers algorithmic bias. What does coverage require? Bias testing? Human oversight? Know these before you need them.

Confirm it covers discrimination litigation defense. If applicants sue for hiring algorithm bias, does tech E&O pay?

D&O for Securities Exposure

Your D&O should explicitly cover AI claims. Does it cover “AI washing” shareholder lawsuits? Confirm it contemplates AI.

Document AI governance with your underwriter. Clear governance of AI deployments and capability claims reduces securities risk from the start.

Cyber Liability for Data Protection

Does your cyber policy cover AI-related data breaches? Cost of notifying people if an AI leaks personal data? Regulatory fines from AI privacy violations?

Ask if it covers AI-specific incidents. Poisoned models? Manipulated AI producing harmful content? These aren’t traditional cybersecurity, but they trigger breach notification.

Employment Practices Liability for Discrimination Claims

Using AI in hiring, promotion, or firing? You need EPLI. It covers discrimination claims from employees and applicants.

Standard EPLI now addresses algorithmic bias. Older policies might not. Confirm yours covers AI hiring and algorithmic discrimination.

Product Liability for Embedded AI

Selling products with AI inside? You need product liability. Many insurers are cautious about AI-embedded products. Expect higher premiums, tighter exclusions, or coverage denial without robust AI governance.

Document testing and validation. Show bias testing. Show human oversight. Show incident response plans. Underwriters want this evidence.

Consider Dedicated AI Liability Policies

A few specialty insurers now write policies designed for AI risks. Newer and more expensive, but built to fill gaps in traditional coverage.

These typically cover training data IP infringement, regulatory fines from AI privacy violations, bias audit costs after incidents, and shadow AI exposure. Worth considering if your company has significant AI exposure or large traditional policy exclusions.

What Underwriters Now Demand to See

AI underwriting has moved from caution to specificity. Underwriters want evidence, not assurances.

AI Governance Frameworks.

Someone in your organization is responsible for overseeing AI deployments. New AI systems go through approval before production. Risks are identified for each system, and controls are in place. Ethics review process. Incident logging and response protocol. Clear communication protocol when something fails.

Model Validation and Testing.

Validate models before deployment. Test for accuracy, bias, and safety. Produce reports. For high-risk systems: bias testing on protected characteristics. Documentation of results and corrections. For models making autonomous decisions or generating content: error rates, hallucination rates, harmful output rates. Perfection isn’t required. Understanding failure modes is.
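As a rough illustration of the artifact underwriters want, here is a minimal validation sketch that computes overall accuracy and per-group selection rates from holdout results. The record format and the numbers are hypothetical, and real validation covers far more (calibration, error types, statistical significance):

```python
from collections import defaultdict

def validation_report(records):
    """Summarize accuracy and per-group selection rates.

    `records` holds (group, prediction, label) tuples, with
    prediction/label coded 1 (selected) or 0. Illustrative only.
    """
    correct = total = 0
    by_group = defaultdict(lambda: [0, 0])  # group -> [selected, seen]
    for group, pred, label in records:
        total += 1
        correct += int(pred == label)
        by_group[group][0] += pred
        by_group[group][1] += 1
    return {
        "accuracy": correct / total,
        "selection_rates": {g: s / n for g, (s, n) in by_group.items()},
    }

# Hypothetical holdout results for a screening model
sample = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
          ("B", 0, 1), ("B", 0, 0), ("B", 1, 1)]
print(validation_report(sample))
# accuracy ≈ 0.667; selection rate A ≈ 0.667, B ≈ 0.333
```

The point is the paper trail: a dated report like this, rerun on every model version, is what turns “we test for bias” into evidence.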

Human Oversight Protocols.

Humans reviewing AI decisions, especially for high-risk ones. Sample review of decisions. Anomaly flagging for human review. Documented oversight protocol for lending, content generation, and hiring decisions.
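A documented oversight protocol can be as simple as a routing rule: low-confidence decisions always go to a human, and a random sample of confident ones gets audited. A minimal sketch, with illustrative thresholds:

```python
import random

CONFIDENCE_FLOOR = 0.85  # below this, a human always reviews (illustrative)
SAMPLE_RATE = 0.05       # random audit fraction of confident decisions

def route_decision(confidence: float) -> str:
    """Route an AI decision: ship it, sample-audit it, or send to a human."""
    if confidence < CONFIDENCE_FLOOR:
        return "human_review"
    if random.random() < SAMPLE_RATE:
        return "sample_audit"
    return "auto_approve"

print(route_decision(0.72))  # -> human_review
```

Logging every routing decision alongside the model’s confidence produces the documented oversight trail underwriters ask about.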

Incident Response Plans.

What happens when an AI system fails? Discriminatory decision. Data leak. Harmful content. Detection, investigation, and remediation protocol. Demonstrate that you’ve thought through failure scenarios.

Compliance Documentation.

Operating in jurisdictions with AI regulations? Show evidence: impact assessments for high-risk systems, governance documentation, audit trails of human review. This documentation strengthens your legal position if a regulator investigates or a plaintiff sues.

Get Your AI Liability Exposure Reviewed

AI risk cuts across Tech E&O, D&O, and cyber. Most companies find out at claim time that their existing policies have AI exclusions, narrow definitions, or sublimits that don’t match their exposure. Alliance Risk reviews your full stack and markets your risk to carriers that specialize in AI and tech liability, delivering proposals in a few business days.

What We Need for Your Quote:

  • Company revenue, industry, and whether you build, deploy, or resell AI systems
  • AI use cases (chatbots, hiring, underwriting, medical, content generation, agentic systems)
  • Training data sources and IP indemnification exposure
  • AI governance documentation (testing, bias audits, human oversight, acceptable use)
  • Current Tech E&O, D&O, cyber, and media liability policies, limits, and carriers
  • Regulatory footprint (EU AI Act, Colorado AI Act, SEC, state consumer protection)

Schedule a Consultation:

Speak with a tech and AI specialist about your AI exposure at no cost.

Policy Review:

Already have coverage? We’ll review your existing Tech E&O, D&O, and cyber policies at no charge, flagging AI exclusions, silent coverage gaps, training data IP carve-outs, and shadow AI vulnerabilities.

Request a Quote

Complete our online form or contact us directly to begin the quote process.

Want coverage built for the AI you actually deploy? Let’s talk. Alliance Risk: your specialized partner for AI liability insurance.