When Your Chatbot Lies: The Liability Landscape Taking Shape in 2025
Understanding emerging legal standards as AI-generated content intersects with defamation law
Customer service chatbots are now routine across industries. They handle returns, answer product questions, and troubleshoot technical issues at scale. But they also hallucinate, misquote policies, and occasionally make false statements about real people—creating a liability exposure that many businesses haven’t fully considered.
The legal framework for AI-generated defamation is being established right now, through court cases that will define how companies are held responsible when their chatbots spread falsehoods. For businesses deploying conversational AI, understanding this emerging landscape isn’t optional.
The Cases Establishing Precedent
In May 2025, a Georgia court dismissed a defamation lawsuit brought by radio host Mark Walters against OpenAI after ChatGPT allegedly generated false claims that he had defrauded and embezzled funds from a gun rights organization. Judge Tracie Cason ruled that Walters had failed to establish either that the statements were defamatory or that OpenAI had acted with the level of fault defamation requires.
The court recognized OpenAI’s warnings about potential inaccuracies and efforts to reduce errors. This suggests that prominent disclaimers and responsible AI design may offer some protection against liability—though the broader question of AI accountability remains unresolved.
Other cases are moving forward. In April 2025, activist Robby Starbuck sued Meta after its AI chatbot allegedly produced false statements linking him to the Capitol riot, Holocaust denial, and child endangerment. Rather than litigate, Meta settled. The terms weren’t disclosed, but the settlement itself signals that companies see real risk in AI defamation claims.
Google also faced a suit from Wolf River Electric, a Minnesota solar company, after Google’s AI Overview erroneously claimed the state attorney general was suing the company. Wolf River alleged lost business, including a $150,000 contract. The case argues that AI-generated content doesn’t qualify for Section 230 immunity because it isn’t third-party speech.
These early cases share common elements: AI systems making false factual claims about specific people or businesses, reputational harm, and defendants arguing that warnings and good-faith efforts to prevent errors should limit liability.
Section 230 Won’t Save Everyone
Section 230 of the Communications Decency Act has protected internet platforms from liability for user-generated content since 1996. If someone posts something defamatory on Facebook, Facebook typically can’t be sued—the liability falls on the person who posted it.
But Section 230 only protects platforms from third-party speech. It doesn’t cover first-party speech—content the platform itself generates.
This distinction matters enormously for AI chatbots. When a chatbot generates text, who’s the speaker? Is it “third-party” content (from the AI, which learned from various sources) or “first-party” content (from the company that operates the chatbot)?
Recent court decisions suggest Section 230 immunity may not apply to AI-generated speech. In a 2024 decision involving TikTok’s recommendation algorithm, a federal appeals court found that TikTok wasn’t protected by Section 230 because the algorithm’s recommendations constituted the platform’s own first-party editorial decisions, not merely the hosting of third-party content.
If courts extend this reasoning to generative AI, companies won’t be able to rely on Section 230 to shield them from liability when their chatbots make false statements. The legal protection that platforms have enjoyed for nearly three decades may not apply to this new technology.
The Air Canada Precedent
Even before defamation concerns, businesses learned that chatbots create binding obligations.
In 2024, Air Canada’s chatbot told a passenger he could get a bereavement discount retroactively—contradicting the company’s actual policy requiring advance approval. When the passenger applied for the discount after travel and was denied, he took the dispute to the British Columbia Civil Resolution Tribunal, which ruled that Air Canada was liable for its chatbot’s statement.
The holding was straightforward: the chatbot was an agent of the company. Air Canada was bound by what its agent told customers, regardless of whether that information was accurate.
This case established that companies can’t disclaim responsibility for their chatbots’ statements simply because the chatbot made an error. Under agency law, when businesses deploy chatbots to interact with customers, those chatbots have apparent authority to speak for the company.
The implications extend beyond customer service. If a chatbot has apparent authority to represent the company, and it makes a defamatory statement, the company could be liable for that defamation—just as it would be if an employee made the same false statement.
The Four Types of AI Defamation Risk
Attorneys tracking AI defamation cases have identified four categories of risk:
Hallucination: The AI invents false information entirely. This is what happened in the Walters case—ChatGPT fabricated an embezzlement claim with no factual basis.
Juxtaposition: The AI combines accurate information in misleading ways. For example, correctly identifying someone’s name and correctly identifying that a lawsuit exists, but wrongly connecting the person to the lawsuit.
Omission: The AI leaves out crucial context that would make a statement accurate instead of defamatory. Saying someone was “arrested” without mentioning they were released without charges, for instance.
Misquote: The AI attributes false statements to real people. Google’s Gemma model told a user that Senator Marsha Blackburn had been accused of rape by a state trooper—a completely fabricated claim.
Each type presents different challenges for prevention. Hallucinations might be reduced through better training and grounding in factual sources. Juxtaposition errors require better context handling. Omissions need completeness checks. Misquotes require source verification.
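To make the last of those concrete, here is a minimal sketch of what a source-verification gate for quotations might look like. It is illustrative only: the regex-based quote extraction and the exact-substring check stand in for whatever retrieval and matching a production system would actually use.

```python
import re

def extract_quotes(text: str) -> list[str]:
    """Pull out anything the model presented as a direct quotation."""
    return re.findall(r'"([^"]+)"', text)

def quotes_are_supported(response: str, retrieved_sources: list[str]) -> bool:
    """Return True only if every quoted passage appears verbatim in a source.

    A production check would normalize whitespace and punctuation, or use
    fuzzy matching; exact substring matching keeps the sketch simple.
    """
    corpus = " ".join(retrieved_sources)
    return all(quote in corpus for quote in extract_quotes(response))

# Example: a fabricated quotation is caught because no source contains it.
sources = ["The senator voted against the bill on March 3."]
draft = 'The senator said "I was never accused of any wrongdoing" in a statement.'
if not quotes_are_supported(draft, sources):
    draft = "I can't verify that quotation, so I won't repeat it."
print(draft)
```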
But none can be eliminated entirely with current technology. AI systems make mistakes. The question is who bears legal responsibility when they do.
The Publisher vs. Distributor Distinction
Traditional media law distinguishes between publishers and distributors. Publishers—newspapers, book publishers—exercise editorial control and can be held liable for defamatory content they publish. Distributors—bookstores, newsstands—don’t review everything they distribute and have more limited liability.
Under this framework, where do AI companies fall? They don’t write the specific outputs, but they do train the models, set the parameters, and control what types of content get generated. That’s more like a publisher than a distributor.
However, they lack the control over individual outputs that a newspaper editor has over individual articles. They can’t review every chatbot response before it’s delivered.
Courts are still working out this classification. Some legal scholars argue AI companies should be treated as publishers because they control the technology that generates content. Others argue they’re more like distributors because they don’t dictate specific outputs.
The answer will likely depend on the degree of human oversight and control. A chatbot with extensive human review before responses go live might create publisher-level liability. A fully automated system with no review might be treated more like a distributor. Most business deployments fall somewhere in between.
What Businesses Can Do Now
While courts establish legal standards, businesses deploying chatbots need practical risk management strategies.
First, implement stronger grounding. Chatbots that rely solely on training data are more likely to hallucinate. Systems grounded in verified knowledge bases, with retrieval-augmented generation pulling from reliable sources, reduce fabrication risk.
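Here is a minimal sketch of that pattern, with a toy keyword retriever standing in for a real vector store and a placeholder generate() function standing in for whichever model API a business actually uses; none of the names come from a specific product.

```python
# Retrieval-augmented generation, minimally: fetch vetted policy text first,
# then instruct the model to answer only from that text.

KNOWLEDGE_BASE = {
    "refunds": "Refund requests must be submitted within 30 days of purchase.",
    "bereavement": "Bereavement fares must be approved before travel.",
}

def retrieve(question: str) -> list[str]:
    """Toy keyword retriever; a real system would use a vector store."""
    return [text for topic, text in KNOWLEDGE_BASE.items() if topic in question.lower()]

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the customer's question using ONLY the policy excerpts below. "
        "If the excerpts don't answer it, say you don't know and offer to "
        "connect them with a human agent.\n\n"
        f"Policy excerpts:\n{context}\n\nQuestion: {question}"
    )

def generate(prompt: str) -> str:
    """Placeholder for the model call; swap in your provider's API here."""
    return f"[model response to: {prompt[:60]}...]"

question = "Can I get a bereavement discount after my trip?"
passages = retrieve(question)
print(generate(build_grounded_prompt(question, passages)))
```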
Second, limit authority explicitly. Make it clear what the chatbot can and cannot do. Air Canada could have configured its chatbot to only provide information directly from policy documents, not to interpret or extrapolate. Clear limitations reduce apparent authority.
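One way to encode that limitation, sketched below: answer only questions that fall inside an explicit allowlist of policy topics and hand everything else to a human. The topic names and the keyword matcher are illustrative placeholders, not a recommended implementation.

```python
# Explicit scope limits: the chatbot answers only allowlisted policy topics
# and hands everything else to a human, rather than interpreting or
# extrapolating beyond the documents it was given.

ALLOWED_TOPICS = {"returns", "shipping", "warranty"}  # hypothetical scope

ESCALATION_MESSAGE = (
    "I can only help with returns, shipping, and warranty questions. "
    "Let me connect you with a human agent for anything else."
)

def classify_topic(question: str) -> str | None:
    """Toy matcher; a real system might use an intent classifier."""
    for topic in ALLOWED_TOPICS:
        if topic in question.lower():
            return topic
    return None

def handle(question: str) -> str:
    topic = classify_topic(question)
    if topic is None:
        return ESCALATION_MESSAGE
    # Only inside the allowlist does the grounded answering path run.
    return f"[grounded answer about {topic}]"

print(handle("Can you waive my fees as a goodwill gesture?"))  # -> escalation
print(handle("What is your returns window?"))                  # -> answered
```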
Third, use prominent disclaimers. While not a complete defense, clear warnings that the chatbot may make errors help establish that users shouldn’t rely on unverified information. The Walters court cited OpenAI’s extensive warnings as a factor in dismissal.
Fourth, implement monitoring and correction processes. When users report false information, have systems in place to quickly verify, correct, and update the model. Meta’s alleged failure to fix errors despite notification strengthened the Starbuck case.
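A sketch of the minimum machinery for that, using illustrative in-memory structures in place of a real ticketing system: record the report, and keep the disputed claim out of future responses until a human has verified or corrected it.

```python
import datetime

# Reported-error handling: log the report, and suppress the disputed claim
# until someone has reviewed it. All names here are illustrative.

reports = []            # queue of user reports awaiting review
blocked_claims = set()  # claims the bot must not repeat while under review

def report_false_statement(user_id: str, claim: str) -> None:
    reports.append({
        "user": user_id,
        "claim": claim,
        "reported_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "status": "pending_review",
    })
    blocked_claims.add(claim.lower())

def filter_response(draft: str) -> str:
    """Refuse to resend any claim currently under dispute."""
    if any(claim in draft.lower() for claim in blocked_claims):
        return ("That information is being reviewed for accuracy. "
                "A human agent will follow up.")
    return draft

report_false_statement("user-123", "Acme Corp is under federal investigation")
print(filter_response("I heard Acme Corp is under federal investigation."))
```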
Fifth, consider human review for high-stakes interactions. Chatbots handling sensitive topics—medical advice, legal information, accusations of wrongdoing—present higher risk. Human review before responses are delivered adds friction but reduces liability exposure.
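A sketch of that gate, with keyword lists standing in for a real risk classifier: anything that touches a sensitive category is held in a review queue rather than delivered automatically.

```python
# High-stakes gating: sensitive responses are held for human review instead
# of being delivered automatically. The keyword lists are placeholders.

HIGH_RISK_TERMS = {
    "medical": ["diagnosis", "dosage", "treatment"],
    "legal": ["lawsuit", "liable", "criminal"],
    "accusation": ["fraud", "arrested", "embezzled"],
}

review_queue = []

def risk_category(text: str) -> str | None:
    lowered = text.lower()
    for category, terms in HIGH_RISK_TERMS.items():
        if any(term in lowered for term in terms):
            return category
    return None

def deliver(draft: str, user_id: str) -> str:
    category = risk_category(draft)
    if category is not None:
        review_queue.append({"user": user_id, "draft": draft, "category": category})
        return "A specialist will review your question and follow up shortly."
    return draft

print(deliver("Our records show the account holder was arrested for fraud.", "user-42"))
```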
Sixth, maintain detailed logs. If sued, you’ll need to show what the chatbot said, when, and what efforts you made to prevent errors. Comprehensive logging enables effective defense.
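A minimal sketch of that record-keeping, as an append-only JSON-lines log; the field names and file path are illustrative, not a standard.

```python
import json
import datetime
from pathlib import Path

LOG_FILE = Path("chatbot_audit.jsonl")  # append-only, one JSON object per line

def log_exchange(user_id: str, question: str, response: str,
                 sources: list[str], model_version: str) -> None:
    """Record everything needed to reconstruct an exchange later."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "question": question,
        "response": response,
        "sources": sources,            # which documents grounded the answer
        "model_version": model_version,
        "safeguards": ["grounding", "disclaimer", "risk_gate"],  # what was active
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_exchange(
    user_id="user-123",
    question="What is your refund policy?",
    response="Refund requests must be submitted within 30 days of purchase.",
    sources=["policy/refunds-v3.md"],
    model_version="support-bot-2025-06",
)
```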
State Legislation Is Coming
While courts work out common law standards, legislators are beginning to address AI risks directly. Texas passed the Responsible AI Governance Act in June 2025, establishing liability for certain intentional AI abuses including using AI to facilitate crimes, create deepfakes, or engage in unlawful discrimination. Violations carry fines up to $200,000.
While AI defamation isn’t explicitly covered, the law signals growing legislative attention. California passed its own AI governance bill, and federal legislation has been introduced (though not yet passed).
These statutes likely won’t create broad private rights of action for AI defamation—most follow the Texas model of giving enforcement power to attorneys general. But they establish that AI operators can face financial penalties for certain harms, even if private lawsuits aren’t allowed.
Businesses should expect regulation to continue developing at the state level while federal frameworks remain under debate. Multi-state operations will need to comply with varying requirements.
The Reasonable Reliance Question
One defense argument gaining attention: can anyone reasonably rely on AI-generated information?
ChatGPT and similar tools prominently state they may produce inaccurate information. Users are advised to verify important information independently. If these warnings are sufficiently prominent, can someone claim they reasonably believed false AI-generated statements?
This argument succeeded in the Walters case. The court found that OpenAI’s extensive warnings meant Walters couldn’t reasonably rely on ChatGPT’s output as fact.
But this defense has limits. It might work for general-purpose chatbots like ChatGPT. It’s less convincing for customer service chatbots specifically designed to provide accurate company information. A customer asking Air Canada’s chatbot about bereavement policies is acting reasonably by trusting the answer—that’s what the chatbot is for.
The reasonable reliance question will likely depend on context. The more specialized and authoritative the chatbot appears, the more reasonable it is to trust its statements, and the less effective disclaimer defenses become.
Looking at Other Jurisdictions
In Australia, former mayor Brian Hood threatened defamation proceedings against OpenAI after ChatGPT falsely claimed he had been imprisoned for bribery; in reality, Hood was the whistleblower who reported the misconduct. He sent a concerns notice—the formal first step in Australian defamation proceedings—but ultimately didn’t pursue the claim further.
The case highlighted how AI defamation risk varies by jurisdiction. Australian defamation law places the burden of proof on defendants to show their statements were true. This is more plaintiff-friendly than U.S. law, where plaintiffs generally must prove falsity.
U.K. law similarly favors defamation plaintiffs. For multinational companies, this creates complex liability exposure. A chatbot deployed globally could face defamation claims in jurisdictions with varying legal standards, where Section 230-style protections don’t exist, and where proving truth rather than falsity is required.
The Broader Business Implication
AI defamation risk exists within a larger context of AI liability. Companies are also facing copyright infringement claims for training data, privacy violations for data handling, negligent misrepresentation for incorrect advice, and product liability theories for harm caused by AI systems.
The common thread: existing legal frameworks weren’t designed for AI-generated content. Courts and legislators are adapting these frameworks in real time. The standards being established now will govern AI liability for years.
For businesses, this uncertainty requires careful risk assessment. The potential benefits of deploying conversational AI—reduced support costs, faster customer service, 24/7 availability—must be weighed against legal exposure that’s still being defined.
Some companies may decide the risk isn’t worth it and maintain human-only customer service for sensitive interactions. Others may accept the risk while implementing strong safeguards. There’s no universal answer.
But ignoring the risk isn’t viable. The cases establishing AI defamation standards are happening now. The companies involved are learning expensive lessons. Better to learn from their experience than to repeat it.

