Your chatbot just told a customer they can return a product past the deadline. Or that the treatment includes a free follow-up. Or that the apartment comes with a parking space. None of it is true. But the customer already made a decision based on that information.
Who bears the legal liability? The chatbot? The company using it? The platform that built it?
There's already case law that answers this question. And Air Canada didn't like the answer.
The Air Canada chatbot case: wrong information, real consequences
In 2022, Jake Moffatt needed to fly urgently to his grandmother's funeral. Before buying the ticket, he asked Air Canada's chatbot whether he could apply for the bereavement fare after travelling. The chatbot said yes: he could request a partial refund within 90 days of the ticket being issued.
That was wrong.
Air Canada's actual policy requires bereavement fares to be requested before travel, not after. Moffatt trusted the chatbot's answer, bought the ticket at full price, and when he applied for the refund, Air Canada denied it.
He took the case to British Columbia's Civil Resolution Tribunal. And he won.
Who is liable when an AI chatbot gives wrong information? What the tribunal ruled
Air Canada's defence was striking: they argued the chatbot was "a separate legal entity responsible for its own actions."
The tribunal member didn't buy it.
The ruling was clear: "While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or from a chatbot."
Air Canada had to pay CAD $812 in refund and costs. The amount was small. The precedent was not.
Your chatbot, your liability
If you have an AI chatbot on your website answering customer questions, legally it's you who's answering. Not the chatbot platform. Not the AI. Not a "separate entity." You.
If the chatbot says a treatment costs €200 when it actually costs €350, the liability is yours. If it says you ship to the Canary Islands and you don't, the liability is yours. If it confirms an appointment that doesn't exist, the liability is yours.
This isn't theory — there's already case law backing it up.
EU AI Act: chatbot regulation taking effect August 2026
As if the Air Canada case weren't enough, most provisions of the EU AI Act, the most ambitious AI regulation in the world, take effect on August 2, 2026. Some obligations have been in force since February 2025, and the remaining provisions apply by August 2027.
If your business operates in Europe or sells to European customers, this applies to you.
The EU AI Act classifies AI systems by risk level. A customer service chatbot falls under "limited risk," which comes with one specific obligation: transparency. Your customers must know they're talking to an AI, not a person.
This isn't optional. From August 2026, it applies as law. And in Spain there's already an agency dedicated to enforcing it: AESIA (the Spanish Agency for the Supervision of Artificial Intelligence), the first of its kind in the European Union, headquartered in A Coruña. This isn't a piece of paper gathering dust in Brussels.
What are the EU AI Act fines for non-compliance?
Penalties can reach €35 million or 7% of global annual turnover, whichever is greater. That sounds terrifying, but there's an important nuance for small businesses: for SMEs, the fine is capped at whichever of the two amounts is lower. A company with €2 million in turnover would pay at most €140,000 (7% of turnover) for the most serious infringement, not €35 million.
Still a lot of money. But it's not the apocalyptic figure you read in the headlines.
Is my chatbot "high risk"?
It depends on the use case, not the technology. The same AI model can be minimal risk if you use it to draft internal emails, limited risk as a customer service chatbot, or high risk if it screens job candidates.
If your chatbot simply answers questions about your products or services, the obligations are reasonable: transparency and accurate data. You don't need a legal department to comply.
How to protect yourself from chatbot legal liability
Now for the practical part.
1. Make sure your chatbot doesn't hallucinate
Air Canada's problem wasn't having a chatbot. It was that the chatbot gave wrong information.
If your chatbot is trained on your real, up-to-date data, the chance of it making things up drops dramatically. Chatbots with RAG (Retrieval-Augmented Generation) search your documents before answering, instead of improvising.
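To make the idea concrete, here's the core of RAG in a minimal Python sketch. Everything in it is illustrative, not any specific product's API: the sample documents are made up, the retriever is a toy word-overlap ranker, and real systems use embeddings and a vector index instead. The grounding principle is the same.

```python
# Minimal sketch of the RAG idea: retrieve passages from your own documents
# first, then force the model to answer ONLY from what was retrieved.
# All names and sample data here are illustrative.

DOCUMENTS = [
    "Returns are accepted within 30 days of delivery with proof of purchase.",
    "We ship to mainland Spain only; we do not ship to the Canary Islands.",
    "A standard consultation costs 350 EUR and does not include follow-ups.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Ground the model in retrieved text so it can't improvise a policy."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. If the answer is not there, "
        "say you don't know and refer the customer to a human.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

question = "Do you ship to the Canary Islands?"
print(build_prompt(question, retrieve(question, DOCUMENTS)))
```

The point of the final instruction in the prompt matters as much as the retrieval: a chatbot that says "I don't know" when the documents don't answer is a chatbot that never invents a refund policy.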
What if your data changes? If you raise prices, change your terms of service, or stop shipping to a certain area, your chatbot needs to know. Outdated data is what gets you in trouble. If you use a chatbot with automatic data sync, this problem disappears: you update your spreadsheet and the chatbot already knows.
2. Disclose that it's an AI
Keeping your chatbot accurate protects you from lawsuits like Air Canada's. Disclosing that it's an AI protects you from EU AI Act fines. These are two separate risks, and you need to cover both.
The EU AI Act will require it, but it's good practice regardless: a clear notice that the customer is talking to a virtual assistant.
You don't need a 47-paragraph legal disclaimer. It's enough if the chatbot's name, the welcome message, or a label on the widget makes it clear that it's a bot.
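As a minimal sketch of what that looks like in practice (the configuration keys here are hypothetical, not any specific platform's API):

```python
# Hypothetical widget configuration. The name, the welcome message, and the
# label each make the AI nature obvious, which is all the transparency
# obligation asks of a limited-risk chatbot.
chatbot_config = {
    "name": "Shop Assistant (AI)",
    "welcome_message": (
        "Hi! I'm a virtual assistant, not a human. "
        "Ask me about products, prices, or shipping."
    ),
    "widget_label": "Automated assistant",
}
```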
3. Provide a path to a human
A chatbot shouldn't be a wall. If the query is complex, if the customer insists, if there's a real issue, there needs to be a way to reach a person. An email, a phone number, a contact form.
The chatbot handles 80% of repetitive queries. The remaining 20% needs someone from your team.
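A minimal sketch of what that handoff rule can look like. The inputs (message text, a count of failed answers, a confidence score) are assumptions about what a platform exposes, and the trigger words and thresholds are illustrative:

```python
# Hand off to a human when the customer asks for one, when the bot keeps
# failing, or when confidence is too low to answer safely.

ESCALATION_TRIGGERS = {"human", "agent", "person", "complaint", "refund"}

def should_escalate(message: str, failed_answers: int, confidence: float) -> bool:
    asks_for_human = any(w in message.lower() for w in ESCALATION_TRIGGERS)
    return asks_for_human or failed_answers >= 2 or confidence < 0.5

if should_escalate("I want to speak to a human", failed_answers=0, confidence=0.9):
    print("Here's our team: support@example.com or +34 900 000 000.")
```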
4. Don't let it make promises
Air Canada's chatbot "confirmed" that Moffatt could request the refund after travelling. If instead of confirming, it had said "according to our policy, check the details at [link] or contact our team," there probably wouldn't have been a lawsuit.
Give the chatbot clear instructions in its prompt: don't confirm bookings, don't guarantee prices that might change, don't promise deadlines you can't keep. Inform, don't commit.
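For instance, a guardrail block like this in the system prompt. The wording and contact details are illustrative; adapt them to your own policies:

```python
# Illustrative system-prompt guardrails; not a definitive template.
SYSTEM_PROMPT = """\
You are the customer service assistant for <company>.
Rules:
- Answer only from the provided company documents.
- Do NOT confirm bookings, appointments, or stock availability.
- Do NOT guarantee prices or deadlines; say they may change and point to
  the official page instead.
- For refunds, legal terms, or anything you are not sure about, refer the
  customer to a human at support@example.com.
- Always make clear you are an automated assistant, not a person.
"""
```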
Don't be Air Canada
A chatbot on your website is a useful tool. But it's not a separate entity you can hide behind. What your chatbot says, you say.
The good news is that protecting yourself isn't hard: up-to-date data, transparency, a path to a human, and a well-configured prompt. These are things that improve the customer experience anyway, with or without regulation.
In fact, that's exactly what we do at Bravos AI. Our chatbots only respond with the data you provide and sync automatically when you update it. And you control the name, the welcome message, and the tone — just enough to make it clear it's a bot, without scary disclaimers.
Air Canada tried to blame their chatbot. The tribunal said no.
Don't be Air Canada.
Frequently asked questions
Who is liable if an AI chatbot gives wrong information?
The company that deploys it. The Moffatt v. Air Canada case established that a chatbot is part of a company's website, and the company is responsible for all information on its site, whether it comes from a static page or from a chatbot.
Do AI chatbots have to disclose they are AI?
Yes, from August 2026 under the EU AI Act. AI systems that interact directly with people must inform users that they're talking to an AI, not a human.
What if my chatbot is classified as "high risk"?
It depends on the use case. A customer service chatbot is "limited risk" (transparency only). But if your chatbot screens job candidates, evaluates credit applications, or makes decisions that directly affect people, it's "high risk" and requires impact assessments, technical documentation, and human oversight.
How much can EU AI Act non-compliance cost?
Fines can reach €35 million or 7% of global annual turnover. For SMEs, the fine is calculated on the lower of the two amounts. A company with €2M turnover would pay a maximum of €140,000 for the most serious infringement.
Sources
- The Hill — Air Canada must pay refund promised by AI chatbot, tribunal rules — Coverage of the BC Civil Resolution Tribunal ruling
- McCarthy Tétrault — Moffatt v Air Canada: Misrepresentation by AI Chatbot — Legal analysis of case 2024 BCCRT 149
- American Bar Association — BC Tribunal Confirms Companies Remain Liable for Information Provided by AI Chatbot — ABA analysis on corporate liability for AI chatbots
- EU AI Act — Article 50: Transparency obligations — Transparency obligations for AI systems
Want a chatbot that won't get you in trouble?
Build your chatbot with up-to-date data, auto-sync, and full transparency. In under 5 minutes.
Start for free