Regulated, emotional, and liability-adjacent issues need human ownership. Automating these damages CSAT and creates compliance risk.
A Gartner survey of 5,728 customers (December 2023) found that 64% of customers would prefer that companies didn't use AI in their customer service at all - and 53% said they would consider switching to a competitor if they learned a company planned to use AI in its customer service. The 4 categories below account for the majority of automation failures that damage CSAT and generate regulatory risk.
High-emotion situations (bereavement, serious illness, financial crisis) require human empathy - misapplied automation on these tickets reliably damages customer satisfaction and brand trust.
Regulated decisions (GDPR, insurance disputes, financial advice) carry legal liability that AI should not bear unilaterally.
Policy exceptions involving edge cases need human discretion - a rigid rule-based response is the wrong tool.
Contested liability issues should be escalated, not resolved by AI - one wrong automated message can create real legal exposure.
AI automation has genuinely changed what customer service teams can do. Routine refunds, password resets, order tracking - these belong to AI. But the most consequential tickets? Those still need a human in the loop. The challenge is drawing that line correctly, because getting it wrong doesn't just hurt CSAT - it puts your brand and your legal standing at risk.
Why Automation Fails on Certain Ticket Types
Automation fails when the cost of a wrong answer exceeds the cost of a slower human one. For the majority of tickets, speed wins. For a small but critical subset, accuracy, empathy, and accountability matter more - and AI systems are not yet reliable enough to deliver all 3 without supervision.
The failure mode isn't always a wrong answer. Sometimes it's a correct but tone-deaf answer delivered in the wrong moment. A customer who just lost a family member and is calling about a life insurance claim doesn't need a chatbot explaining a policy clause. They need a person. When AI gets this wrong, customers don't just churn - they tell others.
The difference between AI-assisted and fully automated
This distinction matters enormously. AI-assisted means the system researches the issue, drafts a response, surfaces relevant policy, and hands a prepared summary to a human agent. Fully automated means the AI closes the ticket without human review. For most tickets, fully automated is fine. For the categories below, AI-assisted is the ceiling.
What "human in the loop" actually means
Human in the loop doesn't mean a human reads every AI-generated message. It means a human approves the resolution before it's sent. That's the model described in how to safely let AI take actions in backend systems - AI does the preparation, humans hold the final gate on high-stakes actions.
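That approval gate can be sketched in code. This is a minimal illustration, not a production pattern - the `DraftResolution`, `approve`, and `send` names are hypothetical, and the point is simply that the send path is structurally impossible without a named human sign-off:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: the AI prepares the draft, but nothing leaves
# the building until a named human has approved it.
@dataclass
class DraftResolution:
    ticket_id: str
    body: str
    approved_by: Optional[str] = None  # set by a human reviewer, never by the AI

def approve(draft: DraftResolution, reviewer: str) -> DraftResolution:
    """A human reviewer signs off on the AI-prepared draft."""
    draft.approved_by = reviewer
    return draft

def send(draft: DraftResolution) -> str:
    """The final gate: refuse to send anything a human has not approved."""
    if draft.approved_by is None:
        raise PermissionError(f"Ticket {draft.ticket_id}: no human approval on record")
    return f"sent ticket {draft.ticket_id} (approved by {draft.approved_by})"
```

The useful property is that approval is an explicit, auditable field on the resolution itself, so "who signed off" is always answerable after the fact.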
High-Emotion and Grief-Adjacent Situations
Any ticket where a customer is in acute emotional distress - bereavement, serious medical diagnosis, relationship breakdown, financial ruin - requires a human. The resolution itself may be straightforward, but the delivery requires judgment that AI systems consistently fail to calibrate correctly.
Airlines, banks, and insurance companies have all faced public backlash after automated responses to bereavement claims. The reputational cost of one viral story outweighs the efficiency gains from automating thousands of these interactions. Route them to your best agents, not your cheapest channel.
Signals to watch for
Keywords: deceased, passed away, diagnosis, hospitalised, bankruptcy, eviction
Sudden change in account activity after a long-standing customer relationship
Requests to cancel services tied to a life change event
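The keyword signal above is easy to wire into intake routing. A minimal sketch, assuming a plain substring match - a real system would combine this with account-history signals and an intent model rather than keywords alone, and the function name `flag_for_human` is illustrative:

```python
# Mirrors the signal keywords listed above; illustrative only.
DISTRESS_KEYWORDS = {
    "deceased", "passed away", "diagnosis",
    "hospitalised", "bankruptcy", "eviction",
}

def flag_for_human(message: str) -> bool:
    """Return True if the ticket text contains a high-emotion signal."""
    text = message.lower()
    return any(keyword in text for keyword in DISTRESS_KEYWORDS)
```

The cost asymmetry favours over-flagging: a false positive sends a routine ticket to a human; a false negative sends a grieving customer to a chatbot.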
Regulatory and Compliance-Driven Issues
GDPR subject access requests, insurance claim disputes, financial product complaints, and any issue governed by a regulator should never be closed by AI alone. Regulators hold organisations accountable for outcomes - not just processes - and an incorrect automated decision can trigger audits, fines, and mandatory remediation.
In the UK, the Financial Conduct Authority requires complaints to be handled by a "competent person" with the authority to make decisions. An AI system that closes a complaint without human approval is almost certainly non-compliant, regardless of whether the decision was correct.
Regulated ticket categories
GDPR/data subject requests (access, deletion, portability)
Insurance claim disputes and appeals
Financial product complaints (FCA, CFPB, ASIC regulated)
Anti-money laundering flags or account restrictions
Policy Exceptions Requiring Discretion
Every support team has rules, and every rule has edge cases that deserve different treatment. A customer who has been with you for 8 years, never missed a payment, and is asking for a one-time fee waiver isn't the same as a new customer running the same play. AI doesn't currently have the context or authority to make that call correctly at scale.
Policy exception requests are one of the highest-leverage moments in a customer relationship. A human agent who says yes to the right exception creates a loyal customer. An AI that pattern-matches to "no" on a policy rule loses one.
Liability and Legal Exposure
If a customer's complaint could result in legal action - product liability, personal injury, data breach notification, contractual disputes - the response needs legal review before it goes out. An AI-generated response that includes an admission, an incorrect statement of fact, or an inadequate disclosure can create real liability.
This is not hypothetical. Legal teams at financial services and healthcare companies have already had to remediate AI-generated customer communications that created exposure. The fix is simple: flag these ticket types at intake, route to a human, and keep AI in a research-only role.
How to Classify Which Tickets AI Should Own
A practical framework for classifying ticket ownership avoids the binary of "automate everything" or "automate nothing." The 3-tier model below holds up in regulated and high-trust environments. For the technical implementation of this framework, see our guide to what agentic AI actually is and how these guardrails fit into its architecture.
AI-owned. Routine, low-stakes, reversible actions - password resets, order tracking, refund processing under a threshold, FAQ resolution. AI closes these without review. Teams using this model typically handle 60-70% of ticket volume fully automated.
Human-approved. AI researches, drafts a resolution, and surfaces the recommendation. A human reviews and approves before the response is sent. Used for policy exceptions, moderate complaints, and anything touching account security. Typically 20-30% of volume.
Human-owned. AI may assist with research and note-taking, but a human agent drives the interaction from first contact. Applied to the categories above: high-emotion, regulated, liability-adjacent, and contested disputes. Typically 5-15% of volume - but where the majority of reputational risk sits.
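The 3-tier model translates directly into an intake rule. A sketch under stated assumptions: the category labels (`bereavement`, `policy_exception`, and so on) are hypothetical outputs of an upstream intent classifier, and the refund threshold is a placeholder value:

```python
from enum import Enum

class Tier(Enum):
    AI_OWNED = "ai_owned"              # AI resolves and closes without review
    HUMAN_APPROVED = "human_approved"  # AI drafts, a human approves before send
    HUMAN_OWNED = "human_owned"        # human drives; AI assists with research only

# Hypothetical category labels from an upstream intent classifier.
HUMAN_OWNED_CATEGORIES = {"bereavement", "regulated_complaint", "legal_dispute"}
HUMAN_APPROVED_CATEGORIES = {"policy_exception", "account_security", "complaint"}

def classify(category: str, refund_amount: float = 0.0,
             refund_threshold: float = 50.0) -> Tier:
    """Map a ticket category to an ownership tier, escalating on doubt."""
    if category in HUMAN_OWNED_CATEGORIES:
        return Tier.HUMAN_OWNED
    if category in HUMAN_APPROVED_CATEGORIES or refund_amount > refund_threshold:
        return Tier.HUMAN_APPROVED
    return Tier.AI_OWNED
```

Note the ordering: the human-owned check runs first, so a regulated complaint can never fall through to automation on the strength of a small refund amount.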
The Cost of Getting This Wrong
Automating the wrong ticket types has measurable consequences. According to PwC's "Experience is Everything" study (2017/18, n=15,000 across 12 countries), 32% of customers would stop doing business with a brand they loved after just 1 bad experience. For sensitive ticket categories, that risk is acute.
Teams that route high-emotion and compliance tickets to human agents consistently report higher satisfaction on those interactions. Klarna's 2024 experience - where aggressive AI automation led to quality declines severe enough that the company reversed course and began rehiring human agents - shows what happens when sensitive tickets go to the wrong channel. The efficiency cost of human handling is real, but the alternative is eroding customer relationships and inviting regulatory scrutiny. Fines for systematically mishandled regulated complaints in financial services can reach 6 figures. The maths isn't complicated.
Key Takeaways
AI should not close tickets involving bereavement, serious illness, or financial crisis - misapplied automation reliably damages CSAT on these ticket types and erodes brand trust.
Regulated complaints (GDPR, FCA, insurance) must be resolved by a "competent person" - AI-only decisions are likely non-compliant in most jurisdictions.
The 3-tier model (AI-owned, human-approved, human-owned) lets you automate 60-70% of volume while protecting the 5-15% that carries the most risk.
Liability-adjacent tickets - product complaints, data breaches, contractual disputes - should be research-only for AI, never resolution-owners.
Policy exceptions are high-leverage moments: 1 correctly handled exception can retain a customer for years; 1 bad automated refusal can end the relationship.
The goal of AI in customer service isn't to automate everything - it's to automate the right things. Routine, reversible, low-stakes tickets belong to AI. High-emotion situations, regulated decisions, liability-adjacent complaints, and contested disputes belong to humans, with AI in a supporting role.
Getting this classification right is one of the highest-leverage decisions a CX leader makes. The teams that do it well cut costs without cutting corners. The teams that don't end up dealing with CSAT collapse, regulatory scrutiny, and the kind of customer stories that no PR budget can fix.
If you're building or refining your automation strategy, the Lorikeet Coach product is built for this use case - keeping humans accountable for the decisions that matter, while letting AI handle the work that doesn't require them.