Great CX isn't built on deflection rates

Deflection rate measures how often a customer inquiry is handled without involving a human agent. It sounds reasonable until you realize what it actually incentivizes.
When you can achieve 100% deflection simply by turning off all your support channels, the metric is clearly disconnected from what matters to your customers. It transforms an internal "how" into the core "what."
Does that mean you should stop measuring deflection? No. You just need to redefine the role deflection plays in improving the overall efficiency of your support operation. In our experience, we recommend you:
Distinguish good deflection versus bad deflection.
Focus on efficiency metrics designed for an AI-first world.
Optimize the quality of interactions first, then scale coverage.
Make sure you're not paying for bad deflections.
The deflection trap
I've seen companies celebrate 70% deflection rates while their CSAT plummets. Why? Because deflection doesn't distinguish between "customer got their problem solved" and "customer gave up trying to get help."
A chatbot that confuses customers into abandoning their requests scores the same as an AI agent that genuinely resolves issues. Both register as "deflected" in the metrics.
This creates perverse incentives. Vendors get paid whether the AI helps or not, so they optimize for engagement over outcomes. The result? AI systems that trap customers in endless loops rather than solving problems or escalating appropriately.
Good deflection vs. bad deflection
Let's say a customer contacts support about a billing issue. Here's what a bad versus a good deflection looks like.

Both count as "deflection." Only one creates a good experience. Since customers don’t consistently fill out CSAT surveys, it’s more likely than not that this bad experience never shows up in any metric.
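To make the accounting concrete, here's a toy calculation with hypothetical numbers. "Deflected" counts every ticket a human never touched, whether the AI actually resolved it or the customer simply gave up:

```python
# Hypothetical ticket outcomes for 100 support contacts.
tickets = (
    ["resolved_by_ai"] * 40       # good deflections: problem actually solved
    + ["abandoned"] * 30          # bad deflections: customer gave up
    + ["escalated_to_human"] * 30 # not deflected
)

# Deflection lumps "resolved" and "abandoned" together.
deflected = [t for t in tickets if t != "escalated_to_human"]
resolved = [t for t in tickets if t == "resolved_by_ai"]

deflection_rate = len(deflected) / len(tickets)
resolution_rate = len(resolved) / len(tickets)

print(f"Deflection rate: {deflection_rate:.0%}")   # 70% - looks great
print(f"AI resolution rate: {resolution_rate:.0%}") # 40% - the real story
```

Same dashboard, very different stories: the 30 abandoned tickets inflate the deflection number while representing the worst possible experience.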
Focus on what actually matters
The real goal isn't deflection – it's efficiency. Deflection is – at best – a "how" for achieving that goal. Here's what you actually want to measure:

AI should make your support operation more efficient by handling appropriate cases well, freeing humans for complex work that requires judgment. The "how" is less important than the outcome.
For example, Magic Eden achieved a 74% CSAT with our AI agent – 30 points higher than their previous solution. But we didn't get there by maximizing deflection. We got there by ensuring every AI interaction was high quality. Indeed, given the complex debugging issues they faced, the best solution was for the AI agent to collect information upfront, enabling a fast, efficient resolution once a human agent picked up the ticket. This kind of solution improves both CSAT and efficiency, yet doesn't count as "deflection."
Self-aware AI beats coverage-obsessed AI
The best AI agents know what they don't know. They understand their limits and get out of the way when they can't help.
Most vendors optimize for the wrong thing: they want AI to engage with 100% of tickets to maximize "deflection." But this means customers have multiple failed AI interactions for every successful one.
The better approach is to train AI to only engage when it can do a good job. Maybe it handles 50% of conversations, but solves 80% of the ones it touches. This massively reduces failed AI interactions, which are what actually annoy customers.
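The arithmetic behind this tradeoff is worth spelling out. A quick sketch with hypothetical numbers, assuming 1,000 conversations, a coverage-obsessed agent that engages everything but solves 40% of what it touches, and a self-aware agent that engages only the 50% it can handle but solves 80% of those:

```python
# Hypothetical comparison: coverage-obsessed vs. self-aware engagement.
def outcomes(conversations, engagement_rate, solve_rate):
    engaged = conversations * engagement_rate
    solved = engaged * solve_rate
    failed = engaged - solved  # failed AI interactions: what annoys customers
    return solved, failed

# Engages 100% of tickets, solves 40% of those it touches.
coverage_solved, coverage_failed = outcomes(1000, 1.0, 0.4)
# Engages only the 50% it can handle, solves 80% of those.
selective_solved, selective_failed = outcomes(1000, 0.5, 0.8)

print(coverage_solved, coverage_failed)    # 400 solved, 600 failed
print(selective_solved, selective_failed)  # 400 solved, 100 failed
```

Under these assumed rates, both agents resolve the same 400 tickets – but the self-aware one produces six times fewer failed AI interactions along the way.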
We've seen this play out across our customer base. Companies that focus on interaction quality over coverage see better business outcomes. Their customers trust the AI more, escalations are cleaner, and human agents aren't buried under tickets the AI fumbled.
The vendor incentive problem
Here's why this matters: most AI support vendors charge per ticket, or per their own self-determined measure of a "resolution." Put another way, they get paid whether or not they help your customers, so they're incentivized to maximize the number of tickets their AI touches – at the expense of your customers and your bottom line. Good news: we've created a free CX ROI calculator to help you work out the real cost of your AI solution.

So when a vendor tells you their AI "deflects 80% of tickets," ask the follow-up question: "How many of those deflected customers actually had their problems solved?"
You'll be surprised how often they can't answer that question or how uncomfortable they get when you ask it.
Bottom line
Great customer experience comes from solving problems, not from keeping customers away from support channels.
The companies we work with pride themselves on providing excellent support. They want to use AI to scale that excellence, not to create barriers between themselves and their customers.
Optimize for customer outcomes, and business metrics will follow. Optimize for deflection, and you'll train customers to hate your AI – and your company.
There's no substitute for quality. Don't let vendors convince you otherwise.