Make your AI support metrics your own

Estelle Berton | Jan 13, 2026

"What's a good engagement rate?" is probably the question I hear most from Lorikeet subscribers. I get it. When you're implementing AI support, you want to know you're on the right track. 

Here's how I answer that question – the best metrics for your business might be the opposite of what works for someone else.

Why benchmarks lead you astray

I recently spoke with a customer who was stressed about their "high" AI engagement rates compared to industry benchmarks. But when we dug deeper, we realized their AI agent was doing exactly what they needed – helping customers discover features, complete complex workflows, and ultimately spend more on the platform.

For them, high engagement was a sign of success. For another business, it might signal product failures.

This isn't about being contrarian. It's about recognizing that AI fundamentally changes what "good" support looks like.

The old rules don't apply anymore

Traditional support metrics were built around human constraints:

  • One-touch resolution mattered because every interaction cost money

  • First response time mattered because customers were waiting in queues

  • Tickets per agent mattered because you needed to staff appropriately

With AI, these constraints go away. Your AI agent can handle multiple interactions without a linear increase in costs. It responds instantly. It scales almost infinitely.

So why are we still measuring success the same way?

Finding your north star metrics

Here's the framework I use with subscribers:

First, clarify your support strategy. Are you using AI to reduce costs and deflect tickets? That's completely valid. Or are you building an AI concierge that proactively helps customers succeed? Also valid. Just be clear about which strategy you're pursuing.

Then, choose metrics that align with that strategy:

  • Cost reduction focus: Track deflection rates, ticket reduction, cost per resolution

  • Revenue/retention focus: Track customer lifetime value, transaction rates, feature adoption

Finally, measure what happens after the interaction. Do customers who engage with your AI agent churn less? Buy more? Get to their "aha moment" faster? That's what really matters.
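To make the two strategy tracks concrete, here's a minimal sketch of how you might compute a cost-focused metric (deflection rate) and a retention-focused one (churn among AI-engaged vs. non-engaged customers). The field names (`ai_resolved`, `ai_engaged`, `churned_90d`) are hypothetical, not from any particular analytics schema:

```python
# Illustrative metric calculations. All field names are assumed for the
# example, not tied to any specific support platform's data model.

def deflection_rate(tickets):
    """Cost-reduction focus: share of tickets fully resolved by the AI agent."""
    if not tickets:
        return 0.0
    return sum(1 for t in tickets if t["ai_resolved"]) / len(tickets)

def churn_by_engagement(customers):
    """Retention focus: compare 90-day churn for customers who did vs.
    didn't engage with the AI agent."""
    def churn(group):
        return sum(c["churned_90d"] for c in group) / len(group) if group else 0.0
    engaged = [c for c in customers if c["ai_engaged"]]
    not_engaged = [c for c in customers if not c["ai_engaged"]]
    return churn(engaged), churn(not_engaged)
```

The point of the second function is exactly the "measure what happens after the interaction" step: if engaged customers churn meaningfully less than non-engaged ones, that gap is a stronger success signal than any raw engagement rate.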

A real example

One of our telehealth subscribers tracks two different engagement metrics:

  • Medical support engagement (positive) – shows patients are using medical services

  • Customer support engagement (negative) – indicates product friction

Same company, same AI system, completely different success metrics. Because context matters.

Moving forward

I know it's tempting to look for that magic benchmark that tells you you're doing it right. But the businesses seeing real success with AI support are the ones who've done the harder work of defining what success means for them specifically.

Your metrics aren't wrong. They're just not yours yet.

Next time someone shares their AI support benchmarks, don't ask "How do I compare?" Ask instead: "What are they optimizing for, and is that what I want too?"
