AI That Makes Your CX Team Better, Not Smaller

Steve Hind | April 15, 2026

The Klarna Experiment

In February 2024, Klarna announced that its AI chatbot had handled 2.3 million customer conversations in a single month, doing the work of 700 full-time agents. CEO Sebastian Siemiatkowski called it a revolution. Investors cheered. The fintech press wrote it up as a blueprint for every support org on the planet.

By mid-2025, Klarna was hiring human agents again. Customer satisfaction had dropped. Complex issues were going unresolved. The AI that replaced 700 people could not replicate the thing those people actually did well: exercise judgment under pressure.

Siemiatkowski admitted publicly that the cuts went too far. The company pivoted to what it now calls a "human-AI partnership," with AI handling routine queries and humans taking everything that requires empathy, discretion, or escalation.

Klarna is not an outlier. It is a preview.

The Rehiring Wave

In February 2026, Gartner predicted that by 2027, half of the companies that cut customer service staff because of AI will rehire for those same functions. Not because AI failed entirely, but because the "replace everyone" strategy produces a specific, predictable failure mode: the operation gets cheaper and worse at the same time.

The data underneath that prediction is striking. Only 20% of companies that reduced headcount did so primarily because of AI. The rest blamed economic pressure and cost-cutting. But once AI became the narrative, every reduction got framed as automation success, whether it was or not.

Salesforce is instructive here. CEO Marc Benioff told the Logan Bartlett podcast in September 2025 that his support team went from 9,000 to 5,000, crediting AI agents. What he also said, less often quoted: there is now an "omnichannel supervisor" helping AI agents and humans work together. In other words, even the company selling AI replacement landed on a hybrid model.

Forrester puts a number on the regret: 55% of employers who made AI-driven cuts say they moved too fast.

What Agents Actually Do

The replacement thesis rests on a misunderstanding of what customer service agents spend their time on. If you believe the job is mostly answering FAQ-level questions, then yes, a language model can do it cheaper. Tier-1 queries like password resets, shipping status checks, and basic returns policy questions are genuinely automatable.

But anyone who has managed a real support team knows those tickets are not the ones that determine whether customers stay or leave. The hard work is the insurance claim where the customer is frustrated and the policy is ambiguous. The billing dispute where three systems show conflicting data. The onboarding call where a new customer is ready to churn 48 hours after signing up.

A Stanford and MIT study of over 5,000 contact center agents found that when AI was used to assist agents rather than replace them, productivity rose 14%. Newer agents saw a 35% increase in chats resolved per hour. Requests to speak with a manager dropped 25%. The AI compressed ramp-up time, meaning a two-month-old agent performed like an eight-month veteran.

That is not a story about AI doing the job instead of humans. It is a story about AI making humans better at a job that remains fundamentally human.

The Real Problem Nobody Talks About

Contact centers have a 40-45% annual turnover rate. The average agent stays 14.3 months. That means the typical support organization is replacing nearly half its workforce every year, losing institutional knowledge, retraining constantly, and watching quality fluctuate with every cohort change.

This is the problem that keeps CX leaders up at night, not whether AI can answer a password reset question. The existential challenge is maintaining consistent quality across a team that is always partially new.

Traditional QA barely helps. Most contact centers manually review 1-2% of interactions. A QA analyst listens to a handful of calls per agent per month, fills out a scorecard, and delivers feedback days or weeks after the conversation happened. The sample is too small to be statistically meaningful. The feedback loop is too slow to change behavior. And the process itself is expensive enough that scaling it means hiring more QA people, which puts you right back where you started.

This is where the AI-for-replacement crowd gets the diagnosis right but the prescription wrong. They see that support operations are inefficient and conclude the answer is fewer humans. The actual answer is better-supported humans.

Coaching Over Cutting

Think about how the best sports teams operate. When a player underperforms, the response is not to cut the roster. It is to review film, identify patterns, and coach. The best teams invest in making their existing players better because they understand that talent development compounds in ways that talent replacement does not.

Support organizations need the same approach, and AI makes it possible at a scale that was never realistic before.

Instead of reviewing 1-2% of tickets, AI can evaluate 100% of conversations against a consistent rubric. Instead of a QA analyst spending 30 minutes per ticket review, an AI agent can assess quality, flag deviations from protocol, and identify coaching opportunities across thousands of interactions in real time. Instead of feedback arriving two weeks after the conversation, it can surface the same day.
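To make the shift from sampling to full coverage concrete, here is a toy sketch of rubric-based evaluation. Everything in it is invented for illustration: the rubric, the ticket shape, and the keyword heuristic, which stands in for the LLM scoring a real system would use.

```python
from dataclasses import dataclass

# Hypothetical rubric: criterion -> signal phrases. A production system
# would have an LLM score each criterion; keyword matching stands in here.
RUBRIC = {
    "empathy": ("sorry", "understand", "appreciate"),
    "resolution": ("resolved", "fixed", "refund"),
}

@dataclass
class Evaluation:
    ticket_id: str
    scores: dict   # criterion -> 0.0 or 1.0
    flags: list    # criteria that may need coaching attention

def evaluate(ticket_id: str, transcript: str) -> Evaluation:
    text = transcript.lower()
    scores = {c: float(any(p in text for p in phrases))
              for c, phrases in RUBRIC.items()}
    flags = [c for c, s in scores.items() if s < 1.0]
    return Evaluation(ticket_id, scores, flags)

def evaluate_all(tickets: dict) -> list:
    # Every conversation gets scored - 100% coverage, not a 2% sample.
    return [evaluate(tid, text) for tid, text in tickets.items()]
```

The point of the sketch is structural: once evaluation is a function rather than a manual review session, running it over every ticket costs roughly the same effort as running it over a sample.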

The data bears this out. Agents who receive personalized, AI-driven coaching report 91% job satisfaction, compared to 57% for those getting generic feedback. Regular coaching sessions grounded in QA data improve agent performance by 25-30% while reducing attrition by 20-40%.

Read those numbers again. A 25-30% performance improvement without changing a single person on the team. A 20-40% reduction in the turnover problem that makes everything else harder. This is not incremental; it is structural.

The Skeptic's Entry Point

If you lead a CX team and your CEO just told you to "look at AI," you are probably dreading vendor calls. Every demo starts the same way: here is how many agents we can replace, here is the cost savings, here is the deflection rate. The pitch assumes you want fewer people on your team.

But you built that team. You hired people who care about customers. You know that your agents' ability to handle a difficult conversation with empathy is the reason your retention numbers look the way they do. The last thing you want is a vendor telling you those people are a cost center to be optimized away.

You are not wrong to feel that way. The buyers who understand that empathy is a competitive advantage are the ones building the most durable customer relationships. The question is not whether AI has a role in your operation. It does. The question is what role.

The most productive starting point is not automation. It is visibility.

What Visibility Looks Like

A CX manager at a healthcare platform recently described her workflow before AI-powered QA. She would pull batches of conversations manually, paste them into ChatGPT, and ask for summaries and theme groupings. Hours of work to get a rough sense of what was happening across her team. The analysis was limited to whatever she had time to sample, which was never enough.

With an AI agent evaluating every conversation, she could ask a single question: "What were the primary friction points customers reported across all conversations tagged 'early pay' last week?" She got sentiment analysis, exact quotes, and a breakdown of which interactions frustrated customers and which satisfied them. The analysis that used to take hours took minutes, and it covered the full dataset instead of a biased sample.

That manager is now running this analysis across every product line. She presents findings to the company showing which products drive the most support friction, turning contact rate data into product improvement signals. Her support team did not get smaller. It became the source of customer intelligence for the entire organization.

This is the shift that AI skeptics miss when they refuse to engage, and that AI evangelists miss when they fixate on headcount reduction. The highest-value application of AI in customer service is not doing the work instead of your team. It is giving your team the information and tools to do work that was previously impossible.

From QA to Continuous Improvement

The old QA model is pass/fail. Did the agent follow the script? Did they hit the required talking points? The model produces compliance, not growth.

The better model treats every conversation as training data. When an AI agent escalates a ticket and a human resolves it, the system should analyze what the human did, aggregate those solutions across similar tickets, and surface the patterns. Over time, this creates a feedback loop: the AI gets better because humans teach it, and humans get better because the AI identifies their strengths and gaps.

One company's CXO described it this way: "When the AI can't answer and it gets escalated, go look at what the human said that resolved it. Aggregate across those conversations. The agent said basically the same thing. That is the first thing you should focus on, go fix your content." That is not a replacement dynamic. That is a partnership where each side makes the other more effective.
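The aggregation step that quote describes can be sketched in a few lines. This is an illustration, not any vendor's implementation; `surface_patterns` and its input shape are made up for the example.

```python
from collections import Counter, defaultdict

def surface_patterns(escalations):
    """Group tickets the AI escalated by topic, and find the most common
    human resolution for each - a ranked content-gap worklist.

    escalations: list of (topic, human_resolution) pairs.
    Returns: list of (topic, most_common_fix, escalation_count),
    highest-volume topics first.
    """
    by_topic = defaultdict(Counter)
    for topic, resolution in escalations:
        by_topic[topic][resolution] += 1
    # Fix the content customers hit most often first.
    ranked = sorted(by_topic.items(), key=lambda kv: -sum(kv[1].values()))
    return [(topic, fixes.most_common(1)[0][0], sum(fixes.values()))
            for topic, fixes in ranked]
```

If the same human answer keeps resolving the same escalated topic, that answer belongs in the AI's reference content, which is exactly the "go fix your content" instinct in the quote.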

Where Lorikeet Coach Fits

This is what we built Lorikeet Coach to do.

Coach is an AI agent that evaluates every support conversation, both human and AI-handled, against a customizable quality rubric. It identifies where agents follow protocol and where they deviate. It scores empathy, accuracy, policy adherence, and resolution quality across 100% of your tickets, not a 2% sample.

But evaluation is only the starting point. Coach translates QA data into coaching opportunities. It surfaces the specific conversations and patterns that managers need to have productive one-on-ones with their team. It identifies knowledge gaps and recommends content updates. It detects when quality is drifting before the numbers show up in your CSAT scores.

For teams that also use Lorikeet's AI concierge, Coach closes the loop entirely. When it identifies a pattern of escalations around a specific topic, it can recommend workflow changes, draft updated reference material, and test the fix before it goes live. The human agents inform the AI improvement, and the AI improvement reduces the load on human agents.

For teams not using Lorikeet for automation, Coach works as a standalone product. It plugs into your existing helpdesk, evaluates your human agents' conversations, and gives your managers the data they need to run a better team. No automation required. No agents replaced.

We built it this way because we believe the companies that win in customer experience over the next decade will not be the ones that cut the deepest. They will be the ones that invested in making their people exceptional while using AI to handle the work that does not require human judgment.

The pitch from every other vendor is: give us your tickets and we will make your team smaller. Our pitch is different. Give your team better tools and they will make your company better.

Book a call

See what Lorikeet is capable of
