Klarna's 2024 AI announcement made a bold claim: its assistant was doing the work of 700 agents. The headlines wrote themselves. LinkedIn lit up. And within eighteen months, Klarna was rehiring human agents because resolution quality had cratered and customer satisfaction scores were falling. The technology worked fine. The organization wasn't ready for it.
This pattern repeats across the industry. A VP of Support gets budget approval, signs a contract with an AI vendor, launches in eight weeks, and watches the whole thing underperform. Not because the model was bad, but because the team didn't have the operational foundation to make it work. The help center was outdated. The ticket taxonomy was a mess. Nobody had decided which conversations should even be automated. The AI inherited every dysfunction the team already had, and amplified it.
Technology readiness is one dimension of AI adoption. It might not even be the most important one.
The 700-agent fallacy
Klarna's story gets cited as both a success and a cautionary tale depending on who's telling it, but the real lesson is simpler than either camp admits. They treated AI deployment as a technology project. It was an organizational one.
When an AI agent handles a billing dispute, it needs more than a language model. It needs a knowledge base that accurately reflects current policy. It needs routing logic that knows when to escalate. It needs a team that's been trained to handle the cases AI can't, which are harder and more ambiguous than the ones it can. It needs someone monitoring quality, not just deflection rates.
Most CX teams skip straight to "which vendor should we buy" — a decision with its own complex trade-offs — without answering any of these questions first.
Six dimensions, not one
After working with dozens of support teams through their AI rollouts, we've seen a pattern emerge. The ones that succeed tend to be strong across six dimensions, not just the obvious technical ones.
Knowledge management is the first and most predictive. If your help center doesn't exist, or exists but hasn't been updated in six months, AI will confidently serve stale answers to your customers. One e-commerce brand had 340 help articles, of which 40% referenced a returns policy they'd changed two quarters earlier. They launched AI anyway. Their CSAT dropped nine points in the first month. A knowledge base health check before deployment would have caught it.
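What does that health check look like in practice? At minimum, flag anything that hasn't been touched inside your review window. Here's a minimal sketch, assuming your help center can export articles with a last-updated timestamp (the field names are hypothetical, so adapt them to whatever your platform exports):

```python
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW = timedelta(days=90)  # tighten or loosen to match your policy

def find_stale_articles(articles, now=None):
    """Return articles whose last update falls outside the review window.

    Assumes each article is a dict with 'title' and an ISO-8601
    'updated_at' field, both hypothetical names; adjust to your export.
    """
    now = now or datetime.now(timezone.utc)
    return [
        a for a in articles
        if now - datetime.fromisoformat(a["updated_at"]) > REVIEW_WINDOW
    ]

articles = [
    {"title": "Returns policy", "updated_at": "2024-06-01T00:00:00+00:00"},
    {"title": "Shipping times", "updated_at": "2025-11-20T00:00:00+00:00"},
]
for article in find_stale_articles(articles):
    print(f"Review before launch: {article['title']}")
```

Run something like this before the vendor demo, not after the launch.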
Process maturity is the second. Teams that have documented their workflows - when to escalate, how to handle exceptions, what "resolved" actually means - give AI something to follow. Teams that run on tribal knowledge and Slack threads are asking AI to guess. It will guess wrong.
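"Something to follow" doesn't require a process-mapping consultancy. Even a handful of escalation rules written down as data beats a Slack thread. A minimal sketch, with made-up fields, thresholds, and team names:

```python
# Hypothetical escalation policy written down as data an AI agent can follow.
# Fields, thresholds, and routes are all made up; use your own.
ESCALATION_RULES = [
    {"field": "refund_amount", "op": "gt", "value": 200, "route": "billing team"},
    {"field": "sentiment", "op": "eq", "value": "angry", "route": "senior agent"},
    {"field": "topic", "op": "eq", "value": "legal threat", "route": "trust and safety"},
]

OPS = {
    "gt": lambda a, b: a is not None and a > b,
    "eq": lambda a, b: a == b,
}

def route_for(ticket):
    """Return the first matching escalation route, or None to let the AI proceed."""
    for rule in ESCALATION_RULES:
        if OPS[rule["op"]](ticket.get(rule["field"]), rule["value"]):
            return rule["route"]
    return None

print(route_for({"refund_amount": 350}))       # billing team
print(route_for({"topic": "shipping delay"}))  # None: the AI keeps going
```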
Data readiness is the third, and it's more mundane than it sounds. It's not about having a data lake or a BI team. It's about whether your tickets are categorized consistently, whether you track resolution time in a way that means something, whether you can actually measure if AI is helping or hurting. A surprising number of teams can't answer "what percentage of our tickets are password resets" with any confidence.
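That question is answerable in a few lines if your data is in shape, which is exactly the test. A minimal sketch, assuming a ticket export with a 'category' column (a hypothetical field name):

```python
import csv
from collections import Counter

def category_breakdown(path):
    """Share of tickets per category from a help desk CSV export."""
    with open(path, newline="") as f:
        counts = Counter(
            row.get("category") or "uncategorized" for row in csv.DictReader(f)
        )
    total = sum(counts.values()) or 1
    return {cat: n / total for cat, n in counts.most_common()}

for category, share in category_breakdown("tickets.csv").items():
    print(f"{category}: {share:.1%}")
```

If the biggest bucket turns out to be "uncategorized" or "other", that's your answer, and your first remediation task.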
Team structure is the fourth. AI doesn't replace agents uniformly. It handles the repetitive, well-documented cases and leaves humans with the complex, emotional, ambiguous ones. That's a fundamentally different job than most support agents were hired for. Teams that haven't thought about how roles change post-AI end up with frustrated agents handling only escalations, burning out faster, and leaving.
Change management is the fifth and most commonly ignored. The support team needs to believe this will make their jobs better, not eliminate them. The product team needs to feed AI-surfaced insights back into the product. Leadership needs to define what success looks like beyond "reduce headcount." Without alignment across these groups, AI projects stall after the pilot.
Technology readiness is the sixth. Yes, it matters. Your ticketing system needs decent APIs. Your tech stack needs to support integration. But notice where it falls in the list. It's necessary but not sufficient.
The assessment gap
The strange thing about AI buying cycles in CX is how little assessment happens before the purchase. In other enterprise categories - ERP, CRM, security - nobody would deploy without a readiness audit. You'd map your current state, identify gaps, build a remediation plan, then buy.
With AI, teams skip straight to vendor demos. The urgency is understandable. Every board deck has an "AI strategy" slide. Every competitor claims they're already doing it. The pressure to move fast is real.
But speed without readiness isn't speed, it's rework. The teams that "move fast" by deploying AI on top of a broken knowledge base spend the next quarter cleaning up the mess, rewriting articles, rebuilding trust with customers who got bad answers. The teams that spend four weeks getting their foundation right before deploying end up live faster and with better results.
Scoring yourself honestly
The hardest part of readiness assessment is honesty. Every CX leader thinks their knowledge base is "pretty good." Every team believes their processes are "mostly documented." These vague self-assessments are worthless.
What works is forcing binary answers. Do you have a customer-facing help center, yes or no? Have your help articles been reviewed in the last 90 days, yes or no? Can you report on ticket volume by category with data you trust, yes or no? Binary questions eliminate the comfortable middle ground where teams convince themselves they're ready when they're not.
This is why we built the AI Readiness Scorecard - a free tool that walks through all six dimensions with yes/no questions and produces a score out of 100. It takes five minutes. The median score across the hundreds of CX leaders who've taken it is 54. Most teams are closer to ready than they think in some dimensions, and further away than they'd admit in others.
The value isn't the number itself. It's the specificity. A score of 62 tells you less than knowing you're strong on technology and process maturity but weak on knowledge management and change management. That's an actionable gap analysis. That tells you where to spend the next month before you sign a vendor contract.
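For the mechanically minded, the roll-up is simple. Here's a minimal sketch of binary answers becoming dimension-level scores; the questions and equal weighting are illustrative, not the Scorecard's actual rubric:

```python
# Illustrative only: hypothetical questions and equal weighting,
# not the Scorecard's actual rubric.
DIMENSIONS = {
    "knowledge management": [
        "Do you have a customer-facing help center?",
        "Have your help articles been reviewed in the last 90 days?",
    ],
    "data readiness": [
        "Can you report on ticket volume by category with data you trust?",
    ],
    # ...the other four dimensions would go here
}

def dimension_scores(answers):
    """Map each dimension to a 0-100 score from yes/no answers."""
    return {
        dim: round(100 * sum(answers.get(q, False) for q in qs) / len(qs))
        for dim, qs in DIMENSIONS.items()
    }

scores = dimension_scores({
    "Do you have a customer-facing help center?": True,
    "Have your help articles been reviewed in the last 90 days?": False,
    "Can you report on ticket volume by category with data you trust?": True,
})
print(scores, "-> weakest first:", min(scores, key=scores.get))
```

The number matters less than the sort order: fix the weakest dimension before you sign anything.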
Foundation first
The CX teams that will get the most from AI in 2026 and 2027 aren't the ones buying the fanciest tools. They're the ones doing the unglamorous prep work right now - auditing their knowledge base, documenting their escalation paths, cleaning up their ticket taxonomy, having honest conversations with their team about how roles will evolve.
AI readiness isn't a technology question, it's an operational one. The teams that treat it that way will deploy faster, see better results, and avoid the expensive rewind that comes from launching before you're ready.