Thoughts on CX
Paying for AI support per conversation is a raw deal for buyers:
1. You pay for every conversation, regardless of whether the AI solves it.
2. Because of 1, you have a fundamental misalignment of incentives between you and your vendor.
3. Because of 1, slick sales-led vendors will perform incredible mental gymnastics to convince you it's actually good for you.
1mo
The passage below from Jimmy Knight's great substack (link in comments) really resonated with me. I've been saying to customers that while metrics and analytics are important, "there's no substitute for review by subject matter experts with taste". You fundamentally have to assess any support interaction (human or AI) by asking "is this the way we want to show up to our customers?" Too much of the AI industry is focused on deflection rates and other user-hostile metrics that map to vendor revenue, not end customer value.

In Jimmy's words:

"I once worked with a company obsessed with reducing ticket volume. They hit their target—deflection rates soared—but churn climbed. Why? Customers were being funneled to a chatbot that couldn’t handle complex issues, leaving them unheard. The dashboard looked great; the customer felt ignored. That’s the metric mirage: chasing numbers that don’t reflect reality. Re-centering the customer doesn’t mean abandoning structure or KPIs. It means tying every goal to the human on the other side. Let’s explore how. ... You can’t design great experiences from a conference room. You must live them. Order from your own website. Navigate your help center. Submit a support ticket. Pay attention to how it feels. Is it intuitive? Respectful of time? Or does it leave you frustrated?"
1mo
One of the biggest threats to successful AI rollouts is Dunning-Kruger AI bots that don't know what they don't know and attempt to answer every query. For most vendors, "30% automation in month 1" means 70% of the interactions were failures, since the AI attempts to answer 100% of tickets. That's crap: more than 2 bad AI interactions for every good one. The most amazing part is the psyop vendors run where they convince customers that this is _your_ problem! You don't need a pristine help center or a fully staffed team to make the most of AI. You just need the right AI agent, with the ability to know when it can help (and when it can't). At Lorikeet we try to maximize the ratio of good AI interactions to bad, and then scale up how many tickets the AI attempts as we improve its training together. It's *much* better for your customers.
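The arithmetic behind the "more than 2 bad for every good" claim can be sketched in a few lines of Python. This is a minimal illustration of the hypothetical month-1 scenario described above (an AI that attempts every ticket and resolves 30%), not real vendor data:

```python
# Hypothetical scenario: the AI attempts 100% of tickets
# but fully resolves ("automates") only 30% of them.
attempted = 100                     # tickets the AI attempts (all of them)
automated = 30                      # tickets the AI actually resolves
failed = attempted - automated      # attempts where the AI got in the way

bad_per_good = failed / automated   # failed attempts per successful one
print(f"{failed} failed interactions, "
      f"{bad_per_good:.1f} bad for every good one")
# 70 failed interactions, 2.3 bad for every good one
```

The same arithmetic shows why raising the success ratio matters more than raising the attempt rate: an AI that attempts only 40 tickets and resolves 30 produces one bad interaction for every three good ones, from the same 30 resolutions.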
2mo
Had a great chat this week with Georgie Healy about building in and working on AI. We talked about Lorikeet as an AI agent that knows what it doesn't know, but also focused a bit more here on the AI landscape, including how candidates who want to work on AI can stand head and shoulders above the pack with a small amount of extra effort.
2mo
The customer support AI chatbot backlash is just getting started. But AI is the scapegoat, not the cause. Some companies have been willing to cut costs on customer support by degrading customer experience for years. This is not a new phenomenon. In the past it was maze-like FAQ pages, or offshoring support to people managed by metrics without adequate training or context. AI is a new tool those companies can use to achieve the same old goal: save a buck short term by degrading customer experience (which does long-term damage). The AI chatbots aren't the problem - the willingness to degrade experience to save a buck is. We've been really fortunate at Lorikeet to only work with companies that see AI as a way to scale great support experiences. For them, AI is a way to provide fast, personalized support 24/7. For every one of our customers, AI providing an as-good-or-better experience compared to human agents is non-negotiable. Not *one* of them would proceed with an AI rollout if that wasn't true, no matter the cost savings. If you *are* looking to save money by rolling out a worse experience, lmk - I've got a few vendors I could suggest 😜.
4mo
When it comes to adopting AI to support your customers, the idea of a Copilot is tempting. It produces a draft so a human agent can "check it" before hitting send. It feels safer. Like you're putting a safety blanket on your CX. But here's the thing: Copilots don't add as much safety as you think. Let's be honest: at scale, human agents aren't reviewing what an AI Copilot drafts all that carefully. It's just another step that results in a slower resolution for the customer. We find it's actually much safer to do really thorough testing, come to a conclusion about how good the AI is at handling certain tickets, and then put it on autopilot. The customer gets faster responses, and when tickets inevitably get closed and reopened due to inactivity, the AI agent is already up to speed, unlike the new human agent who picks it back up. Here's a real example, from a real customer interaction between one of our customers (a big bank) and their customers (a parent in need). The result?
→ 100% AI
→ 100% satisfaction
→ 100% on autopilot
All powered by Lorikeet.
3mo