Thoughts on CX
Paying for AI support per conversation is a raw deal for buyers:
1. You pay for every conversation, regardless of whether the AI solves it.
2. Because of 1, you have a fundamental misalignment of incentives between you and your vendor.
3. Because of 1, slick sales-led vendors will perform incredible mental gymnastics to convince you it's actually good for you.
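To make the incentive problem concrete, here's a quick back-of-the-envelope calculation. The $1-per-conversation price and 30% resolution rate are made-up numbers, purely for illustration:

```python
# Illustrative only: what per-conversation pricing means per *resolved* ticket.
price_per_conversation = 1.00  # assumed vendor price (USD), not a real quote
conversations = 10_000
resolution_rate = 0.30         # assumed share of tickets the AI actually solves

total_cost = price_per_conversation * conversations
resolved = conversations * resolution_rate
print(f"Paid ${total_cost:,.0f} for {resolved:,.0f} resolutions")
print(f"Effective cost per resolution: ${total_cost / resolved:.2f}")  # $3.33
```

The vendor gets paid the same whether the resolution rate is 30% or 90% - that's the misalignment.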
1mo
The passage below from Jimmy Knight's great substack (link in comments) really resonated with me. I've been saying to customers that while metrics and analytics are important, "there's no substitute for review by subject matter experts with taste". You fundamentally have to assess any support interaction (human or AI) by asking "is this the way we want to show up to our customers?" Too much of the AI industry is focused on deflection rates and other user-hostile metrics that map to vendor revenue, not end customer value.

In Jimmy's words: "I once worked with a company obsessed with reducing ticket volume. They hit their target—deflection rates soared—but churn climbed. Why? Customers were being funneled to a chatbot that couldn’t handle complex issues, leaving them unheard. The dashboard looked great; the customer felt ignored. That’s the metric mirage: chasing numbers that don’t reflect reality. Re-centering the customer doesn’t mean abandoning structure or KPIs. It means tying every goal to the human on the other side. Let’s explore how. ... You can’t design great experiences from a conference room. You must live them. Order from your own website. Navigate your help center. Submit a support ticket. Pay attention to how it feels. Is it intuitive? Respectful of time? Or does it leave you frustrated?"
1mo
One of the biggest threats to successful AI rollouts is Dunning-Kruger AI bots that don't know what they don't know and attempt to answer every query. For most vendors, "30% automation in month 1" means 70% of the interactions were failures, since the AI attempts to answer 100% of tickets. That's crap: more than 2 bad AI interactions for every good one.

The most amazing part is the psyop vendors run where they convince customers that this is _your_ problem! You don't need a pristine help center or a fully staffed team to make the most of AI. You just need the right AI agent, with the ability to know when it can help (and when it can't).

At Lorikeet we try to maximize the ratio of good AI interactions to bad, and then scale up how many tickets the AI attempts as we improve its training together. It's *much* better for your customers.
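The arithmetic behind that claim is worth spelling out. A minimal sketch, assuming the bot attempts every single ticket:

```python
# If the AI attempts 100% of tickets, the "automation rate" is also its success
# rate - and every unresolved attempt is a bad interaction a customer sat through.
attempt_rate = 1.00      # bot tries to answer every query
automation_rate = 0.30   # the "30% automation in month 1" headline

good = attempt_rate * automation_rate
bad = attempt_rate - good
print(f"{bad / good:.2f} bad interactions per good one")  # 2.33
```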
2mo
Had a great chat this week with Georgie Healy about building in and working on AI. We talked about Lorikeet as an AI agent that knows what it doesn't know, but also focused a bit more here on the AI landscape, including how candidates who want to work on AI can stand head and shoulders above the pack with a small amount of extra effort.
2mo
The customer support AI chatbot backlash is just getting started. But AI is the scapegoat, not the cause.

Some companies have been willing to cut costs on customer support by degrading customer experience for years. This is not a new phenomenon. In the past it was maze-like FAQ pages, or offshoring support to people managed by metrics without adequate training or context. AI is a new tool those companies can use to achieve the same old goal: save a buck short term by degrading customer experience (which does long-term damage). The AI chatbots aren't the problem - the willingness to degrade experience to save a buck is.

We've been really fortunate at Lorikeet to only work with companies that see AI as a way to scale great support experiences. For them, AI is a way to provide fast, personalized support 24/7. For every one of our customers, AI providing an as-good-or-better experience compared to human agents is non-negotiable. Not *one* of them would proceed with an AI rollout if that wasn't true, no matter the cost savings.

If you *are* looking to save money by rolling out a worse experience, lmk - I've got a few vendors I could suggest 😜.
4mo
When it comes to adopting AI to support your customers, the idea of a Copilot is tempting. They produce a draft so a human agent can "check it" before hitting send. They feel safer. Like you're putting a safety blanket on your CX.

But here's the thing: Copilots don't have as much of an effect as you think. Let's be honest: at scale, human agents aren't reviewing what an AI Copilot drafts all that carefully. It's just another step that results in a slower resolution for the customer.

We find it's actually much safer to do really thorough testing, come to a conclusion about how good the AI is at handling certain tickets, and then put it on autopilot. The customer gets faster responses, and when tickets inevitably get closed and reopened due to inactivity, the AI agent is already up to speed - unlike the new human agent who picks it back up.

Here's a real example, from a real interaction between one of our customers (a big bank) and their customer (a parent in need). The result?
→ 100% AI
→ 100% satisfaction
→ 100% on autopilot
All powered by Lorikeet.
3mo
If you're testing customer support AI vendors and your *best* vendor isn't failing 20%+ of your test cases, you're not testing properly.

The reality right now is that any competent customer support AI you test will get most of your tier 1 test cases right. At the same time, all vendors (Lorikeet included) are going to say they can do *much more* than just tier 1. So to sort what's real from what's marketing, you need to run some kind of comparison.

Vendors with more limited capabilities *want* you to have simple test cases - they'll crush them. But so will everyone else, so they'll then pivot to selling on the VCs backing them, or on ancillary nice-to-have features.

To figure out who can actually help your customers the most, you need a test suite hard enough that the best vendor you test fails ~20% of the cases. You should put deliberate traps in your knowledge center, and add questions you know your best agents can't answer easily.

One particular wrinkle: it's hard to test multi-step, tier 2 or 3 tickets, because you can't just input a question and evaluate an answer. Figuring this out is key to seeing differentiation between vendors. Ping me if you'd like me to share the approach we use for testing this in proofs of concept.
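For the curious, here's roughly what the bones of such a bake-off harness could look like. This is a hypothetical sketch, not our actual tooling - `TestCase`, `grade`, and the keyword check are all stand-ins, and real grading should involve SME review, not string matching:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    question: str
    must_contain: str      # crude keyword proxy; real grading needs human review
    should_escalate: bool  # deliberate trap: the right move is to hand off

def grade(response: str, escalated: bool, case: TestCase) -> bool:
    if case.should_escalate:
        return escalated   # a confident answer on a trap case is a failure
    return case.must_contain.lower() in response.lower()

def failure_rate(results: list[tuple[str, bool]], suite: list[TestCase]) -> float:
    fails = sum(not grade(resp, esc, case)
                for (resp, esc), case in zip(results, suite))
    return fails / len(suite)

# Calibration check: if your best vendor scores under ~0.20 here, the suite is
# too easy - add knowledge-base traps and multi-step tier 2/3 scenarios.
```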
4mo
Excited to share my and Jamie Hall's little manifesto about why AI "Copilots" are not the answer for customer support. They're an admission of defeat by unambitious vendors and cautious, under-informed buyers. The logical case for them is weak, and so are the results they produce. Companies seeking to buy copilots today should reassess.

This topic is complex, so I took the time to lay out our thinking in a blog post (link in comments). Here’s the basic thesis:
1. A capable copilot is a capable pilot.
2. A capable copilot offers only marginal efficiency gains.
3. An incapable copilot may be worse than nothing.
4. As a result, using a copilot as a stepping stone to full autonomy isn’t effective.
5. Vendors are selling copilots because buyers are understandably afraid of AI.
6. The better solution is high-quality AI agents with robust testing.
4mo
The coverage of our launch last week emphasized our key differentiator: the ability to reliably handle much more complex customer interactions than other solutions while maintaining a high-quality, open-ended conversational experience.

We can do this because of the unique architecture that powers our product. Rather than commercializing commodity architectures like RAG or agentic reasoning, we built our own solution designed to solve our customers' hardest problems. We call it the Lorikeet Intelligent Graph. Jamie Hall has shared more about how this works in a technical deep-dive blog post (link in comments).
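To give a flavor of the general idea (and to be clear, this toy is *not* how the Intelligent Graph is actually built - Jamie's post covers that), graph-based approaches encode support flows as explicit nodes and edges, so the model follows a vetted procedure rather than free-associating over retrieved documents:

```python
# Toy illustration of a graph-guided flow; node names and steps are invented.
GRAPH = {
    "start":        {"say": "What do you need help with?", "next": ["verify"]},
    "verify":       {"say": "Please confirm the email on your account.", "next": ["refund_check"]},
    "refund_check": {"say": "Is the charge from the last 30 days?", "next": ["refund", "escalate"]},
    "refund":       {"say": "Refund issued. Anything else?", "next": []},
    "escalate":     {"say": "Connecting you to a human specialist.", "next": []},
}

def walk(node: str, choose_next) -> None:
    """Traverse the flow; `choose_next` stands in for LLM + customer input."""
    while node is not None:
        step = GRAPH[node]
        print(step["say"])
        node = choose_next(step["next"]) if step["next"] else None

walk("start", lambda options: options[0])  # demo: always take the first edge
```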
8mo
See Lorikeet in action
Companies choose Lorikeet because our AI agent can do more than any other agent available. We'd love to show you what we can do for you.