Launching: Toolshed

Thomas Wing-Evans

Sierra launched a university. Decagon launched a university. When the two most-funded AI support companies in the world both decide that what CX leaders really need is to sit in a classroom, it tells you something about how this industry sees its buyers.

The assumption is that you - the person running support for a company doing real volume - don't know enough to make good decisions about AI. That if someone just explained it to you, slowly, maybe with a video series and a certification badge, you'd finally get it. We think that's backwards.

Tools, not lectures

CX leaders don't have a knowledge problem. They have a decision problem. The VP of Support at a Series C fintech doesn't need someone to explain what an LLM is. She needs to know whether her knowledge base is actually good enough to power an AI agent, what her ticket backlog is costing her while she evaluates vendors, and whether the "build it ourselves" pitch from her engineering team holds up against a three-year TCO model.

Those aren't questions you answer with a blog post or a webinar. They're questions you answer with a calculator, an assessment, a model you can run with your own numbers and show your CFO on Monday.

The gatekeeping problem

Here's what the "university" model actually does: it creates a captive audience. You fly to San Francisco. You sit in a room with a vendor's team for two days. You leave feeling educated and slightly obligated. The vendor leaves with your contact details, your company's pain points laid bare, and a warm lead they can work for six months. It's not education. It's a very expensive top-of-funnel play dressed up as thought leadership.

The tell is who gets access. These programs aren't open to every CX leader trying to figure out AI. They're open to the ones who can afford a seven-figure contract - or who the vendor thinks might be able to. If your company isn't big enough, you don't get to learn.

What free actually means

When we started building Toolshed, the first decision was the simplest one: no gates. No email capture before you can use a tool. No request access form. Every tool works immediately, with your real numbers, and gives you a concrete output - a cost figure, a readiness score, a hiring plan - that you can screenshot and put in a deck. If you never talk to us, that's fine. You still got something useful.

This isn't altruism. It's a bet on a specific idea: that the best way to build trust with a CX leader is to be genuinely useful to them before you ask for anything in return. The worst way is to make them feel like they need your permission to understand their own operation.

The eight tools (with more coming)

We started with the questions CX leaders actually ask when they're evaluating AI - not the questions vendors wish they'd ask.

The Build vs Buy Calculator exists because every CX leader has an engineering team that says "we could build this ourselves." They're usually right that they could start. They're usually wrong about what it costs to maintain. The calculator models the full three-year cost of both paths, including the rebuild cycles that catch internal teams off guard when models and architectures shift every six months.
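The shape of that three-year comparison can be sketched in a few lines. This is a minimal illustration, not Lorikeet's actual model: every figure below (build cost, maintenance, rebuild cadence, subscription price) is an invented placeholder.

```python
# Hypothetical three-year TCO sketch: "build" incurs ongoing maintenance
# plus periodic rebuild cycles; "buy" is a flat annual subscription.
# All dollar figures are illustrative assumptions, not benchmarks.

def build_tco(initial_build, monthly_maintenance, rebuild_cost,
              rebuilds_per_year, years=3):
    """Total cost of building in-house over `years`."""
    return (initial_build
            + monthly_maintenance * 12 * years
            + rebuild_cost * rebuilds_per_year * years)

def buy_tco(annual_subscription, years=3):
    """Total cost of a vendor subscription over `years`."""
    return annual_subscription * years

build = build_tco(initial_build=250_000, monthly_maintenance=20_000,
                  rebuild_cost=80_000, rebuilds_per_year=2)
buy = buy_tco(annual_subscription=180_000)
print(f"Build: ${build:,}  Buy: ${buy:,}")
```

The point the calculator makes is visible even in a toy version: the initial build is rarely the largest term. Maintenance and rebuild cycles dominate over three years.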

The Knowledge Base Evaluator exists because the number one predictor of AI agent quality is the quality of the knowledge it's trained on. Most companies don't know whether their help centre is ready for AI until they've already deployed and seen it hallucinate answers from three-year-old articles. The evaluator scores coverage, freshness, and AI-readiness before you've committed to anything.

The AI Readiness Scorecard measures six dimensions of organizational readiness - because the technology being ready doesn't mean your team, your processes, or your data are.

The Quality vs Speed Benchmarker visualizes the tradeoff that every support leader feels in their gut but can't articulate in a board meeting: what happens to quality when you push for faster response times, and where AI actually shifts that frontier rather than just sliding along it.

The Hiring Forecaster and Agent Turnover Calculator both address the human side. Hiring in CX takes months. Attrition is brutal - and most leaders underestimate what it really costs when experienced agents leave and new ones take 90 days to ramp. These tools model the compounding effects that spreadsheets tend to flatten.
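The compounding the spreadsheets flatten looks roughly like this. A minimal sketch, with every input (ramp productivity, recruiting cost, attrition rate) a made-up assumption rather than a benchmark:

```python
# Illustrative sketch of what one departing agent costs when the
# replacement takes ~90 days to reach full productivity.
# All inputs are hypothetical assumptions.

def turnover_cost_per_agent(fully_loaded_monthly_cost, ramp_months=3,
                            ramp_productivity=0.5,
                            recruiting_cost=8_000):
    # During ramp you pay full cost for partial output; the gap is the loss.
    lost_output = fully_loaded_monthly_cost * ramp_months * (1 - ramp_productivity)
    return recruiting_cost + lost_output

def annual_turnover_cost(headcount, annual_attrition_rate, **kwargs):
    # Scale the per-agent cost by how many agents leave in a year.
    leavers = headcount * annual_attrition_rate
    return leavers * turnover_cost_per_agent(**kwargs)

cost = annual_turnover_cost(headcount=40, annual_attrition_rate=0.35,
                            fully_loaded_monthly_cost=6_000)
print(f"${cost:,.0f} per year")
```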

The Backlog Cost Calculator puts a dollar figure on the thing every support leader knows but struggles to quantify for finance: the revenue leaking out of every unanswered ticket sitting in the queue. Lost customers, reduced expansion, overtime spend.
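Those three leak paths can be summed in a back-of-the-envelope way. The churn probability, expansion loss, and overtime figures below are invented for illustration; they are not the calculator's actual coefficients.

```python
# Rough sketch: revenue at risk from a standing ticket backlog,
# split into the three leaks named above. Coefficients are assumptions.

def backlog_cost(backlog_size, avg_customer_value,
                 churn_prob_per_stale_ticket=0.02,
                 lost_expansion_per_ticket=15.0,
                 overtime_cost_per_ticket=4.0):
    churn_loss = backlog_size * churn_prob_per_stale_ticket * avg_customer_value
    expansion_loss = backlog_size * lost_expansion_per_ticket
    overtime = backlog_size * overtime_cost_per_ticket
    return churn_loss + expansion_loss + overtime

print(f"${backlog_cost(2_000, 1_200):,.0f}")
```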

The CX ROI Calculator is the most direct one. It compares what you're paying your current vendor - or what you'd pay under a pay-per-ticket model - against what a pay-per-resolution model looks like across CSAT, coverage, and total cost. It's the question every CX leader should be asking their vendor - and most aren't.
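The pricing-model comparison reduces to two formulas. A sketch with hypothetical prices and resolution rates, not Lorikeet's or any vendor's actual rates:

```python
# Sketch comparing a pay-per-ticket contract against pay-per-resolution.
# Prices and the resolution rate are illustrative assumptions.

def per_ticket_cost(tickets, price_per_ticket):
    # You pay for every ticket the AI touches, resolved or not.
    return tickets * price_per_ticket

def per_resolution_cost(tickets, resolution_rate, price_per_resolution):
    # You pay only for tickets the AI resolves end-to-end.
    return tickets * resolution_rate * price_per_resolution

tickets = 50_000
ticket_model = per_ticket_cost(tickets, price_per_ticket=1.00)
resolution_model = per_resolution_cost(tickets, resolution_rate=0.6,
                                       price_per_resolution=1.50)
print(ticket_model, resolution_model)
```

Note the structural difference: under per-ticket pricing, your bill is indifferent to quality; under per-resolution pricing, the vendor only earns when the ticket is actually closed.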

Why vendors don't do this

Building free tools that help prospects make independent decisions is a terrible idea if your product doesn't hold up to scrutiny. If your AI agent relies on deflection rates to look good, the last thing you want is a calculator that separates good deflections from bad ones. If your pricing model depends on charging for every ticket touched rather than every ticket resolved, you definitely don't want a side-by-side cost comparison sitting on the internet.

The reason most vendors prefer "education" over "tools" is that education lets you control the narrative. A university syllabus is written by the vendor. A calculator with the buyer's own numbers is written by reality.

Questions to ask yourself


  • When your AI vendor offers to educate you, what are they actually optimizing for?

  • Could you explain to your CFO today what your ticket backlog costs the business?

  • How much of your current AI evaluation is based on your own data versus the vendor's hand-picked case studies?

Book a call

See what Lorikeet is capable of

Ready to deploy human-quality CX?
