Customer effort score (CES)
Customer effort score (CES) measures how easy it is for a customer to get their issue resolved. Typically captured through a post-interaction survey asking "How easy was it to handle your issue?" on a scale of 1-7 (or 1-5), CES reflects the friction a customer experiences — not just whether they're satisfied, but how much work they had to do to get there.
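A minimal sketch of how CES might be aggregated from raw survey scores. This assumes a 1-7 scale where 7 means "very easy"; both the mean-score and percent-low-effort conventions shown here are common in practice rather than a single official formula, and the threshold of 5 is an illustrative choice:

```python
def ces_mean(responses: list[int]) -> float:
    """Average effort score across all survey responses."""
    return sum(responses) / len(responses)

def ces_percent_low_effort(responses: list[int], threshold: int = 5) -> float:
    """Share of respondents reporting low effort (score >= threshold)."""
    low_effort = sum(1 for r in responses if r >= threshold)
    return 100 * low_effort / len(responses)

# Hypothetical batch of post-interaction survey scores on a 1-7 scale
responses = [7, 6, 4, 7, 5, 2, 6, 7]
print(ces_mean(responses))                # -> 5.5
print(ces_percent_low_effort(responses))  # -> 75.0 (% scoring 5-7)
```

Teams often track both: the mean is sensitive to very high-effort outliers, while the percent-low-effort view maps more directly to "how many customers found this easy."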
CES was introduced by the Corporate Executive Board (now Gartner) based on research showing that reducing customer effort is a stronger driver of loyalty than exceeding expectations. The finding challenged the prevailing "delight" strategy — it turns out that making things easy matters more than making things impressive.
CES is particularly valuable for evaluating AI customer service implementations because it captures something CSAT misses. A customer might rate an interaction as "satisfactory" (decent CSAT) while noting it required three attempts, a channel switch, and 45 minutes of total effort. CES catches that gap.
High-effort experiences that CES detects include:
- Having to repeat information across channels or agents
- Being transferred multiple times before reaching someone who can help
- Needing to follow up on an unresolved issue
- Navigating complex self-service systems that don't lead to resolution
- Being forced through AI chatbot loops before reaching a human
For AI-automated customer service, CES is arguably the most important quality metric. The promise of AI is frictionless resolution — customer describes their issue, AI handles it, done. If the AI creates friction (asking redundant questions, failing to understand intent, requiring customers to start over), it defeats the purpose regardless of whether it technically "resolves" the ticket.
Related terms: customer satisfaction score, Net Promoter Score, first contact resolution



