Error Rate / Hallucination Rate
Error Rate measures the percentage of AI responses containing factually incorrect information, policy violations, or fabricated details. Hallucination Rate refers specifically to errors in which the AI confidently states false information as if it were true.
These metrics require active QA. Unlike CSAT or resolution rate, errors don't surface automatically—customers often accept confident-sounding wrong answers. Measuring error rate means sampling AI conversations and checking responses against source data, policies, and known facts.
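A minimal sketch of that sampling step is shown below. The conversation records, the sample size, and the has_error function are all hypothetical stand-ins: in practice the records would come from your support platform's export and the verdicts from a human QA reviewer working against source data and policy docs.

```python
import random

# Hypothetical conversation records; in practice these come from your
# support platform's export or API.
conversations = [{"id": i, "ai_response": f"response {i}"} for i in range(2000)]

# Reviewing every conversation is rarely practical, so draw a random QA sample.
SAMPLE_SIZE = 200
sample = random.sample(conversations, SAMPLE_SIZE)

def has_error(conversation):
    """Reviewer verdict: does the response contradict source data, policy,
    or known facts? Stubbed here; real verdicts come from manual QA."""
    return False  # placeholder verdict

errors = sum(1 for convo in sample if has_error(convo))
error_rate = errors / SAMPLE_SIZE
print(f"Reviewed {SAMPLE_SIZE} conversations, error rate: {error_rate:.1%}")
```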
Common error categories include: wrong product information, incorrect policy statements, fabricated order statuses, promised actions the AI can't perform, and invented features or capabilities. Hallucinations are errors in which the AI generates plausible-sounding but entirely false information, such as a tracking number that doesn't exist.
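To see how those categories roll up into the two rates, here is a small illustrative sketch. The category names, findings, and sample size are made up for the example; the hallucination rate is simply the subset of errors flagged as confident fabrications.

```python
from collections import Counter

# Illustrative category tags mirroring the list above.
ERROR_CATEGORIES = {
    "wrong_product_info",
    "incorrect_policy",
    "fabricated_order_status",
    "unfulfillable_promise",
    "invented_feature",
}

# Hypothetical QA findings: one record per error found in the reviewed sample,
# with a flag for confident fabrications (hallucinations).
findings = [
    {"conversation_id": 42, "category": "incorrect_policy", "hallucination": False},
    {"conversation_id": 97, "category": "fabricated_order_status", "hallucination": True},
]
assert all(f["category"] in ERROR_CATEGORIES for f in findings)

reviewed = 200  # size of the QA sample the findings came from

error_rate = len(findings) / reviewed
hallucination_rate = sum(f["hallucination"] for f in findings) / reviewed

print(f"Error rate: {error_rate:.1%}, hallucination rate: {hallucination_rate:.1%}")
print("Errors by category:", dict(Counter(f["category"] for f in findings)))
```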
Error rate is the safety metric for AI support. A 5% error rate means 1 in 20 customers receives wrong information. For regulated industries—fintech, healthcare, insurance—this creates compliance risk. For any business, errors erode trust faster than good interactions build it.
Related terms: Total Quality Score (TQS), Effective Automation Rate, Success Rate



