Model Drift
Model drift is the degradation of AI model performance over time as the statistical properties of production data diverge from training data.
AI models learn patterns from historical data. When reality shifts—new products, changed policies, evolving customer language, emerging issues—the model's learned patterns become less accurate. Drift happens gradually and often invisibly until performance degrades noticeably.
Two types of drift matter for customer service. Data drift: the inputs change (customers ask about new things, or phrase requests differently). Concept drift: the correct answers change (policies update, processes evolve). Both cause previously accurate models to produce incorrect results. For example, a model trained on pre-pandemic support patterns struggled with "can I cancel my event" queries when that intent spiked dramatically.
Detecting drift requires ongoing monitoring: tracking confidence distributions, comparing current inputs to training data distributions, measuring performance metrics over time. Addressing drift typically means retraining with updated data—but for grounded AI systems, it might mean updating knowledge sources rather than the model itself. The key is having observability infrastructure that surfaces drift before customers complain.
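One common way to compare current inputs against the training-time distribution is the Population Stability Index (PSI), which bins a reference sample (e.g., model confidence scores logged at training time) and measures how much the bin proportions shift in production. The sketch below is illustrative, not a prescribed implementation; the function name, the synthetic beta-distributed confidence scores, and the sample sizes are all assumptions for the example.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two samples of a numeric signal (e.g., model confidence
    scores) by binning both on the expected sample's quantiles and
    measuring how much the bin proportions shift."""
    # Bin edges come from the reference (training-time) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range production values

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor the proportions to avoid log(0) and division by zero.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Hypothetical data: confidence scores at training time vs. in production.
rng = np.random.default_rng(0)
train_conf = rng.beta(8, 2, 5000)  # mostly high-confidence predictions
prod_conf = rng.beta(5, 3, 5000)   # confidence has shifted downward

psi = population_stability_index(train_conf, prod_conf)
# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift.
print(f"PSI: {psi:.3f}")
```

In practice a check like this would run on a schedule against each monitored signal, alerting when PSI crosses a threshold so retraining or knowledge-source updates can happen before customers notice.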
Related terms: AI observability, Hallucination detection, Knowledge base



