Model Drift
Model drift is the gradual or sudden degradation of a model's performance in production caused by changes in input data, target distribution, or operating context.
What is Model Drift?
Drift comes in several flavours: covariate shift (the input distribution changes), concept drift (the relationship between inputs and labels changes), and prior shift (label frequencies change). Detection methods include monitoring input distributions, output distributions, calibration, and business KPIs. Response options include retraining, re-thresholding, or rolling back to a previous model. Drift monitoring is also required for high-risk AI systems under the EU AI Act's post-market monitoring obligations.
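One common way to monitor an input distribution is the Population Stability Index (PSI), which bins a reference sample (e.g. training data) and compares bin proportions against a production sample. The sketch below is a minimal, self-contained illustration; the `psi` helper and the bin count are our own choices, not a standard API, and the usual cut-offs (below 0.1 stable, 0.1 to 0.25 moderate shift, above 0.25 significant shift) are industry conventions rather than fixed rules.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples (illustrative helper).

    `expected` is the reference sample (e.g. training-time feature values),
    `actual` is the production sample being checked for drift.
    """
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges derived from the reference sample.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            i = sum(1 for e in edges if x >= e)  # index of the bin x falls in
            counts[i] += 1
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / len(sample), 1e-6) for c in counts]

    e_prop = proportions(expected)
    a_prop = proportions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_prop, a_prop))

# Identical distributions score ~0; a shifted production sample scores high.
reference = [i / 100 for i in range(100)]
production = [x + 0.5 for x in reference]
print(psi(reference, reference))   # near zero: no drift
print(psi(reference, production))  # well above 0.25: significant shift
```

In practice a job like this runs on a schedule per feature, and a score above the chosen threshold triggers an alert, a deeper investigation, or one of the responses above (retrain, re-threshold, roll back).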
How does Model Drift apply to enterprise AI?
Enterprise ML systems trained on 2024 data may drift by 2026 as customer behaviour, product mix, or regulatory definitions shift. Without drift monitoring, the team learns about the problem from customers or regulators rather than from its own dashboards.
Related terms
- MLOps - MLOps is the discipline of operating machine learning systems in production: versioning, deployment, monitoring, retraining, and governance.
- Evaluation Harness - An evaluation harness is the test framework used to measure an AI system against a fixed set of inputs, expected outputs, and metrics, run on every change.
- Observability - Observability for AI is the ability to understand what an AI system did, why it did it, and at what cost, by inspecting its inputs, outputs, intermediate steps, and metrics.
- AI Risk Management - AI risk management is the discipline of identifying, assessing, mitigating, and monitoring the harms an AI system can cause across its lifecycle.
Need help applying Model Drift to your enterprise? Submit a short brief and we reply within one business day.