
Which is More Important: Explainability or Monitoring?

Explainable AI (XAI) and model monitoring are foundational components of machine learning operations (MLOps). To understand why they’re so important for the MLOps lifecycle, consider that ML models are increasingly complex and opaque, making it difficult to understand the how and why behind their decisions. Without XAI and monitoring:

  1. You can’t see what’s going on inside the model.
  2. You can’t understand why it’s happening.
  3. You can’t identify the root cause of issues causing your model to underperform.

These three points together hint at the primary benefits of explainability and monitoring, and suggest an answer to the question of which is more important: neither on its own, and both together.

Model monitoring and XAI are the yin and yang of model performance management. They’re different, but complementary, and most effective when used together.

Monitoring tells the team how the model is performing and alerts them to critical issues that might otherwise go unseen. When those issues emerge, explainability helps stakeholders pinpoint the root cause of performance and drift problems so they can be resolved quickly.

Model monitoring is a must

Monitoring is crucial to ensure ML models perform as intended. Degraded performance can hurt the business, damage the company's reputation, and erode stakeholder trust. Because model degradation can go unnoticed, using tools to monitor model quality and performance is important to minimize both time to detection and time to resolution.

Align the organization by collaborating with technical and business stakeholders to identify which KPIs and metrics to track in order to meet your business goals. Monitoring models against those KPIs is critical because even a slight dip in model performance can drastically change business outcomes. As a result, you need a monitoring tool that can accurately alert on even small changes in model performance or drift.
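As a minimal illustration of what KPI-based alerting can look like, here is a generic Python sketch. It is not the Fiddler platform's API, and the metric names and thresholds are hypothetical; in practice the floors come from the alignment exercise described above.

```python
# Minimal sketch of KPI threshold alerting. The metric names and
# thresholds below are hypothetical examples.

KPI_FLOORS = {
    "accuracy": 0.92,        # alert if accuracy drops below 92%
    "approval_rate": 0.60,   # alert if the approval rate falls below 60%
}

def check_kpis(current: dict) -> list:
    """Return alert messages for any KPI that fell below its floor."""
    alerts = []
    for name, floor in KPI_FLOORS.items():
        value = current.get(name)
        if value is not None and value < floor:
            alerts.append(f"ALERT: {name}={value:.3f} is below floor {floor:.3f}")
    return alerts

# A scheduled job would pass in freshly computed production metrics:
print(check_kpis({"accuracy": 0.905, "approval_rate": 0.63}))
```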

The Fiddler Model Performance Management platform offers more accurate monitoring and explainability methods, and gives practitioners the flexibility to monitor custom metrics alongside the industry-standard measures of model decay: model drift, data drift, prediction drift, model bias, and more. These metrics are especially critical to measure as proxies, because small errors in data can go unnoticed even as they chip away at model performance over time.
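To make one of these proxies concrete, here is a sketch of the population stability index (PSI), a common way to quantify data or prediction drift between a training baseline and production traffic. This is a generic illustration, not the Fiddler platform's implementation:

```python
import numpy as np

def population_stability_index(baseline, production, bins=10):
    """PSI between a baseline (e.g., training) sample and a production sample.

    Bin edges are derived from the baseline; a small epsilon guards
    against empty bins (log(0) / division by zero).
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    eps = 1e-6
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline) + eps
    prod_pct = np.histogram(production, bins=edges)[0] / len(production) + eps
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)     # feature values at training time
production = rng.normal(0.3, 1.0, 10_000)   # slightly shifted production values
print(f"PSI = {population_stability_index(baseline, production):.3f}")
# A frequently cited rule of thumb: PSI above 0.2 signals significant drift.
```

Tracking a score like this per feature, per time window, is how small shifts get caught before they accumulate into visible performance loss.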

Explainable AI is a must

Knowing there's a potential problem isn't the same as understanding how it impacts model outcomes. Once monitoring alerts the team to a critical issue, explanations become urgent: the team needs to understand model behavior in order to take the measures required to resolve the issue and improve the model. XAI helps data scientists and engineers on MLOps teams understand model predictions and which features contributed to those predictions, whether in training or production. It helps ensure that models are behaving as intended and provides insights to all stakeholders, both at the business and technical level.
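Concrete XAI techniques range from global feature importance to per-prediction attributions such as Shapley values. As one simple, widely available illustration of how feature contributions can be surfaced, here is scikit-learn's permutation importance on a toy model (a sketch, not the Fiddler platform's attribution method):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy dataset standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does the score drop when we shuffle
# each feature, breaking its relationship with the target?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(enumerate(result.importances_mean), key=lambda p: -p[1])
for i, drop in ranked:
    print(f"feature_{i}: mean score drop = {drop:.3f}")
```

Global importances like these help confirm a model relies on the features it should; per-prediction attributions then explain individual decisions to business as well as technical stakeholders.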

Monitoring and explainability are better together

Given that model monitoring and explainability are distinct functions, it may seem that there’s a binary choice between them, requiring you to prioritize one or the other.

But they're better understood as two indispensable tools that together bridge the gap between how machine learning models behave and how humans comprehend that behavior.