
Explainable AI

Do you know how changing a single data point will affect the predictions of models that power your business? Artificial intelligence (AI) models can be quite complex, and not all models are built the same — knowing how an input will affect the model’s output makes it easier to optimize that model for your company’s needs. 

What is Explainable Artificial Intelligence (XAI)?

Explainable AI (XAI) is an approach to designing AI models so that humans can understand how they work. An explainable model makes clear the effect of each individual input on the model's output. Some simpler models (e.g., logistic regression) are explainable by nature, while more complex models (e.g., deep learning) require dedicated explainability techniques. XAI can be used alongside model performance management (MPM) to optimize your model's behavior throughout the model lifecycle.
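To make this concrete, here is a minimal sketch of why logistic regression is explainable by nature: each learned coefficient directly tells you how much a feature shifts the prediction. The dataset is synthetic and the feature names are hypothetical.

```python
# A minimal sketch, assuming a synthetic dataset and hypothetical feature
# names: logistic regression is explainable by nature because each learned
# coefficient directly shows how a feature shifts the prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "income", "tenure_months"]  # hypothetical features
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)

# Each coefficient is the change in log-odds per unit increase in a feature.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

A deep neural network offers no such direct readout, which is why complex models need the dedicated techniques described below.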

XAI is also related to the idea of Responsible AI, a branch of AI that emphasizes designing models to be fair, privacy-sensitive, secure, and explainable. In particular, the benefits of XAI overlap with the goals of Responsible AI: not only can explainable models be understood by stakeholders, but using XAI also contributes to detecting and limiting model bias.

Who benefits from XAI?

  • Internal MLOps, technical, and business team members can improve model performance by understanding the reasoning behind individual predictions.
  • Public stakeholders in the industry become more informed about the models they are regulating or investing in.
  • Product end users can better understand how a prediction that affects them personally was reached.

How does XAI work with MPM? 

Model performance management (MPM) benefits from clarity on how a model derives its predictions, and explainability matters throughout the model lifecycle. Offline explanations are used while a model is still in development to optimize it for production use. Online explanations are used in two ways after the model ships: spot explanations of individual predictions help debug model issues, while consistent, ongoing explanations help track model performance over time.

By tracking the impact of inputs on model outputs, XAI helps with one of the key use cases for MPM: detecting data drift and the performance degradation it causes. Being able to understand the model also makes it easier to debug and to adjust it for new data.
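As an illustration of how explanations can feed performance tracking, here is a hedged sketch that compares feature attributions from a baseline window against a recent production window and flags features whose importance has shifted. The attribution arrays, names, and threshold are illustrative assumptions, not a prescribed MPM API.

```python
# A sketch of one way XAI supports MPM: compare a baseline window of feature
# attributions (e.g., computed offline during development) against a recent
# production window, and flag features whose importance has shifted.
# All names and the 0.25 threshold are illustrative assumptions.
import numpy as np

def attribution_drift(baseline_attr, live_attr, threshold=0.25):
    """Flag features whose mean |attribution| changed by more than `threshold`
    relative to the baseline. Rows are predictions, columns are features."""
    base = np.abs(baseline_attr).mean(axis=0)
    live = np.abs(live_attr).mean(axis=0)
    relative_change = np.abs(live - base) / (base + 1e-9)
    return relative_change > threshold

rng = np.random.default_rng(1)
baseline = rng.normal(scale=[1.0, 0.5, 0.1], size=(1000, 3))
live = rng.normal(scale=[1.0, 0.9, 0.1], size=(1000, 3))  # feature 2 drifted
print(attribution_drift(baseline, live))  # [False  True False]
```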

How is XAI used in different contexts?

Structured data

When working with tabular or other structured data, XAI helps identify the highest-contributing features in the model, meaning the inputs that most strongly influence the output. This information helps surface model bias, lets the team find features that can be dropped without affecting model predictions, and points to the features behind unexpected or poor model behavior.
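One widely used way to surface the highest-contributing features in a tabular model is permutation importance: shuffle each feature column in turn and measure how much the model's score degrades. The sketch below uses scikit-learn; the dataset and model choice are illustrative.

```python
# A sketch of one common XAI technique for tabular data: permutation
# importance. Dataset and model here are synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn; a large score drop means the model
# relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```

Features whose shuffling barely moves the score are candidates for removal without hurting predictions.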

Text data

With text or speech data, models often perform sentiment analysis using natural language processing (NLP). Sentiment analysis assigns a tonal score to a text based on how positive or negative the model perceives its individual words to be. XAI helps identify the words that most influence the sentiment score, which helps verify that the model works as expected and makes it easier to adapt the model as language use changes.
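As a simple illustration, a linear bag-of-words sentiment model is directly explainable: each word's learned weight shows how strongly it pushes the score positive or negative. The tiny corpus below is made up for demonstration.

```python
# A minimal, illustrative sketch of word-level sentiment explanation:
# with a linear bag-of-words model, each word's learned weight shows how
# much it pushes the sentiment score positive or negative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great product, works great", "terrible quality, broke fast",
         "love it, excellent value", "awful experience, never again"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (toy data)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Pair each vocabulary word with its weight; the extremes are the words
# that most strongly drive the sentiment prediction.
weights = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: pair[1])
print("most negative:", weights[:3])
print("most positive:", weights[-3:])
```

For non-linear NLP models, techniques such as LIME or SHAP recover similar per-word attributions.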

Image / video data

Image models typically use neural networks, most often convolutional neural networks (CNNs), for image classification or object detection; video models work similarly, treating each frame as its own image. For these models, XAI techniques produce heatmaps that assign each pixel a relative importance. Examining the highest-weighted pixels in those heatmaps lets the team confirm the model is basing its predictions on the most important image features, and adjust it when it is not.
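One common way to produce such a heatmap is a gradient-based saliency map: take the gradient of the predicted class score with respect to the input pixels, and treat its magnitude as pixel importance. The sketch below uses PyTorch, with a toy model and a random image as stand-ins for a real classifier and input.

```python
# A hedged sketch of a gradient-based saliency map, one common way to get
# pixel-importance heatmaps from a CNN. The tiny model and random image are
# stand-ins for a real classifier and input.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder input
score = model(image).max()  # score of the top predicted class
score.backward()            # gradients flow back to the input pixels

# The saliency map: pixels with larger gradient magnitude had more influence
# on the prediction. Collapse the color channels by taking the max.
saliency = image.grad.abs().max(dim=1)[0]
print(saliency.shape)  # torch.Size([1, 32, 32])
```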

Ultimately, XAI makes models more transparent, which in turn contributes to easier model monitoring, debugging, optimization, and performance tracking throughout the model lifecycle.