
Introducing All-Purpose Explainable AI

While AI is being adopted across more and more industries, the need for interpretability limits its success in many high-value applications. The black-box nature of ML models impedes comprehension, hampering every stage of operationalization from model validation and analysis to production monitoring. Explainable AI, a rapidly evolving diagnostic toolbox that offers insight into model decisions, lets teams inspect individual predictions and analyze models holistically. The Federal Reserve has made encouraging remarks in support of Responsible AI enabled by model explainability. The EU's recent AI regulation, often called the GDPR of AI, mandates explainability for high-risk AI systems and encourages it for non-high-risk ones. MLOps is therefore incomplete without model explainability.

With ML models applied to business-critical use cases, even a fraction of a percentage point of accuracy can mean tens of millions in additional revenue or reduced losses. It's why AI's impact on banking, in particular, is expected to reach $300B by 2030. The powerful combination of deep learning and large volumes of data has made it possible to improve model accuracy over more conventional methodologies by taking a lighter touch to feature engineering; these models are at their best when ingesting less processed, often heterogeneous data elements. While this simplifies the feature-engineering task in many cases, the complexity passed on to the model makes its behavior more opaque.

Only a few Explainable AI offerings are available today: mostly open source software (OSS), plus a handful of enterprise-grade solutions like Fiddler. OSS offers limited coverage for complex model types and, beyond ad-hoc use in Jupyter notebooks, lacks a centralized solution for the entire team.

At Fiddler, we work with some of the largest financial institutions and technology-native companies. These enterprises are deploying increasingly complex models whose inputs span multiple data types, e.g. structured and unstructured data together. The inputs can themselves be composed of complex data types such as tables, or hierarchical data types like variable-length sequences of event records. The teams building these models need to explain them across the model development lifecycle; for development, validation, and governance purposes; and to make that transparency available to teammates and across the organization. Current OSS covers only a narrow slice of the modeling-framework space and does little to provide this visibility to other stakeholders.

Introducing All-Purpose Explainable AI

Today, we're launching an industry-first All-Purpose Explainable AI (AXAI) preview that expands Fiddler's Explainable AI, which powers our Model Performance Monitoring platform. What does this mean? With AXAI, you can now explain deep-learning models with complex and heterogeneous data inputs at any granularity, recursively to any level. It is also extensible, built to easily accommodate data types that we don't handle today. We use Integrated Gradients (IG) to power these explanations.
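Fiddler's implementation isn't shown in this post, but the core IG computation is standard: integrate the model's gradients along a straight-line path from a baseline input to the actual input, then scale by the input-baseline difference. Here's a minimal PyTorch sketch of that idea; the function name, signature, and all-zeros baseline are illustrative assumptions, not Fiddler's API.

```python
import torch

def integrated_gradients(model, x, baseline, steps=50):
    """Riemann-sum approximation of Integrated Gradients:
    IG_i(x) = (x_i - b_i) * integral over alpha in [0, 1]
    of dF/dx_i evaluated at b + alpha * (x - b)."""
    # Points along the straight-line path from the baseline to the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)   # shape: (steps, *x.shape)
    path.requires_grad_(True)

    # Gradient of the (scalar) model output at every point on the path.
    grads = torch.autograd.grad(model(path).sum(), path)[0]

    # Average the path gradients and scale by the input-baseline gap.
    return (x - baseline) * grads.mean(dim=0)

# Example: attribute a toy linear scorer's output to each input feature.
model = torch.nn.Linear(4, 1)
x, baseline = torch.randn(4), torch.zeros(4)  # zeros: a common baseline choice
print(integrated_gradients(model, x, baseline))
```

One reason IG suits heterogeneous inputs is that it only needs gradients, so the same recipe applies to tabular features, token embeddings, or any other differentiable representation.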

To enable this on the Fiddler platform, we built a new extensible markup format. It allows our customers to express explanations in task-appropriate ways through the Fiddler front-end, and to control the granularity of explainability they desire. It also allows customers, for the first time, to plug their own custom or application-specific explainability algorithms into Fiddler.
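The post doesn't spell the format out, but to make the idea of recursive, multi-modal attributions concrete, here's a purely hypothetical payload showing how nested granularity might be expressed; every field name below is invented for illustration and is not Fiddler's actual schema.

```python
# Hypothetical attribution record for a multi-modal lending model.
# All field names are invented for illustration, not Fiddler's schema.
explanation = {
    "prediction": 0.82,
    "attributions": [
        {"feature": "annual_income", "type": "numeric", "attribution": 0.31},
        {
            "feature": "loan_purpose_text", "type": "text", "attribution": -0.12,
            # Nested, finer-grained attributions: per-token within the text field.
            "children": [
                {"feature": "consolidate", "attribution": -0.08},
                {"feature": "debt", "attribution": -0.04},
            ],
        },
    ],
}
```

A recursive `children` list like this is what would let a front-end drill from a whole feature down to tokens, table rows, or individual events at arbitrary depth, and a plugged-in custom algorithm would only need to emit the same structure.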

Here is an AXAI explanation of a lending model that takes structured and unstructured data.

At a high level, we’ve kept the same patented UX that offers an easy, intuitive way for users, technical and non-technical stakeholders alike, to fiddle with inputs. Since this model takes multi-modal data types, you can immediately see the overall feature impact and, on a single screen, either perform an in-depth analysis of the text input or manipulate any of the inputs to understand the model better.

Here’s a quick demonstration of this new Explainable AI capability, All-Purpose XAI:

If you’d like to learn more about how to unlock your AI black box and transform the way you enable trust, visibility, and performance in MLOps, let us know.