
Fiddler X AWS Startup Showcase: Why Model Performance Management Is the Next Big Thing in AI

Everyone knows the benefits of artificial intelligence (AI)—and indeed we’re seeing AI being embedded into nearly every application. But if you want to take advantage of AI, how do you make sure you’re also staying compliant with existing and future regulations? This was the topic of a recent conversation between Fiddler’s co-founders, Krishna Gade and Amit Paka, and John Furrier of SiliconANGLE’s theCUBE, as part of the AWS Startup Showcase on June 16, 2021. 

Fiddler is a model performance management (MPM) platform company that enables organizations to build trust with AI by continually monitoring, analyzing, and explaining their models, ensuring they stay compliant with the regulations in their industry. In this article, we’re sharing a condensed version of the discussion our team had about why MPM is so crucial for companies right now. You can also watch the full interview below. 

What is model performance management (MPM)?

AI models essentially represent the patterns inside data, using these historical patterns to predict the future. The problem, though, is that an AI model is a black box. Unlike regular software, you can’t simply open it up, read its code, and understand what it’s doing. That’s what makes AI risky to implement. And that’s where a model performance management platform can help: it lets you look into that black box and monitor its predictions continuously, so you know how the model is behaving at any given point in time.

Similar to QA practices for traditional software, an MPM system can test the model against different inputs. For example, how does a loan approval model behave for male vs. female applicants? And what are the key factors that it uses to decide whether to grant a loan? This information can be used to explain how the model is working to all the stakeholders, including those who may be less technical (like compliance teams and regulators).
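To make that concrete, here’s a rough sketch, in plain Python rather than Fiddler’s product, of slicing a loan approval model’s predictions by a demographic attribute and comparing outcomes across groups. The column names and toy data are illustrative assumptions.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Toy hold-out data with model predictions attached; the `gender`,
# `approved` (ground truth), and `prediction` columns are illustrative
# assumptions, not Fiddler's schema.
holdout = pd.DataFrame({
    "gender":     ["F", "F", "F", "M", "M", "M", "F", "M"],
    "approved":   [1,   0,   1,   1,   0,   1,   0,   1],
    "prediction": [1,   0,   0,   1,   0,   1,   0,   1],
})

# Compare how the model behaves for each group in the slice.
for group, slice_df in holdout.groupby("gender"):
    approval_rate = slice_df["prediction"].mean()
    accuracy = accuracy_score(slice_df["approved"], slice_df["prediction"])
    print(f"{group}: predicted approval rate={approval_rate:.0%}, accuracy={accuracy:.0%}")
```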

Why do we need MPM — what is the risk with AI? 

With all the data and computing power available these days, we can build really sophisticated models. Some recent models have even done better than humans on image recognition tasks! However, there are real risks in using AI without proper testing and monitoring.

For example, a couple of years ago, Apple and Goldman Sachs launched a credit card that used machine learning (ML) algorithms to set credit limits for cardholders. In one case, a husband and wife in the same household saw a 10x difference in their credit limits. The result was significant PR damage and, eventually, a regulatory probe into Goldman Sachs. These are the kinds of stories that keep companies up at night when it comes to using ML, because you can lose customer trust in an instant. 

That’s why tools like Fiddler are emerging: to help enterprises act responsibly toward both their organization and their customers.

How does MPM help a team manage the ML lifecycle?

If you look at the whole lifecycle of Machine Learning Ops (MLOps), in some ways it mirrors the traditional lifecycle of DevOps. But it also introduces new complexities. First, models can be black boxes. Second, the quality of their predictions can decay over time. This is a constant challenge with ML, because “model drift” happens when the data seen in production differs from the data used to train the model. 
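One common way to quantify this kind of drift is the Population Stability Index (PSI), which compares how a feature was distributed in training against what the model is seeing in production. The sketch below is a generic illustration of the idea, not Fiddler’s implementation; the synthetic data and the 0.2 rule of thumb are assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's training (expected) and production (actual)
    distributions; larger values indicate stronger drift."""
    # Bin both samples using cut points derived from the training data.
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    expected_pct = np.histogram(expected, bins=cuts)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=cuts)[0] / len(actual)
    # Avoid log(0) / division by zero with a small epsilon.
    eps = 1e-6
    expected_pct = np.clip(expected_pct, eps, None)
    actual_pct = np.clip(actual_pct, eps, None)
    return np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct))

# Synthetic example: production incomes have shifted relative to training.
# A PSI above roughly 0.2 is a common rule of thumb for significant drift.
training_income = np.random.lognormal(10.5, 0.4, 50_000)
production_income = np.random.lognormal(10.8, 0.5, 5_000)
print(population_stability_index(training_income, production_income))
```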

Imagine you're a bank creating hundreds of these models for a variety of use cases—credit risk, fraud, anti-money laundering. How are you going to know which models are actually working and which are underperforming? 

An MPM system sits at the heart of the ML workflow, keeping track of the data flowing through your ML lifecycle, the models being created and deployed, and how they’re performing. What’s more, you’re able to explain the model. Our system is a visual interface, with lots of dashboards, that developers, operations, and compliance teams can all use to continuously monitor the model—and get alerts when something goes wrong. Check out our MPM best practices to learn more.
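As a simplified illustration of the alerting side (not Fiddler’s API), a monitoring loop can compare each window’s metrics against baseline thresholds and flag anything that crosses them. The metric names and threshold values below are hypothetical.

```python
import logging

logging.basicConfig(level=logging.INFO)

# Hypothetical thresholds; a real system would derive these from a baseline.
ALERT_THRESHOLDS = {"accuracy": 0.90, "approval_rate_gap": 0.05}

def check_window(metrics: dict) -> None:
    """Compare one monitoring window's metrics against thresholds
    and emit an alert when something looks wrong."""
    if metrics["accuracy"] < ALERT_THRESHOLDS["accuracy"]:
        logging.warning("ALERT: accuracy dropped to %.2f", metrics["accuracy"])
    if metrics["approval_rate_gap"] > ALERT_THRESHOLDS["approval_rate_gap"]:
        logging.warning("ALERT: approval rate gap of %.2f between segments",
                        metrics["approval_rate_gap"])

# Example window where both metrics have degraded.
check_window({"accuracy": 0.87, "approval_rate_gap": 0.08})
```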

What’s Fiddler’s approach to MPM?

Fiddler is a purpose-built platform for model explainability, model monitoring, and model bias detection. Because this is our only focus, we are dedicated to building a tool that’s useful across a variety of industries: financial services, retail, advertising, human resources, healthcare, and so on. We’ve found a lot of commonalities in how data scientists solve ML problems across these industries. 

That’s why we’ve been able to create a system that plugs into teams’ workflows seamlessly. We support a wide variety of model types—ingesting such heterogeneous models is a very hard technical problem. We then provide a single, centralized interface that shows how models are performing. 

There are three pillars to Fiddler’s product philosophy. First, we leverage the latest research. This not only makes our explanations very fast, but it also enables new use cases—for example, counterfactual analysis, where you can ask hypothetical questions of the model. The second pillar is infrastructure. We’re building something that works at enterprise scale, with potentially billions and billions of predictions flowing through the system. And finally, user experience. We care about creating beautiful experiences that are very intuitive, designed not just for technical users but also for non-technical stakeholders.
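As a rough illustration of the counterfactual idea (using a toy scikit-learn model rather than Fiddler’s product), you can take a single applicant, change one input, and re-score to see how the prediction would shift. The features and values below are made up.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy stand-in model; the counterfactual question is the point, not the model.
train = pd.DataFrame({
    "income":     [30_000, 45_000, 80_000, 120_000, 60_000, 25_000],
    "debt_ratio": [0.6,    0.4,    0.2,    0.1,     0.3,    0.7],
    "approved":   [0,      0,      1,      1,       1,      0],
})
model = LogisticRegression().fit(train[["income", "debt_ratio"]], train["approved"])

applicant = pd.DataFrame({"income": [40_000], "debt_ratio": [0.5]})
counterfactual = applicant.assign(income=55_000)  # "What if income were higher?"

print("original approval probability:      ", model.predict_proba(applicant)[0, 1])
print("counterfactual approval probability:", model.predict_proba(counterfactual)[0, 1])
```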

What's the operational playbook for companies to get started with AI?

Most companies are still in the early stages of implementing AI. Having AI models in the lab is one thing—when you’re just experimenting, you’re not going to hurt your business or your users. But once you operationalize those models, you have to do it in a trustworthy way. 

The playbook for responsible AI involves thinking through questions like: How are you going to test these models? How are you going to analyze and validate them before they actually are deployed? How are you going to analyze biases in your training data? And once your models are deployed to production, how are you going to track drift in performance?
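As a starting point for the training-data bias question, a quick sketch like the one below (with assumed column names and toy data) checks group representation and historical outcome rates before any model is trained.

```python
import pandas as pd

# Toy training set; `gender` and `approved` are assumed column names.
train = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "M"],
    "approved": [0,   0,   1,   1,   1,   0,   1,   1],
})

# 1. Representation: is any group badly under-sampled?
print(train["gender"].value_counts(normalize=True))

# 2. Label balance per group: a large gap in historical approval rates
#    is a signal the model may learn and amplify that bias.
print(train.groupby("gender")["approved"].mean())
```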

Ultimately a lot of these answers can come down to whether or not you have the right tools. Tools like Fiddler will add value so that companies can safely reap the benefits of this powerful technology. AI is game-changing—and it can also be responsible and trustworthy.

Interested in learning more? Contact us today.