
Explainable Monitoring for Successful Impact with AI Deployments

Training and deploying ML models is relatively fast and cheap, but operationalization — maintaining, monitoring, and governing models over time — is difficult and expensive. An Explainable ML Monitoring system extends traditional monitoring to provide deep model insights with actionable steps. As part of Fiddler’s 3rd annual Explainable AI Summit in October 2020, we brought together a panel of technical and product leaders to discuss operationalizing machine learning systems and the key role that monitoring and explainability have to play in an organization’s AI stack.

The shift to operationalization

As Natalia Burina (AI Product Leader, Facebook) noted, “There’s been a shift towards operations with the rise of MLOps. A recent report gave the figure that 25% of the top 20 fastest-growing Github projects of Q2 2020 concerned ML infrastructure, tooling, and operations.” Abhishek Gupta (Engineering Lead, Facebook; ex-Head of Engineering, Hired, Inc.) predicts that over the next 2-5 years, we will see more and more tools that “SaaSify” aspects of ML operationalization. 

These innovations are a response to more organizations trying — and often struggling — to get their ML projects “out of the lab.” As Peter Skomoroch (Machine Learning Advisor) explained, the big data push of years past led companies to invest in data infrastructure to power analytics on their sites. Now they’re trying to use that data for machine learning and running into challenges. Traditional engineering processes are built around software that the team writes, tests, and then deploys; it might be A/B tested for effectiveness, but the software itself doesn’t change once it ships. The same can’t be said for machine learning, where a model’s behavior shifts as the data feeding it shifts. Monitoring and explainability are therefore key components of a successful AI system.

Case in point: COVID-19

Kenny Daniel (Co-founder and CTO, Algorithmia) shared that “In the data science communities that I run in, there’s a picture of a timeseries, any time series, and it looks normal, and then — COVID hit.” The moral of the story: if you don’t have a way of recognizing when the macro environment has shifted, you’re going to have problems. Airlines experienced this firsthand: at the start of the pandemic, ticket prices dropped dramatically because the pricing algorithms mistakenly concluded that lower fares were the way to get people flying again.
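To make “recognizing when the macro environment has shifted” a little more concrete, here is a minimal sketch (not something any panelist prescribed) that compares a recent window of a feature against its training-time baseline using a two-sample Kolmogorov-Smirnov test. The window sizes, threshold, and example data are illustrative assumptions.

```python
# Minimal drift check: compare a recent window of a feature against its
# training-time baseline with a two-sample Kolmogorov-Smirnov test.
# The alert threshold and window sizes are illustrative, not prescriptive.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, recent: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Return True if the recent window looks significantly different from the baseline."""
    statistic, p_value = ks_2samp(baseline, recent)
    return p_value < p_threshold

# Example: ticket prices seen during training vs. the last few days of live traffic.
baseline_prices = np.random.normal(300, 50, size=10_000)  # stand-in for training data
recent_prices = np.random.normal(180, 60, size=2_000)     # stand-in for live data
if drift_alert(baseline_prices, recent_prices):
    print("Input distribution has shifted -- review the model before trusting its outputs.")
```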

Many companies had to rapidly retrain their models when COVID hit. Gupta described the situation at Hired as “surreal” as they saw a sudden drop in hiring and surge in candidates, resulting in their models behaving in less-than-ideal ways. (Gupta has since moved on to an engineering lead role at Facebook.)

Monitoring and explainability

All the panelists agreed that monitoring is especially important for machine learning systems, and that most companies’ current tools are not sufficient. “You have to assume that things will go wrong and your machine learning team will be under the gun to fix it — quickly,” said Skomoroch. “If you have a model that you can’t interrogate, where you can’t determine why the accuracy is dropping, that’s a very stressful situation.”

This is even more important for high-stakes use cases involving fairness and vulnerable groups, Burina said, adding that “Debugging models is something that’s developing. We don’t have in the industry a very good way of doing this like we have in traditional software.” Skomoroch agreed: “That’s why I think stuff like Fiddler is pretty exciting because a lot of this is done manually currently and ad hoc — there’s some notebooks flying around in emails. We really need to have benchmarks that we’re looking at consistently and continuously.”

Gupta said that in his opinion, “ML monitoring and the ability to drill down and explain is inextricably linked.” When you have both of these things, you get faster detection and resolution of issues, and at the same time, ML engineers are able to develop a better intuition about which models and features need more work. Gupta explained that “Fiddler’s tool and explainable monitoring has been a gamechanger and a step function improvement to how we monitor and react to challenges that we see in the marketplace.”
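To illustrate that link, here is a hypothetical sketch (not Fiddler’s product or API) of pairing a monitoring alarm with an explanatory drill-down: when accuracy drops below an alert level, features are ranked by population stability index so engineers know where to look first. The function names, bin count, and alert level are assumptions made for illustration.

```python
# Hypothetical pairing of monitoring and drill-down (not Fiddler's actual API):
# when an accuracy alarm fires, rank features by Population Stability Index (PSI)
# to suggest which inputs drifted most and deserve a closer look.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def drill_down(baseline: dict, live: dict, accuracy: float, alert_below: float = 0.85):
    """If accuracy dips below the alert level, return features sorted by drift."""
    if accuracy >= alert_below:
        return []
    scores = {name: psi(baseline[name], live[name]) for name in baseline}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```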

Monolithic solutions vs. a best-in-breed approach

The panelists unanimously agreed that the trend in the AI tooling stack is towards a more heterogeneous, “best-in-breed” approach that combines open source, custom software, and various vendor solutions — rather than one tool that does it all. 

According to Daniel, “The more valuable and the more important the project is, the more you really want to have the best component for each bit.” In traditional software, that means combining different solutions for CI/CD, testing, monitoring, and observability, and the same logic applies for ML. After all, “You can’t build the end-to-end solution and expect to succeed in an industry that’s evolving so quickly. You need to be able to switch out parts of the car while you’re driving it, because the things that were popular two years ago are not today.”

Components of an ML tooling stack are increasingly outsourced rather than built in-house. The task for companies now is to pick high-quality tools geared specifically to their domain and use case. “For companies that are serious from the get-go,” said Burina, “they should really consider best-of-breed solutions because that’s going to be their competitive advantage.”

Stakeholders for AI

Who are the different personas that might care about a model and its outputs? Data scientists and engineers are one group, of course. Product managers care about how well a model fits the business strategy and purpose. Legal teams, regulators, and end users may all need access to this information as well. And C-suite leadership often wants a high-level view of how models are performing.

As Skomoroch put it, “There’s a whole world of people who don’t really understand what you [data scientists] do day to day, and the whole team is kind of a black box to them. So there’s a side benefit to having something like Fiddler, having this observability, and monitoring happening, which is they have something to look at where they can see: what’s the progress? What’s happening with our machine learning models?” Gupta observed that having ML monitoring and explainability provides “a shared understanding of the levers and tradeoffs — and having a conversation at that level of abstraction goes a long way.”

Algorithmic bias and fairness

One of the most important use cases for explainable AI and monitoring, and one that stakeholders have a shared interest in, is preventing issues with bias and fairness. “Unwanted consequences can creep in at any part of the pipeline,” said Burina. “Companies must think about it holistically, from design to development, and they really should have continuous monitoring for bias and fairness.”

Continuous monitoring can help teams “trust but verify,” according to Gupta. With many people working asynchronously to improve the collective performance of an AI system, bias can creep in over time, even though no single person controls how the system behaves at the macro level. This is where explainable monitoring can really help.
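One way to picture continuous bias monitoring, as a sketch rather than a prescribed method, is to track a simple fairness metric per scoring window, such as the gap in positive-prediction rates between two groups, and alert when it exceeds a threshold. The metric choice, group labels, and threshold below are assumptions.

```python
# Illustrative continuous fairness check (metric and threshold are assumed):
# track the demographic parity gap -- the difference in positive-prediction
# rates between two groups -- for each scoring window and flag large gaps.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between groups A and B."""
    rate_a = predictions[group == "A"].mean()
    rate_b = predictions[group == "B"].mean()
    return abs(float(rate_a - rate_b))

def fairness_alert(predictions, group, max_gap: float = 0.1) -> bool:
    return demographic_parity_gap(np.asarray(predictions), np.asarray(group)) > max_gap

# Example window: binary approve/deny decisions with a group label per row.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
if fairness_alert(preds, groups):
    print("Parity gap exceeds threshold -- investigate before the next release.")
```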

Who is ultimately responsible for making sure AI isn’t biased? After all, as Daniel noted, “Just because it’s in an AI black box doesn’t mean nobody’s responsible. Somebody still needs to be responsible.” In Skomoroch’s opinion, having a dedicated role like a chief data science officer or director focused on AI ethics can be a good choice. This person can make sure that nothing falls through the cracks when work moves from one team to the next. Burina also proposed a new industry-wide role of “model quality scientist: someone who would challenge the model, check it for robustness, including anything potentially adversarial….someone who would approve deployment, really making it a more rigorous process.”

At Fiddler, we’ve heard about bias concerns from many of the customers we’ve engaged with. In response, we’ve been putting together a high-level framework that can surface where bias might exist and let customers act on those insights: retraining a model, rebalancing a dataset, or monitoring continuously over time and adjusting their applications accordingly.

Interested in listening to the full panel discussion? You can watch the live recording here.

Panelists:

Peter Skomoroch, Machine Learning Advisor

Abhishek Gupta, Engineering Lead, Facebook; ex-Head of Engineering, Hired 

Natalia Burina, AI Product Leader, Facebook

Kenny Daniel, Co-Founder and CTO, Algorithmia

Moderated by Rob Harrell, Senior Product Manager, Fiddler