You could say Scott Zoldi knows a thing or two about Responsible AI. As Chief Analytics Officer at FICO, a company that powers billions of AI-driven decisions in production, Scott has authored over 100 patents in areas like ethics, interpretability, and explainability. One of his most recent projects is a new industry report on Responsible AI.
“Building models without a framework around Responsible AI and ethics could have a big impact on an organization's revenue, their customers, and also their brand,” Scott said. With more regulations coming soon, including a recent proposal from the EU, we spoke with Scott about how AI needs to grow up fast — and what organizations can do about it. Listen to the full podcast here or read the highlights of our conversation below.
Scott identified four major components of Responsible AI:
One challenge of implementing Responsible AI is the complexity of ML systems. “We surveyed 100 Chief Analytics Officers and Chief AI Officers and Chief Data Officers and about 65% said they can't explain how their model behaves,” Scott said. This is partly an education problem, but it also stems from companies deploying overly complicated models because they feel pressured to use the latest technology.
Another challenge is the lack of monitoring. “Only 20% of these CIOs and Chief AI Officers are monitoring models for performance and ethics,” Scott said. This is due to multiple factors: a lack of tooling, a lack of investment and company culture around Responsible AI, and a lack of model explainability to know what to monitor.
Practitioners should be thinking about explainability long before models go into production. “My focus is really on ensuring that when we develop models, we can understand what drives these models, in particular latent features,” Scott said. This lets teams design models that avoid bias against protected classes and constrain model behavior so it's easier for humans to understand.
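One way to make that kind of constraint concrete, sketched below with scikit-learn, synthetic data, and hypothetical feature names (none of this comes from the conversation), is to impose monotonic constraints on a gradient-boosted model and then check which inputs actually drive its predictions.

```python
# A minimal sketch, assuming scikit-learn and synthetic data, of constraining a
# model so its behavior is easier to reason about. Feature names are hypothetical.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical tabular inputs: utilization, payment_history, account_age.
X = rng.normal(size=(5000, 3))
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.1 * rng.normal(size=5000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Encode domain knowledge: the score may only rise with utilization (+1),
# only fall with payment_history (-1), and is unconstrained for account_age (0).
model = HistGradientBoostingClassifier(monotonic_cst=[1, -1, 0], random_state=0)
model.fit(X_train, y_train)

# Verify which inputs actually drive the constrained model.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in zip(["utilization", "payment_history", "account_age"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

With constraints like these, a reviewer knows the score can only move in one direction as a given input changes, which makes both bias reviews and explanations to customers far more tractable.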
When models are in production, Scott explained, teams should know the metrics associated with their most important features so they can see how those features shift over time. Monitoring sub-slices, or segments, of the data is also essential for finding outliers. And teams should set informed thresholds that tell them when to raise an alarm about data drift.
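The sketch below shows one common way to implement that kind of check. The Population Stability Index (PSI), the 0.2 alert threshold, and the feature and segment names are illustrative assumptions, not anything prescribed by Scott or FICO.

```python
# A minimal sketch of feature-level drift monitoring with the Population
# Stability Index (PSI), overall and per data segment. The threshold,
# feature names, and segment column below are illustrative assumptions.
import numpy as np
import pandas as pd


def psi(baseline, current, bins=10):
    """Population Stability Index between a baseline and a current sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, flooring at a small value to avoid log(0).
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))


def drift_report(baseline_df, current_df, features, segment_col, threshold=0.2):
    """Flag features whose distributions have shifted, overall and per segment."""
    for feature in features:
        overall = psi(baseline_df[feature].to_numpy(), current_df[feature].to_numpy())
        flag = "  <-- ALERT" if overall > threshold else ""
        print(f"{feature}: PSI={overall:.3f}{flag}")
        # Drill into sub-slices of the data, where drift often hides.
        for segment, seg_df in current_df.groupby(segment_col):
            seg_psi = psi(baseline_df[feature].to_numpy(), seg_df[feature].to_numpy())
            if seg_psi > threshold:
                print(f"  segment={segment!r}: PSI={seg_psi:.3f}  <-- ALERT")


# Usage (with hypothetical DataFrames):
# drift_report(train_df, last_week_df,
#              features=["utilization", "payment_history"], segment_col="region")
```

PSI is only one possible drift metric; the point, per the conversation, is to have named metrics for the most important features, segment-level views, and thresholds agreed on before the model ships.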
Lastly, Responsible AI can mean starting with a simpler model design. Complex models are harder to explain and more prone to degradation as data drifts over time.
Here’s what Scott believes organizations should do going forward:
To ensure organizations act responsibly when their models affect customers, it’s important for AI systems to be thoughtfully designed and monitored. Fiddler is an end-to-end monitoring and explainability platform that helps teams build trust with AI. You can learn more and request a demo of Fiddler.