Responsible AI Podcast with Scott Zoldi — "It's time for AI to grow up"

You could say Scott Zoldi knows a thing or two about Responsible AI. As Chief Analytics Officer at FICO, a company that powers billions of AI-driven decisions in production, Scott has authored over 100 patents in areas like ethics, interpretability, and explainability. One of his most recent projects, a new industry report on Responsible AI, found that:

  • 65% of respondents’ companies can’t explain how specific AI model decisions or predictions are made
  • 73% have struggled to get executive support for prioritizing AI ethics and Responsible AI practices
  • Only 20% actively monitor their models in production for fairness and ethics

“Building models without a framework around Responsible AI and ethics could have a big impact on an organization's revenue, their customers, and also their brand,” Scott said. With more regulations coming soon, including a recent proposal from the EU, we spoke with Scott about how AI needs to grow up fast — and what organizations can do about it. Listen to the full podcast here or read the highlights of our conversation below. 

What is Responsible AI?

Scott identified four major components of Responsible AI:

  1. Robust AI: Understanding the data deeply, doing stability testing, predicting causes of data drift, and anticipating how the model might be used differently from its intended purpose.
  2. Explainable AI: Knowing what’s driving the model, both while developing it and at prediction time, in order to create helpful, actionable explanations for end users.
  3. Ethical AI: Making a concerted effort to remove bias from your models, both in your data and in your model’s learned features.
  4. Auditable AI: Efficiently and proactively monitoring your models.

The challenges of implementing Responsible AI

One challenge of implementing Responsible AI is the complexity of ML systems. “We surveyed 100 Chief Analytics Officers and Chief AI Officers and Chief Data Officers and about 65% said they can't explain how their model behaves,” Scott said. This is an education problem, but it’s also due to companies using overly complicated models because they feel pressured to have the latest technology. 

Another challenge is the lack of monitoring. “Only 20% of these CIOs and Chief AI Officers are monitoring models for performance and ethics,” Scott said. This is due to multiple factors: a lack of tooling, a lack of investment and company culture around Responsible AI, and a lack of the model explainability needed to know what to monitor.

Strategies for implementing Responsible AI

Practitioners should be thinking about explainability long before models go into production. “My focus is really on ensuring that when we develop models, we can understand what drives these models, in particular latent features,” Scott said. This lets teams design models that avoid bias against protected classes, and constrain model behavior so it’s easier for humans to understand.
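To make that concrete, here’s a minimal sketch of one way to bake constraints into a model at design time. It is not FICO’s method; it assumes a credit-style dataset with hypothetical feature names, leaves protected attributes out of the inputs, and uses XGBoost’s monotone_constraints option so each feature can only push the score in a known direction.

```python
# A minimal sketch (hypothetical features and synthetic data, not FICO's method)
# of constraining a model at design time so its behavior is easier to reason about.
import numpy as np
import pandas as pd
import xgboost as xgb

rng = np.random.default_rng(0)
features = ["utilization", "payment_history_months", "recent_inquiries"]  # protected attributes excluded
X_train = pd.DataFrame(rng.normal(size=(1000, 3)), columns=features)
y_train = (X_train["utilization"] - X_train["payment_history_months"]
           + rng.normal(size=1000) > 0).astype(int)

# +1: the score may only rise with the feature, -1: only fall, so each feature's
# direction of influence is fixed and human-checkable.
model = xgb.XGBClassifier(
    max_depth=3,              # shallow trees keep interactions inspectable
    n_estimators=100,
    monotone_constraints="(1,-1,1)",
)
model.fit(X_train, y_train)

# Review what actually drives the model before it ships.
print(model.get_booster().get_score(importance_type="gain"))
```

Because every feature’s direction of influence is fixed up front, reviewers can check the model’s behavior against domain expectations before it ever reaches production.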

When models are in production, Scott explained, teams should know the metrics associated with their most important features in order to see how they’re shifting over time. Monitoring sub-slices or segments of the data is also essential in order to find outliers. And teams should set informed thresholds to know when to raise an alarm about data drift.
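As a simplified illustration of what that monitoring can look like, the sketch below computes a Population Stability Index (PSI) for each important feature, both overall and within data segments, and flags anything that crosses an alert threshold. The 0.2 cutoff and the column names are assumptions for the example, not figures from the conversation.

```python
# A simplified drift-monitoring sketch: PSI per feature and per segment, with an
# illustrative alert threshold. Column names and the 0.2 cutoff are assumptions.
import numpy as np
import pandas as pd

def psi(baseline: pd.Series, current: pd.Series, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a current distribution."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base, _ = np.histogram(baseline, bins=edges)
    curr, _ = np.histogram(current, bins=edges)
    base = np.clip(base / base.sum(), 1e-6, None)
    curr = np.clip(curr / curr.sum(), 1e-6, None)
    return float(np.sum((curr - base) * np.log(curr / base)))

PSI_ALARM = 0.2  # common rule-of-thumb cutoff for "significant" drift

def drift_alerts(baseline_df, prod_df, features, segment_col=None):
    """Flag features whose distribution has shifted, overall and within segments."""
    alerts = []
    for feat in features:
        if psi(baseline_df[feat], prod_df[feat]) > PSI_ALARM:
            alerts.append((feat, "overall"))
        if segment_col is not None:
            # Compare each production segment to the overall baseline (a simplification).
            for seg, seg_df in prod_df.groupby(segment_col):
                if psi(baseline_df[feat], seg_df[feat]) > PSI_ALARM:
                    alerts.append((feat, seg))
    return alerts
```

In practice, teams tune thresholds per feature based on historical variability and route alerts like these into whatever monitoring tooling they already use.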

Lastly, Responsible AI can mean starting with a simpler model design. Complex models are harder to explain and more prone to degradation as data drifts over time.

3 things teams can do to prepare for the future of AI

Here’s what Scott believes organizations should do going forward: 

  1. Recognize that a model development standard, set at the company level, is essential.
  2. Commit to enforcing that standard. Document your success criteria and your progress so that everyone can see where the team stands. Scott’s doing research into model development governance based on blockchain, so that when someone signs off on a model, their name goes into a permanent, open record (the sketch after this list illustrates the idea).
  3. Focus on production and how models can provide business value. This might require a mindset shift for data scientists. “If you’re in an organization where you're building the model, you need to see yourself as part of the success of the production environment,” Scott said.
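As a toy illustration of that governance idea (not Scott’s research system), the snippet below chains each model sign-off to the previous one with a hash, so the record of who approved what becomes tamper-evident:

```python
# A toy illustration of a blockchain-style sign-off record: each approval is
# chained to the previous entry by a hash, so later edits are detectable.
# Model and approver names are hypothetical.
import hashlib
import json
import time

def add_signoff(ledger: list, model_id: str, approver: str) -> dict:
    """Append a sign-off entry whose hash covers the previous entry."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "model_id": model_id,
        "approver": approver,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

ledger = []
add_signoff(ledger, model_id="credit_risk_v3", approver="scott.zoldi")
```

Deleting or rewriting any earlier entry changes its hash and breaks the chain, which is the tamper-evidence a blockchain-backed ledger provides at larger scale.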

To ensure organizations act responsibly when their models affect customers, it’s important for AI systems to be thoughtfully designed and monitored. Fiddler is an end-to-end monitoring and explainability platform that helps teams build trust with AI. Request a demo.