Forward - Building Trust Into AI With Model Monitoring, Explainable AI, and Analytics


In this podcast episode, Krishna Gade, the founder and CEO of Fiddler, discusses the importance of building trust in artificial intelligence (AI) systems. He emphasizes that trust is crucial in ensuring that AI systems are used effectively and safely.

Gade explains that one of the key ways to build trust in AI is model monitoring: tracking the performance and behavior of AI models in real time. This can help detect issues such as bias, errors, or anomalies that could undermine the reliability of the AI system.
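
To make that idea concrete, here is a minimal monitoring sketch in Python. It is not Fiddler's product API; it simply compares the distribution of live prediction scores against a baseline captured at deployment time and flags drift when they diverge. The KS test, the 0.01 threshold, and the synthetic scores are illustrative assumptions.

# A minimal sketch of one kind of model monitoring: flag drift when the
# distribution of live prediction scores departs from a baseline.
# Threshold, window, and data below are illustrative, not Fiddler's API.
import numpy as np
from scipy.stats import ks_2samp

def check_prediction_drift(baseline_scores, live_scores, alpha=0.01):
    """Flag drift if live scores diverge from the baseline distribution."""
    statistic, p_value = ks_2samp(baseline_scores, live_scores)
    return {"ks_statistic": statistic, "p_value": p_value, "drifted": p_value < alpha}

# Example: baseline from validation time, live scores from the last hour.
rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, size=5000)   # scores observed at deployment
live = rng.beta(2, 3, size=1000)       # recent scores, shifted upward
report = check_prediction_drift(baseline, live)
if report["drifted"]:
    print("Alert: prediction distribution drift detected", report)

In practice, a monitoring system would run checks like this continuously, per feature and per time window, and route alerts to the team that owns the model.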

He also discusses explainable AI (XAI): designing AI systems that are transparent and can provide explanations for their decision-making. XAI can help increase trust by giving users a better understanding of how the system works and why it makes certain decisions.
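
As one tangible illustration of the idea (again, not Fiddler's explanation method), the hand-rolled perturbation sketch below scores a single prediction, swaps each feature with its baseline average, and reports the resulting shift in the model's output as that feature's effect. The logistic-regression model and synthetic data are placeholders.

# A simple perturbation-based local explanation for one prediction:
# replace each feature with its baseline mean and measure how much
# the model's score moves. Illustrative heuristic only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def local_feature_effects(model, x, baseline_means):
    """Return per-feature prediction shifts for one input row."""
    base_prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    effects = {}
    for i, mean_value in enumerate(baseline_means):
        perturbed = x.copy()
        perturbed[i] = mean_value
        new_prob = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
        effects[f"feature_{i}"] = base_prob - new_prob   # positive = pushed the score up
    return effects

# Example with synthetic data standing in for a real credit or fraud model.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)
print(local_feature_effects(model, X[0], X.mean(axis=0)))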

Gade also touches on the importance of analytics in building trust in AI systems. He explains that analytics can help organizations gain insights into how their AI systems are performing and identify areas for improvement.

Overall, Gade stresses that building trust in AI is an ongoing process that requires continuous monitoring, transparency, and communication. By prioritizing trust, organizations can ensure that their AI systems are effective, reliable, and safe for all users.

Hosted by: Forward

Video transcript