Data Council 2019: Explaining AI - Putting Theory into Practice

Table of contents

In recent years, model interpretability has become a hot area of research in machine learning, mainly due to the proliferation of ML in products and the resulting social implications. At Fiddler Labs, we're building a general-purpose Explainable AI Engine to help ML practitioners better trust and understand their models at scale.

In this talk, we will cover some of the lessons from our experience working with various model-explanation algorithms across business domains. Through the lens of two case studies, we will discuss the theory, application, and practical guidelines for effectively using explainability techniques to generate value in your data science lifecycle.

Video transcript