Virtual fireside chat

AI Explained: Model Monitoring Best Practices IRL

Tuesday, May 24, 2022
Registration is now closed. Please check back later for the recording.

Whether identifying drift and performance issues or pinpointing outliers, model monitoring is a must-have for any organization leveraging machine learning. But do you know the best practices industry leaders use in real life?

Watch the video to learn:

  • Best practices for model monitoring, including real-life use cases
  • Why model monitoring is now a must-have
  • How model monitoring fits into MLOps workflows

AI Explained is our new AMA series featuring experts on the most pressing issues facing AI and ML teams.

Can’t attend live? Recordings will be available to all registrants after the event.

Speakers
Krishnaram Kenthapadi
Chief AI Officer & Scientist, Fiddler AI

Prior to Fiddler, he was a Principal Scientist at Amazon AWS AI and LinkedIn AI, where he led the fairness, explainability, privacy, and model understanding initiatives. Krishnaram received his Ph.D. in Computer Science from Stanford University in 2006. He serves regularly on the program committees of KDD, WWW, WSDM, and related conferences. His work has been recognized through awards at NAACL, WWW, SODA, CIKM, the ICML AutoML workshop, and Microsoft’s AI/ML conference (MLADS). He has published 50+ papers with 4,500+ citations, and has filed 150+ patents (70 granted).

Hima Lakkaraju
Assistant Professor, Harvard University

She works with various domain experts in policy and healthcare to understand the real-world implications of explainable and fair ML. Hima has been named one of the world’s top innovators under 35 by both MIT Tech Review and Vanity Fair. Her research has received best paper awards at the SIAM International Conference on Data Mining (SDM) and INFORMS, and grants from NSF, Google, Amazon, and Bayer. More recently, she co-founded the Trustworthy ML Initiative to enable easy access to resources on trustworthy ML and to build a community of researchers and practitioners working on the topic.