Virtual fireside chat

AI Explained: Rethinking Model Monitoring

Thursday, July 21, 2022
Registration is now closed. Please check back later for the recording.

Model monitoring and debugging are difficult in real-world ML workflows due to lack of ground-truth labels, alert fatigue, and organizational challenges. How can we address these issues today, and what do ideal solutions look like?

Watch the webinar to learn:

  • The current mess plaguing ML workflows
  • Emerging research and solutions for model monitoring and debugging
  • How responsible AI incorporates privacy, fairness, explainability, and model monitoring

AI Explained is our new AMA series featuring experts on the most pressing issues facing AI and ML teams.

Can’t attend live? Recordings will be available to all registrants after the event.

Speakers
Shreya Shankar
PhD Student, UC Berkeley

Shreya Shankar is a computer scientist living in the Bay Area. She's interested in building systems to operationalize machine learning workflows. Shreya's research focus is on end-to-end observability for ML systems, particularly in the context of heterogeneous stacks of tools. Currently, Shreya is doing her Ph.D. in the RISE lab at UC Berkeley. Previously, she was the first ML engineer at Viaduct, did research at Google Brain, and completed her BS and MS in computer science at Stanford University.

Krishnaram Kenthapadi
Chief AI Officer & Scientist, Fiddler AI

Prior to Fiddler, he was a Principal Scientist at Amazon AWS AI and at LinkedIn AI, where he led the fairness, explainability, privacy, and model understanding initiatives. Krishnaram received his Ph.D. in Computer Science from Stanford University in 2006. He serves regularly on the program committees of KDD, WWW, WSDM, and related conferences. His work has been recognized through awards at NAACL, WWW, SODA, CIKM, the ICML AutoML workshop, and Microsoft's AI/ML conference (MLADS). He has published 50+ papers with 4,500+ citations, and has filed 150+ patents (70 granted).
