Whether identifying drift and performance issues or pinpointing outliers, model monitoring is a must-have for any organization leveraging machine learning. But do you know the best practices industry leaders use in real life?
Register for AI Explained to find out.
AI Explained is our new AMA series featuring experts on the most pressing issues facing AI and ML teams.
Can’t attend live? You should still register! Recordings will be available to all registrants after the event.
Prior to Fiddler, Krishnaram was a Principal Scientist at Amazon AWS AI and at LinkedIn AI, where he led initiatives on fairness, explainability, privacy, and model understanding. He received his Ph.D. in Computer Science from Stanford University in 2006 and serves regularly on the program committees of KDD, WWW, WSDM, and related conferences. His work has been recognized with awards at NAACL, WWW, SODA, CIKM, the ICML AutoML workshop, and Microsoft’s AI/ML conference (MLADS). He has published 50+ papers with 4500+ citations and filed 150+ patents (70 granted).
Hima works with domain experts in policy and healthcare to understand the real-world implications of explainable and fair ML. She has been named one of the world’s top innovators under 35 by both MIT Tech Review and Vanity Fair. Her research has received best paper awards at the SIAM International Conference on Data Mining (SDM) and INFORMS, and grants from the NSF, Google, Amazon, and Bayer. More recently, she co-founded the Trustworthy ML Initiative to provide easy access to resources on trustworthy ML and to build a community of researchers and practitioners working on the topic.