October 21, 2020
The Explainable AI Summit brought together industry leaders, researchers, and Responsible AI experts to discuss the future of Explainable AI. Attendees learned about the top-of-mind issues that leaders face when implementing AI, and heard from those shaping the field on where things are headed. #XAISummit2020 has passed, but we’ll be sharing the content for you to view on-demand soon.
We closed out the Explainable AI Summit with a special screening of Shalini Kantayya’s Coded Bias, which premiered at the 2020 Sundance Film Festival.
The film explores the fallout of MIT Media Lab researcher Joy Buolamwini’s startling discovery that facial recognition does not accurately see dark-skinned faces and women, and her journey to push for the first-ever U.S. legislation governing bias in the algorithms that impact us all.
Discussions on top-of-mind issues in AI Explainability and Responsible AI
Screening of Coded Bias & conversation with director Shalini Kantayya
All times in PT
Request on-demand access
9.00 - 9.15am
Krishna Gade, Founder and CEO, Fiddler
Amit Paka, Founder and CPO, Fiddler
9.15 - 9.40am
On the current state of AI and democratizing its use
Karen Hao, Senior AI Reporter, MIT Technology Review
In conversation with:
Anusha Sethuraman, Head of Marketing, Fiddler
9.45 - 10.30am
AI is increasingly being applied to business-critical use cases across industries. As AI moves from the fringe to the mainstream, the importance of deploying it responsibly has never been greater. With businesses, consumers, and regulators calling for more transparency and accountability in AI solutions, our panelists will discuss how to provide more trustworthy, transparent, responsible AI at every stage of the AI lifecycle.
Arnobio Morelix, Research & Data Science Leader
Manasi Joshi, Director of Software Engineering, Google
Merve Hickok, AI Ethicist & Founder, AIethicist.org
Bill Franks, Chief Analytics Officer, International Institute for Analytics
Lofred Madzou, Project Lead, Artificial Intelligence, World Economic Forum
Moderated by Anusha Sethuraman, Head of Marketing, Fiddler
10.30 - 10.40am
10.40 - 11.10am
Captum, Facebook AI’s open source model interpretability library for PyTorch, is now interoperable with Fiddler’s Explainable AI platform. Our panel of experts from Captum and Fiddler will discuss how this partnership is pushing the boundaries of Explainable AI, helping the data science community improve model understanding and its applications, and promoting the use of Explainable AI in the ML workflow.
Carlos Araya, Product Manager, Facebook AI Applied Research
Narine Kokhlikyan, Research Scientist, Facebook
Aalok Shanbhag, Data Scientist II, Fiddler
Moderated by Joshua Rubin, Data Science Tech Lead, Fiddler
11.15am - 12.00pm
Training and deploying ML models is relatively fast and cheap, but maintaining, monitoring, and governing them over time is difficult and expensive. An explainable ML monitoring system extends traditional monitoring to provide deep model insights with actionable steps. Our panelists will discuss ways to increase transparency and actionability across the entire AI lifecycle using explainable monitoring, allowing for better understanding of problem drivers, root-cause analysis, and model analysis throughout AI deployment.
Peter Skomoroch, Machine Learning Advisor
Abhishek Gupta, Head of Engineering, Hired, Inc.
Natalia Burina, AI Product Leader, Facebook
Kenny Daniel, Co-Founder and CTO, Algorithmia
Moderated by Rob Harrell, Senior Product Manager, Fiddler
12.00 - 12.15pm
12.15 - 1.00pm
Financial services is a highly regulated industry, which makes rolling out AI within it risky: organizations not only have to navigate a new technology, but also the specific security and regulatory requirements that come with applying AI to sensitive data. Our panelists will discuss the growing use cases for AI within financial services, with a lens on the unique regulatory and compliance considerations involved and the areas of opportunity as the field evolves.
Patrick Hall, Principal Scientist, bnh.ai and Advisor to H2O.ai
Jon Hill, Professor of Model Risk Management, NYU Tandon School of Financial Risk Engineering
Michelle Allade, Head of Bank Model Risk Management, Alliance Data Card Services
Alexander Izydorczyk, Head of Data Science, Coatue Management
Pavan Wadhwa, Managing Director, JPMorgan Chase & Co.
Moderated by Krishna Gade, Founder and CEO, Fiddler
1.00 - 1.45pm
Explainability is the most effective way to ensure AI solutions are transparent, accountable, responsible, fair, and ethical across use cases and industries. When you know why your models are doing something, you have the power to make them better while also sharing this knowledge to empower your entire organization. In this panel discussion, industry and research experts will shed light on the state of Explainable AI today, and key considerations to ensure success moving forward.
Sara Hooker, Research Scholar, Google Brain
Amirata Ghorbani, PhD Candidate in A.I., Stanford University
Victor Storchan, Senior Machine Learning Engineer, JPMorgan Chase & Co.
Pradeep Natarajan, Principal Scientist, Amazon Alexa AI
Moderated by Amit Paka, Founder and CPO, Fiddler
1.50 - 2.20pm
On AI’s role in the creative world, its opportunities for enhancing the creative process, and how leveraging AI responsibly can unleash the full potential of the creative mind
Scott Belsky, Chief Product Officer, EVP - Creative Cloud, Adobe
In conversation with Krishna Gade, CEO, Fiddler
2.20 - 2.30pm
2.30 - 5.00pm
Coded Bias, a documentary film focusing on MIT researcher Joy Buolamwini’s ground-breaking investigation into facial recognition technology’s troubling racial bias, premiered at the Sundance Film Festival in February. To close out the Explainable AI Summit, Fiddler is hosting a special screening of the film, followed by a Q&A with the film’s director, Shalini Kantayya.
Shalini Kantayya, Director, Coded Bias
2.30 - 3.00pm - Opening remarks from Director Shalini Kantayya
3.00 - 4.30pm - Film Screening
4.30 - 5.00pm - Q&A with Kantayya
Modern society sits at the intersection of two crucial questions: What does it mean when artificial intelligence increasingly governs our liberties? And what are the consequences for the people AI is biased against? When MIT Media Lab researcher Joy Buolamwini discovers that most facial-recognition software does not accurately identify darker-skinned faces and the faces of women, she delves into an investigation of widespread bias in algorithms. As it turns out, artificial intelligence is not neutral, and women are leading the charge to ensure our civil rights are protected.
Coded Bias, a documentary film focusing on Buolamwini’s ground-breaking research, premiered at the 2020 Sundance Film Festival. We closed out the Explainable AI Summit with a special screening of the film, followed by a Q&A with the film’s director, Shalini Kantayya.