AI Explainability and Monitoring Survey 2020

As adoption advances, Data Science and AI/ML teams and business leaders must in turn expand their focus to the broader implications of AI systems: are these systems trustworthy, transparent, and responsible? Are outcomes reliable over time? Is there bias built into models? Do models stand up to regulatory and compliance requirements? From the ethical, regulatory, end-user, and model-developer perspectives, there is a tremendous need for explainability methods and the ability to continuously monitor models.

An overarching question that arises for all of these stakeholders is: why did the model make this prediction? The question matters to developers debugging (mis-)predictions, to regulators assessing the robustness and fairness of the model, and to end-users deciding whether they can trust the model.

To better understand the state of AI Explainability and Monitoring in 2020, we conducted a survey across a range of organizations, from publicly held corporations to privately held, emerging tech companies, representing industries including software, consulting, and banking & financial services, among others. The majority of respondents came from Data Science functions, including Data Scientists, Chief Data Scientists, and Heads of Data Science.

In our new market report, we present the results of the survey and share our take on the implications for the future of AI adoption within organizations. Learn more about AI monitoring today with help from Fiddler AI.