
XAI Summit Speaker Spotlight: How Can We Increase Trust in AI?

Fiddler is hosting our third annual Explainable AI Summit on October 21st, bringing together industry leaders, researchers, and Responsible AI experts to discuss the future of Explainable AI. This year’s conference will be entirely virtual, and we’re using the opportunity to take it global, with speakers zooming in from across the country and around the world.

Reserve Your Seat

Before the Summit, we asked our panelists to fill us in on how they got where they are today, their views on the key challenges and opportunities in AI usage, and how to ensure AI is deployed in a responsible, trustworthy manner. In our first post, we covered the importance of Responsible AI. In this spotlight, we’re covering what our panelists view as some of the biggest challenges in AI usage today, and how they believe we can increase trust in AI.

Biggest challenges in AI today

As Kenny Daniel, co-founder and CTO at Algorithmia, succinctly puts it, “AI is going to be as important as any of the great inventions of history, whether that be the assembly line or the internet itself. We’ve gone from automating labor in a field to automating labor of the mind—that’s what AI is.” But when a new technology is so far-reaching and interconnected, the challenges are naturally multifaceted and complex. Implementing AI means that challenges surface within existing systems and frameworks, among the individuals who are involved directly or indirectly, and within the regulatory space that must adapt to the new complexities.

“There’s a gap between the things that are happening in the lab and the things that are actually happening in production, in real products and in the real world,” Daniel continues. “Bridging that gap is in many ways one of the biggest challenges to adoption in the industry.” A big part of this comes down to the human element: taking into consideration the various stakeholders involved and figuring out how to provide them with the information they need. Michelle Allade, Head of Bank Model Risk Management at Alliance Data Card Services, explains that a major challenge is “setting the right expectation for the AI application before it is conceived and ultimately getting users, partners and the society to trust that the application is working as intended.”

Merve Hickok, AI Ethicist and Founder of AIethicist.org, lays out the challenges for some of these stakeholders: “For individuals, the challenges are not being fully aware of what big data and AI mean for them and their lives, and not having the means to challenge these tools when they are aware. For companies, it is about balancing the risks of AI with their corporate priorities of revenue, efficiency and deadlines. For governments, the challenges are whether to regulate AI, how to regulate it and still be relevant in the world, and how to improve the policy and regulations.”

From a regulatory standpoint, many issues can arise that lead to legal consequences. Patrick Hall, visiting faculty at the George Washington University, Principal Scientist at bnh.ai, and Advisor to H2O.ai, says that the most common incidents he has observed in the Partnership on AI incident database involve algorithmic discrimination and disregard for data privacy in AI applications, which result in legal problems for AI operators and privacy harms for consumers and the general public. Lofred Madzou, Artificial Intelligence Project Lead at the World Economic Forum, shares that colleagues at the WEF have been tracking AI regulation initiatives globally for several years and have identified the following common themes: accountability, fairness, human oversight, privacy, safety/security, and transparency. “Obviously, these themes differ across countries,” Madzou says. “I would like to stress that these are socio-technical challenges, in which social context, multi-stakeholder collaboration, and citizen and user engagement are essential to success.”

So how do we increase trust in AI? 

The challenges with implementing and using AI today are clearly varied and multifaceted. But many of these challenges come down to trust: building trustworthy artificial intelligence in systems and frameworks, earning the trust of the individuals using and impacted by AI, and enshrining trust through governance and regulation. 

Developing systems and frameworks 

“Trust will be increased when transparency, explainability and accountability are embedded in all the decisions taken about the development & implementation of AI tools,” says Hickok. Daniel agrees: “Better understanding is at the core. The more you understand, the less uncertainty there is, the less fear there is, and the more AI will be trusted and people will be able to use it with confidence. Some of this just happens over time—it’s a very young field. But it’s also about developing systems. So take a software development team. How do they decide who can deploy what? How do they decide who’s responsible for writing code, and signing off on it, and shipping it? There are all these processes that have been developed in traditional software over the decades for delivering trusted, reliable, resilient code. In that sense, it’s not just the code that’s written, but also the people and the organization and the process that enforce that. So some of the things that we do in other fields absolutely transfer over into AI.”

Building in trustworthy and responsible regulation 

“Adopting a transparent governance of AI” is essential, says Victor Storchan, Senior Machine Learning Engineer at JPMorgan Chase & Co. “Ethics and principles have been widely discussed and debated. Now it is key to adopt a framework for fairness, transparency, privacy…regulation has to be more specific.” Madzou, who is currently leading a project on this at the WEF, adds, “We need thoughtful regulation to strengthen trust in democratic institutions and their ability to protect consumers and citizens alike.” He continues, “Improving model understanding and audit capabilities to assess compliance with legal and industry requirements is key. Companies able to provide such services are going to quickly grow in the near future.”

Keeping humans at the center 

Storchan explains that a crucial element is “engaging more people (especially people who will get affected) when building systems and collecting data and democratizing AI to a wider audience.”

“Trust in AI can be increased through educating stakeholders on what the AI application actually does. This can only be achieved when there is a clear understanding and transparency around what is known about the application, what is unclear, and the controls to mitigate potential risks,” says Allade.

“We’re social beings and as such we tend to approach trust through a relational lens,” says Madzou. “Most consumers and citizens are still somehow confused about AI. They struggle to discern hype from substance, and various controversies (e.g. misuse of facial recognition by law enforcement) have fueled public concerns about AI. This won’t be addressed only by improved audit capabilities, because what matters to them is that when something goes wrong, the organizations involved are held accountable.”

This is just the tip of the iceberg: join us for the Explainable AI Summit on October 21 to hear more from our full lineup of speakers.

Reserve Your Seat