This article was previously posted on Forbes.
———
As artificial intelligence (AI) adoption grows, so do the risks of today's typical black-box AI. These risks include customer mistrust, brand risk and compliance risk. As recently as last month, concerns that AI-driven facial recognition was biased against certain demographics resulted in a PR backlash.
With customer protection in mind, regulators are staying ahead of this technology and introducing the first wave of AI regulations aimed at transparency. This is a step in the right direction: it helps customers trust AI-driven experiences while enabling businesses to reap the benefits of AI adoption.
This first group of regulations concerns a customer's ability to understand an AI-driven, automated decision. This is especially important for high-stakes decisions like lending, insurance and health care, but it also applies to personalization, recommendations and the like.
The General Data Protection Regulation (GDPR), specifically Articles 13 and 22, was the first regulation to address automated decision-making, stating that anyone subject to an automated decision has the right to be informed and the right to a meaningful explanation. According to clause 2(f) of Article 13:
"[Information about] the existence of automated decision-making, including profiling ... and ... meaningful information about the logic involved [is needed] to ensure fair and transparent processing."
One of the most frequently asked questions is what the “right to explanation” means in the context of AI. Does “meaningful information about the logic involved” mean that companies must disclose the actual algorithm or source code? Would explaining the mechanics of the algorithm really help the individuals affected? It might make more sense to provide information on what inputs were used and how they influenced the output of the algorithm.
For example, if a loan application or insurance claim is denied by an algorithm or machine learning model, then under Articles 13 and 22, the loan or insurance officer would need to provide specific details about how the applicant's data influenced the decision, or at least the general parameters of the algorithm or model used to make it.
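To make this concrete, here is a minimal sketch of what such an input-level explanation could look like for a simple loan model. The feature names, data and model are hypothetical stand-ins, not any regulator-mandated format; for a linear model, each feature's contribution to the decision is simply its coefficient times its value.

```python
# A minimal sketch of reporting which inputs drove an automated loan
# decision. Feature names and data are hypothetical, for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_to_income", "credit_history_years"]

# Synthetic training data standing in for historical applications.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # 1 = approved

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the decision's
# log-odds is coefficient * feature value.
applicant = np.array([[-0.2, 1.5, 0.1]])
contributions = model.coef_[0] * applicant[0]

print(f"approved: {bool(model.predict(applicant)[0])}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f} to log-odds of approval")
```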
Similar laws are working their way through the U.S. state legislatures of Washington, Massachusetts and Illinois:
- WA House Bill 1655, which establishes guidelines for "the use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability."
- MA Bill H.2701, which establishes a commission on "automated decision-making, artificial intelligence, transparency, fairness, and individual rights."
- IL HB3415, which states that "predictive data analytics in determining creditworthiness or in making hiring decisions ... may not include information that correlates with the race or zip code of the applicant."
Fortunately, advances in AI have kept pace with these needs. Recent research in machine learning (ML) model interpretability makes compliance with these regulations feasible. Cutting-edge techniques like Integrated Gradients from Google Brain, along with SHAP and LIME from the University of Washington, make it possible to open the AI black box and produce meaningful explanations for consumers.
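As a rough illustration, the sketch below applies the open-source `shap` package to a synthetic credit model. The data and model are stand-ins, and the exact shape of the output varies across shap versions.

```python
# A rough sketch using the open-source `shap` package (pip install shap).
# The model and data are synthetic stand-ins, not a production setup.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features:
# the SHAP values plus a base value sum to the model's output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # per-feature attributions
print(shap_values)  # exact shape depends on the shap version and model type
```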
Ensuring fair automated decisions is another related area of upcoming regulation. While there is no consensus in the research community on the right set of fairness metrics, some approaches, like equality of opportunity, are already required by law in use cases like hiring. Integrating AI explainability into the ML lifecycle can also provide insights for fair and unbiased automated decisions. Assessing and monitoring these biases, along with data quality and model interpretability approaches, provides a good playbook for developing fair and ethical AI.
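Equality of opportunity, for instance, asks whether qualified applicants receive favorable decisions at the same rate across groups. The sketch below computes that true-positive-rate gap on hypothetical data; a real audit would use actual decisions and legally defined protected attributes.

```python
# A minimal sketch of an equality-of-opportunity check: compare true
# positive rates (TPR) across two groups. All data here is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)   # protected attribute (0 or 1)
y_true = rng.integers(0, 2, size=1000)  # actually qualified or not
y_pred = rng.integers(0, 2, size=1000)  # model's approve/hire decision

def true_positive_rate(y_true, y_pred, mask):
    # Among qualified members of the group, what fraction were approved?
    qualified = mask & (y_true == 1)
    return (y_pred[qualified] == 1).mean()

tpr_gap = abs(
    true_positive_rate(y_true, y_pred, group == 0)
    - true_positive_rate(y_true, y_pred, group == 1)
)
print(f"equal-opportunity TPR gap: {tpr_gap:.3f}")  # near 0 suggests parity
```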
The recent June 26 U.S. House Committee hearing is a sign that financial services firms need to get ready for upcoming regulations that demand transparent AI systems. All these regulations will help increase trust in AI models and accelerate their adoption across industries toward the longer-term goal of trustworthy AI.