
The Real World Impact of Models without Explainable AI

When ML teams build models, they must keep in mind the human impact of their models’ decisions. Few ML applications carry more potential for harm than AI-enabled HR platforms used to source and evaluate job candidates. Model errors in these platforms can have significant consequences for applicants’ lives and cause major damage to a company’s reputation.

Expertise from experience

The issue isn’t merely hypothetical.

In his previous role at LinkedIn, Fiddler’s Chief AI Officer and Scientist, Krishnaram Kenthapadi, realized that the ML models and systems they were deploying had a huge and potentially long-term impact on people’s lives: connecting candidates with job opportunities, recommending candidates to recruiters, and helping companies retain the talent they have.

Because of the potentially life-altering nature of such systems, the LinkedIn team had to understand how these complex models worked, identify any potential model bias, and detect and resolve issues before they affected users and the reputation of the business.

Interest in the models’ behavior went beyond the core ML team: stakeholders across product leadership and enterprise customers wanted to know how the models worked.

What could go wrong?

Explainable AI (XAI) is necessary to provide human-readable explanations for complex systems that consist of multiple models. A given model might be dedicated to a particular reasoning task in the workflow, feeding its output to another model downstream. Job recommendation systems, for instance, may have one model responsible for parsing and classifying a job opening, while another matches open positions to candidates. A flaw in a single model can produce an errant recommendation, with no clear markers to identify where the flaw originated.

To give an example, suppose a model recommends a clearly inappropriate job, like an internship for someone who is already in a senior position. It’s possible that the root cause lies in the recommendation model, but with multiple layers of purpose-built models providing supporting classification or natural-language processing, the error could sit in any upstream process.

Although the recommendation is what went wrong, the root cause may lie in the process that extracted the job title from the posting, incorrectly marking it as requiring VP-level seniority. In that case, an explanation focused solely on the recommendation model would not suffice, since it would fail to expose the upstream issue in the pipeline.
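
As a highly simplified sketch of how such an error can propagate, consider a toy two-stage pipeline. The function names, parsing rules, and example title below are illustrative stand-ins, not Fiddler’s or LinkedIn’s actual implementation:

```python
# Toy two-stage pipeline: an upstream parser assigns a seniority level to a
# job posting, and a downstream matcher compares it to the candidate's level.
# All rules and names here are illustrative placeholders.

def parse_seniority(job_title: str) -> str:
    """Upstream model stand-in: classify a posting's seniority from its title."""
    title = job_title.lower()
    if "vp" in title or "vice president" in title:
        return "executive"
    if "intern" in title:
        return "entry"
    return "mid"

def recommend(candidate_level: str, job_title: str) -> bool:
    """Downstream model stand-in: recommend the job if seniority levels match."""
    return parse_seniority(job_title) == candidate_level

# The upstream parser mislabels an internship posting as executive-level,
# so the downstream matcher recommends it to a VP-level candidate.
print(recommend("executive", "VP of Engineering Internship Program"))  # True (errant)
```

Viewed in isolation, the matcher behaved correctly given its input; only by tracing explanations back through the pipeline does the upstream parsing error surface.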

The challenge is magnified by the need to provide end-user explanations aligned to each individual’s needs and technical knowledge.

Explainability for humans 

Determining which XAI method is most appropriate depends on the use case and business context – the type of model, whether it consumes structured or unstructured data, the model’s purpose, and so on – and on the audience, from data scientists and model validators to business decision makers and end users.

Attribution is a widely used class of explainability methods that characterizes the contribution of each input feature to a particular prediction. SHAP, LIME, and Integrated Gradients are among the dominant approaches to attribution-based XAI.
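
As a rough illustration, here is a minimal sketch of attribution with the SHAP library on a toy candidate-matching classifier. The features, labels, and model are synthetic and purely illustrative, not a production setup:

```python
# Minimal SHAP attribution sketch on a toy "candidate-job match" classifier.
# Features, labels, and model are synthetic and purely illustrative.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["years_experience", "skill_match_score", "seniority_gap"]
X = rng.normal(size=(500, 3))
# Synthetic label: a "good match" driven mostly by skill_match_score.
y = (X[:, 1] + 0.3 * X[:, 0] - 0.2 * X[:, 2] > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles;
# each value is one feature's contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

Each printed value indicates how much that feature pushed this particular prediction up or down, which is exactly the kind of per-decision evidence non-technical stakeholders ask for.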

Another promising XAI framework is counterfactual explanations, which identify how an input would have to change for the prediction to change, isolating the features that dominate a given prediction. What-if tools, such as the one found in Fiddler, offer an easy way to construct counterfactual explanations across different scenarios. Counterfactuals are also emerging as an important way to stress-test models and better understand their behavior at the extremes of the input data.
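
To make the idea concrete, here is a hedged sketch of a brute-force what-if search: it nudges a single feature of a toy model’s input until the predicted class flips. This is a simplified illustration of the concept, not Fiddler’s what-if implementation; the model, features, and step sizes are all assumptions:

```python
# Brute-force counterfactual sketch: nudge one feature of a toy "no match"
# candidate until a synthetic classifier changes its decision.
# Model, features, and thresholds are all illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # [experience, skill, gap]
y = (X[:, 1] + 0.3 * X[:, 0] > 0).astype(int)    # synthetic "good match" label
model = LogisticRegression().fit(X, y)

def counterfactual(x, feature, step=0.05, max_steps=200):
    """Increase one feature until the predicted class flips, if it ever does."""
    original = model.predict([x])[0]
    candidate = np.array(x, dtype=float)
    for _ in range(max_steps):
        candidate[feature] += step
        if model.predict([candidate])[0] != original:
            return candidate
    return None

x = np.array([-0.2, -0.5, 0.1])                  # a candidate predicted "no match"
flipped = counterfactual(x, feature=1)           # vary the skill feature only
print("original prediction:", model.predict([x])[0])
print("counterfactual input:", flipped)
```

In practice, what-if tooling lets users adjust several features interactively and compare scenarios, rather than searching one feature at a time as this sketch does.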

As XAI becomes more mainstream, ML teams may also bring their company’s own custom explainers to obtain faithful explanations for particular models. Risk and compliance teams, for instance, may require specific explanations of model outputs to ensure recommendations are fair and ethical.

To better understand how XAI works, read our technical brief on XAI in Fiddler.

Technical Brief: How Explainable AI Works in Fiddler