AI ethics is becoming a business imperative
As AI adoption accelerates across industries, it also raises concerns when the ML models powering these applications are not implemented correctly. Limited operational visibility can hide underlying problems in the ML models that are put into production. For example, model bias or unfairness often goes unnoticed, yet poses significant risk when training data or production models amplify discrimination against specific groups. Collectively, these ML challenges fall under the umbrella of AI Ethics, an area that 75% of executives rank as important.
The Operational Unfairness problem
ML fairness and monitoring are early areas of adoption that each address part of the challenges outlined above. ML practitioners generally regard drift monitoring as an early warning system for performance issues, and fairness metrics as a way to assess bias in a trained model. However, a trained model that is fair can become unfair after deployment due to the same model drift that causes performance issues. Analyzing the impact of drift on a model's fairness is at least as important as tracking its performance metrics. Yet fairness monitoring for deployed models is still nascent and not widely adopted. Beyond the lack of tooling, current solutions have statistical limitations and may require outcome labels that are unavailable in production, making them difficult to implement.
FairCanary - a fast solution for monitoring and explaining fairness
FairCanary is a system that continuously monitors the real-time fairness of a model in production. Using FairCanary, an ML developer can set fairness alerts and leverage explainable AI to understand why a fairness alert was triggered. FairCanary works for both classification and regression models.
To do this, FairCanary introduces a new bias quantification metric, Quantile Demographic Drift or QDD. Typically, data drift is calculated by comparing production data distributions against the training data distribution. QDD instead measures the shift between the model's score distributions for different protected groups and uses their divergence as an indicator of unfairness. Because this approach does not require outcomes, which are threshold-based and generally unavailable in production, it provides insights across all individuals rather than only those near a decision threshold, and it surfaces intra-group disparities that might otherwise be aggregated away in small groups. That makes QDD ideal for real-time model monitoring.
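To make the idea concrete, here is a minimal sketch of a QDD-style comparison: the per-quantile difference between the predicted-score distributions of two groups. The function name, signature, and quantile grid are illustrative assumptions, not FairCanary's actual API.

```python
import numpy as np

def qdd_sketch(scores_a, scores_b, quantiles=None):
    """Per-quantile difference between two groups' score distributions.

    A positive value at quantile q means group A scores higher than
    group B at that point in the distribution. (Illustrative only.)
    """
    if quantiles is None:
        quantiles = np.linspace(0.01, 0.99, 99)
    qa = np.quantile(scores_a, quantiles)
    qb = np.quantile(scores_b, quantiles)
    return qa - qb

# Synthetic example: group B's scores are shifted ~0.1 below group A's.
rng = np.random.default_rng(0)
group_a = rng.normal(0.6, 0.1, 5000)
group_b = rng.normal(0.5, 0.1, 5000)
drift = qdd_sketch(group_a, group_b)
```

Because the comparison covers the whole distribution, a disparity shows up even for individuals far from any decision threshold, which is the property the paragraph above highlights.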
When an unfairness alert is triggered, it can be challenging to isolate the issue causing the bias. To address this, FairCanary also incorporates an approach, called Local QDD Attribution, that explains the QDD value in terms of the contributing model features. Under the hood, it uses Shapley-value-based methods and Integrated Gradients. Local QDD Attribution reuses a single set of attributions across multiple group-level explanations, which makes it many times faster than existing metrics that require recalculation for every grouping.
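The key efficiency idea, computing per-individual feature attributions once and then aggregating them per group, can be sketched with a linear model, where exact attributions are simply coefficient times feature deviation from a baseline. This is an assumption-laden toy, not FairCanary's Local QDD Attribution algorithm; all names here are hypothetical.

```python
import numpy as np

def linear_attributions(X, coef, baseline):
    """Exact per-feature attributions for a linear model score(x) = coef @ x.

    For linear models, Shapley values and Integrated Gradients both
    reduce to this closed form. (Illustrative sketch.)
    """
    return coef * (X - baseline)

rng = np.random.default_rng(1)
coef = np.array([0.5, 0.2])
baseline = np.array([0.0, 0.0])

# Two groups that differ only in feature 0.
X_a = rng.normal([1.0, 1.0], 0.1, size=(1000, 2))
X_b = rng.normal([0.8, 1.0], 0.1, size=(1000, 2))

# Attributions are computed ONCE per individual...
attr_a = linear_attributions(X_a, coef, baseline)
attr_b = linear_attributions(X_b, coef, baseline)

# ...then aggregated per group: the mean attribution gap per feature
# explains which feature drives the score gap between groups.
gap = attr_a.mean(axis=0) - attr_b.mean(axis=0)
```

Here the aggregation correctly pins the group-level score gap on feature 0; swapping in a different grouping reuses the same `attr_a`/`attr_b` arrays, which is where the speedup comes from.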
When unfairness is detected, ML teams take corrective action in the form of mitigation. FairCanary provides automatic mitigation of the bias uncovered by the QDD metric using a quantile norming approach. This approach replaces each score in the disadvantaged group with the score at the corresponding rank in the advantaged group. Since this is a post-processing step, it avoids retraining the model to debias it and is therefore a computationally inexpensive approach to bias mitigation.
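The rank-replacement step described above can be sketched in a few lines. This is a minimal illustration of quantile norming in general, assuming continuous scores; the function name and details are not FairCanary's implementation.

```python
import numpy as np

def quantile_norm_sketch(disadvantaged, advantaged):
    """Map each disadvantaged-group score to the advantaged-group score
    at the same rank, so the two score distributions match afterwards.
    Rank order within the disadvantaged group is preserved."""
    ranks = np.argsort(np.argsort(disadvantaged))    # 0-based rank of each score
    quantiles = (ranks + 0.5) / len(disadvantaged)   # rank -> quantile in (0, 1)
    return np.quantile(advantaged, quantiles)        # read off matching quantiles

# Synthetic example: the disadvantaged group's scores sit ~0.1 lower.
rng = np.random.default_rng(2)
adv = rng.normal(0.6, 0.1, 4000)
dis = rng.normal(0.5, 0.1, 4000)
fixed = quantile_norm_sketch(dis, adv)
```

After norming, `fixed` follows the advantaged group's distribution while keeping the disadvantaged group's internal ordering intact, which is why this post-processing step needs no retraining.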
In summary, FairCanary helps monitor, troubleshoot, and mitigate bias in production ML models in a fast and efficient way. Please refer to our research paper on arXiv if you'd like to review the technical underpinnings of FairCanary.