Learn the human-centric challenges and requirements for ML model monitoring in real-world applications.
Learn about FairCanary, a novel approach to generating feature-level bias explanations that is an order of magnitude faster than previous methods.
Read about GroupShapley and GroupIG (Integrated Gradients), axiomatically justified methods to understand drift in ML model prediction distributions over time.
Learn why fairness metrics need to incorporate intersectionality, including a simple method to expand the definitions of existing group fairness metrics.
Learn how organizations view and use explainability for stakeholder consumption, including a framework for establishing clear goals for explainability.
Learn how subtle differences in the underlying game formulations of existing methods can cause large differences in the attributions for a prediction.
Learn how randomized feature ablation helps measure the importance of a feature on a model's ability to make good predictions.
An overview of model interpretability and explainability in AI, key regulations and laws, and techniques and tools for providing explainability as part of AI/ML systems.