We’re excited to welcome Hima Lakkaraju to Fiddler. Hima is a well-known researcher in ethical and explainable AI. She is currently an Assistant Professor at Harvard University, and before that she earned her Ph.D. from Stanford University, where she worked with Jure Leskovec. Hima also leads the AI4LIFE research group at Harvard and co-founded the Trustworthy ML Initiative (TrustML), which aims to lower the barriers to entry into trustworthy ML and bring together researchers and practitioners working in the field.
Hima is joining Fiddler as a Research Fellow, where she will guide our explainable AI research in collaboration with our data science and engineering teams. In her own words:
My research interests lie within the broad area of trustworthy machine learning. More specifically, my research spans explainable, fair, and robust ML, and I am also very interested in reinforcement learning and causal inference. I develop machine learning tools and techniques that enable human decision makers to make better decisions. In particular, my research addresses the following fundamental questions pertaining to human and algorithmic decision-making:
1. How can we build fair and interpretable models that aid human decision-making?
2. How can we ensure that models and their explanations are robust to adversarial attacks?
3. How can we train and evaluate models in the presence of missing counterfactuals?
4. How can we detect and correct underlying biases in human decisions and algorithmic predictions?
These questions have far-reaching implications in domains involving high-stakes decisions, such as criminal justice, health care, public policy, business, and education. Fiddler is taking on the challenge of applying this research to build an enterprise-ready product across a variety of industry verticals and use cases. This is what excited me to join Fiddler on this journey!
Hima is a passionate believer in the need for explainable AI and is excited to join Fiddler. If you’re interested in our mission and excited about working with Hima, feel free to apply here.