Data science blogs
Dive into key concepts, terminology, and cutting-edge research in data science.
- Fiddler Report Generator for AI Risk and Governance (Bashir Rastegarpanah and Karen He)
- Evaluate LLMs Against Prompt Injection Attacks Using Fiddler Auditor (Amal Iyer and Karen He)
- Making Image Explanations Human-Centric: Decisions Beyond Heatmaps (Murtuza Shergadwala)
- What is ChatGPT Thinking? (Josh Rubin)
- Human-Centric Design For Fairness And Explainable AI (Murtuza Shergadwala)
- Monitoring Natural Language Processing and Computer Vision Models, Part 3 (Bashir Rastegarpanah)
- Expect The Unexpected: The Importance of Model Robustness (Ankur Taly)
- Monitoring Natural Language Processing and Computer Vision Models, Part 2 (Amal Iyer)
- Monitoring Natural Language Processing and Computer Vision Models, Part 1 (Bashir Rastegarpanah)
- Detecting Intersectional Unfairness in AI: Part 2 (Murtuza Shergadwala)
- Measuring Data Drift: Population Stability Index (Murtuza Shergadwala)
- Detecting Intersectional Unfairness in AI: Part 1 (Murtuza Shergadwala)
- Measuring Intersectional Fairness (Avijit Ghosh)
- A Practical Guide to Adversarial Robustness (Malhar Jere)
- Identifying Bias When Sensitive Attribute Data is Unavailable: Geolocation in Mortgage Data (Marissa Gerchick)
- Identifying Bias When Sensitive Attribute Data is Unavailable: Exploring Data From the HMDA (Marissa Gerchick)
- [Video] AI Explained: What are Integrated Gradients? (Ankur Taly)
- AI Explained Video Series: What are Shapley Values? (Ankur Taly)
- The State of Explainability: Impressions from Partnership on AI (PAI)’s Workshop in NYC (Josh Rubin)
- Counterfactual Explanations vs. Attribution Based Explanations (Ankur Taly and Aalok Shanbhag)
- Identifying Bias When Sensitive Attribute Data is Unavailable: Techniques for Inferring Protected Characteristics (Marissa Gerchick)
- Identifying Bias When Sensitive Attribute Data is Unavailable (Marissa Gerchick)
- FAccT 2020 - Three Trends in Explainability (Ankur Taly)
- Should You Explain Your Predictions With SHAP or IG? (Dan Frankowski)
- Causality in Model Explanations and in the Real World (Dan Frankowski and Ankur Taly)
- Debugging Predictions Using Explainable AI (Dan Frankowski)
- A Gentle Introduction to GA2Ms - A White Box Model (Dan Frankowski)
- Humans Choose - AI Does Not (Dan Frankowski)
- A Gentle Introduction to Algorithmic Fairness (Dan Frankowski)