Explainable AI blogs
Learn how to explain model predictions, key considerations for explainability, and the latest research in explainable AI.
Murtuza Shergadwala
Human-Centric Design For Fairness And Explainable AI
Mary Reagan
Not all Rainbows and Sunshine: the Darker Side of ChatGPT
Shohil Kothari
The Real World Impact of Models without Explainable AI
Krishnaram Kenthapadi
Why You Need Explainable AI
Amit Paka
FairCanary: Rapid Continuous Explainable Fairness
Shohil Kothari
Implementing Model Performance Management in Practice
Shohil Kothari
Explainable AI
Amy Holder
XAI Summit Highlights: Responsible AI in Banking
Krishna Gade
Where Do We Go from Here? The Case for Explainable AI
Henry Lim
The Key Role of Explainable AI in the Next Decade
Henry Lim
How Explainable AI Keeps Decision-Making Algorithms Understandable, Efficient, and Trustworthy - Krishna Gade x Intelligent Automation Radio
Anusha Sethuraman
Responsible AI Podcast with Anjana Susarla – “The Industry Is Still in a Very Nascent Phase”
Henry Lim
Achieving Responsible AI in Finance With Model Performance Management
Amit Paka
Introducing All-Purpose Explainable AI
Henry Lim
What Should Research and Industry Prioritize to Build the Future of Explainable AI?
Anusha Sethuraman
The Past, Present, and Future States of Explainable AI
Anusha Sethuraman
Explainable Monitoring for Successful Impact with AI Deployments
Anusha Sethuraman
Achieving Responsible AI in Finance
Erika Renson
The State of AI Explainability and Monitoring: Market Survey 2020
Amit Paka
AI in Banking: Rise of the AI Validator
Erika Renson
XAI Summit Speaker Spotlight: How Can We Increase Trust in AI?
Erika Renson
XAI Summit Speaker Spotlight: What Is Responsible AI to You and Why Is It Important?
Anusha Sethuraman
Fiddler’s 3rd Annual Explainable AI Summit
Erika Renson
AI Explained Video Series: The AI Concepts You Need to Understand
Ankur Taly
[Video] AI Explained: What are Integrated Gradients?
Krishna Gade
Explainable Monitoring: Stop Flying Blind and Monitor Your AI
Ankur Taly
AI Explained Video Series: What are Shapley Values?
Ankur Taly and Aalok Shanbhag
Counterfactual Explanations vs. Attribution Based Explanations
Anusha Sethuraman
Explainable AI Podcast: Founder of AIEthicist.org, Merve Hickok, explains the importance of ethical AI and its future
Anusha Sethuraman
The Next Generation of AI: Explainable AI
Anusha Sethuraman
CIO Outlook 2020: Building an Explainable AI Strategy for Your Company
Anusha Sethuraman
Latest Explainable AI Newsletter: January 2020
Anusha Sethuraman
Explainable AI Podcast: Founder & CTO of Elixr AI, Farhan Shah, discusses AI and the need for transparency
Amit Paka
How to Design to make AI Explainable
Anusha Sethuraman
Where is AI Headed in 2020?
Luke Merrick
Explainable AI at NeurIPS 2019
Anusha Sethuraman
Explainable AI Podcast: Global & Fiddler discuss AI, explainability, and machine learning