Bias and fairness in AI blogs
- Human-Centric Design For Fairness And Explainable AI, by Murtuza Shergadwala
- FairCanary: Rapid Continuous Explainable Fairness, by Amit Paka
- Detecting Intersectional Unfairness in AI: Part 2, by Murtuza Shergadwala
- AI Regulations Are Here. Are You Ready?, by Krishna Gade
- Detecting Intersectional Unfairness in AI: Part 1, by Murtuza Shergadwala
- Fiddler Recognized as a Representative Vendor in the 2021 Gartner Market Guide, by Henry Lim
- Fiddler Listed As A Sample Vendor for Explainable AI in Two 2021 Gartner Hype Cycle Reports, by Henry Lim
- Responsible AI Podcast with Scott Zoldi: "It's time for AI to grow up", by Anusha Sethuraman
- EU Mandates Explainability and Monitoring in Proposed GDPR of AI, by Amit Paka
- Fiddler X AWS Startup Showcase: Why Model Performance Management Is the Next Big Thing in AI, by Anusha Sethuraman
- Introducing Bias Detector: A New Methodology to Assess Machine Learning Fairness, by Amit Paka
- Responsible AI Podcast with Anand Rao: "It's the Right Thing to Do", by Anusha Sethuraman
- Measuring Intersectional Fairness, by Avijit Ghosh
- Understanding Bias and Fairness in AI Systems, by Mary Reagan, PhD
- AI in Finance Panel: Accelerating AI Risk Mitigation with XAI and Continuous Monitoring, by Anusha Sethuraman
- How Do We Build Responsible, Ethical AI?, by Anusha Sethuraman
- How to Build a Fair AI System, by Amit Paka
- TikTok and the Risks of Black Box Algorithms, by Krishna Gade
- Identifying Bias When Sensitive Attribute Data Is Unavailable: Geolocation in Mortgage Data, by Marissa Gerchick
- Identifying Bias When Sensitive Attribute Data Is Unavailable: Exploring Data from the HMDA, by Marissa Gerchick
- Identifying Bias When Sensitive Attribute Data Is Unavailable: Techniques for Inferring Protected Characteristics, by Marissa Gerchick
- Identifying Bias When Sensitive Attribute Data Is Unavailable, by Marissa Gerchick
- FAccT 2020: Three Trends in Explainability, by Ankur Taly
- The Never-Ending Issues Around AI and Bias: Who's to Blame When AI Goes Wrong?, by Krishna Gade
- Regulations To Trust AI Are Here. And It's a Good Thing., by Amit Paka
- Can Congress Help Keep AI Fair for Consumers?, by Kent Twardock
- A Gentle Introduction to Algorithmic Fairness, by Dan Frankowski