Fiddler + Amazon Partnership: Leading Financial Services Institution Scales AI Governance

Industry
Financial Services
Deployment
VPC
AI Observability Solutions
  • LLM Observability
  • Agentic Observability
  • ML Monitoring
Use Cases
  • Customer support (wealth, brokerage, asset management)
  • Hallucination detection and blocking
  • ML model monitoring (SageMaker AI)
Tech Stack
  • Deployment & Infrastructure: AWS SageMaker Partner App
  • AI Models & Inference: Amazon Bedrock (configured for Haiku 3), Proprietary LLM Gateway
  • Observability & Monitoring: Fiddler AI, OpenTelemetry, AWS CloudWatch, AWS Managed Prometheus/Grafana, AWS Application Inference Profiles

Fiddler AI Observability and Security has enabled this Leading Financial Services Institution to accelerate the deployment of GenAI applications and AI agents, while eliminating data-sharing risks and overcoming compliance blockers. The result is a scalable foundation for enterprise AI governance.

Results at a Glance

Fiddler’s partnership delivered measurable results for the institution, including:

  • Accelerated Time-to-Market: Launched GenAI applications and AI agents months ahead of schedule by removing compliance blockers.
  • Eliminated Data-Sharing Risk: Deployed Fiddler Trust Models in-environment to detect hallucinations and PII without external API calls.
  • Centralized AI Monitoring: Established a unified command center for all Amazon SageMaker AI deployments with complete visibility.
  • Predictable Total Cost of Ownership: Replaced variable, token-based LLM usage fees with in-built Trust Models, making AI operations costs forecastable.

The Challenge: Scaling AI in a Highly Regulated Environment

As a major investment-focused financial services institution, the organization operates under stringent financial services industry (FSI) security and compliance requirements. The AI team was tasked with deploying GenAI applications and AI agents to modernize operations, but faced significant hurdles that traditional monitoring tools could not address.

The institution encountered four critical obstacles. 

  1. FSI compliance requirements demanded that no proprietary data leave their secure environment, ruling out AI monitoring solutions that relied on external API calls for evaluation and scoring. 
  2. Cost unpredictability from LLM usage created budget uncertainty, as token-based pricing from external model calls made operational costs difficult to forecast. 
  3. Lifecycle governance for models, agents, and tool calls required comprehensive oversight across the entire AI stack, from individual model outputs to complex multi-agent interactions. 
  4. The high risk of hallucinated outputs in customer-facing and internal AI applications posed significant reputational and operational risk.

The team needed a solution that could provide comprehensive AI observability while meeting their security-first requirements and enabling governance at scale.

The Solution: The Fiddler AI Control Plane

Fiddler emerged as the preferred solution because it addressed each of the institution’s core requirements while providing a scalable foundation for future AI operations.

Fiddler serves as the unified AI command center for the institution’s entire AI portfolio. The platform integrates directly with their Amazon SageMaker AI environment, providing centralized monitoring and governance across all GenAI applications and AI agents. This single pane of glass enables the team to see every action, understand every decision, and control every outcome.

The Fiddler Trust Service, with its in-built Trust Models, was particularly critical for the institution’s security requirements. These purpose-built, fine-tuned models run entirely within the institution’s VPC environment, enabling hallucination detection, PII identification, and safety scoring without any data leaving their secure perimeter. This architecture eliminated the data-sharing risks that had been a compliance blocker with other solutions, and lowered Total Cost of Ownership (TCO) by running without external API calls.

Fiddler Guardrails provides the institution with proactive protection against harmful LLM outputs. By moderating risky prompts and responses in real time, the platform prevents hallucinations, safety violations, and potential compliance issues before they reach end users. The low-latency performance of the Trust Models ensures this protection does not impact application responsiveness.
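To make the pattern concrete, the sketch below shows the general shape of an in-environment guardrail layer: score each response locally, then block or release it before it reaches the user. This is an illustrative sketch only, not Fiddler's actual API; the function names, score fields, markers, and threshold are all hypothetical stand-ins.

```python
# Hypothetical guardrail flow: score a model response inside the VPC
# before it reaches the user. The scoring logic below is an illustrative
# stand-in for purpose-built trust models, not Fiddler's real API.
from dataclasses import dataclass


@dataclass
class TrustScores:
    hallucination: float  # 0.0 (grounded) .. 1.0 (likely fabricated)
    pii: float            # likelihood the text exposes PII
    safety: float         # likelihood of a safety violation


def score_response(prompt: str, response: str) -> TrustScores:
    """Stand-in scorer. A real deployment would invoke locally hosted,
    fine-tuned trust models here, so no data leaves the secure perimeter."""
    pii_markers = ("ssn", "account number", "date of birth")  # toy heuristic
    return TrustScores(
        hallucination=0.0,
        pii=1.0 if any(m in response.lower() for m in pii_markers) else 0.0,
        safety=0.0,
    )


def guarded_reply(prompt: str, response: str, threshold: float = 0.5) -> str:
    """Release the response only if every trust score is below the threshold."""
    scores = score_response(prompt, response)
    if max(scores.hallucination, scores.pii, scores.safety) >= threshold:
        # Blocked: return a safe fallback instead of the risky output.
        return "I'm unable to share that. Please contact support."
    return response
```

Because scoring runs in-process (or against models hosted in the same VPC), this pattern avoids both the data-sharing and the per-token cost concerns described above.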

The Results: Faster Deployment, Lower Risk

With Fiddler in place, the institution achieved meaningful acceleration in their AI modernization efforts. GenAI applications and AI agents that had been stalled by compliance concerns moved into production months ahead of revised timelines. The compliance team gained confidence in approving AI deployments because Fiddler provided the audit evidence and governance capabilities required by FSI regulations.

The in-environment deployment model delivered immediate cost benefits. By using Fiddler's Trust Models instead of external LLM-as-a-judge approaches, the institution eliminated unpredictable API and token costs while maintaining comprehensive evaluation capabilities. This predictability allowed the AI team to plan capacity and budget with greater confidence.

Perhaps most importantly, Fiddler established the foundation the institution needs for continued AI expansion. As they scale their use of AI agents and explore more sophisticated multi-agent systems, Fiddler's visibility across the agentic hierarchy ensures they can maintain oversight and governance across increasingly complex deployments.

Looking Ahead: Building on a Trusted Foundation

The institution’s partnership with Fiddler continues to evolve as their AI ambitions grow. With the observability and governance infrastructure now in place, they are positioned to expand their use of GenAI across additional business functions while maintaining the security and compliance standards that their industry demands.

As the institution explores more sophisticated agentic AI applications, Fiddler’s end-to-end observability capabilities will enable them to understand system behaviors, dependencies, and outcomes across the entire agentic hierarchy. This visibility is essential for managing the rapidly compounding complexity that multi-agent systems introduce.

To learn more about how the Fiddler AI Observability Platform can help you ship more LLM deployments into production, book a demo, or read additional case studies.