Conjura Reduces Time to Detect and Resolve Model Drift from Days to Hours with Fiddler

Industry
eCommerce, B2B SaaS
Location
London, UK
Models in production
10+
Use Cases
  • Customer lifetime value prediction
  • Customer segmentation and personalization
  • Demand planning
  • Marketing automation
Tech Stack
  • Training: Amazon SageMaker
  • Packaging: Docker
  • Registry: Amazon SageMaker
  • Data testing/validation: dbt
  • AI Observability: Fiddler

Fiddler AI Observability (formerly known as Model Performance Management) helps Conjura centralize model monitoring, manage the full ML lifecycle, and fold critical learnings back into the core use cases that provide real operational benefit to the growing list of eCommerce businesses using their platform.

Challenge

Conjura, a data analytics provider for eCommerce, needed a way to consistently link model performance degradation to data drift, a challenge magnified by a lack of tooling and process covering the entire ML lifecycle. As Conjura looked to scale their ML offerings, the goal was to measurably improve model quality over the entire lifecycle and to respond proportionally and efficiently to real issues in production.

Results

With Fiddler, Conjura created scalable workflows to:

  • Proactively monitor their machine learning models
  • Align the organization
  • Establish internal best practices
  • Move more quickly into production
  • Improve data reliability and analysis for their eCommerce customers

Elevating insight into model performance

Conjura enables their customers to connect and benchmark data from the multitude of sources that modern eCommerce businesses rely on. With machine learning at its core, the Conjura business analytics and data science platform helps eCommerce companies make better business decisions, while optimizing marketing efforts, accelerating sales, streamlining fulfillment, managing product inventory, and building profitability into each brand’s customer base.

As Conjura built out their ML offerings, growing their Data Science team and the number of models in production, it became evident that they were under-tooled to manage those models and accelerate the pace of development. Without standardized, centralized model monitoring, timely insights into model performance were sometimes missed, forcing a reactive rather than proactive allocation of resources to production issues.

Simply put, the Conjura team asked Fiddler for a way to create internal best practices around their models, proactively monitor those models, and receive prompt alerts whenever a model’s performance began to degrade. For the longer term, Conjura was looking for a robust machine learning model lifecycle solution that would scale with their business. Conjura is keen to reinvest the learnings from this solution back into their ML products to make them more resilient and applicable to a wider range of use cases.

Aligning the organization

First, teams from Fiddler and Conjura discussed how to instrument the communication required by Conjura’s AWS pipeline. Once the Conjura team understood precisely how the Fiddler APIs worked, they created an automated flow that minimizes the user intervention needed to onboard a model for monitoring and transparency. Richer model monitoring and debugging tools become available as soon as a model is deployed. This has significantly improved speed to production, and the Conjura Data Science team is keen to use early learnings from the Fiddler platform to refine their models further. For their LTV (customer lifetime value) prediction use cases specifically, they are already leveraging feature drift to refine the feature selection process.
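Conjura’s exact onboarding automation is not published, but a minimal registration step with the Fiddler Python client might look like the sketch below. It assumes the 1.x-era fiddler-client; the project, dataset, model, and column names are hypothetical, and exact method names and signatures vary by client version.

```python
# Illustrative onboarding sketch (assumptions: 1.x-era fiddler-client,
# hypothetical project/dataset/model/column names). Conjura's real flow is
# triggered from their AWS pipeline rather than run by hand.
import pandas as pd
import fiddler as fdl

client = fdl.FiddlerApi(
    url="https://<org>.fiddler.ai",  # hypothetical deployment URL
    org_id="<org>",
    auth_token="<token>",
)

PROJECT_ID = "ltv"            # hypothetical project for the LTV models
DATASET_ID = "ltv_baseline"   # baseline sample that drift is measured against
MODEL_ID = "ltv_model_v1"

# Upload a baseline (training) sample; production traffic is compared to it.
baseline_df = pd.read_parquet("baseline.parquet")
client.upload_dataset(
    project_id=PROJECT_ID,
    dataset_id=DATASET_ID,
    dataset={"baseline": baseline_df},
)

# Describe the model's target and output columns from the baseline schema.
model_info = fdl.ModelInfo.from_dataset_info(
    dataset_info=client.get_dataset_info(PROJECT_ID, DATASET_ID),
    target="ltv_90d",
    outputs=["predicted_ltv_90d"],
    model_task=fdl.ModelTask.REGRESSION,
)

# Register the model so production events can be published against it.
client.register_model(
    project_id=PROJECT_ID,
    dataset_id=DATASET_ID,
    model_id=MODEL_ID,
    model_info=model_info,
)
```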

“The traditional DS workflow is overly fixated on refining and optimizing models in notebooks on historical training data; however, I believe there is so much more to be learned about model quality and performance in production, when these models are exposed to new data.

With its clean API and the rich monitoring and analysis tools provided by the UI, Fiddler has greatly reduced the energy barrier to gaining valuable insight on production data. Not only does this provide our team with peace of mind, but we’re equally excited to see how this ‘production-first’ mindset is already informing model improvements in our LTV use cases for more resilient predictions.”

— Anthony Anderson, Head of Data Science, Conjura

AI Observability accelerates the time to detection and resolution

The Conjura team was proactive in onboarding Fiddler into their ecosystem; they even designed an automated AWS workflow (using Step Functions and Lambda) that registers their models and publishes production traffic to Fiddler with minimal manual effort. The Fiddler AI Observability platform empowered the Conjura team to take ownership of monitoring, and they are now able to quickly identify and mitigate errors on their own.
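The internals of Conjura’s workflow are not described in the case study, but as a rough sketch, the Lambda step that publishes a batch of production inferences to Fiddler could look something like the following. The event fields, environment variables, and the `publish_events_batch` call are assumptions based on the 1.x-era Python client.

```python
# Hypothetical Lambda step in the Step Functions workflow: read the latest
# batch of predictions and publish it to Fiddler so drift and performance
# metrics stay current. Event fields, environment variables, and the exact
# Fiddler client call are assumptions, not Conjura's actual implementation.
import os
import pandas as pd
import fiddler as fdl

def lambda_handler(event, context):
    client = fdl.FiddlerApi(
        url=os.environ["FIDDLER_URL"],
        org_id=os.environ["FIDDLER_ORG"],
        auth_token=os.environ["FIDDLER_TOKEN"],
    )

    # Step Functions passes the S3 location of the latest inference output.
    predictions = pd.read_parquet(event["predictions_s3_uri"])

    # Publish the batch of production events against the registered model.
    client.publish_events_batch(
        project_id=event["project_id"],
        model_id=event["model_id"],
        batch_source=predictions,
    )
    return {"published_rows": len(predictions)}
```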

By tracking and reducing mean time to detect (MTTD) and mean time to respond (MTTR) from several days to hours, Fiddler helped Conjura create best practices for AI Observability that directly address data and performance drift. Taking the solution further, Conjura set up a machine learning operations (MLOps) practice team that shares ownership of production model maintenance with the data science team, improving internal transparency. This level of accountability and maturity is now a fixture of the team’s daily touchpoints.

Growing into the ML lifecycle with Fiddler

The Conjura team was looking for a model monitoring and explainable AI platform that meets today’s business and technical requirements, with the mindset to build and evolve together with Fiddler as user needs grow.

Conjura’s business is built upon artificial intelligence that meets, and exceeds, the massive, rapid-fire, and complex data requirements of eCommerce. With Fiddler as a key component in their MLOps infrastructure, Conjura now has a fully functional model performance management (MPM) platform and employs a series of automation tools that improve model lifecycle management and data understanding for their eCommerce customers.

Conjura looks forward to collecting data on model retraining events within the Fiddler AI Observability platform over a longer period of time, and to using it to challenge and refine their current model maintenance procedures, with the goal of building more automation into the MLOps lifecycle. These are key initiatives as they continue to scale operations.

With Fiddler, Conjura enjoys:

  • Visibility into model events: predictions and changes
  • Record of how models have been performing over time
  • Single source of truth for model predictions
  • Centralized tool to analyze and debug model performance issues
  • Standardized model interpretations
  • A strong working relationship with Fiddler’s customer success team