Implementing Model Performance Management in Practice

Model performance management (MPM) is a framework for building trust into AI systems — and as with any framework, the big question that businesses have is: How should we implement it? (And should we do it ourselves, or use existing tools and platforms?)

How does feedback contribute to model optimization?

Continuously updating machine learning models to keep up with data drift often relies on closing the feedback loop around those models. This idea comes from control theory and involves comparing expected versus actual model outputs to improve model performance. Control theory distinguishes two kinds of feedback loops: open and closed. An open loop ignores the system’s previous outputs, while a closed loop takes past outputs into account when handling new inputs.

Model performance management serves as part of a closed-loop feedback system in the MLOps lifecycle. The loop closes when the model takes in feedback that the model performance management system provides about that model’s output. The system compares the model’s output against externally sourced ideal or expected outputs, also called desired references. Your team can then analyze the results of this validation process and use them to guide how it optimizes the model’s predictions. Feedback from the system can help your business avoid model bias in the early stages of the MLOps lifecycle and improve predictions in the later stages.
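
As a rough illustration, closing the loop might look like the sketch below: logged predictions are joined with externally sourced desired references (for example, delayed ground-truth labels) and reduced to a feedback signal the team can act on. The record structure, function name, and error budget are hypothetical, not part of any particular MPM product.

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    prediction_id: str
    predicted: float
    desired: float  # externally sourced "desired reference", e.g. a delayed ground-truth label

def feedback_signal(records: list[FeedbackRecord], error_budget: float = 0.1) -> dict:
    """Close the loop: compare actual outputs with desired references and
    summarize the gap so the team (or an automated job) can decide whether
    the model needs attention."""
    errors = [abs(r.predicted - r.desired) for r in records]
    mean_error = sum(errors) / len(errors)
    return {
        "mean_absolute_error": mean_error,
        "needs_attention": mean_error > error_budget,  # fed back into the MLOps loop
    }
```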

How does model performance management fit into the MLOps lifecycle? 

Model performance management is a simple way to achieve consistent monitoring that aids model optimization throughout the MLOps lifecycle. 

Model performance management in the ML development and training stages

When developing a model, it’s important to prevent bias to avoid deploying a high-performing but flawed model that could pose risks to your business. Explainable AI (XAI), the topic of Chapter 2, can aid in detecting biases in training data as well as skew within the model itself. After training, model performance management can also help determine which model will best serve your goals by indicating whether a particular model is overly sensitive to the training data.
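
One rough way to check whether a candidate model is overly sensitive to its training data is to look at the spread of its cross-validation scores and the gap between training and validation performance. The sketch below assumes a scikit-learn-style estimator and a tabular dataset; the threshold is an arbitrary placeholder, not a standard.

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def training_sensitivity_report(model, X, y, cv: int = 5, gap_threshold: float = 0.05) -> dict:
    """Rough check for a model that is overly sensitive to its training data:
    a large spread across folds, or a big train/validation gap, suggests the
    model may not generalize well."""
    val_scores = cross_val_score(model, X, y, cv=cv)
    train_score = model.fit(X, y).score(X, y)
    report = {
        "mean_val_score": float(np.mean(val_scores)),
        "val_score_std": float(np.std(val_scores)),
        "train_val_gap": float(train_score - np.mean(val_scores)),
    }
    report["possibly_overfit"] = report["train_val_gap"] > gap_threshold
    return report
```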

Model performance management in the ML release and monitoring stages

After deploying a model to production, it’s recommended that your system log additional metadata, such as the input and the model version number, alongside each prediction. This kind of model monitoring log records the context of any deviation from expected behavior and helps prevent data drift from turning into model drift through misaligned predictions. In the post-deployment stage, model performance management can help your team review predictions and metadata to understand issues and update the model accordingly. 
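
A minimal sketch of that kind of prediction logging might look like the following, assuming a scikit-learn-style model with numeric output and a logging setup that writes structured lines somewhere durable; the wrapper function and field names are hypothetical.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("model_monitoring")

def predict_and_log(model, features: dict, model_version: str):
    """Record each prediction together with its context (input, model version,
    timestamp) so deviations from expected behavior can be traced back later."""
    prediction = model.predict([list(features.values())])[0]
    logger.info(json.dumps({
        "prediction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "prediction": float(prediction),
    }))
    return prediction
```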

Model performance management also allows your team to easily test newer versions of a model. This kind of testing, also called live experimentation, can compare an updated model against an original model or compare multiple models at once. A/B testing or champion/challenger testing is live experimentation that involves only two models, where the potential replacement is the challenger. Multivariate testing refers to simultaneous testing of multiple models, where the model performance management system helps track developments and collect data to determine which model performs best.
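
For example, a champion/challenger split can be as simple as the sketch below, where a configurable share of live traffic goes to the challenger and each response records which variant served it; the routing function and the two model objects are hypothetical placeholders.

```python
import random

def route_request(features, champion, challenger, challenger_share: float = 0.1):
    """Champion/challenger (A/B) routing: send a small share of live traffic
    to the challenger and record which variant produced each prediction."""
    variant = "challenger" if random.random() < challenger_share else "champion"
    model = challenger if variant == "challenger" else champion
    prediction = model.predict([features])[0]
    # In a real MPM system this record would be logged for later comparison.
    return {"variant": variant, "prediction": prediction}
```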

What does an ideal MPM system provide?

An MPM system helps your team manage any machine learning models that your business relies on. It should be able to provide consistent feedback on how each model performs, provide transparency into the model’s processes, and enable model governance.

Requirements for MPM systems differ from team to team, but an ideal system will:

  • Track model version and training data and log prediction results and metadata
  • Automatically flag certain model behavior, like data drift, using rules that your team can configure (see the sketch after this list)
  • Replicate and compare past prediction results across different models
  • Make raw data and metadata about the model accessible to your team
  • Use Explainable AI to identify any hidden model bias
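
For the drift-flagging requirement above, a configurable rule can be as simple as comparing live feature statistics against a training-time baseline. The sketch below is illustrative only; the rule name, threshold, and data structures are hypothetical.

```python
import numpy as np

# Hypothetical, team-configurable rule: flag a feature when its live mean
# drifts more than `max_shift` standard deviations from the training baseline.
DRIFT_RULES = {"max_shift": 3.0}

def flag_data_drift(baseline: dict, live_values: dict, rules: dict = DRIFT_RULES) -> list[str]:
    """baseline maps feature name -> (mean, std) from training data;
    live_values maps feature name -> recent production values."""
    flagged = []
    for feature, (mean, std) in baseline.items():
        live_mean = float(np.mean(live_values[feature]))
        if std > 0 and abs(live_mean - mean) / std > rules["max_shift"]:
            flagged.append(feature)
    return flagged
```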

What tools are available to implement MPM systems?

A wide range of tools and platforms exist for implementing model performance management. Your choice will depend on your company’s needs and any pre-existing workflows or model infrastructure that you use. Here are some tips for thinking through the options.

Built-in cloud provider tools

If your team already relies on a cloud platform, it’s worth investigating the MPM tools that provider offers. Though they may not cover all of your organization’s needs, large cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) often have features or add-ons for model tracking and model monitoring.

Different cloud services will offer different features that are more or less suited to your company’s use case. Ultimately, it’s best to consider a few and evaluate them according to your specific needs.

Specialized MPM software

If cloud platforms don’t provide everything your company needs to implement an MPM framework, or if your business hosts everything on-premises, a full-scale MPM platform could be a solution. Some platforms are open source (such as MLflow and Seldon), while others are offered by SaaS companies like Fiddler.
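
As a small taste of what open-source tracking looks like, MLflow records a run’s parameters and metrics with a few calls; the tracking URI, experiment name, and values below are placeholders rather than a recommended setup.

```python
import mlflow

# Log a training run's parameters and evaluation metrics to an MLflow
# tracking server (the URI, experiment name, and values are placeholders).
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("churn-model")

with mlflow.start_run():
    mlflow.log_param("model_version", "v2")
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("validation_auc", 0.87)
```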

The type of MPM platform that your company chooses can depend on business concerns. If latency, scale, support, and ease of integration are priorities, a SaaS solution may be more efficient. However, if cost is top of mind and your MLOps team has the resources to manage the implementation, an open-source option might serve your needs better.

Custom platform

It might be a good idea to create a custom MPM platform if your company’s needs aren’t met by either your cloud provider or the specialized MPM platforms on the market. A bespoke solution can be built using open-source resources, such as the ELK stack (Elasticsearch, Logstash, and Kibana), some of which can be managed by a vendor instead of by your company if you prefer. 
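
For example, prediction logs could be shipped into Elasticsearch and explored in Kibana with something like the sketch below, which assumes the official Python client (8.x); the connection URL, index name, and document fields are placeholders.

```python
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

# Ship one prediction log entry into Elasticsearch so it can be explored
# and dashboarded in Kibana; the URL and index name are placeholders.
es = Elasticsearch("http://localhost:9200")

es.index(
    index="model-predictions",
    document={
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": "v2",
        "features": {"age": 42, "plan": "premium"},
        "prediction": 0.73,
    },
)
```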

If your team already has tools or services in place for model management that just isn’t end-to-end yet, building an MPM platform in house lets you build on what you already have and know. You’ll also be able to tailor the platform precisely to your company’s needs. However, your team will spend time building, testing, and maintaining the platform itself rather than focusing on model monitoring and developing workflows, and there are large upfront and long-term costs to be aware of. 

Conclusion

The ideal MPM platform provides transparency into your machine learning systems, and can be implemented in a variety of ways. To determine what tools or methods will work best for your business, it’s important to consider the features you’re looking for and evaluate your options carefully.