
The ABCs of Closed Loop AI Optimization

Mar 8, 2024

And why it’s gaining traction compared to legacy real-time optimization tools

By Allison Buenemann, Product Marketing Manager at Imubit

One of the questions closed loop AI optimization providers hear most often is, “How does this differ from existing real-time optimization tools?” So, the team here at Imubit decided to make the answer simple and readily available to the masses.

It all starts with YOUR real-life data

One of the craziest ideas to grasp when first introduced to AI-based optimization strategies is that, unlike legacy real-time optimization tools, a complete, robust process simulation is not a prerequisite. Instead of relying on an underlying simulation model built on first principles or a table of manually specified vectors, a deep learning neural network is constructed to model current and future operation based on years’ worth of actual operating data from your historian and lab systems. This means the model is unique to your plant and all of its configurational, environmental, and conditional nuances.
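To make that concrete, here’s a minimal sketch of the data-first idea. Everything in it (the file path, tag names, and model size) is a hypothetical stand-in rather than Imubit’s actual implementation; the point is simply that the model is fit to your historian data instead of being derived from a simulation.

```python
# A minimal sketch of a data-first process model: fit a neural network
# to historical operating data rather than to a first-principles
# simulation. The file path, tag names, and model size are hypothetical.
import pandas as pd
import torch
import torch.nn as nn

# Years of historian/lab data exported to a flat file (hypothetical).
df = pd.read_csv("historian_export.csv")
inputs = ["feed_rate", "reactor_temp", "pressure", "ambient_temp"]
target = "product_quality"

X = torch.tensor(df[inputs].values, dtype=torch.float32)
y = torch.tensor(df[[target]].values, dtype=torch.float32)

# A small feed-forward network standing in for the deep learning model.
model = nn.Sequential(
    nn.Linear(len(inputs), 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```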

First principles are incorporated into these data-first models as domain experts evaluate results and retrain the model on their real-world experience. This ensures the irrefutable laws of math and physics aren’t broken while reducing the number of limiting assumptions imposed on the model.
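One common way to blend first principles into a data-driven model (a sketch of the general technique, not necessarily how Imubit does it) is to penalize predictions that violate a known physical law during training. Here the hypothetical rule is a simple mass balance: predicted product flow can never exceed measured feed flow.

```python
# A hedged illustration of folding first principles into a data-first
# model: add a loss penalty when predictions break a known physical law.
import torch

def physics_informed_loss(pred_product_flow, actual, feed_flow, weight=10.0):
    # Standard data-fit term: match historical measurements.
    data_loss = torch.mean((pred_product_flow - actual) ** 2)
    # Physics term: penalize any prediction above the feed flow
    # (conservation of mass -- a hypothetical example constraint).
    violation = torch.clamp(pred_product_flow - feed_flow, min=0.0)
    physics_loss = torch.mean(violation ** 2)
    return data_loss + weight * physics_loss
```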

Then we bring in the reinforcements

When we’re satisfied with the behavior of the deep learning neural network model under a multitude of test scenarios, it’s time to move on to training the optimizing controller. This is where reinforcement learning comes in. Note: if you take anything away from this section, let it be that offline reinforcement learning enables the controller to make real-time control decisions with NO REAL-TIME SOLVER. Crazy, right?
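In case that sounds abstract, here’s what “no real-time solver” means in practice: once trained, the controller is just a function from the current state to a control move, evaluated in a single forward pass. This is a hedged sketch with hypothetical file names, shapes, and state variables; the training step it assumes is sketched further below.

```python
import torch
import torch.nn as nn

# Rebuild the controller architecture and load weights trained offline
# (hypothetical architecture and file name).
policy = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 2))
policy.load_state_dict(torch.load("policy_weights.pt"))
policy.eval()

# Current process state: four readings plus a price and a constraint limit
# (hypothetical values).
state = torch.tensor([[350.0, 12.4, 0.87, 41.0, 62.5, 14.0]])

with torch.no_grad():
    action = policy(state)  # one forward pass; no iterative solver online
```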

The Achilles’ heel of a real-time solver is the assumption of a perfectly accurate underlying prediction. This assumption, made to ensure solution convergence, can produce unexpected or erroneous results in scenarios of model mismatch, where the model and reality diverge.

Reinforcement learning teaches the controller optimal decision-making strategies rather than pattern recognition. It learns through trial and error in an offline cloud environment, iterating through hundreds of millions of scenarios that represent unique combinations of process conditions, pricing, and constraints. The controller learns how best to behave under every historical scenario and how to extrapolate beyond them, responding to an unknown situation with the equivalent of thousands of years of experience as a plant operator on this exact unit.
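Here’s a hedged sketch of what offline training against the learned plant model can look like. The actual reinforcement learning algorithm is Imubit’s own; this stand-in back-propagates an economic reward through a differentiable plant model, but it captures the same idea: the controller practices across large numbers of sampled scenarios (conditions, prices, constraints) entirely offline.

```python
import torch
import torch.nn as nn

# Stand-in for the deep learning plant model trained on historian data;
# it stays frozen while the controller (policy) trains against it.
plant_model = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 2))
for p in plant_model.parameters():
    p.requires_grad_(False)

policy = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

for step in range(100_000):  # production training spans far more scenarios
    # Sample a batch of scenarios: conditions, price, constraint limit.
    conditions = torch.randn(256, 4)
    price = torch.rand(256, 1) * 100.0
    limit = torch.rand(256, 1) * 5.0 + 10.0

    state = torch.cat([conditions, price, limit], dim=1)
    action = policy(state)

    # The plant model predicts the outcome of taking this action.
    outcome = plant_model(torch.cat([conditions, action], dim=1))
    product, constrained_var = outcome[:, :1], outcome[:, 1:2]

    # Reward = economic value minus a penalty for constraint violations.
    profit = price * product
    violation = torch.clamp(constrained_var - limit, min=0.0)
    loss = (-profit + 100.0 * violation ** 2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```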

Learning to trust the AI

I think we can all agree that “I don’t know, the AI did it” is probably not the response a plant manager wants to hear following a quality or regulatory upset. While there’s no requirement for operators to understand the mathematics involved in constructing a deep learning neural network, they are requisite members of the model evaluation and training team. Establishing this credibility, understanding, and trust of the underlying process model across the rest of the site team is a critical component of a controller implementation training program.

Because the deep learning neural network used to train the controller is completely offline, it can be a great playground to see how the controller would respond to a variety of disturbances or changes to constraints. A platform designed with all levels of domain experts in mind (rather than exclusively APC/RTO experts) provides a point-and-click environment for operators and engineers to test predictions. Exploring different scenarios this way can help build trust among engineering and operations teams that the controller will take an appropriate action in response to a disturbance they’re familiar with.

Imubit’s manipulation functionality lets users test different what-if scenarios to validate that the model responds as expected.
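Under the hood, a what-if test boils down to perturbing an input and reading off the model’s predicted response before anything touches the plant. A minimal sketch, with hypothetical tag names and values, reusing the historian-trained model from the first sketch:

```python
import torch

tags = ("feed_rate", "reactor_temp", "pressure", "ambient_temp")
baseline = {"feed_rate": 120.0, "reactor_temp": 350.0,
            "pressure": 12.4, "ambient_temp": 41.0}

# "What if feed rate jumps by 12%?" -- a disturbance operators know well.
scenario = dict(baseline, feed_rate=135.0)

x = torch.tensor([[scenario[t] for t in tags]])
with torch.no_grad():
    # `model` is the historian-trained network from the first sketch.
    print("predicted quality:", model(x).item())
```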

Timelines to consider

Comparing the implementation timelines of a closed loop AI optimization strategy with traditional real-time optimization comes down to two rate-limiting steps: people and prerequisites. Traditional approaches, which require prerequisite APC infrastructure and are built and implemented by APC/RTO experts, typically run 18-24 months. Modern approaches that rely on data and domain expertise can be operational in less than six months.

Let’s get technical

Want to learn more about the nuts and bolts that make this form of closed loop AI advantageous to traditional real-time optimizers? Check out our on-demand webinar on the advantages of true AI versus hybrid models.
