The term Predictive Analytics has hit the marketplace with great force, but what does it mean? Making greater use of existing data has become a necessity in all industries as we strive for more efficient operations and maintenance.
The traditional one-signal / one-alarm approach is a great way to flag a problem once it has occurred. Predictive analytics, by contrast, combines multiple data signals with knowledge of how the machine, and similar machines, behave under normal operating conditions to identify abnormal states. Unlike control monitoring and alarm systems, a predictive analytics model uses trend data and multiple signals to foresee a probable outcome. These analytics have had great success in identifying abnormal conditions when compared against a like machine or against previous data under the same operating conditions. This enables the prediction of failures, based on identifying an abnormality in the state of a system before it becomes a failure.
The prediction of a failure begins with the identification of the failure. Standard FMEA practices are a crucial part of writing an algorithm. You must start with the end in mind; in other words, what failure are you trying to detect? Once this is identified, map it to the signals or data streams that are available to identify the state before that failure mode occurs.
For example, a heat exchanger may have inlet and outlet temperature sensors, which give a temperature differential at any given time. Taking it a step further, you can integrate system load, or overall heat load, to establish the normal temperature at any given load. Once that baseline is established, the abnormal condition can be detected. One possible indicator is a low inlet temperature at a specific load value, which may signal too much coolant flow for the optimal operating temperature range; this can be extremely critical in cold-weather environments or chilled cooling loops. The inverse applies to insufficient coolant flow, which could point to a valve lineup issue or a clogged exchanger. The detection can be narrowed further by the differential temperature across the heat exchanger: a normal temperature drop points to a valve lineup issue, whereas an abnormally low temperature drop points to a clogged heat exchanger. Once again, load remains a factor in these examples.
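The checks described above can be sketched in a few lines of code. This is a minimal illustration only; the function name, load banding, and baseline structure are assumptions, and real limits would come from baselining the specific exchanger.

```python
# Minimal sketch of the heat-exchanger checks described above.
# All names, thresholds, and load bands are hypothetical placeholders.

def classify_exchanger(t_in, t_out, load, baseline):
    """Classify a heat-exchanger reading against a load-based baseline.

    baseline maps a load band to expected (inlet temp, delta-T) ranges,
    e.g. {"high": {"t_in": (60, 80), "delta_t": (10, 15)}, ...}.
    """
    band = "high" if load > 0.7 else "low"   # crude load banding for illustration
    expected = baseline[band]
    delta_t = t_in - t_out
    in_lo, in_hi = expected["t_in"]
    dt_lo, _ = expected["delta_t"]

    if t_in < in_lo:
        return "low inlet temp: possible excess coolant flow"
    if t_in > in_hi:
        # Not enough cooling; use delta-T across the exchanger to narrow the cause.
        if delta_t < dt_lo:
            return "high inlet temp, low delta-T: possible clogged exchanger"
        return "high inlet temp, normal delta-T: possible valve lineup issue"
    return "normal"
```

A model like this would run continuously against live signals, with the load check keeping a normal high-load reading from being flagged as abnormal.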
Once an issue has been identified, the “repair” work can be completed off the critical path of operations. The repair can be scheduled for a planned maintenance window, or for an operation that does not require that piece of equipment.
By detecting the failure prior to catastrophe, the repair becomes less intensive. In the example above, it could mean the difference between a light cleaning of the heat exchanger and an acid clean or full replacement. This inherently reduces both the cost of the repair and the time to complete it.
Identifying the need for this technology is crucial to getting buy-in from the right stakeholders. Implementing this kind of program is not a single-beneficiary sell. Think about how your operations affect the business. Which departments are affected by breakdowns? What is the cost of non-production? Looking beyond the monetary cost, what are the risks associated with equipment repairs? Consider reputation, late deliveries, and morale. It is key to include all stakeholders in the construction of the business case; each department has a stake in effective operations.
With today’s technology, there are very few equipment operating parameters that are not already being monitored. Additional sensors may be required to detect very specific critical failure modes, but there are gains to be made with the existing data.
The first step is considering how each failure mode affects operations and what data is available to detect it. Following the guidelines of a Failure Modes and Effects Analysis (FMEA) is a good way to document this process and drive ideas from a cross-functional team. Once the signals are defined and the relationships between them are established, the expected reactions to the failure should be identified. A software developer or programmer will be required to do the actual coding.
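One way to hand the FMEA output to a developer is as a simple failure-mode-to-signal map. The sketch below is illustrative only; the failure modes, signal names, and detection descriptions are placeholder assumptions, not a prescribed schema.

```python
# Hedged sketch: documenting FMEA output as a failure-mode -> signal map.
# Entries are illustrative placeholders, not real plant data.

FMEA_MAP = [
    {
        "failure_mode": "clogged heat exchanger",
        "effect": "loss of cooling capacity",
        "signals": ["inlet_temp", "outlet_temp", "system_load"],
        "detection": "low delta-T across the exchanger at a given load",
        "expected_reaction": "schedule cleaning in next maintenance window",
    },
    {
        "failure_mode": "coolant valve lineup error",
        "effect": "over- or under-cooling",
        "signals": ["inlet_temp", "system_load"],
        "detection": "inlet temp outside the normal band for the load",
        "expected_reaction": "verify valve positions",
    },
]

def signals_for(failure_mode):
    """Return the data streams mapped to a given failure mode."""
    for entry in FMEA_MAP:
        if entry["failure_mode"] == failure_mode:
            return entry["signals"]
    return []
```

A table like this keeps the cross-functional team's reasoning attached to the signals the programmer will actually code against.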
Using simple code to auto-record data is another great application of this technology. How many hours does your plant spend recording data that is already being displayed, or that simple logic could calculate? Developing these communication paths and simple algorithms is a quick win. Auto-recording items like running hours, cycle counts, and cylinder strokes is a great way to achieve greater data accuracy, eliminating estimation by an operator or maintenance technician making rounds. It is also a critical component of evolving from planned maintenance to a condition-based maintenance routine.
Operations performance also becomes more visible. In most cases, existing automation data can be used to calculate operations Key Performance Indicators (KPIs) and to map the processes that make up your operations. With this detail, it becomes clear where process improvements can be made to achieve production goals.
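As one concrete example (the text does not name a specific KPI, so this is an illustrative choice), Overall Equipment Effectiveness (OEE) can be computed directly from the kind of automation data already discussed: run time, cycle times, and part counts.

```python
# Illustrative KPI from automation data: Overall Equipment Effectiveness.
# OEE = availability * performance * quality; parameter names are assumptions.

def oee(planned_time_s, run_time_s, ideal_cycle_s, total_count, good_count):
    """Compute OEE from planned time, actual run time, ideal cycle time,
    total parts produced, and good parts produced."""
    availability = run_time_s / planned_time_s          # uptime vs. plan
    performance = (ideal_cycle_s * total_count) / run_time_s  # speed vs. ideal
    quality = good_count / total_count                  # yield
    return availability * performance * quality
```

Every input here is the kind of value the auto-recording described above can capture without manual data entry.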
If dataloggers are available, they are a great source for learning from past mistakes, and their records are free material for testing new algorithms. Historical data can not only define the normal state, but also set production baselines and reveal indicators of past failures.
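Defining the normal state from historical records can be as simple as a statistical band. The sketch below uses a mean plus-or-minus three standard deviations; that threshold, like the function names, is an assumption for illustration, and a real program would tune it per signal and load condition.

```python
# Sketch: defining a "normal" band from historical datalogger readings
# and flagging deviations. The 3-sigma threshold is an assumption.

import statistics

def build_baseline(history):
    """Compute a normal band (mean +/- 3 std dev) from historical readings."""
    mean = statistics.fmean(history)
    std = statistics.stdev(history)
    return (mean - 3 * std, mean + 3 * std)

def is_abnormal(value, band):
    lo, hi = band
    return not (lo <= value <= hi)
```

Replaying logged data from before a known failure through logic like this is a free way to test whether an algorithm would have caught it.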
Traditional alarms have been crucial to the identification of failures. Improvements in processor capability and software intelligence have enabled far better utilization of our data. Each department has a stake in the efficient operation of an organization. Implementing a predictive analytics program can seem overwhelming, but with the organization’s buy-in, support, and a structured approach, these programs can deliver a positive return on investment without major production disruptions.