Strategies to Reduce Unplanned Downtime and Improve Operations

by Jim Cahill | Feb 15, 2018 | Event, Services, Consulting & Training

Jim Cahill, Chief Blogger, Social Marketing Leader

At the 2018 ARC Industry Forum, ARC Advisory Group's Craig Resnick led a panel session, "Strategies to Reduce Downtime and Increase Plant KPIs." The description for this session was:

This session will discuss manufacturers’ biggest nemesis – unscheduled downtime – and strategies on how to reduce or eliminate it, subsequently increasing a plant’s key performance indicators (KPIs). Case studies will offer need-to-know guidance on “must haves” for OT leaders looking to maximize the ROI of their automation systems by eliminating any unscheduled downtime. Presentations will focus on learning to assess availability readiness of automation systems, as well as best practices to engage with IT decision makers. Learn how IT and OT convergence can align to deploy the best solution, one that fits into existing systems, supports standards including OPC, and delivers ROI, while laying the foundation to support modern technologies, such as virtualization and IIoT.

The first case study presented was a batch control application for a chemical production process. Ingredient additions were manually controlled, and the control system managed blending, mixing and movements through the production process. The control system hardware managing the process had become obsolete, and the human machine interface (HMI) was cluttered and did not use color effectively, as today's best practices recommend.

The justification to modernize came from a customer demanding more automation to improve product quality. There was also considerable deadtime in the process, and additional throughput would increase revenue. Upon receiving approval, the project team took a phased approach to modernizing the batch control system and HMI. The team included corporate engineering, local plant subject matter experts and the control system supplier.

The next presenter shared a case study on virtualizing their distributed control system (DCS). The plant was an oil production and refining process.

The virtualized servers could support up to 10 virtual machines (VMs) per physical server, and all the physical servers were fully redundant. The site saw more efficient upgrades of the DCS software. Backup and restore were also simplified through periodic snapshots of the VMs, and VMs booted much more quickly than physical servers. The physical footprint and HVAC requirements were significantly reduced. Troubleshooting was simplified as well, since the state of a VM could be shared with the DCS supplier to recreate and resolve issues.

The plant had DCS servers and workstations failing at an excessive rate, which helped justify the virtualization project. Virtualization allowed operator stations to be upgraded on the fly, and redundancy allowed testing and verification before switching online.

The results were reduced maintenance, reduced unplanned downtime and reduced impact of obsolescence.
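The backup-and-restore benefit of periodic snapshots comes down to a rolling rotation policy: take a snapshot on a fixed interval and keep only the most recent few as restore points. A minimal sketch of that logic (generic Python for illustration only, not the site's actual DCS or hypervisor tooling; `snapshot_due` and `plan_snapshot_rotation` are hypothetical helpers):

```python
from datetime import datetime, timedelta

def snapshot_due(last_snapshot, now, interval=timedelta(hours=24)):
    """True when the periodic interval has elapsed since the last snapshot."""
    return last_snapshot is None or now - last_snapshot >= interval

def plan_snapshot_rotation(snapshot_times, keep_last=5):
    """Split snapshot timestamps into those to keep (newest N) and those to prune."""
    ordered = sorted(snapshot_times, reverse=True)
    return ordered[:keep_last], ordered[keep_last:]

# Example: seven daily snapshots of a VM, retaining the five most recent.
days = [datetime(2018, 2, d) for d in range(1, 8)]
keep, prune = plan_snapshot_rotation(days, keep_last=5)
```

In practice the snapshot creation and deletion calls would go through the hypervisor's own management interface; the value for operations is that any retained snapshot is a known-good restore point for the VM.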

The next speaker described their chemical production process and its state before modernization. Data collection was disjointed, cumbersome and not very effective. Operations were manual, relying on deep operator knowledge and experience, and processing was largely reactive to changing customer demands. The project team was able to justify a supervisory control and data acquisition (SCADA) project.

The control system ran on a fault-tolerant server hosting virtual machines for development, I/O servers, visualization servers, an MES server, a web server, historians and recipes. The results included a 65% reduction in scrap and increased overall throughput.

The next presenter described their prior "SCADA system" as Excel spreadsheets and check sheets with data scattered everywhere. They consolidated the data from all their plants into a modern SCADA system and decided to measure everything from machine downtime to slicer settings, conveyor speeds, cooking parameters and more. From the data, they could determine optimum speeds for running the production process, where before it had been left to the shift operations staff to determine, with wide variability between shifts.
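Once run data from every shift lands in one place, finding an optimum speed can be as simple as aggregating good output and scrap per speed setting and picking the best yield. A rough sketch of that idea (the data shape and the `optimum_speed` helper are illustrative assumptions, not the plant's actual system):

```python
from collections import defaultdict

def optimum_speed(runs):
    """runs: (speed, good_units, scrap_units) tuples collected per production run.
    Returns the speed setting with the highest good-unit yield overall."""
    totals = defaultdict(lambda: [0, 0])  # speed -> [good, scrap]
    for speed, good, scrap in runs:
        totals[speed][0] += good
        totals[speed][1] += scrap
    return max(totals, key=lambda s: totals[s][0] / sum(totals[s]))

# Example: three shifts ran the line at different speeds with different results.
runs = [(100, 900, 100), (120, 1000, 200), (110, 980, 20)]
best = optimum_speed(runs)  # 110, the speed with the highest yield (98%)
```

The point is not the arithmetic but the consistency: with the analysis centralized, every shift runs at the data-derived speed instead of each crew's own judgment.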

The common thread in these stories was the impact of data: analyzing it revealed ways to improve existing operations and drive business results.
