Author: Dennis Tkacs
Why do computer-driven systems fail? It’s a subject I want to develop over the next few blog posts. And by fail I don’t mean technically, but rather why do they fail to meet the business cases under which their procurement was initially justified? There are multiple reasons, and none has to do with technology. Rather, the answer lies in governance, and to fully understand the implications we have to start with the evolution of computer and control systems.
Place a DCS [distributed control system] and an IT system at a distance and it’s difficult to tell the two apart. Both have made extensive use of commercial off-the-shelf technology that includes computing platforms, operating systems, displays, and networks. True, the DCS incorporates specialized components such as controllers and I/O that are unique to its specific mission of controlling a process plant but, by and large, both leverage common technologies. There are differences though, unseen differences that are embedded in the governance policies under which each is procured, implemented, and maintained; policies that are deeply rooted in the past, when control and IT systems had little in common.
Control systems are descended from hardware-centric loop controllers that were densely mounted on panels and desks in a control room. Given this hardware centricity, inflexibility, and high installation cost, such systems were expected to last 15-20 years, and well they could. But then things changed.
With the advent of the DCS in the late 1970s, single-loop controllers morphed into faceplate displays on a monitor and algorithms executing in a remote (distributed) multi-loop rack, linked together via a proprietary data highway. Functions that had previously been accomplished in hardware were implemented in software. Signals that had moved over dedicated wires now used serial communication.
Still, the specific needs of control applications and IT platforms resulted in separate tracks. Early PCs were deemed unreliable and unstable, and minicomputers too expensive, so DCS providers developed a variety of board-level components based on industrial backplanes, proprietary networks, and database kernels through which to realize the functions of a distributed control system.
The ceaseless advance of technology saw chip densities increase exponentially, making personal computers powerful enough – and stable enough – to make their way out of the office and into the plant. Similarly, networks morphed from proprietary designs to commercially available technologies such as Ethernet.
Incubated in IT, PC and network standards evolved, usage increased, and costs nosedived. Driven by both capability and cost factors, IT technology quickly found its way into control systems. One might say that there was convergence of IT and control technologies. From this point there would be no returning to the 15-20 year life expectancy of the old limited capability, panel mounted control system.
Obsolescence, driven by Moore’s Law, was now a significant consideration. The move to commercial platforms meant that every 18 months or so computer technology turned over, bringing higher speeds and enhanced capabilities – this change had to be accommodated.
Questions had to be asked: On what basis do I select a control system? How often do I upgrade? How do I exploit ever increasing capabilities? And eventually, how do I secure my control system from cyber security threats?
I would offer that the convergence of IT and control systems has stratified computing resources into three layers:
- Enterprise-level computing on the top layer
- Embedded systems on the bottom layer
- Operational systems in the middle
I used the term operational systems in my doctoral research to define a class of system that uses enterprise IT technologies and infrastructure but runs highly specialized applications far beyond most found at the enterprise level. The boundaries between these layers are increasingly blurred, mandating different approaches to how each must be managed. This we take up in the next post.