Look around and you’ll very quickly discover just how rapidly the world is changing. Electric vehicles—including those used for mass transportation—seem to be around every corner. People communicate and entertain themselves using a wide array of electronic mobile devices. Data centers are popping up around the globe to power AI engines that are reshaping our world. And all these changes have one thing in common: they all draw enormous amounts of energy.
Meeting this rapidly increasing demand has put intense pressure on power producers. At the same time that they need to run their operations as efficiently as possible, they are facing a shortage of skilled workers and the industry expertise that goes with them. Qualified people are hard to find, and that scarcity compounds the strain.
However, as Rick Kephart explores in his recent article in Power magazine, technology is stepping up to fill the gap.
“Artificial intelligence (AI) is the next evolution of assistive technology, built on a foundation of ML, to help organizations accomplish rapid, productive change in their operations and maintenance. Today, AI is impossible to miss. It is in the headlines, mobile devices, the art studio, and enterprise business systems. However, one place AI technologies are struggling to take hold is in the control room—or, more precisely, the control layer.”
True, many organizations are still hesitant to incorporate AI into their control systems. The technology is unquestionably still evolving. But GenAI will one day impact process control, even if we don’t know exactly what those solutions will look like. Companies that want to capture competitive advantage and position themselves to support the grid of the future need to start preparing now, so they can capitalize on those changes as they arrive.
Confronting the challenges
For all its potential, the adoption of GenAI in control will not be easy. After all, most of today’s large GenAI models are cloud-based and public. Typically, only the processing power of the cloud can manage large language models (LLMs), which are not only massive to begin with but also need nearly unlimited scalability to perform their tasks.
The cloud always gives OT teams pause. Rick explains,
“There are a couple core reasons for this hesitancy. First, the cloud, regardless of cybersecurity protocols, is a shared space. Connecting the control system and its operational data to a public space immediately puts an organization’s intellectual property at risk, along with its continuity of operation for critical systems. While unlikely on the most secure platforms, data breaches are possible on absolutely any shared platform, making the risk far too high for most operations teams to tolerate.
In addition, North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) regulations are strict and challenging to follow even on a local level. While it is likely possible to create a NERC CIP compliant connection to a public GenAI model, the complexity, risk, and limitations would make the process more cumbersome and costly than it is worth.”
There is another way
But here’s the thing: just because cloud-based LLMs are the standard today doesn’t mean they are the only way to bring GenAI to the plant.
“While GenAI grew up on a single, large, open AI model, that is not the only option.”
Today’s LLMs are massive because they encompass nearly everything. Whether you want to know how to better tune your gas turbine or which zones are best for planting hydrangeas, cloud-based GenAI will have an opinion, and therefore an answer. But plant personnel don’t need to know where to plant hydrangeas, and they may not even need generic turbine tuning details that aren’t specific to their own equipment. That’s where smaller, on-premises LLMs come in.
“If teams are willing to narrow the scope of their individual AI engines, they will likely be able to run many smaller models on local platforms right at the control layer. These types of GenAI models can capture qualitative behavior to augment controls. This capability will likely be most useful during abnormal conditions where there may be multiple failures that potentially cause the base control system to have difficulty maintaining control.”
Ready to go
Smaller LLM software can be ready out of the box to serve power-specific OT applications, and can then be further customized using a plant’s own asset information. These models can be deployed locally to provide GenAI capability tailored to a site’s unique configuration and, more importantly, secured against intrusion. They will be simpler to manage and maintain, while also carrying more guardrails, making their results more trustworthy and transparent.
Even today, power producers can get started working with GenAI in a safe way using software like the Ovation™ AI-Enabled Virtual Advisor—the first GenAI advisor integrated into an automation system specifically designed for the power and water industries.
Nobody can predict exactly what the future of AI will look like. We do know, however, that it will change the OT space across nearly every industry, and that it has the potential to deliver unprecedented performance and efficiency if used correctly. Getting started today builds the foundation that will drive results as the technology matures.