I received an interesting email from Dave Harrold with an article he just wrote. You may recall him from years of articles he wrote for Control Engineering magazine and many years before that with Team Emerson. He’s now enjoying retired life in Florida and keeps up with what’s happening in the world of artificial intelligence (AI). I enjoyed Dave’s article and wanted to share it with you.
By Dave Harrold
Sixty years ago, in an era of mainframes and slide rules, J.C.R. “Lick” Licklider of the Defense Advanced Research Projects Agency (DARPA) outlined a bold vision of human-computer symbiosis, a partnership in which humans and machines both benefit – a vision that remains largely unfulfilled.
Today, humans and intelligent machines work shoulder to shoulder, but it’s hard to claim that arrangement represents a true symbiosis – an environment, if you will, in which two completely different organisms exist to the mutual benefit of both.
Consider an Amazon fulfillment center, for example. While humans do the routine work of filling boxes, algorithms collect the insights on product popularity. That is not symbiosis, and neither is simply adding automation to a factory or workplace, because the machines gain little to nothing from the humans involved, and the humans gain only to the extent that increased productivity frees them to do equally or more interesting work.
But what about the world of online gaming, where an intelligent algorithm cooperating with a human player can outplay competitors – whether human players or other algorithms – playing alone? It’s happening thousands, even tens of thousands, of times each day.
But let’s take it one step further. What sort of benefit is gained when a “superintelligent” algorithm partners with a team of game players to compete against other teams? The algorithm can almost instantaneously advise its teammates on “on-the-fly” adjustments as previously “learned” competitors are added to or dropped from the opposing teams. That represents the symbiotic partnership “Lick” envisioned: machines learn and advise their human partners, while the humans provide the “meta-strategy” and judgment – managing time, deciding when to shift from formulaic rules to a more aggressive strategy, and exploiting opportunities only humans can recognize.
Several years ago, DARPA attempted to apply this teaming model to aviation, placing a “centaur” – a human mission commander paired with an automated assistant – into a cockpit that previously required two people. The critical insight was that the automation couldn’t replace the cognitive breadth of a human copilot dealing with uncertainty and ambiguity, but it could outperform humans at many tasks where they struggle, such as landing with a failed engine, where controlling the flight path to preserve every watt of energy counts – something once believed to be uniquely suited to humans.
Technically, the DARPA experiment was a success, but it also delivered a disappointing realization: the teaming approach is likely to work only in fields where small numbers of humans and machines share clearly defined tasks and objectives.
And there you have it … “a small number of humans (shift leads and operators) working in partnership with machines (transmitters, analyzers, final control elements) with clearly defined tasks and objectives.” A perfect description of every chemical, pharmaceutical, and oil and gas operation in the world.