How Long Will Human in the Loop Decision Making Remain Responsible?

Ryan den Rooijen

Fatal autonomous vehicle crashes have consistently made front-page news. Journalists have mused whether we have become too complacent in accepting the capabilities of automated systems, or whether our obsession with Artificial Intelligence has led us to accept certain risks without question. The crash of Ethiopian Airlines Flight 302, seemingly caused by Boeing’s MCAS system, underscored this point. There has even been a case of a human in the loop preventing nuclear war, when Stanislav Petrov judged a 1983 Soviet missile alert to be a false alarm. If this is the case, should we be putting more focus on human involvement in AI processes, to improve overall outcomes?

Paradoxically, the answer might be yes, but not in the long run. Right now we find ourselves in a transition period, where we are just starting to see the application of AI across different domains. In many cases human in the loop is the gold standard, with AI giving doctors a greater capability to interpret pulmonary function tests, for example. How long until the balance tips the other way, however? As AI improves and is embedded into more products and processes, at what point do we become fully reliant on AI, or, more importantly, when does human involvement become irresponsible?

Even a trained pilot cannot land these without the help of a flight computer. Photo by Airwolfhound.

One challenge we face is the speed or frequency of decision making required in certain scenarios. The Lockheed Martin F-35 Lightning II stealth aircraft, which due to its aerodynamics suffers from extreme instability, requires constant fly-by-wire assistance simply to stay in the air, let alone to land. Similarly, more complex data, such as IoT telemetry from industrial operations, can demand superhuman processing capacity before a decision is even possible. Navigating a vehicle on a congested road can likewise require reactions faster than a human can reliably provide.
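To make the timing argument concrete, here is a minimal sketch comparing a typical human reaction time against the decision budgets of a few scenarios. The figures are assumed, order-of-magnitude illustrations, not published specifications.

```python
# Illustrative sketch: can a human fit inside the decision budget?
# All numbers below are assumptions for illustration only.

HUMAN_REACTION_S = 0.25  # assumed median human reaction time (~250 ms)

SCENARIOS = {
    "fly-by-wire stability correction": 0.01,    # assumed ~100 Hz control loop
    "emergency braking on a congested road": 0.1,  # assumed ~100 ms budget
    "quarterly maintenance planning": 86_400.0,  # a day or more is acceptable
}

for name, budget_s in SCENARIOS.items():
    feasible = budget_s >= HUMAN_REACTION_S
    print(f"{name}: decision budget {budget_s:g}s -> "
          f"{'human in the loop is feasible' if feasible else 'must be automated'}")
```

The point is not the exact numbers but the gap: when the required decision rate sits well below human reaction time, a human in the loop is not a safeguard but a bottleneck.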

Humans are also prone to bias, which can operate on different levels. We readily succumb to irrational tendencies, including an over-reliance on our emotions and a preference for the familiar. This is part of being human. Not only does this lead to challenges in developing representative models, but it can also result in a human in the loop overriding a correct AI output because it feels counter-intuitive. It is important to note this has nothing to do with intentions; as Richard Danzig observed: “Error is as important as malice”.

Note that in this example we are not considering the role of humans in training and validating models, which will remain necessary until we develop a true artificial general intelligence. Being able to constantly tune and update models is essential, particularly to counter concept drift. Given that these training loops are asynchronous (a human does not need to respond immediately when a model requires updated training data), the resulting risk to the overall process is significantly lower as well.
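As a rough illustration of that asynchrony, the sketch below separates serving from retraining: predictions are returned immediately from the current model, while human corrections accumulate in a queue that a periodic job folds back into training. The model, queue, and function names are hypothetical placeholders rather than a reference to any particular system.

```python
# Minimal sketch of an asynchronous human-in-the-loop training cycle.
# Predictions never block on a human; reviewers respond on their own schedule.

from collections import deque

review_queue = deque()   # human-labelled corrections, handled off the critical path
model_version = 1

def predict(features):
    """Serve a decision now, from whatever model is currently deployed."""
    return {"decision": sum(features) > 0, "model_version": model_version}

def submit_correction(features, true_label):
    """A reviewer can respond hours or days later without delaying predictions."""
    review_queue.append((features, true_label))

def retrain_if_needed(batch_size=100):
    """Periodic job: fold accumulated corrections back in to counter concept drift."""
    global model_version
    if len(review_queue) >= batch_size:
        batch = [review_queue.popleft() for _ in range(batch_size)]
        # ... fit or update the model on `batch` here ...
        model_version += 1
```

Because the human contribution lives entirely in `submit_correction` and `retrain_if_needed`, a slow or absent reviewer degrades model freshness, not the availability of individual decisions.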

Can AI be considered a good driver? Most humans certainly cannot. Photo by The Atmospheric Fund.

The jury is still out on whether autonomous vehicles are currently safer than the alternative. However, arguing about this is rather myopic. While traffic accidents have been trending down slightly, the U.S. Department of Transportation noted: “Dangerous actions such as speeding, distracted driving, and driving under the influence are still putting many Americans, their families and those they share the road with at risk.” What is clear is that autonomous vehicles will be able to mitigate these factors greatly, especially as the underlying technology improves over time and is trained on more data.

At the end of the day, as with any new technology introduced into a product or organisation, it is critical to understand not just where you are going, but where you have come from. We are heading towards an inevitable turning point where, whether in driving, diagnosing, or development, AI will outperform humans to such an extent that human involvement in decision-making processes becomes detrimental, either through a lack of scalability or a lack of accuracy. Our job is managing that transition and its associated risks.

— Ryan
