Will Generative AI Transform Our World? Not So Fast

Over the past few months, everyone from investors and journalists to executives and, increasingly, consumers has found their attention drawn to the latest generation of large language models (LLMs). Among these, OpenAI's GPT has emerged as the most prominent. This fascination is largely attributable to these models' astounding ability to generate human-like content and answers, sparking widespread conjecture about the potential implications for our societies.

Many theorists have ventured to predict how this development might influence the employment sector. Will it usher in an era of superhuman productivity, or will it lead to unemployment as people find their roles usurped by AI? There is a sense of anticipation, as if Pandora's box has been opened and we are now holding our breath, awaiting the inevitable fallout. I would argue that while change is inescapable, its advent is likely to be subtle rather than explosive.

Firstly, despite the ease of hypothesising about potential applications – ranging from copywriting and coding to chat applications and virtual assistants – the actual implementation proves to be far more challenging. The initial hurdle in digital transformation is gaining a thorough understanding of the process that one intends to revolutionise (Bughin et al., 2018). For instance, 'automating customer care' may sound simple on the surface, but it hides a myriad of complex and diverse processes that few organisations have readily mapped.

Secondly, most organisations are understandably hesitant to fully commit to generative AI due to the scarcity of successful, large-scale implementations outside the tech industry. This not only implies an absence of established best practices, but also creates uncertainty around the return on investment despite the conviction found in press releases. Any organisation-scale transformation effort requires investment and sponsorship – which in turn require confidence.

While it gets people's attention, explosive change yields fallout. Photo by Caitlin Wynne.

Thirdly, there is reputational risk, a significant deterrent. Few organisations wish to be seen as leading the charge in large-scale job displacement through automation. Consequently, the impact on current employees might be mitigated, leading to an 'augmented intelligence' model rather than a direct replacement. In contrast, future hiring will likely factor in whether a task truly demands human intervention or whether it can be primarily managed by artificial intelligence.

Fourthly, there is the matter of engineering talent required to integrate these services into the enterprise application landscape. While some users might be satisfied with the available web and chat interfaces, more sophisticated configurations necessitate API integrations. However, given that routine business demands often consume the majority of resources, developers will struggle to allocate the bandwidth needed to incorporate LLMs into their architectures.
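To make the integration effort concrete, here is a minimal sketch of what even the simplest API wiring looks like. The endpoint URL, model name, payload fields, and response shape are all illustrative assumptions modelled on common chat-completion APIs, not any vendor's documented contract; a production integration would add retries, rate-limit handling, logging, and cost tracking on top.

```python
import json
import urllib.request

# Hypothetical endpoint -- a stand-in, not any vendor's real API.
API_URL = "https://llm.example.com/v1/chat/completions"


def build_chat_payload(prompt: str, model: str = "example-model") -> dict:
    """Assemble the JSON body for a chat-style completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask_llm(prompt: str, api_key: str, timeout: float = 30.0) -> str:
    """Send a prompt to the (hypothetical) endpoint and return the reply.

    This is only the happy path; the engineering bandwidth the post refers
    to goes into everything around it: retries, auth rotation, monitoring.
    """
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request, timeout=timeout) as response:
        data = json.load(response)
    # Assumed response shape, mirroring common chat-completion APIs.
    return data["choices"][0]["message"]["content"]
```

Even this toy version already touches authentication, serialisation, and timeouts; fitting it into an existing enterprise architecture is where the real work begins.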

Finally, on a technical front, most LLMs presently lack the domain-specific tuning needed to provide high-quality responses. While there are exceptions, such as BloombergGPT tuned for finance applications, these remain the exception rather than the rule. This deficiency, coupled with the models' tendency to assert subtle untruths with unwarranted confidence, significantly narrows the scope for immediate implementation. At the very least, it demands a 'human in the loop' approach that will limit the pace of deployment.
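The 'human in the loop' pattern can be sketched in a few lines: every model draft passes through a reviewer, who edits, approves, or rejects it before anything is released. The function and stub names below are illustrative, not part of any real system, and the stubs merely stand in for a model call and a human reviewer.

```python
def human_in_the_loop(prompts, generate, review):
    """Route every model draft through a reviewer before release.

    `generate` stands in for the model call; `review` returns the approved
    (possibly edited) text, or None to reject. Only approved drafts ship.
    """
    published = []
    for prompt in prompts:
        draft = generate(prompt)
        approved = review(prompt, draft)  # a human edits or rejects here
        if approved is not None:
            published.append(approved)
    return published


# Stub model and reviewer, just to illustrate the flow.
def fake_model(prompt):
    return f"DRAFT reply to: {prompt}"

def fake_reviewer(prompt, draft):
    # Reject anything touching refunds; approve the rest with an edit.
    return None if "refund" in prompt else draft.replace("DRAFT ", "")

out = human_in_the_loop(
    ["reset password", "refund order 42"], fake_model, fake_reviewer
)
```

The reviewer is the throttle: throughput is capped by how fast humans can check drafts, which is precisely why this safeguard limits the pace of deployment.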

Regardless of one's views on the cognitive revolution, it is evident that we find ourselves at the beginning of this journey. While developments in generative AI will reshape the economic landscape, there is comfort in knowing that these changes will percolate gradually. Interestingly, the current hindrances seem to be predominantly human, not technical. In a future post, we can consider what this means for the pace of change as humans are increasingly taken out of the loop.

– Ryan

Cover photo by Johnny Brown.