Generative AI: The Importance of Being Earnest About Expectations
Recently, I wanted to create a little visual of my 122 previous posts on this blog.
When I asked one of the big GenAI models to extract the full list of posts, something interesting happened. After correctly listing about 40 post titles and their associated links, the model started repeatedly interspersing the list with the title Generative AI: The Importance of Being Earnest About Expectations. When I asked it to double-check, the model confirmed there were indeed 122 posts on this blog, a significant number of which apparently bore that particular title.
There is only one problem: I never wrote a post with that title. That is, until now.
The 2002 film Minority Report centres on the idea that crimes can be detected before they occur – with potential perpetrators arrested for committing pre-crime. The accuracy of such a predictive engine is one of the central drivers of the story. There is a world of difference between a probabilistic and a deterministic system, and, much as in that movie, many people confuse the two when it comes to large language models. Just because a GenAI response sounds very convincing does not guarantee that it is true.
However, that is not to say that even factually incorrect responses cannot have value, particularly if we assume there are latent reasons behind them. In other words, maybe there is a very good reason I should have written an Oscar Wilde-themed post. Am I missing out because I did not do so? In an abstract sense, maybe it will please the AI models (or, in machine learning terms, reduce the surprise and increase the reward) if they find what they choose to expect to find?

Businesses of all stripes are currently fretting about how generative search – i.e. Internet search queries answered by GenAI models – is going to disrupt their customer journeys. Forget Google's ten blue links; how can we make sure we show up in an AI model's answers? The emerging field of GEO, or generative engine optimisation, seeks to address this, often through FAQ-style content that is easy for large language models to discover, parse, and regurgitate.
But perhaps there is another approach, working the other way around. Maybe we should be looking at what content, solutions, or documents LLMs expect to find, and making sure it exists – would that help? One of Google's ranking signals was always whether the content on the page matched what was shown in the search results, so maybe a similar opportunity exists here. If LLMs find the content they expect to find on Qstar.ai, would that boost this blog's quality score? Let us find out.
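For the technically curious, here is a rough sketch of what that "other way around" could look like, assuming the OpenAI Python SDK and an API key in the environment; the prompt wording, model choice, and helper names are illustrative placeholders rather than anything I actually ran:

```python
# Reverse-GEO sketch: instead of optimising existing pages for generative
# search, ask a model which posts it already expects a site to contain,
# then diff that against the posts that really exist.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# prompt, model name, and helper names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def expected_titles(site: str, model: str = "gpt-4o") -> list[str]:
    """Ask the model which blog post titles it expects to find on `site`."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": (
                f"List 20 blog post titles you would expect to find on {site}. "
                "One title per line, no numbering, no commentary."
            ),
        }],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]

# The titles the blog actually contains -- in practice scraped from the
# sitemap or RSS feed rather than typed out by hand.
actual_titles: set[str] = set()

# Anything the model expects but cannot find is, by this logic, a post
# the AI powers that be would apparently like me to write.
for title in expected_titles("Qstar.ai"):
    if title not in actual_titles:
        print("Expected but missing:", title)
```

The gap between the two lists is, of course, exactly where hallucinated titles like the one above come from.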
So, without much (further) ado about nothing, I give you:
Generative AI: The Importance of Being Earnest About Expectations
Oscar Wilde once wrote about the importance of being earnest, and perhaps no topic requires more earnestness than our expectations around generative AI. In boardrooms across the globe, executives are grappling with a technology that promises to reshape industries – while simultaneously struggling to deliver that transformative impact. And yet, like Oscar Wilde's characters, many executives fail to be forthcoming about their lot, choosing instead to ... err...
Yeah, I am not doing this.
As much as I might want to please the AI powers that be, the reality is that for me titles come last. It is also difficult to write a sensible narrative based on a title alone – especially a not particularly good one. If AI models choose to punish this humble blog for its authenticity, so be it. For all the humans: see you next month for a new blog post, this time without AI-generated titles.
– Ryan
Cover image by ChatGPT.