“My grandfather rode a camel, my father rode a camel, I drive a Mercedes, my son drives a Land Rover, his son will drive a Land Rover, but his son will ride a camel,” [efn_note]Hong Kong of the Desert[/efn_note]

So goes the old Middle Eastern proverb about the rise and fall of Dubai’s prosperity. The world is going through a similar cycle with access to accurate information. With worsening search results [efn_note] Research: Is Google Getting Worse? | Media article: Researchers confirm what we already knew: Google results really are getting worse [/efn_note], and alternatives like ChatGPT that are designed to make stuff up [efn_note]ChatGPT and other AI chatbots will never stop making stuff up, experts warn [/efn_note], we might enter a phase where most people don’t have access to quality information. We might end up falling back on the OG source of truth: lore. 

The problem with the current popular flavor of AI is that there is no error-checking step anywhere in the pipeline. Worse yet, tools like ChatGPT don’t even expose anything like a confidence score when answering queries. They simply state the answer in an extremely confident manner, which our brains instinctively accept as truth.
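To make the missing-confidence-score point concrete, here is a minimal sketch of what such a signal could look like. It assumes we have per-token log-probabilities from a model (some LLM APIs do expose these) and averages them into a crude score; the function name and numbers are illustrative, not any vendor’s actual API, and a high score would still say nothing about factual accuracy:

```python
import math

def mean_token_confidence(logprobs):
    """Average per-token probability: a crude proxy for how 'sure' a model is.

    `logprobs` is a list of natural-log probabilities, one per generated
    token. Illustrative only: fluent, high-probability text can still be
    factually wrong.
    """
    if not logprobs:
        return 0.0
    return sum(math.exp(lp) for lp in logprobs) / len(logprobs)

# Hypothetical token log-probs for two answers worded equally confidently:
confident = mean_token_confidence([-0.05, -0.1, -0.02])  # ~0.95
shaky = mean_token_confidence([-1.5, -2.0, -0.9])        # ~0.26
```

Even a rough number like this, surfaced next to an answer, would give readers something the confident prose alone does not: a hint of when the model itself was uncertain.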

Long ago, xkcd made fun of both politicians and Wikipedians with the phrase “citation needed”. In the context of LLMs, the lack of citations is not a joke. You can’t look under the hood and verify that this information comes from this source, that the source is good, and that the information is accurately reproduced. This whole thing would have been a non-issue had vanilla search engines stayed up to par, but they haven’t. That is deeply uncomfortable. Web search is a critical application: people make decisions, sometimes life-changing ones, based on what they find.

Explainable AI is the only kind worth building, especially for critical applications. Being frustrated with search is one thing, but generative AI becoming a buzzword in critical fields like medicine and defense is a serious problem. If AI is explainable, users can apply their own logic and experience to disregard its input when things seem off.

I understand that tools like Perplexity are trying to bring back the citation, and I hope they do a good job. However, as long as LLMs and similar generative technologies are at the core, trust will remain an issue.

In the end, lore, and its formal relative, education, might be the only option left. Stuffing humans with complex information and relying only on them for expertise would be a step back for the world. Then again, if it worked for granddad…

Pair with: Blackberry Winter by Keith Jarrett

 
