Chatbots, Bats & Broken Oracles

I had the strangest conversation with my son today. There used to be a time when computers never made a mistake. It was always the user who was in error. The computer did exactly what you asked it to do; if something went wrong, it was you, the user, who didn't know what you wanted. After decades of that being etched in, today I found myself telling him that computers make mistakes, that you have to check whether the computer has done the right thing, and that this is actually okay. A computer that hallucinates also provides a surface for exploration and for seeking answers to questions.

Boys Wading (1873) by Winslow Homer. Original public domain image from National Gallery of Art

In her book Open Socrates, Agnes Callard draws our attention to the differences between problems and questions. I'll get to those in a bit, but the fundamental realization I had was that until recently all we could use computers (CPUs, spreadsheets, the internet) for was solving problems. This started all the way back with Alan Turing when he designed the Turing test: he turned the question of what it is to think into the problem of how to detect thought. As Callard mentions, LLMs smash the Turing test, yet we still can't quite accept the result as proof of thinking. What is thinking, then? What are problems? What are questions? How do we answer questions?

Problems are barriers that stand in your way when you are trying to do something. You want to train a deep learning model to write poetry; getting training data is a problem. You want something soothing for lunch; finding a recipe for congee is the problem. The critical point is that as soon as you have the solution, the data, the recipe, the problem disappears. This is the role of technology.

When we work with computers to solve problems we are essentially handing off the task, without caring whether the computer wants to, or even can want to, write poetry or have a nice lunch. So we ask the LLM to write code; we command Google to give us a congee recipe. Problems don't need a shared purpose, only methods that solve them to our satisfaction. Being perpetually dissatisfied with existing answers is the stance of science.

Science and technology are thus tools for moving towards questions. Unlike problems, which dissolve when you solve them, questions give you a new understanding of the world. The thing with asking a question is that there is no established way, at least in your current state, to answer it. Asking a question is thus the first step of a quest. In science, the quest is a better understanding of something, and you use technology along the way to dissolve the problems that stand in your way.

AI lets us explore questions with, rather than merely through, computers. Granted, the most common use of AI is still to solve problems, but LLMs, with their ability to hold a back-and-forth chat in natural language, provide the affordance to ask questions. Especially the kind that seem to come pre-answered, because we operate from a posture where not having an answer would dissolve the posture altogether.

The Socratic Co-pilot

As a scientist, the question "what is it to be a good scientist?" comes pre-answered for me. Until I am asked, I have not really thought about it, yet I rush to provide answers: scientists conduct experiments carefully, they know how to use statistics, they publish papers, and so on. However, this still does not answer what it is to be a good scientist. Playing this out with an AI, I assert "rigorous statistics," the AI counters with an anecdote about John Snow's cholera map, and I'm forced to pivot. None of these moves by itself answers the root question, but the exchange generates problems that can be answered or agreed on. This is knowledge.
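If you want to play this out yourself, here is a minimal sketch of such a loop in Python. The system prompt and the `ask_llm` helper are my own stand-ins, not any particular vendor's API; wire in whatever chat-completion service you use.

```python
# A minimal Socratic chat loop. ask_llm() is a placeholder for any
# chat-completion API (OpenAI, Anthropic, a local model, etc.);
# swap in the real call for whichever service you use.

SOCRATIC_PROMPT = (
    "You are a Socratic partner. Never accept the user's answer at face "
    "value. Respond with a counterexample or a probing question that "
    "exposes a hidden assumption. Keep replies under three sentences."
)

def ask_llm(messages: list[dict]) -> str:
    """Placeholder: send the conversation to a chat model, return its reply."""
    raise NotImplementedError("wire up your preferred chat API here")

def socratic_session(opening_question: str) -> None:
    # The conversation accumulates so the model can keep pressing
    # on the same belief across turns.
    messages = [
        {"role": "system", "content": SOCRATIC_PROMPT},
        {"role": "user", "content": opening_question},
    ]
    while True:
        reply = ask_llm(messages)
        print(f"Socrates: {reply}")
        messages.append({"role": "assistant", "content": reply})
        answer = input("You: ")
        if answer.strip().lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": answer})

# Example: socratic_session("What is it to be a good scientist?")
```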

Knowledge draws boundaries, or as I have explored earlier, creates places around the space that you wish to explore. In the space of "being a good scientist", we can agree that use of the scientific method is an important factor. Depending on who you are, this could be the end of the quest.

Even if no methodology exists for a given problem, simply approaching it with an inquisitive posture creates a method, however crude. In his essay What Is It Like to Be a Bat?, Thomas Nagel tackles an impossible-to-solve problem, but a great question, through a thought experiment. If I were to undertake this, I might try clicking in a dark room or hanging upside down. Okay, maybe not the last bit, but only maybe. Even this crude approach puts me in the zone to approach the problem. Importantly, my flapping about has created a surface that others can criticize, as Nagel's was. Perhaps future brain-computer-interface chips will actually enable us to be a bat. Lacking such technology, though, this is better than nothing, as long as you are interested in inquiring into bat-ness.

This kind of inquiry, this pursuit of answering questions, is thinking. Specifically, as Callard puts it, thinking is "a social quest for better answers to the sorts of questions that show up for us already answered". Breaking that down: it's social because it's done with a partner who disagrees with you, holding their own views about the question. It's a quest because both parties are seeking knowledge. The last bit, about questions arriving already answered, is worth exploring.

Why bother answering questions you already have answers to? The point is easiest to see when you know little about a subject. Say you knew nothing about gravity, and your answer to why you are stuck to the earth is that we are beings of the soil and to the soil we must go; the soil always calls us. If that is your worldview, then you already have an answer. The only way to arrive at a better one, gravity, is to have someone question you on the matter, refuting specific points from their own point of view. This may come in the form of a conversation, a textbook, a speech, and so on. I suspect this social role may soon be played by AI.

Obviously, hallucinations themselves aren't great, but the ability to hallucinate is. In the coming years I expect AI will gain access to vast amounts of knowledge, not just in the form of training but in the form of reference databases containing data broadly accepted as knowledge. In the process we will probably have to undergo significant social pains to agree on what constitutes Established Knowledge. Such a system would enable LLMs to play the role of Socrates and help users avoid falsehoods by questioning the beliefs they hold.
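As a rough guess at what such a system might look like, here is a sketch that retrieves passages from a hypothetical reference store of established knowledge and asks the model to question, rather than correct, the user. Both `search_knowledge_base` and `ask_llm` are assumed placeholders, not any real library's API.

```python
# Hypothetical sketch: ground the Socratic questioning in a reference
# store of established knowledge. Both helpers are stand-ins.

def search_knowledge_base(claim: str) -> list[str]:
    """Placeholder: return reference passages relevant to the claim."""
    raise NotImplementedError("back this with your agreed-upon knowledge store")

def ask_llm(prompt: str) -> str:
    """Placeholder: single-turn call to a chat model."""
    raise NotImplementedError("wire up your preferred chat API here")

def question_belief(user_claim: str) -> str:
    # Retrieve what the reference store says bears on the claim,
    # then instruct the model to probe rather than lecture.
    evidence = search_knowledge_base(user_claim)
    prompt = (
        "The user believes: " + user_claim + "\n\n"
        "Reference passages:\n"
        + "\n".join(f"- {e}" for e in evidence) + "\n\n"
        "Do not state the correct answer. Instead, ask one question, "
        "grounded in the passages, that leads the user to test their belief."
    )
    return ask_llm(prompt)
```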

Until now computers couldn't play this role because there wasn't enough "humanness" involved. In the bat example, a bat cannot serve as Socrates, or as interlocutor to a human partner, because there is no shared worldview. LLMs, trained on human-generated knowledge, have enough in common with us to provide a normative mirror. The AI comes with the added benefits of infinite patience and no internal urge to be right, allowing the quest to arrive at an answer that satisfies the user at every level of understanding. LLMs can be useful even before they gain access to established knowledge: simply by providing a surface on which to hang questions, they let the user become adept at the art of inquiry.

So the next time you have a chat with your pet AI, understand that it starts as a session of pure space. Each word we put in ties the AI down to specific vantage points to help us explore. Go ahead: pick a question you think you've already answered and let the machine argue with you.
