Wednesday, October 29, 2025

Water Cooler Small Talk, Ep. 9: What “Thinking” and “Reasoning” Actually Mean in AI and LLMs


Water cooler talk is a special kind of small talk, typically observed in office spaces around a water cooler. There, employees frequently share all kinds of corporate gossip, myths, legends, inaccurate scientific opinions, indiscreet personal anecdotes, or outright lies. Anything goes. In my Water Cooler Small Talk posts, I discuss strange and usually scientifically invalid opinions that I, my friends, or some acquaintance of mine have overheard in the office and that have genuinely left us speechless.

So, here’s the water cooler opinion of today’s post:

I was really disappointed by using ChatGPT the other day for reviewing Q3 results. This isn’t Artificial Intelligence — it’s just a search and summarization tool, not Artificial Intelligence.

🤷‍♀️

We often talk about AI imagining some advanced kind of intelligence, straight out of a 90s sci-fi movie. It’s easy to drift away and think of it as some cinematic singularity like Terminator’s Skynet or Dune’s dystopian AI. The illustrations commonly used for AI-related topics, full of robots, androids, and intergalactic portals ready to transport us to the future, only further mislead us into interpreting AI wrongly.

Some of the top results appearing for ‘AI’ on Unsplash;
from left to right: 1) photo by julien Tromeur on Unsplash, 2) photo by Luke Jones on Unsplash, 3) photo by Xu Haiwei on Unsplash

Nonetheless, for better or for worse, AI systems operate in a fundamentally different way, at least for now. At the moment, there is no omnipresent superintelligence waiting to solve all of humanity’s unsolvable problems. That’s why it’s essential to understand what current AI models actually are and what they can (and can’t) do. Only then can we manage our expectations and make the best use of this powerful new technology.


🍨 DataCream is a newsletter about what I learn, build, and think about AI and data. If you are interested in these topics, subscribe here.


Deductive vs Inductive Thinking

In order to get our heads around what AI in its current state is and isn’t, and what it can and cannot do, we first need to understand the difference between deductive and inductive thinking.

Psychologist Daniel Kahneman devoted his life to studying how our minds operate, arriving at conclusions and decisions and shaping our actions and behaviors; vast and groundbreaking research that ultimately won him the Nobel Prize in Economics. His work is beautifully summarized for the general reader in Thinking, Fast and Slow, where he describes two modes of human thought:

  • System 1: fast, intuitive, and automatic; essentially unconscious.
  • System 2: slow, deliberate, and effortful, requiring conscious attention.

From an evolutionary standpoint, we tend to prefer operating on System 1 because it saves time and energy, a bit like living life on autopilot, not thinking about things much. However, System 1’s high efficiency often comes with low accuracy, leading to errors.


Inductive reasoning aligns closely with Kahneman’s System 1: it moves from specific observations to general conclusions. This kind of thinking is pattern-based and thus stochastic. In other words, its conclusions always carry a degree of uncertainty, even when we don’t consciously acknowledge it.

For example:

Pattern: The sun has risen every day of my life.
Conclusion: Therefore, the sun will rise tomorrow.

As you may imagine, this kind of thinking is prone to bias and error because it generalizes from limited data. In other words, the sun is probably going to rise tomorrow too, since it has risen every day of my life, but not necessarily.

To reach this conclusion, we also silently assume that ‘all days will follow the same pattern as those we’ve experienced’, which may or may not be true. In other words, we implicitly assume that the patterns observed in a small sample will apply everywhere.

Such silent assumptions, made in order to reach a conclusion, are exactly what make inductive reasoning produce results that are highly plausible, yet never certain. Much like fitting a function through a few data points, we can guess what the underlying relationship might be, but we can never be sure, and being wrong is always a possibility. We build a plausible model of what we observe, and simply hope it’s a good one.
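This underdetermination is easy to see in code. Here is a toy sketch (both hypothesis functions below are made up purely for illustration): two models fit every observation we have equally well, yet they disagree the moment we step outside the sample.

```python
# A small sample of observations from some unknown process.
observed = [1, 2, 3, 4]
targets = [1, 4, 9, 16]

# Two hypotheses induced from the same data.
def model_a(n):
    return n * n                    # "outputs are squares"

def model_b(n):
    return n * n if n < 5 else 0    # "squares up to 4, then zero" (contrived)

# Both explain every observation perfectly...
assert all(model_a(n) == t for n, t in zip(observed, targets))
assert all(model_b(n) == t for n, t in zip(observed, targets))

# ...yet generalize to different conclusions outside the sample.
print(model_a(10), model_b(10))  # 100 0
```

The data alone cannot tell us which hypothesis is the “true” one; induction picks the plausible-looking model and hopes.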

Image by author

Put another way, different people operating on different data or under different circumstances will produce different results when using induction.


On the flip side, deductive reasoning moves from general principles to specific conclusions; it is essentially Kahneman’s System 2. It is rule-based, deterministic, and logical, following the structure of “if A, then certainly B”.

For example:

Premise 1: All humans are mortal.
Premise 2: Socrates is human.
Conclusion: Therefore, Socrates is mortal.

This kind of thinking is less prone to errors, since every step of the reasoning is deterministic. There are no silent assumptions; as long as the premises are true, the conclusion must be true.

Back to the function-fitting analogy, we can think of deduction as the reverse process: calculating a data point given the function. Since we know the function, we can calculate the data point with certainty, and unlike multiple curves fitting the same data points better or worse, for the data point there will be one definitive correct answer. Most importantly, deductive reasoning is consistent and robust. We can repeat the calculation at a specific point of the function a million times, and we will always get the exact same result.
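A minimal sketch of this determinism (the rule below is invented for illustration): once the function is fully specified, evaluating it is deduction, and repeating the calculation never changes the answer.

```python
def f(x):
    """A fully specified rule: f(x) = 3x + 2."""
    return 3 * x + 2

# Evaluate the same point many times; deduction from a known rule
# yields exactly one answer, every single time.
results = {f(7) for _ in range(100_000)}
print(results)  # {23}
```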

Image by author

Interestingly, even when using deductive reasoning, humans can make mistakes. For instance, we may mess up the calculation of the exact value of the function and get the result wrong. But that will just be a random error. By contrast, the error in inductive reasoning is systemic: the reasoning process itself is prone to error, since we are baking in those silent assumptions without ever knowing to what extent they hold true.


So, how do LLMs work?

It’s easy, especially for people without a tech or computer science background, to imagine today’s AI models as an extraterrestrial, godlike intelligence, able to provide brilliant answers to all of humanity’s questions. However, this isn’t (yet) the case, and today’s AI models, as impressive and advanced as they are, remain limited by the principles they operate on.

Large Language Models (LLMs) don’t “think” or “understand” in the human sense. Instead, they rely on patterns in the data they’ve been trained on, much like Kahneman’s System 1 or inductive reasoning. Simply put, they work by predicting the next most plausible word for a given input.

You can think of an LLM as a very diligent student who memorized vast amounts of text and learned to reproduce patterns that sound correct, without necessarily understanding why they are correct. Much of the time this works, because sentences that sound correct have a higher chance of actually being correct. This means such models can generate human-like text and speech of impressive quality, and essentially sound like a very smart human. However, producing human-like text, and arguments and conclusions that sound correct, doesn’t guarantee they really are correct. Even when LLMs generate content that looks like deductive reasoning, it isn’t. You can easily verify this by looking at the nonsense AI tools like ChatGPT occasionally produce.
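To make “predicting the next most plausible word” concrete, here is a deliberately tiny sketch: a bigram counter over a toy corpus. Real LLMs use neural networks trained on trillions of tokens rather than raw counts, but the prediction objective, picking a plausible continuation, is the same.

```python
from collections import Counter, defaultdict

# A toy "training corpus". Real models learn from vastly more text,
# and from learned representations rather than literal counts.
corpus = "the sun rises in the east and the sun sets in the west".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "sun" — it followed "the" most often
```

Notice that the model has no idea what a sun is; it only knows which word tends to come next.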

Image by author

It is also important to understand how LLMs arrive at these next most probable words. Naively, we might assume that such models simply count the frequencies of phrases in existing text and then somehow reproduce those frequencies to generate new text. But that’s not how it works. There are about 50,000 commonly used words in English, which results in an almost infinite number of possible word combinations. Even for a short sentence of 10 words, the combinations would be 50,000^10, roughly 10^47, an astronomically large number. On the flip side, all existing English text in books and on the internet amounts to a few hundred billion words (on the order of 10^11 to 10^12). As a result, there isn’t nearly enough text in existence to cover every possible phrase and generate text with this approach.
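The arithmetic is easy to check (the corpus-size figure is a rough order-of-magnitude estimate, not an exact count):

```python
vocabulary = 50_000       # commonly used English words, roughly
sentence_length = 10

possible_sentences = vocabulary ** sentence_length  # 50,000^10
words_ever_written = 10 ** 12                       # generous estimate of all English text

print(f"{possible_sentences:.1e}")              # 9.8e+46 distinct 10-word sequences
print(possible_sentences > words_ever_written)  # True: counting alone cannot work
```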

Instead, LLMs use statistical models built from existing text to estimate the probability of words and phrases that may never have appeared before. Like any model of reality, though, this is a simplified approximation, which results in AI making mistakes or fabricating information.


What about Chain of Thought?

So, what about ‘the model is thinking’, or ‘Chain of Thought (CoT) reasoning’? If LLMs can’t really think like humans do, what do these fancy terms mean? Is it just a marketing trick? Well, kind of, but not exactly.

Chain of Thought (CoT) is primarily a prompting technique that allows LLMs to answer questions by breaking them down into smaller, step-by-step reasoning sequences. This way, instead of making one large leap to answer the user’s question in a single step, with a greater risk of producing an incorrect answer, the model makes several generation steps with higher confidence. Essentially, the user ‘guides’ the LLM by breaking the initial question into several prompts that the LLM answers one after the other. For example, a very simple form of CoT prompting can be performed by adding something like ‘let’s think step by step’ at the end of a prompt.
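In code, this zero-shot CoT trick really is just string construction; no particular LLM client or API is assumed here, only the prompt itself:

```python
def build_cot_prompt(question: str) -> str:
    """Append a zero-shot Chain-of-Thought cue to a question."""
    return f"{question}\n\nLet's think step by step."

# The resulting prompt would then be sent to whatever LLM you use.
prompt = build_cot_prompt(
    "A train travels 60 km in 45 minutes. What is its average speed in km/h?"
)
print(prompt)
```

The cue nudges the model to emit intermediate reasoning before its final answer, which tends to improve accuracy on multi-step problems.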

Taking this concept a step further, instead of requiring the user to break down the initial question into smaller questions, models with ‘long thinking’ can perform this process by themselves. In particular, such reasoning models can break down the user’s query into a sequence of step-by-step, smaller queries, resulting in better answers. CoT was one of the biggest advances in AI, allowing models to effectively handle complex reasoning tasks. OpenAI’s o1 model was the first major example that demonstrated the power of CoT reasoning.

Image by author

On my mind

Understanding the underlying principles that enable today’s AI models to work is essential in order to have realistic expectations of what they can and can’t do, and to optimize their use. Neural networks and AI models inherently operate on inductive-style reasoning, even when they often sound like they are performing deduction. Even techniques like Chain of Thought reasoning, while producing impressive results, still fundamentally operate on induction and can still produce information that sounds correct but in reality is not.


Loved this post? Let’s be friends! Join me on:

📰 Substack 💌 Medium 💼 LinkedIn ☕ Buy me a coffee!

