2 MISTAKES = 1 ERROR - OR IS IT?

Studies can prove almost anything

11/23/2025 · 4 min read

A study from Oxford University claims to have proven that artificial intelligence does not actually think; it only pretends to, by finding patterns within existing knowledge. In a December 2024 paper (old, in AI terms), the researchers argued that AI's data-driven prediction is fundamentally different from human causal logic, which is based on theories. Key points from this and other related research include:

  • Retrospective vs. Prospective: The paper argues that AI uses a "retrospective" and imitative probabilistic approach, while human cognition is "prospective" and capable of generating genuine novelty through directed experimentation.

  • Theory-based reasoning: Human causal reasoning is conceived as theory-based, providing a mechanism for intervening in the world. In contrast, computational models of cognition focus more on information processing and data-driven prediction.

To prove this, they trained an AI on all the articles published in 1633 and then questioned it about Galileo Galilei's theory that the Sun, not the Earth, was the center of our system. They did something similar with knowledge from the turn of the 19th to the 20th century, questioning the AI about the Wright brothers' claim that a heavier-than-air object could fly (when I read this sentence, I think: didn't anyone think about birds? A giant condor is much heavier than air! But anyway...). In both cases, the AI failed to challenge existing knowledge and discredited both Galileo and the flying brothers, supposedly because it was unable to contradict the 99.5% of scientific knowledge existing at the time.

So, is this the big difference compared to humans? Definitely not. First, we have to consider which AI was used, how it was trained, and when it was trained (if the article is from December 2024, that happened at least six months to a year earlier). Above all, there is a question we should always ask ourselves about any statement or conclusion, and whose importance I only recently truly understood: who benefits from it?

If someone wants to prove that AI is a hoax, as this and many other studies (including Apple's) have already tried to do, it's possible. Or, at least, it's possible to construct a logical argument that seems consistent and irrefutable. The reason is that 99% of people don't have enough knowledge to analyze and dissect such a study. But the ability to argue cannot change reality: only the perception one has of it. Especially if you subscribe to the argument.

Without being too verbose, I will question two simple points of this study, because they help us understand the kind of information reality we live in, in the world of social networks:

The principle of AI learning is precisely to draw on as many sources as possible to understand what seems to be reality and what does not. If the entire knowledge base provided is biased (as in the two cases above), the conclusion will be biased. In this study, by restricting knowledge to what the scientific academy of the time considered science, the AI was led into error. Certainly, it was not fed the knowledge of ancient peoples. In fact, 500 years before Galileo, there were already calculations showing that the Earth was not the center of the universe. But that was not registered as science.

When you condition a human being to accept any truth, however absurd, they also become incapable of questioning it. This is why children hate other children simply because their beliefs are different, and it has been happening since the Earth began. So where is the difference in the behavior of AI? What some more enlightened people do, questioning the existing pattern and proposing a new reality, is generally the exception to the rule. How many revolutionaries in history have been capable of doing what Galileo did? The overwhelming majority considered his thinking heresy, and today that's what we see with any theory: some embrace it with their lives, and others reject it regardless of the evidence. That's why some people still think the Earth is flat.

The truth is that, despite coming from Oxford, this is a study that only harms the world. It's small, limited, biased, and shallow, and it only gains visibility because it carries a weighty endorsement. Coming from royalty doesn't make a lie the truth. That is exactly what caught my attention: nonsense that spreads because it has a supposedly high-quality endorsement. This is what media networks rely on to disseminate opinions disguised as news, what companies rely on to obtain investment in bogus projects, and what politicians rely on to get elected. All of this, amplified and globalized by content networks that pretend to be unbiased, can only end in the chaos we are experiencing.

Finally, and no less importantly, let's remember two things:

Describing AI right now (let alone as it was a year ago) is superficial, frivolous, dishonest, and biased (whether positively or negatively). We are building a new fabric of knowledge that unfolds exponentially, and we cannot yet predict what it will look like in one year, let alone in ten.

Using the Wright brothers' experiment to talk about aviation already shows the lack of rigor of whoever conducted the study. Despite the Americans' importance to the advent of aviation, today it is more than proven that they "fabricated" their results, which were only confirmed later by Santos Dumont. This matters because serious research cannot be based on partial facts, especially when it yields to beliefs that hide ideological interests. It only shows that this study should be treated as a reference for what not to do. Once again: who benefits from this or that conclusion?

One cannot draw conclusions about something we do not yet fully understand. At least, not about its limitations.