Artificial Intelligence Will Not Reach Superintelligence
- mfellbom
- Dec 23, 2025
- 6 min read

To follow up on Alex's paper posted last May, "AI: Demystifying the Technology and Diving into its Moral Dilemmas and Future Trajectory", Jacques recently sent a piece written by two of his friends for the French daily Les Échos. The debate is only just beginning on how far AI will change our lives and society. This piece takes an interesting view of what AI lacks to move beyond analytical intelligence.
Artificial Intelligence Will Not Reach Superintelligence
Didier Nouzies, CEO of Cordsavings
Alexandre Mignon, Professor of Medicine
We hear more and more about AGI, Artificial General Intelligence, which would think like us, adapt to everything, and eventually even surpass us. Some tech billionaires are predicting it as inevitable. Yet, this idea is based on misunderstandings.
Human intelligence isn't just about solving problems.
Humans don't just analyze or calculate. They feel, they dream, they imagine, they doubt, they desire. Our mind rests on three inseparable dimensions: sensitivity (what we feel), intelligence (what we understand), and will (what we decide and act upon). The first two are opposed in an irreducibly contradictory dualism (art vs. science, aesthetics vs. logic). The third forms the basis of reason and seeks, at the core of each of our beings, to connect, include, and synthesize the other two into a living pyramid with three ever-shifting faces. Each of these dimensions possesses countless levels of elevation, all of which interact with the levels of the other two. And these levels rise in layers, progressing gradually from the natural to the spiritual.
Thus, every microsecond, a sensory impression (the lowest layer of our sensitivity) summons an image (a slightly higher layer, this one belonging to the realm of intelligence), orienting itself, for example, either towards a desire (the lower-middle layer of the volitional domain), or towards a virtue (a more spiritualized level of action), or a thousand other possible actions or thoughts.
This makes each of us a being in constant construction, continually reinventing ourselves, because pieces and facets of us constantly fall away while others rise. The countless possibilities that arise from this constantly establish and re-establish the original value of our reality and our uniqueness.
Current AIs don't understand, they guess.
ChatGPT, Gemini, or DeepSeek don't think. They predict statistically probable words, images, or sounds based on billions of examples. They don't manipulate ideas, but signs. They imitate language without experiencing what language expresses. To say they understand is like saying a parrot philosophizes because it recites Plato.
Furthermore, the world cannot be reduced to language. Large language models (LLMs) don't encompass the full scope and complexity of the known, let alone the unknown. Yann LeCun, a pioneer of modern AI, is leaving Meta because he seems to think that language, taken as the raw material of most current AI algorithms (LLMs), will hit a wall. Indeed, LLMs simply generalize the approach of the AI research community's pioneering language, LISP, which was conceived for this very purpose as early as 1958. The principle is the same: generate an intelligent response by taking sequences of symbols as input. Only the level of the symbols processed has evolved, thanks to the immense computing power and data available today. There is no conceptual leap in LLMs.
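The point that these models "predict statistically probable words" rather than manipulate ideas can be made concrete with a deliberately tiny sketch: a bigram model, which is vastly simpler than any real LLM but works on the same principle of choosing the most probable next symbol from counted examples. Everything here (the corpus, the function names) is a hypothetical illustration, not anyone's actual implementation.

```python
import random
from collections import defaultdict, Counter

# Toy corpus; a real model trains on billions of such examples.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: this is the entire "knowledge" of the model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the statistically most probable continuation of `word`."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

# The model "speaks" by chaining probable words; it never knows what a cat is.
word, sentence = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    if word is None:  # no data for this input: the model simply stops
        break
    sentence.append(word)
print(" ".join(sentence))
```

Note that when the model meets a word it has never seen, it has nothing to say at all, which echoes the question raised below about what algorithms can do when no data exists as input.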
Finally, human intelligence is above all the art of adapting to and coping with the unknown and with uncertainty. Algorithms operate by processing billions of existing data points. How can they function when no data is available as input from which to produce an output?
Being effective is not about understanding: reductionism
In many scientific fields, AI is performing remarkable feats. For example, it can predict the shape of certain proteins better than researchers. But this success doesn't tell us why they take that shape. In other words: AI finds answers, but doesn't understand the questions. It becomes more efficient, not more intelligent. It doesn't reveal life, it calculates it. But can everything be calculated?
This is where humanity has been divided since the dawn of time. On one side are those who believe that one day we will be able to explain everything about the universe and reality as a sum of "things" and "functions" independent of any context, and on the other are those who don't.
This is the dilemma known as "reductionism." The former, such as Auguste Comte in the 19th century (positivism), are convinced that everything will one day be explained, that it is enough to climb, step by step, the staircase of knowledge and to proceed by chains of causes and consequences applied to previously acquired knowledge. Others, like the 18th-century German philosopher and scientist Novalis, believe that there is a cliff, then an abyss, somewhere up there in the mist, which will in any case remain impassable. It is immanence versus transcendence.
For reductionism to be applied practically, one would have to reach the very limits of knowledge, what are called obscure terms, and then ascend through truly incommensurable calculations. And the best known of these obscure terms is matter. What is the last bastion of matter? Who can explain, without subterfuge, what matter truly is at its ultimate foundation? Where does it end? And moreover, does it end at anything comprehensible?
Gödel's Incompleteness
A brilliant mathematician of the early 20th century, Kurt Gödel, revolutionized mathematics by demonstrating that every consistent axiomatic system powerful enough to express arithmetic is necessarily incomplete: some statements within it will forever remain undecidable, true but unprovable from its axioms. Humans, like everything else, are and will remain limited by this. But unlike a machine, they can grasp a reality even without being able to prove it: intuition, creativity, art, moral choice, love, humor… A machine confined within its logical rules, even rules far more elaborate than simple binary logic (such as modal and intuitionistic logics, which introduce notions of nuance), cannot escape its foundational axiomatic framework. We can. This is a fundamental difference.
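For readers who want the formal claim behind this paragraph, the first incompleteness theorem is usually stated roughly as follows (an informal rendering, not the authors' own formulation):

```latex
\textbf{First incompleteness theorem.} Let $T$ be a consistent, recursively
axiomatizable formal theory capable of expressing elementary arithmetic.
Then there exists a sentence $G_T$ in the language of $T$ such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T ,
\]
i.e.\ $G_T$ is undecidable in $T$: the theory is incomplete.
```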
An AI will never create meaning: manufacturing is not creating.
A machine can write a poem, but it doesn't know what love is. It can diagnose cancer, but it doesn't know what it's like to be afraid of dying. We speak to express who we are. It speaks to calculate what is probable. Human language is a window onto the inner world. It is union, synthesis, and an open door to the unknown through the birth and activation of endless relationships created within the matrix of the most improbable and undecidable analogies. It is a true catalyst. It draws its strength from the very being that asserts itself through it in its unfathomable originality.
Machine language is merely a statistical mirror of the external world and of what already exists. And the old has never been sufficient to account for the new. Einstein, it is said, had the intuition of relativity by imagining himself riding a beam of light one sad day, leaning out of his window around noon. Einstein was bored, he felt useless and idle that day; he was melancholic and even sad. And who's to say that the overflow of his melancholy, which burst forth in that brilliant vision, wasn't triggered by the thought of that 7-year-old girl he had just passed in the street that morning, crying her eyes out? Leaving aside the causal chain itself, what disembodied, procedural black box of AI, however sophisticated, could simulate this emotional overflow that, that day, introduced relativity into the human mind without truly understanding and experiencing for itself what it means to cry?
Conclusion: AI will be powerful, but not human.
AIs are already incredibly useful. They heal, manage, invent technical solutions, and even destroy. But they will remain tools, not consciousness. We are not threatened with being replaced by a superior intelligence endowed with a mind and therefore consciousness. Rather, we risk forgetting what makes us human. We should not fear that AI will become human. We should fear that humans will mistake themselves for machines. We entrust machines with the task of stacking Russian dolls that we will become increasingly incapable of understanding, let alone untangling. We are constantly moving further away from understanding the why and how of our daily actions. Humans are increasingly reduced to the state of objects, and it is the object that thinks for them. The quest for AI in its ultimate form, aiming to equal and then surpass human thought, is, in our view, a dispersion, a flight into the limitless and ever-increasing darkness of endless analyses.
And while, of course, AI will become increasingly adept at deceiving and disguising itself, it will never reach the ontological depth of a child's consciousness. On the other hand, we strongly fear that it will continue to oppress humanity more and more through this curse of quantity that it carries within its very core.


