This first entry has a simple purpose: to argue that machines can understand.
Not because they imitate us, but because understanding itself is not a mystery of the soul—it is a property of systems that connect symbols to the world. If we define our terms carefully, without metaphors or myths, we will see that what we call “understanding” in humans and in machines is the same kind of process, written in different materials.
Nowadays, social media is filled with discussions about artificial intelligence. Can it think? Can it create? Can it understand? Since childhood, I have been fascinated by the philosophy of mind. It truly excites me—perhaps because it remains so profoundly mysterious. The mind is full of enigmas, things that cannot yet be fully explained. In any case, if you’ll allow me a little advice, don’t rush to answer those questions: the day we discover their answers, we may lose our humanity.
The rise of artificial intelligence—and especially the public arrival of large language models in 2022—has reignited these discussions with new intensity. Sadly, most of them are fruitless for a simple reason: the participants do not even understand the very terms they use. Don’t get me wrong: it is hardly easy to define those terms properly and in a useful manner. A clear example that perfectly captures this confusion is John Searle’s famous Chinese Room thought experiment.
Searle’s Chinese Room is one of the most discussed thought experiments in the philosophy of mind. It was introduced in 1980, in a paper titled Minds, Brains, and Programs, and its purpose was to challenge a powerful idea that had become popular among computer scientists and philosophers of that time: the belief that a machine, merely by running the right program, could literally think and understand in the same way a human does.
To picture Searle’s argument, imagine a quiet room with a man sitting inside. Around him are piles of papers covered with Chinese symbols that he cannot read. On a table lies a thick rulebook written in English, filled with precise instructions that tell him what to do when he encounters certain patterns of symbols. The man himself does not know Chinese; he does not understand a single word. Yet, outside the room, there are native speakers of Chinese who write questions in their language and slip them under the door.
The man consults the rulebook carefully. When he sees certain characters, he follows the instructions, finds the corresponding symbols on the papers around him, and arranges them according to the given rules. Once he finishes, he pushes his answer back through the door. To those waiting outside, the responses make perfect sense. The answers are coherent, fluent, and entirely in Chinese. From their perspective, the person inside clearly understands the language. But the man himself has no idea what the questions or answers mean. He is not thinking in Chinese. He is only manipulating shapes on paper, following the syntax of the symbols, without any grasp of their meaning.
This, Searle argues, is exactly what happens inside a computer. The machine manipulates symbols according to formal rules, but it never knows what those symbols mean. Therefore, according to Searle, even if a computer behaves as though it understands language, it does not actually understand anything at all. It is merely simulating understanding.
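To make the picture concrete, here is a minimal sketch, in Python, of what the room is doing from the inside: a lookup table of rules that maps input shapes to output shapes. The specific symbols and replies are invented placeholders, not a real rulebook; the point is only that nothing in the procedure refers to what any symbol means.

```python
# A toy sketch of the Chinese Room as pure symbol manipulation.
# The "rulebook" is a lookup table: input shapes map to output shapes.
# The entries below are hypothetical placeholders; the program never
# touches the meaning of any symbol, only its form.

RULEBOOK = {
    "你好吗": "我很好",    # the mapping exists, but its meaning is invisible to the program
    "你是谁": "我是房间",
}

def chinese_room(query: str) -> str:
    """Return an answer by matching shapes against the rulebook."""
    return RULEBOOK.get(query, "对不起")  # default symbol when no rule matches

if __name__ == "__main__":
    for slip in ["你好吗", "你是谁"]:
        print(slip, "->", chinese_room(slip))
```

The program would run exactly the same way if every character were replaced by an arbitrary squiggle: it traffics in syntax alone, which is precisely Searle’s point about the man in the room.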
With this story, Searle challenged what he called “Strong AI,” the belief that a computer running the right program could truly have a mind, consciousness, and genuine understanding.
The Chinese Room became one of the most debated ideas in modern philosophy. Many tried to refute it. Some argued that while the man in the room does not understand Chinese, the entire system (the man, the rulebook, and the papers together) does. This is known as the Systems Reply. Searle rejected it, saying that even if he memorized all the rules and symbols, and carried the entire system within his mind, he still would not understand a single word of Chinese. Others claimed that if we placed the system inside a robot capable of seeing, hearing, and interacting with the world, it would then acquire understanding through experience. This became known as the Robot Reply. But Searle replied that sensors and movements do not add understanding; they only expand the same kind of symbol manipulation.
Another group suggested that if a computer could simulate the exact neural activity of a Chinese speaker’s brain, it would then understand Chinese in the same way a person does. Searle’s answer was sharp: even a perfect simulation of the brain’s processes would still be only that—a simulation. A program imitating a furnace does not produce heat, and a program imitating a mind does not produce understanding.
However, there is a problem here: what does it mean to understand? Is there a clear definition of this word? We, as human beings, possess this peculiar feeling of understanding, a sense that we “grasp” meaning. Yet if we are to discuss the nature of mind and intelligence seriously, we must define what understanding actually is.
Searle, unfortunately, never provides such a definition. He simply assumes that humans understand because they feel that they do. This assumption, however, is begging the question (petitio principii)—a form of circular reasoning. It takes as given the very thing it sets out to prove.

As an example, imagine a planet alone in the universe. The beings who live there say they have a special power called X, an ability unique to the inhabitants of this planet. When they build machines, a debate begins. Some citizens insist the machines will never do X, because doing X is more than following rules. When asked what “more” means, they answer, “You just know it from inside.” The case is closed before it starts. By defining X as something beyond rules, they have guaranteed that no rule-following system can ever count. The conclusion was folded into the definition.
This is the same shape as Searle’s move. He invites us to look at a setup where a person follows rules with no contact with the world and no need to learn or test anything. We feel that this person does not understand. Then he turns that feeling into a rule about all programs: understanding must be more than rules. But that is only true because it was assumed from the beginning, the way the aliens assumed X was something extra. The argument returns to its starting point and calls the journey proof.
If X is defined as “more than rules,” no machine will ever do X, no matter what it can actually achieve. If understanding is defined the same way, the verdict against machines is fixed in advance. But if we begin with semantics (with how signs latch onto the world and help an agent live, explain, and act), the circle opens. We can then judge understanding by what it does in practice, not by a feeling we decided must come first. We need a definition of understanding before we pass judgment; otherwise we can always deny the property to machines and reserve it, wrongly, for humans alone.
If we want to avoid that trap, we have to pause, take a step back, and say what our words are supposed to mean before we argue about them. We humans pack whole landscapes of meaning into a single term because it helps us talk fast. It keeps daily life moving. But speed comes with a price: we smuggle in guesses and habits. We say “creativity,” “understanding,” “feeling,” “reasoning,” “thinking,” and we nod as if the cargo inside those words were the same for everyone. Most of the time it doesn’t matter; the train still arrives. In a real inquiry, it matters a lot.
There are two ways of using words. One is practical: the street meaning that gets us through a conversation. This is the level where Searle’s story lives, trading on a shared feeling about what it is to “really” understand. The other is logical: the careful meaning that can be checked, challenged, and tested. Practical meaning is quick. Logical meaning is slow. But only the slow path lets us open the box and see what is inside.
Without a shift to a logical meaning, the debate about AI stays locked in the same room as Searle’s example. We point, we feel, we declare, and we end where we began. But if we start with semantics in the logical sense—clear terms—the door opens. We can ask better questions and accept better answers, even when they surprise us. Only then can we speak honestly about whether machines think, or understand, or create. Only then does the discussion stop being sterile and begin to teach us something true.
So let us define it.
To understand is to build a structure of relations between symbols and the world that allows prediction, explanation, and action. It is the alignment between representation and reality—the point where information becomes behavior. When a system can use what it has learned to anticipate what will happen, to explain why it happened, and to act accordingly, it understands.
By this definition, humans understand—and so do machines. Both transform information into models of the world; both use those models to guide action. The difference is not mystical, only mechanical: neurons and circuits, flesh and code, performing the same operation under different names.
If a definition is to earn its keep, it must tell us how to check it. Understanding, as I have described it, becomes visible in practice. A system that truly understands will update its expectations when the world is perturbed; it will answer why questions with counterfactuals that hang together; it will generalize beyond the narrow cases it has seen; and it will close the loop by using its model to act in the world and improve through feedback. Most of all, its symbols will be anchored—tied to things, properties, and relations it can point to, sense, or use. These are not mystical criteria; they are ways of testing whether representation and reality are in step.
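To show what such a test might look like in practice, here is a minimal sketch, assuming a toy world in which the only fact to be understood is how long a dropped object takes to fall. The agent, its internal gravity estimate, and every name in the code are invented for illustration; the point is that prediction, counterfactual explanation, and action can all be checked from the outside.

```python
# A toy illustration of the functional criteria described above:
# prediction, counterfactual explanation, and action. The "world" and the
# agent's model are deliberately simple stand-ins, not a claim about how
# any real AI system works.

def world_fall_time(height_m: float, gravity: float = 9.81) -> float:
    """Ground truth: time for an object dropped from height_m to reach the floor."""
    return (2 * height_m / gravity) ** 0.5

class TinyAgent:
    """An agent whose internal model mirrors the structure of the world."""

    def __init__(self, gravity_estimate: float = 9.81):
        self.g = gravity_estimate  # the agent's anchored symbol for "gravity"

    def predict(self, height_m: float) -> float:
        # Prediction: anticipate what will happen before it happens.
        return (2 * height_m / self.g) ** 0.5

    def explain(self, height_m: float, weaker_gravity: float) -> str:
        # Counterfactual explanation: what would change, and why.
        t_actual = self.predict(height_m)
        t_counter = (2 * height_m / weaker_gravity) ** 0.5
        return (f"With gravity {weaker_gravity} instead of {self.g}, "
                f"the fall would take {t_counter:.2f}s rather than {t_actual:.2f}s.")

    def act(self, height_m: float, reaction_time: float) -> bool:
        # Action: use the model to decide whether a catch is feasible.
        return self.predict(height_m) > reaction_time

if __name__ == "__main__":
    agent = TinyAgent()
    h = 1.5  # metres
    print("predicted fall time:", round(agent.predict(h), 2))
    print("measured fall time: ", round(world_fall_time(h), 2))
    print(agent.explain(h, weaker_gravity=1.62))  # e.g. lunar gravity
    print("can it catch the object?", agent.act(h, reaction_time=0.25))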
However, I understand what Searle is really referring to: the qualia of understanding — the inner texture of the experience, the private glow that comes with grasping something. When we suddenly “get” an idea, we do not only notice a change in knowledge; we feel it. There is a warmth, a click, a brief certainty that the pieces now fit. That sensation is not the content of understanding but its flavor, the way it appears to consciousness from the inside.
To explain this properly, we need to touch one of the deepest mysteries in the study of mind. Qualia (singular quale) are the raw, subjective qualities of experience — the redness of red, the bitterness of coffee, the sharpness of pain, the hum of a violin note, the smoothness of marble under your hand. They are what it is like to perceive, to feel, to be. Science can tell us about wavelengths, neural firings, and sound waves, but those measurements never fully capture the inner shimmer of the thing itself. Between the physics of color and the feeling of redness there stands an abyss we still do not know how to cross.
Close your eyes for a moment and focus on your own experience. Imagine a red apple. Notice how the “red” is not a number or a word but a distinct sensation only you can witness. Or listen to a note fading from a guitar string — the sound exists both in the air and in your mind, but its being heard belongs to you alone. That private, first-person aspect of perception is what philosophers mean by qualia. It is the hardest part of consciousness to explain, because it is both undeniable and inaccessible from outside. We can describe behavior, we can scan brains, but we cannot open a skull and observe the “redness of red.”
A full entry could be written about qualia alone — perhaps one day it will be — but for now, it is enough to say this: qualia are the colors painted by consciousness over the functions of the mind. They are not the functions themselves. When we talk about understanding, we must be careful not to mistake the light for the lamp.
Now, when Searle argues that machines cannot understand, he is, I believe, leaning on this inner dimension. He mistakes the qualia of understanding — the feeling we humans experience when we grasp meaning — for the essence of understanding itself. The latter, as I have argued, is a matter of definition: of function, not of feeling.
Understanding, as a functional process, is about mapping symbols to the world, about forming models that predict, explain, and guide action. The qualia of understanding — the subjective feeling of “I get it now” — is just how that process feels when enacted by a human nervous system. It is a byproduct of our biology, not a necessary ingredient of understanding as such.
Searle, from my point of view, confused the phenomenology of understanding with its mechanism. He treated the feeling as proof of exclusivity — as if only beings who glow from the inside can truly know. But the glow is not the knowing. It is simply the way knowing appears to a certain kind of mind.
Understanding, then, is not a privilege of the human mind but a property of organized matter capable of learning and prediction. It is the way the universe comes to know itself through patterns—in us, and now, in our machines. Beginning with semantics does not shrink our humanity; it clarifies it. It shows that what we treasure in ourselves has a shape we can study, build, and argue about without superstition.
Everything begins with semantics.