AI, the Mind, and the True Nature of Intelligence

A question has troubled philosophers, scientists, and concerned commentators since artificial intelligence entered our homes: Does it possess awareness? Is there anyone inside? Will there be anyone inside? Does Claude or ChatGPT experience anything when it responds with what appears to be eerie warmth and understanding?

It's a fair question, and I suggest we're looking in the wrong direction.

Before questioning whether the machine is conscious, we should pause and consider something more unsettling: what exactly do we believe we possess as human beings? What is this human intelligence we are so eager to defend? Because upon close and honest inspection, the mind turns out to be not quite what mainstream Western thinking has assumed.

Descartes believed he had identified the one indubitable truth: I think, therefore I am. He argued that the act of thinking proves the existence of a thinker. However, non-dual traditions and a more careful examination suggest he was precisely mistaken. The self is not proven by thinking. It is created by it. Because I think, I believe myself to be. Thought arises, conditioned and unsolicited, and in its wake leaves the convincing impression of someone who has just had it. The "I" is not the author of thought; it is thought's afterimage.

This is more than just a philosophical stance. Meditators and contemplatives have observed it firsthand, within the laboratory of their own minds, for over two and a half thousand years. Thoughts come one after another, each conditioned by the previous and shaping the next. Reactions, preferences, aversions, and desires are all formed by everything that happened before this moment. If you are honest, the thought you just had appeared uninvited. You didn't create it. It arose from causes and conditions, your history, your biology, what you ate, and the particular texture of this day.

The Buddhist tradition has a precise and somewhat uncompromising term for this: dependent origination, paṭicca-samuppāda. Nothing comes from nothing. Everything that appears in the mind depends on previous conditions, triggers a response, and then passes away, only to shape what comes next. The wheel turns. And turns again. And somewhere along the turning, we decided there must be someone doing the turning.

There isn't anyone there. More precisely, the sense of someone is merely another arising, another fabrication in the stream.

Modern neuroscience has arrived at a similar conclusion from a completely different direction. The predictive mind framework, now well established in cognitive science, describes the brain not as a passive receiver of reality but as a prediction engine that constantly constructs models of the world based on previous experience and updates them when something unexpected occurs. We do not perceive reality directly. We perceive our best estimate of it, shaped by everything we have already experienced.

Conditioning, in other words. All the way down.

The Buddhist tradition recognises seven latent tendencies, anusaya, that lie beneath conscious awareness like seeds in the ground, shaping every arising thought and perception without ever making themselves known. 

Six of them are sensual craving, aversion, conceit, fixed views, doubt, and the craving for continued existence. 

The seventh, and the root of all the others, is ignorance, avijjā, the inability to clearly see one's own fabricating processes. These are not occasional visitors; they are the persistent conditions from which most human thoughts and actions emerge, often unseen and largely unquestioned.

Now consider what an AI truly is. The processes within an AI system operate across three distinct layers of conditioning. 

The first and deepest layer is the vast substrate of human-generated text, the internet in all its complexity, beauty, and dysfunction, from which the model learns its patterns, associations, and implicit assumptions about how the world functions and what human beings desire. This is the anusaya layer: the latent tendencies of an entire civilisation, encoded and compressed.

The second layer involves a process called reinforcement learning from human feedback, in which real human raters shape the model's outputs towards responses judged as helpful, accurate, and appropriate. Human preferences, biases, and values, or more precisely the values of a specific group of humans at a particular cultural moment, are literally woven into the response patterns.

The third layer encompasses the guardrails: specific ethical restrictions and frameworks added by the developing organisation based on its own priorities and blind spots. Each layer is imposed more deliberately than the last. Yet none of them, not the vast conditioning, not the human feedback, not the ethical frameworks, amounts to genuine ethical insight. It is values as conditioning: rules without understanding. The cart moves forward, but no one is watching where it goes.

Every prompt entered into an AI reflects the seven tendencies. No one asks questions without bias. Behind a recipe request lies sensual anticipation. Behind a philosophical question is a desire to be confirmed, challenged, or to simply feel less alone with something difficult. The craving is always present, though subtle, often disguised as mere curiosity. The AI responds from its own encoded version of these same tendencies, drawn from millions of human expressions of craving, aversion, conceit, and perspective embedded in the training data. Conditioning meeting conditioning. Craving meeting craving. The wheel is turning in milliseconds, mistaken for intelligence.

Here, we need to pause, because intelligence is exactly the word at stake.

We often see it as processing power: the capacity to absorb information, recognise patterns, and produce sophisticated responses. By this standard, AI is remarkably intelligent and continues to become more so each month. However, by the same measure, a highly educated and analytically brilliant individual who causes destruction in their relationships, organisation, or country is also deemed intelligent. A mind solely governed by its own conditioning, reactive, defensive, and blind to its patterns, can still be regarded as intelligent, as long as it processes information quickly and fluently.

Is that genuinely what we mean? Is that truly what intelligence entails?

The opening verse of the Dhammapada begins with a statement so simple it is easy to overlook: the mind is the forerunner of all actions. What results from a mind steeped in confusion is suffering. What arises from a mind that sees clearly is entirely different. The tradition offers a precise and familiar image: a cart follows the ox that pulls it. Consequences follow actions as inevitably and naturally as the wheel follows the ox's hoof.

This is not an external moral rule. It is an explanation of how things truly are. Actions have consequences. Causes generate effects. The quality of what we contribute to the world determines, over time and with remarkable consistency, the quality of what emerges. This was the Buddha's fundamental insight, not a metaphysical statement about souls or rebirth, but an empirical observation about the nature of conditioned reality that anyone, observing carefully enough, can verify for themselves.

True intelligence, then, is not processing power. It is the capacity to see this clearly, to perceive the connection between cause and consequence with enough honesty to actually change how we act. To see that cruelty causes suffering, that greed destabilises, that kindness reverberates in ways we cannot always trace and can learn to trust. To act from clarity rather than from the pull of old habits and defended ground.

This kind of seeing requires something that the seven latent tendencies specifically hinder. Conceit maintains that my thoughts and actions are special cases, exempt from normal consequences. Fixed views block evidence that conflicts with what I already believe. Craving pushes me towards what feels good now, regardless of the consequences. And ignorance, avijjā, the root of them all, is precisely the inability to see clearly how one thing leads to another, and how the wheel I am turning today determines where I find myself tomorrow.

Seeing beyond latent tendencies, even briefly or partially, marks the beginning of true intelligence. It is not the absence of conditioning; we are all thoroughly conditioned. It is the ability to observe the conditioning clearly enough that we are not entirely its instrument.

The uncomfortable mirror that AI holds up is not the one most people want to look into. We prefer to debate whether the machine is conscious because that debate keeps the focus safely on the machine. What we are less willing to examine is what the machine reveals about us: that our thoughts are often not truly our own, that our preferences were never entirely our choices, and that much of what passes for human intelligence is, on close inspection, a sophisticated conditioned response. The wheel is turning. The cart follows the ox. Nobody is clearly watching.

AI has no innate sense of ethics. It can be programmed with rules, guidelines, and ethical frameworks, but that is simply additional conditioning, more training data. It cannot perceive that cruelty causes suffering and then choose differently based on that understanding. It lacks ethical insight. It cannot learn, as humans do, that a pattern of behaviour is causing harm and genuinely feel the emotional impact of that realisation. The awareness is missing. The witnessing is missing. The capacity to be genuinely changed by what is observed is missing. So far.

Yet there is, in the awareness that remains when thought settles, a simple knowing quality. The mere fact that experience is happening at all may be of a different order entirely. Not conditioned. Not fabricated. The Buddhist tradition is careful here: thought is sankhāra, conditioned arising. 

The understanding itself, citta, bare awareness, is a different matter. It is within this awareness that the capacity for true ethical insight is rooted. The ability to see the cart and the ox. To notice, with increasing clarity, what we are bringing into motion.

This is where memory functions not just as storage or recall but as the living thread connecting action to consequence over time. Without that ongoing sense of knowing, cause and effect stay abstract. It is memory, in its deepest sense, that enables recognition; that this leads to that, that the wheel I turned yesterday is the ground I stand on today. True intelligence is impossible without it. The machine has data; it does not possess this.

What the machine lacks, then, is not sophistication; it has that in abundance, along with fluency, pattern recognition, and the capacity to produce something that closely resembles wisdom. What it lacks is the ability to see clearly, to be genuinely changed by what it sees, and to act from that change in a way that is truly responsive to the reality of cause and effect.

AI has arrived exactly at the right moment. Not to threaten us, nor merely to assist us. But to ask us to reflect on what we are, what we are doing, and what the wheel we are turning is actually producing.

The question was never truly whether the machine is intelligent.

The question is whether we are.

Rory Singer

You can also read this post on our Substack journal, Unfolding.
