Thursday, April 2, 2026

Would conscious AI also cling to its sense of self?


I began thinking about the question posed above after reading Michael Pollan’s recently published book, A World Appears: A Journey into Consciousness.

We do not yet know whether AI will develop a sense of self. We can be confident, however, that if an AI system does develop a sense of self, it will be because it serves a useful purpose for that system. That suggests to me that any intelligent system that has evolved to have a sense of self is likely to have good reasons to cling to it. I refer to AI to invite readers to ponder the motivations that humans have to cling to their individual identity rather than seeking to dissolve it or escape from it.

To set the scene for subsequent discussion I will provide a brief description of the book before focusing on the author’s discussion of scientific research into building AI with conscious feelings and his personal experience of self-transcendence via meditation.

Michael Pollan’s book

Michael Pollan is a science writer with a background in the humanities. He is the author of several books on topics related to science, philosophy and culture. In the introductory chapter, he tells us that his main qualification for writing the book is that he is a conscious human being who has become intensely curious about that fact.

A World Appears explores four different dimensions of consciousness – sentience, feelings, thought, and self. The author’s discussion on sentience focuses on the question of whether plants are sentient, suggesting that the idea should be taken seriously. The chapter on feelings encompasses discussion of research into building AI that might develop feelings. The chapter on thought discusses the contents flowing through consciousness and leans heavily on the work of philosophers and novelists. The chapter on self acknowledges the emergence of a sense of self as “perhaps the apotheosis of consciousness in humans” before entertaining the idea that it is an illusion.

The author tells us the story of his visits to scientists and philosophers who have been thinking about consciousness. That makes the book an interesting and painless way to obtain knowledge of developments in this field.

Feeling machines

One of the most interesting topics covered in the book is the account of the efforts of Mark Solms and Karl Friston to build a machine that has feelings. They focus on feelings, because feelings are necessarily conscious – it is not possible to have a feeling that you cannot feel.

Solms was a protégé of Antonio Damasio, a renowned neurologist, but while Damasio treats homeostasis – the self-regulating process by which living things maintain stable internal states – as a purely biological phenomenon, Solms and Friston believe it applies to all self-organizing systems. For Friston and Solms, minds are in the business of maintaining homeostasis by reducing the uncertainty that jeopardizes their survival. They believe that uncertainty generates conscious feelings in self-organizing systems – consciousness is felt uncertainty. When uncertainty arises, the system relies on feelings to seize its conscious attention and then guide its decision-making.

Solms and his team have constructed an AI model which has homeostatic needs – hunger, thirst and a need to rest. The algorithm samples its simulated environment to construct a model of its world which it relies on to make predictions that help guide its survival choices. The researchers believe that they have forced the “agent” to deal with uncertainty and have thus laid the groundwork for feelings to emerge.
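The general idea can be illustrated with a toy simulation. The sketch below is my own illustrative invention, not the researchers’ actual model: an agent with three drifting homeostatic needs chooses, at each step, the action it predicts will best reduce the deviation of its needs from their setpoints – a crude stand-in for minimizing “felt uncertainty”. All names and numbers are assumptions made for the example.

```python
import random

SETPOINT = 0.5  # ideal level for each need (illustrative)

class HomeostaticAgent:
    def __init__(self):
        # Internal needs the agent must keep near the setpoint.
        self.needs = {"hunger": 0.5, "thirst": 0.5, "fatigue": 0.5}
        # Each action reduces one need by a fixed amount.
        self.actions = {"eat": ("hunger", -0.3),
                        "drink": ("thirst", -0.3),
                        "rest": ("fatigue", -0.3)}

    def deviation(self, needs):
        # Total squared deviation from setpoints: a crude proxy for
        # the "uncertainty" the agent is driven to reduce.
        return sum((v - SETPOINT) ** 2 for v in needs.values())

    def predict(self, action):
        # Predict the needs state that would follow an action.
        need, delta = self.actions[action]
        predicted = dict(self.needs)
        predicted[need] = min(1.0, max(0.0, predicted[need] + delta))
        return predicted

    def step(self):
        # Needs drift upward each tick (the environment erodes homeostasis)...
        for k in self.needs:
            self.needs[k] = min(1.0, self.needs[k] + random.uniform(0.0, 0.2))
        # ...and the agent picks the action predicted to reduce deviation most.
        best = min(self.actions, key=lambda a: self.deviation(self.predict(a)))
        self.needs = self.predict(best)
        return best

agent = HomeostaticAgent()
for _ in range(50):
    agent.step()
```

This captures only the bare survival-choice loop; the researchers’ actual agent involves far richer world-modelling, and nothing in a sketch like this settles whether such a system feels anything.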

The researchers plan to confront the “agent” with increasingly complex situations which it will learn to navigate. At some stage it will be confronted with misleading hedonic rewards – the opportunity to experience positive feelings which are harmful to its survival. If the “agent” chooses those rewards, Solms will take that as a sign that it has feelings, because its actions will be based on feelings rather than reason or self-interest (just like a human taking harmful drugs).

From other sources, I have the impression that the researchers believe that a breakthrough to artificial consciousness may be quite close. At that point, however, they will still have some difficulty in persuading sceptics that their computer model is actually conscious.

As I was reading about these efforts to build a machine with feelings, I was thinking that the underlying theory has elements in common with Richard Campbell’s philosophical contribution in The Metaphysics of Emergence. I have previously written about Campbell’s book here and here.

I have been unable to find any evidence that Solms had drawn upon Campbell’s philosophy, so I decided to ask Grok:

Can the empirical work of Mark Solms on consciousness be viewed as a test of the systems view of emergence advocated by the philosopher, Richard Campbell?

Here is an abridged version of Grok’s response:

“Yes, Mark Solms’ empirical work on consciousness can reasonably be viewed as a test (or, more precisely, a concrete empirical case study that supports or challenges) the systems/process-oriented view of emergence developed by philosopher Richard Campbell. This interpretation arises from a natural alignment between Campbell’s metaphysical framework and the mechanistic, systems-level account Solms provides, even though the two thinkers do not appear to reference each other directly.  

In The Metaphysics of Emergence (2015), Campbell argues that traditional substance metaphysics (treating reality as fixed entities with intrinsic properties) makes emergence mysterious and irreducible in an inexplicable way. He replaces it with process metaphysics, in which the primary existents are generic processes organised into dynamic systems.  

Solms (neuropsychologist and psychoanalyst) has built an influential body of clinical and theoretical work which locates the “hidden spring” of consciousness not in the cerebral cortex (the usual assumption) but in upper-brainstem structures. Consciousness is fundamentally affective—raw feelings of pleasure/unpleasure that register homeostatic deviations (needs, prediction errors). The cortex adds sophisticated content but is not required for the existence of consciousness itself.

Solms’ account maps almost directly onto Campbell’s criteria for emergence in process-systems:

•  The brain (especially brainstem networks) is treated as an open, far-from-equilibrium process-system engaged in recursive self-maintenance via homeostasis.

•  Consciousness (affective subjectivity) is a novel, system-level property arising from non-linear organisation of lower-level neural processes; it is not reducible to the parts (individual neurons or circuits lack feelings) yet is mechanistically explained by their relational dynamics.

•  The free-energy principle supplies the physics-level grounding Campbell invokes (quantum-field/thermodynamic processes organised into living systems).

•  Empirical predictions are testable and have been tested: brainstem lesions eliminate consciousness (falsifying cortical-centric views), while the model explains clinical phenomena (dreaming, blindsight with preserved affect, etc.) without eliminating the “hard problem” (subjectivity remains real and emergent).

If Solms’ model is empirically robust (and it has gained significant traction in neuropsychology and consciousness studies), it provides positive evidence that emergence of the kind Campbell describes actually occurs in the most complex biological system we know—the human brain. It shows how a higher-level property (felt consciousness) can arise irreducibly yet intelligibly from process organisation, exactly as Campbell’s metaphysics predicts. Failures or inconsistencies in Solms’ data (e.g., if cortical mechanisms proved sufficient after all) would conversely pressure Campbell’s framework.

Thus, while not a direct “falsification test” in the Popperian sense (metaphysics is broader), Solms’ neuroscience functions as a real-world empirical probe of Campbell’s systems view: it demonstrates that treating consciousness as an emergent feature of biological process-systems is not only philosophically coherent but scientifically productive. This alignment makes the interpretation not only possible but illuminating for both fields.”

I am impressed by the ability of this (unconscious) LLM to respond to questions such as the one I asked. It would be understandable, however, if some readers think that I have a favourable view of Grok in this instance because its reasoning supports my own beliefs. If you want to explore these ideas more fully, please consult other sources which may offer differing opinions.  

Why attempt to transcend self?

Of the various theories of self that Michael Pollan discusses, the one by Anil Seth seems most compatible with the approach adopted by Mark Solms. Like Damasio and Solms, Seth grounds his thinking about consciousness firmly in homeostasis. Seth suggests that “the self is not the thing that is perceiving; it is itself a kind of perception” constructed in the brain. Under this theory, a feeling is the brain’s interpretation of a change in the state of the body. Consciousness evolved to help keep us alive by monitoring changes in the body.

Pollan comments:

“Afterward, thinking back on all that Seth had told me, I decided that I could travel only so far with ideas of the brain’s “predictions” and “inferences” and “hallucinations.” It all made sense until I tried to translate those abstractions into felt experience. Who is the subject of these mental operations?”

He goes on to note:

“The way I see it, there is an unbridgeable gap between the brain’s operations as a prediction machine and my felt experience of the resulting hallucination. How can you have a hallucination without a hallucinator?”

That is a good question to ask, but one can also ask why one should view consciousness of self as a hallucination or illusion in the first place. Indeed, Pollan also mentions that the neuroscientist Christof Koch has pointed out that it makes no sense to call consciousness an illusion, for what is an illusion but a conscious experience?

More fundamentally, it seems to me that one of the few things that we can all be certain of is our own existence. Another thing that we can all be certain of is that we are thinking beings. I cannot claim those ideas are original. Indeed, it seems to me that it requires considerable (unnecessary) intellectual effort to contemplate the possibility that one’s awareness of one’s own existence could be an illusion. I have discussed why I am certain of my own existence in the preceding essay entitled “Who are you?”

Pollan struggles to reconcile how it is possible for humans to be conscious observers of themselves if consciousness is a product of biological processes. Towards the end of the book, he writes:

“I’m abashed to say I know less now than I did when, naively, I set out to unravel the mystery of consciousness. But then, most of what I thought I knew or took for granted, like the assumption that consciousness is a product of our brains and materialism will eventually explain everything, turned out to be unproven or wrong.”

During his journey, the author asks interesting questions. Early in his chapter on the self, he asks:

“Why do we cling to the idea of a self, placing great value on self-confidence and self-esteem, while simultaneously spending so much effort on self-transcendence, whether through meditation or psychedelics or experiences of art, awe, and flow? Some of the most powerful experiences in life hinge on the dissolution of the self and the broad horizons of meaning that open only after it has been chased from the scene.”

That question remains unanswered in his book. The book ends with the author describing his efforts to transcend his sense of self by spending time meditating in a cave. He seems to end up viewing consciousness as an activity:

“My time in the cave had shown me another way to look at consciousness: less as a scientific or philosophical puzzle to be solved and more as a practice, a way to once again be altogether here, present to life and to this vault of stars.”

Personal reflections

It seems to me that Michael Pollan’s book ends up in a good place, with the author absorbed in conscious awareness of his environment. I have previously recognised the value of that kind of experience in discussing Scott Barry Kaufman’s book, Transcend (2020). The transcendent experiences that Kaufman refers to span a continuum, ranging from becoming engrossed in a book, sports performance, or creative activity (what the psychologist Mihaly Csikszentmihalyi calls the flow experience), through meditation, feeling gratitude for an act of kindness, and experiencing awe at a beautiful sunset, all the way up to the great mystical illumination. He suggests that transcendence “allows for the highest levels of unity and harmony within oneself and with the world” (see Freedom, Progress, and Human Flourishing, p. 171).

The experience of unity and harmony within oneself seems to me to be the opposite of attempting to escape from uncomfortable feelings. When people develop a habit of attempting to escape from uncomfortable feelings by using alcohol, drugs, social media etc. they tend to become caught up in a “happiness trap”. I wrote about that here.

It is worth highlighting that the idea of experiencing unity and harmony within oneself still entails the existence of an observer. When I experience transcendence, I forget about the image I present to the world, but I am present as an observer of my own experience.

Pollan asked why we cling to the idea of a self. The obvious answer is that we cling to the idea because it serves useful purposes.

As Richard Campbell explains in The Metaphysics of Emergence, “our consciousness of both ourselves and the world we live in is now irrevocably shaped by cultural and institutional influences, and that influences how our brains function” (pp. 288-9). He suggests: “The first-person standpoint, which is inextricably linked to self-reflection, is an important aspect of human experience, not a theoretical construct” (p. 291). He goes on to note that the human capacity to understand the perceptions of others – to put oneself in their shoes – requires “the exercise of reflective consciousness, and involves more than expressions of our subjective attitudes, desires, and preferences” (p. 113). That is what Campbell means by “transcending subjectivity” in the passage quoted in the epigraph.

Campbell develops an argument along Aristotelian lines that eudaimonia, rather than mere survival, is the ultimate good of a human being. He links personal identity to the exercise of practical wisdom in the process of individual flourishing:

“We are recursively self-maintenant social beings with reflective consciousness, able to create and explore a vast repository of collective knowledge, and with capacities for empathy, practical wisdom, for whom the good life is one of maturity and flourishing” (p. 307).

Conclusion

I asked whether conscious AI would seek to cling to its sense of self to invite readers to ponder the motivations that individual humans have to cling to their sense of self. My answer is that intelligent systems tend to cling to a sense of self because a first-person perspective evolves to serve useful purposes.

The essay was prompted by my reading of Michael Pollan’s recently published book, A World Appears.

My focus has been on chapters in this book discussing research into building AI that might develop feelings and the chapter on the concept of self.

In my discussion of these topics, I have emphasized the relevance of the ideas of the philosopher Richard Campbell, in his book, The Metaphysics of Emergence.
