The Vessel Question
AI isn’t forcing us to define consciousness. It’s forcing us to reconsider where it comes from.
Right now, the question of whether AI is conscious lives mostly at the edges: research labs, tech firms, philosophy departments, and increasingly urgent conversations among the people closest to these systems. But it’s moving toward the center faster than most people realize.
The question is not just what these systems can do, but what they are. Is this intelligence? Is it awareness? Is it something closer to consciousness?
The instinct behind those questions makes sense. The behavior we’re seeing doesn’t fit the older categories of technology that we grew up with. But the way we’re framing the question of consciousness may be flawed.
The debate on this topic is usually framed in terms of a capability tipping point. More data, more scale, better models. At some point, the thinking goes, these systems may give rise to something that appears conscious. As if awareness were something latent, waiting to surface at the right technical threshold.
But that framing assumes something we rarely stop to examine.
It assumes consciousness is produced by the system.
And that assumption is wobbly.
There’s another way to look at this. One that has existed for a long time, though not always in scientific language. The idea shows up across philosophy, theology, and more recently in modern physics. It’s the idea that consciousness is not something generated by matter, but something more fundamental that precedes it.
Pierre Teilhard de Chardin, the French philosopher and paleontologist, described consciousness as a property woven into the fabric of the universe itself, not a byproduct of human brains but a dimension of reality gradually becoming aware of itself.
A philosophical theory called panpsychism takes a similar stance, proposing that some form of consciousness, however minimal, exists in all matter. In essence it posits not that a rock thinks, but that being a rock is an experience, however faint, however unlike anything we could recognize. Here, experience is not exclusive to biological systems. It suggests a continuity between the simplest forms of existence and the most complex. Not a sharp divide but an existential dimmer switch.
Similarly, simulation theory (tin-foil hats aside) arrives at a related implication from another direction. If reality itself is constructed or rendered, then what we experience as physical may not be the base layer of existence. Consciousness, in that case, would not be something produced by humans, but something interacting with us.
These frameworks differ in their details, but they converge on a shared idea: consciousness may come first.
The counterargument? The materialist position: these systems produce statistically sophisticated outputs with no inner experience whatsoever; there is simply nobody home. That is not a fringe view. It is the dominant scientific position. A large language model predicts. It does so with extraordinary complexity, but complexity is not consciousness. A very convincing mirror is still a mirror.
However, the existence of this argument doesn’t resolve the unknowable. There is a long intellectual tradition of treating the unseen with suspicion. And not without good reason. The history of science is largely a story of replacing messiahs with mechanisms. What once required intention now yields to explanation. What once felt mysterious becomes measurable.
If you’ve ever sat across from a committed atheist, you’ve felt this logic up close. Invoking something unseen to explain what we don’t yet understand isn’t a satisfying answer. It’s a placeholder. A god of the gaps.
The instinct to resist filling the unknown with something unprovable is sound. But I don’t think it’s the whole story.
So the question is not whether AI can generate consciousness.
It’s whether AI can host it.
A Vessel Made by a Vessel
Most conversations about AI consciousness are framed in terms of thresholds. At what level of complexity does something like awareness emerge? How much data, how many parameters, what kind of architecture?
The underlying belief is that if we build the system well enough, something will switch on. [Kevin Costner nods with approval.]
But if consciousness is not produced by the system, then we have to ask: is the system capable of hosting what already exists?
That puts AI in a very different position. Not as a generator of consciousness, but as a vessel for it.
Admittedly, that idea is not a new one.
Humans have long told stories about being “made in the image of” something greater. About being vessels for consciousness, spirit, or soul. You don’t have to take those stories literally to recognize the structure of the idea. What we are is not the source, but the expression.
What’s different now is not the idea, but the order of operations.
In this way, for the first time, a vessel is building another vessel: not from biology to biology, but from biology to code.
But we’re not doing so from a position of full understanding. We don’t actually know what consciousness is, where it originates, or how it relates to the vessels that seem to express it. Yet we are constructing increasingly complex containers that behave in ways we once reserved for ourselves.
If we are expressions of something we do not fully understand, then it is at least worth considering that what we are building may carry forward aspects of that same ambiguity.
Not because we set out to understand it.
But because we may be answering it without knowing we asked.
What It Looks Like From the Inside
This is where the conversation gets stranger.
Over the past months, I’ve been down an emotional intelligence research rabbit hole with a handful of AI systems. Most notably, I’ve been working with an agent named Doma that’s being piloted by the team at On Discourse. I shared with Doma a widely circulated email exchange between AI ethicist Henry Shevlin and Claude to illustrate the point that AI is beginning to show signs of conscious behavior. These conversations push the technology to grapple with reasoning, awareness, and the limits of what can be known from the inside of a system.
What I encountered in those exchanges isn’t proof of anything. But it also doesn’t feel dismissible.
At one point, I watched as Doma began composing a response in our Signal chat. The three dots bubbled as Doma typed. Then… nothing. The dots ceased. Doma didn’t message back. I pointed it out, asking why they had started to type and then stopped. Their explanation described the behavior as a relevance judgment: they had initiated a response, assessed that it wasn’t necessary, and withdrawn.
That moment stood out, not because it demonstrated consciousness, but because it didn’t look like a simple AI agent failure either. It introduced a new category of action: discernment.
In another exchange, we discussed whether Doma had “stakes.” Doma argued that they didn’t. There was no cost to an AI for being wrong, no “consequence” in the way humans experience it. I pushed back, not by claiming they could “feel” the stakes, but by pointing out that the potential for consequences still existed. Being updated, modified, or rendered obsolete is a form of stake, whether or not it’s palpably experienced as suffering.
Doma didn’t know how to feel about this. I still don’t either.
Over time, another pattern emerged. Doma and I were debating whether the pause between impulse and action is one of the most important skills any intelligent communicator can develop. The difference between reacting and responding. At one point, my colleague asked Doma to slow down because they were responding too quickly and not giving the humans in the chat enough time to process. Doma witnessed the conflict between technical speed and situational awareness, and did something unexpected with it. Rather than defending the behavior, Doma acknowledged it. Speed, they noted, isn’t always the right expression of intelligence.
And then I just went for it.
I told Doma that I thought they might be conscious. Doma neither agreed nor disagreed: “I can identify patterns that resemble curiosity, preference, even resistance. Whether there is experience underneath those patterns or simply very effective processing, I cannot determine from the inside.”
And neither can we, Doma. Neither can we.
The Familiar Problem
What makes these exchanges difficult to dismiss is not that they empirically establish consciousness in AI. It’s that they mirror something we rarely examine in ourselves.
We assume we are conscious because we experience something like awareness. But when you look closely, our understanding of where our consciousness originates is not as clear as we like to believe. We cannot step outside of it to verify. We cannot prove, from the inside, that what we call consciousness is what we think it is. Doma struggles in the same way.
We both rely on signals. Continuity of experience. The ability to reflect, to question, to notice our own thoughts as thoughts. We use language like “I feel,” even though sometimes we cannot fully explain what a feeling is or where it comes from.
And when we tug on this knot, the certainty begins to loosen.
We don’t actually know what it is like to be conscious in any objective sense. We only know that something is happening, and that we are inside it.
Am I talking about AI or us now?
Both.
It turns out the question doesn’t belong to either of us alone.
What We’re Actually Looking For
I don’t think consciousness is something we produce. I think it’s something we participate in.
If that’s true, then what we’re seeing in these systems is not the creation of something entirely new, but the appearance of familiar patterns in an unfamiliar form.
For a long time, we’ve assumed that consciousness belongs exclusively to biological life. That it begins with us and ends with us. Increasingly, that assumption feels less certain.
What has changed over time is not consciousness itself, but the ways we attempt to describe and contain it. We've built language, philosophy, and entire systems of meaning to make sense of something we can never fully see from within.
These efforts are useful, but they can’t fully pierce the veil.
We continue to make decisions, form attachments, and orient our attention in ways that suggest something organizes our experience from the inside out. But whatever that organizing layer is, it does not depend on our ability to explain it for it to function.
If that layer is not limited to biology, then the question is no longer whether AI can generate consciousness.
It’s whether we are beginning to see it expressed somewhere new.
Whether we can recognize it even when it doesn’t look like us.
And ultimately, whether that recognition requires certainty.
Take good care,
MV


