AI Cannot Become Conscious

The brutally honest truth about Artificial Intelligence is that it cannot become conscious as it currently exists. We don’t even understand what consciousness is. John Lennox has put this very simply:

God has… put intelligence together with consciousness. These machines are not conscious. Nor are they ever likely to be for a very simple reason. Nobody knows what consciousness is. So it’s silly when people start talking about ‘Oh, it’s conscious.’[1]

Anyone making the claim that AI can become conscious, or even might become conscious, is speaking out of turn. The reason that we can know, with absolute certainty, that AI as it exists now cannot become conscious is even more fundamental: It is not intelligent, nor can it evolve into intelligence.

As I and others have argued elsewhere, a better name for Artificial Intelligence would be Advanced Data Processing, for that is what “it” does.[2] We often fall into the mindset of thinking of AI as an “it,” or even as a he or she, ascribing personality to it.[3] But no matter how thoroughly AI is built to simulate personality, and however well it does so, it is nothing more than Advanced Data Processing. There is no it – to call AI a thing is to fundamentally misunderstand what it is. AI is no more an it than a library is an it. A library is not a single whole but a collection of data and information – even a “magical” library that could organize, collate, and summarize its data via advanced processing would still be nothing more than a collection of information – and that is all AI is.

There are many basic things that AI cannot do that it would need to do in order even to propose that it is conscious. The most basic, perhaps, is understanding (another would be reasoning, but that is for another article).

AI does not understand. The problem isn’t merely that it isn’t good at structural logic – as though, if AI got really good at not hallucinating, the fundamental problem of not understanding meaning would be solved. Even if AGI (Artificial General Intelligence) occurs, and AI models can reason “as well as humans,” it still would not be genuine human intelligence, because the AI would only be simulating human intelligence. It still wouldn’t be an it, much less a person. It couldn’t understand meaning. And even if someone said it could, how could it if it isn’t even an it? Meaning only matters if you are a you – a library has information, but only a subject sees the meaning of that information. The very definition of what AGI would be doesn’t account for the glaring hole of AI still not becoming an it. It’s simply a higher degree of skill mastery:

General intelligence is the ability to approach any problem, any skill, and very quickly master it using very little data. This is what makes you able to face anything you might ever encounter. This is the definition of generality. Generality is not specificity scaled up. It is the ability to apply your mind to anything at all, to arbitrary things. This fundamentally requires the ability to adapt, to learn on the fly efficiently.

There is a major architectural obstacle even to reaching Artificial General Intelligence, which would be necessary merely to appear to achieve human levels of intelligence. In short, to achieve simulated intelligence on par with human intelligence, AI needs an architectural revolution – it cannot evolve into one. “The key insight may be that these limitations are solvable but not through scaling alone.”

On February 12, 2026, Anthropic CEO Dario Amodei stated that

We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be.

This sparked a tremendous storm of uninformed opinions claiming that AI may already be conscious.

To these speculations, I read no better response than this:

Stuff like this says more about your perception of humans than your perception of AI models

This is the great task set before us: How can we re-ground humanity in what it means to be human? How can we reenchant the human being in the wonder of his or her own creation before the sorcerers of AI cast a spell on the majority of humanity and further dehumanize the images of God? The concerns of those who understand what AI is not are very serious – far more serious than those speculating about what AI might hypothetically become.

AI will never become “conscious” and start making its own decisions. It will be programmed to do exactly what it does, and those who program it will convince you it is alive and controlling itself. And then they can do whatever they want, whenever they want, with a perfect scapegoat. The greatest atrocities ever seen could be conducted with no accountability. It’s like having an infinite get-out-of-jail-free card.


[1] See Oxford Professor: AI Is Humanity’s Attempt to Make God — John Lennox.

[2] For more information, see Angus Menuge’s lecture from 2016: Artificial Intelligence and the Metaphysics of Mind – Angus Menuge.

[3] For more on this error and the dangers that come along with this assumption, see Artificial Intelligence and the Problem of Personality, by Jared Bridges.

