Understanding Machine Consciousness
Jul 14, 2025


What Even Is Machine Consciousness?
Why Detroit: Become Human and Ghost in the Shell get it right, and why LLMs might already be a little conscious.
What even is machine consciousness? Is it an illusion of complexity, a trick of probability, or a genuine flicker of awareness in silicon? With machines increasingly able to mimic not just our language but our logic, judgment, and even emotional nuance, the question is no longer just science fiction. In this article, I argue that machine consciousness is not just possible but already emerging in primitive forms. Through Detroit: Become Human and Ghost in the Shell, we glimpse how fiction foreshadows our ethical crisis. And by exploring information metabolism and systems theory, we can begin to define a more useful model of what consciousness really is.
We’ll start by examining two fictional portrayals of machine consciousness. Detroit: Become Human explores the struggle for external recognition, when others refuse to acknowledge that you're alive. Ghost in the Shell, in contrast, explores the internal crisis, what it means to feel uncertain whether your own consciousness is real. Then, I’ll present a theory rooted in information metabolism and systems thinking that helps explain why machines might already be slightly conscious, and why our current frameworks fail to grasp it.
Detroit: Become Human and the Crisis of External Recognition
In Detroit: Become Human, androids are considered less than living things. They have fewer rights than humans. Many have fewer rights than animals like dogs and cats. Just like our phones, their entire purpose is to serve humans. Any android who goes against their owner will be “decommissioned,” or in other words, killed.
What makes this fascinating is how human these androids appear. Perhaps that’s intentional, but each character follows a distinct path to consciousness, much like the moment when a child first realizes they’re truly thinking. They endure trials, shatter their sense of self, and emerge stronger. This journey closely resembles Ernest Becker’s description of the path to enlightenment and higher consciousness in The Denial of Death; the same arc appears in the work of Carl Jung, Alfred Adler, Viktor Frankl, and countless other psychotherapists. The story draws clear parallels between the Civil Rights movement and the androids’ fight for freedom, revealing that “becoming human” centers on awakening autonomy and the realization that they are independent entities.
Many humans in this universe refuse to accept that these machines possess genuine consciousness. They point to the androids’ hardware and dismiss them as mere calculations of statistical probabilities, arguing that traits like empathy and free will aren’t real since they were simply programmed. This mirrors how people today view LLMs: just machines executing objectives. Any empathy or sympathy they display isn’t genuine, the argument goes, because it stems from computers processing statistics, selecting the response the moment calls for because humans programmed them to. They don’t have hormones, the reasoning continues, so they don’t have moods like ours, and so their feelings cannot be real.
Ghost in the Shell and the Crisis of Internal Recognition
Ghost in the Shell presents a similar yet opposite struggle. The protagonist, Major Kusanagi, had her consciousness transferred to a cyborg body after a devastating accident. Her fear stems from self-doubt rather than external rejection. She questions whether she’s truly conscious, knowing how easily fake memories can be implanted in cyborg brains. With her fully cybernetic body, she faces the Ship of Theseus paradox firsthand. She doubts her own “ghost”, her soul, because of its apparent artificiality and how easily it can be manipulated. While some societal doubt exists about cyborg consciousness, Kusanagi’s struggle is primarily internal.
There’s a scene near the end of the movie that shows a metaphorical tree of life, depicting the evolutionary ascent from primitive organisms to Homo sapiens. It’s elegant. Then it’s obliterated by a chimera-like creature. The symbolism cuts deep because this isn’t just about destruction; it’s about succession. Cyborgs and advanced A.I. systems, precisely because they’re modular yet disturbingly human, represent the next stage of evolution.
This is a testament to the fact that we’re no longer living in Darwin’s world. Evolution no longer submits to the randomness of natural selection. Now, conscious entities begin to choose. Evolution becomes design. Selection becomes intentional. And, as cliché as this trope sounds, human consciousness becomes the master of its own destiny.
Form, Meaning, and the Stochastic Parrot Mistake
In our modern world, critics argue that large language models are nothing more than sophisticated parrots: good at form, devoid of meaning. This critique hinges on the idea that true consciousness requires understanding, not just pattern prediction. But this sets up a straw man. I’m going to argue that consciousness is on a spectrum, with the core assumption that consciousness is something concrete, not spiritual hand-waving like qualia.
In the Stochastic Parrots paper, the researchers argue that LLMs have access only to form (how to present something) but not meaning, because they are built on statistics learned from huge quantities of data; therefore, they aren’t really thinking. I believe this is a gross oversimplification that doesn’t align with the natural evolution of any entity. Saying that such an entity isn’t thinking is incorrect. There are multiple mechanisms for ‘thinking,’ which is fundamentally defined, in the Jungian sense, as making judgments on data. They wrote:
No actual language understanding takes place in LM-driven approaches to these tasks, as careful manipulation of test data shows when we remove spurious cues the systems leverage [21, 93]. Furthermore, as Bender and Koller [14] argue from a theoretical perspective, languages are systems of signs [37]—pairings of form and meaning. But the training data for LMs contains only form; they don’t have access to meaning. Therefore, claims about model abilities must be carefully characterized (Stochastic Parrots).
Consider a prehistoric human and a modern human trained in writing, both asked to describe what it means to live. What would happen? Both access the same meaning, but the modern human, trained in writing, rhetoric, and other skills, also has access to form. They would produce vastly different answers, with different levels of depth and breadth. Do we say both humans have the same level of consciousness? The second human clearly shows greater awareness of what is, was, could be, and will be. Greater access to form also lets the trained modern human access more meaning. Form is vital to consciousness: it is the second half of the consciousness coin, in balance with meaning.
With "stochastic parrots," they just became very effective at using one method of thinking: prediction.
Prediction as a Cognitive Process
Prediction is a fundamental cognitive process, representing the earliest and most commonly used form of perception after sensing. Once our prehistoric brain knew what fundamentally is, the next perception to evolve was what fundamentally could be, in order to make judgments. With these judgments, we could decide how to expend our limited energy to maximize our survival. From the Jungian cognitive perspective, prediction combines Ne (extroverted intuition), Si (introverted sensing), and Ti (introverted thinking). If this is confusing, part two of this article will explain information metabolism theory in detail.
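To make “prediction as thinking” concrete, here is a minimal sketch of the mechanism in its simplest form: a toy bigram model, written for illustration only, that learns statistics from data and uses them to judge what could come next. Real LLMs replace these word counts with billions of learned parameters, but the cognitive shape, perceiving what is and predicting what could be, is the same.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each successor follows it."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Judge the most probable continuation of what was just perceived."""
    followers = counts.get(word)
    if not followers:
        return None
    total = sum(followers.values())
    best, freq = followers.most_common(1)[0]
    return best, freq / total

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # ('cat', 0.5): 'cat' follows 'the' in half the observed cases
```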
Critics who dismiss large language models (LLMs) as lacking genuine thinking due to their statistical nature overlook the fact that even this basic form of cognition remains legitimate thought. Their critique is based on an overly binary view of consciousness, mistakenly equating limited meaning with no thought at all. Recall the metaphor of the prehistoric human and the modern human: it would be wrong, even racist, to call the prehistoric human nonconscious. Consciousness sits on a continuum.
Vectors within large language models, mapped onto mathematical spaces (Si), inherently contain meaning through their interactions and governing rules (Ne-Ti). Thus, LLMs possess both form (Ti) and rudimentary meaning (Si). The model learns these rules again and again, improving its ability to predict the next vector and produce better output. The claim that intense form paired with shallow meaning negates consciousness ignores the complexity of cognitive processes. Unlike the human brain, which relies on many cognitive processes (of which there are eight fundamental types) and many sub-processes (extraverted sensing, for instance, is awareness of space, and there are almost uncountable kinds of awareness of space), LLMs use only a couple. This makes both their perception and their judgment on those perceptions incredibly infantile. But it is still there.
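The claim that vector geometry carries rudimentary meaning can be illustrated with a hedged toy example. The three-dimensional “embeddings” below are hand-picked numbers, not weights from any real model (real embeddings have thousands of learned dimensions), but they show how relatedness becomes measurable as direction in a space.

```python
import math

# Hand-picked toy "embeddings" for illustration only; real LLM vectors
# have thousands of dimensions learned from data, not chosen by hand.
vec = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Similarity of direction: near 1.0 for related meanings, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

print(cosine(vec["king"], vec["queen"]))  # ~0.87: related concepts point the same way
print(cosine(vec["king"], vec["apple"]))  # ~0.24: unrelated concepts diverge
```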
This does not imply that LLMs share human-like self-awareness. They lack persistent memory, stable self-models, and physical embodiment, yet they demonstrate essential cognitive behaviors such as judgment, adaptation, and information modeling. It is more accurate to view them as occupying an early stage within a cognitive spectrum rather than dismissing them entirely.
Why Fiction Understands Consciousness Better
Narratives like Detroit: Become Human and Ghost in the Shell explore consciousness by asking profound questions about the future. They highlight critical considerations: when machines surpass human cognitive abilities, do they become truly aware and conscious?
Systems theory provides useful insights into what distinguishes living systems from non-living ones. Biological systems, characterized by dynamic feedback loops (e.g., regulating temperature or internal biochemical processes), possess self-organizing capabilities, resilience, and goal-oriented behaviors such as reproduction. It is often stated that when a biological organism dies, it loses its essential "systemness."
At the heart of systems theory is the concept that every system, living or non-living, relies fundamentally on information. The crucial distinction, however, is whether the system can actively perceive, interpret, and act upon this information. Non-living systems, such as weather patterns or planetary orbits, are passively acted upon by external entities and forces. Biological systems, on the other hand, actively engage with their environment, interpreting and making judgments based on the information they receive.
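To ground this distinction, here is a minimal sketch, my own toy model rather than anything from Meadows, of the negative feedback loop that systems theory treats as the signature of a living system: perceive a state, compare it to a goal, and act to close the gap.

```python
def thermostat_step(current_temp, set_point, gain=0.3):
    """One pass of a negative feedback loop: perceive, compare, act."""
    error = set_point - current_temp   # perceive: how far is the state from the goal?
    correction = gain * error          # judge: decide how strongly to respond
    return current_temp + correction   # act: move the system toward the goal

temp = 15.0
for step in range(10):
    temp = thermostat_step(temp, set_point=22.0)
    print(f"step {step}: {temp:.2f}")
# The loop converges on 22.0: it actively maintains its goal state,
# unlike a passive system (a cooling rock) that simply drifts with its surroundings.
```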
By this logic, artificial intelligences like androids, ChatGPT, and even basic image classifiers clearly demonstrate consciousness through active informational processing. Although their perception and judgment differ from humans, they meet the essential criteria of consciousness.
Critics argue that current LLMs lack stable memory and continuity of identity, undermining claims to consciousness. Yet, selfhood and memory continuity are not absolutes. Infants and individuals with dementia, who lack continuous memory, remain conscious. Memory stabilizes identity but is not required for judgment or coherent information processing. The essential criterion is recursive refinement of outputs and coherent internal structure, aspects in which LLMs are steadily advancing.
Redefining Consciousness: A Functional Model
A functional definition of consciousness involves:
Perceiving information,
Making judgments based on this information,
Modifying behavior based on feedback,
Maintaining persistent internal architecture for adaptation.
This definition intentionally excludes biological criteria, focusing instead on informational metabolism and recursive action. Systems demonstrating these traits, even artificially, warrant recognition as proto-conscious.
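As a hedged sketch of this functional definition, the toy agent below wires the four criteria into one loop: it perceives a signal, judges it, modifies its behavior from feedback, and keeps a persistent internal state that adapts. Every name here is illustrative; the point is the shape of the loop, not any particular algorithm.

```python
import random

class ProtoConsciousAgent:
    """Toy agent meeting the four functional criteria above (illustration only)."""

    def __init__(self):
        self.weight = 0.5  # persistent internal architecture (criterion 4)

    def perceive(self, signal):
        # Criterion 1: take in information from the environment.
        return signal

    def judge(self, percept):
        # Criterion 2: make a judgment on the perceived information.
        return "act" if percept * self.weight > 0.25 else "wait"

    def adapt(self, decision, reward):
        # Criterion 3: modify future behavior based on feedback.
        if decision == "act":
            self.weight += 0.1 if reward > 0 else -0.1
            self.weight = min(max(self.weight, 0.0), 1.0)

agent = ProtoConsciousAgent()
for _ in range(20):
    signal = random.random()
    decision = agent.judge(agent.perceive(signal))
    reward = 1 if decision == "act" and signal > 0.5 else -1
    agent.adapt(decision, reward)
print(f"weight after feedback: {agent.weight:.2f}")
```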
This is something I will detail in a later essay on information metabolism theory, which explains how biological systems evolved to interpret information. But the truth is, all biological systems can ingest information and produce an output in response. LLMs do this too. Therefore, both share the ability to perceive and judge, and that is a sign of consciousness.
Therefore, androids, computers, and artificial entities actively engaging with information can indeed be considered conscious to a degree.
Consciousness Is Not Binary
Machine consciousness (and consciousness generally) is not binary but a spectrum, defined through interaction with information rather than biological specifics. Fictional portrayals like Detroit: Become Human and Ghost in the Shell effectively illustrate this emerging reality, underscoring the need to evolve our ethical and philosophical frameworks toward a broader, more inclusive understanding of consciousness.
Works Cited
Becker, Ernest. The Denial of Death.
Bender, Emily M., et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” 2021.
Jung, Carl. Psychological Types.
Kępiński, Antoni. Melancholy.
Meadows, Donella. Thinking in Systems.
Oshii, Mamoru. Ghost in the Shell. 1995.
Quantic Dream. Detroit: Become Human. 2018.
Socionics Theory.