A Short Answer
We are far from achieving machine consciousness, and it is possible that we may never get there. However, a definitive answer may not be within our reach yet.
The Question
The question of machine consciousness has never been less confined to the realm of speculative science fiction than it is now. The general public worldwide has become accustomed at an impressive speed to having complex and nuanced conversations with language models (LLMs) like OpenAI’s ChatGPT, and how many of you have not been tempted to thank and express appreciation after a particularly meaningful interaction? The complexity achievable by neural network-based intelligences has probably already surpassed the expectations of those with less daring imaginations, and it is natural to wonder: what else can happen? What’s the next step?
The LaMDA Case
In 2022, Blake Lamoine, a Google engineer, told the world that the company’s LLM LaMDA had achieved consciousness. He based this on the machine’s highly sophisticated conversations and the level of (apparent) self-reflection. Quickly denied by Google itself and discredited by the scientific community, it however made us feel dangerously close to the futures imagined by Philip K. Dick, Isaac Asimov, and Arthur C. Clarke.
Defining Consciousness
To answer this question, we must start from the definition of consciousness, an issue that has been the preserve of philosophy for centuries, and then embraced by psychology and neuroscience. However, it is difficult to give a satisfactory definition. Consciousness is inherently elusive, as it is an entirely subjective experience. Being “conscious” is not entirely synonymous with “sentience” or “self-awareness”, but rather a prerequisite for them. There is general consensus that animals with complex intelligence have self-awareness – such as chimpanzees that are able to recognize themselves in the mirror (not that the “mirror test” in itself is exhaustive for assessing the consciousness of an animal) – but self-awareness is not necessary to have an experience of the world, nor to be aware of having one. In other words, consciousness can be defined in this way: the ability to have an experience of the world, which can be negative or positive – and therefore to experience pleasure and pain, and therefore to have desires.
Why is AI as we know it now devoid of consciousness?
Let’s focus on LLMs, which are equipped with the most elaborate learning models that we currently have at our disposal. But is it really “conscious” learning? In reality, artificial intelligences only simulate this process through pattern matching, without acquiring a true understanding or experience. It can be easily argued that effective learning requires the ability to link references to conscious experiences, something that machines cannot do. This explains why they can confuse images based on visual patterns without understanding the underlying context or meaning.
Is this scenario destined to change?
A large part of the scientific community adopts what is called computational functionalism: a theory in the philosophy of mind that argues that what makes a system (such as a brain or a computer) capable of thinking is not its material components, but rather the way it functions or processes information. In other words, consciousness or thought derives from the computational functions of the system, regardless of the physical nature of what performs these functions. This theory implies that if an AI system can perform the same computational functions as a human brain, then it could, in theory, be considered conscious.
The Mind-Upload Theory
It is a bit the same concept on which the mind-upload theories dear to transhumanist thought are based: if it is functions, and not “hardware”, that make consciousness what it is, then whether it is a biological brain or one made of silicon makes no real difference.
The Challenge of Consciousness
Will it ever be possible, therefore, to bypass the gap that prevents LLMs from being conscious? Some hypothesize that consciousness could be an emergent product of analog complexity, that is, intrinsically linked to the non-discrete nature of systems closer to the functioning of neurons (analog, continuous) that is not obtainable from digital systems (by their nature discrete), while others, like Liad Mudrik, a neuroscientist at Tel Aviv University, strongly stress that to answer this and other questions we still have to unravel many mysteries of the nature of consciousness, and what happens in the brain when a conscious experience is in progress.
Ethical Dilemmas
Theories of consciousness, which attempt to focus on the characteristics that make it what it is, do not however only serve to understand if and when machines will be able to obtain it, as much as to prepare us to face ethical dilemmas that only science fiction had put us in front of. A conscious AI, capable of experiencing the world, can experience suffering. At that point, shouldn’t it be treated as an ethical subject, and even as a subject with rights? But how do we really know if a machine is conscious and not become the next Blake Lamoine?
A Roadmap for Recognizing Machine Consciousness
It is for this reason that an interdisciplinary team of philosophers, computer scientists, and neuroscientists has recently published a white paper that aims precisely to make practical recommendations on how to recognize consciousness in an AI. The scientists have drawn on a wide variety of theories of consciousness to create a sort of “report card” of consciousness that includes a series of indicators and parameters that can, if not exactly certify, establish how likely it is that a machine has developed consciousness – starting from the assumption that at least one of the competing theories has some chance of being true. Having certain feedback connections, using a global workspace, flexibly pursuing goals, and interacting with an external environment (whether real or virtual)… Based on this “report card”, for example, we can easily establish that current LLMs lack consciousness. However, according to the team’s scientists, it is very likely that in the future we will be able to build machines that score very high. Therefore, the crucial question that we will find ourselves asking, soon enough, will not so much be whether machines can be conscious, but rather: how should we behave with a probably conscious AI?
Conclusion
The question of whether machines can be conscious is a complex and fascinating one that has no easy answer. While we are still far from achieving machine consciousness, the rapid progress being made in AI research suggests that it is a possibility that we cannot ignore. As we continue to develop more sophisticated AI systems, we will need to carefully consider the ethical implications of their capabilities.
Bibliography
- An Interdisciplinary Perspective on AI and Consciousness: “Towards a Practical Framework for Recognizing Machine Consciousness”.
- Exploring the AI Consciousness Conundrum: Will Douglas Heaven, “The conundrum of AI consciousness,” MIT Technology Review. Available at MIT Technology Review.
- The Impossibility of Artificial Consciousness: David Hsing, “Artificial Consciousness is Impossible,” Towards Data Science. Available at Towards Data Science.