
Why the First Question May Be the Last One We Answer

In cognitive science, there is nothing as tantalising or as frustratingly fruitless as the issue of consciousness. Aside from immediate questions like “What is the meaning of life?”, “Why are we here?”, and “God(s), anybody?”, no other question has caused so many thinkers to bang their heads against the wall as “What am I, and why do I feel things?”. In the history of modern Western science, Descartes was the first to give this question a systematic stab, arguing that the conscious mind (the “I think” in “I think, therefore I am”) originated from a non-physical, God-given soul that interfaced with and controlled the body through the brain’s pineal gland, without needing to submit to, or be explained by, the ordinary physical laws of nature. While his contemporaries were likely not as troubled as I am by the implication that pineally deficient animals (such as the hagfish, but also all invertebrates, some 97% of animal species) cannot possess consciousness, Descartes’ explanation, known as Cartesian dualism, grew increasingly at odds with the emerging scientific consensus that all matter in the universe is physical in nature, a position known as physicalism.

The tension between the mainstream physicalist worldview and Cartesian dualism, coupled with the inherently intractable nature of subjective experience, made consciousness a taboo topic for centuries. Behaviourists like B. F. Skinner ignored mental states completely in their analysis of behaviour, and even today, more modern neuroscientific frameworks tend to view the brain in purely functional terms, often referring to consciousness only indirectly, through more empirically amenable concepts like “awareness” and “attention”. While concepts like these can explain what the brain does, and offer mechanisms for how it does it, there is still an ingredient missing from the analysis: how it feels to be the being carrying out these processes. One could fully explain, in purely mechanical terms, why someone recoils in pain after stubbing a toe: nerve fibres carry a “distress signal” from the toe to the brain, indicating a large, sudden mechanical pressure; neurons integrate this signal and output a motor command that causes the foot to withdraw. This entire chain of cause and effect can be explained without ever referencing pain, and indeed there does not seem to be any room for subjective experiences like pain to do any causal work in the kinds of explanations neuroscience gives of behaviour.

This “explanatory gap” between the functional and mechanical explanations of neuroscience and the subjective experience of consciousness is what prompted philosopher David Chalmers to name the issue “the Hard Problem of Consciousness” in 1994, igniting decades of fierce and oftentimes personally aggressive debate. Leading the opposition is prominent analytic philosopher Daniel Dennett, who claims that subjective experience is but a fancy parlour trick, signifying nothing more than a cartoonishly simplified representation of the real stuff going on in our brains, and requiring no further explanation. For people like Chalmers and Descartes’ ghost, this is like saying water isn’t wet. For people like Dennett and the majority of AI scientists, Chalmers’s obsession with the subjectivity of consciousness sounds a lot like a philosopher trying to preserve his job prospects by claiming that his problem is inherently different from any other problem ever encountered.

Rather than arguing over the theoretical merits of the question, however, it’s worth seeing how consciousness bears on current fields. In the ever-advancing field of artificial intelligence, for example, solving the Hard Problem could prove crucial, not only in furthering the pursuit of AI that faithfully imitates humanity, but also in shaping our relationship to it on a societal level. It’s trivially true that understanding why consciousness exists and how it works in humans would help us achieve truly sentient, intelligent artificial life (provided that’s what we want). What is missing, however, and what is likely more important, is this: if we intend to continue creating AI that is increasingly indistinguishable from actual humans on a functional level – AI that does everything a human can do – and we aren’t concerned with the possibility that it will also *feel* everything a human can feel, we’ll quickly walk ourselves into an ethical conundrum. At its heart, consciousness is simply the subjective experience of being. We can each only be sure of our own consciousness. While it’s safe enough to assume everyone we meet is conscious just as we are, in reality we have no way of knowing, and it’s not necessarily the case that we’ll be so generous in our assumptions with artificial beings. In light of the uncertainty surrounding consciousness, its source(s), its mechanisms, and even its mere presence in a “being” other than ourselves, what ethical considerations are due to AI? If we can’t know for certain whether we are creating sentient beings, capable of pain and suffering, are we wrong to pursue the creation of these entities to be used as tools? If we succeed in making machines sentient, are we ethically responsible toward them? Do we owe them the same rights and considerations we give humans? Animals? Or are we to treat them the way we treat a laptop?

The theoretical question of whether we can ever explain how we clumps of matter have come to feel things may be secondary to the immediate, pressing concern of how we behave towards sentient beings. Consciousness remains infuriatingly inscrutable, and while efforts to understand it seem to have obfuscated more than they have illuminated, it remains relevant to interpreting human experience, to advancing science, and to our understanding of right and wrong. In the chaos of the real world, where the certainty of scientific knowledge fades to a shadow of doubt, meaningful decisions still have to be made. This is where ethics begins: where one hard problem opens into a multitude.


Featured Image: A state of introspection by Sio Mont

Additional Reporting and Editing by Nicolas Botero