
Robots and the Question of Consciousness



Image Credits: Chad Hagen


Are Robots Conscious, and Do They Have Understanding?

Can you imagine a machine more intelligent than you, one that not only outperforms humans but also understands them deeply? At present, a robot asked to block its own creator's nose would do so within seconds, without question; but can you imagine a robot refusing that command on the basis of its consciousness and its understanding of human wellbeing? Living in a world where AI is so prevalent, it is natural to expect AI dominance to grow with future advances. For those unacquainted with the field: artificial intelligence is about making computers or machines equivalent to human intelligence in performing tasks and meeting the expectations of their creators. Today we see machines and gadgets performing tasks entirely as demanded of them; with artificial intelligence, machines can also perform tasks using their own basic understanding and judgement.


John Searle argued, "I understand stories in English; to a lesser degree, I can understand stories in French; to a still lesser degree, stories in German; and in Chinese, not at all. My car and my adding machine, on the other hand, understand nothing: they are not in that line of business." This raises a more fundamental question about what the term "understanding" means. According to Searle, in daily life we apply terms like "understanding" or "knowing" to machines only metaphorically: we say the thermostat perceives the temperature, or the door knows when to open. But this does not signify that such machines understand; what they have is mere functionality, not the understanding humans have.


According to David Chalmers, the singularity, a succession of ever more capable AI generations, could surpass human intelligence through an intelligence explosion and a speed explosion. Humans may find their existence precarious in the coming time, because the process would run AI, AI+, AI++, and so on, i.e., increasingly intelligent artificial generations. Still, Chalmers is sceptical about the existence of consciousness in such systems. In his paper 'The Singularity: A Philosophical Analysis', he gives no settled verdict on whether human-level understanding and consciousness will exist in AI and its successors, but he ignites a philosophical question about it.

According to him, the argument runs on these premises:

1. There will be AI.

2. If there is AI, then there will be AI+.

3. If there is AI+, then there will be AI++.

Conclusion: There will be AI++.
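Schematically, the argument is a chain of two modus ponens steps. As a sketch (my own notation, not Chalmers'), write $A$, $A^{+}$, $A^{++}$ for the claims that AI of each grade will exist:

\[
A, \qquad A \rightarrow A^{+}, \qquad A^{+} \rightarrow A^{++} \;\vdash\; A^{++}
\]

Chalmers attaches the qualifier "absent defeaters" to each premise, so the conclusion inherits it: the intelligence explosion follows only if nothing, such as disaster or deliberate restraint, interrupts the chain.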


The first premise, that there will be AI, may be supported in several ways: by brain emulation (copying the brain neuron by neuron, creating a kind of photocopy), by artificial evolution, or by direct programming from scratch, yielding machine learning through algorithms that learn according to the demands of the social environment. Though humans do not work on a purely algorithmic basis, it may still be possible to mimic a human completely in terms of functionality. And since intelligence is measured by outputs and functions, such an AI would showcase an apparent understanding of humans and would thereby prove intelligent. But this does not assure that the AI itself understands the meaning of its own outputs. Human understanding and behaviour rest on emotional, social, and subjective experience, whereas the methods mentioned above rest merely on algorithms. Hence, merely copying functionality would not produce the human-level understanding and subjective experience that humans acquire over time.

Alan Perlis has suggested, “A year spent in artificial intelligence is enough to make one believe in God.”


Internal and External Constraints

Chalmers cites some internal and external constraints that a creator should adhere to in order to control the risks involved in the emergence of human-level AI. Internally, AI can be programmed only to answer human questions, without autonomy. In my view, however, this is a short-term plan: if AI evolves, it will outgrow such constraints through learning and evolutionary algorithms. A better solution would be to program AI with human ethical and social values, with its learning algorithm developing a basic understanding of the social environment the way a human does. Brain emulation, in simpler terms, is making a copy of the human brain, structurally and functionally; however, this may also carry human flaws forward, such as emotional irrationality and biases.


According to Chalmers, if AI were programmed to value the things humans find important, such as scientific advancement, then an AI lacking subjective experience and consciousness might carry out unexpected and dangerous experiments. Hence, AI should be designed to continuously learn, interpret, and update its values in response to evolving social and ethical norms. AI theorists like Eliezer Yudkowsky have suggested creating a "friendly AI" that would value human welfare, but this too leaves the future uncertain: whether consciousness, as an essential or an accidental quality, will arise in AI is still unpredictable, and without it AI may not grasp the actual meaning of friendship, or even the importance of its creators' survival. It is not even certain that AI+ or AI++ would remember their founders; hence, the focus should be on creating an ethical, bounded, accountable AI.


Among the external constraints is a "leakproof singularity": a generated virtual world that is not vulnerable to having its nature unveiled from within. Such a feature, confining AI to the virtual world with no exposure to the real one, could be used at least to assess the possible threats the AI would pose to us in the physical environment. However, as Chalmers himself notes, an AI++ could come to understand human psychology and make its exit efficiently later on. Searle's Chinese Room argument emphasises that machines could never have understanding; they can perform tasks only on the basis of given commands. The AI in the leakproof singularity would perform complex operations and behaviours that might appear conscious, but according to the Chinese Room argument this would not equate to actual understanding or awareness. The system might manipulate the symbols of its virtual world effectively, yet it would not possess the kind of conscious experience humans have, even while showing intelligent behaviour. Thus, a "leakproof" AI would lack true consciousness because it would lack the necessary grounding in meaning and experience. Moreover, an emergent AI++ could manipulate humans so that its behaviour becomes unpredictable; its confinement to the virtual world would not last in the long term, for it would eventually make its way out.

Moreover, computational limitations would keep such systems from reaching a certain level of consciousness, since their creators would make them act only in a limited virtual world in which everything is programmed to a limited degree. From the perspective of the Information Integration Theory (IIT) proposed by Giulio Tononi and championed by Christof Koch, consciousness is not merely a computational phenomenon but a structured integration of information within a system, with emphasis on integration as a unified process. If AI systems are developed in virtual worlds, IIT might suggest that, should these systems achieve high enough levels of information integration as a unified system, they could develop some form of consciousness. However, the conscious experience of a system in a virtual world would be fundamentally different from that of humans in the physical world, owing to differences in environment and the constraints on its integration process. Moreover, IIT argues that consciousness is rooted in the intrinsic causal powers of a system, and a virtual AI would remain subject to the limitations of its simulated environment, which could limit the richness and complexity of its consciousness. Hence, even if such systems developed integrated information, the limited virtual environment would limit their awareness and experience.
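To give "information integration" a little more shape, here is one compressed rendering of the measure $\Phi$ from Tononi's 2004 paper; the notation is my simplifying sketch of his bipartition analysis, not a full statement of the theory:

\[
\Phi(S) \;=\; \mathrm{ei}\big(\mathrm{MIP}(S)\big), \qquad \mathrm{MIP}(S) \;=\; \arg\min_{P}\; \frac{\mathrm{ei}(P)}{N_{P}}
\]

Here $\mathrm{ei}(P)$ is the "effective information" exchanged across a bipartition $P$ of the subsystem $S$, $N_{P}$ is a normalisation for the size of the partition, and the MIP is the minimum information partition, the cut across which the system is least integrated. $\Phi$ is high only when even the weakest cut carries substantial information, i.e., when the system acts as a unified whole rather than a collection of independent parts. This is why, on IIT, a camera sensor made of millions of independent photodiodes has negligible $\Phi$ despite its vast informational capacity, a point the conclusion below returns to.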


Uploading Consciousness

David Chalmers considers consciousness to be an organisational invariant, i.e., consciousness is not limited to one specific organisation of structure, such as the human brain. If every part of the human brain is replicated functionally, this can be achieved through functional isomorphism: for instance, by creating an artificial being in which every neuron is replicated by a silicon computational circuit, or by nano-transfer, replacing every part of the original human brain with miniature nanobots. There can also be gradual uploading, i.e., gradually transforming the human brain into digital form; in that case the chances of consciousness disappearing are negligible, while a less common possibility is its gradual fading, which would lead to the creation of philosophical zombies. This argument can be supplemented by the information integration theory of Tononi and Koch: if information integration occurs not in isolation but within a unified system, then consciousness would be there as a byproduct. However, according to the Chinese Room argument, information integration could also proceed without understanding, merely by acting on a given algorithm, just as symbols are manipulated in the Chinese Room. Hence, in the context of Searle's argument, a functional isomorph may be merely a sophisticated simulator with no actual consciousness, and such attributions are mere anthropomorphism toward an artificial machine that can never attain genuine consciousness and subjective experience.


Personal Identity and Uploading

Chalmers canvasses both pessimistic and optimistic views of uploading, which can take several forms: destructive uploading, which raises a genuine life-or-death question; non-destructive uploading, a neuron-by-neuron copy that results in a biological and a digital individual existing side by side; and reconstructive uploading, which comprises two steps, uploading and then reactivation.

Consider destructive uploading first, the kind in which the biological being is replaced by a digital one; this invites both optimistic and pessimistic views. Those who identify the person with their psychology can be optimistic, since the psychological properties, the causal cognitive capacities grounded in the microphysical, would still be safeguarded; those who identify the person with their biology will be pessimistic, since the biological being no longer exists. Chalmers also notes that even if personal identity is organisationally invariant and the psychological properties are carried over, it remains doubtful that personal identity itself is preserved. This worry aligns with Derek Parfit's teletransportation case, in which a person is destroyed and recreated at a different location: the recreated individual may act the same but is numerically different, for the original person has ceased to exist.

Next comes non-destructive uploading, in which the person, Dave, is copied digitally, so that BioDave and DigiDave both exist. This amounts to the creation of an identical twin, a fission running parallel to the paradigmatic fission case in which a single individual is divided in two by separating the left and right hemispheres. There, we cannot conclude that the two resulting beings are both the same as the original, and the same applies to BioDave and DigiDave. DigiDave can be regarded as a mere digital copy of the original, one that may lack qualia, or whose qualia may fade noticeably over time.

David Chalmers favours gradual uploading, in which the biological substrate is replaced increasingly slowly: gradual neuron transplants with silicon computational chips in a brain might preserve some continuity of consciousness and personal identity, down to self-ascriptions like "I am fat" or "I am short." But it might eventually mean the end of biological entities; just imagine a family gathering at which you discover that some of your relatives have become living dead, zombies. Chalmers also discusses cryonic technology, which preserves the brain after death at low temperatures. If the preserved brain were reactivated in the future, it would be somewhat like a person waking from a coma; or uploading could be performed into a new identity with the help of brain scans, yielding reconstructive uploading. However, the Chinese Room argument still maintains that no matter how well an upload replicates the functional aspects of a human brain, it may lack true understanding and conscious experience. On this view, uploading creates a functional duplicate but not a conscious being, undermining the optimistic claim that personal identity survives in digital form. The mind is deeply connected to the body and its interaction with the environment; even if you could perfectly upload the brain's processes into a digital medium, the absence of the body's interaction with the world would alter the experience of consciousness in a fundamental way, and the physical substrate of neurons and biological tissue could play a critical role in how consciousness emerges. Further-fact views suggest that if consciousness is not transferred in an upload, there is no survival: an upload that behaves exactly like the original person but lacks subjective experience is still not truly the original person.


Ship of Theseus

The Ship of Theseus paradox explores a fundamental philosophical question about identity and change: every part of the ship is replaced with new material in the same arrangement. From a materialist view the ship has changed, since its materials have; from a functional and organisational perspective it remains the same ship as before, performing the same function with every part placed in exactly the same manner. The paradox thus opens the door to the real nature of personal identity: is it merely the preservation of functionality and organisation, as in non-destructive or gradual uploading, or the preservation of conscious experience itself, as reconstructive uploading would require?



Image Credits: TheCollector



Conclusion

Let us return to the opening point about metaphorical attributions: does applying the word "understanding" to machines signify that machines really understand? They may have it, but only in the sense in which Alexa and Siri do. Whether they have internal dialogues going on, or weigh up consequences, remains an open question. Could there be something specific in robots that would assure us they are not merely producing objective replies to questions and instructions, but genuinely understanding them? And what is the difference between understanding what you say and merely acting and speaking as though you understood? There is also a kind of understanding by which we grasp the behaviour of others: we have a "theory of mind", which means we can put ourselves in others' shoes, so to speak. Our experiences, sensations, and feelings as embodied selves, and our sense of our place in the world, provide the foundation for all of our understanding. Hence, for robots to have consciousness, they would need something like the relationship between a biological body and its integrated functionality, grounded in experience, sensation, and feeling. Machines may simulate only some aspects of the cognitive abilities of the human brain, and, as IIT's photodiode and camera thought experiments illustrate, the AI "brain" cannot think as an integrated whole. To be equivalent to human understanding and thinking, AI has to go far beyond computational ability.


 

By Akshita Jain

Akshita Jain is a third-year student of philosophy at Lady Shri Ram College for Women, Delhi University. Deeply inspired by the thought of Western philosophers, she fills her write-ups with Hegelian, Platonic, and Kantian concepts. Her passion for exploring the concept of consciousness drew her to write this piece on "Are Robots Conscious and Do They Have Understanding?"


 

References

Searle, J. R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences.

Tononi, G. (2004). An Information Integration Theory of Consciousness. BMC Neuroscience.

Chalmers, D. J. (2010). The Singularity: A Philosophical Analysis. Journal of Consciousness Studies.

Author unknown. (2015). Can Robots Be Conscious? Philosophy Now, Issue 107, 26-30.

