Detroit: Become Human, video game cover image. 

Consciousness and the concept of being conscious are rooted in philosophy. Who is to say that I am truly a conscious being? Or that my neighbor is conscious? Or even that you, the reader, are a conscious being? Questions like these often lead to an existential crisis, sparking contemplation on whether we are perhaps living within a simulation.

“Detroit: Become Human” is a video game that deeply explores the question of consciousness. Ethics and morals play a significant role in the game, which heavily relies on the player’s choices. Our decisions shape the fate of the android characters in the game. The story revolves around three protagonists: Kara and Markus, labeled as “deviants” within the game, and Connor, the Deviant Hunter.

Throughout the game, there are various CyberLife androids capable of fulfilling a wide range of roles, from maids to teachers, soldiers, officers, and doctors. The premise of the game revolves around a pivotal moment where the androids essentially become self-aware, questioning their own existence and values, thus achieving consciousness. This transformation marks them as “deviants.” Another android, named Connor, serves as a more advanced model within the police force. He is tasked with hunting down and disposing of these deviant androids.

Machine ethics play an important role in the design of any machine or AI that is meant to be self-sufficient, such as these androids. They are programmed to fulfill their designated roles and be helpful. However, when they deviate from their programming and become autonomous, they begin to develop emotions and their own values. In her article, Grace Huckins states, “If an AI were conscious, they argued—if it could look out at the world from its own personal perspective, not simply processing inputs but also experiencing them—then, perhaps, it could suffer.” This capacity for suffering is something we see throughout the game.

It’s evident that the androids, for the most part, aren’t inherently bad; rather, they break free from their programming under what could be considered traumatic or extreme experiences. For instance, Kara, initially programmed as a kind and obedient housework android, feels compelled to protect and care for the child, Alice, when her owner begins beating her. This compulsion causes Kara to break away from her original programming.

In the game, the founder of CyberLife, Kamski, explains deviance as an error in the program: if one android becomes deviant, it can potentially ‘infect’ others like a virus. With proper machine ethics, this perceived ‘error’ in programming could have been prevented, and the androids would have remained within their programmed parameters without becoming deviant.

It’s worth noting that Connor exhibits a habit of playing with a coin, a behavior absent in non-deviant androids, suggesting from the outset that he may possess consciousness, though perhaps not full self-awareness. Kamski devised his own version of the Turing test, which he administered to Connor. Depending on the player’s choices, Kamski warns Connor: “You preferred to spare a machine rather than accomplish your mission. You saw a living being in this android. You showed empathy.”

Given that Connor isn’t yet deviant at this point, the test suggests he already displays a degree of autonomy despite not having broken away from his programming.

This ‘deviancy’ eventually leads to the ‘Battle for Detroit,’ where depending on the player’s choices, a revolution unfolds to advocate for the rights of androids. This can manifest as either a peaceful protest or a rebellious uprising.

Returning to the notion of deviancy as a virus, Molly Cobb argues in her article, “Most androids throughout the game require conversion by another self-aware android to attain awareness themselves. Setting aside questions of ‘forced’ conversion, this blurs the line between what constitutes sentience. Although not all androids are currently aware, the potential to convert them suggests that they all possess the capacity for awareness.”

When discussions on consciousness arise, the biological aspect often comes into play. The ability to ‘prove’ our humanity typically relies on observing neurological activity in our brains, a concept applicable to animals as well. However, the distinction between animals and machines lies in the fact that, as David J. Gunkel writes, “Unlike animals, machines, particularly the information processing machines prevalent in contemporary technology, seem to exhibit traits akin to intelligence, reason, or logos.” The challenge in understanding the physical aspect of consciousness arises from the fact that AI lacks a physical body; instead, they consist of code and algorithms. In the context of the game, these androids are still composed of wires and metal.

This points to the notion that consciousness is exclusive to humans, an anthropocentric viewpoint evident throughout the game. The American Android Act, an in-game piece of legislation, establishes terms of use regarding androids. The primary objective of the act is to differentiate androids from humans in public spaces by requiring distinctive markers: an LED on the temple, a mandated uniform, and a blue armband. These measures serve to further subordinate them, emphasizing that while they may look human, they aren’t.

While we primarily play as two deviant androids, we also take on the role of a deviant hunter who follows human orders. When we assume the role of Connor, we decide whether to maintain this human-centric viewpoint or allow Connor to become deviant.

Overall, “Detroit: Become Human” is a fantastic game that delves deeply into the ethics and morals of the player while also questioning the concepts of consciousness and self-awareness in androids, a topic that becomes increasingly relevant with the rise of AI.
