By Molly Cobb

Warning: The following includes some rather large plot spoilers!
One of the most interesting things about video games by developer Quantic Dream is the strong focus on decision-making. The ability to drastically alter the story being told just by making one choice over another can be almost overwhelming, and even seemingly minor choices can have long-term effects, shaping relationships with other characters or determining who lives or dies. In Detroit: Become Human, the focus of this decision-making is on android sentience and, consequently, android rights. The game takes place in a future where androids are essentially household appliances; however, a growing number of androids are gaining sentience and revolting against their owners. The game follows three of these androids, with the player’s decisions determining the path each takes in the course of this revolt. Though it could be argued that the player is given no ‘right’ or ‘wrong’ options per se, the available choices encourage exploration of human and android ethics and morals, as well as an understanding of the relationship and tensions between humans and machines, both in the game’s imagined future and in our own contemporary society.
By having the player experience the world and the story through three separate androids (all originally designed for different roles and all exhibiting different personalities), Quantic Dream puts the player in the position of making choices not only on behalf of someone else, but on behalf of a machine. Thus, the player faces a choice even before they begin playing: do they play as themselves, making decisions as a human would? Or do they play as their character, making decisions as they expect an android responding to its programming would? Either approach can lead to drastically different stories, ranging from everyone (both playable and non-playable characters) surviving and a successful android revolution opening the way to talks about android rights, to the complete opposite, in which everyone, or nearly everyone, dies, the revolution fails, and the self-aware androids are destroyed. The player must decide, for example, whether or not to follow human commands, or whether to assist or hinder the revolution (choices which often depend on which character is currently being played). Many, if not all, of these decisions will be made based on the player’s own ethics and morals, as well as their attitudes towards AI. For example, a player who fears an android uprising, or thinks android sentience impossible, is likely to make different choices from one who would consider sentient androids ‘alive’ and deserving of rights equal to those of humans. Alongside this, ethical decisions must be made about who is prioritised: does one focus on oneself or on others? Whatever the player’s opinion, they must choose whether their characters’ own rights and freedoms are more important than those of other androids, or even humans, in the game.
Of the three playable android characters, only one, Markus (the face of the revolution), develops sentience of his own accord, regardless of the player. Of the remaining two, when Kara (a housekeeping android) is put in a position to gain sentience, the player chooses either to have her go against her programming and become self-aware, or to follow it and be destroyed. Choosing to follow her programming also results in the death of a young girl, Alice. If a player makes it past this point, and makes certain decisions along the way, it is eventually revealed that Alice is an android as well. This revelation raises the question of whether the player would have felt differently about Alice had this been known earlier. Would her death be less shocking? Has the player treated her differently than they would an android child because they believed she was human? Would the player, for example, have put Kara in certain dangerous situations to protect Alice had it been known she wasn’t human? Does the player feel manipulated in some way for being made to think she was human, thereby possibly affecting some of the decisions made? If so, and if they would have treated her differently, what does that say about the player’s opinions regarding android rights, or the assertion that androids are ‘alive’ and deserve to be treated the same as humans? Revelations such as this demonstrate how inherent bias can shape decision-making, exposing prejudices the player may not even have been aware of. This type of storytelling encourages discussion of the issues that influence such choices, both in the game and in contemporary society.
The third android, Connor, is designed to work with the police to track down ‘deviant’ androids (those who have become self-aware). Eventually, based on decisions made by the player, Connor too can ‘choose’ to become self-aware. However, it is later revealed that those who tasked Connor with exposing sentient androids had anticipated and accounted for his becoming self-aware in the course of completing his mission, which again calls ideas of sentience and free will into question. Is Connor (or any android, for that matter) really self-aware, or simply continuing to follow his programming? Connor is also the only character who can die without his storyline ending. If a decision made by the player results in Connor’s death, he is replaced in the next chapter by another android of the same model. Besides demonstrating the practicality of androids in the police force or military (a theme examined peripherally throughout the game), this indicates the interchangeability of androids, specifically non-sentient androids: Connor can die a total of eight times throughout the game before his death becomes irreversible. It is implied that, were Connor sentient, his death would be permanent through the loss of his consciousness, no matter how many times the physical machine was replaced. The stark difference between what ‘dying’ means for a non-sentient android (referred to as ‘shutting down’) and for a sentient one points to the key question threaded throughout the game: what does it mean to be alive? The (in)ability of both the player and the in-game characters to sympathise and develop a relationship with a Connor who is constantly being replaced further indicates how ethics and morals can shift depending on whether androids are seen as replaceable machines or as conscious people.
Unlike Markus, however, most androids throughout the game need to be converted by another self-aware android in order to become aware themselves. Questions of ‘forced’ conversion aside, this blurs the line around what counts as sentience: though not all androids are yet aware, the ability to convert them implies that they all have the capability of becoming aware. Is being capable of awareness, then, the same as being aware? Should these androids be granted the same rights as sentient androids even though they themselves are not sentient (yet)? Implied questions such as these demonstrate not only the importance of deciding whether the potential for self-awareness should be treated, ethically and legally, the same as actual self-awareness, but also that ethical and moral decisions affect more than the immediate situation: they resonate through every situation that stems from any one decision.
How one person’s choices can have a lasting impact on someone else is demonstrated most clearly by a single interaction: an ethical/moral decision required of the player concerning Chloe, the android who appears on the title screen. It is here, perhaps, that the themes of android intelligence and android rights are most strongly highlighted. Chloe is present on the title screen to discuss the player’s choices, comment on their progress, and generally interact with the player. After a successful android revolution, Chloe poses a question: will the player allow her to leave to join the revolution (if allowed, she actually exits the title screen, never to be seen again), or deny her request and make her remain as an in-game hostess? As with many of the other android interactions throughout the game, this raises further questions of gender, such as whether female androids would be treated differently to male androids, or whether certain roles would only be fulfilled by certain genders (would players respond differently to this request if it were made by a male android?). With one final choice, Quantic Dream distils all the game’s questions of ethics, morals, android rights, and human/machine relationships into one simple yes-or-no answer, forcing the player to solidify whether they consider the androids in the game to have achieved sentience or not and whether, as a result, they deserve freedom or not, with all the ethical and moral entanglement that choice entails.