The Sublimity of Artificial Intelligence in Netflix’s “Black Mirror”

            Edmund Burke defined the Sublime as anything that terrifies and astonishes us, particularly whatever is vast, ambiguously defined, or extremely powerful. In his view, a subject's ability to strike fear into the heart of its viewer, to make the viewer feel small or helpless, was what designated it as sublime. Since then, sublime subject matter has taken many forms across all sorts of categories, springing both from nature and from things man-made. As technology has advanced, however, Burke's concept of sublimity has found a seemingly permanent home in the genre of science fiction. Most recently, it has found a new niche within the human imagination: artificial intelligence. The mechanics of A.I. (how it works, how it's made, and, most importantly, what it's capable of doing) are nowhere near fully understood by the majority of people, making it an excellent concept to explore in fiction and a perfect conduit for helping modern audiences understand Burke's idea of sublimity. This is particularly so in the Netflix original series Black Mirror.

            Black Mirror's stand-alone episodes chronicle all sorts of potential issues that have come (or could come in the future) with the digital age: consumers' obsession with competition reality shows ("Fifteen Million Merits"), social media as a person's main source of validation ("Nosedive"), and virtual reality programs as a means of escapism ("U.S.S. Callister"), to name a few. The series could thus be used as evidence supporting the sublimity of almost every aspect of technology. When considering A.I. alone, though, one episode stands apart from all the rest: season two's "Be Right Back." In it, a grieving widow, Martha, is signed up for a program that feeds her deceased husband's social media posts, text messages, and emails into a cloud-based algorithm that re-creates his mannerisms and voice, allowing her to keep talking to him over the phone. Martha talks to it day and night, even going so far as to take her cell phone on a picnic and to refuse calls from her sister. She finds out that, for a fee, all of the information the program has accumulated can be placed into the artificial mind of an android copy that looks, sounds, and feels almost exactly like her husband. Moments later, the droid is delivered to her door, activated, and soon up and walking and talking. Martha's expression at this moment is a perfect representation of Burke's concept of the sublime:

[Image: Martha first touching her replica husband]

Martha is excited, but simultaneously in shock that he's there, as well as cautious and fearful of the droid. She doesn't know what this version of her husband is capable of, how he'll react to situations, how people will react to him, and so on. The android thus falls squarely into the category of things that "excite the ideas of pain and danger" while at the same time producing delight.

At first, everything goes well. The droid has enough of her husband's information to hold a conversation with her, it does what she asks, and it even pleases her sexually, but the tone of the episode begins to shift when she starts to focus not on what the droid has, but on what it lacks. In instances where her husband would have argued, the droid blindly follows her orders. It doesn't need to eat, breathe, or sleep, which unnerves Martha. It has no genuine emotions, yet it can act and mimic in such a convincing manner that, had she not consciously known it was not real, it could very easily have passed for her husband. It has no inherent sense of right and wrong and depends solely on her to tell it what to do, and she is visibly uncomfortable being the sole authority in the relationship. By this point, though, she has formed an enormously co-dependent attachment to the droid and cannot get rid of it. At the end of the episode, which jumps forward many years, the object that was supposed to pacify the grief of losing her husband has only intensified it, to the point that she cannot even look at it and has hidden it away in her attic.

Martha, though at first repulsed by the idea, went into using the program anyway as a simple way to find closure after the sudden loss of her husband, but she ended up spiraling deeper and deeper into a hole she could not climb out of, one that went on to affect her relationships with the real people in her life. It is similar, one could argue, to how social media started out as a way to stay in touch with people but, now run largely by algorithms built for advertising and increased traffic, has become a springboard for things like cyber-bullying and hate speech.

As with this example, when the concept of artificial intelligence is explored in fiction, it mostly follows the general structure of a seemingly good idea playing out badly or unpredictably, and it is from this that it derives its sublimity. We as a society, hard as we may try, cannot predict the future. We don't know how to control artificial intelligence, or how to keep it under control (see HBO's Westworld for more on that). We cannot predict what lines of computer code will be capable of five, ten, twenty, or a hundred years from now. We can ask ourselves all day: What would happen if we suddenly replaced human armies with machines? Can engineers code in a system of ethics? Who would get to decide what that code should be? What if the droids one day decide, out of the blue, to turn on us? Does humanity, as the creator of artificial intellect, have some sort of inherent right to rule over it? Ethicists and computer specialists have argued over these questions and others like them in recent years to seemingly no end, but ultimately they cannot be answered until humans put the plans and experiments that would test them into motion, and that terrifies us. We don't know, and presumably never will know, how the advancements we make in technology, and in A.I. in particular, are going to play out as we plan them; yet even knowing that we cannot anticipate every possibility, we continue to push for better, faster, stronger, more life-like tech at an alarming rate because it absolutely fascinates us. If Burke's ideas of sublimity are to be believed, that is as much justification as we'll ever need.

Works Cited

Brooker, Charlie, writer. "Be Right Back." Black Mirror, Netflix, 25 Dec. 2015.

Burke, Edmund. A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful. Cambridge University Press, 2014.

Atwell, Hayley, and Domhnall Gleeson in "Be Right Back," Black Mirror. Image via Gifer, https://gifer.com/en/GFvV. Accessed 30 Oct. 2018.