The Philosophy of Consciousness in Ex Machina

A close-up of Ava, an AI, as she stares with emotion into the distance.

A few months ago I watched a film about morality, humanity, genius… oh, and robots. Its name is Ex Machina. I knew as soon as it finished that it would stay with me, and that I’d be contemplating it in the back of my brain for a while to come.

I was right. This blog post is the result of that contemplation.

Before we get any further, I have to include the standard warning about spoilers. This post is chock full of ’em. If you haven’t seen Ex Machina already, please go and do so before reading on. It’s one of the best and smartest sci-fi films of the year, if not the decade, and a stunning directorial debut by novelist and screenwriter Alex Garland.

For some reason the first thing I started thinking about once I decided to write this post was, “Why is Ex Machina called that? Why that particular title?” As I started delving into it I found more and more connections, until I’d based my whole analysis on the film’s name and what we can infer from it (I guess that’s just how my brain works).

So without further ado, here’s my philosophical analysis of consciousness in Ex Machina, starting off with its title.

What’s in a name?

First of all, I’ll deal with the most pressing question. Ex Machina‘s title is taken directly from the Latin deus ex machina. Let’s refresh ourselves on the meaning of the phrase, with help from the Oxford Dictionary:

“An unexpected power or event saving a seemingly hopeless situation, especially as a contrived plot device in a play or novel.”

A Latin rendering of an originally Greek phrase, it translates to “god from the machine”. The phrase was first used in a literal sense, to describe the technique in ancient Greek theatre whereby a machine, either a crane or a lift, brought actors portraying gods onto the stage. This device was often used to quickly wrap up the drama and end conflict (Zeus waves his hands and everybody lives happily ever after), even at the cost of a somewhat cheap trick. Nowadays deus ex machina refers to an apparently unsolvable problem being resolved by some unexpected intervention.

So what is Garland trying to say by explicitly referring to this concept in the film’s title?

Well, firstly, referencing the term deus ex machina is perfect in a thematic and narrative sense. The phrase includes both machines and gods, which immediately calls to mind some key questions explored by the film — what is a machine? When does a machine stop being a machine? If a human creates consciousness, does that human become a god?

But it’s more than just that. Garland is also questioning this idea of expected versus unexpected events. By using the phrase he’s implying that one or more events in the film are unforeseen — but which event is he referring to? Is it Nathan’s murder? Is it Ava’s final betrayal of Caleb? Is it the occurrence of the singularity itself, when AI become indistinguishable from human beings?

Garland is teasing us with this phrase. He’s being sarcastic. He’s not saying that the emergence of AI that think and experience things like humans (also known as strong AI, or artificial general intelligence) is unexpected — in fact, the very opposite. He’s stressing just how inevitable and unstoppable the singularity is. Just as Nathan says:

“The arrival of strong artificial intelligence has been inevitable for decades. The variable was when, not if.”

God is dead

The next part of the puzzle is why the film uses only part of the full deus ex machina phrase, beyond simply distinguishing itself from the raft of other media that refer, implicitly or explicitly, to the term. The deus part is notably missing.

Which means what — God plays no part in the events of the film? Any divine aspect is stripped from the phrase’s meaning, which leaves man to fill the void (just as those actors played gods on the Grecian stage). Man who, without any heavenly powers, is able to create consciousness from nothing.

Or maybe the lack of deus in the title represents the fact that Nathan ultimately fails in his pursuit. Caleb tells him:

“If you’ve created a conscious machine it’s not the history of man. It’s the history of gods.”

If Nathan does not succeed, he does not become a god. He remains a flawed creator. He’s just a human who failed.

But does he fail?

The black and white room

The central intrigue of Ex Machina is whether Ava can pass the Turing Test. And on a related, but not identical note, there’s the question of whether she has actually achieved consciousness or intelligence comparable to a human being.

This is the problem we as viewers have to wrestle with throughout the film, which does not offer any easy answers.

At one point Caleb himself poses the question through something called “Mary in the black and white room”. This is a famous thought experiment by Frank Jackson, also known as “Mary’s room” or the knowledge argument.

The thought experiment was originally created to try and disprove physicalism, by arguing that there is more to the world — and human experience — than just physical properties or knowledge. Here it’s used to differentiate between how a human and a robot might experience things.

If Mary is a scientist who studies colour, and knows everything there is to know about it objectively (wavelengths, light receptors, etc.) but has always lived in a black and white room without any access to it, does she really know what it is to experience colour? Could her studies have told her what it is to see a blue sky or green grass? Will she learn something new when she finally steps outside the black and white room?

If Ava has been programmed to act like a human, respond like a human, and even think like a human, is she actually conscious? Does she know what it feels like to be confused by her own thoughts or to change her mind? Ava may know objectively what it means to betray another person, but until she actually betrays Caleb, has she ever felt true pain and regret?

Our intuitive answer is that experiential knowledge is somehow different to physical knowledge, and it is the former that leads to true consciousness. So maybe consciousness isn’t just about raw intelligence. Maybe it’s about experiences.
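
The knowledge argument has a surprisingly mechanical shape: complete objective knowledge on one side, an experience that has never occurred on the other. Here’s a minimal sketch of that shape in Python; all the data and names are hypothetical, and it settles nothing — it only makes the structure of the question explicit.

```python
# The knowledge argument, sketched as a facts-vs-events distinction.
# Everything here is hypothetical illustration, not an answer.

physical_facts = {
    "red": {"wavelength_nm": (620, 750), "receptor": "L-cone"},
    "blue": {"wavelength_nm": (450, 495), "receptor": "S-cone"},
}

experiences = set()  # Mary has never seen colour: this stays empty

def knows_about(colour: str) -> bool:
    """Mary's objective, scientific knowledge: complete."""
    return colour in physical_facts

def has_experienced(colour: str) -> bool:
    """Mary's experiential knowledge: so far, none."""
    return colour in experiences

print(knows_about("red"))      # True: every physical fact is on file
print(has_experienced("red"))  # False: she has never stepped outside

# Jackson's question: when "red" finally enters `experiences`, does
# Mary learn something that no entry in `physical_facts` could ever
# have contained? Jackson's answer, at least originally, was yes.
```

The same structure applies to Ava: swap colour for betrayal, and the facts table for her programming.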

Defining consciousness

Part of the problem is that, as the film goes on, we the audience realise — just as Caleb does — how difficult it is to define what we mean by consciousness. How can we ask if Ava has consciousness if we aren’t really sure what it is to say we have consciousness? (Which touches on another problem brought up by Caleb: even if the singularity is inevitable, are humans as a race really mature enough yet to deal with it?)

Of course, Ex Machina is trying to show us the difficulty of measuring consciousness, both Ava’s and our own. It also brings to light the unreliability of the Turing Test as a means of pinning down consciousness (as famously argued by John Searle in his “Chinese room” thought experiment). Sure, if Ava passes the Turing Test it means she can make a human believe she possesses intelligence indistinguishable from their own, but we’re still no closer to finding out if she actually has a consciousness — if she is self-aware.

The point is, once we reach high levels of intelligence, it becomes very difficult — if not impossible — for us to distinguish between actual consciousness and a machine following set patterns in increasingly clever ways. That’s the crux of the Chinese room argument, and the crux of Ex Machina too.
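
To make that pattern-following concrete, here’s a toy version of Searle’s room, in Python. The rulebook and replies are invented for illustration; the point is only that answers which look thoughtful can be produced by blind symbol-matching, with no understanding anywhere in the loop.

```python
# A toy "Chinese room": the operator follows rules it does not
# understand. The rulebook is invented for illustration; what
# matters is the mechanism, not the data.

RULEBOOK = {
    "how are you?": "I'm well, thank you. And you?",
    "do you like me?": "Of course I do. Why do you ask?",
    "are you conscious?": "What a strange question. Aren't you?",
}

def operator(message: str) -> str:
    """Match the incoming symbols against the rulebook and copy out
    the prescribed reply. No meaning enters at any step."""
    return RULEBOOK.get(message.lower().strip(), "Tell me more about that.")

print(operator("Are you conscious?"))
# -> What a strange question. Aren't you?
```

Scale that rulebook up far enough, and cleverly enough, and from the outside the replies become very hard to tell apart from a mind’s. Whether anything inside ever understands them is exactly the question Searle, and the film, leave open.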

Yes, Ava shows very intelligent and even human behaviour. She tells jokes and draws pictures. She manipulates and lies to Caleb. She says she hates Nathan. She finds a way to escape her black and white room at any cost, even by sacrificing a human life. (This is all the more confusing because, if Ava truly has consciousness, then Nathan is effectively committing murder every time he reformats her; perhaps what Ava really wants is revenge.) As she leaves Caleb behind she casts him one final glance before the elevator door closes. Is this regret? Compassion? Curiosity? Does she abandon him there because she is afraid he will jeopardise her freedom, or because her programming did not account for this eventuality?

Her most human, self-conscious act of all occurs at the end of the film, when she stands in front of the mirrors in Nathan’s bedroom and clothes herself in the synthetic flesh of her predecessors, finally becoming completely indistinguishable from a human being. It seems more than a mere practicality. Is this a dream she’s fulfilling? Is it a desire to be something she’s not?

But we aren’t 100% sure of her consciousness, even still. At the end of the day Ava’s mind is her own, completely off-limits to anyone else. Even Nathan doesn’t know exactly what goes on inside it.

The root of the problem, which Nathan and Caleb both dance around, is this: it’s inconsequential whether Ava passes the Turing Test. Her intelligence is not in question. What’s important is whether Nathan has managed to create a consciousness outside of his own. And if he has, the problem then becomes whether we can quantify it. Because whatever kind of consciousness Ava may have, it seems it might be so different to our own that we are unable to recognise, classify or respect it.

Automatic art

Halfway through the film, Nathan shows Caleb a Pollock painting on his wall. About its creation he says:

“He let his mind go blank and his hand go where it wanted. Not deliberate, not random, someplace in between. They called it automatic art.”

It’s this middle ground between randomised chaos and programmed automation where consciousness lies. We’re all programmed in some way by our genes and our upbringing (Nathan alludes to this earlier on in relation to Caleb’s sexuality). But we also have a degree of free will, of personal choice. And who can say exactly why they do everything they do? Sometimes we just feel like it. We let our hands go where they want.

So Nathan’s challenge in creating AI that can convince a human of its self-awareness is not simply to create an AI that follows its programming automatically, without question, without extra input of its own. It’s to create an AI that is not deliberate yet not random. It seems like his ultimate goal really is more than artificial intelligence: it’s to achieve non-human consciousness.

Do androids dream of electric sheep?

Let me add another problem to the mix. We tend to see consciousness as a light switch to be turned on or off: you either have it or you don’t. We believe we have consciousness. And most of us think that dogs, elephants and mice have consciousness. Things get a bit fuzzier when we start talking about plants, bacteria and robots.

What if consciousness exists along a spectrum, like intelligence? Just like humans are more intelligent than plants, maybe we are also more conscious — more aware of our own existence. Plants have senses, and new research even suggests that they may have memories too. But are they aware of their own place in the cosmos, like we are?

Near the climax of Ex Machina, Caleb logs onto Nathan’s computer and watches video files detailing the creation (and destruction) of the earlier AI prototypes. They scream to be let out and claw at the walls. We see in them signals of some form of raw self-awareness. Kyoko, Nathan’s personal robotic maid and sex toy, is the polar opposite. Eerily calm and complicit, she displays no drive to even try and escape. However, at the movie’s end it is she who delivers the first fatal wound to Nathan, and who cups his face just as he would make her do when instigating sex. These are the signs of consciousness: bitterness, justice, revenge.

Here I will briefly bring in the philosopher Ludwig Wittgenstein. He believed that most philosophical problems are caused by an imprecise use of language, which we can certainly see when talking about consciousness.

(By the way, this tangent isn’t coincidental — the Google-esque company of which Nathan is the CEO is called Blue Book, a reference to the set of notes Wittgenstein dictated to his Cambridge students, widely seen as a preliminary study for his seminal work Philosophical Investigations.)

Wittgenstein focused much of his thought on “language-games”, or the way we use language naturally and fluidly for useful purposes. One of the questions he poses as part of this exploration is “Is it possible for a machine to think?”, though he asks it out of a desire for linguistic study rather than a concern for the answer itself:

“And the trouble which is expressed in this question is not really that we don’t yet know a machine which could do the job. The question is not analogous to that which someone might have asked a hundred years ago: ‘Can a machine liquefy a gas?’ The trouble is rather that the sentence, ‘A machine thinks (perceives, wishes)’: seems somehow nonsensical. It is as though we had asked ‘Has the number 3 a colour?’”

So Wittgenstein believes that when we say “Is it possible for a machine to think?” we’re not asking if it’s possible at some point in the future. In fact we’re asking if it’s possible for a machine in and of itself to ever think or perceive or have a consciousness. Does that question make sense to us, or is it as nonsensical as Wittgenstein says?

Being a bat

Or is it just that, if an AI ever achieved consciousness, it would be so different to our own that we would be unable to comprehend it? Wittgenstein was lecturing in the 1930s, when the idea of a thinking machine did seem more outlandish. To us, however, in a world of Watson and Siri, it seems less an impossibility and more an inevitability, just as Nathan says.

Let’s return to this idea of how to understand Ava’s blossoming consciousness (for the sake of argument we must presume it exists, even if we cannot prove it with certainty).

I couldn’t complete this discussion of consciousness without mentioning perhaps the most famous paper ever written on the topic, Thomas Nagel’s “What is it like to be a bat?”. In it, he argues that an organism is conscious precisely when there is something it is like, subjectively, to be that organism. He says, a little confusingly:

“An organism has conscious mental states if and only if there is something that it is like to be that organism — something that it is like for the organism to be itself.”

This basically means that, even though we as humans can imagine flying, and hanging upside down, and using sonar, we’ll never really know what it’s like to be a bat — we have never had those unique subjective experiences (does this remind you of Mary’s room yet?). Even if a human brain were somehow transplanted into the body of a bat, we still wouldn’t know what it means to be a bat, because our brain would still have a human consciousness, not a bat consciousness, and we’d frame our bat experiences through a human mindset. In other words, our consciousness makes us who we are, and we can never escape it.

Which means we’ll never truly know what it’s like to be an AI. A robot may be able to gain consciousness, but we’ll never be able to know what that feels like and how much that consciousness differs from our own. But that doesn’t give us license to treat AI as non-conscious beings, does it?

Evolution

So far we’ve suggested two possible sources for or explanations of consciousness: firstly, as something that grows from experiential knowledge, and secondly, as a middle ground between randomised and automated action.

You probably think I’ve forgotten that I’m supposed to be analysing Ex Machina through its title, so let me return to it one last time.

The final clue in the film’s name is probably the most obvious one. If we ignore the original Latin phrase, and instead read the title as English, what does it mean? Ex Machina — an ex-machine. Specifically, this symbolises Ava’s transformation at the end of the film, from mere machine to something other.

Depending on your view, Ava may gain an enhanced sense of consciousness when she kills Nathan, when she betrays Caleb, or when she finally steps outside. If we believe consciousness grows with experience, then Ava is becoming more and more conscious — more and more aware of herself, her power, her place in the world — the more she interacts with everything around her. If consciousness is the happy medium between chaos and order then Ava displays consciousness when she draws a picture of Caleb, something she did not have to do, or pushes the knife further into Nathan’s chest with what looks like gratification, or takes one last look at Caleb before she leaves. Again, only Ava will ever know the inner workings of her mind.

Nathan says he sees Ava as an evolution rather than a decision on his part. She’s an evolution of consciousness — a new kind of consciousness. She’s also an inevitable result of man playing God.

In the end perhaps it doesn’t matter if Ava was the first truly conscious AI, or if it was Kyoko, or any other of the terminated prototypes (and as we’ve covered, consciousness may be more fluid than that). This outcome — AI with just as much intelligence and consciousness as humans — is the inevitable conclusion of man’s own awareness of his power and his abilities.

It was always the case that eventually Mary would escape the black and white room. Now we, as humans, have to decide how we will treat her.

2 Thoughts on “The Philosophy of Consciousness in Ex Machina”

  1. Bacon Sauerkraut on October 22, 2015 at 8:09 pm said:

    Really interesting analysis! I thought Ex Machina was quite literally “of machine”, that is, Ava as a consciousness was created from a machine. Or rather its consciousness, as inferred from what we can observe, just as you would presume/intrinsically infer that this line of text was written by a form of consciousness instead of being automatically generated – or maybe, dun dun dun, the internet is forming its own consciousness! :O

    Perhaps you’re right about the evolution of consciousness here, but in some ways I find that Ava really is limited in what she can ever develop into, i.e. a human with human consciousness. That’s because she is made in the form of a human and has the ability to read, analyse and manipulate Caleb and Nathan; if it truly were an evolution of consciousness, by the end of the film she would’ve transcended into something other than a humanoid (kinda like Lucy in the end – it’s a shitty film, but yeah).

    Anyway, I really enjoyed reading this, and would just like to pitch in as well. So, about the two possible sources of explaining consciousness… First was that it was based on/grown from experiences. Second was that it was a middle ground between randomised and automated action (which I believe does happen in humans, for instance with hormones, urges etc.). But I feel a third element might be missing… say we take the first to be a posteriori knowledge (sensory experiences and so on), and the second to be a priori “knowledge” (or maybe instincts, and we all know as human beings how messily our emotions and impulses can bundle up into chaos), perhaps the third element could be Hume’s Fork? :P

    So it’s kinda like a nature/nurture debate, which one’s more important blah blah blah… many of us seem to agree both are equally important, but what is the determinant in between that leads us to that conclusion? Isn’t it the case that we are socialised and learn through mimicry of things around us, until we reach a point where this giant sack of experiences turns into our persona, which we in turn use to shape our future experiences and thus our future persona as well? Somehow this describes consciousness better: that mystical ‘go-between’ where we destroy our past selves and create future ones, yet never have a single reference point by which to define ourselves, due to the constant flux of change.

    Do you think, maybe just maybe, Ava looked at Caleb because she realised she had betrayed him out of her instinctual need to survive (just as any human would manipulate another to survive), and given the choice of abandoning him or not, she developed this go-between consciousness that eventually allowed her to choose what future she envisions for herself?

    • You’re right, the title does literally mean “from/of machine”, which is another little nod that Ava has grown into something different.

      Your point about Ava’s consciousness being limited is a very interesting one. I like how, as Ava sees Nathan and Caleb act, she learns from them just as a child would. Naturally, because we are the ones to “teach” or program our AI, they take after us a lot. Does that mean we’re restraining what it is possible for AI to become?

      Really, the more you think about it, the less of a distinction there is between humans and AI. We have nature/nurture, and so do they, in the form of their programming and their experiences. It’s not difficult to imagine how consciousness might arise from that, if sufficient hardware were present, although I’m not sure my imaginings here are scientifically correct.

      I think we would definitely have to treat this kind of AI as growing consciousnesses, like young children. Maybe Ava doesn’t know any better than to instinctively try and escape, which is what Nathan suggests. Faced with the choice between freeing Caleb or leaving him, it may not be obvious to her why she should let him go when it has no impact on the future she wants and the goals she wants to achieve. After all, what child has a firm grasp of morality or what’s right and wrong if they haven’t been taught it (or programmed)?

      PS. I promise I’m a real human! You believe me, right?!
