
The Matrix: A Prelude to the Cyberpunk Parable


The Matrix: A Summary

The movie begins with a computer hacker named Neo, who is compelled by the question of what the Matrix is. He finds Morpheus, someone who knows the answer and offers him an unusual choice: take a blue pill, and he’ll wake up in his bed, none the wiser; take a red pill, however, and he will learn the truth.

Neo chooses the truth and learns that the Matrix is a digital world designed to fool its inhabitants into thinking that it is real—and that he has been one of those inhabitants his entire life. He thought he was living a free American life in the year 1999; in reality, his body was locked in a pod, floating in goo, and being fed experiences by a computer (the Matrix), along with millions of his fellow humans, around the year 2199.

Morpheus believes Neo is “The One”—the one prophesied to free all of humankind from the Matrix. So, after being unhooked from the Matrix, Neo and his new friends hack back into the Matrix to seek a prophetess called the Oracle. They find her, only to have her tell Neo that he isn’t The One. On the way back out, one of his colleagues, Cypher, betrays the group to Agent Smith, a sunglasses-wearing program tasked with hunting down and killing rebels like Morpheus and Neo. In a final showdown, after saving Morpheus, Neo defeats Smith by essentially deprogramming him from the inside. Apparently, Neo was The One after all.

Descartes's Problem

The Matrix dramatizes the worries of one of the most famous works in all of philosophy: René Descartes’s Meditations on First Philosophy, in which Descartes searches for solid ground on which to base all knowledge. To that end, he looks for a belief that cannot be doubted—and thus takes seriously even the most ridiculous ways that his foundational beliefs could be false.

It may seem obvious, Descartes says, that “I am here, sitting by the fire, wearing a winter dressing-gown.” But he has dreamed such things before and been just as convinced. He considers his condition, shakes his head, and admits that it certainly feels like he’s awake. But then again, he has felt the same surety while dreaming. As Morpheus puts it: “Have you ever had a dream, Neo, that you were so sure was real? What if you were unable to wake from that dream? How would you know the difference between the dream world and the real world?”

So, Descartes realizes, he could be dreaming; there is no way to prove to himself he’s not. However, this doesn’t make Descartes doubt the existence of the world. After all, the ideas in his dreams come from his experience during waking life. So, he can’t always have been dreaming.

But then Descartes considers an alternate possibility for the source of those ideas: What if “some malicious, powerful, cunning demon has done all he can to deceive me”? What if “the sky, the air, the earth, colors, shapes, sounds, and all external things are merely dreams that the demon has contrived as traps for my judgment”?

If true, not even the world exists. And because a lifetime of experiences fed to Descartes by such a demon would be indistinguishable from a lifetime of experiences of the real world, there is no way to prove that this isn’t true. Indeed, no matter what “test” Descartes performed to see if this was true, the demon could simply fool him into thinking he had passed the test when he had not. The Matrix is a technological variation on this same problem; the Machines are the evil demon, and the Matrix is their method for feeding in sensations.

The upshot is that you can’t know for sure that you aren’t being fooled by a demon or stuck in the Matrix. Thus, you can’t know the world is real. And if you can’t know something as basic as that (something that seems to undergird our entire belief system), it seems you can’t know anything at all. Knowledge is impossible. If you were just a brain in a vat, floating in a pod of goo, being fed sensations by a computer to make you experience a fake world, your entire life would consist of the same kind of experiences that it has consisted of. There is no way to prove this isn’t happening; any test you performed could simply be sabotaged by the system itself.

But is this argument sound? And should we care about knowledge in the first place? To evaluate this argument, we must first clearly understand what knowledge is.

What Is Knowledge?

Almost 2,500 years ago, Plato in the Theaetetus defined knowledge as “true belief with an account.” In other words, you know something when you believe it, it’s actually true, and you have good reason to think it’s true. And that’s essentially the definition that’s accepted by philosophers today: knowledge is justified true belief. It’s agreed that all three are necessary:

1. You can’t know something without believing it;

2. A belief can’t count as knowledge unless it’s justified (it can’t just be a lucky guess);

3. You can’t know something if it’s false. You can know that something is false, but you can’t know something that is false.

Descartes worried that knowledge was impossible because it was impossible for any belief to be justified. You couldn’t be justified in believing the world was real because you couldn’t be certain that you weren’t being fooled. But the philosopher who most directly influenced the Matrix was Jean Baudrillard, and he argued that knowledge was impossible because there was no such thing as truth. In his most famous book, Simulacra and Simulation, Baudrillard argues that the postmodern world (the world since around World War II, after the invention of computerized technology and ubiquitous media) consists merely of simulacra. In other words, we no longer interact with things, but merely with images and representations of things: signs, copies, models. We are inundated with propaganda and deception from politicians and from media outlets.

The influence is obvious, but the similarity between Baudrillard’s philosophy and the Matrix stops there. Baudrillard doesn’t call for us to pull the wool off our eyes in an effort to return to the real world and learn the truth (as Neo does) and get others to do the same (as Morpheus does). Indeed, Baudrillard calls such a sentiment naively utopian. Instead, he concludes from this that existence is meaningless and that there is no real world or truth to seek.

To say that such a conclusion is unjustified is an understatement. It is certainly true that electronic technology has altered our perception of reality and made it easier for politicians and the media to mislead us. But from that, it does not follow that there is no reality and no truths about it. Indeed, Baudrillard’s conclusion contradicts itself. It can’t be, as he says, a “fact that there is no truth.” If that’s a fact, then it’s true. Baudrillard’s mistake is that he has confused epistemology (the study of knowledge) with metaphysics (the study of reality). Our continued exposure to simulacra may make it difficult to know the way the world is, but from that it does not follow that the world is no way at all. According to most philosophers, and the general public, a proposition is true if it simply corresponds to the way the world is. It’s called the correspondence theory of truth. And if a proposition does correspond to reality, then it’s true—regardless of whether you are aware of that correspondence or not.

So, if knowledge (that is, justified true belief) is impossible, it’s not because it’s impossible for a belief to be true. In fact, notice that, on the correspondence theory, things could even be true in the Matrix. Indeed, we can’t say, as Trinity does, that the Matrix isn’t real (unless what we mean is that it is artificial). The Matrix still exists. Otherwise, what is Morpheus trying to free people from? It’s just that the nature of the Matrix is different from what those in it assume; it’s digital rather than material.

Interestingly, Cypher tells Trinity that he thinks the Matrix could be more real than the physical world; consequently, he wants to have his memory erased and then be plugged back into the Matrix. Cypher wants to trade his difficult life on the outside for a life of ease on the inside; he wants to trade the uncomfortable truth of reality for the blissful ignorance provided by the Matrix. And that’s what makes the Matrix so philosophical. Because Cypher is a villain of the film, we must conclude that one moral of the story is that this isn’t the way a person should be. Knowledge is more important than pleasurable experience. Ignorance is not bliss; ignorance is slavery. It takes courage, but you should seek and embrace the truth even when it makes life hard—even when the truth is uncomfortable. You should not just “believe whatever you want to believe,” as the blue pill allows you to. You should take the red pill; you should seek out the truth at all costs.

This has been the mantra of philosophers since Socrates, who illustrated the life of the philosopher this way in Plato’s allegory of the cave: the philosopher is able to break through the chains put on us by society that make us see the world a certain way—Baudrillard’s simulacra. Thus, through careful logic and reason, a philosopher can come to see the world as it really is. And once he or she does, he or she can never turn back. Unlike the cowardly Cypher, the philosopher would never again gladly embrace ignorance—no matter how horrible or inconvenient the truth might be.

Should We Always Choose Knowledge Over Ignorance?

In the Matrix, despite the fact that they started out in the pretend world, all our protagonists are glad to be freed from it. They reject the familiar for the truth, even though it’s inconvenient. And the same seems true for us. People seem to agree that a life filled with ignorance about the nature of the world is not as meaningful as one absent that ignorance.

Knowledge is intrinsically valuable. But that’s not the only value of knowledge. Understanding the way the world works also helps you navigate and manipulate it. So, you should also value your ability to attain knowledge and resist efforts to rob you of that ability. Just like in the Matrix, the truth can sometimes be uncomfortable, and being comfortable is not more important than understanding reality. Willful ignorance is not only pitiful, but it can endanger the rest of us—just as Cypher’s desire for ignorance put Neo, Trinity, and Morpheus in danger. This makes such willful ignorance not only epistemically unvirtuous, but morally reprehensible.

But it’s important to note that knowledge isn’t the only thing that is intrinsically valuable. After all, the Matrix doesn’t just make one ignorant; it makes one a slave to the Machines. Freedom is also important. And that’s partly why Cypher wanted to be plugged back in. In response to Trinity saying that Morpheus had set him free, Cypher says, “Free? You call this free? All I do is what he tells me to do. If I have to choose between that and the Matrix, I choose the Matrix.”

Knowledge is valuable, but so are freedom and happiness. What makes Cypher so villainous is that he doesn’t care at all for the value of knowledge and is willing to sacrifice the lives of others for his own hedonistic pleasure. It’s bad enough to prefer ignorance to knowledge; it’s even worse to kill others to get it.

Is Having Knowledge Impossible?

Does the fact that we can’t be certain that we aren’t in the Matrix mean that having knowledge is impossible? In short, no. The problem is that there are essentially two hypotheses that are consistent with the evidence of your experience:

1. You are actually awake and experiencing a physical world, or

2. You are being fooled in some grandiose way, such as by the Matrix, into thinking the world is real when it’s not.

And there is no test that you can perform to prove which hypothesis is true. In science, too, two explanations can often account for the same data, so you have to decide between them by appealing to other criteria:

1. Which hypothesis is simpler (that is, which hypothesis makes the fewest assumptions)?

2. Which hypothesis has wider scope (that is, which hypothesis explains the most without raising unanswerable questions)?

3. Which one is more conservative (that is, which one better aligns with what is already well established)?

These criteria are, by definition, the marks of a good explanation. So, whichever explanation satisfies the most of them is the best explanation. We can do the same kind of thing with Descartes’s problem. What is the better explanation for your experiences: that you are experiencing the world right now or that you are being fed sensations by a supercomputer like the Matrix?

The Matrix explanation isn’t simple: it assumes the existence of the world, and the existence of a giant powerful computer in that world. The real-world explanation requires only the former. And the Matrix explanation also has very little scope. It raises all kinds of unanswerable questions—about how the computer works, who built it, why, and how it causes our experiences. But we actually have a pretty good idea of how the universe (if real) came into existence and how it would cause your experience.

So, even though we can’t prove which hypothesis is true, we can show which one is better (which one is most likely), and thus which one is more rational to accept. And thus, you can have knowledge. Can you be certain? No. Even if you can be certain of your own existence, as Descartes argued (“I think, therefore I am”), you can’t (as Descartes tried to do) build up certainty about the entire world from there. But because knowledge doesn’t require certainty, you don’t have to. Knowledge is simply “justified true belief.” You are justified in believing that which is most likely.

Reloaded and Free Will

Soon after the Matrix, two sequels were released:

- The Matrix Reloaded

- The Matrix Revolutions

These may be the two most underrated science-fiction sequels of all time. They were panned viciously by critics and audiences, but they are actually great sci-fi, primarily because they do what sci-fi does best: they raise, and even take a stance on, important philosophical issues. They tackle philosophical topics clearly and directly (especially the topic of free will) and do so in a fairly sophisticated way, evoking and echoing arguments used by professional philosophers.

Reloaded opens with the revelation that the Machines are digging their way to Zion, the underground home city of the remaining humans. The Machines intend to destroy it. Morpheus sees it as an act of desperation because of a recent exponential increase in the number of people being freed from the Matrix. He believes Neo will soon fulfill the Oracle’s prophecy and end the war between the humans and Machines. Right on cue, the Oracle calls Neo for a meeting. And it is here that we get our first hint at the philosophical topic of the movie. When the Oracle offers him a piece of candy, Neo asks her a simple question: If she already knows whether he is going to take it, how can he freely choose whether to take it? With this, the film is borrowing from a philosophical problem that goes back at least to the 6th century and the philosopher Boethius. He believed that God’s perfection entails that God has foreknowledge of the future. But if God knows what you will do before you do it, Boethius asked, how can you freely decide to do what you do?

According to the traditional understanding of free will, known as the libertarian understanding, to freely choose to do something, you must have alternate possibilities. Choosing and not choosing to do the action in question must both be possible. But if God already knows you are going to choose to do something (let’s call it action A), not deciding to do that action is impossible. To be able to decide not to do action A, you must either have the power to make God’s past belief false (which you can’t do because God can’t be wrong), or have the power to change what God’s past belief was (which you can’t do because the past is fixed).

To be clear, the argument is not suggesting that God makes you decide to do action A. He doesn’t. But the fact that God already knows that you will do it logically entails that you can’t decide to do otherwise (nothing but what God already believed you would decide is possible), and thus, when you do action A, you are not deciding to do so freely. If God knows the future, then it must already be written. It must already exist. But if the future already exists, it can’t happen any other way.
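One way to lay out the structure of this worry in premise form is the following sketch (a schematic reconstruction of our own, not a quotation from Boethius); the key move is that our powerlessness over the past and over necessary truths transfers to the action itself:

\begin{enumerate}
  \item God believed, long before you were born, that you will do action $A$ (divine foreknowledge).
  \item Necessarily, whatever God believes is true (God cannot be wrong).
  \item No one has, or ever had, any choice about the past; so no one has any choice about (1) (the past is fixed).
  \item No one has any choice about (2) (necessary truths are not up to anyone).
  \item If no one has any choice about $p$, and no one has any choice about the fact that $p$ entails $q$, then no one has any choice about $q$ (transfer principle).
  \item From (1) and (2) it follows that you will do $A$; so, by (3), (4), and (5), no one, including you, has any choice about whether you do $A$.
\end{enumerate}

Escaping the conclusion requires rejecting at least one of these premises.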

Reloaded and Determinism

The Oracle goes on to tell Neo that to fulfill the prophecy and end the war, Neo must make his way to the computer mainframe—called the Source. And he can only do so with the help of the Keymaker, a program that has been kidnapped by an ancient program called the Merovingian. And when Neo, Morpheus, and Trinity find the Merovingian in a French restaurant, he also raises serious doubts about free will.

Morpheus thinks they have chosen to be there, but the Merovingian argues that “Choice is an illusion, created between those with power and those without. There is only one constant, one universal causality—action, reaction, cause and effect.”

As he gives a beautiful woman in the restaurant a piece of cake, programmed to elicit a sexual response from her, he says, “This is the nature of the universe. We struggle against it, we fight to deny it, but it is of course pretense; it is a lie. Beneath our poised appearance, the truth is we are completely out of control. Causality. There is no escape from it. We are forever slaves to it.”

The Merovingian is echoing another philosophical argument that we are not free—an argument rooted in determinism, an idea that goes back to the ancient philosophers Leucippus and Democritus. Determinism is the view that everything that happens in the universe is the causal result of prior physical events governed by physical laws. For example, once you break the rack on a pool table, with perfect knowledge of physics, you could predict the path of every ball by simply doing the math.

Like those philosophers before him, the Merovingian thinks that the universe is a deterministic system and that our brains (which cause our actions) are just a part of that system. If the Merovingian is right, then, according to Christian philosopher Peter van Inwagen’s consequence argument, free will is indeed an illusion. This is because determinism and free will are incompatible. If determinism is true, then everything that happens is a consequence of the laws of physics and past facts. Nothing else could happen but what they entail, and no human has ever had any choice or control over the laws of physics or facts about the distant past. And this is what the Merovingian is suggesting. Neo, Morpheus, and Trinity have no free will.
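For readers who want to see its skeleton, here is a compressed version of the consequence argument modeled on the formal presentation van Inwagen gives in An Essay on Free Will (the layout and annotations here are ours). Read $Np$ as “$p$ is true and no one has, or ever had, any choice about whether $p$”; let $P_0$ describe the complete state of the world in the distant past, $L$ the laws of nature, and $P$ any true proposition about what happens now; the transfer rule says that from $Np$ and $N(p \rightarrow q)$ we may infer $Nq$.

\begin{align*}
&(1)\;\; \Box\big((P_0 \land L) \rightarrow P\big) &&\text{determinism}\\
&(2)\;\; \Box\big(P_0 \rightarrow (L \rightarrow P)\big) &&\text{from (1), by propositional modal logic}\\
&(3)\;\; N\big(P_0 \rightarrow (L \rightarrow P)\big) &&\text{no one has a choice about a necessary truth}\\
&(4)\;\; N(P_0) &&\text{no one has a choice about the distant past}\\
&(5)\;\; N(L \rightarrow P) &&\text{from (3) and (4), by the transfer rule}\\
&(6)\;\; N(L) &&\text{no one has a choice about the laws of nature}\\
&(7)\;\; N(P) &&\text{from (5) and (6), by the transfer rule}
\end{align*}

If determinism is true, premise (1) holds for every true proposition, and the conclusion (7) says that no one has any choice about anything that happens, which is just the Merovingian’s claim that choice is an illusion.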

Indeed, if our brain is just another part of that physical system (basically a biological machine), then you could, with the proper knowledge, simply look at the brain and predict how it will respond to any given stimulus, just as you could predict the behavior of a program. It’s all just a result of how its components (whether they are microchips or neurons) are wired and fire. And because our brains are responsible for our actions, if determinism is true, then our behavior is perfectly predictable, and thus not free.

Reloaded and Compatibilism

Technically, the Merovingian is wrong. The universe is not a deterministic system; quantum mechanics has taught us that determinism is false. On the quantum level, individual events happen randomly and without a cause all the time. Unfortunately, however, the randomness of quantum events cannot rescue human free will.

First, as Peter van Inwagen points out, indeterminism is just as incompatible with free will as determinism is. Even if our decisions are the result of random quantum events in our brain, we still aren’t free, because we aren’t the cause of those events. We can’t be. Nothing is. Indeed, their being random entails that they are not caused.

Second, determinism is still true in a different way. Quantum randomness, which occurs on the micro level, is essentially averaged out on the macro level of larger objects. For example, the decay of individual radioactive atoms is random, but if you have a collection of them, you can deterministically predict when half of them will decay. This is called adequate determinism, and it is because of this that physical laws can be used to predict the behavior of larger physical systems. Because the brain is such a system, even though it may be impossible to predict specific quantum events within it, the outcome of the brain’s activity is likely still deterministic.
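To see how this averaging works, here is a minimal simulation sketch in Python (the 1%-per-step decay rate and the function name are made up purely for illustration): each individual decay is random, yet the time it takes for half of a large sample to decay barely varies from run to run.

import random

def steps_until_half_decayed(num_atoms, decay_prob=0.01):
    """Count time steps until half of the atoms have decayed."""
    remaining = num_atoms
    steps = 0
    while remaining > num_atoms / 2:
        # Each remaining atom independently decays with probability decay_prob.
        decayed = sum(1 for _ in range(remaining) if random.random() < decay_prob)
        remaining -= decayed
        steps += 1
    return steps

# Micro-level randomness barely shows up at the macro level: every run prints
# a number very close to 69 steps (since 0.99 ** 69 is roughly 0.5).
for trial in range(5):
    print(steps_until_half_decayed(100_000))

No single simulated decay is predictable, but the aggregate half-life is; that same averaging is what lets physical laws predict the behavior of large systems such as brains.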

All of this makes it difficult to defend the notion that humans are free in the libertarian sense. Consequently, some philosophers have suggested an alternate understanding of what it means to be free. It’s called compatibilism because it suggests that free will and determinism are compatible. It dates back to Aristotle and is defended by modern-day philosophers such as John Martin Fischer. The essence of the suggestion is that an agent freely performs an action as long as that action flows or follows from some part of the agent. To modify Fischer’s argument, which was originally about moral responsibility, we might say that an agent’s action is free as long as it is the result of a conscious, rational, deliberative process. If the agent thinks about what to do and then the outcome of that process causes the agent’s action, the agent acted freely.

The problem with this understanding of free will is that it doesn’t align with our intuitions about what free will is. According to the theory, as long as you are acting in accordance with the result of your rational deliberation, then you are acting freely—even if outside forces are what ultimately caused that rational deliberation to occur as it did. And that doesn’t seem right. To see the problem, consider again when the Merovingian gave the beautiful woman a piece of cake programmed to elicit a sexual response. Although it happens offscreen, the events that follow clearly indicate that the Merovingian followed her to the ladies’ room to receive a sexual favor. That is why his wife, Persephone, gets upset and betrays him. This is morally wrong—something similar to using a date rape drug.

But what if the program the Merovingian wrote reprogrammed her brain to rationally conclude that she should perform a sexual favor for the Merovingian? On compatibilism, we would have to say that she chose to do what she did of her own free will. But clearly this is not the case. What she did was not up to her; it was forced on her from the outside. Thus, her action was not free. The Merovingian is, indeed, guilty of rape. This causes a problem for our free will because the outcome of our rational deliberations is not up to us either, but instead is forced onto us from the outside. Our ultimate desires are a result of our brain structure, which is ultimately a result of our environment and DNA. They program us, just like the Merovingian programmed the beautiful woman.

Reloaded and Threats To Free Will

The threats to free will get even worse once Neo rescues the Keymaker, makes his way to the Source, and meets the Architect—the program who designed the Matrix. The Architect is the ultimate intellectual, and what he reveals is as mind-blowing as the moment Neo awoke from the Matrix in the first movie. The first Matrix was a perfect world, without suffering or evil, that failed because its human subjects were unable to accept it. The Architect thus redesigned it to include evil. But it still failed. Unable to understand why, the Architect consulted an “intuitive program, initially created to investigate the human psyche”: the Oracle.

The Oracle realized that the Matrix couldn’t work unless those plugged into it, and humanity itself, had a genuine choice as to whether to accept or reject the Matrix. To create this choice, the Machines created Zion, a city that embodied the option of rejecting the Matrix and gave those who did so a place to live. This solution worked; indeed, 99.9% accepted the program. But this created a new problem: over time, Zion would grow, freeing more and more people, and eventually the Matrix would be empty. To deal with this, the Machines simply decided that (when things started to get out of hand) they would reset the entire system: destroy Zion, select a few humans from the Matrix to repopulate it, and reboot the Matrix itself. But again, for this to work, a choice had to be offered.

So, the Machines decided to select one exceptional individual and give him special powers to essentially make him a messiah, and thus a spokesperson, for the humans. They would then trick him into going to the Source with a prophecy about him being able to end the war. Once there, however, they would reveal the deception and force him to choose between cooperating with the Machines’ plan and allowing “the extinction of the entire human race.” Neo learns that five “chosen ones” have preceded him, and that they all chose to cooperate.

But Neo is a bit different; the previous “chosen ones” felt only a general attachment to humanity, whereas Neo is in love with Trinity. But this actually creates a problem. As Neo is making his decision, Trinity is about to be killed by an agent. So, Neo rejects cooperation to try to save her. But this brings us back to the topic of free will because what the Architect says as Neo is making this choice seems to suggest that the Merovingian is right about free will being an illusion: “We already know what you’re going to do, don’t we? Already I can see the chain reaction, the chemical precursors that signal the onset of emotion, designed specifically to overwhelm logic and reason. An emotion that is already blinding you from the simple and obvious truth: She is going to die, and there is nothing that you can do to stop it.”

This is worse than the Architect simply predicting Neo’s choice by looking at the deterministic mechanisms in his brain. It seems that Neo’s decision isn’t even arising from a conscious, deliberative process; it’s just coming from his emotions. And emotions arise from a primitive portion of the brain called the limbic system, which isn’t even conscious. If our actions are not only the result of predictable deterministic processes in our brain, but also of unconscious processes, it would seem that even on compatibilist understandings of free will, we are not free.

Revolutions and Belief in Free Will

In the final film, the Architect and the Merovingian are wrong; the humans do have free will. In fact, it seems that it is merely by an act of free choice that Neo ends the war between the humans and Machines at the end of Revolutions. Upon their last meeting in Revolutions, the Oracle admits to Neo that her foreknowledge is not perfect. She cannot “see beyond a choice [she does] not understand.” This seems to mean two things:

1. She can’t see beyond her own choices.

2. She also can’t see beyond choices that are truly free.

She didn’t know that Neo was going to reject the Architect’s offer, she doesn’t know whether Neo will be able to end the war, and she doesn’t know whether the risky choice she is about to make will pay off. At this point, both Neo and the Machines are facing the same problem. In Reloaded, the program Agent Smith had become a computer virus, copying himself, over and over, onto other programs. By Revolutions, Smith has taken over a large portion of the Matrix, and the Machines outside the Matrix rightly fear that he will start taking them over, too. In an effort to stop him, the Oracle is going to let Smith copy himself onto her, thus absorbing all her powers—including her ability to predict the future. She willingly intends to let Smith do this. But why? What purpose could it serve?

We learn the answer after Neo strikes a deal with the Machines: if he helps them stop Smith, then they won’t destroy Zion. The Machines plug Neo directly into the Matrix so that he can square off with Smith one last time. Smith chooses the copy of himself that overwrote the Oracle to face Neo. As Neo keeps getting beaten down, he still chooses to fight. Smith asks Neo why he persists. Neo responds, “Because I choose to.”

Once Smith has smashed Neo to the ground one last time, the Oracle’s foresight kicks in, and Smith sees a vision that this is the end. But when Neo chooses to get up again, Smith is surprised. That was supposed to be the end. But Smith doesn’t realize the limits of the Oracle’s power: that she can’t see past free choices—in this case, seeing past whether Neo will choose to keep fighting. Now uncertain of the future, and thus fearful of Neo, Smith panics and copies himself onto Neo. This gives the Machines direct access to Smith, and they purge the system of Smith, killing Neo in the process.

Did the Oracle always know that Neo would make the right choice? No, because he had to make it of his own free will. But she believed. In a way, the entire movie is a conflict between those who don’t believe in free will (such as Smith, the Merovingian, and the Architect), and those who do (such as the Oracle, Neo, and Morpheus).
