Elon’s Demon

The famed entrepreneur Elon Musk, known for his electric car company Tesla and his space-faring venture SpaceX, said in a recent interview at Recode’s Code Conference that there is a high probability we live in a computer simulation maintained by a technologically more advanced civilisation. Thanks to his techno-visionary status, Musk’s comments were picked up by media outlets all over the world. But is there anything to his argument besides shock value?

Musk’s argument is simple. Given the staggering rate of technological progress, especially in the fields of computing and virtual reality, we will soon be able to create simulated environments so life-like that, for those inhabiting them, they will be indistinguishable from the external world.

“Forty years ago, we had Pong … two rectangles and a dot. That was what games were. Now, 40 years later, we have photo-realistic 3D simulations,” mused Musk. He continued: “If you assume any rate of improvement at all, then the games will become indistinguishable from reality.” As interest in video games is unlikely to wane:

“[T]here would probably be billions of [machines capable of running simulations]”. Thus, Musk argues, “[…] it would seem to follow that the odds that we are in base reality is (sic) one in billions.”

Due to the short length of the interview, there are some ambiguities in Musk’s argument, though holding them against him would be uncharitable. Instead, let’s see whether the most straightforward reading of the argument gets us anywhere. What would it take to simulate an environment so well that it would be indistinguishable from reality? At a minimum, it would require duplicating the sensory input our brains receive from the outside world, in the appropriate format and to the degree of precision relevant for the operations the brain performs on this input. The device would probably have to interface directly with the brain, as any mediating equipment would produce surplus sensory input: if you wear a headset, for example, you can feel its weight, which disrupts the parity of inputs. The simulated environment would also have to be responsive to the (pretend) motor actions we produce in response to the environmental input. All in real time, no glitches allowed.

The plausibility of this setup depends on the nature of the relationship between the organism and the environment. How fine-grained a difference in environmental conditions can our sensory systems register? Which is the salient information extracted from the sensory flow? And to what extent do our motor actions co-determine the content of the sensory input we receive?

Fractured Reality – artwork by Alex Gowan-Webster, created using Audacity databending (CC0 licence)

In recent years, cognitive science has shifted towards models of the mind emphasising the close coupling of organism and environment, and the importance of long-term body-world interactions in the operation of the mind. The most radical researchers even reject the idea that brains process information at all, preferring to describe the organism and the environment as a single dynamical system, in which neither part can be adequately described without reference to the other. The dynamical systems underlying intelligent behaviour are often thought to involve variables whose values change continuously rather than discretely, and are sometimes described as chaotic. Simulating such a system faithfully would require arbitrary precision, which is no easy feat; brute physical limitations on the speed of transduction might make it impossible to pull off. Predicting the behaviour of such a system is, moreover, notoriously difficult. Be that as it may, it is, I think, clear that extrapolating from video-game technology alone cannot settle the question.
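
To make the worry concrete, here is a minimal sketch in Python (my toy illustration, not a model of the brain) using the logistic map, a textbook chaotic system. Two trajectories that start one part in a billion apart become completely uncorrelated within a few dozen steps, so a simulation that rounds its state, however finely, eventually parts ways with the system it imitates:

```python
# Logistic map: x -> r * x * (1 - x). At r = 4 the dynamics are chaotic,
# so tiny differences in the starting state grow exponentially.

def logistic_map(x, r=4.0):
    """Advance the state by one step."""
    return r * x * (1 - x)

x_a = 0.400000000  # the "real" trajectory
x_b = 0.400000001  # the "simulated" trajectory, off by roughly one part in 10^9

for step in range(1, 51):
    x_a, x_b = logistic_map(x_a), logistic_map(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |x_a - x_b| = {abs(x_a - x_b):.9f}")
```

By around step 30 the gap is as large as the states themselves: the two histories share a past but no longer a present.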

And then there is the conceptual problem highlighted by the late philosopher Hilary Putnam in his essay “Brains in a Vat”. Putnam imagines a scenario quite similar to Musk’s perfect simulation device – brains connected to supercomputers which simulate reality. According to Putnam, a brain in a vat could not entertain the thought “I am a brain in a vat”. This is because words and thoughts refer to their objects in virtue of being appropriately causally connected to these objects. When I think about Elon Musk, my thoughts have Musk as their object because I have seen a picture of him and heard others use this name to designate this particular person. There is a causal link between Musk and these specific marks on the page, and that link explains why the marks occur here.

But for the brain in a vat, the word “vat” is not connected to the vat in the right way, as the brain has never encountered it. Or rather, if the word “vat” is connected to the vat in the right way, then so are all the other words the brain uses, since ultimately everything the brain encounters originates from the vat. Now, because we can in fact entertain the thought that we are brains in vats, it follows that we cannot be in that situation. Musk’s suggestion that we are in a simulation is parallel to Putnam’s thought experiment: if we were in a simulation, our thoughts (purportedly) about the simulation and its source would not stand in the right causal relationship with the simulation and its source to achieve reference. This is not a straightforward argument, but something tells me it did not enter into consideration when Musk and his brother were discussing the issue.

Maybe Musk does not want to debate brains in vats, though. He might think that we ourselves are artificially intelligent programs – subroutines in a simulation which encompasses the whole universe. This possibility was entertained by Oxford University’s Nick Bostrom. Bostrom argued that if we assume that minds are analogous to computer software, and can be implemented on a variety of platforms (a popular position, but not without its detractors), then at least one of the following must be true:

“(1) the human species is very likely to become extinct before reaching a ‘posthuman’ stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.”
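
For readers who like their trilemmas with arithmetic: Bostrom derives these options from a simple count of simulated versus non-simulated observers. The sketch below is my simplification of the fraction in his paper, with illustrative numbers of my own choosing; unless civilisations almost never reach the posthuman stage, or posthumans almost never run ancestor-simulations, simulated minds vastly outnumber the rest:

```python
# Simplified form of the fraction of observers who are simulated,
# after Bostrom (2003):  f_sim = (f_p * n_sim) / (f_p * n_sim + 1)
#   f_p:   fraction of civilisations that reach a posthuman stage
#   n_sim: average number of ancestor-simulations each posthuman
#          civilisation runs

def fraction_simulated(f_p, n_sim):
    return (f_p * n_sim) / (f_p * n_sim + 1)

print(fraction_simulated(1e-9, 1e6))  # option 1: ~0.001, almost nobody gets there
print(fraction_simulated(0.1, 0.01))  # option 2: ~0.001, almost nobody simulates
print(fraction_simulated(0.1, 1e6))   # option 3: ~0.99999, we are likely simulated
```

If neither of the first two escape hatches holds, the third horn follows; the question is which hatch to take.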

Bostrom’s argument, though similar to Musk’s, is more subtle, because he appreciates the scale of resources required to make running such simulations possible. His posthuman civilisations are so advanced that they can harness the power of stars, achieve effective immortality, and are capable of many other feats which we can only fantasise about. Musk thinks that assuming any rate of progress we are bound to get there at some point. Those more sceptical of unstoppable linear technological progress, like myself, can safely choose option 1 of Bostrom’s trilemma. Bostrom’s version thus trades some of the argument’s sensational bite for much-needed depth. And remember that the arguments I marshalled against Musk work against Bostrom too – in particular, Putnam’s objection that the very thought of being in a simulation implies its own falsity, because of the way in which reference and intentionality depend on causal relations between the thinker and the referent.

All in all, not so fast, Elon!

Matej Kohár is a master’s student of Mind, Language, and Embodied Cognition at the University of Edinburgh. If you are interested in the topics raised, he recommends checking out G.E. Moore’s “A Defence of Common Sense” and Tim van Gelder’s “What Might Cognition Be, if Not Computation?”.

How about you: do you think it’s plausible that we are living in a simulation?


2 thoughts on “Elon’s Demon”

  1. Nice to see some thoughts on the topic. When I first heard these comments from Elon, I thought ‘that man is neither a neuroscientist nor making videogames’, but to be honest I find the fact that he considers such things an indication of good philosophical health. And your response seems fun and in the spirit of debate.

    Firstly, if I may, I just want to correct a small error. You say ‘Musk thinks that assuming any rate of progress we are bound to get there at some point. Those more skeptical of unstoppable linear technological progress’; however, Elon mentions with regularity that windows of technology open and close, but that if you look at the trend over time it is generally upwards (he most often makes this point with regard to rockets) (1). I think, though, that it is easy to view technological progress as a strong force when one is making technology; I have the same bias.

    I think that if I am standing in a room which is closed off from the outside world, then I can imagine that there is something outside the room, but I won’t necessarily be able to guess correctly what it is. In the same way, although I can imagine that I am in a simulation, if I have been in it my whole life then I am unlikely to be able to guess correctly what real life is like. At that point you arrive at something like old beliefs about higher levels of reality and our souls ascending to new levels of consciousness when we die – that is, when we leave the simulation.

    This all being said, I’m happy to naively believe we live in the real world.

    (1) ‘An asteroid or a super volcano could destroy us, and we face risks the dinosaurs never saw: an engineered virus, inadvertent creation of a micro black hole, catastrophic global warming or some as-yet-unknown technology could spell the end of us. Humankind evolved over millions of years, but in the last sixty years atomic weaponry created the potential to extinguish ourselves. Sooner or later, we must expand life beyond this green and blue ball—or go extinct’

    I think it is also clear, from something like the fall of Rome to the Renaissance, that knowledge does not proceed linearly.

  2. This is going to be an important issue for the next generation, so it’s good that we’re starting to analyse it now. I am confident that Elon Musk and Nick Bostrom are wrong, but not for the reasons proposed above.

    In particular, this argument is wrong: “According to Hilary Putnam, a brain in a vat could not entertain the thought ‘I am a brain in a vat’. This is because words and thoughts refer to their objects in virtue of being appropriately causally connected to these objects.” In fact, Putnam has misconstrued the nature of meaning and reference, as we shall now see.

    Consider, if you will, Thomas Anderson sitting in the Nebuchadnezzar, plugged into the vast virtual reality of the Matrix. His avatar, Neo, is sitting in a (virtual) noodle bar, enjoying a (virtual) bowl of stir-fry noodles with vegetables. Trinity calls him on his mobile ‘phone, and asks “Where are you, Neo?”, and he answers, “Hey, Trin, I’m in the noodle bar. Come and join me.” Half an hour later, Trinity slinks into the noodle bar and sits down next to Neo. She orders some egg noodles and they sit and chat. “Whoa, Trin, these noodles taste so much better than that gloop they feed us on the Nebuchadnezzar”, and Trinity assents, “Hell, yeah, Mouse was right to complain that that stuff tastes like snot.”

    Observe that Neo and Trinity are able to communicate effectively about things in the Matrix world (e.g. the noodle bar) and in the external world (e.g. the Nebuchadnezzar’s canteen). This is despite the fact that the noodle bar does not exist and, according to Putnam, Neo’s words cannot ‘refer’ to the noodle bar: as the noodle bar does not exist, the words cannot have a causal connection with it.

    Where has Putnam gone wrong? He goes wrong by adhering to the physicalist ideology that we live in a physical world, and we know about physical stuff and can talk meaningfully about it. That ideology is untenable and, in fact, incoherent. All that you ever experience is the contents of your own mind: the qualia and spatiotemporal structures of qualia. You may say that you are sitting on a chair, reading this text on a computer screen and drinking a mug of coffee, but what those statements really refer to is your stream of conscious experiences: you have a tactile experience that you label as feeling your bum on the chair, you have a visual experience that you label as seeing the text, you have an olfactory experience that you label as the smell of coffee. It’s all in your mind. If Trinity ‘phones you up and asks whether you left any coffee in the jar for her, and you say “Yes”, what this is referring to is the counterfactual of going to the cupboard and seeing an adequate quantity of coffee in the jar. It is not referring to any putative physical jar. If your experiences, and hers, are consistent with the state of affairs labelled as ‘enough coffee in the jar’ then ipso facto there is enough coffee in the jar.

    Putnam is peddling the myth that what a statement means is different from how we evaluate its truth status. Wittgenstein was closer to a correct analysis when he said that “In most cases, the meaning of a word is its use.”

    Let’s go back to the Matrix. When Neo and Trinity make statements about the noodle bar, they are referring to the sequence of conscious experiences that they have when they are in the state labelled as ‘in the noodle bar’. Those experiences are constitutive of the meaning and of the truth-test for the statements. The fact that the noodle bar is generated by a computer is irrelevant to this fact. When they make statements about the Nebuchadnezzar’s canteen, then again their experiences ‘in the canteen’ are constitutive of the meaning and truth-test of those statements. It doesn’t make any difference that (at least in the first Matrix film) the Nebuchadnezzar is in a ‘real world’.

    Once we get this myth out of our heads – that meaning and truth-tests have different bases – and become genuine empiricists, we see Putnam’s argument as wholly specious. Of course the brain in the vat CAN entertain the thought that s/he is a brain in a vat, and can meaningfully articulate that thought in the sentence, “I might be a brain in a vat”. What that statement means is “If somebody switches off this Matrix simulation, and switches my visual data feed to the CCTV in the lab, then I will experience the visual sensation labelled ‘seeing myself as a disembodied brain in a vat’; and if they let me read the software code, I shall see all the programming logic of this simulation, and maybe they’ll even let me change the code and thereby change the world when they reload the Matrix.”

    It is so utterly obvious that a brain in a vat can entertain the thought that s/he is a brain in a vat that it staggers me that people still cling to Putnam’s argument. Maybe it’d be better to skip philosophy classes and just watch the Matrix trilogy.

    So, this argument against Elon Musk fails.
