
Emergent, not dead

When physicists and philosophers talk about the universe, which they do a lot, they often talk about what is fundamental and what is not. What is not fundamental is described as emergent, meaning that it emerges from what is fundamental. In the world of physics, what is fundamental are the elementary particles and forces of which all other things are composed. Everything else is emergent. That includes all combinations of fundamental things, starting with the atomic elements, the molecules, materials and substances, objects in space—planets, stars, and galaxies—and of course, all the organisms and entities that exist on objects in space, such as bacteria, plants, animals, and humans. All these things are described as systems created from components of fundamental particles and forces.

So far, so good.

The story gets more complicated when physicists and philosophers talk about causation and agency. There is a view among many that what is fundamental is more real than what is not. Emergent things are either not real or at least somewhat less real than what is fundamental. And even if admitted to be real, emergent things such as systems have less power—less causative power—than what is fundamental. Under this common view, all things and events in the universe result from the movements and interactions of fundamental particles and forces. The actions and interactions of emergent things and systems result from and are caused by fundamental particles and forces. Exclusively. Causation moves in only one direction, from what is fundamental to what is not. There is no reverse causation or feedback loop from emergent things to fundamental things.

Does downward causation break the laws of physics?

Downward causation refers to the power of things that are not fundamental, i.e., all emergent things and systems, to exercise causation or agency. Such top-down causation is often described as supernatural and a violation of physical laws. Physicist Sean Carroll talks about focusing on one atom in a finger of his hand and predicting its behavior based on “the laws of nature and some specification of the conditions in its surroundings—the other atoms, the electric and magnetic fields, the force due to gravity, and so on.” Such a prediction does not require “understanding something about the bigger person-system.”[1] It goes without saying that the action of moving his hand is not relevant to predicting the motion of the atom.

Physicist Sabine Hossenfelder calls it a “common misunderstanding” that a computer algorithm written by a programmer controls electrons by switching transistors on and off or that a particle accelerator operated by a scientist causes the collision of two protons to produce a Higgs boson. In both cases it is the deeper fundamental physical composition, i.e., the neutrons, protons, and electrons, that explains the events; it is simply useful to describe the behaviors of the systems (the computer, the accelerator, the programmer, the scientist) in practical system-level terms.

[W]e find explanations for one level’s functions by going to a deeper level, not the other way around…. [A]ccording to the best current evidence, the world is reductionist: the behavior of large composite objects derives from the behavior of their constituents….[2]

The assumption of determinism

These assertions are not without controversy. First, there is no universal agreement that the behavior of higher-level things can always be explained by looking at lower-level things and the behavior of constituents.[3] Systems admittedly are combinations of fundamental things, but those combinations result in properties and behaviors that don’t occur at lower levels. Many of the properties relevant to the behavior of emergent systems don’t even exist at the level of fundamental particles and forces. Trying to explain all emergent system behavior by describing the behavior of fundamental particles is somewhat like trying to explain a computer game by describing the opening and closing of logic gates on integrated circuits.[4] You might learn what’s occurring in the computer hardware, but you wouldn’t be able to play the game.

There also seems to be an assumption that “explained by” is equivalent to “caused by”. If you can describe the properties and behavior of a system in terms of particles and forces, then the behavior of the system is caused by those particles and forces. The ability to describe a system in terms of fundamental particles and forces seems relatively established, i.e., when an arm moves, that movement also constitutes the movement of many billions of tiny particles under the influence of fundamental forces. That much is uncontroversial. But whether those particles and forces also can decide to move the arm does not follow quite so logically or incontrovertibly.

That last step requires another key assumption—that the behavior of systems is completely determined by the behavior of fundamental particles and forces. It requires a conclusion that “using the laws of physics to move my arm” is equivalent to “having my arm moved by the laws of physics.” In other words, it assumes complete determinism, which means the behavior of the universe can be analogized to a long chain of dominoes stretching back to the Big Bang 13.8 billion years ago, all falling in a deterministic pattern. Your arm, my arm, and any decision to raise any arm are all dominoes in that chain.

The problem with dominoes

On the face of it, a long chain of dominoes seems a simplistic and brittle design architecture for 13.8 billion years of history. But putting aside the fragility of the design, there is a more fundamental problem with a picture of the universe based on a chain of dominoes—our deepest theory of physical reality says that what is fundamental is not wholly deterministic. Quantum evolution is not deterministic but probabilistic. It integrates uncertainty, probability and indeterminacy into what is fundamental. Determinism relies on an unbroken chain of events and causes. Quantum mechanics breaks the causative chain at a very deep level—the level of fundamental particles and forces.

The problem with indeterminacy

The story does not end there, however, because quantum indeterminacy does not run rampant through the macroscopic world. Nor does it cause quantum mechanics to produce nonsensical, random, or chaotic results. No, in fact, despite breaking the causative chain of determinism, quantum mechanics produces extremely accurate predictions and is one of the most successful tools ever created by physics; it is the foundation of much of our advanced technology. Microscopic quantum indeterminacy simply does not result in ubiquitous macroscopic indeterminacy.

The reason is that the seemingly random indeterminacy of quantum state reduction, i.e., what we might call quantum jumps, occurs within the probability distribution of the quantum wave function. As a result, many, many microscopic quantum jumps average out to produce aggregate results predicted by the wave function. The laws of probability cause those many, many trillions of tiny quantum interactions to produce a macroscopic world that looks like the world predicted by the wave function and by classical physics. The macrocosm does not look like the quantum world; it looks like Newton’s classical world.
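To make the averaging concrete, here is a minimal sketch (my own toy example, not drawn from any of the cited authors) that samples a million individually unpredictable spin measurements on a simple quantum state and compares their average to the value the wave function predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# A spin-1/2 state a|up> + b|down>, normalized so that |a|^2 + |b|^2 = 1.
a, b = np.sqrt(0.7), np.sqrt(0.3)
p_up = abs(a) ** 2                    # Born rule: probability of measuring "up"

# Each measurement registers +1 (up) or -1 (down) -- individually unpredictable.
outcomes = rng.choice([+1, -1], size=1_000_000, p=[p_up, 1 - p_up])

predicted = (+1) * p_up + (-1) * (1 - p_up)   # average predicted by the wave function
print(f"wave-function prediction: {predicted:+.4f}")
print(f"average of 10^6 jumps:    {outcomes.mean():+.4f}")
```

Each individual “jump” is random, yet the aggregate lands squarely on the predicted average—the sense in which the macrocosm looks classical.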

So have we come full circle? Does quantum indeterminacy break the causative chain of determinism and then fail to affect the macroscopic world at all? Does it average out so completely that it becomes irrelevant to emergent systems?

Probabilities are not dominoes

We don’t know the full answer—yet. But it seems vanishingly unlikely that something as fundamental as quantum indeterminacy plays no role in the macroscopic world.

It is true that portions of the macroscopic world seem to act in a largely understandable way consistent with a more determinist view of physical behavior. And yet we know that if we drill down deep enough into the behavior of macroscopic systems, we will find beneath the surface both practical and theoretical uncertainty limiting what we can measure and know about quantum behavior.

We also know that there is a difference between predicting the probability of something happening and predicting what actually happens. There is a tension between those things, a dynamic that makes a difference, even in the emergent world. Probabilities are predicted distributions over many occurrences. In any one occurrence, the particular result is not predictable. So even if the broad-scale average behavior of emergent systems were predictable, the behavior of each system in each event is not. Nature presents us with an average, not an absolute, picture of the macroscopic world; classical physics works as an approximation of quantum physics only because of averages and scale.

Unpredictable variation, in fact, is a requirement for application of the laws of probability. Probability results in a meaningful representation of behavior only if there exists a large number of different events whose outcomes average into a distribution. That requires the occurrence of events which are not individually predictable. In other words, for the aggregate behavior of systems to converge on a meaningful probability, individual systems must have the ability to do something improbable. That must be true for any system whose actions are not predictable with 100% probability. Anything short of 100% requires that the system must on occasion do something less than 100% probable—something improbable or unlikely or even random.

That, of course, is exactly what many emergent systems do. From tumbling bacteria[5] to complex weather patterns to human beings, complex emergent systems on any given day do not conform to the average. Instead, they engage in deeply unpredictable behavior which fits a model of the universe based on probabilistic evolution, at both the microscopic and macroscopic levels.

Emergent systems learn to do random things

Natural selection may teach biological systems to do exactly that. Neuroscientist Kevin Mitchell theorizes that complex biological systems take advantage of the chance introduced by quantum indeterminacy to exert causal influence.

[T]he really crucial point is that the introduction of chance undercuts necessity’s monopoly on causation. The low-level physical details and forces are not causally comprehensive; they are not sufficient to determine how a system will evolve from state to state. This opens the door for higher-level features to have some causal influence in determining which way the physical system will evolve. This influence is exerted by establishing contextual constraints: in other words, the way the system is organized can also do some causal work. In the brain, that organization embodies knowledge, beliefs, goals, and motivations—our reasons for doing things. This means some things are driven neither by necessity nor by chance; instead, they are up to us.[6]

Emergent systems evolve a design architecture that leverages indeterminacy without breaking the laws of physics.

The universe is not deterministic, and as a consequence, the low-level laws of physics do not exhaustively encompass all types of causation. The laws themselves are not violated, of course—there’s nothing in the way living systems work that contravenes them nor any reason to think they need to be modified when atoms or molecules find themselves in a living organism. It’s just that they are not sufficient either to determine or explain the behavior of the system.[7]

In particular, he describes how organisms use indeterminacy, embodied in “an inherent unreliability and randomness in neural activity,”[8] to exercise causative power in an extraordinary way: “[O]rganisms can sometimes choose to do something random.”[9]

Self-governing systems constrained by probability

Is it possible that the universe can construct autonomous, self-governing, decision-making systems? Can fundamental particles and forces create causation engines that are constrained by the laws of physics and probability but not fully determined by the particles and forces that build them?

Philosopher of physics Jenann Ismael argues that determinism does not rule out the existence of autonomous systems “with robust capabilities for self-governance.”[10] Self-governing systems can have the “felt ability to act spontaneously in the world, to do what [they] choose in the here and now, by whim or fancy, free of any felt constraints.”[11] These emergent systems cannot violate the laws of physics, but they can use them to their own advantage. They can choose without any other local force or subsystem compelling them to do so; they can even engage in capricious or random behavior in defiance of any attempt to predict their actions.

The catch is that this relatively unconstrained freedom exists only for subsystems of the universe where local laws and states are subject to exogenous interventions and no other subsystem can exercise complete control. The big picture is still governed by the global laws of the universe, where there can be no exogenous interventions (because the universe includes everything). Determinism still rules, operating with global laws at the global level. But at the local level, there is freedom for self-governing systems to influence each other and exercise autonomy.

Ismael rejects the notion that quantum indeterminacy changes this picture. And yet her compatibilist description of reality, and her distinction between local freedom and global determinism, looks and feels almost like the universe described by Mitchell—a universe in which the door is open for systems to evolve causative power. Ismael describes the development of the self with autonomous and self-governing capabilities in a way that is very like how Mitchell describes the evolution of free agency through natural selection.[12] In the universe described by both Ismael and Mitchell, fundamental particles and forces enable the existence of emergent systems that exercise agency even to the point of choosing random behavior.

What if the picture Ismael offers is almost entirely correct, except that quantum indeterminacy and probability govern at the global level? Such a world would look and feel like the world she describes, but it would not assume a global principle of absolute determinism. It would be governed by probability at both the microscopic and macroscopic levels. Instead of circumscribed local freedom, self-governing systems would have the relative free agency described by Mitchell, allowing and encouraging them to exercise causative power to do things for reasons and even to do unexpected things.         

What if that is who we are?

It is a truism that ideas can be powerful. Yet it is difficult to describe an idea in the language of fundamental particles and forces. The Pythagorean Theorem has influenced the history of mathematics, but what would the theorem look like represented only by fundamental particles and forces? Perhaps the brain of Pythagoras could be represented as a system constructed from fundamental things, but how exactly would particles and forces represent the mathematical concepts employed by Pythagoras—concepts which undoubtedly have exercised causal influence on other mathematicians, engineers, and scientists? The same question can be asked about the concepts of quantum mechanics. Fermions and bosons may behave quantum mechanically, but could they conceptualize quantum mechanics?

Unless we conclude that concepts have no causative influence—even the concepts of quantum mechanics—emergent systems must be able to exercise some causal power, including through the creation of ideas and concepts.

The inference seems inescapable that the universe and the fundamental particles and forces that comprise it can construct emergent systems with causal power—systems that can’t move the atoms of a finger by breaking the laws of physics, but can choose to move a hand.

Emergent, not dead.


[1] Carroll (2016), p. 109.

[2] Hossenfelder (2022), pp. 88-89. She does acknowledge that there are unanswered questions about the connections between the layers. “Why is it that the details from short distances do not matter over long distances? Why doesn’t the behavior of protons and neutrons inside atoms matter for the orbits of planets? How come what quarks and gluons do inside protons doesn’t affect the efficiency of drugs? Physicists have a name for this disconnect—the decoupling of scales—but no explanation. Maybe there isn’t one. The world has to be some way and not another, and so we will always be left with unanswered why questions. Or maybe this particular why question tells us we’re missing an overarching principle that connects the different layers.” Ibid., p. 89 (emphasis in original).

[3] See e.g., Anderson (1972), Ellis (2020).

[4] Analogy suggested by a passage in Ismael (2016), p. 217.

[5] Biologist Martin Heisenberg describes the ability of certain bacteria to initiate random tumbles in a search for food and a favorable environment. Heisenberg (2009).

[6] Mitchell (2023), pp. 163-164 (emphasis in original).

[7] Mitchell (2023), pp. 168-169.

[8] Mitchell (2023), p. 175 (emphasis in original).

[9] Mitchell (2023), p. 175.

[10] Ismael (2016), p. xi.

[11] Ismael (2016), p. 228.

[12] And also similar to the picture developed by Daniel Dennett. Mitchell (2023), p. 151. Dennett (2017).

Electrons R Us

“Einstein could not bring himself to believe that ‘God plays dice with the world,’ but perhaps we could reconcile him to the idea that ‘God lets the world run free’.” – John Conway & Simon Kochen, “The Free Will Theorem”[1]

Are fundamental particles the source of free will in the universe? More specifically, does the unpredictable quantum behavior of electrons and other micro particles enable macro-level free choice?

Philosophers have puzzled over questions like these since Democritus and Epicurus.[2] The free will theorem of mathematicians John Conway and Simon Kochen addresses the quantum version of the question, famously asserting that if humans have free will, then electrons also have free will.[3] The theorem proves mathematically that the universe cannot be wholly deterministic: if experimenters are free to choose their measurements, then the quantum behavior of particles is not determined by the past history of the particles or the past history of the entire universe. Quantum behavior is non-deterministic, therefore “[n]ature itself is non-deterministic.”[4]

Why do particles behave in unexplained ways?

Physicists have long observed that particles behave in a curious and unpredictable way during quantum evolution. In the initial phase of evolution, particles and their wave functions evolve over time according to the Schrödinger equation, with predictions of particle behavior changing in an expected and deterministic way. In this phase the future direction and behavior of a particle and its wave function is determined by its prior direction and behavior. In a later phase of quantum evolution, however, when the predicted behavior of a particle is tested with a measurement, something different happens. Instead of behaving in a predicted and determined way, the wave function seems to collapse, and the particle jumps to a specific measured state which cannot be predicted with specificity.[5] Physicists cannot say why or how the specific result occurs in that instance. It is in the range of possible results predicted by the Schrödinger equation, but the mechanism by which the particular result is chosen remains unclear.
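In standard textbook notation, the two phases just described can be summarized as follows (a generic restatement, not a formula quoted from any of the sources):

```latex
% Phase 1: deterministic, time-reversible Schrödinger evolution of the state
i\hbar\,\frac{\partial}{\partial t}\lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle,
\qquad
\lvert\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\lvert\psi(0)\rangle .

% Phase 2: measurement of an observable with eigenstates |a> (Born rule)
P(a) = \bigl\lvert \langle a \mid \psi \rangle \bigr\rvert^{2},
\qquad
\lvert\psi\rangle \;\longrightarrow\; \lvert a \rangle
\quad \text{(which outcome } a \text{ occurs is not determined by the prior state).}
```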

Theorists have attempted to explain this behavior by suggesting the existence of unknown or hidden factors which determine the result. The theories assume that the relevant variable simply has not been discovered yet and that, once discovered, it will explain the particular path taken by the particle and its wave function to reach the particular result in each instance. These are called hidden-variable theories.

Electrons make “free” choices

Conway and Kochen analyzed mathematically whether it is possible for hidden variables to determine the outcome of quantum reduction. Relying on non-controversial facts of quantum mechanics, they showed that if an experimenter is free to choose the experiment conducted on a particle, then it can be proven mathematically that the particle is “free” to choose the particular measurement result.[6] In other words, if the experimenter’s choice of how to conduct the experiment is not predetermined by an unknown factor, then it is impossible for the particle’s choice to be predetermined by an unknown factor.[7] The particle is as “free” as the experimenter, and the measurement result chosen by the particle can never be predicted by any preexisting event, variable, or information in the prior history of the universe.

Does the unpredictability of fundamental particles help explain human free will?

The established view among many physicists and philosophers of science is “no”. Fundamental physics is said to offer only two choices—strict determinism or pure randomness—neither of which leaves any room for human judgment or free will.[8]

In contrast, Conway and Kochen argue that the choices made by electrons are not purely “random” or “stochastic” but are more accurately described as “free” or “semi-free”. They believe that a form of “free” choice built into the quantum foundation of the universe may offer a basis for human “free” choice and will.[9]

Free or random

Quantum reduction does have some features not fully consistent with pure randomness. The seemingly “random” results of measurement are not arbitrary but fall within the range of possible results predicted by the Schrödinger equation. Over repeated measurements, the results also average out and approximate the results predicted by both the Schrödinger equation and deterministic principles of classical physics. Perhaps most significantly, entangled particles in a joint superposition produce correlated measurement results. When one entangled particle is measured, with an unpredictable result, a measurement performed on a second twinned particle, entangled with the first, is correlated with the result of the first measurement and therefore more predictable. The twinned, entangled particles do not behave in a completely random way.[10]
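The statistics described above can be sampled numerically. The sketch below simply draws outcomes according to the standard quantum predictions for a spin-singlet pair measured along axes separated by an angle θ; it is an illustration of the correlations, not a hidden-variable model, and the angles and sample size are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)

def measure_singlet_pair(theta, n):
    """Sample n measurement pairs on a spin-singlet state, with the two
    detectors' axes separated by angle theta (standard quantum statistics)."""
    a = rng.choice([+1, -1], size=n)                 # first particle: individually random
    same = rng.random(n) < np.sin(theta / 2) ** 2    # P(same outcome) = sin^2(theta/2)
    b = np.where(same, a, -a)                        # second particle: correlated with the first
    return a, b

for theta in (0.0, np.pi / 3, np.pi / 2):
    a, b = measure_singlet_pair(theta, 200_000)
    print(f"theta = {theta:.2f}  mean(a) = {a.mean():+.3f}  correlation = {(a * b).mean():+.3f}")

# Each individual result is unpredictable (mean(a) is near 0), yet the pair is
# correlated: E(a*b) tracks -cos(theta), and at theta = 0 the two outcomes are
# perfectly anti-correlated.
```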

Some believe that the alternative to determinism is randomness, and go on to say that “allowing randomness into the world does not really help in understanding free will.” However, this objection does not apply to the free responses of the particles that we have described. It may well be true that classically stochastic processes such as tossing a (true) coin do not help in explaining free will, but … randomness also does not explain the quantum mechanical effects described in our theorem. It is precisely the ‘semi-free’ nature of twinned particles, and more generally of entanglement, that shows that something very different from classical stochasticism is at play here.[11]

Conway and Kochen wrote as mathematicians, not neuroscientists, so offered no empirical evidence or theories to explain how the quantum behavior of particles might influence macroscopic entities such as ourselves.[12] But they had a strong belief that it was possible.[13]

Can random occurrences in the microcosm enable non-random evolution in the macroscopic world?

Even if quantum behavior were random, is there reason to believe that random action at the quantum level gives rise to non-random evolution, or something like choice, at the macroscopic level?

We know that random variation in nature can result in non-random evolution. An obvious example is quantum reduction itself, which is governed by the laws of probability. Those laws cause seemingly random results to average out and produce the appearance and reality of non-random macroscopic evolution. Natural selection is also an obvious example; it is based on the principle that random changes and genetic variations drive non-random evolution of species over time.

A less obvious example is the role that randomness and indeterminacy may play in the evolution of reason-based decision-making and free agency. In his book Free Agents: How Evolution Gave Us Free Will, neuroscientist Kevin Mitchell challenges the position that “indeterminacy or randomness doesn’t get you free will.”[14] He argues instead for a direct connection between indeterminacy and the development through natural selection of reasoned judgment and meaning.

The idea is not that some events are predetermined and others are random, with neither providing agential control. It’s that a pervasive degree of indefiniteness loosens the bonds of fate and creates some room for agents to decide which way things go. The low-level details of physical systems plus the equations governing the evolution of quantum fields do not completely determine the evolution of the whole system. They are not causally comprehensive: other factors—such as constraints imposed by the higher-order organization of the system—can play a causal role in settling how things go.

In living organisms, the higher-order organization reflects the cumulative effects of natural selection, imparting true functionality relative to the purpose of persisting…. The essential purposiveness of living things leads to a situation where meaning drives the mechanisms. Acting for a reason is what living systems are physically set up to do.[15]

Uncertainty leads to interpretation, prediction, and the creation of meaning

Mitchell maintains that “indeterminacy at the lowest levels can indeed introduce indeterminacy at higher levels.”[16] If that is true, and indeterminacy is ubiquitous at both microscopic and macroscopic levels, the process of resolving that indeterminacy becomes a fundamental feature of physical existence.

For living systems, resolving indeterminacy means confronting uncertainty. Organisms, as a matter of biological necessity, must deal with a level of unreliability and randomness in the environment. It is built in. There is no escape from it.

With incomplete knowledge about expected occurrences in the environment, organisms learn to interpret events and predict what will happen in order to adapt behavior to threats or opportunities. Organisms that do this well tend to persist better than organisms that predict less well.

For organisms with neural systems such as ours, interpretation of events further leads to the imposition of meaning on the world in order to act and persist within it. The meaning given to events becomes important to survival, and acting in ways that are consistent with that meaning becomes crucial.[17] Creating meaning and acting for reasons helps us survive in an environment of uncertainty and indeterminacy. Natural selection therefore results in organic systems that specialize in interpretation and meaning and choice.

Indeterminacy means organisms can choose to behave randomly

Living systems also learn to use randomness to their benefit. Mitchell describes how the neural structures of our brains have evolved to reflect and take advantage of the uncertainty around us.

There is an inherent unreliability and randomness in neural activity that is a feature in the system, not a bug. The noisiness of neural components is a crucial factor in enabling an organism to flexibly adapt to its changing environment—both on the fly and over time.[18]

The system succeeds, not just despite uncertainty and randomness, but also because of it.

[O]rganisms have developed numerous mechanisms to directly harness the underlying randomness in neural activity. It can be drawn on to resolve an impasse in decision making, to increase exploratory behavior, or to allow novel ideas to be considered when planning the next action. These phenomena illustrate the reality of noisy processes in the nervous system and highlight a surprising but very important fact: organisms can sometimes choose to do something random.[19]

The ability to harness randomness enables the creativity that characterizes brains like ours and enhances our ability to survive and grow and persist. Mitchell cites the two-stage model of free will proposed by William James as a model for how organisms use randomness and indeterminacy to broaden the options available for decision-making.[20] Ideas spring to mind in a seemingly, or actually, random way, but then the organism applies judgment and decision-making to choose the option that suits the requirements of the system in that moment.

In humans, we recognize this capacity as creativity—in this case, creative problem solving. When we are frustrated in achieving our current goals or when none of the conceived options presents an adequate solution to the current problem, we can broaden our search beyond the obvious to consider new ideas. These do not spring from nowhere but often arise as cognitive permutations: by combining knowledge in new ways, by drawing abstract analogies with previously encountered problems in different domains, or by recognizing and questioning current assumptions that may be limiting the options that occur to us. In this way, humans become truly creative agents, using the freedom conferred by the underlying neural indeterminacy to generate genuinely original thoughts and ideas, which we then scrutinize to find the ones that actually solve the problem. Creative thoughts can thus be seen as acts of free will, facilitated by chance but filtered by choice.[21]

Just as new biological variations appear randomly in nature but are then selected or eliminated through natural selection, humans rely on inherent randomness for creative inspiration, while implementing the constraints and systems of meaning that determine how we persist and why.

This model thus powerfully breaks the bonds of determinism, incorporating true randomness into our cognitive processes while protecting the causal role of the agent itself in deciding what to do.[22]
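As a rough algorithmic caricature of that two-stage picture—chance proposes candidates, judgment selects among them—here is a hypothetical sketch; the options, scoring function, and noise level are invented for illustration and are not taken from James or Mitchell:

```python
import random

random.seed(42)

def two_stage_choice(options, evaluate, noise=0.5, n_candidates=5):
    """Two-stage model: stage 1 samples candidates with random perturbation
    ('free whim'); stage 2 deliberately selects the best-scoring one ('choice')."""
    # Stage 1: chance proposes -- a random, noisy subset of what could be done.
    candidates = random.sample(options, k=min(n_candidates, len(options)))
    scored = [(evaluate(c) + random.gauss(0, noise), c) for c in candidates]
    # Stage 2: the agent disposes -- judgment filters what chance produced.
    return max(scored)[1]

# Hypothetical example: routes scored by how well they serve the agent's goals.
routes = ["river path", "main road", "forest trail", "shortcut", "detour via market"]
preference = {"river path": 3, "main road": 2, "forest trail": 4,
              "shortcut": 1, "detour via market": 2}.get
print(two_stage_choice(routes, preference))
```

The random perturbation in stage one means the agent will occasionally pick an improbable option; the selection in stage two keeps the choice tied to the agent’s own reasons.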

Quantum evolution and natural selection have given us the ability to resolve the indeterminacy at the heart of the universe by confronting uncertainty and harnessing it to the service of creativity, decision-making, and meaning. That is our superpower.[23]

We choose like electrons

So if Mitchell is correct that quantum indeterminacy permeates the universe and enables the evolution of choice and free agency, are Conway and Kochen also correct? Are we like electrons in a truly fundamental way?

Electrons make something like free choices through the process of quantum reduction. In that process the universe around the electron undergoes a deep transformation. Before the process, the electron exists in an unrecognizable quantum world of infinite superposed possibilities; after the process, the electron becomes part of a recognizable reality of finite events and things. The process transforms possibilities into mathematical probabilities which resolve into one unique occurrence in spacetime. The electron therefore has a superpower, too—it can resolve probabilities into unique outcomes.

Our superpower is very much like that. We are made of fundamental particles like electrons and we are creatures like electrons. The universe we inhabit is constructed through the process of quantum reduction. Second by second, the quantum world of possibilities transforms itself into the concrete world of spacetime. Our world is fundamentally about uncertain possibilities and probabilities resolving into the certainty of actual events.

That ubiquitous uncertainty is reflected in the structure and operation of our brains. By making decisions amidst uncertainty, we participate in the universal process of transforming possibilities into unique, concrete events. Natural selection has taught us to use the randomness that is foundational to that process; we use it for creative inspiration and to generate options for decision-making. We sometimes make random choices—intentionally.

The ability to make random choices—just as an electron does—may be crucial to the ability to make non-random, reasoned choices. John Conway perhaps had this in mind when he said that the free will theorem also could be called the “free whim theorem”.[24] Without the freedom to make random choices, making reasoned choices through judgment and logic may amount to nothing but determinism. True free will necessitates freedom to choose, and the “free whim” of the electron may be exactly what gives us that freedom.

Electrons R us.


[1] Conway and Kochen (2006), p. 27.

[2] Democritus argued that all action in the universe is determined by the movements of atoms. Epicurus, one of his followers, theorized that atoms swerve periodically in a way that breaks the chain of deterministic causation and preserves a conceptual basis for human freedom of action.

[3] In a follow-up article Kochen broadened the proof to demonstrate that the free behavior of particles is not dependent on the free behavior of humans. Kochen (2022).

[4] Conway and Kochen (2009), p. 230.

[5] This unexplained behavior is called the “collapse of the wave function”, also quantum state vector reduction, quantum state reduction, or simply quantum reduction.

[6] “[O]ur assertion that ‘the particles make a free decision’ is merely a shorthand form of the more precise statement that ‘the Universe makes this free decision in the neighborhood of the particles’.” Conway and Kochen (2006), p. 15.

[7] Conway and Kochen did not give credence to the proposition that experimenters are not free to choose their own experiments. “It is hard to take science seriously in a universe that in fact controls all the choices experimenters think they make. Nature could be in an insidious conspiracy to ‘confirm’ laws by denying us the freedom to make the tests that would refute them. Physical induction, the primary tool of science, disappears if we are denied access to random samples. It is also hard to take seriously the arguments of those who according to their own beliefs are deterministic automata!” Conway and Kochen (2006), p. 24.

[8] See e.g., Hossenfelder (2022).

[9] “Indeed, it is natural to suppose that this latter freedom [of particles] is the ultimate explanation of our own.” Conway and Kochen (2009), p. 230.

[10] “Although we find ourselves unable to give an operational definition of either ‘free’ or ‘random,’ we have managed to distinguish between them in our context, because free behavior can be twinned, while random behavior cannot (a remark that might also interest some philosophers of free will).” Conway and Kochen (2006), p. 25.

[11] Conway and Kochen (2009), p. 230.

[12] “In the present state of knowledge, it is certainly beyond our capabilities to understand the connection between the free decisions of particles and humans, but the free will of neither of these is accounted for by mere randomness.” Conway and Kochen (2009), p. 230.

[13] “The world [the free will theorem] presents us with is a fascinating one, in which fundamental particles are continually making their own decisions. No theory can predict exactly what these particles will do in the future for the very good reason that they may not yet have decided what this will be! Most of their decisions, of course, will not greatly affect things — we can describe them as mere ineffectual flutterings, which on a large scale almost cancel each other out, and so can be ignored. The authors strongly believe, however, that there is a way our brains prevent some of this cancellation, so allowing us to integrate what remains and producing our own free will.” Conway and Kochen (2006), pp. 26-27.

[14] Mitchell (2023), p. 280.

[15] Mitchell (2023), pp. 280-281.

[16] Mitchell (2023), p. 159.

[17] “[T]he higher-order features that guide behavior revolve around purpose, function, and meaning. The patterns of neural activity in the brain have meaning that derives from past experience, is grounded by the interactions of the organism with its environment, and reflects the past causal influences of learning and natural selection. The physical structure of the nervous system captures those causal influences and embodies them as criteria to inform future action. What emerges is a structure that actively filters and selects patterns of neural activity based on higher-order functionalities and constraints. The conclusion—the correct way to think of the brain (or, perhaps better, the whole organism) is as a cognitive system, with an architecture that functionally operates on representations of things like beliefs, desires, goals, and intentions.” Mitchell (2023), pp. 194-195.

[18] Mitchell (2023), p. 175 (emphasis in original).

[19] Mitchell (2023), p. 175 (emphasis in original).

[20] Mitchell (2023), pp. 187-192, citing Doyle (2010).

[21] Mitchell (2023), p. 191 (emphasis in original).

[22] Mitchell (2023), p. 188.

[23] “This capacity to generate and then select among truly novel actions is clearly highly adaptive in a world that refuses to remain 100 percent predictable.” Mitchell (2023), p. 191.

[24] As reported by Jasvir Nagra in notes on a talk given by Conway in 2004. “He said he did not really care what people chose to call it. Some people choose to call it ‘free will’ only when there is some judgment involved. He said he felt that ‘free will’ was freer if it was unhampered by judgment—that it was almost a whim. ‘If you don’t like the term Free Will, call it Free Whim—this is the Free Whim Theorem.’” Nagra (2020).

There is a record kept

Physicists talk about conservation of information. It is a fundamental law of classical physics—information cannot be lost or destroyed. Stanford physicist Leonard Susskind calls it the minus first law because it comes before all other laws—before the first laws and even before the zeroth law.[1]

It means that each moment in time includes information about the state of the universe in that moment and every moment leading up to that moment. The location and momentum of every microscopic particle in a system, together with the forces and fields interacting with those particles, constitute the complete specification of the system in that moment. From that complete information, it is possible to determine exactly the state of the system in the immediately prior moment. And with that information comes the information about the state of the system prior to that. The entire prior history of a system—including the universe itself—can be recovered, by running time in reverse, from the information contained in any one moment.

The result is that information about every prior moment is never lost. It cannot be lost. It exists in the full specification of every subsequent moment and the operation of the laws of physics on the particles, forces, and fields interacting in the system.
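A minimal sketch of what time reversibility means in practice, using a toy system of my own choosing (a frictionless harmonic oscillator stepped with the reversible leapfrog scheme) rather than an example from Susskind or Carroll: given the complete present state—position and momentum—running the dynamics in reverse recovers the past.

```python
def leapfrog(x, v, dt, steps, k=1.0, m=1.0):
    """Frictionless harmonic oscillator (F = -k*x), integrated with the
    time-reversible leapfrog (kick-drift-kick) scheme."""
    for _ in range(steps):
        v += 0.5 * dt * (-k * x) / m   # half kick
        x += dt * v                    # drift
        v += 0.5 * dt * (-k * x) / m   # half kick
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = leapfrog(x0, v0, dt=0.01, steps=10_000)    # evolve the "present" forward
xb, vb = leapfrog(x1, -v1, dt=0.01, steps=10_000)   # reverse momentum and evolve again
print(x0, v0)        # the original "past" state...
print(xb, -vb)       # ...recovered (to floating-point precision) from the present
```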

Not just the “important” information, but all information

The information in that moment includes everything about the system that could possibly be known. It is not limited to information that we have the practical means of discovering or knowing, but includes all the information, whether we know it or not. Theoretically, the complete specification of the system includes information about every element of physical existence in the universe at that moment.[2] That means the state of every planet, star, and galaxy, every molecule, atom and subatomic particle, and every entity of any kind. That includes information about all of biological existence, every cell and neuron in the brain of every entity. Even our thoughts and desires, which at some level arise from our physical existence, are included in the record of that moment.[3]

Are the past and the future as real as the present?

Einstein believed in what is called a “block universe”. He believed that conservation of information and the principle of relativity demonstrate that the flow of time is an illusion created by our perceptions. In the reality beneath our perceptions, time is not absolute, and the past and the future are as real as the present. If that view is correct, then the record kept by the universe may reflect more than a trail of time-reversible moments; it may reveal a universe in which every moment lives forever, in which moments actually do not die. We may exist even after we seem to die, as do those who came before us and those who come after. We all exist because all moments exist at once in the block universe.

Is the record kept forever?

Physicists debate what forever really means. Black holes exist throughout the universe, and nothing, not even light, escapes a black hole. Stephen Hawking posited the possibility of radiation escaping from the event horizon of black holes as they evaporate over time. But we do not know if the physical information in so-called “Hawking radiation” is time-reversible in any meaningful way. If not, then the information about any particle that falls into a black hole is not conserved, but lost forever.

There is also the possibility that the universe will end its existence in a state of maximum entropy or “heat death”, with all information having seeped away into a great expanse of dissipated nothingness. If that is the future universe, then all memory of our existence may be lost in that final state of maximum entropy, without any possibility of time-reversible recreation of the moments leading up to that state. But physicists have also theorized that our universe is one in a cycle of universes, that our universe will not die in a state of information-free nothingness, but rather will evolve to an end-state which could serve as the foundation of a new universe. Information about our universe could influence the wave function of the next universe, which then could influence another, on and on.[4]

Is conservation of information only a hopeful dream?

It is a comforting thought to imagine that we and all our loved ones exist forever in a physically possible block universe. But is it wishful thinking? Do physicists theorize about information recovery simply as a form of consolation?[5] Do we imagine that the universe will remember us to feel better about the inevitable loss of all that we and other humans are? Will Shakespeare and all his creations—and everything ever thought or created by any human—cease to exist without any record whatsoever? We want to believe that the universe keeps a record of our existence that cannot be erased, that exists for all time.

But time may not be what Einstein believed it to be. Time may pass. And not come back.

The block universe requires one arrow in and one arrow out

Conservation of information is based on the premise that both the past and the future can be calculated from the present. There must be one arrow in from the past and one arrow out to the future.[6] But quantum mechanics tells us that the arrow in may not tell the full story of the past and the arrow out may be only one of many possible futures. Conservation of information may not be absolute.

The future is probabilistic, but each outcome is random

Evolution of particles and waves in the subatomic quantum world is governed by the quantum wave function described in the Schrödinger equation. Continuous evolution under the Schrödinger equation is time symmetric, even time-reversible, meaning the equations can be solved backward or forward, predicting the future or describing the past. The wave function produces weighted amplitudes that predict with great accuracy the evolving probabilities of a range of outcomes in the future. But the Schrödinger equation predicts only probabilities; it cannot predict the specific outcome of any one event. Specific outcomes are governed by a second phase of quantum evolution, called quantum state reduction, in which the continuous evolution of the wave function devolves or reduces into discontinuous evolution and the probabilities resolve themselves into specific unique occurrences in the macroscopic world. Effectively, the dice are thrown, and the range of probabilities described by the equation is replaced by a single outcome—a unique event in time. There is no way to know in advance what that unique event will be. The equations predict the likelihoods of different events, but the actual unique outcome in each instance is a random result that occurs somewhere within the range of probabilities.

That means there is more than one possible arrow out to the future. The block universe may be less settled (or blockish) than we once thought.

The unrealized possibilities of the past are not recoverable

Perhaps even more significantly, the arrow in from the past cannot be reconstructed in its complete form based on information about the present. After the second phase of quantum evolution results in a specific random outcome, it is not possible to determine the shape of the wave function that preceded it. The weighted amplitudes of the Schrödinger equation, as well as the probabilities predicted by those amplitudes, cannot be recalculated from the outcome of the quantum reduction process. We can observe the result of the process, but we can no longer calculate the range of probabilities that produced that result. One possibility occurs, and all others are forgotten.
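A small numerical illustration of why this step is one-way (again my own example, not drawn from the sources): two quite different pre-measurement states can leave the identical recorded outcome, so the record alone cannot tell us which wave function, and which set of probabilities, preceded it.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Two different pre-measurement states...
psi_definite = up                            # certainly "up"
psi_super = (up + down) / np.sqrt(2)         # equal superposition of "up" and "down"

# ...and the Born-rule probability that each one records the outcome "up".
for name, psi in [("definite", psi_definite), ("superposition", psi_super)]:
    p_up = abs(np.dot(up, psi)) ** 2
    print(f"{name:14s} P(up) = {p_up:.2f}")

# Both states can leave the identical record "up". From that record alone, the
# prior wave function (and the probabilities it assigned) cannot be recalculated:
# the reduction step is many-to-one, hence not reversible.
```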

An imperfect record

We are left with a situation in which the future is probabilistic in general, but unpredictable in a specific instance; the future always has an element of randomness. The past also cannot be recreated fully from the present. We can find the specific event that preceded the present moment and track the string of present moments that resulted from the evolution of the wave function, but we cannot recreate the range of possibilities and probabilities that generated that string of moments. The logical conclusion is that the future is never completely known, and the possibilities of the past are lost forever.

So yes, there is a record kept. But the record is incomplete and likely impermanent. Moments are created in time, and time may not be eternal. Even if it were, time records only moments that actually occur in the macrocosmic world. Time is not a record of the manifold possibilities inherent in the microcosmic quantum world. In that world, there may be no record at all. Moments as we know them may not exist in that world. Moments come into being when the dice are thrown, when a unique outcome results from the second phase in the evolution of the wave function. It is that moment that is recorded in the temporal history of the universe. All other possible moments are lost to the macrocosmic world. They continue to exist, if at all, only in the great lake of quantum interaction from which all possibilities spring.


[1] “We could call it the first law, but unfortunately there are already two first laws—Newton’s and the first law of thermodynamics. There is even a zeroth law of thermodynamics. So we have to go back to a minus first law to gain priority for what is undoubtedly the most fundamental of all physical laws—the conservation of information.” Susskind (2013), p. 9 (emphasis in original).

[2] “[C]onservation of information implies that each moment contains precisely the right amount of information to determine every other moment.” Carroll (2016), p. 34. Information is here defined as “the ‘microscopic’ information: the complete specification of the state of the system, everything you could possibly know about it. When speaking of information being conserved, we mean literally all of it.” P. 34.

[3] “[T]he universe keeps a faithful record of the information about all you have ever said, thought, and done.” Hossenfelder (2022), p. 14.

[4] Penrose (2010).

[5] Horgan (2020).

[6] “The conservation of information is simply the rule that every state has one arrow in and one arrow out.” Susskind (2013), pp. 9-10.