When physicists and philosophers talk about the universe, which they do a lot, they often talk about what is fundamental and what is not. What is not fundamental is described as emergent, meaning that it emerges from what is fundamental. In the world of physics, what is fundamental are the elementary particles and forces of which all other things are composed. Everything else is emergent. That includes all combinations of fundamental things, starting with the atomic elements, the molecules, materials and substances, objects in space—planets, stars, and galaxies—and of course, all the organisms and entities that exist on objects in space, such as bacteria, plants, animals, and humans. All these things are described as systems built from components of fundamental particles and forces.
So far, so good.
The story gets more complicated when physicists and philosophers talk about causation and agency. There is a view among many that what is fundamental is more real than what is not. Emergent things are either not real or at least somewhat less real than what is fundamental. And even if admitted to be real, emergent things such as systems have less power—less causative power—than what is fundamental. Under this common view, all things and events in the universe result from the movements and interactions of fundamental particles and forces. The actions and interactions of emergent things and systems result from and are caused by fundamental particles and forces. Exclusively. Causation moves in only one direction, from what is fundamental to what is not. There is no reverse causation or feedback loop from emergent things to fundamental things.
Does downward causation break the laws of physics?
Downward causation refers to the power of things that are not fundamental, i.e., all emergent things and systems, to exercise causation or agency. Such top-down causation is often described as supernatural and a violation of physical laws. Physicist Sean Carroll talks about focusing on one atom in a finger of his hand and predicting its behavior based on “the laws of nature and some specification of the conditions in its surroundings—the other atoms, the electric and magnetic fields, the force due to gravity, and so on.” Such a prediction does not require “understanding something about the bigger person-system.”[1] On this view, even Carroll’s decision to move his hand is irrelevant to predicting the motion of the atom.
Physicist Sabine Hossenfelder calls it a “common misunderstanding” that a computer algorithm written by a programmer controls electrons by switching transistors on and off or that a particle accelerator operated by a scientist causes the collision of two protons to produce a Higgs boson. In both cases it is the deeper fundamental physical composition, i.e., the neutrons, protons, and electrons, that explains the events; it is simply useful to describe the behaviors of the systems (the computer, the accelerator, the programmer, the scientist) in practical system-level terms.
[W]e find explanations for one level’s functions by going to a deeper level, not the other way around…. [A]ccording to the best current evidence, the world is reductionist: the behavior of large composite objects derives from the behavior of their constituents….[2]
The assumption of determinism
These assertions are not without controversy. First, there is no universal agreement that the behavior of higher-level things can always be explained by looking at lower-level things and the behavior of constituents.[3] Systems are admittedly combinations of fundamental things, but those combinations result in properties and behaviors that don’t occur at lower levels. Many of the properties relevant to the behavior of emergent systems don’t even exist at the level of fundamental particles and forces. Trying to explain all emergent system behavior by describing the behavior of fundamental particles is somewhat like trying to explain a computer game by describing the opening and closing of logic gates on integrated circuits.[4] You might learn what’s occurring in the computer hardware, but you wouldn’t be able to play the game.
There also seems to be an assumption that “explained by” is equivalent to “caused by”: if you can describe the properties and behavior of a system in terms of particles and forces, then the behavior of the system is caused by those particles and forces. The ability to describe a system in terms of fundamental particles and forces seems relatively well established; when an arm moves, that movement also constitutes the movement of many billions of tiny particles under the influence of fundamental forces. That much is uncontroversial. But it does not follow quite so logically or incontrovertibly that those particles and forces also decided to move the arm.
That last step requires another key assumption—that the behavior of systems is completely determined by the behavior of fundamental particles and forces. It requires a conclusion that “using the laws of physics to move my arm” is equivalent to “having my arm moved by the laws of physics.” In other words, it assumes complete determinism, which means the behavior of the universe can be analogized to a long chain of dominoes stretching back to the Big Bang 13.8 billion years ago, all falling in a deterministic pattern. Your arm, my arm, and any decision to raise any arm are all dominoes in that chain.
The problem with dominoes
On the face of it, a long chain of dominoes seems a simplistic and brittle design architecture for 13.8 billion years of history. But putting aside the fragility of the design, there is a more fundamental problem with a picture of the universe based on a chain of dominoes—our deepest theory of physical reality says that what is fundamental is not wholly deterministic. Quantum evolution is not deterministic but probabilistic. It integrates uncertainty, probability, and indeterminacy into what is fundamental. Determinism relies on an unbroken chain of events and causes. Quantum mechanics breaks the causative chain at a very deep level—the level of fundamental particles and forces.
The problem with indeterminacy
The story does not end there, however, because quantum indeterminacy does not run rampant through the macroscopic world. Nor does it cause quantum mechanics to produce nonsensical, random, or chaotic results. In fact, despite breaking the causative chain of determinism, quantum mechanics produces extremely accurate predictions and is one of the most successful tools ever created by physics; it is the foundation of much of our advanced technology. Microscopic quantum indeterminacy simply does not result in ubiquitous macroscopic indeterminacy.
The reason is that the seemingly random indeterminacy of quantum state reduction, i.e., what we might call quantum jumps, occurs within the probability distribution of the quantum wave function. As a result, many, many microscopic quantum jumps average out to produce aggregate results predicted by the wave function. The laws of probability cause those many trillions of tiny quantum interactions to produce a macroscopic world that looks like the world predicted by the wave function and by classical physics. The macrocosm does not look like the quantum world; it looks like Newton’s classical world.
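To see how that averaging works, here is a minimal Monte Carlo sketch—an illustration of the law of large numbers, not anything drawn from the sources cited here. The qubit amplitudes, sample sizes, and random seed are assumptions chosen for the example: each individual Born-rule outcome is unpredictable, but the running average of many outcomes converges on the expectation value the wave function predicts.

```python
import numpy as np

# Hypothetical qubit state |psi> = a|0> + b|1>; amplitudes chosen for illustration.
a, b = np.sqrt(0.7), np.sqrt(0.3)           # Born probabilities: |a|^2 = 0.7, |b|^2 = 0.3
p_one = abs(b) ** 2                         # probability a single "jump" yields outcome 1
expectation = 0 * abs(a) ** 2 + 1 * p_one   # average value the wave function predicts

rng = np.random.default_rng(seed=42)

# Each measurement is individually unpredictable; the running mean is not.
for n in (10, 1_000, 100_000):
    outcomes = rng.random(n) < p_one        # Born-rule sampling of n quantum jumps
    print(f"n = {n:>7,}: sample mean = {outcomes.mean():.4f} "
          f"(wave function predicts {expectation:.4f})")
```

In a typical run, the sample mean can stray noticeably from the predicted value at n = 10 but sits within a fraction of a percent of it at n = 100,000. The point is the scale effect: the larger the aggregate, the more classical it looks.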
So have we come full circle? Does quantum indeterminacy break the causative chain of determinism and then fail to affect the macroscopic world at all? Does it average out so completely that it becomes irrelevant to emergent systems?
Probabilities are not dominoes
We don’t know the full answer—yet. But it seems vanishingly unlikely that something as fundamental as quantum indeterminacy plays no role in the macroscopic world.
It is true that portions of the macroscopic world seem to act in a largely understandable way consistent with a more deterministic view of physical behavior. And yet we know that if we drill down deep enough into the behavior of macroscopic systems, we will find beneath the surface both practical and theoretical uncertainty limiting what we can measure and know about quantum behavior.
We also know that there is a difference between predicting the probability of something happening and predicting what actually happens. There is a tension between those things, a dynamic that makes a difference, even in the emergent world. Probabilities are predicted distributions over many occurrences. In any one occurrence, the particular result is not predictable. So even if the broad-scale average behavior of emergent systems were predictable, the behavior of each system in each event is not. Nature presents us with an average, not an absolute, picture of the macroscopic world; classical physics works as an approximation of quantum physics only because of averages and scale.
Unpredictable variation, in fact, is a requirement for application of the laws of probability. Probability results in a meaningful representation of behavior only if there exists a large number of different events whose outcomes average into a distribution. That requires the occurrence of events which are not individually predictable. In other words, for the aggregate behavior of systems to converge on a meaningful probability, individual systems must have the ability to do something improbable. That must be true of any system whose actions are not predictable with 100% certainty: anything short of 100% requires that the system occasionally do something less than fully probable—something improbable, unlikely, or even random.
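To put rough numbers on that requirement, consider a hypothetical calculation, not one drawn from the cited sources: if a system does the "expected" thing with probability 0.999 per event, then 100,000 independent events should still include roughly 100 improbable outcomes. The sketch below, with assumed values for `p_typical` and `n_events`, makes the arithmetic concrete.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

p_typical = 0.999      # assumed probability of the "expected" behavior per event
n_events = 100_000     # assumed number of independent events

# True marks an event where the system did the improbable thing.
improbable = rng.random(n_events) >= p_typical

print(f"expected improbable outcomes: {n_events * (1 - p_typical):.0f}")
print(f"observed improbable outcomes: {improbable.sum()}")
# If p_typical were exactly 1.0 there would be no variation at all --
# and no distribution for the laws of probability to operate on.
```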
That, of course, is exactly what many emergent systems do. From tumbling bacteria[5] to complex weather patterns to human beings, complex emergent systems on any given day do not conform to the average. Instead, they engage in deeply unpredictable behavior which fits a model of the universe based on probabilistic evolution, at both the microscopic and macroscopic levels.
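The bacterial case in note [5] can be sketched as a run-and-tumble random walk. Everything below is an illustrative assumption rather than a model from Heisenberg's paper: a toy nutrient field, arbitrary tumble rates, and a fixed step length. The cell swims straight, occasionally reorients at random, and tumbles less often while conditions are improving; the randomness itself is what gets it to the food.

```python
import math
import random

random.seed(1)

def concentration(x: float, y: float) -> float:
    """Toy nutrient field, highest at the origin (an assumed landscape)."""
    return -math.hypot(x, y)

# Run-and-tumble: swim straight, then randomly reorient ("tumble").
# Tumbling less often while the signal improves biases the random walk
# toward food even though no individual step is predictable.
x, y = 20.0, 20.0
heading = random.uniform(0.0, 2.0 * math.pi)
last_reading = concentration(x, y)

for _ in range(2000):
    x += math.cos(heading)
    y += math.sin(heading)
    reading = concentration(x, y)
    tumble_prob = 0.05 if reading > last_reading else 0.5  # illustrative rates
    if random.random() < tumble_prob:
        heading = random.uniform(0.0, 2.0 * math.pi)       # the random "choice"
    last_reading = reading

print(f"distance from nutrient peak: started at {math.hypot(20.0, 20.0):.1f}, "
      f"ended at {math.hypot(x, y):.1f}")
```

No single tumble can be predicted, yet the aggregate trajectory reliably climbs the gradient—one concrete sense in which an emergent system puts randomness to work.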
Emergent systems learn to do random things
Natural selection may teach biological systems to do exactly that. Neuroscientist Kevin Mitchell theorizes that complex biological systems take advantage of the chance introduced by quantum indeterminacy to exert causal influence.
[T]he really crucial point is that the introduction of chance undercuts necessity’s monopoly on causation. The low-level physical details and forces ae not causally comprehensive; they are not sufficient to determine how a system will evolve from state to state. This opens the door for higher-level features to have some causal influence in determining which way the physical system will evolve. This influence is exerted by establishing contextual constraints: in other words, the way the system is organized can also do some causal work. In the brain, that organization embodies knowledge, beliefs, goals, and motivations—our reasons for doing things. This means some things are driven neither by necessity nor by chance; instead, they are up to us.[6]
Emergent systems evolve a design architecture that leverages indeterminacy without breaking the laws of physics.
The universe is not deterministic, and as a consequence, the low-level laws of physics do not exhaustively encompass all types of causation. The laws themselves are not violated, of course—there’s nothing in the way living systems work that contravenes them nor any reason to think they need to be modified when atoms or molecules find themselves in a living organism. It’s just that they are not sufficient either to determine or explain the behavior of the system.[7]
In particular, he describes how organisms use indeterminacy, embodied in “an inherent unreliability and randomness in neural activity,”[8] to exercise causative power in an extraordinary way: “[O]rganisms can sometimes choose to do something random.”[9]
Self-governing systems constrained by probability
Is it possible that the universe can construct autonomous, self-governing, decision-making systems? Can fundamental particles and forces create causation engines that are constrained by the laws of physics and probability but not fully determined by the particles and forces that build them?
Philosopher of physics Jenann Ismael argues that determinism does not rule out the existence of autonomous systems “with robust capabilities for self-governance.”[10] Self-governing systems can have the “felt ability to act spontaneously in the world, to do what [they] choose in the here and now, by whim or fancy, free of any felt constraints.”[11] These emergent systems cannot violate the laws of physics, but they can use them to their own advantage. They can choose without any other local force or subsystem compelling them to do so; they can even engage in capricious or random behavior in defiance of any attempt to predict their actions.
The catch is that this relatively unconstrained freedom exists only for subsystems of the universe where local laws and states are subject to exogenous interventions and no other subsystem can exercise complete control. The big picture is still governed by the global laws of the universe, where there can be no exogenous interventions (because the universe includes everything). Determinism still rules, operating with global laws at the global level. But at the local level, there is freedom for self-governing systems to influence each other and exercise autonomy.
Ismael rejects the notion that quantum indeterminacy changes this picture. And yet her compatibilist description of reality, with its distinction between local freedom and global determinism, looks and feels almost like the universe described by Mitchell—a universe in which the door is open for systems to evolve causative power. Ismael describes the development of a self with autonomous and self-governing capabilities in much the way Mitchell describes the evolution of free agency through natural selection.[12] In the universe described by both Ismael and Mitchell, fundamental particles and forces enable the existence of emergent systems that exercise agency even to the point of choosing random behavior.
What if the picture Ismael offers is almost entirely correct, except that quantum indeterminacy and probability govern at the global level? Such a world would look and feel like the world she describes, but it would not assume a global principle of absolute determinism. It would be governed by probability at both the microscopic and macroscopic levels. Instead of circumscribed local freedom, self-governing systems would have the relative free agency described by Mitchell, allowing and encouraging them to exercise causative power to do things for reasons and even to do unexpected things.
What if that is who we are?
It is a truism that ideas can be powerful. Yet it is difficult to describe an idea in the language of fundamental particles and forces. The Pythagorean Theorem has influenced the history of mathematics, but what would the theorem look like represented only by fundamental particles and forces? Perhaps the brain of Pythagoras could be represented as a system constructed from fundamental things, but how exactly would particles and forces represent the mathematical concepts employed by Pythagoras—concepts which undoubtedly have exercised causal influence on other mathematicians, engineers, and scientists? The same question can be asked about the concepts of quantum mechanics. Fermions and bosons may behave quantum mechanically, but could they conceptualize quantum mechanics?
Unless we conclude that concepts have no causative influence—even the concepts of quantum mechanics—emergent systems must be able to exercise some causal power, including through the creation of ideas and concepts.
The inference seems inescapable that the universe and the fundamental particles and forces that comprise it can construct emergent systems with causal power—systems that can’t move the atoms of a finger by breaking the laws of physics, but can choose to move a hand.
Emergent, not dead.
[1] Carroll (2016), p. 109.
[2] Hossenfelder (2022), pp. 88-89. She does acknowledge that there are unanswered questions about the connections between the layers. “Why is it that the details from short distances do not matter over long distances? Why doesn’t the behavior of protons and neutrons inside atoms matter for the orbits of planets? How come what quarks and gluons do inside protons doesn’t affect the efficiency of drugs? Physicists have a name for this disconnect—the decoupling of scales—but no explanation. Maybe there isn’t one. The world has to be some way and not another, and so we will always be left with unanswered why questions. Or maybe this particular why question tells us we’re missing an overarching principle that connects the different layers.” Ibid., p. 89 (emphasis in original).
[3] See, e.g., Anderson (1972); Ellis (2020).
[4] Analogy suggested by a passage in Ismael (2016), p. 217.
[5] Biologist Martin Heisenberg describes the ability of certain bacteria to initiate random tumbles in a search for food and a favorable environment. Heisenberg (2009).
[6] Mitchell (2023), pp. 163-164 (emphasis in original).
[7] Mitchell (2023), pp. 168-169.
[8] Mitchell (2023), p. 175 (emphasis in original).
[9] Mitchell (2023), p. 175.
[10] Ismael (2016), p. xi.
[11] Ismael (2016), p. 228.
[12] And also similar to the picture developed by Daniel Dennett. Mitchell (2023), p. 151. Dennett (2017).