In the spirit of crank philosophy, I’m going to talk about something I use as an “intuition primer” in thinking about causality and reductionism. If you’ve ever felt confused when reading a philosophical discussion of reductionism, this may be of some small use, though it will require thinking through a few simple “computer science” concepts.
First, there are a bunch of different ways of defining reductionism. The Wikipedia article on reductionism lists three types: theoretical, methodological and ontological. An article at the Interdisciplinary Encyclopedia of Religion and Science lists a different three:
- Constituent reductionism – the idea that when you break a system down into its smallest parts, there’s nothing “lost”. It doesn’t say much more than something like, “we are all just made of atoms”.
- Epistemological reductionism – the idea that when you describe a system at a “lower level”, nothing is lost. A good example of this is a description of temperature (a high-level description) in terms of the kinetic energy of the molecules in the system.
- Causal reductionism – the idea that all “higher-level” causal features of the system are the sum, without remainder, of the causal interactions at the lowest level. This says that the formation of a whirlpool, for example, doesn’t involve any causes beyond the atomic forces and fields that we recognize as fundamental.
As you can see, there’s some overlap between these types, which is one source of confusion. Another source is the theoretical nature of some descriptions. Imagine, for example, a description of two billiard balls colliding that described the forces acting upon every atom in both balls. We might say that a description of this kind “theoretically” accounted for everything that was going on, while acknowledging that: A) our feeble human brains, buried in all of this data, would quickly lose sight of the fact that a collision between two medium-sized objects was even occurring; and B) a human lifetime might not be enough to produce or consume such a description.
And what does a “description” or “explanation” amount to anyway? What relationship does it have to the “real” world? If we say that things are made of particles, do there need to be real particles in the world for our description to be correct, or is it enough that our description of the world captures something that “acts exactly as if” it were a particle? Are all descriptions in some sense figurative?
Another, more basic point of confusion is the exact meaning of causality and “summing” causes — especially when those causes and their effects happen simultaneously or pertain to very high level processes. It feels almost absurd to say that the causality involved in a divorce or an election result has anything to do with the weak nuclear force. And many arguments can be had about whether there’s such a thing as “emergent” causality, what such a thing might require, and what it would say about the structure of reality. What is causality anyway? Has anyone ever really defined it in such a way that we can understand what it would mean to start performing pseudo-arithmetic operations like “summing” with it?
To sidestep some of this mess, it helps to imagine something called an “agent-based simulation”. An agent-based simulation is a computer model (a program, really) that defines a bunch of things and their individual behaviors, allows those things to interact according to their defined, individual rules of behavior, and then lets you watch to see what happens when it is run. Often, whatever happens is compared to what happens in the real world to see if the simulation is “accurate” or can make meaningful predictions.
A startlingly simple example of an agent-based simulation is Craig Reynolds’ Boids, which simulates the flocking behavior of animals like fish and birds.
Each individual “boid” in the program adjusts its velocity vector (its speed and direction) based entirely on three simple pieces of information:
- Separation: steer to avoid crowding local flockmates.
- Alignment: steer towards the average heading of local flockmates.
- Cohesion: steer to move toward the average position of local flockmates.
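To make the three rules concrete, here is a minimal Python sketch of how they can be computed for a single boid. Everything in it — the dict-based boid representation, the neighborhood radius, the rule weights — is an illustrative assumption of mine, not Reynolds’ original code.

```python
import math

NEIGHBOR_RADIUS = 50.0  # size of a boid's "neighborhood" (arbitrary choice)

def neighbors(boid, flock):
    """All other boids within this boid's neighborhood."""
    return [b for b in flock
            if b is not boid
            and math.dist((boid["x"], boid["y"]),
                          (b["x"], b["y"])) < NEIGHBOR_RADIUS]

def steer(boid, flock):
    """Compute the (dvx, dvy) vector adjustment from the three rules."""
    near = neighbors(boid, flock)
    if not near:
        return (0.0, 0.0)
    n = len(near)
    # Separation: steer away from nearby flockmates.
    sep_x = sum(boid["x"] - b["x"] for b in near) / n
    sep_y = sum(boid["y"] - b["y"] for b in near) / n
    # Alignment: steer toward the average heading of flockmates.
    ali_x = sum(b["vx"] for b in near) / n - boid["vx"]
    ali_y = sum(b["vy"] for b in near) / n - boid["vy"]
    # Cohesion: steer toward the average position of flockmates.
    coh_x = sum(b["x"] for b in near) / n - boid["x"]
    coh_y = sum(b["y"] for b in near) / n - boid["y"]
    # The weights are tuning parameters; different values give
    # different-looking flocks.
    return (0.05 * sep_x + 0.10 * ali_x + 0.01 * coh_x,
            0.05 * sep_y + 0.10 * ali_y + 0.01 * coh_y)
```

Note that each boid’s adjustment depends only on its local neighborhood — there is no “flock” object anywhere in the calculation.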
The behavior that results when many boids are placed in an open space is uncanny. Even though each boid acts individually according to just these three pieces of information, what emerges on the screen is a coordinated group that very closely mimics what you see when you watch a flock of birds, a school of fish, or even a herd of running horses, depending on the specific parameters of the model.
The three pieces of information above are arrived at through calculations made within a circle of influence around a particular boid (its “neighborhood”). You could say that an individual boid “knows” the average separation, heading and position of the other boids in its neighborhood. But there is no need to get epistemological about it. For our purposes, what we want to say is that the causality at work in the flock is defined by the three pieces of information each boid has, and the procedure each boid uses to adjust its vector based on these three numbers.
We want to be really precise here about how we’re defining causality. So it’s important to recognize that there are two parts to the calculation taking place: 1) the part that looks at the other boids in a boid’s neighborhood and comes up with its three numbers; and 2) the part that determines how the boid will change its vector based on those three numbers. Both of these parts are included in the causality of the system, because they both “determine” something. If you changed the details of either part of the calculation, the behavior of the “world” would change. And the two parts directly affect one another in a sort of causal “loop” (a boid moves closer to another one, which changes the average separation number, which causes it to change heading to move further away, and so on).
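The two-part structure can be made explicit in code. Here is a deliberately toy one-dimensional version; the particular update rule (each boid drifts toward the average position of the others) and the 0.5 weight are stand-ins of my own, chosen only to make the perceive/act split and the resulting loop visible.

```python
def perceive(i, positions):
    """Part 1: what boid i extracts from the world state -- here,
    just its offset from the average position of the others."""
    others = [p for j, p in enumerate(positions) if j != i]
    return sum(others) / len(others) - positions[i]

def act(position, perception):
    """Part 2: how the boid changes, given only what it perceived."""
    return position + 0.5 * perception

def step(positions):
    """One tick of the world: every boid perceives the *old* state,
    then every boid acts. Acting changes positions, which changes
    the next tick's perceptions -- the causal loop described above."""
    perceptions = [perceive(i, positions) for i in range(len(positions))]
    return [act(p, n) for p, n in zip(positions, perceptions)]
```

Changing either function — what is perceived, or how the boid responds — changes the behavior of the whole “world”, which is exactly why both parts belong to the causality of the system.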
What we don’t include in the causality of the system is the code that draws the boids on the computer screen for you to see. Why? Because this code, though it involves calculations of a sort, does not determine any behavior in the system — it simply represents what has already been determined by each boid’s individual calculation. You could draw each boid as a fish or a bird; you could have a blue or green background; and you could show the flock close up or from a distance. No matter how you display it, the system behaves in exactly the same way.
The beauty of the agent-based simulation is twofold. First, there’s no confusion about the relationship between our “description” or “explanation” and the “real” world. In the case of a simulation, the description is the world — the code that is executed both describes the process and enacts the process.
Second, in a simulation, causality becomes something very simple and very definite. We don’t have to think about the specifics of the computer that the simulation runs on, because we assume it will run the same way everywhere (this is actually mathematically provable in many cases). We don’t have to think about what “matter” is, what a “force” is or what a “field” is, because there are only boids, vectors, calculations, and a Cartesian space in which they interact. And, most importantly, we don’t have to worry about whether something “extra” creeps in causally, because we know about everything in the program that does any determining. The system is “causally closed”. If we start out with the same initial conditions (the number of boids and their exact starting positions, velocities and directions), the simulation will do exactly the same thing every time we run it.
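This determinism is easy to demonstrate in miniature. The toy rule below (one-dimensional boids drifting a quarter of the way toward their center of mass each tick) is again a stand-in of my own, but the point holds for any fixed rule: the same initial conditions produce a bit-for-bit identical history every time, at least when run in the same environment.

```python
def run(initial_positions, steps):
    """Run a toy one-dimensional flock for a fixed number of ticks,
    recording the full history of positions."""
    positions = list(initial_positions)
    history = [list(positions)]
    for _ in range(steps):
        center = sum(positions) / len(positions)
        positions = [p + 0.25 * (center - p) for p in positions]
        history.append(list(positions))
    return history

# Same initial conditions, same rules: the histories match exactly.
run_a = run([0.0, 4.0, 10.0], steps=20)
run_b = run([0.0, 4.0, 10.0], steps=20)
assert run_a == run_b
```

Nothing outside `run` does any determining, so there is nowhere for an “extra” cause to hide.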
Now let’s relate this idea of agent-based simulations to the real-world question of reductionism, keeping in mind that we’re not trying to answer the question — we’re just trying to ask it in a way that will help us understand what a real answer might mean.
First, imagine that we have infinite computing power and memory. We can create a simulation as large as we want; larger than the entire universe if need be.
Now imagine that the objects (the “boids”) in our simulation are the particles in your favorite fundamental physical theory, and that our calculations are the equations that determine the behavior of the particles in this same theory. If you “run” this simulation, will it act the same way our universe acts? Will it be possible for phenomena to arise in the simulation that we would describe as “planetary motion”? What about “whirlpools”? “Organisms”? “Conscious beings with internal experiences”? “Divorces”?
If your answer is “Yes”, then you might be a reductionist.