Modeling Reductionism

In the spirit of crank philosophy, I’m going to talk about something I use as an “intuition primer” in thinking about causality and reductionism. If you’ve ever felt confused when reading a philosophical discussion of reductionism, this may be of some small use, though it will require thinking through a few simple “computer science” concepts.

First, there are a bunch of different ways of defining reductionism. The Wikipedia article on reductionism lists three types: theoretical, methodological and ontological. An article at the Interdisciplinary Encyclopedia of Religion and Science lists a different three:

  1. Constituent reductionism – the idea that when you break a system down into its smallest parts, there’s nothing “lost”. It doesn’t say much more than something like, “we are all just made of atoms”.
  2. Epistemological reductionism – the idea that when you describe a system at a “lower level”, nothing is lost. A good example of this is a description of temperature (a high-level description) in terms of the kinetic energy of the molecules in the system.
  3. Causal reductionism – the idea that all “higher-level” causal features of the system are the sum, without remainder, of the causal interactions at the lowest level. This says that the formation of a whirlpool, for example, doesn’t involve any causes beyond the atomic forces and fields that we recognize as fundamental.

As you can see, there’s some overlap between these types, which is one source of confusion. Another source is the theoretical nature of some descriptions. Imagine, for example, a description of two billiard balls colliding that described the forces acting upon every atom in both balls. We might say that a description of this kind “theoretically” accounted for everything that was going on, while acknowledging that: A) our feeble human brains would quickly lose sight, amid all this data, of the fact that a collision between two medium-sized objects was even occurring; and B) a human lifetime might not be enough to produce or consume such a description.

And what does a “description” or “explanation” amount to anyway? What relationship does it have to the “real” world? If we say that things are made of particles, do there need to be real particles in the world for our description to be correct, or is it enough that our description of the world captures something that “acts exactly as if” it were a particle? Are all descriptions in some sense figurative?

Another, more basic point of confusion is the exact meaning of causality and “summing” causes — especially when those causes and their effects happen simultaneously or pertain to very high-level processes. It feels almost absurd to say that the causality involved in a divorce or an election result has anything to do with the weak nuclear force. And many arguments can be had about whether there’s such a thing as “emergent” causality, what such a thing might require, and what it would say about the structure of reality. What is causality anyway? Has anyone ever really defined it in such a way that we can understand what it would mean to start performing pseudo-arithmetic operations like “summing” with it?

To sidestep some of this mess, it helps to imagine something called an “agent-based simulation”. An agent-based simulation is a computer model (a program, really) that defines a bunch of things and their individual rules of behavior, lets those things interact according to those rules, and then lets you watch what happens when it runs. Often, whatever happens is compared to what happens in the real world to see if the simulation is “accurate” or can make meaningful predictions.

A startlingly simple example of an agent-based simulation is Craig Reynolds’ Boids, which simulates the flocking behavior of animals like fish and birds.

Each individual “boid” in the program adjusts its vector (velocity and direction) based entirely on three simple pieces of information (sketched in code just after this list):

  1. Separation: steer to avoid crowding local flockmates.
  2. Alignment: steer toward the average heading of local flockmates.
  3. Cohesion: steer to move toward the average position of local flockmates.
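
To make the rules concrete, here is a minimal Python sketch of the three steering calculations. It assumes each boid is a plain dict with `pos` and `vel` lists as 2-D vectors; the function names, and the assumption that the neighbor list is non-empty, are my own illustrative choices, not Reynolds’ original code.

```python
def separation(boid, neighbors):
    """Steer away from nearby flockmates; closer ones push harder."""
    steer = [0.0, 0.0]
    for other in neighbors:
        dx = boid["pos"][0] - other["pos"][0]
        dy = boid["pos"][1] - other["pos"][1]
        dist_sq = dx * dx + dy * dy
        if dist_sq > 0:
            # dx/dist_sq is a unit direction scaled by 1/distance,
            # so crowding neighbors dominate the push.
            steer[0] += dx / dist_sq
            steer[1] += dy / dist_sq
    return steer

def alignment(boid, neighbors):
    """Steer toward the average heading (velocity) of the neighbors."""
    avg_vx = sum(n["vel"][0] for n in neighbors) / len(neighbors)
    avg_vy = sum(n["vel"][1] for n in neighbors) / len(neighbors)
    return [avg_vx - boid["vel"][0], avg_vy - boid["vel"][1]]

def cohesion(boid, neighbors):
    """Steer toward the average position of the neighbors."""
    cx = sum(n["pos"][0] for n in neighbors) / len(neighbors)
    cy = sum(n["pos"][1] for n in neighbors) / len(neighbors)
    return [cx - boid["pos"][0], cy - boid["pos"][1]]
```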

The behavior that results when many boids are placed in an open space is uncanny. Even though each boid acts individually according to just these three pieces of information, what emerges on the screen is a coordinated group that very closely mimics what you see when you watch a flock of birds, a school of fish, or even a herd of running horses, depending on the specific parameters of the model.

The three pieces of information above are arrived at through calculations made within a circle of influence around a particular boid (its “neighborhood”). You could say that an individual boid “knows” the average separation, heading and position of the other boids in its neighborhood. But there is no need to get epistemological about it. For our purposes, what we want to say is that the causality at work in the flock is defined by the three pieces of information each boid has, and the procedure each boid uses to adjust its vector based on these three numbers.

We want to be really precise here about how we’re defining causality. So it’s important to recognize that there are two parts to the calculation taking place: 1) the part that looks at the other boids in a boid’s neighborhood and comes up with its three numbers; and 2) the part that determines how the boid will change its vector based on those three numbers. Both of these parts are included in the causality of the system, because they both “determine” something. If you changed the details of either part of the calculation, the behavior of the “world” would change. And the two parts directly affect one another in a sort of causal “loop” (a boid moves closer to another one, which changes the average separation number, which causes it to change heading to move further away, and so on).
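
Continuing the sketch above, a single boid’s update might look like this, with part 1 (sensing the neighborhood and computing the three numbers) and part 2 (adjusting the vector) marked in comments. The neighborhood radius and the weights are arbitrary illustrative values, not parameters from any real Boids implementation.

```python
RADIUS = 50.0              # size of a boid's "neighborhood" (arbitrary)
WEIGHTS = (1.5, 1.0, 0.8)  # separation, alignment, cohesion (arbitrary)

def neighbors_of(boid, flock):
    """Find the other boids inside this boid's circle of influence."""
    near = []
    for other in flock:
        if other is boid:
            continue
        dx = other["pos"][0] - boid["pos"][0]
        dy = other["pos"][1] - boid["pos"][1]
        if (dx * dx + dy * dy) ** 0.5 < RADIUS:
            near.append(other)
    return near

def update(boid, flock, dt=1.0):
    """One tick for one boid: sense the neighborhood, steer, then move."""
    near = neighbors_of(boid, flock)
    if near:
        # Part 1: compute the three pieces of information.
        steers = [separation(boid, near), alignment(boid, near), cohesion(boid, near)]
        # Part 2: adjust the vector as a weighted sum of the three.
        for w, s in zip(WEIGHTS, steers):
            boid["vel"][0] += w * s[0] * dt
            boid["vel"][1] += w * s[1] * dt
    boid["pos"][0] += boid["vel"][0] * dt
    boid["pos"][1] += boid["vel"][1] * dt
```

Notice that the drawing code discussed next appears nowhere here: nothing in `update` depends on how, or whether, the flock is displayed.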

What we don’t include in the causality of the system is the code that draws the boids on the computer screen for you to see. Why? Because this code, though it involves calculations of a sort, does not determine any behavior in the system — it simply represents what has already been determined by each boid’s individual calculation. You could draw each boid as a fish or a bird; you could have a blue or green background; and you could show the flock close up or from a distance. No matter how you display it, the system behaves in exactly the same way.

The beauty of the agent-based simulation is twofold. First, there’s no confusion about the relationship between our “description” or “explanation” and the “real” world. In the case of a simulation, the description is the world — the code that is executed both describes the process and enacts the process.

Second, in a simulation, causality becomes something very simple and very definite. We don’t have to think about the specifics of the computer that the simulation runs on, because we assume it will run the same way everywhere (this is actually mathematically provable in many cases). We don’t have to think about what “matter” is, what a “force” is or what a “field” is, because there are only boids, vectors, calculations, and a Cartesian space in which they interact. And, most importantly, we don’t have to worry about whether something “extra” creeps in causally, because we know about everything in the program that does any determining. The system is “causally closed”. If we start out with the same initial conditions (the number of boids and their exact starting positions, velocities and directions), the simulation will do exactly the same thing every time we run it.
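
That determinism claim is easy to test in the toy sketch above. Here is a hypothetical check, again building on the functions sketched earlier: fix the initial conditions with a seeded random generator, run the flock twice, and confirm the two histories agree exactly.

```python
import random

def make_flock(n, seed):
    """Fixed seed means fixed initial positions and velocities."""
    rng = random.Random(seed)
    return [{"pos": [rng.uniform(0, 500), rng.uniform(0, 500)],
             "vel": [rng.uniform(-2, 2), rng.uniform(-2, 2)]}
            for _ in range(n)]

def run(flock, ticks):
    """Advance the whole flock and return the final positions."""
    for _ in range(ticks):
        for boid in flock:
            update(boid, flock)
    return [tuple(b["pos"]) for b in flock]

first = run(make_flock(30, seed=42), ticks=100)
second = run(make_flock(30, seed=42), ticks=100)
assert first == second  # same initial conditions, same history, every time
```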

Now let’s relate this idea of agent-based simulations to the real-world question of reductionism, keeping in mind that we’re not trying to answer the question — we’re just trying to ask it in a way that will help us understand what a real answer might mean.

First, imagine that we have infinite computing power and memory. We can create a simulation as large as we want; larger than the entire universe if need be.

Now imagine that the objects (the “boids”) in our simulation are the particles in your favorite fundamental physical theory, and that our calculations are the equations that determine the behavior of the particles in this same theory. If you “run” this simulation, will it act the same way our universe acts? Will it be possible for phenomena to arise in the simulation that we would describe as “planetary motion”? What about “whirlpools”? “Organisms”? “Conscious beings with internal experiences”? “Divorces”?

If your answer is “Yes”, then you might be a reductionist.

Gutting Zombies

I’ve been reading Daniel Dennett’s Intuition Pumps, and it’s bringing to the forefront something that has been lurking around in the corners of my brain for some time: the idea that intuition is really, really important to philosophical thought, and that finding our way to better philosophical answers has more to do with developing better intuitions than it does with rebutting bad arguments.

One intuitive philosophical exercise that has always bothered me is the “zombie” thought experiment. It’s an oldie, but it was recently revived by Gary Gutting in an NYT blog here and here. Gutting states the thought experiment like this:

Imagine, for example, that in some alternative universe you have a twin, not just genetically identical but identical in every physical detail — made of all the same sorts of elementary particles arranged in exactly the same way. Isn’t it logically possible that this twin has no experiences?

It may, of course, be true that, in our world, the laws of nature require that certain objective physical structures be correlated with corresponding subjective experiences. But laws of nature are not logically necessary (if they were, we could discover them as we do laws of logic or mathematics, by pure thought, independent of empirical facts). So in an alternative universe, there could (logically) be a being physically identical to me but with no experiences: my zombie-twin.

What the thought experiment is supposed to show is that there is something non-physical about experience (or in Chalmers’ version, that physicalism is false). How does that work exactly? It has to do, if you haven’t already guessed, with “logical possibility”. Gutting explains:

But if a zombie-twin is logically possible, it follows that my experiences involve something beyond my physical makeup. For my zombie-twin shares my entire physical makeup, but does not share my experiences. This, however, means that physical science cannot express all the facts about my experiences.

Without getting into the details of how “logical possibility” works here, it’s important to notice that at a basic level, the thought experiment is effective because we can easily intuit a physically identical person with no internal experiences. Our minds, in other words, have no problem abstracting “internal experience” out from “physical arrangement of matter”. If you’re like me – someone for whom the intuition doesn’t “work” – the exercise of doing so creates a sort of nonsensical situation in which physical causality itself is ignored (I’ll get back to that later). But even if you don’t find the intuitive appeal of the zombie idea overwhelming, you generally won’t have a problem performing the simple mental abstraction. Why is that?

I’d say that it’s intrinsic to the way symbolic cognition (perhaps the human animal’s best “trick”) works. Take the idea of cardinal numbers. Cardinal numbers don’t exist as physical objects in the world, though we have no problem thinking about and manipulating them cognitively. They are essentially the result of the mental “trick” of taking, say, five rocks, and abstracting away “rock”. We do this sort of abstraction all the time when we solve word problems in math. One hundred percent minus eighty percent is twenty percent, times forty is eight. Eight what? Eight questions that Pedro missed on his exam, on which he got an 80% and which had 40 questions.

So the zombie twin with no internal experiences is “conceivable”, at least in a broad sense. But what does Gutting mean by “logically possible”? And what power does logical possibility have in determining whether something is or isn’t a purely physical phenomenon?

Sadly, Gutting is not very clear on this (although, if you have the energy, Chalmers is probably the best philosopher to go to if you want to sort it out). He does say, pretty clearly, that physical laws are not logically necessary — in other words, that the set of things that are logically possible includes things that do not necessarily follow physical laws. Even this is a little muddy, and it’s easy to see that an argument approached this way is likely to bog down in the sort of semantic hair-splitting that quickly loses sight of the larger picture.

So instead, let’s “turn the knobs”, as Dennett says, on the thought experiment:

Imagine an alternate universe containing an open can of gasoline physically identical to one sitting in front of you in this universe. Exact same chemical and atomic composition, arranged in exactly the same way. Is it logically possible that in the alternate universe, tossing a lit match into this can of gasoline will not produce the huge explosion that it would produce in our universe?

The question I would ask Gutting is, “For you, in what relevant way does the gasoline thought experiment differ from the zombie thought experiment?”

The purpose in asking this question is not to catch Gutting in a “gotcha”. The purpose is to force us to articulate what, exactly, differs between a scenario which everyone (hopefully) would agree is “purely” physical and one in which we might be inclined to doubt it. If “logical possibility” works the same way in both examples, looking at the two side-by-side allows us either to ignore it completely or to become clearer about the work we believe it is doing in our thought experiment.

So if Gutting were to answer our question by saying that non-exploding gasoline isn’t logically possible, we could ask him to explain why not, in a way that did not appeal to physical laws governing combustion and its high-level emergent properties. Those laws, after all, are not logically necessary, since they cannot be arrived at a priori.

But maybe, instead, he would focus on what he might see as the difference between the causal process of combustion and the state of having internal experiences. To me, this focuses the argument where it should be focused, even if Gutting reaches a different conclusion. The intuition that drives the zombie thought experiment is one that sees internal, experiential processes as different in kind from physical causal processes. This is why many physicalist philosophers intuit that the zombie experiment assumes what it is trying to establish.

And the reason why the zombie thought experiment doesn’t pump my intuitions is that I see internal experience as being a physical process of the human brain and body in the same way that explosions are physical processes of combustion.

Pressing Words

I’m going to try blogging a bit again. I have a lot of thoughts about stuff that I don’t tend to record, and this seems like a decent place to do it. Hopefully nobody will happen by.

I decided not to use “Spoonerized Alliterations” for the blog name this time, since nobody really got it the first time around, and it is a lot to spell out. It’s figuratively taxing and literally nonsensical.

Instead, I opted for the trite Wittgenstein reference. It’s an appropriate name, though, since most of my philosophical thoughts these days involve trying to work past the linguistic, conceptual and rhetorical craptangle philosophy seems to get itself into.

Mainly, I intend to blog about things not appropriate to the Dead Voles venue — personal, political, feminist, skeptic, fiction writing and all that. Philosophy too, of course — although I may cross-post some of that at Rancho del Vole Muertos.

So welcome back me.