I just finished reading Farewell to Reality by Jim Baggott, a well-written and accessible contribution to the emerging genre of books debating whether untestable theories can be considered science. Thanks in large part to the work of Karl Popper, it has come to be widely accepted that a defining characteristic of science is the making of falsifiable predictions. But there has emerged lately a cadre of (mostly) theoretical physicists advocating that the Popperian standard should be relaxed. See, for example, the recent Edge essay by Sean Carroll, in which he argues that we should instead ask whether a theory is “definite” and “empirical”.

It seems to me that “is it science?” is the wrong question. This is one of those debates that serves mainly as misdirection from the real issue, which is: given that nearly all science today is publicly funded, how and on what shall the limited funds available be spent? That isn’t a scientific question, or even a philosophical one; it’s a political question. In theoretical physics, the string theorists currently have the upper hand, because there are a great many of them and they dominate the grant committees, the peer review process, and the training of the next generation of theoretical physicists. But make no mistake, the issue here is not really “how shall we define science”, it’s “how shall we allocate limited funds.”

But if we’re going to redefine science, I would prefer to go the other direction, back in the direction of practicality: instead of replacing “falsifiable” with “definite and empirical”, replace it with “engineerable”. That is, for a hypothesis to be called science, it should elucidate a cause and effect relationship that can at least potentially be the subject of intentional action in the reality that we actually live in.

At first blush, that might seem to wipe out a lot of useful subject matter — most of cosmology, for example — but let me define more clearly what I mean by engineerability. To do so, I first need to draw another distinction that seems to me to be too often disregarded, between reality and models. When we speak of “theory” in physics, often what we’re really describing is a model. Modern physics relies on a great many constructs that everyone agrees don’t actually “exist”, in the sense of being directly observable. We proceed as though they do exist, because doing so leads to the right answers on things that we can observe and measure. What we really have is a black box that we have constructed out of some complicated imaginary parts, because if you put them together in this particular way the black box happens to give the right answer. But the model is not the thing-in-itself, and models (so far, at least) are always wrong — there are always phenomena for which they don’t give the answers that we observe when we measure. In large part, progress in science consists of improving the models.

As an example of this, I am reminded of a discussion of the Bohr model of the hydrogen atom in a quantum mechanics class some years ago. In the very early days of quantum mechanics, Niels Bohr managed to calculate the correct electronic energy levels of the hydrogen atom by assuming that the electron traveled in a circular orbit and that a whole number of electron wavelengths fit around the circumference of that orbit. No one today would think that that’s how the electron energy states of hydrogen work, but it gave the right answer, as far as it went. I recall asking the professor (who was a well-regarded QM expert and certainly would know) how this came about, and his answer was that Bohr just got lucky — the model was completely wrong, but it happened to yield the right answer.
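Bohr’s lucky answer is easy to reproduce. A minimal sketch (the function names are mine; the Rydberg energy is the standard CODATA value, rounded) of the energy-level formula the model yields, E_n = −13.6 eV / n²:

```python
# Bohr model of hydrogen: quantized energy levels.
# The picture (circular orbits, standing electron waves) is wrong,
# but the formula it produces matches the observed spectrum.

RYDBERG_EV = 13.605693  # Rydberg energy in electron-volts (CODATA, rounded)

def bohr_energy(n: int) -> float:
    """Energy of the n-th level of hydrogen, in eV (n = 1, 2, 3, ...)."""
    if n < 1:
        raise ValueError("principal quantum number must be >= 1")
    return -RYDBERG_EV / n ** 2

def transition_energy(n_from: int, n_to: int) -> float:
    """Photon energy released when the electron drops from n_from to n_to."""
    return bohr_energy(n_from) - bohr_energy(n_to)

# The Lyman-alpha transition (n=2 -> n=1) comes out near 10.2 eV,
# in agreement with measurement -- the right answer from the wrong model.
print(round(transition_energy(2, 1), 2))
```

The black-box character of models is on display here: the arithmetic matches experiment even though no one believes the machinery inside.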

String theory (to take another example) is both a theory and a model. The theory is a mathematical construct, and as long as it’s mathematically consistent (something I would be woefully unqualified to have any opinion about), there’s nothing wrong with it as a theory. It is even falsifiable, in a sense, because there could be a mathematical proof that could falsify it on some mathematical ground. But the problem isn’t with the theory, the problem is when you try to use it as a model of reality. In that capacity, it isn’t very useful, because it doesn’t predict anything that you can measure or observe, and it certainly doesn’t seem to reveal any cause and effect relationship that could be affected by intentional action. (It also fails another test of a good model, about which I have opined elsewhere, which is that models that have a great many tuneable parameters are automatically suspect because they can be made to fit anything.)
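The point about tuneable parameters can be illustrated concretely. This is my own toy example, not anything specific to string theory: a polynomial with as many coefficients as there are data points will pass through any data exactly, even pure noise, so its perfect “fit” tells you nothing.

```python
# With enough free parameters, a model can be made to fit anything.
# A degree-(n-1) polynomial has n coefficients and passes exactly
# through any n points -- including points that are pure noise.
import random

def lagrange_fit(points):
    """Return a function that interpolates the given (x, y) points exactly."""
    def poly(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)  # Lagrange basis factor
            total += term
        return total
    return poly

random.seed(0)
data = [(float(x), random.uniform(-1, 1)) for x in range(8)]  # random noise
model = lagrange_fit(data)  # 8 "tuneable parameters" for 8 observations

# The fit is perfect at every data point -- and predictively worthless.
assert all(abs(model(x) - y) < 1e-9 for x, y in data)
```

A perfect fit earned this cheaply is exactly why a model with a great many adjustable knobs should be automatically suspect.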

We naturally tend to “objectify” ideas so that we can think about them. In physics, we start with actual observable physical objects, and we devise models to describe and predict their behavior. In those models we create new objects that we can’t directly observe — virtual particles, for example, or fields of various kinds — but after you manipulate them mentally and mathematically long enough they start to seem quite real in your mind. They have assumed properties and behaviors that come to seem familiar. Then we devise more models to describe the behavior of those objects. After a few iterations of that process, we’ve constructed an entire universe of conceptual objects that we can manipulate mentally or mathematically, and if we’ve been clever enough about it, we may be able to correlate some of their behaviors with those of real objects and events. But without experimental observation as a reality check, it’s easy to get to a point where we may just be speculating about the physics of a world of imaginary objects.

Back to engineerability. I would define a model as engineerable if and only if it contributes to the identification or understanding of some causal relation — some behavior that could, at least in principle, be used to intentionally manipulate our own reality. By that definition, cosmological theories can lead to engineerable models because they contribute principles of physics that have proven useful in designing actual things — relativity being perhaps the most obvious example. Under my proposed definition, if a model describes properties or behaviors that humans can use to intentionally affect the outcome of some real process by making conscious choices among alternatives that are at least in principle within their ability to manipulate, then it qualifies as engineerable; otherwise, not.

I want to be clear that I am not in any way advocating that non-engineerable inquiries should be banned, or even looked down upon — I just think there needs to be some sort of biasing force to gently nudge the scientific endeavor back toward reality when it veers too far toward mysticism. I concede that this point of view probably stems in part from my deeply conditioned American constitution-centric belief system, which holds that matters religious ought not to be the province of (or funded by) government. Theoretical mythologies involving infinite numbers of branching universes, or tiny curled up extra dimensions that can’t ever be measured, even in principle, seem to me indistinguishable from religion. Causal factors that can’t be measured or observed, and can’t be provably derived from things that can be, aren’t science, they’re theology. As soon as you say “X is the way it is because of Y”, and Y is something that can’t ever be measured or tested, Y is just another name for God.

I also want to be clear that I’m not objecting to abstract inquiry. Obviously, there have been entire branches of mathematics that initially seemed utterly abstract and devoid of practicality, but that later turned out to be quite useful. The problem with the way theoretical physics has evolved, I think, is not that it’s abstract or unrelated to anything practical, but that it’s misrepresented. If someone wants to pursue an abstract mathematical idea because it’s interesting, no problem — just say that’s what you’re doing, and don’t make grandiose claims that you’ve discovered the True Architecture Of The Universe.

The reason I would advocate engineerability as the standard for what we think of as science is that it seems to me a reasonable way to tether science to reality and draw a line between science and religion, or science and mysticism. If you want to call something science, show us that you can at least potentially use it to affect our own reality in some way. Otherwise, the philosophy department is thattaway. That also seems to me a reasonable place to set political boundaries. I think most people have no problem with the idea of their tax dollars being used to fund science that leads to progress that produces public benefit. But it seems fairly hard to argue that what Baggott calls “fairy tale physics” leads to similar public benefit. This is an issue that matters, because these days in theoretical physics the “fairy tale physics” seems to be choking off support for other lines of inquiry. I’ll save the rant about the shortcomings of the current system of science funding for some other day, but as long as we’re stuck with the system that we have, it might be worthwhile to try to steer it at least part way back to the reality that we actually inhabit.
