I watched a talk by Prof. David Tong at TIFR today. The talk itself was about the renormalization group, which I’ll make a separate post about, but toward the end, in response to a question about what he was working on currently, he said something that caught my attention. He mentioned a theorem that claims it is impossible to simulate the laws of physics on a computer, and then added, “I have an issue with that, that’s what keeps me up at night.”
That sentence lingered. It reminded me of a separate conversation I had with my old post-doc advisor. I think we were talking about a similar topic, and he brought up the “Alice in Wonderland” story (I forget the exact source, so don’t quote me on it) about people who try to make a perfectly detailed map of a region. At what point does a detailed enough map stop being a map and become the region itself? So I asked an AI what this theorem was, because the claim sounded both foundational and extremely interesting.
According to the AI, the theorem people have in mind is not a single result so much as a family of ideas from theoretical computer science and dynamical systems. Roughly speaking, it says that there exist physical systems whose time evolution is computationally universal. Predicting their future state exactly is as hard as running an arbitrary computation. In such cases, there is no general algorithm that can take the laws of motion and “fast-forward” the system to its state at time t more efficiently than simply letting the system evolve for time t. To know what happens, you have to let it happen.
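To make that concrete, here is a toy sketch in Python (my own illustration, nothing from the talk or from the AI): the Rule 110 cellular automaton, a one-dimensional system whose update rule is known to be Turing-complete. As far as anyone knows, the only general way to obtain its exact state at step t is to apply the local rule t times.

```python
# A toy illustration (mine, not from the talk): Rule 110, a cellular automaton
# whose dynamics are known to be Turing-complete. The only general recipe for
# its exact state at step t is to apply the local rule t times -- to know what
# happens, you let it happen.

RULE = 110  # Wolfram's numbering: bit k of 110 is the new value for 3-cell neighborhood k

def step(cells):
    """One synchronous update with periodic boundaries."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def state_at(cells, t):
    """'Predicting' the state at time t means running all t intermediate updates."""
    for _ in range(t):
        cells = step(cells)
    return cells

if __name__ == "__main__":
    cells = [0] * 63
    cells[31] = 1                              # start from a single live cell
    for _ in range(16):                        # print the first few rows
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
    later = state_at(cells, 200)               # "prediction" = 200 more updates
    print("live cells after 200 more steps:", sum(later))
```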
Stated carefully, this is a worst-case complexity result. It applies to exact microscopic prediction of full system states. It does not say that all physical systems are unpredictable, nor does it say that physics as a discipline is futile. But it is often summarized in a much stronger form: that the universe cannot be simulated faster than the universe itself runs.
Taken at face value, that summary sounds like a direct challenge to the enterprise of theoretical physics. If the only way to know what a system will do is to simulate it in full microscopic detail, then explanation, compression, and prediction seem like illusions. Physics would collapse into brute-force computation.
Physics does not aim to compute exact microstates. It never has. What it predicts are collective variables, long-distance correlations, conserved quantities, scaling laws, and universal behavior. The fact that we can write down simple equations describing fluids, magnets, or quantum fields is already evidence that microscopic irreducibility does not propagate upward in any straightforward way.
Renormalization makes this precise. When we change the scale at which we describe a system, most microscopic details disappear. Under coarse-graining, couplings flow. Irrelevant operators shrink. Only a small number of parameters survive and control large-scale behavior. Entire classes of microscopically distinct systems flow toward the same fixed points. This is not accidental; it is structural.
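A toy version of this flow can even be worked out exactly. For the one-dimensional Ising chain, summing out every other spin gives the recursion tanh(K') = tanh(K)^2 for the dimensionless nearest-neighbor coupling K. The few lines below (again my own illustration, not anything from the talk) just iterate it:

```python
import math

def decimate(K):
    """One real-space RG step for the 1D Ising chain: sum out every other spin.
    The exact recursion for the dimensionless coupling is tanh(K') = tanh(K)**2."""
    return math.atanh(math.tanh(K) ** 2)

K = 1.5                                   # start from a fairly strong coupling
for n in range(8):
    print(f"after {n} decimations: K = {K:.6f}")
    K = decimate(K)
# K flows toward the trivial fixed point K = 0: at long distances the chain
# looks disordered, and the precise microscopic value of K stops mattering.
```

The particular model is beside the point; the pattern is what matters. Each step discards microscopic detail, and what remains is a flow in a small space of couplings.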
The no-fast-forwarding theorem assumes that “prediction” means reconstructing the complete microscopic configuration at a later time. Renormalization shows that this is the wrong notion of prediction for physics. What matters is not the full state, but what survives coarse-graining. Even if exact microstate evolution is computationally irreducible, macroscopic laws can still be simple, stable, and computable.
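One way to see operationally what “survives coarse-graining” means (a toy of my own, not part of the theorem): scramble the microscopic configuration in a way that preserves block averages, and the coarse-grained description does not change at all.

```python
import random

random.seed(1)
n, b = 1000, 10                           # chain length and block size
spins = [1 if random.random() < 0.55 else -1 for _ in range(n)]

def coarse(s):
    """The coarse-grained description: the average spin of each block."""
    return [sum(s[i:i + b]) / b for i in range(0, len(s), b)]

# Scramble the microscopic configuration, but only *within* each block,
# so that every block average is preserved.
scrambled = []
for i in range(0, n, b):
    block = spins[i:i + b]
    random.shuffle(block)
    scrambled.extend(block)

differ = sum(a != c for a, c in zip(spins, scrambled)) / n
print(f"fraction of sites that differ microscopically: {differ:.2f}")
print("coarse-grained descriptions identical:", coarse(spins) == coarse(scrambled))
```

The coarse description is a many-to-one map: a huge number of distinct microstates correspond to the same blocked configuration, which is part of why it can obey laws far simpler than the microscopic dynamics.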
Seen this way, there is no contradiction. Both statements can be true at once. Exact microscopic evolution may resist compression, while large-scale behavior is highly compressible. The mistake is to assume that one implies the negation of the other.
My guess is that the unease is not with the theorem itself, but with the conclusion people draw from it. If one takes computational irreducibility as the final word, theory becomes secondary to simulation. Renormalization argues the opposite: theory is possible precisely because most information does not matter at large scales. Forgetting, when done systematically, is not a failure of understanding but its foundation.
What keeps Tong up at night, I suspect, is not whether the universe can be simulated faster than it runs, but whether we are being careless about what we mean by simulation and prediction. Physics has always advanced by identifying what can be ignored without losing explanatory power. Renormalization is the formal expression of that idea.
The universe may be computationally hard in detail. But it is not computationally opaque. The fact that we can understand it at all depends on the difference between those two statements.
