Written with ChatGPT
I watched a talk by Prof. David Tong at TIFR some time ago. The talk itself was about the renormalization group, which I'll make a separate post about, but toward the end, in response to a question about what he was working on currently, he said something that caught my attention. He mentioned a theorem that claims it is impossible to simulate the laws of physics on a computer, and then added, "I have an issue with that, that's what keeps me up at night."
That sentence lingered. It reminded me of a separate conversation I had with my old post-doc advisor (PB from my post "My Mathematical Journey"). I think we were talking about a similar topic, and he mentioned a story (from "Alice in Wonderland," I think, though don't quote me on it) in which mapmakers try to create a perfectly detailed map of a region. At some point, is a detailed enough map still a map, or is it the region itself? So I asked AI what this theorem was, because the claim sounded both foundational and extremely interesting.
According to AI, when people talk about this result, they are usually not pointing to a single clean theorem so much as a cluster of ideas from theoretical computer science and dynamical systems. Roughly, the claim is that there exist physical systems whose time evolution is computationally universal. Predicting their future state exactly is as hard as running an arbitrary computation. In those cases, there is no general algorithm that can take the laws of motion and jump ahead to the state at time t more efficiently than just letting the system evolve for time t. If you want to know what happens, you have to let it happen.
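A toy illustration of what "letting it happen" means in code: Rule 110, a one-dimensional cellular automaton, is known to be Turing-complete (Cook, 2004), so predicting its exact state at step t is, in the worst case, as hard as running an arbitrary computation for t steps. The sketch below is my own minimal implementation, not anything from the talk; it just makes the point that the generic way to reach step t is to compute all t steps.

```python
# Toy sketch of "no fast-forwarding": Rule 110, a cellular automaton
# known to be Turing-complete. For such systems there is, in general,
# no shortcut to the state at step t other than computing all t steps.

RULE = 110  # the local update rule, encoded as an 8-bit lookup table

def step(cells):
    """Apply one synchronous Rule 110 update (periodic boundaries)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(cells, t):
    """The only generic way to get the state at time t: run all t steps."""
    for _ in range(t):
        cells = step(cells)
    return cells

# A single live cell in the middle of a ring of 31 sites; run 20 steps.
state = [0] * 31
state[15] = 1
final = evolve(state, 20)
print("".join("#" if c else "." for c in final))
```

For simple linear rules there are clever shortcuts that jump ahead exponentially; the content of the irreducibility claim is that for universal systems like this one, no such shortcut exists in general.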
Stated carefully, this is a worst-case complexity statement. It applies to exact microscopic prediction of full system states. It does not say that all physical systems are unpredictable, and it does not say that physics as a discipline is pointless. But it often gets summarized in a much stronger way, as the claim that the universe cannot be simulated faster than the universe itself runs.
Taken literally, that summary sounds like a direct threat to theoretical physics. If the only way to know what a system will do is to simulate it in complete microscopic detail, then explanation, compression, and prediction start to look like illusions. Physics would reduce to brute-force computation.
But physics has never been about computing exact microstates. What it predicts are collective variables, long-distance correlations, conserved quantities, scaling laws, and universal behavior. The fact that we can write down simple equations for fluids, magnets, or quantum fields is already evidence that microscopic irreducibility does not automatically propagate upward.
The no-fast-forwarding result assumes that prediction means reconstructing the complete microscopic configuration at a later time. Renormalization shows that this is the wrong notion of prediction for physics. What matters is not the full state, but what survives coarse-graining. Even if exact microstate evolution is computationally irreducible, macroscopic laws can still be simple, stable, and computable.
Seen this way, there is no contradiction. Both things can be true at once. Exact microscopic evolution can resist compression, while large-scale behavior remains highly compressible. The mistake is to assume that one of these statements cancels the other.
My unease is not with the theorem itself, but with the conclusion people like to draw from it. If computational irreducibility is treated as the final word, theory becomes secondary to simulation. Renormalization argues the opposite. Theory is possible precisely because most information does not matter at large scales. Forgetting, when done in a controlled and principled way, is not a failure of understanding. It is its foundation.
What I suspect really keeps Tong up at night is not whether the universe can be simulated faster than it runs, but whether we are being sloppy about what we mean by simulation and prediction in the first place. Physics has always progressed by identifying what can be ignored without losing explanatory power. Renormalization is the formal expression of that idea.
The universe may be computationally hard in detail. But it is not computationally opaque. Our ability to understand it at all rests on the difference between those two claims.
