More than three centuries ago, Newton showed that the gravitational field of a perfectly spherical object is equivalent to that of a point. In other words, if you pretend that the Earth is spherically symmetric (which is a really good approximation), then, if you want to know the net gravitational effect that comes from this massively complex system made up of roughly 10^51 elementary particles, each one with its own mass creating its own gravitational field, it suffices to consider the gravitational effect of a single particle. (Admittedly, you need to consider a single particle that weighs 6,000,000,000,000,000,000,000,000 kilograms.) That is, the gravitational field is simply given by k/r², where k is a constant and r is the distance from the center of the Earth. (Extending this model to include points within the Earth is easy as well.)
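As a sanity check on that formula, here's a few lines of Python (my own illustration, using standard values for the constants) that plug in the Earth's mass and radius:

```python
# A quick numerical check of the point-mass shortcut: with k = G * M_EARTH,
# the field k/r^2 at the Earth's surface should come out to ~9.8 m/s^2.

G = 6.674e-11        # gravitational constant, m^3 / (kg s^2)
M_EARTH = 5.97e24    # mass of the Earth, kg (the 6e24 figure above)
R_EARTH = 6.371e6    # mean radius of the Earth, m

k = G * M_EARTH
print(k / R_EARTH**2)   # ~9.82 m/s^2
```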
This idea is enormously powerful. It takes a problem that our most powerful computers couldn’t even contemplate and converts it cleanly into a problem that everyone solves as a freshman in high school. Indeed, physicists are so enamored with spherical approximations that they get mocked regularly for it.
So, to act like a caricature of a theoretical physicist for a minute, let’s pretend that the Earth is indeed perfectly spherically symmetric and that it’s the only object in the universe. And, say we only care about gravity in this world. In this fabled universe, it’s not too hard to imagine building a computer that perfectly represents the effects of gravity everywhere in the universe. A user of this computer could enter coordinates and a desired level of accuracy and learn the strength and direction of the gravitational field at those coordinates within the requested level of accuracy. It is even conceivable that this computer could itself be a sphere of uniform density, centered around the center of the Earth, so that the computer could easily consider its own gravitational effects as well. (Or, the density of the material from which the computer is constructed at a given distance from the Earth’s center could be the same as all other matter at that distance. E.g., the computer could be as dense as rock and inside the Earth or as dense as air and in the atmosphere.)
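Here's a sketch of what that computer's interface might look like, assuming a one-body Newtonian universe. The function name and signature are hypothetical, purely for illustration:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 / (kg s^2)
M = 5.97e24      # mass of the toy universe's only object, kg

def query_field(x, y, z, accuracy):
    """Hypothetical query interface: coordinates are in meters from the
    Earth's center; returns (field strength in m/s^2, unit vector giving
    the field's direction). In a one-body universe the point-mass answer
    is exact, so any requested accuracy is trivially satisfied."""
    r = math.sqrt(x*x + y*y + z*z)
    strength = G * M / r**2
    direction = (-x / r, -y / r, -z / r)   # the field points toward the center
    return strength, direction

# Example query: a point on the surface, ~6.371e6 m from the center.
print(query_field(6.371e6, 0.0, 0.0, accuracy=1e-9))
```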
So, a resident of this toy universe (who would have to have similar properties to our hypothetical computer to preserve symmetry) could theoretically build a machine within the universe itself that perfectly represents the universe. We could imagine making the universe more complicated–for example, by adding some more objects, using a more accurate shape for the Earth, and allowing it to change with time–and still constructing such a machine. This computer would be a solution to all of physics in this toy universe, and in some sense, it would be a replacement for the entire universe, even though it’s much smaller and simpler.
In our universe, we don’t yet have such a thing. Indeed, this place is really messy on scales smaller than 100 megaparsecs or so, and it only starts to look symmetrical if you ignore details that are smaller than roughly a gigaparsec. (Think about how the Earth looks reasonably spherically symmetric if you consider distances greater than, say, 1,000 kilometers, but Manhattan is nowhere close to spherically symmetric. Similarly, our universe is a mess of stars, galaxies, clusters, and voids until you zoom out A LOT, at which point it looks like a nice, evenly distributed foam. In order to see this smoothness, you’d have to look at swathes of the universe that are so big that it takes light billions of years to get across them.) So, there’s no obvious way to represent our universe simply without abstracting away all of the important detail.
We do, however, have computer programs that roughly model our universe, and they’re extremely useful. For example, we test some of our cosmological theories by running simulations that attempt to model the entire universe. (Here’s one example.) These models aren’t so accurate that they create semi-realistic simulations of your favorite T-shirt, or even your home town, or even the Earth, or even the solar system, or even the Milky Way. But, in principle, perhaps they could?
The question behind this blog post is fairly simple: We already have one model that perfectly encapsulates the behavior of the universe. It’s called the universe. Can we do better?
Is our universe like the toy universe that I talked about earlier? In other words, is there some way to build a machine within this universe, and strictly smaller than the universe as a whole, that completely captures it? Can we build, inside of this universe, a computer that answers every physics question to any degree of accuracy? (We might, for example, ask it who will win the 2025 World Series or, if the universe is random (as most physicists currently believe), what the odds are that the winner will be the Red Sox.) If so, what’s the smallest possible perfect representation of our universe within our universe? How many particles would it take to build?
I think that most people who read the above questions and know a bit of physics will conclude that it’s impossible to have a subset of the universe that perfectly represents the whole thing. In other words, if you want to know what’s going to happen in the next ten seconds to an extremely high level of accuracy, all you can do is wait ten seconds and see. (There are, however, some relatively suggestive ideas in theoretical physics. The holographic principle comes to mind.) Indeed, special relativity and quantum mechanics make it hard to even imagine what a solution would look like, since we lose the helpful concepts of absolute time and determinism. (I’m mostly just conveniently ignoring this problem and pretending that we live in a Newtonian world.)
There are also some philosophical hurdles: Wouldn’t some part of this computer necessarily represent itself? Would that piece of the computer be a smaller representation of the universe, leading to an infinite chain of smaller representations? Could such a computer predict its own behavior? Would it be able to solve the halting problem for all buildable machines? These don’t directly lead to contradiction (a buildable machine and a theoretical machine are two different things), but they show that this line of thought is somewhat risky.
Fortunately, such problems lead to some natural relaxations:
- Can we build a smaller model of the universe if we separate it from the universe? In other words, suppose we could escape from the universe into some new universe with the same laws. Can we model our own universe within it without just rebuilding the whole damn thing? Since we’re now allowed to leave our universe, we don’t have to worry about any circularity anymore. (When phrased like this, the question seems to have no practical importance. But, it’s not much of a stretch to go from modeling a different universe to modeling everything that’s not causally connected to the computer.)
- Suppose I don’t want to predict the future to arbitrary accuracy, but just to some fixed accuracy. For example, maybe I want to know the mass/energy distribution of the entire universe n years from now, so that in any cubic meter, my estimate of its mass is accurate to within x kg. For a given n and x, how big of a computer do I need? (Note that we already have computer programs that do exactly this, albeit only for very large x. See the storage sketch just after this list.)
- If space is discrete, what’s the Kolmogorov complexity of the universe right now? (In other words, how much information is contained in the entire universe?) If it’s not discrete, what’s the Kolmogorov complexity of a description of the universe that is accurate up to n meters? (There are some natural lower bounds on this number. For example, the Kolmogorov complexity of the universe can’t be smaller than that of the internet. See the compression sketch below.)
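On the fixed-accuracy question, here's a back-of-the-envelope Python sketch of the storage side alone. The radius figure is the standard ~46.5-billion-light-year estimate for the observable universe; the cubic-meter grid is my own illustration, not a real simulation scheme:

```python
import math

LIGHT_YEAR = 9.461e15                 # meters
R_OBS = 46.5e9 * LIGHT_YEAR           # radius of the observable universe, ~4.4e26 m

cells = (4 / 3) * math.pi * R_OBS**3  # one cell per cubic meter, ~3.6e80 cells
bits_per_cell = 32                    # say, one float for each cell's mass
print(f"{cells:.1e} cells, {cells * bits_per_cell:.1e} bits")
```

Even before simulating any dynamics, this naive lookup table needs more cells (~3.6 × 10^80) than there are atoms in the observable universe (~10^80), which hints at why the question is interesting: the computer has to be far cleverer than a table.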
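And on the Kolmogorov-complexity question: K is famously uncomputable, but any lossless compressor gives an upper bound on it, since a compressed string plus a fixed decompressor is a complete description. A toy Python sketch of that standard trick:

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Upper bound, in bits, on the Kolmogorov complexity of `data`:
    the length of a zlib-compressed copy, plus O(1) for the decompressor
    (ignored here)."""
    return 8 * len(zlib.compress(data, 9))

print(complexity_upper_bound(b"spherical cow " * 10_000))  # regular data: small bound
print(complexity_upper_bound(os.urandom(10_000)))          # random data: bound ~ input size
```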
Anyway, that’s a thought that I’ve had for a while now, and this seemed like a nice place to put it. Let me know what you think!
Disclaimer: I didn’t bother to define “universe” anywhere here because I didn’t want to get too bogged down in the messiness of the real world. When I say “universe,” I probably mean “observable universe.” Even that gets complicated because matter might be entering our observable universe from the outside, so maybe I mean a toy universe that’s pretty close to ours but a bit better behaved.
Similarly, I mostly skirted around the idea of what it means for a machine to perfectly represent the universe. I probably mean that a reasonably intelligent user (say, a human being) would be able to input a request like “Tell me the probability that event A will happen at coordinate x at time t to within three decimal places” and eventually (perhaps after some absurd amount of time) receive a correct, easy-to-interpret answer. Obviously, the definitions of coordinate, time, event A, and probability are all a bit sketchy, but I’d imagine that I could formalize such a model reasonably well if I really wanted to bore my readers.
It may well be that the technicalities that I ignored contain the meat of the problem.
In the beginning there was nothing. Then there was an explosion. So, it seems that “nothing” + “x time” (for some value of x) is a pretty good description of the universe. You possibly need a seed for the random-number generator as well, if applicable.
universe(hash(time()))
http://xkcd.com/224/
Well, considering that quantum fluctuations don’t obey deterministic parameters, you will only be able to “predict” in a statistical manner.
BTW, we are slowly getting there. When I was in grad school, smashing galaxies together with 1000 particles each was hard. Now with the Millennium simulation (which you linked to), we are getting better and better, especially with adaptive mesh refinement.
And, there are some interesting ideas out there that say we ourselves are only part of a big program with some randomness built in. There is some set of governing rules (algorithms). We are all made out of elementary particles (bits). Some heck of a simulation, I tell you! 😉
You might find the simulation argument to be interesting: http://www.simulation-argument.com/ .
Scott Aaronson, noted MIT quantum computational theorist and prolific blogger, had a great post a while ago about modeling complexity: http://www.scottaaronson.com/blog/?p=762. I haven’t quite wrapped my head around how it might apply to a computer that simulates the universe, but it feels related to me. In particular, I think it suggests you need to consider resource consumption in such a model.
It’s absolutely related. In fact, though this is an idea that I’ve had since like high school or something, this actual blog post mutated from an e-mail that I never sent to Professor Aaronson because I figured he probably wouldn’t respond.
To say I’m a fan of Aaronson’s blog would be a huge understatement.