[NKS] Wolfram: Computational Equivalence

[Continued live-blogging of the Wolfram conference in Waltham, MA.]

While looking at Rule 110, he wondered what happens if you consider everything that happens as a computation. He shows a Cellular Automaton [CA] that generates the primes and another that generates the powers of 2. (The white stripes fall on the primes or the powers.)

He discovered the Principle of Computational Equivalence (PCE) by looking at lots of CAs. The principle: if you look at a process, that process will correspond to a computation of equivalent sophistication. The computer revolution has taught us that it’s possible to build a single, universal machine that will do any computation. He’s taken that idea seriously and applied it to the natural sciences.

The PCE says that except in cases where the system is doing something really simple, it’s most likely the case that the system is doing a computation of equivalent sophistication. That principle has many implications and predictions. E.g., it predicts that a system with simple rules should be capable of computations as sophisticated as those of any other computer. I.e., Rule 110 should be a universal computer. Rule 110 looks really, really random, but it has some “local structures,” i.e., there are identifiable structures (lines, triangles) in the swirling mist. These might represent useful information. You might have thought that to do universal computation you need very complex systems with very complex rules (e.g., lots of logic gates), but instead you can do it with Rule 110, which arises from extremely simple rules.
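To make the setup concrete, here is a minimal sketch of an elementary cellular automaton in Python. It assumes the standard Wolfram rule numbering, where the update table is read off the binary digits of the rule number (110 here); the code and names are mine, not Wolfram's.

```python
def eca_step(cells, rule=110):
    """Apply one step of an elementary CA (the row wraps around at the edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Each new cell depends on its left neighbor, itself, and its right
        # neighbor; that 3-bit neighborhood indexes into the rule's binary digits.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out

# Start from the simplest possible input: a single black cell.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = eca_step(cells)
```

Even this tiny run shows the kind of irregular texture, punctuated by identifiable local structures, that the talk describes.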

This has implications. For natural science it means that among systems in nature one expects to see many systems capable of universal computation.

The PCE wraps together several things. First, it means there’s an upper limit on the computations that can be done by a system. You can’t keep adding complex rules to get more complex computations; once you get past the threshold at which a system is a universal computer, you can’t get any further. Adding registers, for example, won’t increase the sophistication of the computation, although it obviously might speed up the actual calculations. (This is like Church’s thesis, he says.) But the PCE gets its teeth by saying that not only is there an upper limit, but this limit is achieved by many systems.

To do a computation, perhaps you have to feed Rule 110 complex inputs. But the PCE says that that’s not necessary. Rule 110, even when the initial conditions are simple, yields computations of enormous complexity. [I've always been fuzzy on this point. Still am. Where in Rule 110 is the computing of Pi or the lighting effects for Doom III?]

The PCE is in one sense a law of nature. [In the book he leaves out the "in one sense."] In another sense it’s a law of computation. One must ask, “How could this principle be wrong?” As a law of nature, it could disagree with reality. As an abstract fact, it might yield false deductions. He thinks that it will come to seem as obvious as the Second Law of Thermodynamics. (He pauses to suggest that in fact the Second Law isn’t obvious.)

So how might it be wrong? Systems might have too much or too little computational sophistication. Models in the past haven’t noticed or cared about this. If constraint-based systems operated as well in nature as initial-value problems do, then the PCE couldn’t be right. [I lost his point.] Or, in physics, if quantities are continuous, then you could do…[and I lost the point again...he's talking very fast and I am skating on very thin ice]. But Wolfram believes the universe is discrete, not continuous, so that objection doesn’t hold.

Human thinking is supposedly more sophisticated than what computers can do. Wolfram disagrees.

But the PCE could fail at the low end, i.e., maybe Rule 30 isn’t a universal computer. Maybe there’s some regularity in Rule 110. We usually think of repetition and nesting as regularity, but perhaps there’s another form of regularity. And maybe [nature] selects for rules that are complex but have that unexpected form of regularity.

Wolfram thinks the principle is true but his point is that there are ways it could fail. [And thus the PCE is a scientific statement. He doesn't use the word "falsifiable" but he's clearly thinking it.]

Systems like Rule 110 seem complex because they’re doing computations as complex as the ones we do when we’re trying to make sense of them.

Computational Irreducibility (CI) argues against predictability. If a system is complex, we can’t predict where it will be in a thousand steps except by running the thousand steps. That is, we can’t figure out the outcome with less effort than the system itself expends. [This is one of the ideas that first attracted me to Wolfram's thought.] Computational reducibility has been at the heart of many of the sciences: we can tell exactly where the moons of Jupiter will be in the year 3005 without having to wait until 3005. But you can’t predict what step 3005 of Rule 110 will be without going through all 3005 steps.

CI and the PCE mean that it simply won’t be possible to find exact solutions (mathematical functions) for some systems.

(In response to a question): The Scott Aaronson review was disappointing because Wolfram spent time with him trying to correct his misconstructions, but the review got published anyway.

Take a question like whether there’s extraterrestrial life. We recognize earth life because it shares ancestry. But is there an abstract definition of life? He can’t find one. The same is probably even more true of definitions of intelligence. There’s a threshold of computational sophistication, but beyond that there isn’t an essence of intelligence. Lots of systems have crossed the threshold to universal computation. There are lots of fun things to talk about in distinguishing intelligent systems from merely universally computational ones, involving concepts such as meaning and intention, but unfortunately we’re out of time. [!]


Date: June 28th, 2003

dw