
June 27, 2003

[NKS] Wolfram: Applying to the physical world

When you try to model a natural phenomenon, you inevitably drop out some of the phenomenon as irrelevant; that’s the nature of modeling. You can’t then complain that the model doesn’t capture something that it wasn’t intended to capture.

Let’s take snowflakes as an example. When snowflakes form, they start from a seed. When a crystal bond is formed, some heat is released, inhibiting neighboring molecules from attaching. So, make a rule that says a cell will only be filled in if ___, and you get a snowflake form. You can make predictions from this, such as: big snowflakes will have holes in them where arms collide; that turns out to be true. The model works. But it won’t answer a question like how far an arm will grow at a particular temperature. [I don't know why.]
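Here’s a minimal sketch of that sort of growth model (my own guess at the kind of rule being described, not necessarily the one left blank above): on a hexagonal grid, an empty cell fills only when exactly one of its six neighbors is filled, the idea being that the heat released by a new bond inhibits growth where more neighbors are already filled.

```python
# A minimal sketch (assumed rule, not necessarily the one left blank above):
# snowflake-like growth on a hexagonal grid, where an empty cell fills only
# when exactly ONE of its six neighbors is filled -- the idea being that the
# heat released by a new bond inhibits growth when more neighbors are filled.
import numpy as np

SIZE = 61                                   # odd, so there is a central seed
grid = np.zeros((SIZE, SIZE), dtype=int)
grid[SIZE // 2, SIZE // 2] = 1              # the seed

# Six neighbor offsets of a hexagonal lattice in axial coordinates.
HEX_NEIGHBORS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]

def step(g):
    new = g.copy()
    for r in range(1, SIZE - 1):
        for c in range(1, SIZE - 1):
            if g[r, c] == 0:
                filled = sum(g[r + dr, c + dc] for dr, dc in HEX_NEIGHBORS)
                if filled == 1:             # exactly one filled neighbor
                    new[r, c] = 1
    return new

for _ in range(20):
    grid = step(grid)
print(grid.sum(), "cells filled after 20 steps")
```

Run for enough steps and the arms do grow around holes where they collide, which is the prediction mentioned above.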

So, how does one assess models? The best models are ones where you put a little in and get a lot out. A bad model gets more complex because you have to keep adding new considerations until you’re putting in more than you get out.

Models used to be mechanistic: you push A and B moves; there’s a small chain of inference. In another form of modeling, about 300 years old, we use equations to model systems. That’s a much more abstract form of modeling. But some of the equations can be very hard. E.g., you can explain snowflake generation via partial differential equations, but they’re very hard to solve. But NKS adds a new type of model: a simple set of rules producing complex phenomena.

It’s not proper to object that snowflakes aren’t made of CA cells because CA is a model of snowflakes. We don’t think that the earth is solving a differential equation when it moves through space. Differential equations are an abstraction. Similarly, a CA model of a snowflake is an abstract representation of how snowflakes work.

Randomness in models

We see examples of randomness in the natural world, e.g., fluid motion. Where does the randomness come from? There are three possible origins:

1. Classically, randomness comes from external perturbation, e.g., a boat being kicked around by the randomness of the ocean’s surface. Randomness of this sort: Brownian motion and some electronic noise. The randomness you get out isn’t part of the system you’re studying.

2. Chaos theory points to systems in which the initial conditions are random, e.g., a coin toss or the spin of a wheel. The three-body problem in gravity was one of the first cases of this studied: a change in a billionth of a degree results in hugely different results. There’s some effect from the outside that causes the initial conditions to be random. The randomness doesn’t come from the system we’re modeling.

3. You can get randomness without going outside the system. E.g., Rule 110 is intrinsically random.
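Here’s a minimal sketch of that third kind of randomness: a deterministic elementary CA run from a single black cell, with no outside input at all, whose center column can be read off as a bit stream. (Rule 30 is the rule most often run this way; the function below just takes a rule number as a parameter.)

```python
# Minimal sketch: a deterministic rule, a single-cell initial condition,
# no external noise -- yet the center column looks random.
def center_column(rule=30, steps=64):
    # Rule table: 3-bit neighborhood value -> new cell value.
    table = {n: (rule >> n) & 1 for n in range(8)}
    width = 2 * steps + 3
    row = [0] * width
    row[width // 2] = 1                      # a single black cell
    bits = []
    for _ in range(steps):
        bits.append(row[width // 2])
        row = [0] + [table[4 * row[i - 1] + 2 * row[i] + row[i + 1]]
                     for i in range(1, width - 1)] + [0]
    return bits

print("".join(str(b) for b in center_column(30)))
```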

Constraints

In traditional mathematical models, one can have an equation that is a constraint on the system that solves the equation. A typical example is a boundary problem. [Suddenly over my head. Prepare for vagueness.] Constraint-based models don’t tell you how to fill in the constraints to solve the problem. His example of a constraint-based model is the question of what the closest packing of circles is. This is very hard to solve if the circles are different sizes. [I've lost the point. Damn not-knowing-math-iness!]

Biology wants to know where the complexity of organisms comes from. Initially, Wolfram assumed it was a different class of phenomenon because biological systems adapt and change over time. But he’s concluded that adaptation and evolution aren’t the issue. What forms of explanation should we give for biological systems? Will simple rules do, or do we need much more complicated rules? Maybe biology is in fact sampling really simple programs. So, does the complexity come as a response to the visual system of predators [he seems to be thinking about patterns of fur] or does it come from simple programs? We think it has to have a complex explanation (evolution) because it’s complex. Nah, says Wolfram. [Evolutionists don't necessarily think that pigmentation patterns have been "carefully tuned" by natural selection. The question is whether we can get past pigmentation and get to flight or sight or kidneys.] Natural selection is good at progressively shortening or lengthening bones, but it’s not good at creating complicated things. Natural selection actually simplifies things rather than making them more complicated. We see this in technology, where a form of natural selection makes stuff simpler, e.g., FedEx bills have gotten simpler. Natural selection operates well where you can make small changes and not have them be disastrous, just as engineering does.

His model explains how sea shells are formed. If you exclude ridiculous shapes — e.g., ones that leave no room for the animal — all of the ones his model draws are found in nature. So, you don’t need natural selection to explain them. Likewise for the shapes of leaves.

How do you find a model? If you think it’s a CA sort of thing, you can just match ‘em up. But it can be really hard to go from a natural phenomenon to its model. It is an unsolvable computational problem in the general case. So what do you do in practice? First, you can use Wolfram’s Atlas of Simple Programs and see if you recognize one. [The mug shot approach.]

The other thing you can do is search through all possible models of some particular kind. This seems crazy because if you were to search all possible equations, you’d never find it. But because you’re looking for simple rules, it can work.
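Here’s a toy version of that kind of search (my own sketch; the “observed” pattern is secretly generated by another simple rule, standing in for data from nature): enumerate all 256 elementary CA rules and keep the ones that reproduce it exactly.

```python
# Minimal sketch of searching the space of models: there are only 256
# elementary CA rules, so we can try every one against an observed pattern.
def run_ca(rule, steps=20, width=81):
    table = {n: (rule >> n) & 1 for n in range(8)}
    row = [0] * width
    row[width // 2] = 1
    history = [tuple(row)]
    for _ in range(steps):
        row = [table[4 * row[i - 1] + 2 * row[i] + row[(i + 1) % width]]
               for i in range(width)]
        history.append(tuple(row))
    return history

# Hypothetical "observed" data: here it secretly comes from rule 90.
observed = run_ca(90)
matches = [r for r in range(256) if run_ca(r) == observed]
print("rules that reproduce the observed pattern:", matches)
```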

3 Comments »

[NKS] Analyzing Simple Programs

I’m in the beginners’ section on how to do cellular automata. The instructor (I came in 10 seconds late and missed his name) is explaining how to use a particular piece of software.

Q: You suggest we look for “interesting stuff” when playing around with CA. But what do you mean by “interesting”?

A: Everyone has a different viewpoint. But there are basic things like classifying them [according to Wolfram's 4 classes of CA, the 4th being complex/random]. Or you might notice that in Rule 30 big white triangles come at particular intervals. You can ask about the distribution of these triangles and plot according to the size of the triangle and where it shows up. You might understand more about how Rule 30 works. These localized structures are incredibly interesting.

Q: Isn’t there a problem with relying on perception to notice randomness?

A: Yes, perception isn’t reliable. That’s why this is non-trivial.

Be the first to comment »

[NKS] Wolfram: How it works

Wolfram is explaining the argument of his book. My blogging this is not going to be more helpful than reading more considered expositions, including Wolfram’s own. So, I plan on only jotting down some stray notes and thoughts.

Trivia: A cellular automaton (CA) gets its number by converting the pattern of bits that expresses the rule of the CA into a decimal number.
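For example, under the standard encoding the eight three-cell neighborhoods, ordered 111 down to 000, each map to a new cell value; reading those eight output bits as a binary number gives the rule number. A quick sketch for Rule 110 (my own illustration, not from the book):

```python
# How an elementary CA gets its number, shown for Rule 110: each of the
# eight three-cell neighborhoods (111 down to 000) maps to a new cell value,
# and those eight output bits read as a binary number give the rule number.
rule = 110
outputs = []
for n in range(7, -1, -1):
    new_cell = (rule >> n) & 1
    outputs.append(str(new_cell))
    print(f"{n:03b} -> {new_cell}")
print("outputs read as binary:", "".join(outputs), "=", int("".join(outputs), 2))
```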

Wolfram shows how simple rules can lead to great complexity in simple CA. He asks how typical this is in the computational world. Is it only CA that generate complexity from simple rules? CA seem special (everything updates at the same time, there are local rules, etc.) so is this generalizable? So Wolfram systematically removes each of the CA’s special features. E.g., what happens if you don’t update all the cells at the same time? So he looks at some variations (substitution systems, replacement systems) and finds that they too can generate complexity from simple rules.

Tag systems: Look at the first element of a string, chop it off, and add different strings at the end based on the color of the first element. If you chop off one element, you get nested patterns. If you chop off two or more, you get much more complex behavior. In cyclic tag systems, at alternating steps you add cells or not, depending on the step number. [Aha! In the book, tag systems turn out to be crucial in explaining the universality of Rule 110, and I didn't understand them until now.]
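Here’s a minimal sketch of a tag system along those lines; the rules and initial string are invented for illustration, not taken from the book:

```python
# Minimal sketch of a tag system: remove the first k elements; what gets
# appended at the end depends on the color of the element that was removed.
# The rules and initial string below are invented for illustration only.
def tag_system(initial, rules, k, steps):
    seq = initial
    history = [seq]
    for _ in range(steps):
        if len(seq) < k:
            break                      # nothing left to chop: the system halts
        first = seq[0]
        seq = seq[k:] + rules[first]
        history.append(seq)
    return history

rules = {"0": "01", "1": "110"}        # what to append after removing k cells
for row in tag_system("11", rules, k=2, steps=10):
    print(row)
```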

[It's 11:05 and he's lost me. Too much math.]

He’s finding complex patterns in multidimensional systems and networked systems evolving through time, all part of his argument that simple-makes-complex isn’t an artifact of CA.

Nor is it dependent on the complexity of initial conditions. [This I believe is part of Wolfram's radicalization of formalism: initial conditions are a type of contingency and having to rely on initial conditions would mean that the system isn't entirely formal and mathematical.]

[He's talking about constraints. I'm lost again. But he bounces up a level and says the point is that you can force constraints to yield complexity. This apparently has application to the nature of crystals.]

He’s finding the same simplicity-yields-complexity in arithmetic and math. Part of his point is that it’s a property not merely of an artificial construct like a CA. But I’m not sure if he’s re-pounding the same nail or whether he’s finding important insights within each area he’s discussing.

Now he’s talking about CA in which cells aren’t only black or white but could be any shade of gray. Guess what? Simple rules yield complexity. And it’s true of differential equations also.

He’s summarizing: Each of these types of systems comes up over and over again, so it’s worth understanding something about how they work [A plea for a science of computation]. We’ve seen over and over again that simple rules can bring about great complexity. As soon as you pass a very low threshold of complexity in the initial rules, you get all sorts of wild results.

Next topic: Analyzing what simple programs do. What can you do to analyze a hugely complex CA-generated pattern? The end of the story is that there isn’t a way to crack something like that. But what does “crack” mean? Can we go from the sequence back to the rule and initial condition that generated it? Nope. When we say something has regularities, we mean we can summarize it more briefly than by just repeating the sequence. [E.g., "It's a checkerboard" is a lot shorter than listing all 64 squares.] Can we compress some of the complex CA? Nah. Run-length encoding doesn’t work. Block-based compression doesn’t work. Dictionary-based? Nope.
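Here’s a rough illustration of that test (my own sketch; zlib’s dictionary-based compressor stands in for the compressors he mentions, and the sizes are arbitrary): a repetitive checkerboard compresses to almost nothing, while the pattern generated by a rule like Rule 30 compresses far less.

```python
# Rough sketch of the compression test: pack each pattern into bytes and see
# how much zlib (a dictionary-based compressor) can shrink it.
import zlib

def pack(bits):
    # Pack a flat list of 0/1 cells into bytes, 8 cells per byte.
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits) - 7, 8))

def ca_pattern(rule, steps=512, width=512):
    table = {n: (rule >> n) & 1 for n in range(8)}
    row = [0] * width
    row[width // 2] = 1
    cells = []
    for _ in range(steps):
        cells.extend(row)
        row = [table[4 * row[i - 1] + 2 * row[i] + row[(i + 1) % width]]
               for i in range(width)]
    return cells

checkerboard = [(i + j) % 2 for i in range(512) for j in range(512)]
for name, cells in [("checkerboard", checkerboard), ("rule 30", ca_pattern(30))]:
    data = pack(cells)
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.0%} of original size")
```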

We say something is random if we can’t summarize it. When we call something complex, not only can’t we find a unique summary but we can’t find a summary of the properties we actually care about.

Wolfram is endorsing simply looking at patterns. With our eyes. Our visual systems are quite good at discerning patterns. How does our visual system do that? What kind of things will our visual system be able to disentangle? It’s very good at noticing repetition. It’s ok at noticing nesting where there are big blocks of repeating color.

Application to cryptography.

Summary of this part of the talk: One’s assumption when confronted with something like Rule 30 is to say that there’s got to be some pattern of regularity hidden there. But he’s tried lots of ways of “cracking” it and none work.

Summary of the talk overall: These are the sorts of issues that pure NKS talks about.

After lunch he’s going to talk about more technical approaches. Bad news for the likes of me.

Be the first to comment »

Block that metaphor

From RottenTomatoes‘ aggregation of movie reviews:

“Watching Charlie’s Angels: Full Throttle is like being trapped inside a pinball machine operated by a 6-year-old having a sugar rush.”
— Kirk Honeycutt, HOLLYWOOD REPORTER

“Watching Full Throttle is like being pummeled for two hours with a feather duster. It leaves no scars, but you do feel the pain.”
— Peter Travers, ROLLING STONE

“Charlie’s Angels: Full Throttle is like eating a bowl of Honeycomb drenched in Red Bull — a dizzying mouthful of unabashed silliness that leads to an equally precipitous crash once the buzz wears off after the film’s first hour.”
— Elvis Mitchell, NEW YORK TIMES

["Block that metaphor" either is a blatant violation of the New Yorker's copyright or is a loving homage. We'll leave it to the courts to decide.]

1 Comment »

[NKS] Why I Care about Wolfram

I am not qualified to have an opinion about Wolfram’s A New Kind of Science. I’ve read it 1.75 times just to confirm this fact. I can’t evaluate the claims about how much of what he says is new, but I also sort of don’t care, except in a gossip-y sort of way. So why am I interested?

First, since I was in high school I’ve been bothered by the notion of laws. It’s a metaphor that scientists immediately reject: there’s no governing body, there’s no jail time for miscreants. So, then it’s just regularities and correlations. But that’s not much of an explanation. Wolfram tries to explain phenomena by asking what’s the simplest computer program that could have generated it. I don’t know that that is any more of an explanation, but it’s at least a radically different type of explanation. One indication of its radicalness: some phenomena cannot be predicted by solving an equation but only by running the program.

I like the fact that his approach holds hope (but I can’t evaluate how much) for understanding complex phenomena. Traditional science often punks out there. I asked a physicist friend of mine about this, a guy who was in grad school with Wolfram btw, and he said that the structure of heavy atoms is too complex to be managed — so far — by our equations. So, here’s exactly the sort of problem that Kuhn pointed to, a limit against which the current paradigm bumps. And it’s not in some marginal area. Complexity is clearly hugely important, from snowflakes to brains to galaxies. Wolfram may have (I can’t tell) made progress in understanding how complexity can be generated from very simple rules.

I’m also interested in watching the scientific community’s reaction to it. Will it embrace this, reject it, or take a third path? Likewise, will there be sufficient application of Wolfram’s ideas to recalcitrant existing problems to establish it as a workable paradigm? Fascinating.

And I’m very interested in his metaphysics because it seems to be the apotheosis of a modern trend: the triumph of formalism. The discussion in the comments section of my blog recently about Kurzweil and Searle typifies the deep division in our thought. Many of us find it obvious that the brain is hardware running software, and thus we will be able to move the software into another medium and run it losslessly, just like we can move our copy of Sim City and our saved games from one computer to another. But this seems to me to be so fundamentally wrong, for reasons I won’t discuss again here. Wolfram takes the brain-as-software idea to its ultimate extension: the universe is software. His book attempts to derive space, time and the fundamental particles of physics from purely formal considerations. Wow.

He’s also an excellent writer and a truly interesting character. I’ve had the opportunity to spend a little time with him, and I like him.

Although I’m frustrated by my inability to follow his argument past page 500 or so, I also think there’s a certain benefit to being forced into agnosticism about his content, for the questions that circle “Is he right?” are fascinating on their own.

10 Comments »

[NKS] Wolfram: The NKS Enterprise

He’s going to talk about three components of the New Kind of Science (NKS) enterprise: Pure NKS, applied NKS and the NKS way of thinking. [I'm live-blogging and don't have time for quote marks.]

The intellectual core of pure NKS means asking the abstract question: What sorts of simple programs are there and what do they do? It’s an independent area of intellectual inquiry that one day will be viewed as a discipline like physics and mathematics. Its topic: the computational world.

The pure science might not have any applications, but what’s been driving it is the hope that it does. Applied NKS takes what we learn from pure NKS and uses it to model elements of nature, and even human organizations.

The NKS way of thinking extends this to philosophy and art. NKS may not provide a model for human organization but it may provide a way of thinking about human organization. Also education.

“In order for NKS to realize its potential, it’s absolutely crucial that the pure NKS be properly developed.” A danger is that the pure core gets left behind because of the excitement about applications.

The core has a simple story: One day it should be like physics and math, with its own questions and methods, recognized as its own thing that gets to define its own boundaries.

It’s also important that NKS be embedded in other sciences. That’s the only way applications will happen. So, while Pure NKS should be its own field, the applications should not spring from a separate and distinct NKS.

You’d think that sciences are defined by their subject matter, e.g., biology is defined as the study of all living things. In fact, they’re defined by their methodologies. The questions they ask are the ones answerable by their methodologies. [Very Kuhn-ian. But it also fits with Wolfram's attempt to explain the universe purely through formal terms; subject matter doesn't count any more than a mathematician cares whether you're adding apples or oranges.] NKS needs to be implanted through real practitioners, although the questions NKS asks are different. NKS enables some tough questions in these fields to be answered. [Pure Kuhn: new paradigms grow in part in response to anomalies in existing paradigms. Wolfram knows his Kuhn.] In this way NKS is similar to mathematics.

Is there a general applications layer between the pure NKS and applied NKS? Yes, sort of. But it’d be a mistake to focus on that layer instead of focusing on particular application areas. Unlike philosophy, when NKS goes into an application area, it has lots of clear things to say, whereas philosophy has trouble moving from the pure to the applied with any clarity.

How do you confirm the rightness of NKS? Pure NKS simply wants to explore the computational world and see what’s out there. Whether that’s worth doing can’t be judged by the applications. The principle of computational equivalence, however, is an exception within pure NKS because there are predictions that can be made and it is falsifiable, but Wolfram’s not going to talk about this until tomorrow. [Damn! I'm only here for the one day!] But in general, pure NKS is like math in terms of its justification. With applied NKS, it works like physics and other sciences: you see if you’re explaining stuff. By the way, when NKS is applied it generally works on more complex problems than the traditional sciences can handle. [He means "complex" in the complexity theory way, e.g., turbulence.]

Are there technologies that arise from applied NKS? Sure, particularly and obviously around computer technology.

History of NKS So Far

It was satisfying that the initial print run of 50,000 of A New Kind of Science sold out in one day. He’s planted the ideas by writing the book, lecturing, etc. But beyond that, how do ideas get introduced into the world? Let’s look at previous paradigm shifts.

There are typical responses: It’s wrong, it’s been done before, and it doesn’t make any sense.

But it’s hard to quickly say that NKS is wrong because there’s a lot in it. “It’s been done before” is a denial based on the need to connect what one is doing with what’s been done before. And it seems like it doesn’t make sense because it’s different from what’s gone before. [Wolfram's reply to the "It's been done before" objection was weak, I thought. Better to say that it builds on what's been done but is new in important ways.]

The reaction against NKS shows that it’s being taken seriously. I had viewed science as a less emotionally-involved enterprise … [audience laughs knowingly]

He’s been flooded with emails, etc. His site’s guestbook shows that the visitors come first from the physical sciences and then from mathematics. The most frequent request is for software that will let people do their own experiments. NKS Explorer enables this. [Does it require Mathematica?]

About 50 papers have come out that refer to NKS. They almost uniformly cover the chapters of the book.

He’s handed out a booklet of “Open Problems and Projects.” Most have to do with pure NKS issues, but he’s continuing to work on this and will post on his site open questions in the applications area as well.

He’s been building an “atlas” of what’s out there in the computational world. It’s available on his website. The “Wolfram Atlas” is a repository of information. It enables people interested in pure NKS to meet, sort of an experiment in Open Source science. There will probably soon be an online forum for discussing NKS. They haven’t done that yet because they wanted to wait for the furor to die down so that a reasonable discussion can be held.

Application areas he’s particularly interested in:

He wants there to be survey articles in various fields to show the activity around NKS. He’s very interested in how the ideas of NKS can be used in schools. He’d like somehow to incubate 50 pure NKS professors in order to seed academia; he’s running a summer school as a start. He’s found the best reaction among those at the beginning of their careers and those who are near the end; those in the middle have too much invested in what they’re currently doing. He believes that there will be some dramatic applications that will help people understand what NKS is about.

3 Comments »

Why I’m epigrammatic today

I’m running out to an all-day conference about and by Wolfram. I don’t expect there to be wifi so I’m off-line all day…

LATER: Ok, I was wrong. I’m here with about 250 others and there’s wifi. Wolfram is just welcoming us…

Be the first to comment »

Why matters matters: Short version

Brains aren’t hardware running software any more than pool tables are.

Be the first to comment »

June 26, 2003

3 Justices

So three justices of the United States Supreme Court think that it’s ok for the government to tell us what type of consensual sex we can have with whom.

Scary.

9 Comments »

Paynter on Saltire

Frank Paynter subjects Steve MacLaughlin – so funny he has a laugh in the middle of his name – to his long-form interview. I haven’t had time to read it yet because I’m running out to a lunch thing, but Frank’s got a way with the Q&A.

2 Comments »
