Joho the Blog » 2003 » June

June 30, 2003

Duel Booting

As my PC seems to be fairly stable – only one crash in 24 hours! – I’m beginning to think about trying out Linux. I’ve bought a copy of RedHat and I’m beginning to clear out one of the hard drives in my machine. But I’m frankly frightened about dual-booting XP and Linux given the fragility of my machine. So I’d like some advice.

Here’s my situation. I have a fresh install of XP on my 120G boot drive. I will have lots of room for Linux on a 60G drive currently formatted as NTFS. I am ok with scraping everything off of that drive and repartitioning and reformatting.

I want to start slowly with Linux. I expect to spend most of my time in Windows for now. That may switch, depending on how things go with Linux. I will continue to have a hell of a lot of data in Windows formats. I also expect to need to boot into XP for some Windows-only apps, including games.

I am wary about monkeying with boot sectors. I will be really really pissed – at no one in particular – if in the course of installing Linux, I end up having to reinstall XP and all of its apps. But I also want a transition path; if (for example) I start off by booting Linux off a floppy, I’d like to be able to boot off a hard drive once I’m feeling more secure. But boot decisions seem to be forever.

So, does anyone have any links that explain it all to me? Or tales of woe and rejoicing?

As ever, thanks.

23 Comments »

Semantic TV

The always read-worthy Scott Kirsner writes in the Boston Globe today (note: Globe links rot) about Gotuit Media, a company that “indexes” video. Indexing in this case means that it divides video content into chunks tagged by their content so that you can choose to watch “just the highlights, or the ‘best hits’ or the top plays by Tom Brady, or even a 20-minute Reader’s Digest condensed version” of a football game or any other video. It takes software and humans to tag the video, an expensive proposition but perhaps worthwhile to cable providers and others who will sell the smarter content to the likes of us. (Gotuit also has a branch doing TiVo for radio.)

I wonder how much of this could be done right in a TiVo box. I don’t know what metadata is embedded in the video stream, but Pinnacle Studio, among others, does a good job of figuring out when scenes have changed in a digitized video; I assume it looks for a significant change in the pattern of pixels from one frame to another. If TiVo increased its processing power, it too could offer scene selection. Speech recognition would let it find all the plays in a game where a particular player is mentioned. If it had access to closed captioning, it could do some text indexing as well. And if it had some high-end visual pattern recognition software, it could do the thing that traditionally has driven entertainment technologies: it could automatically find the nude scenes in any movie.
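
For what it’s worth, here is roughly the kind of frame-differencing I have in mind. This is a toy sketch of my own in Python (the 0.3 threshold is an arbitrary guess), not how Pinnacle Studio or TiVo actually does it:

    import numpy as np

    def find_scene_cuts(frames, threshold=0.3):
        """Flag frames that differ sharply from the frame before them.

        frames: iterable of grayscale frames as 2-D numpy arrays (values 0-255).
        threshold: fraction of the maximum possible change that counts as a cut.
        """
        cuts = []
        prev = None
        for i, frame in enumerate(frames):
            if prev is not None:
                # Mean absolute pixel difference, normalized to 0..1.
                change = np.mean(np.abs(frame.astype(float) - prev.astype(float))) / 255.0
                if change > threshold:
                    cuts.append(i)
            prev = frame
        return cuts

A real product would presumably smooth the signal over several frames and tune the threshold, but the basic idea is just “a lot of pixels changed at once.”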


Scott also reports on a lawsuit brought by Pause Technology charging TiVo with infringing on a 1995 patent held by Jim Logan and a partner.

Pleeeease don’t let them take my TiVo away!

1 Comment »

Fear, Dread and Wifi

We all have watched the Arc of Fame:

1. Buzz among the cognoscenti
2. Adoration by the masses
3. Thrashing by the media
4. Blasé disregard by everyone
5. Retro condescension by the idle smirky

Judging by a pair of articles in the Boston Globe today, wifi has reached stage 3 without ever making it to stage 2.

At the top of the Technology section today, Hiawatha Bray writes a fear-mongering piece about the vulnerability of home networks, with an emphasis on the dangers of wifi. Vandals are out to trash you! Thieves can’t wait to get their hands on those photos of your kid’s birthday party! The second half of the article is useful (but not detailed enough) advice on how to lock down your network.

Immediately below Bray’s article is one by Peter J. Howe, subtitled “Some analysts wonder whether WiFi craze is a bubble waiting to burst.” I know from sad experience that writers don’t write their own headlines or subheads, but in this case it’s a good summary of the article. Although (says Howe) the stats all indicate a sector taking off, Lars Godell of Forrester Research is quoted as saying that “much of the money … is being wasted” because not enough people are going to be willing to pay for the service.

Howe’s article ends by suggesting that wifi growth may be fueled by “companies supporting free access to draw publicity and foot traffic.” He does not mention neighborhood networks. When I log onto my wifi network, I have three networks to choose from. One is my next door neighbor’s and the other emanates from the house across the street.

Too bad there wasn’t an accompanying article about how to build your own neighborhood network, including how to lock down your computer as you open up your network.

15 Comments »

June 29, 2003

PC Is working…

…knock wood, throw salt over my shoulder, kiss a leprechaun, pet a cobra, blow out the candles in one breath, pour out some wine, vote Republican.

It seems to have been a conflict between my Asus P4P800 Deluxe motherboard and Kingston HyperX memory. The PC store (ICG Computer in Brookline) put in a lot of hours tracking this down, and now that they’ve switched out the HyperX for whatever is the next best type, the system seems to be stable. At least it’s been up for almost 24 hours. (And now, of course, I just jinxed myself.)

Let me add some keywords in case someone with the same problem is searching for information: Crash. No BSOD. Cold Boot. RAM. Hyperthread. Flashed the BIOS. Pulled out cards. Swapped graphics cards. Reinstalled XP. Reformatted. Repartitioned. Many times. Tried everything. Not heat related. Haunted. Cursed. Get a Mac. !@#$%!-ing computers!

8 Comments »

Pundithood

I’m a pundit! Now all I have to do is spot a yellow crested grebe and my Life List will be done!

Unfortunately, it’s a pretty biting and funny satire — the Internet Pundit Fantasy Camp — that accords me the accolade.

2 Comments »

See you at Pop!Tech

It was a hard choice, but I’ve decided to go to Pop!Tech again this year. It was hard precisely because Pop!Tech is such a good conference, but it conflicts with DigitalID World, which was terrific last year and looks like it’ll be at least as good this year. I really want to go to both, but physics is making that impossible. Damn physics.

I finally decided on Pop!Tech because its territories are more unfamiliar to me. But I will sorely miss the friends and ideas at DigID World. I wholeheartedly recommend both events.

Be the first to comment »

June 28, 2003

[NKS] Starting point

If you’re just visiting this blog for the first time in the past couple of days and are faced with the endless scroll of live-bloggage of the Wolfram conference, here’s a place to start. The actual blog coverage begins in the entry prior to that one.

Be the first to comment »

[NKS] Jason Cawley: Philosophical Implications

[Jason was a researcher on historical and philosophical topics for the NKS book.]

He starts by talking about what NKS says about Free Will. Wolfram isn’t claiming that free will is impossible or natural. It’s a much more limited claim than that, directed against the spontaneity position and the behaviorist position. Spontaneity: Will is uncaused. Behaviorism: There’s a simple scheme for representing what wills do. Both too easily conflate being free and being unpredictable. Wolfram’s found a wedge between determinism and predictability. Wills are more unpredictable than behaviorists think. But will’s unpredictability doesn’t mean that it’s undetermined. Unpredictability is built into the system because of the system’s complexity. One should read that section of NKS as arguing against those two positions more than as establishing a positive doctrine of free will.

As a person, Wolfram is committed to determinism. But he doesn’t think that he’s proven it or that it follows from NKS. He thinks NKS makes determinism more plausible but doesn’t prove it.

Q: [Me] If we’re computationally equivalent to Rule 110 and all other such systems, then what distinguishes intelligent systems?

A: Not their cleverness, but there are other factors.

Q: Then how can you talk about FW without talking about the factors specific to systems that have will?

A: Wolfram is making a more limited claim. He’s talking about one piece of evidence — unpredictability — that’s been used by various FW theories.

Q: Shouldn’t he be talking about subjective vs. objective?

A: Complexity is independent of whether you’re inside or outside the system.

Q: In a note, he mentions Augustine. Where’s God in this?

A: [I suspect that Jason wrote the note] Religions are heterogeneous when it comes to the FW question. He talks about Augustine because Augustine tried to put together FW and predestination. Augustine distinguishes between what’s knowable to us and to God. This lines up with what’s subject to computation; you get the same sort of seems-free-to-us vs. seems-not-free-to-God viewpoint.

Just as Wolfram has presented open problems in NKS, Jason has open problems in philosophy for us, particularly for philosophy of science.

  • What it means to stick your neck out, Popper-ianly, when dealing with computability. You’re making falsifiable predictions about something that was logically derived. With NKS, you’re experimenting on what used to be the model. E.g., the Principle of Computational Equivalence claims that Class III rules are universal, but it could be wrong; it’s falsifiable.
  • The definition of randomness. It’d be good to have a definition that captures the ordinary meaning but also works in the sciences.
  • Distinguish prior determinism, epistemology and …
  • When you look at where a system is migrating to (an attractor or a constraint) it’s different than looking at how it evolves. Discuss amongst yourselves.
  • Distinguish rules, models and theories. A rule isn’t a model because it’s abstract. Models have to correspond to something in reality. When can you make predictions? When can you just see what happens? Can you have a theory that doesn’t make predictions?

Q: [Me] What about the scope of NKS? Is it an ontology? Doesn’t it seem tied to the happenstance that we have computers and thus is simply a way of seeing the world, especially since Wolfram rightly says that a model always leaves something out and is something of a political decision?

A: He certainly likes to make heroic generalizations. Theoretical physicists like to stick their necks out and let others show them wrong. He has an ontology that says you can get everything as an emergent property of space. That’s his intuition because he gets so much from what’s simple. He has a philosophy of pure form. How can he know? He doesn’t much care.

Q: [Me] But doesn’t this prove that McLuhan was right and we see our world through our technology?

A: Sure, but when new tech comes along, we don’t throw out the previous insights. We still incorporate what we see through telescopes. [Yeah, but revolutions do occur that re-do the fundaments.]

Q: Does he avoid using the term “emergence” because he thinks simplicity and complexity are the same? Because he thinks emergence implies the properties aren’t there at the beginning?

A: He thinks “emergence” is a buzz word. And it’s present in the system from the beginning of its complexity.

Q: When you think about philosophers, which one strikes you as being Wolframian?

A: Hmm. Plato, because of his focus on forms. But Plato thought the forms had to be less detailed and specific than their instances, whereas Wolfram is all about seeing the complexity of forms. [I think he's more like Hegel in the Logic, deriving everything from the simplest of starting points.]


I left the conference after this session because I have some family stuff and because the rest is almost all too technical for me.

Be the first to comment »

[NKS] Wolfram: Computational Equivalence

[Continued live-blogging of the Wolfram conference in Waltham, MA.]

When looking at Rule 110, he wondered what happens if you consider everything that happens as a computation. He shows a Cellular Automaton [CA] that generates primes and another that does powers of 2. (The white stripes fall on the primes or the powers.)

He discovered the principle of Computational Equivalence (PCE) by looking at lots of CAs. The principle: If you look at a process, the process will correspond to a computation of equivalent sophistication. The computer revolution has taught us that it’s possible to build a single, universal machine that will do any computation. He’s taken that idea seriously and applied it to the natural sciences.

The PCE says that except in cases where the system is doing something really simple, it’s most likely the case that the system is doing a computation of equivalent sophistication. That principle has many implications and predictions. E.g., it predicts that a system with simple rules should be capable of computations as sophisticated as those of any other system. I.e., Rule 110 should be a universal computer. Rule 110 is really, really random looking, but it has some “local structures,” i.e., there are identifiable structures (lines, triangles) in the swirling mist. These might represent useful information. You might have thought that to do universal computation you need very complex systems with very complex rules (e.g., lots of logic gates), but instead you can do it with Rule 110, which arises from extremely simple rules.
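
[For the curious, here is a toy Python version of an elementary CA such as Rule 110, run from a single black cell. This is my own sketch, not anything shown at the conference; Wolfram's own experiments are done in Mathematica.]

    def run_elementary_ca(rule=110, width=80, steps=40):
        """Run an elementary cellular automaton from a single black cell.

        Each cell's next state depends only on itself and its two neighbors;
        the 8 possible neighborhoods index into the binary digits of `rule`.
        """
        lookup = [(rule >> n) & 1 for n in range(8)]  # neighborhood value -> new state
        row = [0] * width
        row[width // 2] = 1                           # the simplest initial condition
        history = [row]
        for _ in range(steps):
            row = [lookup[(row[(i - 1) % width] << 2) | (row[i] << 1) | row[(i + 1) % width]]
                   for i in range(width)]
            history.append(row)
        return history

    for row in run_elementary_ca():
        print("".join("#" if cell else " " for cell in row))

[Even from that one-cell start, Rule 110 quickly builds the mix of regular and irregular structure he keeps pointing to. And, per the irreducibility point below, the only way this little program knows what row N looks like is to compute rows 1 through N-1 first.]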

This has implications. For natural science it means that among systems in nature one expects to see many systems capable of universal computation.

The PCE wraps together several things. First, it means there’s an upper limit on the computations that can be done by a system. You can’t keep adding complex rules to get more complex computations; once you get past the threshold at which a system is a universal computer, you can’t get any further. Adding registers, for example, won’t increase the sophistication of the computation, although it obviously might speed up the actual calculations. (This is like Church’s thesis, he says.) But the PCE gets its teeth by saying that not only is there an upper limit, but this limit is achieved by many systems.

To do a computation, perhaps you have to feed Rule 110 complex inputs. But the PCE says that that’s not necessary. Rule 110, even when the initial conditions are simple, yields computations of enormous complexity. [I've always been fuzzy on this point. Still am. Where in Rule 110 is the computing of Pi or the lighting effects for Doom III?]

PCE is in one sense a law of nature [In the book he leaves out the "in one sense."] In another it’s a law of computation. One must ask “How could this principle be wrong?” As a law of nature, it could disagree with reality. As an abstract fact, it might yield false deductions. He thinks that it will come to seem as obvious as the Second Law of Thermodynamics. (He pauses to suggest that in fact the Second Law isn’t obvious.)

So how might it be wrong? Systems might have too much or too little computational sophistication. Models in the past haven’t noticed or cared about this. If constraint-based systems operated as well in nature as initial value problems, then the PCE couldn’t be right. [I lost his point.] Or, in physics, if quantities are continuous, then you could do…[and I lost the point again...he's talking very fast and I am skating on very thin ice]. But Wolfram believes the universe is discrete, not continuous, so that objection doesn’t hold.

Human thinking is supposedly more sophisticated than what computers can do. Wolfram disagrees.

But the PCE could fail at the low end, i.e., maybe Rule 30 isn’t a universal computer. Maybe there’s some regularity in Rule 110. We usually think of repetition and nesting as regularity but perhaps there’s another form of regularity. And maybe selects for rules that are complex but have that unexpected form of regularity.

Wolfram thinks the principle is true but his point is that there are ways it could fail. [And thus the PCE is a scientific statement. He doesn't use the word "falsifiable" but he's clearly thinking it.]

Systems like 110 seem complex because they’re doing computations as complex as the ones we do when we’re trying to make sense of them.

Computational Irreducibility (CI) argues against predictability. If a system is complex, we can’t predict where it will be in a thousand steps except by running the thousand steps. That is, we can’t figure out the outcome with less effort than the system itself expends. [This is one of the ideas that first attracted me to Wolfram's thought.] Computational reducibility has been at the heart of many of the sciences: we can tell exactly where the moons of Jupiter will be in the year 3005 without having to wait until 3005. But you can’t predict what step 3005 of Rule 110 will be without going through all 3005 steps.

CI and the PCE mean that it simply won’t be possible to find exact solutions (mathematical functions) for some systems.

(In response to a question): The Scott Aaronson review was disappointing because Wolfram spent time with him trying to correct his misconstructions, but it got published anyway.

Take a question like whether there’s extraterrestrial life. We recognize earth life because it shares ancestry. But is there an abstract definition of life? He can’t find one. The same is probably even more true of definitions of intelligence. There’s a threshold of computational sophistication, but beyond that there isn’t an essence of intelligence. Lots of systems have crossed the threshold to universal computation. There are lots of fun things to talk about in distinguishing intelligent systems from merely universally computational ones, involving concepts such as meaning and intention, but unfortunately we’re out of time. [!]

4 Comments »

June 27, 2003

[NKS] Biological systems

It’s a panel on how NKS applies to medicine.

First: “Challenges to Conceptualizing Biological Systems: Wobble, redundancy and the unpredictable,” by Elaine Bearer, Brown U. We have conceptual problems incorporating wobble, redundancy and the unpredictable.

The evolutionary hypothesis says that biological systems arise through the process of selection, which ignores details. Selection operates on the results, so there may be multiple ways to get to the same outcome. There’s a ton of evidence that supports the idea that it’s the outcome that counts, including wobble in DNA-protein interactions. Also, many functions are redundant. Wobble means that more than one triplet can specify the same amino acid; the third nucleotide can vary. E.g., ATT, ATC, and ATA all code for isoleucine. There’s no 1:1 relationship between the nucleotide code and the protein. Also, transcription factor-DNA interactions have no code. [No idea what that last sentence means.] There are many combinations that will work.
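
[A toy illustration of that many-to-one mapping, using the standard genetic code. A Python sketch of my own, not something from Bearer's talk.]

    # The genetic code is degenerate: several DNA triplets specify one amino
    # acid, often differing only in the third ("wobble") position.
    CODON_TABLE = {
        "ATT": "Ile", "ATC": "Ile", "ATA": "Ile",                # isoleucine
        "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # glycine
        "ATG": "Met",                                            # methionine
    }

    def translate(dna):
        """Read a DNA string three bases at a time and return the amino acids."""
        return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3)]

    # Two different DNA sequences, same protein fragment: the outcome is what counts.
    print(translate("ATTGGTATG"))   # ['Ile', 'Gly', 'Met']
    print(translate("ATAGGGATG"))   # ['Ile', 'Gly', 'Met']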

There’s also redundancy: more than one protein or copy of a gene can have the same outcome. E.g., the ability to change cell positions is crucial, and there are over 50 proteins that take a cell’s cytoskeleton structure apart. She shows an amazing video of a platelet taking apart its cytoskeleton and putting it together again like an earthworks mound around the center of the cell in order to form a clot. She’s discovered which proteins enable this. She used Mathematica to simulate how the protein spreads. [This isn't a CA thing but more of an example of the power of Mathematica.]

She points to differences in conceptualization that get in the way of a fruitful conversation among biologists and mathematicians. For example, for her randomness is easy: it’s death. Life is randomness harnessed into regularity and repeatability. [I think she's getting at the difference between randomness and complexity that occasionally confused me in NKS.]


Ilan “Lanny” Kirsch, chief of the genetics branch of the Center for Cancer Research at the National Cancer Institute in Bethesda. He’s going to talk more generally.

Cancer is a genetic disease caused by genetic instability. The genome of a cancer is not the same as the genome of the normal cell from which it arose; the DNA in a tumor is different from that of the cell that gave rise to it. The change in DNA that is cancer is caused by inheritance, environment, and randomness. So how does NKS modeling work? We’ll look at three general examples.

1. Pathways and systems. Define the initial conditions and the rules that describe the pathway. [He doesn't explain what pathways he's talking about.]

2. Sequential steps in carcinogenesis. Usually it’s not a single gene that goes but the alteration of sequential genes that causes cancer. NKS can define the sequential rule/condition changes that lead to the outcome of malignant transformation.

3. Understanding and modeling instability. This one is more problematic. It means modeling instability itself, studying the basis of change. Wolfram’s example of mutation (p. 321? 391?) is a very good starting point. In the example, there is a mutation of a rule (i.e., if the left and right block is black, the middle block turns white instead of black, or whatever). The sort of randomness he sees in genes looks very much like the randomness Wolfram shows and seems to be capable of being modeled by CA. Perhaps we’re seeing the collision of CAs. Are mutations the result of the intersection of programs, each with its own rule and initial conditions?
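
[To make item 3 concrete for myself: in CA terms, a "mutation" can be as small as flipping one output bit of the rule number, which turns one program into a neighboring one. A toy Python sketch of my own, not Kirsch's.]

    # A "point mutation" on a program: flip the output for one of the eight
    # neighborhoods of an elementary CA rule and see which table entry changed.
    def rule_table(rule):
        """Map each (left, center, right) neighborhood to its output bit."""
        return {((n >> 2) & 1, (n >> 1) & 1, n & 1): (rule >> n) & 1 for n in range(8)}

    def mutate_rule(rule, neighborhood):
        """Flip the output bit for one neighborhood (0-7)."""
        return rule ^ (1 << neighborhood)

    original, mutant = 110, mutate_rule(110, 4)     # Rule 110 becomes Rule 126
    changed = [nbhd for nbhd in rule_table(original)
               if rule_table(original)[nbhd] != rule_table(mutant)[nbhd]]
    print(mutant, changed)                          # 126 [(1, 0, 0)]

[Run the two rules from the same starting row and their patterns diverge within a couple of steps, which I take to be the flavor of instability he wants to model.]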


Wolfram: What sort of questions should pure NKS investigators concentrate on that would help biologists? In morphological studies, what are the appropriate rules? What are the primitives? Network-based systems? Mobile automata? And what’s the appropriate level for trying to do modeling? What sort of questions in these areas would you like to be able to answer?

Kirsch: [Didn't understand a word. Too much medical jargon for me.]

Bearer: Computational biological models have been held up by the belief that you need to have everything in place. But NKS may help us figure out what the missing factors are. When we were modelling, the Mathematica guy said that there has to be an inhibitor at a particular position because otherwise the model says that the branching would be other than it is. [Impressive.]

2 Comments »


