
December 18, 2010

David Reed on the neutrality of the Net’s code

Barbara van Schewick has posted two brilliant posts (1 2) about the practical effects removing Net neutrality would have on innovation. Now David Reed, one of the authors of the original argument for the Net’s neutral architecture, has responded, in agreement, but with a shading of emphasis.

David’s point (as I understand it) is that we should remember that Net neutrality isn’t something that we need the law to impose upon the Net. Rather, the Net was architected from the beginning to be neutral. The Internet as a protocol is explicitly designed to move packets of bits from source to destination without knowing what information they contain, what type of application they support, or who created them. All packets move equally in those regards.
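To make that concrete, here is a toy sketch in Python (my illustration, not anything from David or an actual IP stack) of what application-agnostic forwarding means: the only thing the network consults is the destination address in the packet header, while the payload, whatever application produced it, is never inspected.

```python
# Toy illustration (not real router code): the forwarding decision depends only
# on the packet header, never on the payload or the application that sent it.

from dataclasses import dataclass

@dataclass
class Packet:
    dst: str        # destination address (header field)
    payload: bytes  # application data: opaque to the network

# Hypothetical routing table mapping destination prefixes to next hops.
ROUTES = {"10.0.1": "link-A", "10.0.2": "link-B"}

def forward(packet: Packet) -> str:
    """Choose a next hop by looking at the destination address only."""
    prefix = packet.dst.rsplit(".", 1)[0]  # crude prefix match for the sketch
    return ROUTES.get(prefix, "default-link")

# Video, email, and VoIP packets to the same destination all get the same treatment.
print(forward(Packet("10.0.1.7", b"<video bytes>")))  # link-A
print(forward(Packet("10.0.1.7", b"SMTP: hello")))    # link-A
```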

So, David asks, “[W]hat do we need from the ‘law’ when the ‘code’ was designed to do most of the job?” After all, he writes, “merely requiring those who offer Internet service to implement the Internet design as it was intended – without trying to assign meaning to the data content of the packets – would automatically be application agnostic.”

In particular: We don’t need a complex rule defining “applications” in order to implement an application agnostic Internet. We have the basis of that rule – it’s in the “code” of the Internet. What we need from the “law” is merely a rule that says a network operator is not supposed to make routing decisions, packet delivery decisions, etc. based on contents of the packet.

David, along with Barbara, disputes the claim that the need to manage traffic to avoid congestion justifies application-specific discrimination. The Net, David says, was built with traffic management in mind:

… network congestion control is managed by having the routers merely detect and signal the existence of congestion back to the edges of the network, where the sources can decide to re-route traffic and the traffic engineers can decide to modify the network’s hardware connectivity. This decision means that the only function needed in the network transport itself is application-agnostic – congestion detection and signalling.
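Here is a minimal sketch, in Python and entirely my own gloss rather than David’s design, of that division of labor: the network element only detects and signals that congestion exists, and the policy for responding lives at the edge. The threshold and rate numbers are arbitrary.

```python
# Sketch of the division of labor: the network only detects and signals
# congestion; the endpoint decides how to respond. Thresholds and rates
# here are arbitrary illustration values.

QUEUE_THRESHOLD = 50  # packets; hypothetical "congested" threshold

def router_congestion_signal(queue_depth: int) -> bool:
    """Application-agnostic: the router reports *that* it is congested,
    not what kind of traffic caused the congestion."""
    return queue_depth > QUEUE_THRESHOLD

def edge_adjust_rate(rate: float, congested: bool) -> float:
    """One possible edge policy: back off when congestion is signalled,
    probe gently upward otherwise."""
    return rate / 2 if congested else rate + 1.0

rate = 10.0
for queue_depth in [10, 30, 80, 60, 20]:
    rate = edge_adjust_rate(rate, router_congestion_signal(queue_depth))
    print(f"queue={queue_depth:3d} packets -> rate={rate:.1f}")
```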

So, the only law we need, David is saying, is that which lets the Net be the Net.


October 25, 2009

Is AT&T’s data overload self-inflicted?

Brough Turner summarizes and explains a hypothesis put forward by David Reed that much of AT&T’s bandwidth overload is self-inflicted.

As I understand it — which I admit is not very far — AT&T may have its servers misconfigured. If AT&T has set the servers’ buffers (particular servers — see Brough’s explanation) too large, they disrupt the network’s traffic self-regulation loop. TCP increases its transmission rate until it starts losing packets; at that point, it cuts its transmission rate in half. So, if packets from all those iPhones are being held in oversized buffers instead of being dropped, the senders never get the packet-loss signal that tells them to back off, and they just keep increasing their transmission rates, further overloading the network.
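A rough toy simulation (mine, with made-up numbers, not a model of AT&T’s actual network) of that failure mode: with a modest buffer, the occasional loss keeps the sending rate near the link’s capacity, while with an enormous buffer the loss signal never arrives and both the rate and the standing queue keep growing.

```python
# Toy simulation of the failure mode. All numbers are made up for
# illustration; this is not a model of AT&T's actual network.

LINK_CAPACITY = 100  # packets the link can drain per tick

def run(buffer_limit: int, ticks: int = 30) -> None:
    rate, queue = 10, 0
    for _ in range(ticks):
        queue = max(0, queue + rate - LINK_CAPACITY)  # arrivals minus drain
        if queue > buffer_limit:       # buffer overflows: packets are dropped
            queue = buffer_limit
            rate = max(1, rate // 2)   # TCP sees the loss and halves its rate
        else:
            rate += 10                 # no loss signal, so keep ramping up
    print(f"buffer={buffer_limit:6d}: final rate={rate}, standing queue={queue}")

run(buffer_limit=200)     # modest buffer: losses keep the rate near capacity
run(buffer_limit=100000)  # huge buffer: no losses, so rate and queue balloon
```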

Feel free to enumerate all the ways the following is wrong. I don’t claim to actually understand it. Here’s Brough’s summary:

It appears AT&T Wireless has configured their RNC buffers so there is no packet loss, i.e. with buffers capable of holding more than ten seconds of data. Zero packet loss may sound impressive to a telephone guy, but it causes TCP congestion collapse and thus doesn’t work for the mobile Internet!

If Reed’s hypothesis is correct, then presumably much of the congestion on AT&T’s network (but how much is much?) could be reduced by shrinking the buffers and allowing TCP to do the self-regulation it was designed to do.
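As a back-of-the-envelope illustration, with numbers I am assuming for the sake of the arithmetic rather than taking from Brough or Reed: the delay a full buffer adds is just its size divided by the link rate, and the classic sizing rule of thumb is roughly one bandwidth-delay product, which on a slow cellular link is tens of kilobytes, not megabytes.

```python
# Back-of-the-envelope buffer sizing. The link rate, RTT, and buffer size
# below are assumptions chosen only to make the arithmetic concrete.

def buffer_delay_seconds(buffer_bytes: float, link_rate_bps: float) -> float:
    """Worst-case queueing delay a full buffer adds: size divided by drain rate."""
    return buffer_bytes * 8 / link_rate_bps

def bdp_buffer_bytes(link_rate_bps: float, rtt_seconds: float) -> float:
    """Classic rule of thumb: size the buffer around one bandwidth-delay product."""
    return link_rate_bps * rtt_seconds / 8

link_rate = 2_000_000  # assume a 2 Mbps cellular downlink
rtt = 0.2              # assume a 200 ms round-trip time

print(buffer_delay_seconds(2_500_000, link_rate))  # a 2.5 MB buffer adds ~10 s of delay
print(bdp_buffer_bytes(link_rate, rtt))            # BDP guideline: ~50 KB
```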

LATER: Brough’s article has been slashdotted.
