As I understand it, which I admit is not very far, AT&T may have its servers misconfigured. If AT&T has set the servers’ buffers (particular servers; see Brough’s explanation) too large, then they disrupt the network’s traffic self-regulation loop. TCP increases its transmission rate until it starts losing packets; at that point, it cuts its transmission rate in half. So, if all those iPhones’ packets are being buffered instead of dropped, the senders never get the signal that packets aren’t getting through, and all those iPhones just keep increasing their transmission rates, further overloading the network.
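To make the feedback loop concrete, here’s a toy sketch of TCP’s additive-increase/multiplicative-decrease behavior. The capacity and step sizes are made-up numbers, and real TCP is far more involved; the point is just the difference between a sender that sees drops and one whose drops are hidden by an oversized buffer:

```python
def simulate(capacity, rounds, loss_feedback=True):
    """Track a toy sender's rate over a number of rounds.

    With loss_feedback=True the sender learns about drops as soon as it
    exceeds capacity, and halves its rate (the self-regulation loop).
    With loss_feedback=False an oversized buffer absorbs the excess,
    so the sender never sees a loss and just keeps ramping up.
    """
    rate = 1.0
    history = []
    for _ in range(rounds):
        if loss_feedback and rate > capacity:
            rate /= 2       # multiplicative decrease on packet loss
        else:
            rate += 1.0     # additive increase while no loss is seen
        history.append(rate)
    return history

with_loss = simulate(capacity=10, rounds=50)
no_loss = simulate(capacity=10, rounds=50, loss_feedback=False)

print(max(with_loss))  # hovers near the link capacity
print(max(no_loss))    # grows without bound
```

With loss feedback, the rate saws back and forth around capacity; without it, the rate climbs forever, which is the runaway behavior described above.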
Feel free to enumerate all the ways the following is wrong. I don’t claim to actually understand it. Here’s Brough’s summary:
It appears AT&T Wireless has configured their RNC buffers so there is no packet loss, i.e. with buffers capable of holding more than ten seconds of data. Zero packet loss may sound impressive to a telephone guy, but it causes TCP congestion collapse and thus doesn’t work for the mobile Internet!
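For a sense of scale, here’s a back-of-the-envelope calculation of what “more than ten seconds of data” means in bytes. The 3.6 Mbit/s link rate is my assumed HSDPA downlink figure, not a number from Brough:

```python
# How big is a buffer holding ten seconds of data?
# 3.6 Mbit/s is an assumed HSDPA downlink rate (not from Brough's article).
link_rate_bits_per_sec = 3.6e6
buffer_seconds = 10
buffer_megabytes = link_rate_bits_per_sec * buffer_seconds / 8 / 1e6
print(buffer_megabytes)  # 4.5 — roughly 4.5 MB of queued data
```

Every packet sitting in that queue adds latency without telling the sender anything is wrong, which is why zero loss isn’t the virtue it sounds like.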
If Reed’s hypothesis is correct, then presumably much of the congestion on AT&T’s network (but how much is much?) could be reduced by shrinking the buffers and allowing TCP to do the self-regulation it was designed to do.
LATER: Brough’s article has been slashdotted.