Gettys on Bufferbloat

Jim Gettys has a nice tale of what he calls ‘bufferbloat’.  Instinctively, it seems like bigger buffers should mean less packet loss: as long as you can buffer the data, the other guy doesn’t have to retransmit, right?  But that’s not how TCP works.  It retransmits if it doesn’t hear back fast enough, and if you clog the buffers, it’s going to take a long time before the endpoint can acknowledge the data it already has.
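To see why a big buffer hurts, here’s a back-of-the-envelope sketch (mine, not Gettys’): assume a fixed bottleneck link and ask how long it takes to drain a full buffer through it.  The link speed and buffer sizes are just illustrative numbers.

```python
# Back-of-the-envelope sketch (not from Gettys' post): how long a full
# buffer delays traffic at a bottleneck link. Numbers are made up.

def queue_delay_ms(buffer_bytes: int, link_mbps: float) -> float:
    """Time to drain a full buffer through the bottleneck link, in ms."""
    link_bytes_per_sec = link_mbps * 1_000_000 / 8
    return buffer_bytes / link_bytes_per_sec * 1000

# A 1 MB buffer in front of a 1 Mbps uplink adds roughly 8 seconds of
# delay, and every ACK and retransmission timer sits behind that queue.
for buf_kb in (64, 256, 1024):
    print(f"{buf_kb:>5} KB buffer on 1 Mbps link: "
          f"{queue_delay_ms(buf_kb * 1024, 1.0):,.0f} ms of queueing delay")
```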

One interesting observation to me (and it isn’t really a conclusion) is that the world’s love affair with Windows XP (which has an ancient TCP stack) may actually be helping the internet at large, even though the Vista TCP stack is measurably better:

The most commonly used system on the Internet today remains Windows XP, which does not implement window scaling and will never have more than 64KB in flight at once. But the bufferbloat will become much more obvious and common as more users switch to other operating systems and/or later versions of Windows, any of which can saturate a broadband link with but a single TCP connection.
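To put rough numbers on that 64KB cap (my numbers, not Gettys’): without window scaling, a single connection’s throughput is bounded by window size divided by round-trip time, so an XP box can’t keep a fast link full on its own, while a window-scaling stack can.

```python
# Rough illustration (my numbers, not from the article): why a 64 KB
# window caps a single TCP connection's throughput at window / RTT.

def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one connection's throughput, in Mbps."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

WINDOW_XP = 64 * 1024          # no window scaling: at most 64 KB in flight
WINDOW_SCALED = 1024 * 1024    # e.g. a modern stack after window scaling

for rtt in (20, 50, 100):
    print(f"RTT {rtt:>3} ms: XP tops out near "
          f"{max_throughput_mbps(WINDOW_XP, rtt):5.1f} Mbps, "
          f"a 1 MB window near {max_throughput_mbps(WINDOW_SCALED, rtt):6.1f} Mbps")
```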

Gettys did conclude that this was a problem for video downloads, which is something everyone is doing these days.  He’s not wrong, but real video services may not be as subject to this as it seems.  Video services live and die by bandwidth costs, so to keep those costs down they avoid simply blasting out the whole video; instead they dribble it out deliberately, at the application layer.  If they depended on TCP for throttling, he’d be right, but I don’t think many large-scale video services work that way.  Need more data! 🙂
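I don’t know how any particular video service actually implements this, but the “dribble it out” idea is roughly: send the file in chunks no faster than a target bitrate, instead of handing TCP the whole thing and letting it fill every buffer along the path.  A hypothetical sketch:

```python
# Hypothetical sketch of application-layer pacing; not the code of any
# real video service. Send chunks at roughly a target bitrate rather
# than letting TCP saturate the pipe.
import time

def paced_send(send_chunk, data: bytes, target_mbps: float, chunk_size: int = 64 * 1024):
    """Push `data` through `send_chunk(bytes)` at roughly `target_mbps`."""
    bytes_per_sec = target_mbps * 1_000_000 / 8
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        send_chunk(chunk)
        time.sleep(len(chunk) / bytes_per_sec)   # pace instead of bursting

# Usage (assuming an open socket): paced_send(sock.sendall, video_bytes, target_mbps=2.5)
```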

Anyway, a great read.
