Followup to “Not as SPDY as You Thought”

In the last couple of weeks many people have asked me to comment on guypo’s benchmark blog post, “Not as SPDY as You Thought”.  Guy shared the post with me before he posted it.  Overall, I disagree with his title, but I don’t disagree with his results much, so I haven’t felt compelled to comment.  He tested something that nobody else has tested, and after reviewing his methodology, it’s mostly fine. Some suggestions have been made for improvement, which he was very open to, and we’ll likely see additional test results coming soon.  But his results are not contrary to Google’s or my own results; they’re just from a different test.

The reason his results aren’t contradictory is that Guy’s test doesn’t measure full SPDY page loads; it measures partial ones.  More specifically, he tested this case:  if you upgrade your primary domain, but few of your other domains, your CDN, etc., how does SPDY perform?  This is a perfectly valid case to test – especially when sites may take an incremental approach to upgrading.  And I’m not surprised at all that if you only upgrade half your page to SPDY, the results are not as good as if you upgrade all of it.

In the report, Guy breaks out domains into “1st party domains” and “3rd party domains”.  He argues that since you don’t have control over the 3rd party content servers, they may not get SPDY-ized, and therefore his test is valid.  OK – that’s a good point.  But how do we define “3rd party”?  I consider “1st party” to be any content which you, as the site owner, have control to change directly.  So if you load your main content from one domain and your images from another domain that you also control, those are both 1st party domains.  Unfortunately, Guy’s classifier did not classify them this way.

To understand what I mean, let’s take a look at the domains used on a few sample pages and how his test loaded resources from them.  I simply picked three pages from his test results.   Every page tested is different, but the patterns below are common to many of the top websites.

Domains Used

SPDY domains

Non SPDY domains

Resources fetched over SPDY

Resources fetched over HTTP

“1st party” resources that could have been SPDY but were NOT in Guy’s test
I hope you can now see why I don’t dispute Guy’s test results.  On these pages, 25-50% of the 1st-party-controlled resources which could have been loaded over SPDY weren’t loaded over SPDY at all. If you only partially use SPDY, you only get partial results. That’s fine by me.

Nobody should think I’m discrediting Guy’s work here.  He’s done a great job with great vigor, and it takes an incredible amount of time to do these tests.  He’s planning to do more tests, and I’m very thankful that he is doing this and that Akamai is letting him do so.

In the next wave of tests, I expect we’ll see SPDY’s benefits increase.  Keep in mind that your average site isn’t going to see the 2x speed boost.   The overall benefit of SPDY depends on many factors, and websites today have not yet been tuned for SPDY.  Most sites will see benefits in the 5-20% range (like Google did).   A few will see 50% better.  A few will see worse.  Everyone will benefit from new optimization possibilities, less complex websites, and a more network- and mobile-friendly protocol. More testing like Guy’s is the key to a better HTTP/2.0.

Comments on Microsoft’s SPDY Proposal

Microsoft published their SPDY proposal today to the IETF. They call it “HTTP + Mobility”. Here are some quick comments on their proposal.

a) It’s SPDY!
The Microsoft proposal is SPDY at its core. They’ve fully retained the major elements of SPDY, including multiplexing, prioritization, and compression, and they’ve even lifted the exact syntax of most of the framing layer – maintaining SYN_STREAM, SYN_RESET, SYN_REPLY, HEADERS, etc.

It’s a huge relief for me to see Microsoft propose SPDY with a few minor tweaks.

b) WebSockets Syntax
When SPDY started a couple of years ago, WebSockets didn’t exist. Microsoft is proposing taking existing SPDY, and changing the syntax to be more like WebSockets. This won’t have any feature impact on the protocol, but does make the protocol overall more like other web technologies.

Personally, I don’t think syntax matters much, and I also see value in symmetry across web protocols. I do think the WebSocket syntax is more complicated than SPDY’s today, but it’s not that big of a deal. Overall, this part of the Microsoft proposal may make sense. I’m happy that Microsoft has presented it.

c) Removal of Flow Control
The Microsoft proposal is quick to dismiss SPDY’s per-stream flow control as though it is already handled at the TCP layer. However, this is incorrect. TCP handles flow control for the TCP stream. Because SPDY introduces multiple concurrent flows, a new layer of flow control is necessary. Imagine you were sending 10 streams to a server, and one of those streams stalled out (for whatever reason). Without flow control, you either have to terminate all the streams, buffer unbounded amounts of memory, or stall all the streams. None of these are good outcomes, and TCP’s flow control is not the same as SPDY’s flow control.
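To make the stall scenario concrete, here is a toy sketch of per-stream windows on a single multiplexed connection (plain Python; the class and numbers are my own illustration, not the SPDY wire format):

```python
# Toy model of per-stream flow control on one multiplexed connection.
# All names here are illustrative, not SPDY's actual framing.

class Stream:
    def __init__(self, stream_id, window=64 * 1024):
        self.id = stream_id
        self.window = window        # bytes the peer will accept right now

    def can_send(self, nbytes):
        return nbytes <= self.window

    def consume(self, nbytes):
        assert self.can_send(nbytes)
        self.window -= nbytes       # sender pauses at 0 instead of buffering

    def on_window_update(self, nbytes):
        self.window += nbytes       # receiver drained data; sender may resume

# One stalled stream (whose receiver stops sending window updates) exhausts
# only its own window; other streams keep flowing on the shared TCP pipe.
stalled, healthy = Stream(1, window=16), Stream(3)
stalled.consume(16)                  # window now 0: this stream must pause
print(stalled.can_send(1))           # False - stream 1 is blocked
print(healthy.can_send(1024))        # True  - stream 3 is unaffected
```

Without the per-stream window, the sender's only choices when stream 1's receiver stops draining are the bad ones listed above: kill everything, buffer without bound, or stall the whole connection.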

This may be an example of where SPDY’s implementation experience trumps any amount of protocol theory. For those who remember, earlier drafts of SPDY didn’t have flow control. We were aware of the problem long ago, but until we fully implemented SPDY, we didn’t know how badly flow control was needed nor how to do it in a performant and simple manner. I can’t emphasize enough with protocols how important it is to actually implement your proposals. If you don’t implement them, you don’t really know whether they work.

d) Optional Compression
HTTP is full of “optional” features. Experience shows that if we make features optional, we lose them altogether due to implementations that don’t implement them, bugs in implementations, and bugs in the design. Examples of optional features in existing HTTP/1.1 include: pipelining, chunked uploads, absolute URIs, and there are many more.

Microsoft did not include any benchmarks for their proposal, so I don’t really know how well it performs. What I do know, however, is that the header compression which Microsoft is advocating be optional was absolutely critical to mobile performance for SPDY. If the Microsoft proposal were truly optimized for mobile, I suspect it would be taking more aggressive steps toward compression rather than pulling it out.

Lastly, I’m puzzled as to why anyone would propose removing the header compression. We could argue about which compression algorithm is best, but it has been pretty non-controversial that we need to start compressing headers with HTTP. (See also: SPDY spec, Mozilla example, UofDelaware research)
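As a rough illustration of why this is non-controversial, typical request headers deflate dramatically even with one-shot zlib. (SPDY actually uses a persistent zlib stream with a preset dictionary across requests, which does even better; the sample header values below are invented.)

```python
import zlib

# Typical request headers: long, repetitive, and highly compressible.
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Accept-Language: en-US,en;q=0.8\r\n"
    "Cookie: session=abcdef0123456789; tracking=0123456789abcdef\r\n"
    "\r\n"
).encode()

compressed = zlib.compress(headers)
print(len(headers), len(compressed))   # the deflated form is much smaller
```

On slow mobile uplinks, shaving hundreds of bytes off every request is exactly where this matters most.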

e) Removal of SETTINGS frames
SPDY has the promise of “infinite flows” – that a client can make as many requests as it wants. But this is a jedi mind trick. Servers, for a variety of reasons, still want to limit a client to a reasonable number of flows. And different servers have very different ideas about what “reasonable” is. The SETTINGS frame is how servers communicate to the client that they want to do this.

I’m guessing this is an oversight in the Microsoft proposal.

f) Making Server Push Optional
Microsoft proposes to make server push optional. There is a fair discussion to be had about removing Server Push for a number of reasons, but to make it optional seems like the worst of all worlds. Server Push is not trivial, and is definitely one of the most radical portions of the protocol. To make it optional without removing it leaves implementors with the burden of all the complexity with potentially none of the benefits.

The authors offer opinions as to the merits of Server Push, but offer no evidence or data to back up those claims.

g) Removal of IP Pooling
The Microsoft writeup eliminates connection pooling, but it is unclear why. Connection pooling is an important element of SPDY both for performance and for efficiency on the network. I’m not sure why Microsoft would recommend removing this, especially without benchmarks, data, or implementation details. The benchmarks clearly show it has measurable benefit, and without this feature, mobile performance for the Microsoft proposal will surely be slower than for SPDY proper.

I’m happy with the writeup from Microsoft. I view their proposal as agreement that the core of SPDY is acceptable for HTTP/2.0, which should help move the standardization effort along more quickly. They’ve also raised a couple of very reasonable questions. It’s clear that Microsoft hasn’t done much testing or experimentation with their proposal yet. I’m certain that with data, we’ll come to resolution on all fronts quite quickly.

Firefox Idle Connection Reuse

HttpWatch did some anecdotal testing and concluded that Firefox’s new algorithm for selecting which idle connection to reuse has some strong benefits.

This is great stuff, and in general it should definitely help.  This is part of why getting to one-connection-per-domain is an important goal.  HTTP’s use of 6 or more connections per domain makes it so that each connection must “warm up” independently.  A similar algorithm should land in Chrome soon too.

Fortunately, there is a protocol for this stuff too :-)  Hopefully Firefox will pick it up soon.

How to Get a Small Cert Chain

After my last article illustrated the length of our certificate chains, many people asked me, “OK – well how do I get a small one?”.

The obvious answer is to get your certificate signed as close to the root of a well-rooted Certificate Authority (CA) as possible.  But that isn’t very helpful.  To answer the question, let’s look at a few of the problems and tradeoffs.

Problem #1:  Most CAs Won’t Sign At The Root

Most CAs won’t sign from the root.  Root CAs are key to our overall trust on the web, so simply having them online is a security risk.  If the roots are hacked, it can send a shockwave through our circle of trust.  As such, most CAs keep their root servers offline most of the time, and only bring them online occasionally (every few months) to sign for a subordinate CA in the chain.  The real signing is most often done from the subordinate.

While this is already considered a ‘best practice’ for CAs, Microsoft’s Windows Root CA Program Requirements were just updated last month to require that leaf certificates are not signed directly at the root.  From section F-2:

All root certificates distributed by the Program must be maintained in an offline state – that is, root certificates may not issue end-entity certificates of any kind, except as explicitly approved from Microsoft.

Unfortunately for latency, this is probably the right thing to do.  So expecting a leaf certificate directly from the root is unreasonable.  The best we can hope for is one level down.

Problem #2: “Works” is more important than “Fast”

Having your site be accessible to all of your customers is usually more important than being optimally fast.  If you use a CA not trusted by 1% of your customers, are you willing to lose those customers because they can’t reach your site?  Probably not.

To solve this, we might wish to serve multiple certificates, always presenting a certificate which we know that specific client will trust.  (For example, if an old Motorola phone from 2005 needs a different CA, we could use a different certificate just for that client.)  But alas, SSL does not expose a user-agent as part of the handshake, so the server can’t do this.  And for good reason: hiding the user agent is important from a privacy and security point of view.

Because we want to reach all of our clients, and because we don’t know which client is connecting to us, we simply have to use a certificate chain which we know all clients will trust.  And that leads us to either presenting a very long certificate chain, or only purchasing certificates from the oldest CAs.

I am sad that our SSL protocol gives the incumbent CAs an advantage over new ones.  It is hard enough for a CA to get accepted by all the modern browsers.  But how can a new CA be taken seriously if 5-10% of the clients out there don’t support it?  Or if its users are left with a higher-latency SSL handshake?

Problem #3: Multi-Rooting of CAs

We like to think of the CA trust list as a well-formed tree where the roots are roots and the non-roots are not.  But because clients change their trust points over time, this is not the case.  What is a root to one browser is not a root to another.

As an example, we can look at the certificate chain presented by one of the sites I tested.  The poor site has a certificate chain of 5733 bytes (4 packets, 2 round trips), with the following certificates:

  1. The site’s own (leaf) certificate: 2445 bytes
  2. Go Daddy Secure Certification Authority: 1250 bytes
  3. Go Daddy Class 2 Certification Authority: 1279 bytes
  4. ValiCert Class 2 Policy Validation Authority: 747 bytes

In Firefox, Chrome and IE (see note below), the 3rd certificate in that chain (Go Daddy Class 2 Certification Authority) is already considered a trusted root.  The server sent certificates 3 and 4, and the client didn’t even need them.  Why?  This is likely due to Problem #2 above.  Some older clients may not consider Go Daddy a trusted root yet, and therefore, for compatibility, it is better to send all 4 certificates.
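As a back-of-envelope sketch of why that chain costs 2 round trips, assume a typical 1460-byte TCP segment and the classic 2-packet initial congestion window (the four certificate sizes sum slightly under the 5733-byte wire total, which includes TLS record framing):

```python
import math

certs = [2445, 1250, 1279, 747]   # the four certificates above, in bytes
chain_bytes = 5733                # total on the wire, incl. TLS record framing
mss = 1460                        # typical TCP segment payload
initcwnd = 2                      # classic initial congestion window, in packets

packets = math.ceil(chain_bytes / mss)
rtts, cwnd, sent = 0, initcwnd, 0
while sent < packets:             # slow start: cwnd roughly doubles per RTT
    sent += cwnd
    cwnd *= 2
    rtts += 1
print(packets, rtts)              # 4 packets, 2 round trips - matching the trace
```

Dropping the two redundant certificates would fit the chain in 2 packets and a single round trip.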

What Should Facebook Do?

Obviously I don’t know exactly what Facebook should do.  They’re smart and they’ll figure it out.  But FB’s large certificate chain suffers from the same problem as the site above:  they include a cert they usually don’t need in order to ensure that all users can access Facebook.

Recall that FB sends 3 certificates.  The 3rd is already a trusted root in the popular browsers (DigiCert), so sending it is superfluous for most users.  The DigiCert cert is signed by Entrust.  I presume they send the DigiCert certificate (1094 bytes) because some older clients don’t have DigiCert as a trusted root, but they do have Entrust as a trusted root.  I can only speculate.

Facebook might be better served to move to a more well-rooted vendor.  This may not be cheap for them.

Aside: Potential SSL Protocol Improvements

If you’re interested in protocol changes, this investigation has already uncovered some potential improvements for SSL:

  • Exposing some sort of minimal user-agent would help servers ensure that they can select an optimal certificate chain to each customer.  Or, exposing some sort of optional “I trust CA root list #1234”, would allow the server to select a good certificate chain without knowing anything about the browser, other than its root list.  Of course, even this small amount of information does sacrifice some amount of privacy.
  • The certificate chain is not compressed.  It could be, and some of these certificates compress by 30-40%.
  • If SNI were required (sadly, it is still not supported on Windows XP), sites could avoid lengthy lists of subject names in their certificates.  Since many sites separate their desktop and mobile web apps onto different domains, this may be a way to serve better certificates to mobile vs desktop clients.

Who Does My Browser Trust, Anyway?

All browsers use a “certificate store” which contains the list of trusted root CAs.

The certificate store can either be provided by the OS, or by the browser.

On Windows, Chrome and IE use the operating-system provided certificate store.  So they have the same points of trust.  However, this means that the trust list is governed by the OS vendor, not the browser.  I’m not sure how often this list is updated for Windows XP, which is still used by 50% of the world’s internet users.

On Mac, Chrome and Safari use the operating system provided store.

On Linux, there is no operating system provided certificate store, so each browser maintains its own certificate store, with its own set of roots.

Firefox, on all platforms (I believe – I might be wrong on this), uses its own certificate store, independent of the operating system store.

Finally, on mobile devices, everyone has their own certificate store.  I’d hate to guess at how many there are or how often they are updated.

Complicated, isn’t it?

Yeah Yeah, but Where Do I Get The Best Certificate?

If you read this far, you probably realize I can’t really tell you.  It depends on who your target customers are, and how many obscure, older devices you need to support.

From talking to others who are far more knowledgeable on this topic than I, it seems like you might have the best luck with either Equifax or Verisign.  Using the most common CAs will have the side benefit that the browser may have cached the OCSP responses for any intermediate CAs in the chain already.  This is probably a small point, though.

Some of the readers of this thread pointed me at what appears to be the smallest well-rooted certificate chain I’ve seen: a certificate signed directly at the root by Equifax, with a total size of 871 bytes.  I don’t know how or if you can get this yourself.  You probably can’t.

Finally, Does This Really Matter?

SSL has two forms of handshakes:

  • Full Handshake
  • Session Resumption Handshake

All of this certificate transfer, OCSP and CRL verification only applies to the Full Handshake.  Further, OCSP and CRL responses are cacheable, and are persisted to disk (at least with the Windows Certificate Store they are). 

So, how often do clients do a full handshake, receiving the entire certificate chain from the server?  I don’t have perfect numbers to cite here, and it will vary depending on how frequently your customers return to your site.  But there is evidence that this is as high as 40-50% of the time.  Of course, the browser bug mentioned in the prior article affects these statistics (6 concurrent connections, each doing full handshakes).

And how often do clients need to verify the full certificate chain?  This appears to be substantially less, thanks to the disk caching.  Our current estimates are less than 5% of SSL handshakes do OCSP checks, but we’re working to gather more precise measurements.

In all honesty, there are probably more important things for your site to optimize.  This is a lot of protocol gobbledygook.

Thank you to agl, wtc, jar, and others who provided great insights into this topic.

Certificate Validation Example: Facebook

Most people know the concepts of SSL, but not the gory details.  By using Facebook as a walkthrough example, I’m going to discuss how it works from the browser’s viewpoint, and how it impacts latency to your site.  BTW, this is not intended as a criticism of Facebook – they’re doing all the right things to make sure your data is encrypted and authenticated and fast.  The failures highlighted here are failures of a system that wasn’t designed for speed.

Fetching the Certificate
When you first connect to a SSL site, the client and server use the server’s public key to exchange a secret which will be used to encrypt the session.  So the first thing the client needs to do is to get the server’s public key.  The public key is sent as part of the SSL Server Hello message.   When we look at the Server Hello Message from Facebook, we see that it sent us a Certificate which was 4325 bytes in size.  This means that before your HTTP request even gets off your computer, the server had to send 4KB of data to the client.  That’s a pretty big bundle, considering that the entire Facebook login page is only 8.8KB.  Now, if a public key is generally only 1024 or 2048 bits, with elliptic curve keys being much smaller than that, how did Facebook’s certificate mushroom from 256 to 4325 bytes?  Clearly there is a lot of overhead.  More on this later.

Trusting the Certificate
Once the browser has the server’s certificate, it needs to validate that the certificate is authentic.  After all, did we really get Facebook’s key? Maybe someone is trying to trick us.  To deal with this, public keys are always transferred as part of a certificate, and the certificate is signed by a source, which needs to be trusted.  Your operating system shipped with a list of known and trusted signers (certificate authority roots).  The browser will verify that the Facebook certificate was signed by one of these known, trusted signers.  There are dozens of trusted parties already known to your browser.  Do you trust them all? Well, you don’t really get a choice.  More on this later.

But very few, if any, certificates are actually signed by these root CAs.  Because the root CAs are so important to the overall system, they’re usually kept offline to minimize chances of hackery.  Instead, these CAs periodically delegate authority to intermediate CAs, which then validate Facebook’s certificate.  The browser doesn’t care who signs the certificate, as long as the chain of certificates ultimately flows to a trusted root CA.

And now we can see why Facebook’s Certificate is so large.  It’s actually not just one certificate – it is 3 certificates rolled into one bundle.

The browser must verify each link of the chain in order to authenticate that this is really Facebook.

Facebook, being as large as they are, would be well served by finding a way to reduce the size of this certificate, and by removing one level from their chain.  They should talk to DigiCert about this immediately.

Verifying The Certificate
With the Facebook Certificate in hand, the browser can almost verify the site is really Facebook.  There is one catch – the designers of Certificates put in an emergency safety valve.  What happens if someone gets a fraudulent certificate (like what happened last month with Comodo) or steals your private key?  There are two mechanisms built into the browser to deal with this.

Most people are familiar with the concept of the “Certificate Revocation List” (CRL).  Inside the certificate, the signer puts a link to where the CRL for this certificate would be found.  If this certificate were ever compromised, the signer could add the serial number for this certificate to the list, and then the browser would refuse to accept the certificate. CRLs can be cached by the operating system, for a duration specified by the CA.

The second type of check is to use the Online Certificate Status Protocol (OCSP).  With OCSP, instead of the browser having to download a potentially very large list (CRL), the browser simply checks this one certificate to see if it has been revoked.  Of course it must do this for each certificate in the chain.  Like with CRLs, these are cacheable, for durations specified in the OCSP response.
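Conceptually, the two checks differ mainly in granularity and what gets downloaded; here is a toy sketch of the contrast (my own simplification, not real PKI code – real CRLs and OCSP responses are signed ASN.1 structures):

```python
from datetime import datetime, timedelta

# Toy CRL: the CA publishes one signed list of all revoked serial numbers,
# cacheable until its stated next-update time.
crl = {"revoked": {1001, 2002},
       "next_update": datetime(2011, 7, 12)}

def check_via_crl(serial):
    # Download (and cache) the whole list, then do a local lookup.
    return serial not in crl["revoked"]

def check_via_ocsp(serial):
    # Ask the CA about this one serial; the response is also cacheable.
    response = {"serial": serial, "status": "good",
                "cache_for": timedelta(days=7)}   # typical ~7-day validity
    return response["status"] == "good"

print(check_via_crl(1001))    # False - serial 1001 is on the revocation list
print(check_via_ocsp(3003))   # True  - CA says this one certificate is good
```

The tradeoff: a CRL can grow very large but is one fetch for many certificates, while OCSP is one small query per certificate in the chain.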

In the example, the DigiCert certificates specify an OCSP server.  So as soon as the browser received the Server Hello message, it took a timeout with Facebook and instead issued a series of OCSP requests to verify the certificates haven’t been revoked.

In my trace, this process was quick, with a 17ms RTT, and spanning 4 round-trips (DNS, TCP, OCSP Request 1, OCSP Request 2), this process took 116ms.  That’s a pretty fast case.  Most users have 100+ms RTTs and would have experienced approximately a ½ second delay.  And again, this all happens before we’ve transmitted a single byte of actual Facebook content.  And by the way, the two OCSP responses were 417 bytes and 1100 bytes, respectively.

Oh but the CDN!
All major sites today employ Content Delivery Networks to speed up the site, and Facebook is no exception.  Facebook’s CDN is hosted through Akamai on a separate domain. Unfortunately, the browser has no way of knowing that the CDN domain is related to Facebook, and so it must repeat the exact same certificate verification process that we walked through before.

For Facebook’s CDN, the Certificate is 1717 bytes, comprised of 2 certificates.

Unlike the certificates for the main site, these specify a CRL instead of an OCSP server.  By manually fetching the CRL from the Facebook certificate, I can see that the CRL is small – only 886 bytes. But I didn’t see the browser fetch it in my trace.  Why not?  Because the CRL in this case specifies an expiration date of July 12, 2011, so my browser already had it cached.  Further, my browser won’t re-check this CRL until July, 4 months from now.  This is interesting, for reasons I’ll discuss later.

Oh but the Browser Bug!
But for poor Facebook, there is a browser bug (present in all major browsers, including IE, FF, and Chrome) which is horribly sad.  The main content from Facebook comes from the primary domain, but as soon as that page is fetched, it references 6 items from the CDN domain.  The browser, being so smart, will open 6 parallel SSL connections to that domain. Unfortunately, each connection will resend the same SSL certificate (1717 bytes).  That means the server will send over 10KB of redundant certificate data to the browser.

The reason this is a bug is that, when the browser doesn’t have certificate information cached for a domain, it should complete the first handshake first (downloading the certificate information once), and then use the faster SSL session resumption for each of the other 5 connections.

Putting It All Together
So, for Facebook, the overall impact of SSL on the initial user is pretty large.  On the first connection, we’ve got:

  • 2 round trips for the SSL handshake
  • 4325 bytes of Certificate information
  • 4 round trips of OCSP validation
  • 1500 bytes of OCSP response data

Then, for the CDN connections we’ve got:

  • 2 round trips for the SSL handshake
  • 10302 bytes of Certificate information (1717 duplicated 6 times)

The one blessing is that SSL is designed with a fast-path to re-establish connectivity.  So subsequent page loads from Facebook do get to cut out most of this work, at least until tomorrow, when the browser probably forgot most of it and has to start over again.

Making it Better

OCSP & CRLs are broken
In the above example, if the keys are ever compromised, browsers around the planet will not notice for 4 months. In my opinion, that is too long.  For the OCSP checks, we cache the result for usually ~7 days.  Having users exposed to broken sites for 7 days is also a long time.  And when Comodo was hacked a month ago, the browser vendors elected to immediately patch every browser user on the planet rather than wait for the OCSP caches to expire in a week.  Clearly the industry believes the revocation checking is broken when it is easier to patch than rely on the built-in infrastructure.

But it is worse than that.  What does a browser do if the OCSP check fails?  Of course, it proceeds, usually without even letting the user know that it has done so (heck, users wouldn’t know what to do about this anyway)!   Adam Langley points this out in great detail, but the browsers really don’t have an option.  Imagine if DigiCert were down for an hour, and because of that users couldn’t access Facebook?  It’s far more likely that DigiCert had downtime than that the certificate has been revoked.

But why are we delaying our users so radically to do checks whose results we’ll ignore if they fail anyway?  Having a single point of failure for revocation checking makes it impossible to do anything else.

Certificates are Too Wordy
I feel really sorry for Facebook with its 4KB certificate.  I wish I could say theirs was somehow larger than average.  They are so diligent about keeping their site efficient and small, and then they get screwed by the certificate.  Keep in mind that their public key is only 2048 bits; we could transmit that with 256 bytes of data.  Surely we can find ways to use fewer intermediate signers and also reduce the size of these certificates?

Certificate Authorities are Difficult to Trust
Verisign and others might claim that most of this overhead is necessary to provide integrity and all the features of SSL.  But is the integrity that we get really that much better than a leaner, PGP-like system?  The browser today has dozens of root trust points, with those delegating trust authority to hundreds more.  China’s government is trusted by browsers today to sign certificates for any domain on the web.  Do we trust them all?

A PGP model could reduce the size of the Certificates, provide decentralization so that we could enforce revocation lists, and eliminate worries about trusting China, the Iranian government, the US government, or any dubious entities that have signature authority today.

Better Browser Implementations
I mentioned above about the flaw where the browser will simultaneously open multiple connections to a single site when it knows it doesn’t have the server’s certificate, and thus redundantly download potentially large certs.  All browsers need to be smarter.
Although I expressed my grievances against the OCSP model above, it is used today.  If browsers continue to use OCSP, they need to fully implement OCSP caching on the client, they need to support OCSP stapling, and they need to help push the OCSP multi-stapling forward.

SSL Handshake Round Trips
The round trips in the handshake are tragic.  Fortunately, we can remove one, and Chrome users get this for free thanks to SSL False Start.  False Start is a relatively new, client-side only change.  We’ve measured that it is effective at removing one round trip from the handshake, and that it can reduce page load times by more than 5%.

Hopefully I got all that right.  If you read this far, you deserve a medal.

Chrome Speeding up SSL with SSL FalseStart

The latest releases of Chrome now enable a feature called SSL False Start.  False Start is a client-side change which makes your SSL connections faster.  As of this writing, Chrome is the only browser implementing it.  Here is what it does.

In order to establish a secure connection, SSL uses a special handshake where the client and server exchange basic information to establish the secure connection.  The very last message exchanged has traditionally been implemented such that the client says, “done”, waits for the server, and then the server says, “done”.  However, this waiting-for-done is unnecessary, and the SSL researchers have discovered that we can remove one round trip from the process and allow the client to start sending data immediately after it is done.

To visualize this, let’s look at some packet traces during the handshake sequence, comparing two browsers:


Chrome w/ FalseStart

83ms SEND Client Hello
175ms RECV Server Hello
           Server Hello Done
176ms SEND Client Key Exchange
           Change Cipher Spec
           Enc Handshake Msg
           HTTP Request
274ms RECV Enc Handshake Msg
           Change Cipher Spec
           Enc Handshake Msg
275ms RECV HTTP Response

Browser w/o FalseStart

84ms SEND Client Hello
173ms RECV Server Hello
           Server Hello Done
176ms SEND Client Key Exchange
           Change Cipher Spec
           Enc Handshake Msg

269ms RECV Enc Handshake Msg
           Change Cipher Spec
           Enc Handshake Msg
269ms SEND HTTP Request

524ms RECV HTTP Response

These two traces are almost identical; the subtle difference is when the HTTP Request goes out.  Notice that Chrome sent the HTTP Request at time 176ms – bundled with its final handshake messages, and a little more than one round-trip-time faster than the other browser could send it.

(Note- it is unclear why the HTTP response for the non-FalseStart browser was 250ms late; the savings here is, in theory, just 1 round trip, or 83ms.  There is always variance on the net, and I’ll attribute this to bad luck)

Multiplicative Effect on Web Pages
Today, almost all web pages combine data from multiple sites.  For SSL sites, this means that the handshake must be repeated to each server that is referenced by the page.  In our tests, we see that there are often 2-3 “critical path” connections while loading a web page.  If your round-trip-time is 83ms, as in this example, that’s 249ms of savings – just for getting started with your page.  I hope to do a more thorough report on the effect of FalseStart on overall PLT in the future.
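A rough model of that arithmetic (my own simplification: the handshake costs 2 round trips without False Start and 1 with it, plus one more round trip to carry the request and response):

```python
def first_response_ms(rtt_ms, handshake_rtts):
    # Handshake round trips, plus one more for the HTTP request/response.
    return rtt_ms * (handshake_rtts + 1)

rtt = 83                                   # round-trip time from the trace above
no_false_start = first_response_ms(rtt, 2) # client waits for server's Finished
false_start    = first_response_ms(rtt, 1) # request rides with client's Finished
saved_per_conn = no_false_start - false_start

print(saved_per_conn)                      # 83 ms saved per full handshake
print(saved_per_conn * 3)                  # 249 ms across 3 critical-path connections
```

One round trip per connection sounds small until you multiply it by the handful of SSL connections on a page's critical path.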

For more information on the topic, check out Adam Langley’s post on how Chrome deals with the very few sites that can’t handle FalseStart.

Linux Client TCP Stack Slower Than Windows

Conventional wisdom says that Linux has a better TCP stack than Windows.  But with the current latest Linux and the current latest Windows (or even Vista), there is at least one aspect where this is not true.  (My definition of “better” is simple: whichever one is fastest.)

Over the past year or so, researchers have proposed to raise TCP’s initial congestion window from its current value (2 packets, or ~4KB) up to about 10 packets.  These changes are still being debated, but it looks likely that a change will be ratified.  But even without official ratification, many commercial sites and commercially available load-balancing software have already increased initcwnd on their systems in order to reduce latency.

Back to the matter at hand – when a client makes a connection to a server, there are two variables which dictate how quickly the server can send data to the client.  The first is the client’s “receive window”: the client tells the server, “please don’t exceed X bytes without my acknowledgement”, and this is a fundamental part of how TCP controls information flow.  The second is the server’s cwnd, which, as stated previously, is generally the bottleneck and is usually initialized to 2 packets.

In the long-ago past, TCP clients (like web browsers) would specify receive-window buffer sizes manually.  But these days, all modern TCP stacks use dynamic window-size adjustments based on measurements from the network, and applications are advised to leave it alone, since the computer can do it better.  Unfortunately, the defaults on Linux are too low.

On my systems, with a 1Gbps network, here are the initial window sizes.  Keep in mind your system may vary as each of the TCP stacks does dynamically change the window size based on many factors.

Vista:  64KB
Mac:    64KB
Linux:    6KB

6KB!  Yikes! Well, the argument can be made that there is no need for the Linux client to use a larger initial receive window, since servers are supposed to abide by RFC 2581.  But there really isn’t much downside to using a larger initial receive window, and we already know that many sites benefit from a large cwnd.  The net result is that when a server legitimately tries to use a larger cwnd, web browsing on Linux will be slower than web browsing on Mac or Windows, which don’t artificially constrain the initial receive window.
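To see what the low default costs, here is a toy model (my own; it ignores delayed ACKs and receive-window growth, and assumes a server using the proposed 10-packet initcwnd):

```python
def burst_rtts(payload, rwnd, initcwnd_bytes):
    """Toy model: round trips to deliver `payload` bytes, with the sender
    capped each RTT by the smaller of its cwnd and the client's receive
    window."""
    sent, cwnd, rtts = 0, initcwnd_bytes, 0
    while sent < payload:
        sent += min(cwnd, rwnd)
        cwnd *= 2                # slow start roughly doubles cwnd per RTT
        rtts += 1
    return rtts

payload = 40 * 1024              # a 40KB response burst
iw10 = 10 * 1460                 # server using the proposed 10-packet initcwnd
print("Linux  (6KB rwnd):", burst_rtts(payload, 6 * 1024, iw10))
print("Vista (64KB rwnd):", burst_rtts(payload, 64 * 1024, iw10))
```

In this model the 6KB receive window turns a 2-round-trip transfer into a 7-round-trip one: the client's advertisement, not the server's cwnd, becomes the throttle.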

Some good news – a patch is in the works to allow users to change the default, but you’ll need to be a TCP whiz and install a kernel change to use it.  I don’t know of any plans to change the default value on Linux yet.  Certainly if the cwnd changes are approved, the default initial receive window must also be changed.  I have yet to find any way to make linux use a larger initial receive window without a kernel change.

Two last notes: 

1) This isn’t theoretical.  It’s very visible in network traces to existing servers on the web that use larger-than-2 cwnd values.  And you don’t hit the stall just once, you hit it for every connection which tries to send more than 6KB of data in the initial burst.

2) As we look to make HTTP more efficient by using fewer connections (SPDY), this limit becomes yet-another-factor which favors protocols that use many connections instead of just one.  TCP implementors lament that browsers open 20-40 concurrent connections routinely as part of making sites load quickly.  But if a connection has an initial window of only 6KB, the use of many connections is the only way to work around the artificially low throttle.

There is always one more configuration setting to tweak.