Followup to “Not as SPDY as You Thought”


In the last couple of weeks many people have asked me to comment on guypo’s benchmark blog post, “Not as SPDY as You Thought”.  Guy shared the post with me before he posted it.  Overall, I disagree with his title, but I don’t disagree with his results much, so I haven’t felt pressed to comment.  He tested something that nobody else has tested, and after reviewing his methodology, it’s mostly fine. Some suggestions have been made for improvement, which he was very open to, and we’ll likely see additional test results coming soon.  But his results are not contrary to Google’s or my own results; they’re just a different test.

The reason his results aren’t contradictory is that Guy’s test doesn’t test full SPDY page loads; it tests partial ones.  More specifically, he tested this case:  if you upgrade your primary domain, but few of your other domains, your CDN, etc., how does SPDY perform?  This is a perfectly valid case to test – especially since sites may take an incremental approach to upgrading.  And I’m not surprised at all that if you only upgrade half of your page to SPDY, the results are not as good as if you upgrade all of it.

In the report, Guy breaks out domains into “1st party domains” and “3rd party domains”.  He argues that since you don’t have control over the 3rd party content servers, that content may not get SPDY-ized, and therefore his test is valid.  OK – that’s a good point.  But how do we define “3rd party”?  I consider “1st party” to be any content which you, as the site owner, have control to change directly.  So if you load your main content from www.google.com and your images from images.google.com, those are both 1st party domains.  Unfortunately, Guy’s classifier did not classify them this way.

To understand what I mean, let’s take a look at the domains used on a few sample pages and how his test loaded resources from them.  I simply picked three pages from his test results.  Every page tested is different, but the patterns below are common to many of the top websites.

Domains Used

www.cnn.com:   www.cnn.com, icompass.insighexpressai.com, z.cdn.turner.com, i.cdn.turner.com, www.facebook.com, ad.insightexpressai.com, s-static.ak.fbcdn.com, svcs.cnn.com, gdyn.cnn.com, s-external.ak.fbcdn.com

www.ebay.com:  www.ebay.com, ir.ebaystatic.com, i.ebayimg.com, q.ebaystatic.com, p.ebaystatic.com, thumbs4.ebaystatic.com, rover.ebay.com, srx.main.ebayrtm.com, rtm.ebaystatic.com, ad.doubleclick.net, pics.ebaystatic.com, s0.2mdn.net

www.yahoo.com: www.yahoo.com, l.yimg.com, us.bc.yahoo.com, v4test.yahoo.com, v4test2.yahoo.com, v4test3.yahoo.com, dstest.yahoo.com, dstest2.yahoo.com, dstest3.yahoo.com, ad.doubleclick.net

                                                     www.cnn.com   www.ebay.com   www.yahoo.com
SPDY domains                                                   1              1               1
Non-SPDY domains                                               9             11               9
Resources fetched over SPDY                                   40             20              48
Resources fetched over HTTP                                   46             37              26
“1st party” resources that could have
been SPDY but were NOT in Guy’s test                          31             34              24

I hope you can now see why I don’t discredit Guy’s test results.  On these pages, 25-50% of the 1st-party-controlled resources which could have been loaded over SPDY weren’t loaded over SPDY at all.  If you only partially use SPDY, you only get partial results.  That’s fine by me.

Nobody should think I’m discrediting Guy’s work here.  He’s done a great job with great vigor, and it takes an incredible amount of time to do these tests.  He’s planning to do more tests, and I’m very thankful that he is doing this and that Akamai is letting him do so.

In the next wave of tests, I expect we’ll see SPDY’s benefits increase.  Keep in mind that your average site isn’t going to see a 2x speed boost.   The overall benefit of SPDY depends on many factors, and websites today have not yet been tuned for SPDY.  Most sites will see benefits in the 5-20% range (like Google did).   A few will see 50% better.  A few will see worse.  Everyone will benefit from new optimization possibilities, less complex websites, and a more network- and mobile-friendly protocol. More testing like Guy’s is the key to a better HTTP/2.0.

The Web only Works Thanks to Reload… (and why the mobile web fails)

When you build a mobile app that uses the network, it is instantly clear that your app needs to be robust against all sorts of network failures:

  • network completely down
  • network transitioning from WiFi to 3G
  • network insanely slow (EDGE!)
  • network timeouts – is 5s long enough to wait? 10s? 30s?
  • network radio warmup is slow
  • what happens if your app is terminated before finishing a critical network request?
  • etc…
Dealing with these is hard, but not impossible. Applications retry at various levels all the time, trading off battery life and user-perceived performance. After enough work, you can make the app functional.
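To make that concrete, here is a minimal sketch (in TypeScript, written for this post rather than taken from any real app) of the retry-with-backoff wrapper that robust apps end up putting around their network calls; the attempt count, delays, timeout, and URL are illustrative assumptions, not recommendations:

    // Sketch: app-level retry with exponential backoff and a per-attempt timeout.
    // Attempt counts, delays, and the timeout are illustrative values only.
    async function fetchWithRetry(url: string, attempts = 4): Promise<Response> {
      let delayMs = 500;                                   // initial backoff
      for (let i = 0; i < attempts; i++) {
        const controller = new AbortController();
        const timer = setTimeout(() => controller.abort(), 10_000);
        try {
          const res = await fetch(url, { signal: controller.signal });
          if (res.ok) return res;                          // success
        } catch {
          // network down, timed out, radio still warming up, etc.
        } finally {
          clearTimeout(timer);
        }
        await new Promise(resolve => setTimeout(resolve, delayMs));
        delayMs *= 2;                                      // back off to save battery
      }
      throw new Error(`giving up on ${url} after ${attempts} attempts`);
    }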

    But if you try to write an app using HTML5, how do you do this?

    You can’t.

The web simply isn’t designed for partial network failures on a web page. Web pages these days are composed of hundreds of subresources from multiple sites. What happens when CSS file #5 out of 32 resources fails to load? What happens when you can’t connect to i.amazon.com even though you already loaded the main content from www.amazon.com? Does your application even know? Generally not. You can trap some sorts of errors, but the browser will NOT automatically retry any of these failures for you. Most likely you’ll be left with a web page which renders incorrectly, hangs forever, or throws JavaScript errors all over the page because a critical chunk of code just never got loaded.
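For contrast, here is roughly what a page has to do today just to get a single script subresource retried – a sketch with a hypothetical URL and retry count; the browser will not do any of this for you, and almost no pages bother:

    // Sketch: manually trapping a failed <script> subresource and retrying it.
    function loadScriptWithRetry(src: string, retriesLeft = 2): void {
      const el = document.createElement("script");
      el.src = src;
      el.onerror = () => {
        el.remove();
        if (retriesLeft > 0) {
          loadScriptWithRetry(src, retriesLeft - 1);       // try again
        } else {
          console.error(`critical script never loaded: ${src}`);
          // ...and the rest of the page is probably broken now.
        }
      };
      document.head.appendChild(el);
    }

    loadScriptWithRetry("https://example.com/js/critical.js");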

    Of course, these problems can happen on your desktop, too. But they generally don’t happen as often. And when they do occur, every user easily becomes his own network administrator thanks to the web browser’s handy dandy “reload” button. How many times have you drummed your fingers for a few seconds before reloading a page? Probably a lot! But on mobile, network errors occur *all* the time. Do mobile apps have ‘reload’ buttons? Generally not – users are becoming quite accustomed to apps which can handle their own errors gracefully.

Sadly, I think this is one more nail in the coffin for HTML5 on mobile. Browsers need to be completely overhauled to deal properly with network errors and retries before HTML5 can be a serious contender against native applications.

    Comments on Microsoft’s SPDY Proposal

Microsoft published their SPDY proposal to the IETF today. They call it “HTTP Speed+Mobility”. Here are some quick comments on their proposal.

    a) It’s SPDY!
    The Microsoft proposal is SPDY at its core. They’ve fully retained the major elements of SPDY, including multiplexing, prioritization, and compression, and they’ve even lifted the exact syntax of most of the framing layer – maintaining SYN_STREAM, SYN_RESET, SYN_REPLY, HEADERS, etc.

    It’s a huge relief for me to see Microsoft propose SPDY with a few minor tweaks.

    b) WebSockets Syntax
When SPDY started a couple of years ago, WebSockets didn’t exist. Microsoft proposes taking existing SPDY and changing its syntax to be more like WebSockets. This has no impact on the protocol’s features, but it does make the protocol feel more like other web technologies.

Personally, I don’t think syntax matters much, and I also see value in symmetry across web protocols. I do think the WebSocket syntax is more complicated than SPDY’s today, but it’s not that big a deal. Overall, this part of the Microsoft proposal may make sense. I’m happy that Microsoft has presented it.

    c) Removal of Flow Control
The Microsoft proposal is quick to dismiss SPDY’s per-stream flow control as though it were already handled at the TCP layer. However, this is incorrect. TCP handles flow control for the TCP stream as a whole. Because SPDY introduces multiple concurrent flows within that stream, a new layer of flow control is necessary. Imagine you were sending 10 streams to a server, and one of those streams stalled out (for whatever reason). Without per-stream flow control, you either have to terminate all the streams, buffer an unbounded amount of data, or stall all the streams. None of these are good outcomes, and TCP’s flow control is not a substitute for SPDY’s.
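To illustrate the idea (this is a sketch of per-stream windowing in general, not SPDY’s exact WINDOW_UPDATE wire format, and the window sizes are made up), each stream draws on its own window, so a stalled receiver only blocks its own stream:

    // Sketch: per-stream send windows multiplexed over one connection.
    class MuxedSender {
      private windows = new Map<number, number>();   // streamId -> bytes we may still send

      openStream(streamId: number, initialWindow = 64 * 1024): void {
        this.windows.set(streamId, initialWindow);
      }

      // How many of `bytes` we are allowed to send right now on this stream.
      trySend(streamId: number, bytes: number): number {
        const win = this.windows.get(streamId) ?? 0;
        const allowed = Math.min(win, bytes);
        this.windows.set(streamId, win - allowed);
        return allowed;    // 0 blocks only this stream; the other streams keep flowing
      }

      // The receiver tells us it consumed data (SPDY does this with WINDOW_UPDATE frames).
      onWindowUpdate(streamId: number, bytes: number): void {
        this.windows.set(streamId, (this.windows.get(streamId) ?? 0) + bytes);
      }
    }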

This may be an example of where SPDY’s implementation experience trumps any amount of protocol theory. For those who remember, earlier drafts of SPDY didn’t have flow control. We were aware of the issue long ago, but until we fully implemented SPDY, we didn’t know how badly it was needed nor how to do it in a simple, performant manner. I can’t emphasize enough how important it is, with protocols, to actually implement your proposals. If you don’t implement them, you don’t really know whether they work.

    d) Optional Compression
    HTTP is full of “optional” features. Experience shows that if we make features optional, we lose them altogether due to implementations that don’t implement them, bugs in implementations, and bugs in the design. Examples of optional features in existing HTTP/1.1 include: pipelining, chunked uploads, absolute URIs, and there are many more.

    Microsoft did not include any benchmarks for their proposal, so I don’t really know how well it performs. What I do know, however, is that the header compression which Microsoft is advocating be optional was absolutely critical to mobile performance for SPDY. If the Microsoft proposal were truly optimized for mobile, I suspect it would be taking more aggressive steps toward compression rather than pulling it out.

    Lastly, I’m puzzled as to why anyone would propose removing the header compression. We could argue about which compression algorithm is best, but it has been pretty non-controversial that we need to start compressing headers with HTTP. (See also: SPDY spec, Mozilla example, UofDelaware research)
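For a feel of the numbers, here is a small Node/TypeScript sketch – gzip standing in for whatever algorithm HTTP/2.0 ends up choosing, and the header values are made up. A single typical request header block already compresses well, and SPDY does better still by keeping the compression context across requests:

    import { gzipSync } from "zlib";

    // A made-up but typical request header block; UA strings and cookies dominate.
    const headers =
      "GET /index.html HTTP/1.1\r\n" +
      "Host: www.example.com\r\n" +
      "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19\r\n" +
      "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n" +
      "Accept-Encoding: gzip,deflate\r\n" +
      "Cookie: sessionid=8e1b0c6c4a5d4e6f; prefs=lang%3Den%7Ctz%3DUTC\r\n\r\n";

    const compressed = gzipSync(Buffer.from(headers));
    console.log(`raw: ${headers.length} bytes, gzipped: ${compressed.length} bytes`);
    // On a slow mobile uplink, shaving bytes from every request adds up quickly,
    // since nearly identical headers repeat on every request of a page load.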

    e) Removal of SETTINGS frames
    SPDY has the promise of “infinite flows” – that a client can make as many requests as it wants. But this is a jedi mind trick. Servers, for a variety of reasons, still want to limit a client to a reasonable number of flows. And different servers have very different ideas about what “reasonable” is. The SETTINGS frame is how servers communicate to the client that they want to do this.

    I’m guessing this is an oversight in the Microsoft proposal.
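For a sketch of what the server is trying to achieve with that limit (this shows the behavior, not the SETTINGS wire format; the limit value is hypothetical), a client that honors an advertised maximum simply queues the excess requests rather than having them reset:

    // Sketch: client-side throttle driven by a server-advertised stream limit.
    class StreamLimiter {
      private available: number;
      private waiters: Array<() => void> = [];

      // The limit would come from the server's SETTINGS-style advertisement.
      constructor(maxConcurrentStreams: number) {
        this.available = maxConcurrentStreams;
      }

      async withStream<T>(request: () => Promise<T>): Promise<T> {
        if (this.available > 0) {
          this.available--;                                  // take a free slot
        } else {
          await new Promise<void>(resolve => this.waiters.push(resolve));  // wait for one
        }
        try {
          return await request();
        } finally {
          const next = this.waiters.shift();
          if (next) next();                                  // hand the slot to the next waiter
          else this.available++;                             // or return it to the pool
        }
      }
    }

    const limiter = new StreamLimiter(100);                  // hypothetical advertised limit
    // limiter.withStream(() => fetch("https://example.com/resource"));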

    f) Making Server Push Optional
Microsoft proposes to make server push optional. There is a fair discussion to be had about removing Server Push for a number of reasons, but to make it optional seems like the worst of all worlds. Server Push is not trivial, and is definitely one of the most radical portions of the protocol. To make it optional without removing it leaves implementors with the burden of all of the complexity and potentially none of the benefits.

    The authors offer opinions as to the merits of Server Push, but offer no evidence or data to back up those claims.

g) Removal of IP Pooling
    The Microsoft writeup eliminates connection pooling, but it is unclear why. Connection pooling is an important element of SPDY both for performance and for efficiency on the network. I’m not sure why Microsoft would recommend removing this, especially without benchmarks, data, or implementation details. The benchmarks clearly show it has measurable benefit, and without this feature, mobile performance for the Microsoft proposal will surely be slower than for SPDY proper.
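For readers unfamiliar with the feature, connection pooling here means reusing an already-open SPDY connection for a second hostname when that hostname resolves to the same server IP and the connection’s certificate also covers it – which skips a whole DNS/TCP/SSL setup. A rough sketch of that decision, with hypothetical helper types and resolver:

    // Sketch of the connection-pooling decision; `resolve` and the fields below are
    // hypothetical stand-ins for real DNS and certificate machinery.
    interface PooledConnection {
      remoteIp: string;
      certCoveredHosts: string[];      // hostnames the TLS certificate is valid for
    }

    async function canReuse(
      conn: PooledConnection,
      host: string,
      resolve: (h: string) => Promise<string[]>,
    ): Promise<boolean> {
      const sameServer = (await resolve(host)).includes(conn.remoteIp);
      const certOk = conn.certCoveredHosts.includes(host);   // e.g. a *.example.com SAN match
      return sameServer && certOk;     // both hold: this host can ride the existing connection
    }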

    Conclusion
I’m happy with the writeup from Microsoft. I view their proposal as agreement that the core of SPDY is acceptable for HTTP/2.0, which should help move the standardization effort along more quickly. They’ve also raised a couple of very reasonable questions. It’s clear that Microsoft hasn’t done much testing or experimentation with their proposal yet. I’m certain that with data, we’ll come to resolution on all fronts quite quickly.

    SPDY Momentum Fueled by Juggernauts

    Recent SPDY news comes from some big brands: Twitter, Mozilla, Amazon, Apache, Google.

    Looking forward to seeing what comes next!

    Rethinking SSL for Mobile Apps

Mobile apps use HTTP. But they usually don’t use it to transfer HyperText – rather, they use it to transfer JSON, XML, or other data formats. Just as for their web counterparts, secure transmission is desirable.

But, if you ever trace a fresh SSL connection, you know that it’s a nasty process:

    • DNS
    • TCP handshake
    • SSL handshake
    • Server sends certificate
    • DNS to CA
    • TCP to CA
    • OCSP to CA
    • Finish SSL handshake
    • Finally do what you wanted to do….

    SSL is designed so that you can pick up some random certificate and check it dynamically. This is a good thing for the web, where the user coasts from site to site, happily discovering new content which needs new validation.

    But this process is pretty costly, especially on mobile networks. For my own service, I just did a quick trace over 3G:

    • DNS (1334ms)
    • TCP handshake (240ms)
    • SSL handshake (376ms)
    • Follow certificate chain (1011ms) — server should have bundled this.
    • DNS to CA (300ms)
    • TCP to CA (407ms)
    • OCSP to CA #1 (598ms) — StartSSL CA uses connection close on each!
    • TCP to CA #2 (317ms)
    • OCSP to CA #2 (444ms)
    • Finish SSL handshake (1270ms)

    With the web, this verification process makes some sense – you ask the CA to be your trust point and verify that he trusts the certificate provided.

But why do this with a mobile app? Your mobile app already carries a lot of trust – your users downloaded it from you, it’s signed by Apple, and if the code has been compromised, well, heck, your app isn’t really running anyway.

    What we really want for mobile apps is to bake the server’s certificate into the app. If the server’s certificate needs to change, you can auto-update your app. In the example above, doing so would have shaved about 3000ms off application startup time.
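Here is a minimal sketch of the idea using Node’s TLS API – the hostname and fingerprint are placeholders, and a real app would ship its server’s actual SHA-256 fingerprint (with a self-signed certificate you would also pass that certificate via the ca option):

    import * as tls from "tls";

    // Placeholders; bake the real values into the app and rotate them via app update.
    const PINNED_HOST = "api.example.com";
    const PINNED_FINGERPRINT256 = "<sha256 fingerprint of the server certificate>";

    const socket = tls.connect({
      host: PINNED_HOST,
      port: 443,
      // Runs after the normal chain check; returning an Error aborts the handshake.
      checkServerIdentity: (host, cert) => {
        if (cert.fingerprint256 !== PINNED_FINGERPRINT256) {
          return new Error(`certificate for ${host} does not match the pinned fingerprint`);
        }
        return undefined;              // pin matched
      },
    }, () => {
      console.log("pinned TLS connection established");
      socket.end();
    });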

    The downside of this is that if your certificate changes, your app won’t verify. Then what to do? Simple – force an auto update.

    There is another advantage to this approach. If you can verify your own certs, you don’t need a CA provided certificate anyway. These silly 1-2 year expirations are no longer necessary. Sign your own cert, and verify it yourself. Since our CAs have been getting hacked left and right in 2011, this is probably even more secure.

    PS: SSL is hard. In this one trace, I can spot at *least* 3 low-hanging-fruit optimizations. I haven’t mentioned them, because they are pervasive everywhere on the net. There are errors here at every level – the client is missing opportunities, the server is missing opportunities, and the CA is missing opportunities! It’s no wonder that SSL is slow. The chance that your combination of client + server + CA will have some dumb performance bug is ~99%.

    Origin http://localhost is not allowed by Access-Control-Allow-Origin

    I had a frustrating day. I’m writing some simple ajax/xhr tests, and I can’t get the browser to issue my requests:

    Origin http://localhost is not allowed by Access-Control-Allow-Origin

Searching on Google, it’s clear that lots of other people are hitting the same problem, but nobody has an answer – the browser police are being overzealous, protecting us from localhost – really?

Fortunately, Chrome has a workaround. Use this command line:

chrome.exe --disable-web-security

    And you can get your job done.
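The flag gets you unblocked, but if you control the test server there is a gentler option: have the server send the CORS header itself. A minimal Node/TypeScript sketch, with an arbitrary port and response:

    import * as http from "http";

    // Tiny test server that lets pages served from http://localhost call it via XHR.
    http.createServer((req, res) => {
      res.setHeader("Access-Control-Allow-Origin", "http://localhost");  // or "*" while testing
      if (req.method === "OPTIONS") {                 // CORS preflight
        res.setHeader("Access-Control-Allow-Methods", "GET, POST");
        res.writeHead(204);
        res.end();
        return;
      }
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ ok: true }));
    }).listen(8080, () => console.log("test server on http://localhost:8080"));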

    SPDY configuration: tcp_slow_start_after_idle

If you’re a SPDY server implementor, you’ve likely already read about the impact of CWND. Fortunately, the TCP implementors now largely agree that we can safely increase the initial CWND, and the standard will likely change soon. The default Linux kernel implementation already has.

    But, there is a second cwnd-related kernel flag which is not often mentioned. It’s important in all cases, but particularly important if you’re trying to establish long-lived connections. It’s not just important to SPDY – it’s important for HTTP keepalives or pipelines too. And many of the large web service providers are already tuning it:

      > sysctl -a | grep tcp_slow_start_after_idle
      net.ipv4.tcp_slow_start_after_idle = 1
      

    At casual glance, you probably think “this sounds good, after a minute or so, it will go back into slow start mode”. That is fine, right?

Not quite. “Idle” in this case doesn’t mean a ‘minute or so’. In fact, it doesn’t even mean a second. This flag comes from RFC 2861’s recommendation, which states that cwnd be cut in half with each RTT of idleness. That means a persistently-held-open connection degrades back to the performance of an un-warmed connection very quickly.

    So why does this matter? If you’re attempting to use a long-lived SPDY connection and think that the initial CWND won’t affect you because you’re only opening one connection anyway, you’re wrong. The slow-start-after-idle will still get you.

    While there has been a tremendous amount of investigation and discussion about the initial cwnd value, I’m not aware of any recent debate about the slow-start-after-idle. I know that many websites are already disabling this flag to make HTTP keepalive connections perform more reasonably. Sadly, I can’t find any research which actually measured the effects of this behavior in the real world, so I can’t fall back on any real data. Given how aggressive TCP already is at backing off should network congestion change, I see no reason to enable this flag. Further, if you’re helping the net by dropping from N connections to 1, there is no reason you should be further penalized for your good deeds! Turn this one off.
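For reference, turning it off on Linux is a single sysctl (add it to /etc/sysctl.conf to persist across reboots):

      > sysctl -w net.ipv4.tcp_slow_start_after_idle=0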

    SPDY of the Future Might Blow Your Mind Today

    This post is definitely for protocol geeks.

    SPDY has been up and running in the “basic case” at Google for some time now. But I never wrote publicly about some wicked cool possibilities for SPDY in the future. (Much to my surprise, it may be that someone is doing them today already!)

To start this discussion, let’s consider how the web basically works today. In this scenario, we’ve got a browser with 3 tabs open:

As you can see, these pages use a tremendous number of concurrent connections. This pattern has been measured with both Firefox and Chrome. Many mobile browsers today cap connections at lower levels due to hardware constraints, but their desktop counterparts generally don’t, because the only way to get true parallelism with HTTP is to open lots of connections. The HTTPArchive adds more good data to the mix, showing that an average web page today will use data from 12 different domains.

Each of these connections needs a separate handshake to the server. Each of these connections occupies a slot in your ISP’s NAT table. Each of these connections needs to warm up the TCP Slow Start algorithm independently (Slow Start is how TCP learns how much data your Internet connection can handle). Eventually, the connections feed out onto the Internet and on to the sites you’re visiting. It’s impressive this system works as well as it does, for it is certainly not an efficient use of TCP. Jim Gettys, one of the authors of HTTP, has observed these inefficiencies and written about the effects of HTTP’s connection management in his work on ‘bufferbloat’.

    SPDY of Today

A first step to reducing connection load is to migrate sites to SPDY. SPDY resides side by side with HTTP, so not everyone needs to move to SPDY at the same time. Pages that do move to SPDY get reduced page load times and are transmitted with always-on security. On top of that, these pages are much gentler on the network too. Suddenly those 30-75 connections per page evaporate into only 7 or 8 connections per page (a little less than one per domain). For large site operators, this can have a radical effect on overall network behavior. Note that early next year, when Firefox joins Chrome in implementing SPDY, more than 50% of users will be able to access your site using SPDY.

    SPDY of the Future

Despite its coolness, there is an aspect of SPDY that doesn’t get much press yet (because nobody is doing it). Kudos to Amazon’s Kindle Fire for inspiring me to write about it. I spent a fair amount of time running network traces of the Kindle Fire, and I honestly don’t know quite what they’re doing yet. I hope to learn more about it soon. But based on what I’ve seen so far, it’s clear to me that they’re taking SPDY far beyond where Chrome or Firefox can take it today.

    The big drawback of the previous picture of SPDY is that it requires sites to individually switch to SPDY. This is advantageous from a migration point of view, but it means it will take a long time to roll out everywhere. But, if you’re willing to use a SPDY gateway for all of your traffic, a new door opens. Could mobile operators and carriers do this today? You bet!

Check out the next picture of a SPDY browser with a SPDY gateway. Because SPDY can multiplex many connections, the browser can now put literally EVERY request onto a single SPDY connection. Any time the browser needs to fetch a resource, it can send the request right away, without needing a DNS lookup, a TCP handshake, or even an SSL handshake. On top of that, every request is secure, not just those that go to SSL sites.

    Wow! This is really incredible. They’ve just taken that massive ugly problem of ~200 connections to the device and turned it into 1! If your socks aren’t rolling up and down right now, I’m really not sure what would ever get you excited. To me, this is really exciting stuff.

Some of you might correctly observe that we still end up with a lot of connections out the other end (past the SPDY gateway). But keep in mind that the bottleneck of the network today is the “last mile” to your house. Bandwidth is far higher and latencies are far lower on the general Internet than they are on that last mile. Enabling SPDY on that link is the most important of them all. And the potential network efficiency gains here are huge for mobile operators and ISPs. Because latencies are better on the open Internet, it should still yield reduced traffic on the other side – but this is purely theoretical. I haven’t seen any measurement of it yet. Maybe Amazon knows 🙂

    More Future SPDY

Finally, as an exercise for the reader, I’ll leave it to you to imagine the possibilities of SPDY in light of multiplexing many sites, each with its own end-to-end encryption. In the diagram above, SSL is still end-to-end, so starting an SSL conversation still requires a few round trips. But maybe we can do even better…

    SPDY is not hard. Securing the Internet is.

    The F5 folks wrote a little about SPDY a few weeks ago. It’s a nice write up. But I want to challenge one particular point of it which I commonly hear:

    “The most obvious impact to any infrastructure between a SPDY-enabled client and server is that it drives intermediate processing back to layer 4, to TCP”

This isn’t actually true. SPDY is not what makes load balancing or hierarchical caching difficult; SSL is. But even blaming SSL is a bit unfair – any protocol which introduces encryption to prevent 3rd-party tampering with the data stream is going to have this problem.

    In other words, it’s not deploying SPDY that is hard, it’s securing the web that is hard.

On the contrary, SPDY actually makes deployment of secure content easier. One of the common complaints against using SSL is performance – both in terms of client latency and server scalability. When SSL is combined with SPDY, the performance objection is substantially lessened.

    Now, don’t get me wrong, I am sympathetic to the difficulty of securing the web, and we need a lot more tools, debugging, and effort to make it simpler and cheaper for everyone. This will be especially difficult for infrastructure solutions which leverage the fact that HTTP is unsecured to do L7 packet analysis. But that doesn’t change the fact that we live in an electronic world full of bad guys. Whenever we ultimately decide to protect the web, it’s going to be hard. SPDY doesn’t create this problem at all.