Followup to “Not as SPDY as You Thought”


In the last couple of weeks many people have asked me to comment on Guypo’s benchmark blog post, “Not as SPDY as You Thought”.  Guy shared the post with me before he published it.  Overall, I disagree with his title, but I don’t much disagree with his results, so I haven’t felt pressed to comment.  He tested something that nobody else has tested, and after reviewing his methodology, I find it mostly fine. Some suggestions for improvement have been made, which he was very open to, and we’ll likely see additional test results soon.  But his results are not contrary to Google’s or my own; they’re just a different test.

The reason his results aren’t contradictory is that Guy’s test doesn’t measure full SPDY page loads; it measures partial ones.  More specifically, he tested this case:  if you upgrade your primary domain to SPDY, but not most of your other domains, your CDN, etc., how does SPDY perform?  This is a perfectly valid case to test – especially since sites may take an incremental approach to upgrading.  And I’m not surprised at all that if you only upgrade half of your page to SPDY, the results are not as good as if you upgrade all of it.

In the report, Guy breaks out domains into “1st party domains” and “3rd party domains”.  He argues that since you don’t have control over the 3rd party content servers, that content may never get SPDY-ized, and therefore his test is valid.  OK – that’s a good point.  But how do we define “3rd party”?  I consider “1st party” to be any content which you, as the site owner, have control to change directly.  So if you load your main content from www.google.com and your images from images.google.com, those are both 1st party domains.  Unfortunately, Guy’s classifier did not classify them this way.
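To make the distinction concrete, here is a minimal sketch of the kind of classifier I have in mind – my own illustration, not Guy’s actual code. A resource counts as 1st party if its registrable domain matches the page’s, or if it appears on a list of domains the site owner is known to control. The `ownerControlled` list and helper names are hypothetical:

```typescript
// Hypothetical list of extra domains the site owner controls
// (CDNs, image hosts, etc.), drawn from the pages discussed below.
const ownerControlled = new Set([
  "ebaystatic.com",
  "ebayimg.com",
  "turner.com",
  "yimg.com",
]);

// Naive registrable-domain extraction: the last two labels. A real
// classifier would consult the Public Suffix List to handle names
// like example.co.uk correctly.
function registrableDomain(host: string): string {
  return host.split(".").slice(-2).join(".");
}

function isFirstParty(pageHost: string, resourceHost: string): boolean {
  const page = registrableDomain(pageHost);
  const res = registrableDomain(resourceHost);
  return res === page || ownerControlled.has(res);
}

// images.google.com is 1st party to www.google.com; an ad network is not.
console.log(isFirstParty("www.google.com", "images.google.com")); // true
console.log(isFirstParty("www.ebay.com", "ir.ebaystatic.com"));   // true
console.log(isFirstParty("www.ebay.com", "ad.doubleclick.net"));  // false
```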

To understand what I mean, let’s take a look at the domains used on a few sample pages and how his test loaded resources from them.  I simply picked three pages from his test results.   Every page tested is different, but the patterns below are common to many of the top websites.

Domains used on each page:

  • www.cnn.com: www.cnn.com, icompass.insightexpressai.com, z.cdn.turner.com, i.cdn.turner.com, www.facebook.com, ad.insightexpressai.com, s-static.ak.fbcdn.com, svcs.cnn.com, gdyn.cnn.com, s-external.ak.fbcdn.com
  • www.ebay.com: www.ebay.com, ir.ebaystatic.com, i.ebayimg.com, q.ebaystatic.com, p.ebaystatic.com, thumbs4.ebaystatic.com, rover.ebay.com, srx.main.ebayrtm.com, rtm.ebaystatic.com, ad.doubleclick.net, pics.ebaystatic.com, s0.2mdn.net
  • www.yahoo.com: www.yahoo.com, l.yimg.com, us.bc.yahoo.com, v4test.yahoo.com, v4test2.yahoo.com, v4test3.yahoo.com, dstest.yahoo.com, dstest2.yahoo.com, dstest3.yahoo.com, ad.doubleclick.net

|                                                                            | www.cnn.com | www.ebay.com | www.yahoo.com |
|----------------------------------------------------------------------------|-------------|--------------|---------------|
| SPDY domains                                                               | 1           | 1            | 1             |
| Non-SPDY domains                                                           | 9           | 11           | 9             |
| Resources fetched over SPDY                                                | 40          | 20           | 48            |
| Resources fetched over HTTP                                                | 46          | 37           | 26            |
| “1st party” resources that could have been SPDY but were NOT in Guy’s test | 31          | 34           | 24            |

I hope you can now see why I don’t discredit Guy’s test results.  On these pages, 25-50% of the 1st-party-controlled resources which could have been loaded over SPDY weren’t loaded over SPDY at all. If you only partially use SPDY, you only get partial results. That’s fine by me.

Nobody should think I’m discrediting Guy’s work here.  He’s done a great job with great vigor, and it takes an incredible amount of time to do these tests.  He’s planning to do more tests, and I’m very thankful that he is doing this and that Akamai is letting him do so.

In the next wave of tests, I expect we’ll see SPDY’s benefits increase.  Keep in mind that the average site isn’t going to see a 2x speed boost.   The overall benefit of SPDY is conditional on many factors, and websites today have not yet been tuned for SPDY.   Most sites will see benefits in the 5-20% range (as Google did).   A few will see 50% better.  A few will see worse.  Everyone will benefit from new optimization possibilities, less complex websites, and a more network- and mobile-friendly protocol. More testing like Guy’s is the key to a better HTTP/2.0.

The Web Only Works Thanks to Reload… (and why the mobile web fails)

When you build a mobile app that uses the network, it is instantly clear that your app needs to be robust against all sorts of network failures:

  • network completely down
  • network transitioning from WiFi to 3G
  • network insanely slow (EDGE!)
  • network timeouts – is 5s long enough to wait? 10s? 30s?
  • network radio warmup is slow
  • what happens if your app is terminated before finishing a critical network request?
  • etc…
Dealing with these is hard, but not impossible. Applications retry at various levels, trading off battery life against user-perceived performance, all the time. After enough work, you can make the app functional.
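For contrast, here is a rough sketch – my own illustration, with made-up timeout and retry values – of the kind of retry-with-backoff wrapper a native app typically puts around its network requests:

```typescript
// Application-level retry with exponential backoff: the sort of defensive
// networking native apps implement for themselves. The timeout and retry
// counts are illustrative; real apps tune them against battery life and
// user-perceived latency.
async function fetchWithRetry(
  url: string,
  maxAttempts = 3,
  timeoutMs = 5000, // is 5s long enough? 10s? 30s? The app gets to decide.
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const response = await fetch(url, { signal: controller.signal });
      if (response.ok) return response;
      lastError = new Error(`HTTP ${response.status}`);
    } catch (err) {
      lastError = err; // network down, WiFi-to-3G transition, timeout, etc.
    } finally {
      clearTimeout(timer);
    }
    // Exponential backoff before the next attempt: 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
  }
  throw lastError;
}
```

Nothing here is exotic; the point is that the application owns the retry policy end to end.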

But if you try to write an app using HTML5, how do you do this?

You can’t.

The web simply isn’t designed for partial network failures on a web page. These days, web pages are composed of hundreds of subresources drawn from multiple sites. What happens when CSS file #5 out of 32 resources fails to load? What happens when you can’t connect to i.amazon.com even though you already loaded the main content from www.amazon.com? Does your application even know? Generally not. You can trap some sorts of errors, but the browser will NOT automatically retry any of these failures for you. Most likely you’ll be left with a web page which renders incorrectly, hangs forever, or throws JavaScript errors across the whole page because a critical piece of code just never got loaded.
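You can get partway there by hand. Here is a rough sketch – again my own, and far from complete – of what manually retrying a single failed stylesheet looks like; now imagine wiring this up reliably for hundreds of subresources:

```typescript
// Manually retrying one failed stylesheet. The page has to wire this up
// per-resource, because the browser does none of it automatically.
function loadStylesheet(href: string, retriesLeft = 2): void {
  const link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = href;
  link.onerror = () => {
    link.remove();
    if (retriesLeft > 0) {
      // Retry after a short delay; meanwhile the page renders unstyled.
      setTimeout(() => loadStylesheet(href, retriesLeft - 1), 2000);
    }
    // After the last retry fails, there is no good recovery story.
  };
  document.head.appendChild(link);
}
```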

Of course, these problems can happen on your desktop, too. But they generally don’t happen as often. And when they do occur, every user easily becomes his own network administrator thanks to the web browser’s handy-dandy “reload” button. How many times have you drummed your fingers for a few seconds before reloading a page? Probably a lot! But on mobile, network errors occur *all* the time. Do mobile apps have ‘reload’ buttons? Generally not – users have become quite accustomed to apps which handle their own errors gracefully.

Sadly, I think this is one more nail in the coffin for HTML5 on mobile. Browsers need to be completely overhauled to deal properly with network errors and retries before HTML5 can be a serious contender to native applications.