Fixing the CEO Pay Problem

I’m generally a pretty free-market kind of guy. But, when it comes to CEO pay, there is no doubt in my mind that America is screwed up and that the free market is failing us. This isn’t the biggest of our problems, but it raises unnecessary doubt about “corporate greed” and about the viability of the American Dream.

If you don’t believe me, check out some of the compensation paid to CEOs of companies that are losing massive amounts of money:

  • Aubrey McClendon, Chesapeake Energy, paid $18.6M while the company lost $5.8B.
  • Carol Bartz, Yahoo, paid $39.0M in the same year she was fired.
  • Timothy Armour, Janus Capital, paid $11.4M while the company lost $757.1M.
  • Rupert Murdoch, News Corp, paid $18.0M while the company lost $3.4B.
  • Robert Stevens, Lockheed Martin, paid $21.7M while the company lost $3.0B.
  • Daniel Hesse, Sprint, paid $10.3M while the company lost $2.4B.
  • Gregory Brown/Sanjay Jha, Motorola, paid $11.7M while the company lost $111M.
  • Ronald Hovsepian, Novell, paid $5.2M while the company lost $214.6M.
  • William Klesse, Valero, paid $11.3M while the company lost $353M.
  • Klaus Kleinfeld, Alcoa, paid $14.3M while the company lost $985M.
  • Ahmad Chatila, MEMC Electronic Materials, paid $16.7M while the company lost $68.3M.
  • The list goes on and on…

Still not convinced? Why do CEOs get golden parachutes? Why did Leo Apotheker get paid $25M after getting fired 11 months into the job? Do you get one? It makes no sense to ever have a guaranteed payout even if you screw up.

Mark Cuban once again puts this in perspective by demonstrating that the risk-reward for CEOs is out of whack.

Fortunately, it is easy to fix.

The free market should remain free. If a company wants to pay a CEO $50M in advance, it is free to do so. But the Board of Directors, whose sole responsibility is to the shareholders’ best interests, needs to be able to show that such a plan is good for the shareholders. If it can’t, the Directors should be held personally liable.

I’d like to see the SEC adopt new rules about executive pay – including any form of guaranteed pay, pay for non-performance, pay while the company is losing money, or pay for early termination. These rules should outline a very strict and narrow definition of when such compensation would be “good for shareholders”. Common sense should win out here, and the right answer is “almost never”. We all know that if an employee isn’t working out, you should fire them with impunity. CEOs are no exception.

As for the CEOs that are already beneficiaries of guaranteed payouts – if they have any character at all, they should forfeit these benefits and ask their Board of Directors to rework their compensation to something in line with what the rest of the company gets.

SPDY of the Future Might Blow Your Mind Today

This post is definitely for protocol geeks.

SPDY has been up and running in the “basic case” at Google for some time now. But I never wrote publicly about some wicked cool possibilities for SPDY in the future. (Much to my surprise, it may be that someone is doing them today already!)

To start this discussion, let’s consider how the web basically works today. In this scenario, we’ve got a browser with 3 tabs open:

As you can see, these pages use a tremendous number of concurrent connections. This pattern has been measured with both Firefox and Chrome. Many mobile browsers today cap connections at lower levels due to hardware constraints, but their desktop counterparts generally don’t, because the only way to get true parallelism with HTTP is to open lots of connections. The HTTPArchive adds more good data into the mix, showing that an average web page today will use data from 12 different domains.

Each of these connections needs a separate handshake to the server. Each of these connections occupies a slot in your ISP’s NAT table. Each of these connections needs to warm up the TCP Slow Start algorithm independently (Slow Start is how TCP learns how much data your Internet connection can handle). Eventually, the connections feed out onto the internet and on to the sites you’re visiting. It’s impressive this system works at all, for it is certainly not an efficient use of TCP. Jim Gettys, one of the authors of HTTP, has observed these inefficiencies and written about the effects of HTTP’s connection management on ‘bufferbloat’.
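To put the handshake overhead in concrete terms, here is a back-of-the-envelope sketch. The round-trip time and connection counts below are illustrative assumptions, not measurements, and real browsers open connections in parallel (so the serial total overstates wall-clock time) – but every one of those handshakes still costs network round trips:

```python
# Back-of-the-envelope comparison of connection-setup cost:
# many short HTTP connections vs. one multiplexed connection.
# RTT_MS and the connection counts are illustrative assumptions.

RTT_MS = 100  # assumed round-trip time on a typical last mile

def setup_cost_ms(connections, ssl=False):
    """Round trips spent just opening connections (no payload sent).

    Each TCP connection costs one RTT for the SYN/SYN-ACK handshake;
    an SSL handshake adds roughly two more round trips on top.
    Browsers open connections in parallel, so treat this as total
    network work, not wall-clock latency.
    """
    per_conn_rtts = 1 + (2 if ssl else 0)
    return connections * per_conn_rtts * RTT_MS

# A page opening 30 plain-HTTP connections vs. 1 SPDY-over-SSL connection:
http_cost = setup_cost_ms(30)            # 30 TCP handshakes
spdy_cost = setup_cost_ms(1, ssl=True)   # 1 TCP + 1 SSL handshake

print(http_cost)  # 3000 ms of handshake work
print(spdy_cost)  # 300 ms
```

Even with this crude model, one multiplexed (and encrypted) connection does an order of magnitude less setup work than the unencrypted many-connection pattern.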

SPDY of Today

A first step to reduce connection load is to migrate sites to SPDY. SPDY resides side by side with HTTP, so not everyone needs to move to SPDY at the same time. But pages that do move to SPDY will have reduced page load times and be transmitted with always-on security. On top of that, these pages are much gentler on the network too. Suddenly those 30-75 connections per page evaporate into only 7 or 8 connections per page (a little less than one per domain). For large site operators, this can have a radical effect on overall network behavior. Note that early next year, when Firefox joins Chrome in implementing SPDY, more than 50% of users will be able to access your site using SPDY.
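A rough sketch of where those per-page connection counts come from, using the HTTPArchive figure above. The per-domain numbers are illustrative assumptions (typical browser defaults), not measurements:

```python
# Rough connection-count arithmetic for an average page.
# The per-domain figures are assumptions, not measurements.

domains_per_page = 12        # HTTPArchive average cited above
http_conns_per_domain = 6    # a common desktop browser cap per domain

http_total = domains_per_page * http_conns_per_domain
print(http_total)  # 72 -- near the high end of the 30-75 range

# With SPDY, each domain needs at most one multiplexed connection,
# and domains served by the same host can even share a connection --
# hence "a little less than one per domain".
spdy_total_upper_bound = domains_per_page * 1
print(spdy_total_upper_bound)  # 12
```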

SPDY of the Future

Despite its coolness, there is an aspect of SPDY that doesn’t get much press yet (because nobody is doing it). Kudos to Amazon’s Kindle Fire for inspiring me to write about it. I spent a fair amount of time running network traces of the Kindle Fire, and I honestly don’t know quite what they’re doing yet. I hope to learn more about it soon. But based on what I’ve seen so far, it’s clear to me that they’re taking SPDY far beyond where Chrome or Firefox can.

The big drawback of the previous picture of SPDY is that it requires sites to individually switch to SPDY. This is advantageous from a migration point of view, but it means it will take a long time to roll out everywhere. But, if you’re willing to use a SPDY gateway for all of your traffic, a new door opens. Could mobile operators and carriers do this today? You bet!

Check out the next picture of a SPDY browser with a SPDY gateway. Because SPDY can multiplex many connections, the browser can now put literally EVERY request onto a single SPDY connection. Now, any time the browser needs to fetch a request, it can send the request right away, without needing to do a DNS lookup, or a TCP handshake, or even an SSL handshake. On top of that, every request is secure, not just those that go to SSL sites.

Wow! This is really incredible. They’ve just taken that massive ugly problem of ~200 connections to the device and turned it into 1! If your socks aren’t rolling up and down right now, I’m really not sure what would ever get you excited. To me, this is really exciting stuff.
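The multiplexing that makes this possible can be sketched in a few lines. This is a deliberately simplified toy – not the real SPDY wire format, which has control frames, flow control, and header compression – but it shows the core idea: every frame carries a stream ID, so many logical requests can interleave on one connection and be cleanly separated at the other end:

```python
# Toy illustration of SPDY-style multiplexing: many logical request
# streams share one connection, each frame tagged with a stream ID.
# Simplified sketch only -- not the actual SPDY framing format.

import struct

def frame(stream_id: int, payload: bytes) -> bytes:
    # [4-byte stream id][4-byte payload length][payload]
    return struct.pack("!II", stream_id, len(payload)) + payload

def demux(data: bytes) -> dict:
    """Split a byte stream of frames back into per-stream payloads."""
    streams, offset = {}, 0
    while offset < len(data):
        stream_id, length = struct.unpack_from("!II", data, offset)
        offset += 8
        streams[stream_id] = streams.get(stream_id, b"") + data[offset:offset + length]
        offset += length
    return streams

# Three requests interleaved on one "connection" (odd stream IDs,
# as SPDY uses for client-initiated streams):
wire = (frame(1, b"GET /index.html") +
        frame(3, b"GET /logo.png") +
        frame(1, b" HTTP/1.1") +
        frame(5, b"GET /app.js"))

print(demux(wire)[1])  # b'GET /index.html HTTP/1.1'
```

Because frames from different streams can interleave, a slow response on one stream no longer blocks the others – which is exactly why a single connection can replace dozens.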

Some of you might correctly observe that we still end up with a lot of connections out the other end (past the SPDY gateway). But keep in mind that the bottleneck of the network today is the “last mile” – the link into your house. Network bandwidth and latencies are orders of magnitude better on the general Internet than they are over that last mile, so enabling SPDY on that link is the most important of them all. And the potential network efficiency gains here are huge for the mobile operators and ISPs. Because latencies are better on the open internet, it should still yield reduced traffic on the other side – but this is purely theoretical. I haven’t seen any measurements of it yet. Maybe Amazon knows :-)

More Future SPDY

Finally, as an exercise to the reader, I’ll leave it to you to imagine the possibilities of SPDY in light of multiplexing many sites, each with their own end-to-end encryption. In the diagram above, SSL is still end-to-end, so starting an SSL conversation still requires a few round trips. But maybe we can do even better….

SPDY is not hard. Securing the Internet is.

The F5 folks wrote a little about SPDY a few weeks ago. It’s a nice write up. But I want to challenge one particular point of it which I commonly hear:

“The most obvious impact to any infrastructure between a SPDY-enabled client and server is that it drives intermediate processing back to layer 4, to TCP”

This isn’t actually true. SPDY is not what makes load balancing or hierarchical caching difficult. SSL is what makes these hard. But even blaming SSL is a bit unfair – any protocol which introduces encryption to prevent third-party tampering with the data stream is going to have this problem.

In other words, it’s not deploying SPDY that is hard, it’s securing the web that is hard.

To the contrary, SPDY actually makes deployment of secure content easier. One of the common complaints against using SSL is that of performance – both in terms of client latency and also server scalability. When SSL is combined with SPDY, the performance objection is substantially lessened.

Now, don’t get me wrong, I am sympathetic to the difficulty of securing the web, and we need a lot more tools, debugging, and effort to make it simpler and cheaper for everyone. This will be especially difficult for infrastructure solutions which leverage the fact that HTTP is unsecured to do L7 packet analysis. But that doesn’t change the fact that we live in an electronic world full of bad guys. Whenever we ultimately decide to protect the web, it’s going to be hard. SPDY doesn’t create this problem at all.