Microsoft to give away Office for home use

This isn’t really news; it’s just a prediction.

With all the new, free office equivalents out there, Microsoft will be giving Office away.  Hooray!  The fact is that the free alternatives are looking pretty good.  If you don’t like Google’s Writely, you can use Zoho.  If you don’t like Zoho, you can use OpenOffice.  The point is that there are lots of viable, free choices.

Now, Microsoft is the only vendor that is deeply entrenched in the corporate market, and that is their stronghold.  One of the biggest threats to that stronghold, however, would be losing the consumer and low-end markets.  As we all know, the tools you learn at home and at school are the tools that you carry with you to the office over time.  College students right now can either spend $199 for Office (that is after the $300 “student discount”), or they can use a free alternative.  Obviously, they will increasingly elect to use the free stuff.  Schools already get lots of donated copies of Office, but it’s not completely free.  These institutions, too, will be looking to cut costs and considering what is free.

So, it is inevitable that Microsoft must curb the spread of free alternatives – otherwise they risk losing small- and medium-sized businesses in the medium term, and the corporations in the long term.  It’s just a matter of when they feel enough pressure at the consumer level to finally give it away.

Regardless of which word processor “wins” the consumer market, one thing is clear – prices are finally going to drop.  Since most of Microsoft’s revenue comes from the corporate arena, this shouldn’t even affect their bottom line too much.  Wow – everyone wins.

Rojo vs Google Reader Review

I just started using the Google Reader application.  It’s easy to use and uncluttered.  For several years I have been using Rojo’s reader.  Here are some initial thoughts about differences between the two products:

1) I like the way the “mark read” feature works in Google Reader.
Marking items as “read” is a tricky thing to do, even though it sounds simple.  Do users manually mark things as read?  Does showing an item on the screen mark it as read?  Google’s product does a great job here – it shows you articles newspaper-style, but only when you scroll down past them (which you usually do while reading) does it mark them as read automatically.  This works great for the user: it adds zero clicks to the process of reading articles, yet tracks read/unread status well.  So far, I like this much better than Rojo, which has always had a hard time getting mark-as-read right.  (I sketch my guess at how Google does it below, after this list.)

2) The home page is where you read your initial set of articles.  Rojo divides this into two tabs of information: “Front Page” and “My Feeds”.  The former tracks what is popular overall, and the latter is what you want to read.  I have always liked Rojo’s Front Page a little.  It has shown me interesting content that I otherwise wouldn’t have discovered.  However, because it is the default front page in Rojo, I most often find myself two clicks from where I really want to be.  Overall, it’s a great feature which I want – when I am bored.  Otherwise, I’d rather just read my stuff.  Google Reader only tackles the second of these, and could definitely be improved by adding the first.

3) Adding feeds seems simpler in Google.  You enter a term, it searches (and in my case found 100% of the feeds I was looking for), and you click the ones you want to add.  Rojo’s feed search has always been a little weak: it is slow, and it doesn’t find results well.  For example, searching for “belshe” somehow doesn’t find my feed.

4) One big feature Rojo has that Google lacks is the Digg-like “Add Mojo” button.  This is a great way for users to promote the content they like.  Google does have a “shared items” feature, but it is really quite different.  Having a popularity counter like Rojo’s or Digg’s would really help.

5) I am a little worried about Google’s lack of foldering.  Gmail suffers from the same problem.  While I am a huge fan of search, I’m not such a fan that I would drop foldering altogether.  How do you manage a large list of feeds without some way to categorize the ones that are related?

Overall, both products are very good.  I think Google’s is simpler and faster, while Rojo offers more features.
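
For the curious, here is roughly how I imagine the zero-click mark-as-read from point 1 works.  This is just my guess at the technique, not Google’s actual code, and the markup names are invented for illustration:

    // A guess at scroll-based mark-as-read; NOT Google Reader's actual code.
    // Assumes each article is rendered as <div class="entry" data-entry-id="...">
    // (invented markup for illustration).
    function watchReadState(markRead: (id: string) => void): void {
      const seen = new Set<string>();
      window.addEventListener("scroll", () => {
        for (const item of Array.from(document.querySelectorAll<HTMLElement>(".entry"))) {
          const id = item.dataset.entryId;
          if (!id || seen.has(id)) continue;
          // Once the item's bottom edge has scrolled above the viewport, the
          // user has read past it -- mark it read, with zero extra clicks.
          if (item.getBoundingClientRect().bottom < 0) {
            seen.add(id);
            markRead(id); // e.g. queue an async request to record the state
          }
        }
      });
    }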

Use NoSpyMail to combat PattyMail

In case you haven’t heard, “PattyMail” is the term coined to describe the sending of email with the intent of spying, the way that HP’s Patricia Dunn allegedly authorized this year.

The idea is simple.  Say you have someone on your board who is sending confidential email to someone they aren’t supposed to, like a competitor or the press.  Simply embed a small HTML image in your confidential e-mail.  Then, in theory, when someone reads the email, the email client will download that image, sending a “ping” back to your webserver.  You can then see which domains are fetching your images, and find your leaker.

“But that doesn’t work!” you say.  The answer is: maybe.  It is true that most modern e-mail clients suppress HTML image fetching by default.  BUT – if the user clicks “show me the images”, then the images are shown.  So, when email comes from a trusted sender, like the chairman of the board, there is a reasonable chance you’ll want to see the graphics too, opening yourself up to HTML spying.

“But that still doesn’t identify the leaker!”, you say.  But you are wrong; this is where the difference between HTML mail and “Spy Mail” comes in.  With HTML mail, you may have an image referenced in the email like:

    <img src="http://www.senderisspying.com/images/logo.jpg">

In this case, you are right: if you forward this document to 10 people, and then one of them forwards it to someone else, you won’t be able to tell which of them did it.  So why not encode identifying data in the image link?  Instead of the link above, you might send a slightly different email to each person, with image links like:

    <img src="http://www.senderisspying.com/potentialleaker1/logo.jpg">

This is SpyMail.  Now, when the sender checks their server logs, they’ll know exactly who the leaker is.  Evidently, this is what Patricia Dunn did.
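
To make the mechanics concrete, here is a minimal sketch of the sender’s side.  Everything in it is invented for illustration (the token scheme, the helper names, the log format); it is certainly not HP’s actual tooling:

    // Sketch of the sender's side of "spymail"; the domain, token scheme,
    // and log format are all invented for illustration.
    import { randomBytes } from "crypto";

    const tokenToRecipient = new Map<string, string>();

    // Give each recipient a unique, opaque token inside the image URL.
    function beaconFor(recipient: string): string {
      const token = randomBytes(8).toString("hex"); // 16 hex chars
      tokenToRecipient.set(token, recipient);
      return `<img src="http://www.senderisspying.com/t/${token}/logo.jpg">`;
    }

    // Later, scan the web server's access log for beacon fetches.  Each hit
    // maps back to exactly one recipient's copy of the email.
    function findLeaks(logLines: string[]): { recipient: string; line: string }[] {
      const leaks: { recipient: string; line: string }[] = [];
      for (const line of logLines) {
        const m = line.match(/GET \/t\/([0-9a-f]{16})\/logo\.jpg/);
        const recipient = m && tokenToRecipient.get(m[1]);
        if (recipient) leaks.push({ recipient, line });
      }
      return leaks;
    }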

It turns out that embedding information in email in a clandestine way is not too hard.  But generally, the sender doesn’t want the recipient to know they are being spied upon, and this is where NoSpyMail comes in, because it can detect the spying.  When you read email with Outlook 2003, it won’t show HTML images by default.  But if you tell it to, it will – and if anyone is spying on you, they’ll get you!  NoSpyMail allows you to view those emails *without* being spied upon.  How does it do this?  It detects images which contain tracking information and forcibly removes the tracker.  The tracking image is skipped, but other images still work, so the reader can read email more safely.  I wish I could say it is guaranteed to work 100% of the time, but it is not.  I do think it catches 95+% of the spymail, though.
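
I won’t publish NoSpyMail’s exact heuristics here, but the general shape of the detection is easy to sketch.  Roughly, and with invented thresholds: treat an image as a tracker if its URL carries per-message state, such as a query string or a long, token-looking path segment, and strip only those images:

    // Simplified sketch of tracker detection; NoSpyMail's real heuristics
    // differ, and the threshold below is invented.
    function looksLikeTracker(src: string): boolean {
      const url = new URL(src, "http://example.invalid/"); // base for relative URLs
      if (url.search.length > 0) return true; // query strings commonly carry IDs
      // A long, random-looking path segment is another place to hide a token.
      return url.pathname.split("/").some((seg) => /^[A-Za-z0-9_-]{16,}$/.test(seg));
    }

    // Rewrite the HTML, dropping only the suspicious <img> tags so that
    // ordinary images keep working.
    function stripTrackers(html: string): string {
      return html.replace(/<img\b[^>]*\bsrc=["']([^"']+)["'][^>]*>/gi, (tag, src) =>
        looksLikeTracker(src) ? "" : tag,
      );
    }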

Businesses also use this technique for less nefarious purposes.  For instance, if you sign up for newsletters from Costco, you’ll get HTML mail.  You probably want to see the images, because the sale items are all images.  But as soon as you do, they’re tracking you: they’ll know that contacting you by email works, that you read it, where and when you read it from, and whether you are a Windows or a Mac user.  Maybe you care, or maybe you don’t.  NoSpyMail offers a middle ground; you can read the newsletter without telling Costco that you did.

Anyway, NoSpyMail is normally free.  But, if you are a member of the HP board, and you need some protection, let me know.  Pricing starts at $10,000 per copy.  Probably a good investment for you!

If you leave a machine off for 2 years…

I’m not quite sure how long my laptop was sitting on the shelf, but it was about 2 years.  I just didn’t need it because I had one through work.  But this weekend, I dusted off my old friend and booted it up.  It’s still running Windows XP Pro, so I’m thankful there haven’t been any major shakeups in the OS world over these last two years.

Can you guess how many security updates were recommended to me?

Well, in the first pass, Microsoft recommended 64 patches, mostly security related.  Then, after a reboot for one of those patches, there were 44 more.  This might have been a bug, repeating 44 of the earlier 64 – I wasn’t watching closely enough.

After the 64 security fixes, the machine was still not in good shape, and it recommended Windows XP SP2.  One 110MB download and about an hour later, XP SP2 was running.

Still not done, though: 12 additional critical and major security updates were yet to be installed.

After all of that (about 3 hours end-to-end), I was ready to go!


One last note:  After doing all these updates, I found that Microsoft Update keeps track of your patching history.  It even still has my history from 2004!  So, prior to my patching frenzy today, I last patched the system on June 11, 2004, with KB839643.  Today, I installed a total of 127 patches.

Porn invades RSS

I’ve been a big fan of Rojo for quite a while, as you’ve probably read.  But recently I’ve often had to report RSS from porn sites occupying the top spots on Rojo.  I guess that is what happens when you have a successful content publishing platform – porn and spam abound.  Today, the #1 site it recommends I read is “Naughty Neighbors July 2006”.  I guess I should have known from the title…

Security by Lawyers – Vista’s Elevation Prompts

If you’ve tried Vista, you’ve no doubt been hit with the onslaught of “elevation prompts” for tasks that need to run with elevated privileges.  The messages are so frequent, they almost read like this:  “You’ve clicked on the Disk Defragmenter button.  Did you really mean to click the Disk Defragmenter button?”  Uh, hello?  Vista?  You mean someone else might have clicked on it?

I really appreciate that Microsoft is trying to solve the security issues they’ve had in the past.  That part is great.  The problem is that the solution doesn’t fix the problem.  Vista inundates the user with “Do you want to do XYZ?” prompts so frequently that you become completely numb to them.  The message descriptions are obtuse, and your choices blur together.  In the end, you conclude, “dammit, just do what I say” and click Yes.  If there was a real reason for the alert, the user never knew it and clicked through anyway.

I’m sure the lawyers at Microsoft are happy, though.  Vista provides a credible argument that Microsoft did warn you before something bad happened.  But it’s really like reading the End-User-License-Agreement (EULA) that comes with any website or software package these days – nobody reads them.  In the end, the lawyers are protected, and the users are left with unintelligible gobbledygook that just slows them down.

What we really wanted, Microsoft, was warnings about errors.  What you gave us was a warning about anything we do normally that might be an error.  And unfortunately, 99.9% of the time, it is not an error!  So, the prompts you’ve just displayed are basically useless (except to the lawyers).

If you aren’t planning on suing Microsoft anyway, here’s a great tip I stumbled upon by way of Omar for turning the damn things off.

.NET Market Penetration

I am interested in knowing what percentage of PCs out there have the various versions of .NET installed.  I spent a lot of time collecting a set of data and coming up with the following numbers.  Strictly speaking, the numbers are guaranteed to be skewed by the sites I got data from and the types of users that visit those sites.  But at least it is real data.  For some reason it’s very hard to find information about which .NET runtimes are in use out there!

The numbers:

Unique Users – 631.1K (100%)
.NET 1.0 – 113.2K (18%)
.NET 1.1 – 356.4K (56%)
.NET 2.0 – 64.8K (10%)

This data was compiled from a set of websites that shared logs with me during the month of September, 2006. Your mileage may vary.
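
For what it’s worth, the runtimes are detectable at all because Internet Explorer advertises each installed CLR in its User-Agent header, with tokens like “.NET CLR 1.1.4322”.  If, as with my data set, you are starting from web logs, the core of the tally looks something like this sketch (the exact input format here is an assumption):

    // Sketch: tally .NET CLR versions advertised in User-Agent strings.
    // Assumes the logs were already reduced to one User-Agent per unique
    // user.  A user with several runtimes installed is counted once per
    // major.minor version, which is why the percentages can overlap.
    function tallyClrVersions(userAgents: string[]): Map<string, number> {
      const counts = new Map<string, number>();
      for (const ua of userAgents) {
        const versions = new Set<string>();
        for (const m of ua.matchAll(/\.NET CLR (\d+\.\d+)/g)) {
          versions.add(m[1]);
        }
        for (const v of versions) {
          counts.set(v, (counts.get(v) ?? 0) + 1);
        }
      }
      return counts;
    }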

Belshe leaves Microsoft

REDMOND, WA – Friday, September 29 2006 marked the final day at Microsoft for Mike Belshe, a development manager in the IS Client group in Silicon Valley.  The end of Belshe’s tenure marks a turning point for Microsoft, as Allchin, Valentine, Kennedy and Gates also hang up their hats.

The recent announcement was not a surprise.  Markets had adjusted to the Microsoft (MSFT) news over the past few months, and the stock has soared.  Prior to the news, the stock traded at $22 in June, but now tops $27.35.  “We’re not surprised by the market reaction,” said Steve Ballmer, CEO of Microsoft.  “Mike’s salary was pretty high, and with revenues of only $10B per quarter, our profits were in jeopardy.  He was doing a stellar job, but it was really hard for us to provide investors the return they expect.”  Investors are ecstatic, and happy to have the money back in the bank.

When asked, all that Belshe had to say was, “I really enjoyed the folks I worked with at Microsoft.  I wish them all the best, and I’m super excited that the stock price is up.  Like everyone else, I expect great things from a great team of people.  And Microsoft has got a great team.” 

Working in Seattle

I just read a really old blog post from Steven Sinofsky.  He’s a pretty big deal at Microsoft, and he writes a great blog with all kinds of advice for college grads about what it’s like to work at Microsoft.  If you are thinking of joining Microsoft, definitely check out his blog.

But this particular article struck me as somewhat comical.  I never thought of the weather as much of a selling point for a job, much less as a selling point for Seattle!  Steven fairly posts the cold, hard facts about the weather – that it rains 155 days per year in Seattle (that’s 42% of the days).  They even have streaks of rain every day for 3 or more weeks at a time!  But then he goes on to pitch that the climate is actually a moderate one, going so far as to state, “Most people are surprised to learn that it really doesn’t rain all that much in Seattle”.  Huh?  I’m not sure how much rain is “all that much” by his measure, but I guess we all have our own opinions.  To me, that is great salesmanship, Steve!  Believe in your product!

Truth be told, if you are going to work for Microsoft, definitely move to Redmond.  The campus is great, the people are great, and you’ll have a blast working on great technology.  But make no mistake – with the exception of a very short summer – the weather is wet.  Steven claims that, “No matter where you live, people can find a reason to complain about the weather (or is it weather forecasting?)”, but he’s just lived in Seattle too long.  Here in California, we don’t dwell on the weather, except to debate whether we should bike to work or surf.

Desktop Applications Cost Too Much

Ryan Stewart this week was Looking at the Strategy of Rich Internet Applications.  He hits a lot of points there, and also takes a stab at the fundamental differences between Web Applications and Desktop Applications.  If you read other parts of his blog, you’ll know that Ryan very much thinks the desktop has a much richer experience to offer than the web does.

As a Microsoft employee, I’m supposed to say, “he’s right”.  After all, we Microsofties have a vested interest in proving this true – if users don’t need the desktop anymore, we’re in serious trouble!  I’ve personally wracked my brain trying to prove it, hoping it to be true.  But alas, I think he will ultimately be proven wrong.  I’m not saying that Web Applications have knocked out desktop apps yet.  They certainly have not.  But I have no doubt that they ultimately will.  Web Applications are fundamentally cheaper than Desktop Applications to build, install, administer, maintain and support.

The Problem with Desktop Software:

1) The Cost of Deploying Software
It is incredibly expensive to maintain software deployed to every machine.  Take any large corporation, and they’ve got a fleet of IT professionals whose only job is to make sure that the deployed software still works.  Can you imagine managing 100,000 desktops in use by 100,000 different people, each sitting at desks in varied locations?  Users break it, and the IT guy has to fix it.  If the software weren’t deployed to every desktop in the first place, this cost would go away.

2) The Cost of Conflicting Software
It’s the Operating System’s job to manage the resources of the hardware.  To date, we have yet to see an Operating System that can prevent conflicts between two software packages.  It boggles my mind, but somehow installing an antivirus program can affect the way your email client works.  This is a sad fact that each and every one of us has experienced.  In today’s world, installing two software packages on a single box means you’ve got bugs.  It’s impossible to test every combination, and we software professionals inherently suck at it.

3) The Cost of Patching Software
Patching software is tricky.  You need to communicate to the user that you have a patch, how important it is, what the fixes are, what the side effects are, and any gotchas.  You may need to upgrade the user’s data formats, depending on how big the difference between the new version and the old one is.  Microsoft does patching better than anyone with its Microsoft Update product, but it’s taken years to get right, and it takes more process to update a product than it does to get the Space Shuttle off the ground.  Why?  Well, if you are going to update 500,000,000 desktops, you’d better damn well know it works.  Frankly, unless Microsoft can both reduce the process cost and also make this technology available at zero cost to every other software maker, patching of most software will continue to be a serious gamble for the end-user.  And even if the patch does work, don’t forget about the patch causing new instances of Cost #2 above.

4) The Cost of Supporting every Platform
Once you deploy your software, your new applications need to support the old ones.  This adds combinatorial expense to building software.  If you don’t think it’s too bad to support Windows 98, Windows 2000, Windows XP, Windows Server 2003, and Vista, how about writing an application that needs to work with 4 different versions of Office (Office 2000, Office XP, Office 2003, and Office 2007) on each of those OSes?  (Now you’ve got 20 combinations.)  But wait!  Don’t forget that the world is changing from 32-bit to 64-bit architectures, so you’ll need to build both 32-bit and 64-bit versions of your code for each of them!  Now you’re looking at building software to be tested on 40 different platforms.  Seriously, who for a minute thinks that software makers don’t take shortcuts here?  Maybe now you realize why your Windows Server 2003 breaks so much (it gets tested the least).

For platform support, I haven’t even mentioned those noble apps that want to build for both the Macintosh and the Windows environments. Doing that is such a daunting task that nobody expects applications to be concurrently released on both anymore. That’s just crazy talk.

5) The Cost of Integrating Web based and Desktop based software
Building Web-based applications means you have to build out a server infrastructure and employ a whole set of technologies fundamentally different from what you use when building desktop applications.  Unfortunately for desktop apps, almost all modern apps need some sort of server-side infrastructure to deliver the latest features.  Both Quicken and Money, classic Desktop Applications, now integrate with sophisticated server-side applications for tracking your investments, doing online trading, and more.  Over time, it will prove too costly to build both the Desktop portion and the Web portion of these applications, and software providers will need to consolidate.  Since you can’t move the web content (real-time stock quotes, news, banking services, etc.) onto the desktop, the only way to consolidate is to move to the Web-based application.

6) The Cost of Going Mobile
Jonathan Schwartz (CEO, Sun Microsystems) wrote about this just the other day.  While here in America we haven’t gone as crazy about mobile as other countries have, there is no doubt it is coming.  Which application is better able to go mobile, the Desktop-based one or the Web-based one?  Desktop apps need to be completely rewritten to work on mobile devices.

7) The Cost of Synchronizing Data
Once you’ve managed to deploy your desktop app, you start to use it. You write a few Word documents, save away some QuickBooks data, and get some good work done. But then you need to travel to Phoenix. Yikes! Now you need a laptop so you can take your data with you. But wait – you left your laptop in the taxi, and now you need to get the client’s phone number so you can tell him you’ll be late. Shoot – that was on the laptop too! The problem is that you haven’t synchronized your data between all your desktop-based software packages. So, in addition to the desktop and laptop, you’ll now be buying services and software from one of the mobile carriers to try to sync all this data for you. Getting expensive!

The Solution is Web-Based

OK – so if you’ve read this far, you may not yet be convinced of the inevitable doom of our desktop applications.  Just to make sure nobody says I left anything out, let’s recap the above 7 costs as they apply to the Web-based application.

1) Cost of Deploying Software.  In the Web-based model, the IT department does not deploy anything except the browser itself.  Once that is deployed, new applications can be added without any deployment cost to the desktops.  (Server-side deployment, such as hosting email data, still exists, but it exists in the desktop arena as well, with server applications like Exchange.)

2) Cost of Conflicting Software. The web is designed around a set of pages which are partitioned. This partitioning ensures that unrelated applications don’t conflict. (e.g. Yahoo can’t change a page on their site which breaks Microsoft’s site)

3) Cost of Patching Software.  Patching exists in both models.  However, in the desktop model, your patch has to work on any desktop, which could be running any platform, or have been modified by the user in any way.  The user could have deleted registry keys, moved disks around, or added new gizmos like USB drives, printers, and network cards.  In the Web-based world, the application provider controls all of these things.  Further, the patch can be scheduled to run at times when the user is known to not be using the system.  Because you know what you are patching, Web Applications patch much more easily.  You only have to support the new version and the one prior; there is no need to support 10-year-old systems.

4) Cost of Supporting Every Platform.  This problem mostly does not exist in the Web Application world, except for supporting various browser features.  IE, Firefox, and the Mac browsers each have somewhat different features, and this can be tricky to build for.  Nonetheless, it is infinitely simpler than the myriad of combinations created on the desktop.

5) Cost of Integrating Web Apps and Desktop Apps.  Ironically, the Web-App world already does this.  There is a very clear line between what is done on the client (HTML, JavaScript, etc.) and what is done on the server.  Web Apps are specifically designed around this split, and don’t usually need to modify the desktop.

6) Cost of Going Mobile.  Web Apps need relatively small changes to work on mobile devices, and for the desperate, even generic browsers can do a functional job on mobile devices.

7) Cost of Synchronizing Data.  With your data stored on the server, there is nothing to synchronize; any machine with a browser sees the same, current data.

What it Takes for Web-Apps to Finally Conquer the Desktop

Alright.  Now that we’ve established that Web Applications truly are cheaper to build and maintain, why haven’t they taken over already?  Clearly something is missing.

Better UI & App Platform
HTML & JavaScript are pretty flexible, and it always amazes me what some people can do with them.  But most web UIs are pretty poor compared to what the desktop can provide.  Graphics rendering is pretty much unavailable, and accessibility and navigation metaphors are often broken.

We need a few more generations of markup to allow Web Apps to better utilize the client and create more consistent user interfaces.

Ability to Save Data Locally
Today, going to a web-based application means that you are storing your data on the Web. This is a big tradeoff in terms of security and bandwidth. I want my photos to be mine – but I want the application on the web.

I fully expect web browsers to be capable of doing this in the future. I also expect web browsers will be capable of storing data on USB or flash devices. Instead of each of us having a desktop with a big hard disk, we’ll have a set of small compact flashes that we can plug into our cameras, our phones, our computers, the kiosk at the airport, or all of the above.

Note that the ability to Save Data Locally is specifically what weakens desktops on the “Cost of Going Mobile” and the “Cost of Synchronizing Data”.  It’s these private data stores which are costly, and using flash or USB devices re-introduces part of that cost.  The difference, however, is that the application will be able to write anywhere: instead of only being able to write to “C:\Documents and Settings\Joe\Microsoft\Foo Application”, applications will write to wherever the user wants the data.  And if that is a mobile storage device, the data will go mobile, decreasing the costs of mobility and hopefully eliminating the need for much synchronization.

Ability to Provide Internet and Intranet Solutions
Moving the storage for the consumer is one thing, but companies will still need and want to control their email and other data.  Web App providers will need to offer ways for the backend portions of these applications to be hosted either by the provider on the Internet or by the IT department on the intranet.  Let the customer decide.

More Bandwidth
We need more broadband penetration. If you don’t have broadband, you want your desktop apps. Sooner or later, this will be realized. Some thought we’d have enough bandwidth 10 years ago. Who knows, maybe it’s still 10 more years away.

Conclusion

For me, the conclusion is obvious.  Users will ultimately choose the pains of having their data managed remotely over the pains of doing system administration themselves.  It’s just easier to delegate system administration tasks (deployment, backups, etc.) than it is to do them yourself.  As soon as technology takes us far enough, we’ll jump.

Don’t conclude that I’m being absolute here.  This is an evolution that will take many years, and there will always be some desktops out there.  High-performance games may demand them (or maybe dedicated consoles like the Xbox and PlayStation will take that role), and other vertical apps will too.  Developers will need their own boxes.  Video editors and graphics designers will probably need their own machines for their specialized work.  For mainstream use, though, we’re heading pure web.  And increasingly, even these specialized work environments will move to the web too.

Finally, Some External Resources

Paul Graham
I don’t quite agree with every word, but mostly I do agree.  Keep in mind that Paul wrote this in 2001: “There is all the more reason for startups to write Web-based software now, because writing desktop software has become a lot less fun. If you want to write desktop software now you do it on Microsoft’s terms, calling their APIs and working around their buggy OS. And if you manage to write something that takes off, you may find that you were merely doing market research for Microsoft.”

If you want to have backward compatibility and support for environments as far back as 10 years old, and you are going to deploy hundreds of millions of copies of it, you are going to be left with something that seems like “calling their APIs and working around their buggy OS.” It’s not Microsoft that is the problem, it’s the nature of the beast.

Om Malik
Om teamed up with Niall Kennedy recently to discuss this topic, and they concluded that there is a lot of life left in desktops.  They are probably mostly right, but I think their long-term vision is a little short-term.  Om created a poll on this topic, with 64% of respondents wanting “both desktop and web apps”.

Paul Kedrosky
Paul’s interesting viewpoint is to look at history, “Way back when there was a time when people would have said that editing text in WYSIWYG was a CPU-bound task that required a desktop application, but times have a-changed. I have no doubt that the same thing will happen, sooner rather than later, to many tasks, like audio-editing, that are currently deemed now-and-forever desktop apps.”

Peter Rip
The real problem with desktop apps is no one works at their desktop anymore.