.NET Market Penetration

I am interested in knowing what percentage of PCs out there have the various versions of .NET installed. I spent a lot of time collecting a set of data and came up with the following numbers. Strictly speaking, the numbers are guaranteed to be skewed by the sites I got data from and by the types of users that visit those sites. But at least it is real data. For some reason it’s very hard to find information about which .NET runtimes are in use out there!

The numbers:

Unique Users – 631.1K (100%)
.NET 1.0 – 113.2K (18%)
.NET 1.1 – 356.4K (56%)
.NET 2.0 – 64.8K (10%)

This data was compiled from a set of websites that shared logs with me during the month of September, 2006. Your mileage may vary.
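For the curious, the tallying itself is simple: Internet Explorer advertises each installed runtime as a “.NET CLR x.y.zzzz” token in the User-Agent field that shows up in web server logs. The helper below is a hypothetical reconstruction of that kind of counting, not the actual script I used:

```csharp
using System;
using System.Collections.Generic;

public static class ClrStats
{
    // Extracts the major.minor CLR versions advertised in one User-Agent
    // string, e.g. "...; .NET CLR 1.1.4322; .NET CLR 2.0.50727)" -> 1.1, 2.0.
    public static List<string> ClrVersions(string userAgent)
    {
        List<string> found = new List<string>();
        if (userAgent == null)
            return found;
        const string token = ".NET CLR ";
        int pos = 0;
        while ((pos = userAgent.IndexOf(token, pos)) >= 0)
        {
            pos += token.Length;
            // "major.minor" is the text up to one digit past the first
            // dot, which covers 1.0, 1.1, and 2.0.
            int dot = userAgent.IndexOf('.', pos);
            if (dot < 0 || dot + 2 > userAgent.Length)
                break;
            string version = userAgent.Substring(pos, dot + 2 - pos);
            if (!found.Contains(version))
                found.Add(version);
        }
        return found;
    }
}
```

Tallying results like these per unique user gives numbers of the kind shown above.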
 

Mono

I’m a C# fan. It’s been on my list for a while to finally install Mono on my Linux machine (running Red Hat 9).

Download
For the initial download, I picked up the following packages (I know, if I were 3 years younger I’d pick up the sources and compile them myself!):

  • Mono core – mono-core-1.0.6-1.ximian.6.1.i386.rpm (4.75MB)
  • Mono development – mono-devel-1.0.6-1.ximian.6.1.i386.rpm (857KB)
  • Mono Web Dev tools – xsp-1.0.6-1.ximian.6.1.i386.rpm (141KB)
  • Mono Apache module – mod_mono-1.0.6-1.ximian.6.1.i386.rpm (13KB)
  • Mono Web Services/Form support -mono-web-1.0.6-1.ximian.6.1.i386.rpm (994KB)
  • Mono data – mono-data-1.0.6-1.ximian.6.1.i386.rpm (826KB)
  • libicu international lib – libicu-2.6.2-1.rh90.dag.i386.rpm (4.57MB)

Overall, pretty cool. If this works, that’s about half the size of the regular .NET framework.

    Writing My First Application
    OK – so let’s see if I can’t create my first Mono application. Shall it be “hello world”? Let’s try something a little more original (but not much)…. Keep in mind that I don’t run X on my Linux box, so I’m compiling this from the command line. In my first attempt, I tried to write the C# by hand. But, as anyone who has used C# will tell you, the code pretty much writes itself. Turns out, even though I have written plenty of C#, I can’t write a program that compiles without help. OK, so instead I created the following code via Microsoft’s Visual Studio. Took about 2 minutes to write:

    using System;

    namespace helloworld
    {
        /// <summary>
        /// Summary description for Class1.
        /// </summary>
        class Class1
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            [STAThread]
            static void Main(string[] args)
            {
                Console.WriteLine("does this work?");

                // Time how long it takes to count to 100 million.
                DateTime start = DateTime.Now;
                for (int i = 0; i < 100000000; i++)
                {
                }
                TimeSpan elapsed = DateTime.Now - start;
                Console.WriteLine("Counting took " + elapsed.TotalMilliseconds + "ms");
            }
        }
    }

    Compiling
    Compiling turns out to be very easy, just use:

    mcs mike.cs

    Mono compiles the source and creates “mike.exe” for you. It’s a little weird to see “.exe” files on Linux, but I can deal with that.
    To run the compiled executable, you now type:

    mono mike.exe

    And voilà, I can see that it takes about 187ms to count to 100 million on my Linux machine. (The Linux machine is a 1.0GHz AMD Duron.)
    Comparing: the same program on Windows (without the debugger, of course) takes 140ms. My Windows machine, however, is a 2.8GHz Pentium 4 – quite a bit faster.
    So Mono looks very promising so far! The next step is to get the Mono IDE installed. Trying to write C# code without an IDE is just not a very fun experience. In order to do that, I need to get X up and running again on this box.

    Can small businesses afford the .NET size?

    This is a followup to my entry on May 06 about Managed code and C#. This may sound like I’ve bought into the Microsoft story, but it’s really based on my experience as an independent software developer. Decide for yourself, though…

    The question is – as a small business, can you afford the hit of .NET to develop your applications if some of your customers may not be able to install .NET? Will the 23MB download of .NET be so big that it limits your distribution and prevents your product from being a success?

    The answer is pretty complicated. Is your target user a personal computer user? Or a corporate user? Do you expect the IT department to install the product or will the user install it directly? You should think about these options before you decide what to do. Unfortunately, as with many technologies, using .NET is almost an all-or-nothing choice.

    As for me, I’m a wholehearted believer in C#/.NET at this point, and I think most companies should elect to use .NET, despite the download. Here is why.

    First of all, .NET ubiquity is growing. Microsoft claims that they already have over 80 million copies of the .NET framework installed. From the Lookout stats, it’s hard to tell what percentage of users already had .NET installed, but I think it’s about 35%. How many users didn’t install Lookout due to .NET is almost impossible to calculate. But I do know that by bundling the .NET installation into your install (which Lookout optionally does for users that don’t already have it), a lot of users are able to install easily. These users are probably broadband users, however.

    The good news is that the .NET framework is being bundled with many new Windows installations today. The availability of .NET is only going to increase.

    Here are some reasons you should use .NET.

    1. Most developers agree they are more productive in .NET.
    Developing in C# last year was eye-opening to me. The fact that two guys could build something as complicated as Lookout in a short period of time is just amazing to me. We’re not geniuses, we’re certainly not rocket scientists, but we were able to do it. A lot of it is thanks to .NET. There is no way we could have built an equivalent set of features in C++ in a similar amount of time.

    I do think that Java offers many of the same benefits as C# from a pure development perspective. But the Java Runtime is even less distributed than the .NET framework. So, if you are looking to build in a managed framework where you won’t have to bundle and distribute the 20MB framework, C# is a better bet. For server-side applications, you probably don’t care about the distribution of the framework.

    2. .NET is more reliable.
    In the case of Lookout, we were building an application that had to exist inside of Outlook. Outlook is known to be one of the more treacherous programming environments out there. MAPI in particular (ask your developer friends) is a bit obtuse, and easy to screw up. Managed code, however, runs within a protected boundary. Because it runs entirely inside the CLR, the native->managed wrappers put a big blanket around the .NET code. If your .NET code crashes or goes awry, it’s very easy to catch that crash so that it doesn’t percolate into the Outlook application itself. It’s difficult to accidentally corrupt the main application’s memory space. Lookout has received praise for its reliability (although it has its share of bugs too), and a big part of this, I believe, is the fact that as managed code, it can’t screw up its host application. Consider if it were C++, however: one bad pointer bug and you take down all of Outlook! That’s a huge liability, a huge responsibility, and just downright scary.
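    That “big blanket” boils down to a catch-all handler at every native->managed entry point. A minimal sketch of the idea (the class, method name, and logging here are hypothetical, not Lookout’s actual code):

```csharp
using System;

// Hypothetical sketch: every callback the native host (e.g. Outlook) invokes
// goes through one guard, so a managed failure is contained instead of
// propagating back into the host process.
public static class AddinBoundary
{
    // Runs an addin callback; returns true if it completed cleanly.
    public static bool SafeInvoke(Action callback)
    {
        try
        {
            callback();
            return true;
        }
        catch (Exception ex)
        {
            // Log and swallow: the host never sees the failure.
            Console.Error.WriteLine("Addin error contained: " + ex.Message);
            return false;
        }
    }
}
```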

    3. .NET is more performant than C++ code.
    This may sound controversial to many people. However, I believe it to be true. “C# is a managed language,” you may think, “how can it possibly be faster?” Well, you are right on one level. If your application is just a number-crunching app that wants to drive the CPU as fast as it can, you can probably write a more optimal algorithm in C++. But how many apps have that property? I’d argue almost none, except for pure research or scientific applications.

    The performance of most real-world applications these days hinges on a combination of disk IO patterns, network IO patterns, and CPU patterns. This is a complex formula, and is generally difficult to optimize. Talk to any performance expert out there, and they’ll tell you that the way to optimize is to design your app, build it, and then profile, profile, and profile again. Use profiling to figure out where your hot spots are, and then redesign those portions. This is where C# and .NET crush C++. The fact is that C++ is so complicated to maneuver in that refactoring based on profiling is a very difficult and time-consuming process. Most companies simply cannot afford to spend much time here. Developers can discover the major bottlenecks, but except in extreme cases, they do not have the time or resources to redesign them. Instead, they will employ a set of “quick hacks” to work around the bottlenecks. These quick hacks become legacy problems for the codebase, and don’t fix the underlying problem. Over the course of a year, after a few patch releases, the C++ code remains largely stagnant due to cost considerations.

    C#, however, can be refactored with much more ease. As problems arise, developers can much more easily rearchitect around performance bottlenecks. That profiling data does not go to waste – there is actually time to redesign large portions of the application without destabilizing the whole thing. Because of this, the 2nd and 3rd generations of a C# project will significantly outperform their C++ counterparts, and be higher quality as well.

    Case in point (and I am certainly biased here) is the Lookout MAPI indexer. I have tried a lot of the competitors’ products, and I believe the Lookout MAPI indexer is 2-5 times faster than any of the competitors’ indexers. The competition is written in C++. How is this so? We redesigned the indexing algorithm about 3 times based on experience and profiling. The C++ guys can’t keep up.

    Conclusion:
    Well, if it’s really faster, has fewer bugs, and takes fewer resources to build, you know my conclusion. Some folks may still want to have their applications target some of the old legacy machines out there (Windows 98, etc.), and if you really need that, C++ may be for you (although .NET does allegedly run on Win98 too). And you can’t ignore that .NET does require more RAM, so it may not run as well on the older machines. Anyway, I just hope that Microsoft bundles .NET into a service pack sometime soon so that this whole distribution question can start to go away.

    Managed Code Java/C#/etc

    One problem we’ve worried about with Lookout is the fact that it’s dependent on Microsoft’s .NET 1.1. .NET is a framework for building managed applications, and it’s Microsoft’s answer to Java. It’s fully featured, but comes with a 23MB download! So, while Lookout is nice and compact at 1MB, it’s dependent on this huge 23MB download!

    Yesterday, I was pointed at another application which is a bit of a competitor to Lookout. It’s a very nice tool, and I liked what they had done. But its install file was 25MB in size. When I looked at what was in there, I found the complete Java runtime + Java libraries. It’s coincidental that this app with Java was almost exactly the same size as Lookout + .NET!

    So, it gets you thinking about what tools to use for building your applications. Obviously, any 20MB+ download is something to be concerned about. But I sure am glad I chose .NET instead of Java. With .NET, I know that Microsoft will be bundling it into their distributions in the future. (In fact, it’s already bundled in their newest releases.) But Java will likely never be bundled – despite the recent news that Sun & Microsoft are friends.

    So, this was interesting to me because of Lookout’s size. I hadn’t really thought about the poor Java developers out there. Sorry guys!

    GAC PIAs

    In my previous entry, you heard a little about the trouble of installing into the GAC without using MSIs. I just found some useful information about choices with interop libraries for Outlook from here.

    This article illustrates a few interesting things! If you are writing an Outlook addin, you have a few choices:

    1. Write your addin exclusively for Outlook 2003 and use the Office 2003 PIA
    2. Write your addin for multiple versions of Outlook and use the Office XP PIA
    3. Write code to custom-load PIAs at runtime

    #3 is no small amount of work. #1 doesn’t make sense for anyone other than Microsoft. Microsoft wants everyone to use the latest version (and upgrade), so they love this option. For the rest of us, who actually want a reasonably sized user base to draw from, option #2 is the only real choice.

    And, the XP PIA has a number of known bugs – only fixed in the 2003 PIA. For instance, I ran into this bug the hard way. There is no fix unless you are using Outlook 2003.

    Sigh.

    On the good news front, I am glad that folks from Microsoft (like Omar, mentioned above) are helping to document this stuff along the way.

    gacutil & GAC install

    If you are looking for a way to install into the GAC without using a Microsoft Installer package (msi), here are a couple of ways to do it.

    Option 1:
    Bundle gacutil.exe. This will require a library, msvcr71.dll. The sum total is about 440K to be added to your install; they’ll compress down to about 200K. This seems to be the most frequently used mechanism. You have to redistribute gacutil.exe because it’s part of the Microsoft .NET SDK and not installed on most people’s machines.

    Option 2:
    Write your own C# code to do it. After searching for a very long time, I did manage to find some obscure APIs to do the same thing. The good news is that now you can write managed code to do this, and it only takes about 16K of code. Woo hoo!

      // System.EnterpriseServices.Internal.Publish wraps the GAC install API
      System.EnterpriseServices.Internal.Publish foo = new System.EnterpriseServices.Internal.Publish();
      foo.GacInstall("myassembly.dll");

    If you want Microsoft’s gory details of the GAC API, you can check here.

    ASP.NET and missing the easy stuff

    I wrote my first ASP.NET program the other day. I was so amazed. Microsoft has made it unbelievably easy. Using all Microsoft tools, the steps were something like:
    – open webserver to my ASP.NET ISP
    – use their admin tool to “create a new Web Application”
    – open Visual Studio.NET
    – create a new project
    – point it at my ISP
    – create code.

    It was so easy, I couldn’t believe it. You have to try it for yourself to believe me.

    So I created a little subroutine which just detects whether .NET is installed on a client’s machine. Nothing magic about that, right?
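    For reference, the check itself can be tiny: browsers with the framework installed (Internet Explorer, at least) include a “.NET CLR” token in their User-Agent string. A sketch of that kind of subroutine (my reconstruction – the class name and boolean interface are made up):

```csharp
using System;

public static class DotNetDetector
{
    // Internet Explorer advertises installed runtimes in its User-Agent, e.g.
    // "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)".
    // Returns true if any ".NET CLR" token is present.
    public static bool HasDotNet(string userAgent)
    {
        if (userAgent == null)
            return false;
        return userAgent.IndexOf(".NET CLR", StringComparison.OrdinalIgnoreCase) >= 0;
    }
}
```

    In an ASP.NET page you would feed it Request.UserAgent.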

    Now, try to use that subroutine anywhere other than .NET!

    .NET seems to take the approach that if you use ANY code to generate your web page, you should put the entire webpage into .NET code. I mean real code here – code that you have to compile before you can run it. This makes no sense to me and violates all principles of separating UI and code.

    Example – if you’ve got some static HTML, why would you ever want to put the static HTML into code? Now, if you want to change a comma on your website, you’ve got to go to your web developer, who knows how to build code, to do it. And most websites are made up of a lot of static elements.

    Most web pages end up being a collection of elements that are combined together. Each element may be created by a completely different source. For example, I may have a page which includes:
    – a static HTML header
    – a side menu created by a perl program
    – a main content page created by ASP.NET
    – a right-hand sidebar containing ads from a third-party app
    – a static footer

    Now, how are you going to assemble these into a single page? Do you want to write code to do all this? Of course not. But the ONLY way to do it in .NET is to write .NET code to include it all. ACK! You can’t include “blobs” of ASP.NET from ASP!

    ASP (like JSP) had simple HTML with callouts to code. This allowed for easy separation of UI and code. And I could hire web designers rather than programmers to create 99% of that website and iterate on the UI. The programmers created code components which the web designers would “include”. Now, ASP.NET wants the developer to do all the work. In ASP.NET, you need to know how to open Visual Studio and WRITE CODE in order to spit out anything.

    I like what they’ve done with ASP.NET. The integration with Visual Studio is astounding FOR DEVELOPERS.

    But, unless you want to write code for every closing tag and every other bit of HTML markup, it’s a bad choice.

    C# #define: missing the easy stuff

    The OO crowd sometimes just goes overboard. C# and #defines are an example. C# does provide support for defines, but not for defines with values. So you can write code like:

          #if foo
                  Console.WriteLine("foo is defined");
          #else
                  Console.WriteLine("shoot, it's not defined");
          #endif
    

    But, the designers left out support for #define FOO=VALUE on the grounds that “macros are a bad idea in general”. See other non-thinkers that spout the same idea here.

    The reason these guys think it’s okay to not have macros is because they’ve never coded in the large. They’ve never built real projects. Any developer who HAS actually written real code knows that real code invariably requires a set of tools and resources to fully assemble the final product. This includes utility libraries, installers, uninstallers, profilers, memory-checking tools, etc., etc. And you always want to have a few key things that you pass between each of these tools. One of the simplest ways to do this is to use the #define NAME=VALUE syntax. It’s not graceful, but it’s so simple that almost every tool out there provides support for it. When you are using someone else’s tool, you just don’t have the luxury of screwing around with ideals like “macros are bad”.

    Ack. Well, if you haven’t guessed it already, today I’m trying to automate my build processes. I have different tools to link together, and the WEAK LINK in the chain is the C# compiler lacking macros.

    Solution
    The solution I’ve settled on so far is using environment variables. It’s not too bad, and mostly usable in other programs as well. The code looks about like this:

       string version = System.Environment.GetEnvironmentVariable("PROG_VERSION");
    

    But this is a lot more cumbersome than using pre-processor macros because it’s inline code. Now you’ve got to make sure this is initialized and loaded in the right place, etc., etc. I’ll keep looking.
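    One way to tame the “initialized in the right place” problem is to read the variable once behind a static class, so the lookup and the fallback live in a single spot. A sketch, assuming the same PROG_VERSION variable as above (the class name and default value are made up):

```csharp
using System;

// Sketch: centralize build-time settings read from environment variables.
// The static field is initialized exactly once, on first use of the class.
public static class BuildInfo
{
    public static readonly string Version = ReadOrDefault("PROG_VERSION", "0.0-dev");

    static string ReadOrDefault(string name, string fallback)
    {
        string value = Environment.GetEnvironmentVariable(name);
        return (value == null || value.Length == 0) ? fallback : value;
    }
}
```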

    Ack. Get off your high-horse and allow macros!!!!!

    Garbage In/Garbage Out

    Lots of programmers have moved from languages that primarily don’t do garbage collection to languages that do. In fact, I’m probably a latecomer to using it seriously. Sure, I’ve used some amount of Java on the side over the past few years – enough to be dangerous, at least. But I haven’t used it enough to really care how the GC was working or even notice bugs where the GC was masking things for me.

    In the good old C++ days, every major programming effort I’ve been involved with employed lots of memory-allocator debugging techniques. We’d use macros for malloc/free, override new/delete, use Purify, zero memory when it’s deallocated, create safety zones on each side of the buffers, etc. After you’d done it for a while, these techniques served you pretty well, and with very little effort, you could debug all your memory usage patterns.

    Now, fast forward to the land of garbage collection. With the language figuring out what you intended to free and not free, you shouldn’t need any of these tools, right? Well, sort of. So far, in my short experience with GC’d languages, it seems pretty common that you need to reference *something* that isn’t written in the GC’d language – for example, Java calling out to C++. In this case, you are passing objects back and forth – sometimes pointers, sometimes not. Either way, you’ve got references to objects that are not going to be GC’d held by objects that are GC’d. Unless you have a perfectly neat little program that can be 100% Java, you may run into this. And debugging it is a pain!
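    In C#, the standard answer to a GC’d object holding something the collector knows nothing about is the dispose pattern: deterministic cleanup through IDisposable, with a finalizer as a safety net. A generic sketch, using a raw unmanaged buffer as a stand-in for whatever actually crosses the interop boundary:

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch of the dispose pattern for a managed wrapper around an unmanaged
// resource (a real case might be a MAPI object or a C++ pointer).
public class NativeBuffer : IDisposable
{
    IntPtr buffer;

    public NativeBuffer(int size)
    {
        buffer = Marshal.AllocHGlobal(size);  // memory the GC knows nothing about
    }

    public void Dispose()
    {
        Free();
        GC.SuppressFinalize(this);  // deterministic cleanup done; skip finalizer
    }

    ~NativeBuffer()
    {
        Free();  // safety net if Dispose was never called
    }

    void Free()
    {
        if (buffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(buffer);
            buffer = IntPtr.Zero;
        }
    }
}
```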

    Why is it hard to debug? Well, in C and C++, you can employ all sorts of tricks to allocate and deallocate memory differently. But in the GC’d world, once you drop your references to an object, it’s going to get cleaned up eventually. And you don’t know when! When does the GC run? When does it not run? There’s not much you can do.

    Finally, I found one trick which helped a bit. That was to create a simple thread that sits in the background (development mode only) and initiates the GC collection process every second or so. This way, if I’ve got some dangling reference somewhere, the GC will collect the object, and I’ll notice the bug a *lot* sooner than I would have otherwise.
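    That background-collector trick looks about like this in C# (development builds only; the interval and class name here are mine):

```csharp
using System;
using System.Threading;

// Sketch of the debugging trick described above: a background thread forces
// a collection every second or so, so lifetime bugs (e.g. a native side
// still holding a collected object) surface much sooner.
public static class GcStresser
{
    public static Thread Start(int intervalMs)
    {
        Thread t = new Thread(delegate ()
        {
            while (true)
            {
                GC.Collect();
                GC.WaitForPendingFinalizers();  // run finalizers too
                Thread.Sleep(intervalMs);
            }
        });
        t.IsBackground = true;  // don't keep the process alive on exit
        t.Start();
        return t;
    }
}
```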

    Anyway, this probably isn’t interesting to most folks, but I found it an interesting problem. I like the benefits of not having to worry about memory. But my stodgy old C++ side really likes understanding exactly when my objects are coming and going. Maybe I’m a control freak.