Nothing like some new terms

Get ready to hear about the 'Fabric Controller'. In a cloud computing environment, the fabric controller is the "O/S". It is responsible for managing the services that you have configured.

Modeling your services

The first step is to model your service. This means defining the attributes related to the deployment and execution of your service. This includes the channels and endpoints for your service (take a quick look at WCF for the definitions of these terms). As well, you define the security model by identifying the roles and groups. This information is persisted as the configuration for the service.
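
Since the modeling terminology borrows from WCF, here is a minimal sketch of what an endpoint looks like in plain WCF terms (an address, a binding and a contract). The service and contract names are invented for illustration, and this is ordinary self-hosted WCF rather than anything Azure-specific.

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IGreetingService
    {
        [OperationContract]
        string SayHello(string name);
    }

    public class GreetingService : IGreetingService
    {
        public string SayHello(string name)
        {
            return "Hello, " + name;
        }
    }

    class Program
    {
        static void Main()
        {
            // An endpoint is the combination of an address, a binding and a contract.
            using (var host = new ServiceHost(typeof(GreetingService),
                new Uri("http://localhost:8000/GreetingService")))
            {
                host.AddServiceEndpoint(typeof(IGreetingService),
                    new BasicHttpBinding(), "");
                host.Open();

                Console.WriteLine("Service is running. Press Enter to stop.");
                Console.ReadLine();
            }
        }
    }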

Development Tools

Developing and testing the service can be done using familiar tools (e.g. VS2008). There is no need to deploy to the cloud in order to test. There is also no requirement that the application be written purely in managed code. This piece of information is a bit of a clue as to what is going on under the covers. In other words, there is probably Windows running someplace.

The development environment 'simulates' the cloud computing environment. Once the application is completed, it can be 'published' to Azure. The 'package' (the bin output for the project) and the configuration file are sent to Azure. After a few minutes, your application is running live on the cloud.

Simple. At least for the "Hello World" application. :)

Windows Azure Links

If you're looking for a set of links that provide more info on Windows Azure, check out http://blogs.msdn.com/cloud/archive/2008/10/27/bookmarks-windows-azure.aspx

Ray Ozzie Keynote at PDC 2008

Today's focus is on the back-end innovations. The client conversation/demos will be done at tomorrow's keynote. In other words, this is the 'cloud' talk.

The first portion of his talk is about the convergence of IT Pro and IT Dev functions. Basically, he is making a case for the need for cloud computing: things like redundancy, resilience, reliability, etc. Nothing exceptionally new here. But he then branches to the idea that, rather than having the infrastructure within a corporation, perhaps it would be better to have the infrastructure hosted by someone who specializes in offering web functionality that supports millions of developers. I think I can see where this is heading :)

The new service is called Windows Azure.

That explains the 'blue' theme seen throughout the Convention Center.

One of the goals is to be able to integrate the service with the existing toolset. "And you can", says Ray. But there will also be a new set of tools aimed at assisting developers with this 'cloud design point'. After all, it's not quite the same as traditional Windows applications. The focus for a typical Windows application is 'scale-up' and not 'scale-out'. And to work properly with the cloud, the 'scale-out' model is a requirement.

Keep in mind that one of the benefits of cloud computing is the ability to increase capacity by turning a dial that increases the number of 'servers'. For your application to work successfully in that environment, the manner in which you develop applications might change significantly. But the details on that piece will have to wait for tomorrow.

Now it's time for the demos. More shortly.

Traveling to PDC 2008

I'm writing this while at 35,000 feet, winging my way to PDC in Los Angeles. This is actually my first PDC so I'm looking forward to it. I've been to Tech Ed and Mix in the past, but have never made it to a PDC. How can anything bad happen while I'm surrounded by 7,000+ geeks?

There are a couple of areas where I expect to see some significant announcements. Some of them, such as the beta bits for Windows 7 and an early CTP for VS2010, are already widely anticipated. But there are likely to be more (and potentially more interesting) announcements from across the company.

For example, I expect to hear about some big initiative surrounding cloud computing. Aside from tools that will help developers take advantage of the technology, it wouldn't surprise me to hear about a service that competes with Amazon's EC2 service.

Another potential target for news is the Oslo project. There has been a bit of buzz (oooooo...alliteration) on this upgrade to Microsoft's service layer offering, but it will be interesting to see how Oslo is positioned, especially in relation to any cloud computing announcements.

Beyond the above, I'm sure there will be more surprises. From the number of sessions on the agenda, I expect that there will be some VSTS innovations. And my interest has been piqued by an MSR project called Pex that deals with automated white-box testing. I'll be live-blogging from the keynotes and the sessions that I attend, basically commenting on anything that catches my ear. So stay tuned for some PDC details...at least, my take on them.

A Disappointment with PPTPlex

I did a presentation this afternoon on some of the basic functions of WCF. I had put a slide deck together using a new Microsoft Office add-in called PPTPlex. You can see demos of what this add-in does at the provided link, but basically it allows for a much more dynamic experience when going through a slide deck. Compared to the typical linear flow, PPTPlex allows you to easily jump from one slide to another with a couple of clicks up and down the slide hierarchy.

I was pretty excited to be able to put this technology to work in the real world - right up to the time when I started the slide show.

Most of the time when I do presentations, I use a split screen. That is to say that what is displayed to the audience is not the same as what I see on the laptop in front of me. The new Presentation Mode in PowerPoint 2007 helps me a great deal with working that way. I expected that, when PPTPlex was run as a slide show, I would get the same behavior, that being that the slide show (such as it is) would be displayed on the secondary monitor.

I was disappointed.

Now it may be that there is a setting that I missed that would have allowed this to happen. I will admit that I was standing in front of the class when I tried this, so the time allocated for exploration was limited. But I expected it to just work and it didn't. Sigh.

Now I haven't yet got back to a place where I can do some detailed investigation, but as soon as I do I will see if I missed something. I hope so, but I doubt it. More details as they become available.

Writing is done, so back to the blogging

The two or three of you who follow my blog with regularity will have noticed that I was dark for most of the summer. The reason was that I was in the process of writing a book. Co-writing, would be more accurate, but still long hours were spent pounding out prose on my antique Underwood. Okay, maybe not so much pounding, but writing a book does dry me out for writing blog posts.

The recent influx of posts would seem to indicate that the book-writing process is finished. And indeed it is. In fact, my editor informed me yesterday that the files have been shipped off to the publisher for final processing and printing. This is a source of great cheer, as I can now rest easy that no additional requests for editing will arrive in my inbox.

For those of you who are interested, the book is the MS Press training kit for the Windows Communication Foundation exam. You can see what it looks like at Amazon. And feel free to buy multiple copies...they make great Christmas gifts :)

More Ways to Avoid the Second System Effect

Dare Obasanjo had an interesting post yesterday on the Second System Effect in software development. For those who are unaware, the second system effect is a term first coined by Frederick Brooks in The Mythical Man-Month. It deals with (in general) the idea that the second system designed/implemented by anyone is typically over-architected, with more bells and whistles added than need be.

Dare goes on to describe a number of common factors that keep systems from falling into this trap. Factors that, in my experience, do contribute greatly to the success of a second version. I do have a couple of factors to add.

Only One Driver

There is a fundamental clash between marketing and developers when it comes to the priority of items added to a product. Marketing is looking for features that will help to drive sales. Developers are looking for features that will improve the architecture, stabilize the product, ease any future enhancements and simply be cool to implement. Frequently, these two groups will not agree 100% on the features that should be in the next release.

Successful projects have a single driver. That is, there is one person who is responsible for determining which features do and don't make the cut. They will listen to both sides of the argument and make a decision, with their ultimate responsibility being to drive the successful shipping of the release. It doesn't matter which discipline the person comes from (although it helps if the driver has the respect of both groups). The important element is to have someone who is making the decision and ensuring that the process doesn't become a continual stream of requests for new features.

Rewriting is not Right

The title should probably read "Rewriting is not Right unless you have extraordinary unit test coverage...and probably not even then", but that wasn't catchy enough.

After you come up for air at the end of a project, it is usual to have some sort of post mortem. Even the name indicates how developers look at this part of the software development process. It is not in our nature (generally speaking) to sit back and admire the good things that were accomplished. Instead, we concentrate on the warts of the system. How many times have you said, immediately after completing a task, that you wished you could rewrite it from scratch?  Take solace that you're not alone in that feeling...it is quite common among your brethren.

The problem is that the urge to rewrite needs to be resisted. There is, whether you realize it or not, more invested in the development of a particular feature than you might expect. There is more to it than the visible code that implements the feature set. There are also all of the bug fixes associated with the application. The one or two lines of code that were necessary to make the system load the poorly-formatted file that arrives monthly from your biggest client. The use of a semaphore to ensure that a timing problem was corrected. All of those little things that had to be done to take the first pass of code and make it ready for use in the real world.

When you're thinking about rewriting, your mind is focused on reworking the architecture. It is not thinking about the many hours of effort that went into identifying, replicating and correcting the bugs. We know that it's easier to write new code than to read old code, but we don't consider all of the knowledge embedded in the old code. While throwing out old code is sometimes useful, we tend to fall back on that choice too quickly, believing that 'new' is faster than spending the time to understand the impact of enhancements and changes. If you have a set of unit tests that covers the vast majority of functionality, then you might be able to make a case. But if you don't, then rewriting part of your system should be the last choice, not the first one.

SQL CLR Configuration - A Head Slapping Moment

This post is basically a reminder to a future me. The client I'm working with right now (basically an ISV) is using SQL Express as part of their project. And included in their database are a number of CLR stored procedures. In other words, the stored procedures are written in C#. While a little unusual, this is not particularly extraordinary.
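
For anyone who hasn't seen one, a CLR stored procedure is just a static method in a .NET assembly that SQL Server exposes as a procedure once the assembly is registered. A minimal sketch (the class, method and message are made up for illustration):

    using Microsoft.SqlServer.Server;
    using System.Data.SqlTypes;

    public class StoredProcedures
    {
        // Exposed to T-SQL as a stored procedure once the assembly is
        // registered with CREATE ASSEMBLY / CREATE PROCEDURE.
        [SqlProcedure]
        public static void HelloFromClr(SqlString name)
        {
            SqlContext.Pipe.Send("Hello from the CLR, " + name.Value);
        }
    }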

The problem arose as I deployed my application from the build server to a freshly installed machine by hand. Moving the binaries was simple (a file copy from a shared directory). Moving the database was nothing more than a backup and restore. But when I ran the application, things didn't work.

It turns out that I had overlooked an important point. I had assumed that the database setting that enables the CLR would be 'backed up and restored' along with the database. Given that CLR functionality is turned on using sp_configure (sp_configure 'clr enabled', to be precise), there was no reason for me to make such an assumption. But I did, and the result was a moderate debugging session spent trying to figure out why I was stupid...er...forgetful.
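
For future me, here is a quick sketch of re-enabling CLR integration on the target instance after the restore. The connection string is made up; the sp_configure and RECONFIGURE calls are the standard way to flip this server-level option.

    using System.Data.SqlClient;

    class EnableSqlClr
    {
        static void Main()
        {
            // Hypothetical connection string; point it at the instance you restored to.
            var connectionString = @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // 'clr enabled' is a server-level option, so it does not travel
                // with a database backup; it has to be set on the new instance.
                using (var command = new SqlCommand(
                    "EXEC sp_configure 'clr enabled', 1; RECONFIGURE;", connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }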

Anyway, the point is that not every setting that a database depends on is backed up with the database. Some are associated with SQL Server itself and are therefore not backed up at all. I knew that, but sometimes I forget. :)

A New Recruiting Drive

ObjectSharp has been growing quite steadily for the past little while. But winning a couple of recent contracts has begun to strain our seemingly unending supply of high-end talent. For this reason (and to get some fresh blood and ideas into the company), we are now actively searching for associates. If you're immediately available, we do have a project that could use your skill set (assuming that your skill set includes C#, Windows/Web Forms and SQL Server). But if you're just interested in learning what ObjectSharp has to offer, I'd be happy to answer any questions. Or you can contact (or forward a resume to) our fantastic Service Manager, Gisele Bourque.

In a few sentences, working with ObjectSharp is a great opportunity to work with some of the top minds in the .NET world using technologies and techniques that are at the forefront of software development. The people are great. The projects are usually quite interesting. And, on top of all of that, you get paid. :) If you'd like the chance to stretch your development wings, it's probably worth having a conversation with us, one way or the other.

The Cost of Migrating from VB6

Recently, a question regarding the cost associated with migrating from VB6 to VB.NET was asked by one of our clients. Since that is a question whose answer has a broader appeal than just to the asker, I thought I would replicate my response here.

It should be stated up front, however, that I don't normally recommend a migration from VB6 to .NET. This isn't to say that there aren't ways to benefit from the move. It's just that a straight migration typically won't see any of those benefits. Re-architecting an application is generally the only way to get those improvements and a straight migration doesn't accomplish this. And if you already have a working VB6 application, there is little to be gained by creating a VB.NET application that does exactly the same thing.

Keep in mind that I said "little", not "nothing". While the benefits are greater for redesigning, there are still times where a migration is the right choice. Which is the reason why the question of cost does need to be addressed.

There are a number of factors that come into play when trying to determine the cost of just migrating a VB6 application to VB.NET. Let me provide an admittedly incomplete list.

Code Migration

Naturally, the first thing to consider is moving the code from VB6 to .NET. While there are migration wizards available, both from Microsoft and third parties, it is important to realize that there is no way to simply 'wizard' a migration. While the syntax of VB6 and VB.NET is similar, there are many concepts that are different. And no wizard will take a VB6 application and create a VB.NET application that is designed in the most effective manner for .NET. So while you can get an application to be compilable quite quickly, it won't be taking advantage of many of the features of .NET that can improve developer productivity. This is one of the reasons that many companies consider rewriting VB6 applications instead of just migrating them.

That having been said, it is certainly faster to migrate an application than it is to rewrite it. An average developer can produce 500-1000 lines of tested and deployable code in a month. However, that same developer can migrate 35,000 to 40,000 lines of code a month. So to calculate the raw cost per line of a migration, figure out how much you pay an average developer in a month and divide by 35,000. For example, a developer costing $10,000 a month (a number chosen purely for illustration) works out to roughly 25 to 30 cents per migrated line.

Developer Training

Of course, migrating the code is only part of the associated cost. Developers have to be retrained to use VB.NET. A typical VB6 developer will take about six months to regain the productivity level in VB.NET that they had in VB6. A junior developer might take eight months, while a senior developer will take around four months.

Part of the process of getting people up to speed will be training. Depending on the technologies that are being used, anywhere from 10-20 days of in-class training will be needed. A typical breakdown of the topics covered would be 3-8 days of .NET Framework training, 3-5 days of ASP.NET development, 1-2 days of testing, and 4-5 days of advanced programming concepts (WCF, SharePoint, WPF, etc.).

While this might seem like an expensive process, there are hidden costs and lost productivity associated with trying to get developers up to speed 'on the cheap'. There is too much in .NET for a single class to provide all of the necessary information in sufficient depth to be able to use it effectively. The problem is that some organizations (and some developers) will pretend that a 5-day course on .NET is good enough. The developer will end up spending the next 12 months looking up how to do the common tasks that weren't covered in those 5 days and will end up making design decisions that, had they had the correct knowledge at the time, would not have been made. Both company and developer can spend years trying to correct the bad choices made through inexperience.

Preparing the VB6 Application

There are a number of common issues that arise during the migration process, issues that can't be addressed by a wizard. These issues generate the large majority of problems that are found when migrating, and include such items as:

  • The default property is no longer used
  • The property/method is not found in VB.NET
  • The property/method is found, but has a slightly different behavior
  • COM functionality has changed

If these issues are addressed before the migration (that is, in the VB6 application), it can help speed up the migration process. Each of these issues actually results in a cascade of problems (on the order of 5 VB.NET problems for each instance of a VB6 problem) in the migrated application, so it is worthwhile to spend some time 'fixing' the VB6 application in order to get it ready for migration.

While there are other considerations involved (is the new application to be web-enabled? is there integration with other legacy applications? and so on), these items are the largest sources of cost associated with migrating from VB6 to .NET.