More Thoughts on the Cloud

One of the more farsighted thoughts on the implications of cloud computing is the concern about vendor lock-in. Tim Bray mentioned it in his Get in the Cloud post:

Big Issue · I mean a really big issue: if cloud computing is going to take off, it absolutely, totally, must be lockin-free. What that means is that if I’m deploying my app on Vendor X’s platform, there have to be other vendors Y and Z such that I can pull my app and its data off X and it’ll all run with minimal tweaks on either Y or Z.

...

I’m simply not interested in any cloud offering at any level unless it offers zero barrier-to-exit.

This idea was also commented on by Dare Obasanjo here. It was Dare who originally pointed me at Tim's post.

My take on the vendor lock-in problem is two-fold. The first part is the easier one to deal with: the platform on which the application runs. As it stands right now, using Azure depends on your being able to publish an application. The destination for the application happens to be a cloud service, but that is not a big deal. You could just as easily publish the application to your own servers (or server farm). The application being pushed out to the cloud is equally capable of being deployed onto a different infrastructure.

Now, there are aspects of the cloud service which might place some significant requirements on your target infrastructure. A basic look at the model used by Azure indicates that a worker pattern is being used. Requests arrive at the service and are immediately queued. The requests are then processed in the background by a worker. Placing the request in a queue helps to ensure the reliability of the application, as well as the ability to scale up on demand. So if you created an infrastructure that was capable of supporting such a model, then your lock-in at the application level doesn't exist. Yes, the barrier is high, but it is not insurmountable. And there is the possibility that additional vendors will take up the challenge.
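
To make the shape of that model concrete, here is a minimal in-process sketch of the queue-plus-worker pattern described above. The class and method names are my own inventions, and a plain Queue<T> stands in for the durable cloud queue; the point is only the structure: the front end enqueues requests and returns, and a background worker drains the queue.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// In-process stand-in for the queue-plus-worker model. In Azure the queue
// would be durable cloud storage shared between the front end and the worker.
public static class WorkerPatternSketch
{
    private static readonly Queue<string> Requests = new Queue<string>();
    private static readonly object Gate = new object();

    // Front end: accept the request, queue it, and return immediately.
    public static void HandleIncomingRequest(string payload)
    {
        lock (Gate)
        {
            Requests.Enqueue(payload);
            Monitor.Pulse(Gate);
        }
    }

    // Worker: drains the queue in the background. Scaling out means running
    // more copies of this loop on more instances.
    public static void WorkerLoop()
    {
        while (true)
        {
            string payload;
            lock (Gate)
            {
                while (Requests.Count == 0)
                {
                    Monitor.Wait(Gate);
                }
                payload = Requests.Dequeue();
            }

            Console.WriteLine("Processing: " + payload);
        }
    }
}
```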

The second potential for lock-in comes from the data. Again, this becomes a matter of how you have defined your application. Many companies will want to maintain their data on their own premises. In the Azure world, this can be done through ADO.NET Data Services. In fact, this is currently (I believe) the expected mechanism. The data stores offered by Azure are not intended to be used for large volumes of data. At some point, I expect that Azure will offer the ability to store data (of the larger variety) within the cloud. At that point, the spectre of lock-in becomes real, and you should consider your escape options before you commit to the service. But until that happens, the reality is that you are still responsible for your data. It is still yours to preserve, back up and use.
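
For what it's worth, keeping the data on-premises while a cloud-hosted front end consumes it over HTTP is not a lot of code. Here is a minimal sketch using ADO.NET Data Services; the NorthwindEntities model name is just a hypothetical stand-in for whatever entity model you already have.

```csharp
using System.Data.Services;

// Exposes an existing (hypothetical) entity model, NorthwindEntities, as a
// REST endpoint that a cloud-hosted front end can reach over HTTP.
public class NorthwindDataService : DataService<NorthwindEntities>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        // Read-only access to every entity set; tighten this for real use.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    }
}
```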

The crux of all this is that the cloud provides pretty much the same lock-in that the operating system does now. If you create an ASP.NET application, you are now required to utilize IIS as the web server. If you create a WPF application, you require either Silverlight or .NET Framework on the client. For almost every application choice you make, there is some form of lock-in. It seems to me that, at least at the moment, the lock-in provided by Azure is no worse than any other infrastructure decision that you would make.

Summarizing the Cloud Initiative

So it's the last few hours of PDC for this year. Which means that pretty much all of the information that can be shoved into my brain has been. It also means that it's a pretty decent moment to be summarizing what I've learned.

Obviously (from its presence in the initial keynote and the number of sessions) cloud computing was the big news. This was also one of the more talked about parts of the conference, and not necessarily for a good reason. Many people that I have talked to walked out of the keynote wondering exactly what Azure was. Was it web hosting? If so, what's the point? It's not like there aren't other companies doing the same thing. Could it be more than web hosting? If so, that wasn't made very clear from the keynote. In other words, I was not exactly chomping at the bit to try out Azure.

But now we're at the end of the week, and I've had a chance to see some additional sessions and talk to a number of people about what Azure is capable of and what it represents for Microsoft. That has tempered my original skepticism a little. But not completely.

In the vision of Azure that was presented, the cloud is intended to be a deployment destination in the sky. Within that cloud, there is some unknown infrastructure that you do not need to be aware of. You can configure the number of servers to go up or down depending on the expected traffic to your site, and as you change the configured value, your application is provisioned accordingly. This is nice for companies that need to deal with spikes in application usage, or who don't have (or don't want to have) the support personnel for their infrastructure.

However, there are some limitations to the type of applications which fit into this model. For example, you need to be able to deploy the application, which implies that you have created the application to the point where you can publish it to the Azure service. The published application might include third-party components (a purchased ASP.NET control, for example), but it can't rely on a separate third-party application (such as Community Server).

As well, you need to be able to access the application through the web. You could use a Silverlight front end to create the requests. You could use a client-based application to create the requests. But, ultimately, there is a Web request that needs to be sent to the service. I fully expect that Silverlight will be the most common interface.
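
Just to underline that last point, whatever the front end looks like, the call to the hosted service boils down to a plain HTTP request. A quick sketch (the address is made up):

```csharp
using System.IO;
using System.Net;

public static class CloudClientSketch
{
    public static string GetStatus()
    {
        // The address is hypothetical; a Silverlight or desktop front end
        // would issue the same kind of request through its own HTTP stack.
        var request = (HttpWebRequest)WebRequest.Create("http://myservice.cloudapp.net/status");
        request.Method = "GET";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}
```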

So if you have applications that fit into that particular model, then Azure is something you should look at. There are some 'best practices' that you need to consider as part of your development, but they are not onerous. In fact, they are really the kind of best practices that you should already be using. As well, you should remember that the version of Azure introduced this past week is really just V1.0. You have to know that Microsoft has other ideas for pushing applications to the cloud. So even if the profile doesn't fit you right now, keep your ears open. I expect that future enhancements will only work to envelop more and more people in the cloud.

WPF Futures

I just got out of the WPF Roadmap talk. I found the future for WPF less interesting than the present, although that could very well be because the speed of innovation coming from that group has been very high over the past couple of years. In fact, the 'roadmap' is as much about the present as it is about the future.

Mention was made of the fact that the SP1 version of .NET 3.5 actually included a fair bit of 'new' functionality for WPF. As well, they have just released some new controls that are in various states of readiness. There are a DataGrid, a Calendar control and a DatePicker that are officially released. And there is a brand new ribbon control that is (I believe) in CTP mode.

As well, there is a Visual State Manager available for WPF. This is actually quite cool, as the VSM solves a problem for WPF that Silverlight has already addressed. Specifically, it could be challenging to get the visual state of a control/page/form matched up with the event or data triggers that would be raised within the application. The Visual State Manager lets you associate a storyboard with a particular state and then set the state programmatically, at which point WPF runs the storyboard.
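
As a rough sketch of the pattern (the control and state names here are invented, and I'm assuming the VisualStateManager class as it appears in the WPF Toolkit and Silverlight): the states and their storyboards are declared in XAML, and the code-behind simply asks the VSM to move between them.

```csharp
using System.Windows;
using System.Windows.Controls;

// Code-behind for a hypothetical control whose XAML defines a VisualStateGroup
// containing "Idle" and "Loading" states, each with its own storyboard.
public partial class SearchPanel : UserControl
{
    public SearchPanel()
    {
        InitializeComponent();
    }

    private void OnSearchStarted(object sender, RoutedEventArgs e)
    {
        // Move to the "Loading" state; the VSM runs the associated storyboard
        // (the 'true' asks for the transition animation).
        VisualStateManager.GoToState(this, "Loading", true);
    }

    private void OnSearchCompleted(object sender, RoutedEventArgs e)
    {
        VisualStateManager.GoToState(this, "Idle", true);
    }
}
```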

Both the ribbon control and the VSM are available right now.

The future for WPF seemed less clear. A list of 'potential' controls was presented, along with some general comments about improvements in rendering, integration with touch, etc. In other words, the future appears to be now. I guess that lack of specificity is to be expected when a product group is right at the beginning of its development cycle.

Development Changes

The presenter for the development changes related to .NET and Windows 7 is ScottGu. Who else?

First off, Microsoft will be releasing a WPF ribbon control later this week. That is very nice, especially since the ribbon is the new hotness when it comes to the user interface.

Demos also mentioned the integration of touch and jump lists, which are Win7-specific functionality. Jump lists (context-sensitive functionality available through the taskbar thumbnail for an application) are simply commands which are implemented and registered by the application. Touch integration includes the ability to tie gestures to commands. So, ultimately, commands become the main mechanism for integrating WPF and Win7, at least at this level.
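
To give a feel for what "implemented and registered by the application" might look like, here is a small sketch using the System.Windows.Shell JumpList and JumpTask types; the task title, path and arguments are all hypothetical.

```csharp
using System.Windows;
using System.Windows.Shell;

public static class JumpListSketch
{
    public static void RegisterJumpList()
    {
        var jumpList = new JumpList();

        // Each task is just a command the application implements and registers;
        // Windows 7 surfaces it through the taskbar thumbnail for the app.
        jumpList.JumpItems.Add(new JumpTask
        {
            Title = "New Order",                      // hypothetical command
            ApplicationPath = @"C:\MyApp\MyApp.exe",  // hypothetical path
            Arguments = "/neworder"
        });

        JumpList.SetJumpList(Application.Current, jumpList);
    }
}
```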

Today, some additional WPF controls are being released. These include a DataGrid and a DatePicker control, as well as a Visual State Manager (a feature that greatly improves the creation of interfaces that depend upon events within the UI).

.NET 4.0 includes support for the Dynamic Language Runtime (DLR).
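
A minimal sketch of the kind of late-bound code this enables from C#, assuming the dynamic keyword and ExpandoObject as announced for C# 4.0:

```csharp
using System;
using System.Dynamic;

public static class DlrSketch
{
    public static void Main()
    {
        // Member access is resolved at run time by the DLR rather than at
        // compile time, so no static Person type is needed here.
        dynamic person = new ExpandoObject();
        person.Name = "Ada";
        person.Age = 36;

        Console.WriteLine(person.Name + " is " + person.Age);
    }
}
```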

VS2010 includes improvements to the WPF designer. It also appears that VS2010 itself was built using WPF (at least the UI portion). Other areas of improvement include a streamlined test-driven development (TDD) model and better extensibility within the editor (and the IDE itself). For the latter functionality, check out the Managed Extensibility Framework (MEF). The idea behind MEF is to allow for better visualization of data, even from external sources such as TFS, within the IDE. The demo included the ability to grab bug information from TFS and display it while looking at the code associated with the bug.
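
For those who haven't seen MEF, the composition model boils down to exports, imports and a container that wires them together. A minimal sketch (the contract and class names are invented for illustration):

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

// A hypothetical extension contract and one implementation of it.
public interface ICodeVisualizer
{
    void Show(string fileName);
}

[Export(typeof(ICodeVisualizer))]
public class BugOverlayVisualizer : ICodeVisualizer
{
    public void Show(string fileName)
    {
        Console.WriteLine("Overlaying bug data on " + fileName);
    }
}

public class EditorHost
{
    // The host declares what it needs; it never creates the implementation directly.
    [Import]
    public ICodeVisualizer Visualizer { get; set; }

    public void Compose()
    {
        // The container scans the assembly for [Export]s and satisfies the [Import].
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}
```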

Traveling to PDC 2008

I'm writing this while at 35,000 feet, winging my way to PDC in Los Angeles. This is actually my first PDC so I'm looking forward to it. I've been to Tech Ed and Mix in the past, but have never made it to a PDC. How can anything bad happen while I'm surrounded by 7,000+ geeks?

There are a couple of areas where I expect to see some significant announcements. Some of them, such as the beta bits for Windows 7 and an early CTP for VS2010, are already widely anticipated. But there are likely to be more (and potentially more interesting) announcements from across the company.

For example, I expect to hear about some big initiative surrounding cloud computing. Aside from tools that will help developers take advantage of the technology, it wouldn't surprise me to hear about a service that competes with Amazon's EC2 service.

Another potential target for news is the Oslo project. There has been a bit of buzz (oooooo...alliteration) on this upgrade to Microsoft's service layer offering, but it will be interesting to see how Oslo is positioned, especially in relation to any cloud computing announcements.

Beyond the above, I'm sure there will be more surprises. From the number of sessions on the agenda, I expect that there will be some VSTS innovations. And my interest has been piqued by an MSR project called Pex that deals with automated white-box testing. I'll be live-blogging from the keynotes and the sessions that I attend, basically commenting on anything that catches my ear. So stay tuned for some PDC details...at least, my take on them.

Mind Mapping from Word 2003 to 2007

I recall my first experience with Word 2007...it took me 20 minutes to figure out how to do a Save As. Who would have figured that the cute icon in the top left corner actually had functionality associated with it? :)

In a blog post from Chris Sells, I was pointed to a tool whose purpose is to bridge this gap. You see an online image of the Office 2003 menu structure and, by hovering over an item, you are told where to find the same function in 2007. You can find the tool here. Check it out, use it and avoid the embarrassment that I went through.

More Ways to Avoid the Second System Effect

Dare Obasanjo had an interesting post yesterday on the Second System Effect in software development. For those who are unaware, the second system effect is a term first coined by Frederick Brooks in The Mythical Man-Month. It deals with (in general) the idea that the second system designed/implemented by anyone is typically over-architected, with more bells and whistles added than need be.

Dare goes on to describe a number of common factors that keep systems from falling into this trap. They are factors that, in my experience, do contribute greatly to the success of a second version. I have a couple of factors to add.

Only One Driver

There is a fundamental clash between marketing and developers when it comes to the priority of items added to a product. Marketing is looking for features that will help to drive sales. Developers are looking for features that will improve the architecture, stabilize the product, ease future enhancements and simply be cool to implement. Frequently, these two groups will not agree 100% on the features which should be in the next release.

Successful projects have a single driver. That is, there is one person who is responsible for determining which features do and don't make the cut. They will listen to both sides of the argument and make a decision, with their ultimate responsibility being to drive the successful shipping of the release. It doesn't matter which discipline the person comes from, although it helps if the driver has the respect of both groups. The important element is to have someone who is making the decision and ensuring that the process doesn't become a continual stream of requests for new features.

Rewriting is not Right

The title should probably read "Rewriting is not Right unless you have extraordinary unit test coverage...and probably not even then", but that wasn't catchy enough.

After you come up for air at the end of a project, it is usual to have some sort of post mortem. Even the name indicates how developers look at this part of the software development process. It is not in our nature (generally speaking) to sit back and admire the good things that were accomplished. Instead, we concentrate on the warts of the system. How many times have you said, immediately after completing a task, that you wished you could rewrite it from scratch?  Take solace that you're not alone in that feeling...it is quite common among your brethren.

The problem is that the urge to rewrite needs to be resisted. There is, whether you realize it or not, more invested in the development of a particular feature than you might expect. There is more to it than the visible code that implements the feature set. There are also all of the bug fixes associated with the application. The one or two lines of code that were necessary to make the system load the poorly-formatted file that arrives monthly from your biggest client. The use of a semaphore to ensure that a timing problem was corrected. All of those little things that had to be done to take the first pass of code and make it ready for use in the real world.

When you're thinking about rewriting, your mind is focused on reworking the architecture. It is not thinking about the many hours of effort that went into identifying, replicating and correcting the bugs. We know that it's easier to write new code than to read old code, but we don't consider all of the knowledge embedded in the old code. While throwing out old code is sometimes useful, we tend to fall back on that choice too quickly, believing that 'new' is faster than spending the time to understand the impact of enhancements and changes. If you have a set of unit tests that covers the vast majority of functionality, then you might be able to make a case. But if you don't, then rewriting part of your system should be the last choice, not the first one.

SQL CLR Configuration - A Head Slapping Moment

This post is basically a reminder to a future me. The client I'm working with right now (basically an ISV) is using SQL Express as part of their project. And included in their database are a number of CLR stored procedures. In other words, the stored procedures are written in C#. While a little unusual, this is not particularly extraordinary.

The problem arose as I deployed my application from the build server to a freshly installed machine by hand. Moving the binaries was simple (a file copy from a shared directory). Moving the database was nothing more than a backup and restore. But when I ran the application, things didn't work.

It turns out that I had assumed, without ever thinking about it, that the database setting that enables the CLR would be backed up and restored along with the database. Given that CLR functionality is turned on using sp_configure (sp_configure 'clr enabled', to be precise), there was no reason for me to make such an assumption. But I did, and the result was a moderate debugging session spent trying to figure out why I was stupid...er...forgetful.
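
The fix itself is just a couple of sp_configure calls against the target server, and since the setting lives with the server rather than the database, it has to be reapplied after every restore onto a fresh machine. A sketch of applying it from C# (the connection string is whatever points at your server):

```csharp
using System.Data.SqlClient;

public static class EnableSqlClr
{
    // Run once against the target server after restoring the database there.
    public static void Apply(string connectionString)
    {
        const string sql =
            "EXEC sp_configure 'clr enabled', 1; " +
            "RECONFIGURE;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```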

Anyway, the point is that not every setting that a database depends on is backed up with the database. Some are associated with SQL Server itself and are therefore not backed up at all. I knew that, but sometimes I forget. :)

The Cost of Migrating from VB6

Recently, a question regarding the cost associated with migrating from VB6 to VB.NET was asked by one of our clients. Since that is a question whose answer has a broader appeal than just to the asker, I thought I would replicate my response here.

It should be stated up front, however, that I don't normally recommend a migration from VB6 to .NET. This isn't to say that there aren't ways to benefit from the move. It's just that a straight migration typically won't see any of those benefits. Re-architecting an application is generally the only way to get those improvements and a straight migration doesn't accomplish this. And if you already have a working VB6 application, there is little to be gained by creating a VB.NET application that does exactly the same thing.

Keep in mind that I said "little", not "nothing". While the benefits are greater for redesigning, there are still times where a migration is the right choice. Which is the reason why the question of cost does need to be addressed.

There are a number of factors that come into play when trying to determine the cost of just migrating a VB6 application to VB.NET. Let me provide an admittedly incomplete list.

Code Migration

Naturally, the first thing to consider is moving the code from VB6 to .NET. While there are migration wizards available, both from Microsoft and third parties, it is important to realize that there is no way to simply 'wizard' a migration. While the syntax of VB6 and VB.NET is similar, there are many concepts that are different. And no wizard will take a VB6 application and create a VB.NET application that is designed in the most effective manner for .NET. So while you can get an application to be compilable quite quickly, it won't be taking advantage of many of the features of .NET that can improve developer productivity. This is one of the reasons that many companies consider rewriting VB6 applications instead of just migrating them.

That having been said, it is certainly faster to migrate an application than it is to rewrite it. An average developer can produce 500-1000 lines of tested and deployable code in a month. However, that same developer can migrate 35,000 to 40,000 lines of code a month. So to calculate the raw cost per line of a migration, figure out how much you pay an average developer in a month and divide by 35,000. For example, at a (hypothetical) fully loaded cost of $8,000 per month, that works out to roughly 23 cents per migrated line.

Developer Training

Of course, migrating the code is only part of the associated cost. Developers have to be retrained to use VB.NET. A typical VB6 developer will take about six months to regain the productivity level in VB.NET that they had in VB6. A junior developer might take 8 months, while a senior developer will take around 4 months.

Part of the process of getting people up to speed will be training. Depending on the technologies that are being used, anywhere from 10-20 days of in-class training will be needed. A typical breakdown of the topics covered would be 3-8 days of .NET Framework training, 3-5 days of ASP.NET development, 1-2 days of testing, and 4-5 days of advanced programming concepts (WCF, SharePoint, WPF, etc.).

While this might seem like an expensive process, there are hidden costs and lost productivity associated with trying to get developers up to speed 'on the cheap'. There is too much in .NET for a single class to provide all of the necessary information in sufficient depth to be able to use it effectively. The problem is that some organizations (and some developers) will pretend that a 5-day course on .NET is good enough. The developer will end up spending the next 12 months looking up how to do the common tasks that weren't covered in those 5 days and will end up making design decisions that, had they had the correct knowledge at the time, would not have been made. Both company and developer can spend years trying to correct the bad choices made through inexperience.

Preparing the VB6 Application

There are a number of common issues that arise during the migration process, issues that can't be addressed by a wizard. These issues generate the large majority of problems that are found when migrating and include such items as

  • The default property is no longer used
  • The property/method is not found in VB.NET
  • The property/method is found, but has a slightly different behavior
  • COM functionality has changed

If these issues are addressed before the migration (that is, in the VB6 application), it can help speed up the migration process. Each of these issues actually results in a cascade of problems (on the order of 5 VB.NET problems for each instance of a VB6 problem) in the migrated application, so it is worthwhile to spend some time 'fixing' the VB6 application in order to get it ready for migration.

While there are other considerations involved (Is the new application to be web-enabled? Is there integration with other legacy applications? And so on), these items are the largest sources of cost associated with migrating from VB6 to .NET.

Aspiring Architect - Web cast series

With all this heat, I almost wrote "perspiring". Why not beat the heat and stay cool inside by watching these web casts from MS Canada targeting aspiring architects? With the predicted shortage in IT in the upcoming years, we're sure to see an influx of junior resources into our industry. This is a good opportunity for developers to transition into architecture roles and leverage their existing skill set.

The Aspiring Architect Series 2008 builds on last year’s content and covers a number of topics that are important for architects to understand. So it would be a great idea to watch last year's recordings if you haven't already. Links are available here:  http://blogs.msdn.com/mohammadakif/archive/tags/Aspiring+Architects/default.aspx .

Upcoming sessions are as follows:

June 16th, 2008 – 12:00 p.m. to 1:00 p.m. – Introduction to the aspiring architect Web Cast series

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032380836&Culture=en-CA

June 17th, 2008 – 12:00 p.m. to 1:00 p.m. – Services Oriented Architecture and Enterprise Service Bus – Beyond the hype

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032380838&Culture=en-CA

June 18th, 2008 – 12:00 p.m. to 1:00 p.m. – TOGAF and Zachman, a real-world perspective

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032380840&Culture=en-CA

June 19th, 2008 – 12:00 p.m. to 1:00 p.m. – Services Oriented Architecture (Web Cast in French)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032380842&Culture=en-CA

June 20th, 2008 – 12:00 p.m. to 1:00 p.m. – Interoperability (Web Cast in French)

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032380844&Culture=fr-CA

June 23rd, 2008 – 12:00 p.m. to 1:00 p.m. – Realizing dynamic systems

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032380846&Culture=en-CA

June 24th, 2008 – 12:00 p.m. to 1:00 p.m. – Web 2.0, beyond the hype

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032380848&Culture=en-CA

June 25th, 2008 – 12:00 p.m. to 1:00 p.m. – Architecting for the user experience

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032380850&Culture=en-CA

June 26th, 2008 – 12:00 p.m. to 1:00 p.m. – Conclusion and next steps

http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032380852&Culture=en-CA