Writing is done, so back to the blogging

The two or three of you who follow my blog with regularity will have noticed that I was dark for most of the summer. The reason was that I was in the process of writing a book. Co-writing would be more accurate, but still, long hours were spent pounding out prose on my antique Underwood. Okay, maybe not so much pounding, but writing a book does dry me out when it comes to writing blog posts.

The recent influx of posts would seem to indicate that the book-writing process is finished. And indeed it is. In fact, my editor informed me yesterday that the files have been shipped off to the publisher for final processing and printing. This is a source of great cheer, as I can now rest easy knowing that no additional requests for editing will arrive in my inbox.

For those of you who are interested, the book is the MS Press training kit for the Windows Communication Foundation exam. You can see what it looks like at Amazon. And feel free to buy multiple copies...they make great Christmas gifts :)

Visual Studio and SQL Server 2008 Conflicts

I'm just passing along some information that has been making the rounds (I found it on Guy Burstein's blog).

If you attempt to install SQL Server 2008 on a machine that has Visual Studio 2008 installed, the installation will fail. SQL Server 2008 requires VS 2008 SP1, an update that is still about a week away from release. And you need the 'real' SP1: the same problem exists with the SP1 beta.

So, for you developer/database people out there, it looks like at least a week of waiting to get the combination on a single machine.

Misleading "Could Not Locate Dependency" message in Composite UI Block

The project that I have been working on for a number of months uses the Composite UI Application Block (CAB) as the basis for the user interface. If you have never worked with CAB before, it is an interesting framework. In many ways, it makes easy some tasks that would otherwise be incredibly challenging. This includes the ability to add and remove elements from the user interface simply by adding or removing lines in an XML file. On the other hand, there are parts of CAB that appear to work as if by magic.

One of the 'magic' pieces is dependency injection. As a smart part (the name of the UI element that is part of the composition process) is loaded, the presenter (CAB supports a model-view-presenter pattern) is instantiated with some number of parameters passed into the constructor. The cool feature is that the value of any one of the parameters can be 'injected' by CAB by marking the parameter as follows.

public DemoPresenter([ServiceDependency] WorkItem workItem,
   [ComponentDependency("Target")] string injectedParameter)

In this declaration, the injectedParameter value is taken from the Target item in the work item and automatically passed into the constructor. Someplace earlier (and, in fact, possibly even in another component), someone had to execute code like the following.

string valueOfInterest = "Hello";
WorkItem.Items.Add(valueOfInterest, "Target");

While this might seem a little convoluted, in the world of composite UI, this injection technique allows disparate components to easily share context.

So much for the background; on to the problem. I had created a presenter constructor that took a parameter of type NamedObject. As well, the ComponentDependency attribute specified that the "Target" item in the work item should be passed in. In the place where the object was added to the WorkItem.Items collection, I instantiated an object of type Profile, where the Profile class derives from the NamedObject class. I added the Profile object to my Items collection with a key of "Target". I expected the dependency injection mechanism to retrieve the Target item and pass it into the presenter constructor. Instead, an "Unable to locate dependency 'NamedObject'" exception was raised.

After digging deep into the injection process, specifically into the ObjectBuilder methods within the CAB project, I found that the matching process used by dependency injection is not based solely on the key passed into the Items.Add method. It also includes the type of the item. In other words, the ObjectBuilder not only looks for an item called "Target", it also ensures that the type of the item matches the type of the constructor parameter. And, to make it more annoying, the ObjectBuilder doesn't understand inheritance. So even though my item was of type Profile, which is a NamedObject (through inheritance), the ObjectBuilder decided that there was no match and threw an exception.

The solution to this problem (other than modifying how ObjectBuilder performs its matching...a daunting task, to say the least) is to extract the item out of the work item directly. There are reasons for wanting to avoid this (there is actually more than one Items collection that can contain the injected value), but it worked in my situation. Still, I have a gripe with the message. It's not truly misleading, as ObjectBuilder couldn't find a dependency of the indicated type. However, the message does require a deep understanding of how ObjectBuilder does its thing, and that is a strike against the message in my book.
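To make the workaround concrete, here is a minimal sketch of the shape it takes, assuming CAB's ManagedObjectCollection.Get lookup (the presenter name and the "Target" key are from my scenario; adjust to your own):

```csharp
using Microsoft.Practices.CompositeUI;

public class DemoPresenter
{
    private readonly NamedObject target;

    // Let CAB inject the WorkItem itself, then fetch the item by key.
    public DemoPresenter([ServiceDependency] WorkItem workItem)
    {
        // Items.Get matches on the key alone, so the Profile instance
        // stored under "Target" comes back and the cast to the base
        // NamedObject type succeeds, sidestepping ObjectBuilder's
        // exact-type matching.
        this.target = (NamedObject)workItem.Items.Get("Target");
    }
}
```

The trade-off, as noted above, is that you take on the responsibility of knowing which Items collection holds the value.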

More Ways to Avoid the Second System Effect

Dare Obasanjo had an interesting post yesterday on the Second System Effect in software development. For those who are unaware, the second system effect is a term first coined by Frederick Brooks in The Mythical Man-Month. It deals with (in general) the idea that the second system designed/implemented by anyone is typically over-architected, with more bells and whistles added than need be.

Dare goes on to describe a number of common factors that keep systems from falling into this trap. Factors that, in my experience, do contribute greatly to the success of a second version. I do have a couple of factors to add.

Only One Driver

There is a fundamental clash between marketing and developers when it comes to the priority of items added to a product. Marketing is looking for features that will help to drive sales. Developers are looking for features that will improve the architecture, stabilize the product, ease any future enhancements and simply be cool to implement. Frequently, these two groups will not agree 100% on which features should be in the next release.

Successful projects have a single driver. That is, there is one person who is responsible for determining which features do and don't make the cut. They will listen to both sides of the argument and make a decision, with their ultimate responsibility being to drive the successful shipping of the release. It doesn't matter which discipline the person comes from, although it helps if the driver has the respect of both groups. The important element is to have someone who is making the decisions and ensuring that the process doesn't become a continual stream of requests for new features.

Rewriting is not Right

The title should probably read "Rewriting is not Right unless you have extraordinary unit test coverage...and probably not even then", but that wasn't catchy enough.

After you come up for air at the end of a project, it is usual to have some sort of post mortem. Even the name indicates how developers look at this part of the software development process. It is not in our nature (generally speaking) to sit back and admire the good things that were accomplished. Instead, we concentrate on the warts of the system. How many times have you said, immediately after completing a task, that you wished you could rewrite it from scratch?  Take solace that you're not alone in that feeling...it is quite common among your brethren.

The problem is that the urge to rewrite needs to be resisted. There is, whether you realize it or not, more invested in the development of a particular feature than you might expect. There is more to a feature than the visible code that implements it. There are also all of the bug fixes associated with the application: the one or two lines of code that were necessary to make the system load the poorly formatted file that arrives monthly from your biggest client; the use of a semaphore to ensure that a timing problem was corrected; all of those little things that had to be done to take the first pass of code and make it ready for use in the real world.

When you're thinking about rewriting, your mind is focused on reworking the architecture. It is not thinking about the many hours of effort that went into identifying, replicating and correcting the bugs. We know that it's easier to write new code than to read old code, but we don't consider all of the knowledge embedded in the old code. While throwing out old code is sometimes useful, we tend to fall back on that choice too quickly, believing that writing 'new' code is faster than spending the time to understand the impact of enhancements and changes. If you have a set of unit tests that covers the vast majority of functionality, then you might be able to make a case. But if you don't, then rewriting part of your system should be the last choice, not the first one.

SQL CLR Configuration - A Head Slapping Moment

This post is basically a reminder to a future me. The client I'm working with right now (basically an ISV) is using SQL Express as part of their project. And included in their database are a number of CLR stored procedures. In other words, the stored procedures are written in C#. While a little unusual, this is not particularly extraordinary.

The problem arose as I deployed my application by hand from the build server to a freshly installed machine. Moving the binaries was simple (a file copy from a shared directory). Moving the database was nothing more than a backup and restore. But when I ran the application, things didn't work.

It turns out that I had forgotten an important point: I had assumed that the setting that enables the CLR would be backed up and restored along with the database. Given that CLR functionality is turned on using sp_configure (sp_configure 'clr enabled', to be precise), which is a server-level setting, there was no reason for me to make such an assumption. But I did, and the result was a moderate debugging session spent trying to figure out why I was stupid...er...forgetful.

Anyway, the point is that not every setting that a database depends on is backed up with the database. Some are associated with SQL Server itself and are therefore not backed up at all. I knew that, but sometimes I forget. :)
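For the record (and for the future me doing the next hand deployment), the fix is a server-level command run once against the new instance after the restore; a sketch:

```sql
-- 'clr enabled' lives in the server configuration, not the database,
-- so a backup/restore of the database alone will not carry it over.
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
```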

A New Recruiting Drive

ObjectSharp has been growing quite steadily for the past little while. But winning a couple of recent contracts has begun to strain our seemingly unending supply of high-end talent. For this reason (and to get some fresh blood and ideas into the company), we are now actively searching for associates. If you're immediately available, we do have a project that could use your skill set (assuming that your skill set includes C#, Windows/Web Forms and SQL Server). But if you're just interested in learning what ObjectSharp has to offer, I'd be happy to answer any questions. Or you can contact (or forward a resume to) our fantastic Service Manager, Gisele Bourque.

In a few sentences, working with ObjectSharp is a great opportunity to work with some of the top minds in the .NET world using technologies and techniques that are at the forefront of software development. The people are great. The projects are usually quite interesting. And, on top of all of that, you get paid. :) If you'd like the chance to stretch your development wings, it's probably worth having a conversation with us, one way or the other.

ORA-01008 Not All Variables Bound error

I have recently had the opportunity to work (once again) with Oracle. Specifically, I had to create a mechanism that would, based on configurable settings, update either a SQL Server or an Oracle database. In and of itself, this is not particularly challenging. At least, not since ADO.NET implemented a provider model using the Db... classes that are part of System.Data. The provider name can be used to generate the appropriate concrete instance of the DbConnection class, and away you go.

While testing out this capability, I ran into this error when the target data source was Oracle. One would think (and I certainly did) that I had missed assigning one of the in-line parameters. The text associated with the error certainly gave that impression. And I was, after all, building the SQL statement on the fly. A bug in my logic could have placed a parameter into the SQL without creating a corresponding DbParameter.

But that was not the case.

Instead, it was that the value of one of my parameters (a string, as it turned out) was null. Not String.Empty, but null. And when you assign a null value to a parameter, it's as if you didn't bind anything to the parameter at all, the result being that when the query is executed, a nice ORA-01008 exception is thrown. The correct way to do the assignment is to set the parameter value to DBNull.Value instead. It would appear that the SQL Server data provider doesn't have this issue, in that the problem only appeared against an Oracle data source. Not a particularly vexing problem, but still something to be aware of.
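To make this concrete, here is a minimal sketch of a guard that avoids the problem when building parameters on the fly (the helper name is hypothetical; the key point is the DBNull.Value substitution):

```csharp
using System;
using System.Data.Common;

public static class ParameterHelper
{
    // Hypothetical helper: substitute DBNull.Value for a null CLR
    // reference before binding, so every declared parameter is
    // actually bound when the command executes.
    public static void AddParameter(DbCommand command, string name, object value)
    {
        DbParameter parameter = command.CreateParameter();
        parameter.ParameterName = name;
        parameter.Value = value ?? DBNull.Value; // a raw null would leave it unbound
        command.Parameters.Add(parameter);
    }
}
```

Provider-agnostic code like this is exactly where the null/DBNull distinction bites, since only some providers tolerate the raw null.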

And a couple of years had passed since my last Oracle post ;)

The Cost of Migrating from VB6

Recently, a question regarding the cost associated with migrating from VB6 to VB.NET was asked by one of our clients. Since that is a question whose answer has a broader appeal than just to the asker, I thought I would replicate my response here.

It should be stated up front, however, that I don't normally recommend a migration from VB6 to .NET. This isn't to say that there aren't ways to benefit from the move. It's just that a straight migration typically won't see any of those benefits. Re-architecting an application is generally the only way to get those improvements and a straight migration doesn't accomplish this. And if you already have a working VB6 application, there is little to be gained by creating a VB.NET application that does exactly the same thing.

Keep in mind that I said "little", not "nothing". While the benefits are greater for redesigning, there are still times where a migration is the right choice. Which is the reason why the question of cost does need to be addressed.

There are a number of factors that come into play when trying to determine the cost of just migrating a VB6 application to VB.NET. Let me provide an admittedly incomplete list.

Code Migration

Naturally, the first thing to consider is moving the code from VB6 to .NET. While there are migration wizards available, both from Microsoft and from third parties, it is important to realize that there is no way to simply 'wizard' a migration. While the syntax of VB6 and VB.NET is similar, many concepts are different. And no wizard will take a VB6 application and create a VB.NET application that is designed in the most effective manner for .NET. So while you can get an application to compile quite quickly, it won't be taking advantage of many of the features of .NET that can improve developer productivity. This is one of the reasons that many companies consider rewriting VB6 applications instead of just migrating them.

That having been said, it is certainly faster to migrate an application than it is to rewrite. An average developer can produce 500-1000 lines of tested and deployable code in a month. However, that same developer can migrate 35,000 to 40,000 lines of code a month. So to calculate raw cost per line of a migration, figure out how much you pay an average developer in a month and divide by 35,000.
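To make the arithmetic concrete, here is a trivial worked example. The $10,000 fully loaded monthly cost is purely an assumed figure for illustration; substitute your own:

```csharp
using System;
using System.Globalization;

class MigrationCost
{
    static void Main()
    {
        double monthlyCost = 10000.0;   // assumed fully loaded cost, not from the post
        double linesPerMonth = 35000.0; // conservative end of the 35,000-40,000 range
        double costPerLine = monthlyCost / linesPerMonth;

        // prints 0.29 (dollars per migrated line)
        Console.WriteLine(costPerLine.ToString("F2", CultureInfo.InvariantCulture));
    }
}
```

Compare that with the 500-1000 tested lines per month for new code, and the cost gap between migrating and rewriting becomes obvious.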

Developer Training

Of course, migrating the code is only part of the associated cost. Developers have to be retrained to use VB.NET. A typical VB6 developer will take about six months to regain the productivity level in VB.NET that they had in VB6. A junior developer might take eight months, while a senior developer will take around four months.

Part of the process of getting people up to speed will be training. Depending on the technologies being used, anywhere from 10-20 days of in-class training will be needed. A typical breakdown of the topics covered would be 3-8 days of .NET Framework training, 3-5 days of ASP.NET development, 1-2 days of testing, and 4-5 days of advanced programming concepts (WCF, SharePoint, WPF, etc.).

While this might seem like an expensive process, there are hidden costs and lost productivity associated with trying to get developers up to speed 'on the cheap'. There is too much in .NET for a single class to provide all of the necessary information in sufficient depth to be able to use it effectively. The problem is that some organizations (and some developers) will pretend that a 5-day course on .NET is good enough. The developer will end up spending the next 12 months looking up how to do the common tasks that weren't covered in those 5 days, and will end up making design decisions that, had they had the correct knowledge at the time, would not be made. Both company and developer can spend years trying to correct the bad choices made through inexperience.

Preparing the VB6 Application

There are a number of common issues that arise during the migration process, issues that can't be addressed by a wizard. These issues generate the large majority of problems that are found when migrating, and include such items as:

  • The default property is no longer used
  • The property/method is not found in VB.NET
  • The property/method is found, but has a slightly different behavior
  • COM functionality has changed

If these issues are addressed before the migration (that is, in the VB6 application), it can help speed up the migration process. Each of these issues actually results in a cascade of problems (on the order of 5 VB.NET problems for each instance of a VB6 problem) in the migrated application, so it is worthwhile to spend some time 'fixing' the VB6 application in order to get it ready for migration.

While there are other considerations involved (is the new application to be web-enabled? is there integration with other legacy applications? etc.), these items are the largest sources of cost associated with migrating from VB6 to .NET.

Bring Your Data to Life with WPF Session

The premise behind this session is the idea of separation of UI designers and developers. The UI people don't know how to code business rules. But the UI people need to be able to 'try out' the user interface and easily make changes. This is the designer/developer separation that already exists in the Web space, only in this case it's for Windows applications.

For those of you who aren't aware, one of the drawbacks of WPF has been the lack of design-time support for data binding. This is a significant step back in functionality if you're used to creating ASP.NET or Windows Forms apps, and it stopped me from using WPF to any great extent.

The session starts out slowly, talking about the rationale behind data binding. This is something that I would expect most developers to be aware of already, although if the presenter is including designers in his target audience, then I can understand the digression.

Data binding in a WPF form can be done through the latest version of Expression Blend. A new Data pane allows for the selection of a data source (a class, for example). Once the data source has been specified, the property sheet for a control allows the mapping between the control's property and the data source's property to be made. This is a familiar process, although very new to WPF and Expression Blend.

WPF includes the concept of a value converter: a piece of code that transforms a bound value, with the result of the conversion being what is displayed. As well, data binding appears to be hooked up to the property change notification mechanism, in that if a property is programmatically changed, the updated value appears in the form.
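Concretely, a value converter is a class implementing the IValueConverter interface. A minimal sketch (the class name and the formatting choice are my own illustration, not from the session):

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

// Formats a bound double (say, a planet's distance from the sun)
// as a display string. Only the forward direction is supported here.
public class DistanceConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter,
        CultureInfo culture)
    {
        return string.Format(culture, "{0:N0} km", (double)value);
    }

    public object ConvertBack(object value, Type targetType, object parameter,
        CultureInfo culture)
    {
        // One-way binding in this sketch; no conversion back.
        throw new NotSupportedException();
    }
}
```

The converter is then referenced in the binding's Converter property, so the transformation happens without touching the underlying object.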

There is also a mechanism (INotifyCollectionChanged) which raises an event when the collection is changed. The idea of 'change' in a collection is the addition or removal of an item from the collection. WPF data binding is able to detect and respond to these events.
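In practice, the easiest way to get this behavior is to expose the data as an ObservableCollection&lt;T&gt;, which implements INotifyCollectionChanged for you. A small sketch, runnable outside of WPF (the planet names are just sample data):

```csharp
using System;
using System.Collections.ObjectModel;

class CollectionChangeDemo
{
    static void Main()
    {
        ObservableCollection<string> planets =
            new ObservableCollection<string> { "Mercury", "Venus" };

        // A bound WPF ItemsControl subscribes to this same event; here we
        // subscribe manually just to show that Add raises it.
        planets.CollectionChanged += (sender, e) =>
            Console.WriteLine("Collection changed: " + e.Action);

        planets.Add("Earth"); // prints "Collection changed: Add"
    }
}
```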

WPF has replaced the ObjectDataSource class with an ObjectDataProvider. Without seeing the specifics, I'm guessing there is a lot of similarity in terms of functionality, if not in the details.

In the XAML itself, the binding notation looks like the following.

<TextBox Text="{Binding Path=Sun.Name, Source={StaticResource solarSystem}}" />

This notation takes the Name property of the Sun object found in the ObjectDataProvider named solarSystem. A little cumbersome, but since it's definable through Expression Blend, that's only an issue for those of you who code in Notepad 2008.
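For context, the solarSystem resource referred to above would be declared something like this (SolarSystem here is a placeholder for whatever type the demo actually used):

```xml
<Window.Resources>
    <!-- Exposes an instance of the (hypothetical) SolarSystem class
         to bindings under the "solarSystem" key. -->
    <ObjectDataProvider x:Key="solarSystem"
                        ObjectType="{x:Type local:SolarSystem}" />
</Window.Resources>
```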

As part of the data binding mechanism, there is the concept of a data template. This greatly resembles a template within ASP.NET, where different fields and controls are displayed based on the mode of the control. One twist is that WPF data templating can be defined based on the type of object being displayed. Within the same list box, a collection of Products will appear with different fields than a collection of Customers, even though the underlying XAML for the list box is the same.
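To illustrate the type-based templating, a sketch of what the XAML might look like (the type and property names are placeholders, not from the session):

```xml
<!-- Type-keyed templates: the same ListBox renders Products and
     Customers differently, with no code-behind involved. -->
<DataTemplate DataType="{x:Type local:Product}">
    <TextBlock Text="{Binding Path=ProductName}" />
</DataTemplate>
<DataTemplate DataType="{x:Type local:Customer}">
    <TextBlock Text="{Binding Path=CompanyName}" FontStyle="Italic" />
</DataTemplate>
```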

The final reveal of the demo is a list box that displays the information about a solar system not as a list of planet names, but as a graphical representation of the solar system's orbits, with images of the planets positioned away from the sun based on properties of the objects. The cool part is that no change to the underlying object is necessary to move from a drop-down list of properties to the graphical view. Only the XAML needs to be modified. But again, that is the power of WPF.

One word of warning. Not all of the advances in WPF are necessarily available in Silverlight. A concept called a CollectionView was explicitly called out as not being part of Silverlight 2. So if you are developing for the Silverlight market, make sure that the data binding techniques you use are supported before you spend time on it.

WPF provides support for a hierarchical data template. This template becomes useful when you are trying to create a tree view visualization for your data. It's similar to the list box data template in concept, but the level within the hierarchy becomes part of the mechanism for distinguishing between the different data fields and styles that are used.

It looks to me like data binding for WPF has finally moved towards the standards that we have become used to. The presentation didn't cover error providers and error notification, but a slide at the end suggests that they are supported, using the IDataErrorInfo interface.

Steve Ballmer Keynote - 2

Continuing from the previous post

Guy commented on how Microsoft is different: responsive, helpful, and a change from what Microsoft's image has been historically.

Some questions from the audience

On Adobe

They are a big competitor in some areas, specifically in the Silverlight/.NET versus Flex/Flash arena. There is no expectation that they are going to "go away". Microsoft will continue to work with Adobe where it makes sense.

On Internet Explorer not moving at the speed of .NET

Many innovations, including the browser, were tied to the "next O/S after XP" because they took dependencies on the O/S; Microsoft .NET was not. That will not happen in the future. Future IE releases will incubate innovations outside of the O/S and move them into the O/S once they have been proved out.

On the PHP applications that Yahoo has

There will be some refactoring of the search, ad and email technologies when Yahoo and Microsoft get together. Some technology will come from Microsoft. Some will come from Yahoo.

On the Synergies between Microsoft and Yahoo

Scale is an advantage in the search game. More search = more ads = higher bids on ads. The more ads you have, the larger the number of ads that can be inserted into the results of a search. Google has more ads than either Yahoo or Microsoft. The merger will help scale out in this market.

On the Virtualization Server Licensing Scheme

The question was regarding the licensing cost for virtualization to be able to compete with Amazon's computing in a cloud (ECC or EC2, I can't remember the acronym). The answer was that Microsoft has plans to provide a similar service.

On Silverlight on the iPhone

Microsoft would love to get Silverlight on every mobile platform they can. There is currently no free runtime license for apps on the iPhone. Apple apparently wants 30% of all of the revenue generated through the iPhone. So while it would be nice to have, the expectation is that developers are unlikely to bite. And it was suggested that perhaps Apple is not exactly embracing external developers.

On Silverlight and Microsoft applications

The question was whether Silverlight will become part of Microsoft apps, such as Hotmail. As the product cycle for each relevant product comes around, Silverlight will become part of the deliverable, but only in those technologies where it is appropriate. MSN Messenger was called out specifically as not a likely choice.

On HD DVD vs Blu Ray

Microsoft doesn't make peripherals. They support the devices where the demand is and will continue to do so. In the long term, the format isn't that important, as content is more likely to be delivered over the network.

On Enterprise adoption of social networking

The ways in which people interact with each other within the corporation is changing. Sharepoint provides collaboration services, so there is already some knowledge about how people interact. The key is to leverage these areas to provide more 'social-like' capabilities. This area is early technology, so there will be advances in the future.