Getting Trained in an Economic Downturn

I'm sure that, for the vast majority of the readers of my blog, becoming more productive with your programming tools is a desirable goal. Not all developers go out of their way to advance their skills, but the very fact that you read blogs means that getting better is of interest to you. And most companies would also like you to make better use of your existing tools. It's clearly in your company's best interest for this to happen, even if they don't go out of their way to explicitly advance your skills.

And this is the ugly truth for most companies. Even in the best of times, a large number of companies don't provide any significant budget for formal training. Usually developers are expected to pick up any new skills on their own time. Or, worse, they are expected to apply new technologies without spending any 'exploratory' time with them. As most of you are aware, the first time you try a new technology, the result is usually only partially successful. Only after you have worked with it a few times does it become possible to take full advantage of it. And yet, management is reluctant to pay for training classes, tech conferences, or even a programming book that will help you get 'into the zone' for the stuff that would make a difference in your day-to-day efforts. Here are a few suggestions that might possibly get your manager to approve educational expenses, even in today's economic conditions.

Working with Books

Over the years, I have worked with a number of different companies. Each of them takes a slightly different view of what appropriate training costs are. For companies that have a large number of developers and a small educational budget, sometimes books are all that fit. For some companies, creating a library of programming books is a viable option. Your company could purchase well-reviewed (reviews are easy to find on Amazon) programming books on the relevant topics. Employees could then 'borrow' books that were appropriate to their current tasks. The books end up being purchased just once, but can be shared between developers as the need arises.

A more high-tech solution to the problem can be achieved with a subscription to the on-line technology book site Safari. Safari allows books to be searched electronically, downloaded in PDF format, or even printed on an as-needed basis. This is a decent compromise between searching for a specific answer and reading a book cover-to-cover when called for.

However, a corporate library is not always the best solution. Finding answers in a book requires, in many cases, that you have some inkling of the solution beforehand. At a minimum, you need to frame your query appropriately, something that is as much art as science. And the pace of technology advances means that books are almost always going to lag new technology and best practices by a period of months, if not years.

Selling Management on Training: Speak Their Language

When you want to convince your boss to let you go to a conference or attend a course, the first thing to do is look at the expenditure from their perspective. After all, the cost of the training is an expense to them. If there is no corresponding benefit, it becomes difficult to justify spending the money. And, ultimately, you need to convince management that the benefits they will gain are worth more than the money they will spend.

In general, you will find that the attitude a company has towards the training of developers is dictated by how the company makes money and who is responsible for helping to make that money. Usually, companies whose product is technology-based tend to be better at providing and paying for skill improvements for their employees. When developer productivity is closely aligned with corporate revenues, it is easier to get the boss' attention. However, if you work on an application that has no direct correlation with how the company makes money, you're much more likely to face an uphill battle.

But regardless of where you fit in this spectrum, focus your arguments on what they get out of sending you across town or across the country. Make the conversation about their Return on Investment. Show that the training will have concrete and immediate value to the company and you’re a lot closer to getting approval.

One way that you might be able to do this is to offer to share materials and experiences with your team upon your return. At ObjectSharp, we call these sessions 'Lunch & Learns'. You may know them as 'Brown Bag Training'. By offering to present such a session after your conference or course, your company gets to spread some of the benefits of sending one person on training across multiple people. And your team benefits from having the new technologies couched in terms that are directly relevant to your environment.

In some cases, it's also possible to get the trainer to offer to help with these sessions. This is something that ObjectSharp offers to attendees of our courses. We're more than happy to have one of our instructors speak to your company about the newest technologies. While a course is ongoing, instructors work hard to make the content relevant to you. To accomplish this, we ask about the kinds of projects being worked on and where the technology will be applied. So by having an ObjectSharp instructor give the Lunch & Learn, you get a person who is well-versed in the technology, but who also has a basic understanding of how it will fit into your corporate development plans.

You might consider shouldering some of the burden of training yourself. I don't necessarily mean paying for it directly. But if you take the time to attend user group meetings and Code Camps (both of which take place in non-work hours), you show a dedication to improving your skills that might make a difference. At a minimum, you will get some insight into the latest technologies, even if it's not quite the same personalized and intensive hands-on experience that a course provides.

Finally, I'd like to leave you with one last, and surprisingly creative, argument. One of the most common questions we get from potential training clients is 'What if I train our developers and they leave?' Our answer is invariably 'What if you don't train them and they stay?' This usually produces an 'Aha' moment from management, followed by a realization that perhaps investing in staff development might not be quite the pure expense they think it is.

Problems Publishing Unit Tests from VSTS

Earlier today, a colleague had an issue with publishing unit test results into a TFS instance. The publication process is typically done manually at the click of a button, but in this case the Publish button was disabled, with no error message indicating what, if anything, was wrong. This lack of information made identifying the problem a challenge, to put it mildly.

The solution, at least to identifying the problem, is to use the command line version of MSTest. If you execute MSTest /? in a command window, you will see a number of options that can be used to execute a set of unit tests and publish the results to a TFS server. For example, the following command executes the unit tests in the TestLibrary.dll assembly and publishes the results to the TFS server located at http://TfsServer:8080:

MSTest /nologo /testcontainer:"TestLibrary.dll" /runconfig:"localtestrun.testrunconfig" ^
  /resultsfile:"TestLibraryResults.trx" /test:TestLibrary /publish:http://TfsServer:8080 ^
  /publishbuild:"DemoTestBuild_20081103.1" /teamproject:"DemoProject" /platform:"Any CPU" /flavor:Debug

In this particular situation, running MSTest generated an error indicating that the drop location for the build could not be created. An error that was, thankfully, quite easy to correct, but difficult to identify without the command line tool.

More Thoughts on the Cloud

One of the more farsighted thoughts on the implications of cloud computing is the concern about vendor lock-in. Tim Bray mentioned it in his Get in the Cloud post:

Big Issue · I mean a really big issue: if cloud computing is going to take off, it absolutely, totally, must be lockin-free. What that means is that if I’m deploying my app on Vendor X’s platform, there have to be other vendors Y and Z such that I can pull my app and its data off X and it’ll all run with minimal tweaks on either Y or Z.

...

I’m simply not interested in any cloud offering at any level unless it offers zero barrier-to-exit.

This idea was also commented on by Dare Obasanjo here. It was Dare who originally pointed me at Tim's post.

My take on the vendor lock-in problem is twofold. The first part, the platform on which the application runs, is the easier one to deal with. As it sits right now, use of Azure depends on you being able to publish an application. The destination for the application is a cloud service, but that is not a big deal; you can just as easily publish the application to your own servers (or server farm). The application being pushed out to the cloud is capable of being deployed onto a different infrastructure.

Now, there are aspects of the cloud service which might place some significant requirements on your target infrastructure. A basic look at the model used by Azure indicates that a worker pattern is being used. Requests arrive at the service and are immediately queued. The requests are then processed in the background by a worker. The placement of the request in the queue helps to ensure the reliability of the application, as well as the ability to scale up on demand. So if you created an infrastructure capable of supporting such a model, then your lock-in at the application level doesn't exist. Yes, the barrier is high, but it is not insurmountable. And there is the possibility that additional vendors will take up the challenge.
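To make that model concrete, here is a minimal sketch of the queue-and-worker pattern in C#. It uses a simple in-process queue rather than Azure's actual storage and worker role APIs, and all of the names are mine, purely for illustration:

using System;
using System.Collections.Generic;
using System.Threading;

public class RequestProcessor
{
    private readonly Queue<string> requests = new Queue<string>();
    private readonly object padlock = new object();

    // Incoming requests are queued immediately, so accepting a request
    // is decoupled from the (possibly slow) work of processing it.
    public void Accept(string request)
    {
        lock (padlock)
        {
            requests.Enqueue(request);
            Monitor.Pulse(padlock);
        }
    }

    // A background worker drains the queue. Scaling up on demand is a
    // matter of running more copies of this loop against the same queue.
    public void WorkerLoop()
    {
        while (true)
        {
            string request;
            lock (padlock)
            {
                while (requests.Count == 0)
                {
                    Monitor.Wait(padlock);
                }
                request = requests.Dequeue();
            }
            Console.WriteLine("Processing " + request);
        }
    }
}

The queue is the important part. In Azure's case the queued request is persisted, so a worker crash doesn't lose it; that is where the reliability the model promises comes from.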

The second potential for lock-in comes from the data. Again, this becomes a matter of how you have defined your application. Many companies will want to maintain their data within their premises. In the Azure world, this can be done through ADO.NET Data Services. In fact, this is currently (I believe) the expected mechanism. The data stores offered by Azure are not intended to be used for large volumes of data. At some point, I expect that Azure will offer the ability to store data (of the larger variety) within the cloud. At that point, the spectre of lock-in becomes solid. And you should consider your escape options before you commit to the service. But until that happens, the reality is that you are still responsible for your data. It is still yours to preserve, backup and use.

The crux of all this is that the cloud provides pretty much the same lock-in that the operating system does now. If you create an ASP.NET application, you are now required to utilize IIS as the web server. If you create a WPF application, you require either Silverlight or .NET Framework on the client. For almost every application choice you make, there is some form of lock-in. It seems to me that, at least at the moment, the lock-in provided by Azure is no worse than any other infrastructure decision that you would make.

More Ways to Avoid the Second System Effect

Dare Obasanjo had an interesting post yesterday on the Second System Effect in software development. For those who are unaware, the second system effect is a term first coined by Frederick Brooks in The Mythical Man-Month. It deals with (in general) the idea that the second system designed or implemented by anyone is typically over-architected, with more bells and whistles added than need be.

Dare goes on to describe a number of common factors that keep systems from falling into this trap. Factors that, in my experience, do contribute greatly to the success of a second version. I do have a couple of factors to add.

Only One Driver

There is a fundamental clash between marketing and developers when it comes to the priority of items added to a product. Marketing is looking for features that will help to drive sales. Developers are looking for features that will improve the architecture, stabilize the product, ease any future enhancements and simply be cool to implement. Frequently, these two groups will not agree 100% on the features that should be in the next release.

Successful projects have a single driver. That is, there is one person who is responsible for determining which features do and don't make the cut. They will listen to both sides of the argument and make a decision, with their ultimate responsibility being to drive the successful shipping of the release. It doesn't matter which discipline the person comes from, although it helps if the driver has the respect of both groups. The important element is to have someone who is making the decision and ensuring that the process doesn't become a continual stream of requests for new features.

Rewriting is not Right

The title should probably read "Rewriting is not Right unless you have extraordinary unit test coverage...and probably not even then", but that wasn't catchy enough.

After you come up for air at the end of a project, it is usual to have some sort of post mortem. Even the name indicates how developers look at this part of the software development process. It is not in our nature (generally speaking) to sit back and admire the good things that were accomplished. Instead, we concentrate on the warts of the system. How many times have you said, immediately after completing a task, that you wished you could rewrite it from scratch?  Take solace that you're not alone in that feeling...it is quite common among your brethren.

The problem is that the urge to rewrite needs to be resisted. There is, whether you realize it or not, more invested in the development of a particular feature than the visible code that implements it. There are also all of the bug fixes associated with the application: the one or two lines of code that were necessary to make the system load the poorly-formatted file that arrives monthly from your biggest client; the use of a semaphore to ensure that a timing problem was corrected. All of those little things that had to be done to take the first pass of code and make it ready for use in the real world.

When you're thinking about rewriting, your mind is focused on reworking the architecture. It is not thinking about the many hours of effort that went into identifying, replicating and correcting the bugs. We know that it's easier to write new code than to read old code, but we don't consider all of the the knowledge embedded in the old code. While throwing out old code is sometimes useful, we tend to fall back on that choice too quickly, believing that 'new' is faster than spending the time to understand the impact of enhancements and changes. If you have a set of unit tests that covers the vast majority of functionality, then you might be able to make a case. But if you don't, then rewriting part of your system should be the last choice, not the first one.

ORA-01008 Not All Variables Bound error

I have recently had the opportunity to work (once again) with Oracle. Specifically, I had to create a mechanism that would, based on configurable settings, update either a SQL Server or an Oracle database. In and of itself, this is not particularly challenging, at least not since ADO.NET implemented a provider model using the Db... classes in System.Data.Common. The provider name can be used to generate the appropriate concrete instance of the DbConnection class and away you go.
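As a sketch of what that looks like (the method is mine, for illustration; the provider name and connection string would normally come from your configuration):

using System.Data.Common;

// A sketch of provider-neutral connection creation.
// e.g. providerName = "System.Data.SqlClient" or "System.Data.OracleClient",
// the standard invariant names for the SQL Server and Oracle providers.
static DbConnection OpenConnection(string providerName, string connectionString)
{
    DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);
    DbConnection connection = factory.CreateConnection();
    connection.ConnectionString = connectionString;
    connection.Open();
    return connection;
}

From here on, the calling code works with DbCommand, DbParameter and friends, oblivious to which database is on the other end.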

While testing out this capability, I ran into this error when the target data source was Oracle. One would think (and I certainly did) that I had missed assigning one of the in-line parameters. The text associated with the error certainly gave that impression. And I was, after all, building the SQL statement on the fly; a bug in my logic could have placed a parameter into the SQL without creating a corresponding DbParameter.

But that was not the case.

Instead, it was that the value of one of my parameters (a string, as it turned out) was null. Not String.Empty, but null. When you assign a null value to a parameter, it's as if you didn't bind anything to the parameter at all, the result being that when the query is executed, a nice ORA-01008 exception is thrown. The correct way to do the assignment is to set the parameter value to DBNull.Value instead. The SQL Server data provider doesn't appear to have this issue, in that the problem only appeared against an Oracle data source. Not a particularly vexing problem, but still something to be aware of.
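In code, the fix looks something like the following sketch, reusing the provider-neutral DbConnection from above. The table, column and parameter names are invented for illustration; the pattern is what matters:

using System;
using System.Data.Common;

static void UpdateNotes(DbConnection connection, int id, string notes)
{
    using (DbCommand command = connection.CreateCommand())
    {
        command.CommandText =
            "UPDATE Clients SET Notes = :notes WHERE Id = :id";

        DbParameter notesParam = command.CreateParameter();
        notesParam.ParameterName = "notes";
        // notesParam.Value = notes;  // throws ORA-01008 when notes is null
        notesParam.Value = (object)notes ?? DBNull.Value;
        command.Parameters.Add(notesParam);

        DbParameter idParam = command.CreateParameter();
        idParam.ParameterName = "id";
        idParam.Value = id;
        command.Parameters.Add(idParam);

        command.ExecuteNonQuery();
    }
}

One more wrinkle when targeting both databases: the parameter prefix itself differs (:name for Oracle versus @name for SQL Server), so SQL that is built on the fly has to account for that as well.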

And a couple of years had passed since my last Oracle post ;)

The Cost of Migrating from VB6

Recently, a question regarding the cost associated with migrating from VB6 to VB.NET was asked by one of our clients. Since that is a question whose answer has a broader appeal than just to the asker, I thought I would replicate my response here.

It should be stated up front, however, that I don't normally recommend a migration from VB6 to .NET. This isn't to say that there aren't ways to benefit from the move. It's just that a straight migration typically won't see any of those benefits. Re-architecting an application is generally the only way to get those improvements and a straight migration doesn't accomplish this. And if you already have a working VB6 application, there is little to be gained by creating a VB.NET application that does exactly the same thing.

Keep in mind that I said "little", not "nothing". While the benefits are greater for redesigning, there are still times where a migration is the right choice. Which is the reason why the question of cost does need to be addressed.

There are a number of factors that come into play when trying to determine the cost of just migrating a VB6 application to VB.NET. Let me provide an admittedly incomplete list.

Code Migration

Naturally, the first thing to consider is moving the code from VB6 to .NET. While there are migration wizards available, both from Microsoft and third parties, it is important to realize that there is no way to simply 'wizard' a migration. While the syntax of VB6 and VB.NET is similar, many concepts are different, and no wizard will take a VB6 application and create a VB.NET application that is designed in the most effective manner for .NET. So while you can get an application to a compilable state quite quickly, it won't be taking advantage of many of the features of .NET that can improve developer productivity. This is one of the reasons that many companies consider rewriting VB6 applications instead of just migrating them.

That having been said, it is certainly faster to migrate an application than it is to rewrite it. An average developer can produce 500-1000 lines of tested and deployable code in a month. However, that same developer can migrate 35,000 to 40,000 lines of code a month. So to calculate the raw cost per line of a migration, figure out how much you pay an average developer in a month and divide by 35,000. For example, a developer who costs $10,000 a month works out to somewhere between 25 and 30 cents per migrated line.

Developer Training

Of course, migrating the code is only part of the associated cost. Developers have to be retrained to use VB.NET. A typical VB6 developer will take about six months to regain the productivity level in VB.NET that they had in VB6. A junior developer might take eight months, while a senior developer will take around four months.

Part of the process of getting people up to speed will be training. Depending on the technologies being used, anywhere from 10-20 days of in-class training will be needed. A typical breakdown of the topics covered would be 3-8 days of .NET Framework training, 3-5 days of ASP.NET development, 1-2 days of testing, and 4-5 days of advanced programming concepts (WCF, SharePoint, WPF, etc.).

While this might seem like an expensive process, there are hidden costs and lost productivity associated with trying to get developers up to speed 'on the cheap'. There is too much in .NET for a single class to provide all of the necessary information in sufficient depth to be able to use it effectively. The problem is that some organizations (and some developers) will pretend that a 5-day course on .NET is good enough. The developer will end up spending the next 12 months looking up how to do the common tasks that weren't covered in those 5 days, and will end up making design decisions that, had they had the correct knowledge at the time, they would not have made. Both company and developer can spend years trying to correct the bad choices made through inexperience.

Preparing the VB6 Application

There are a number of common issues that arise during the migration process, issues that can't be addressed by a wizard. These issues generate the large majority of problems that are found when migrating, and include items such as:

  • The default property is no longer used
  • The property/method is not found in VB.NET
  • The property/method is found, but has a slightly different behavior
  • COM functionality has changed

If these issues are addressed before the migration (that is, in the VB6 application), it can help speed up the migration process. Each of these issues actually results in a cascade of problems (on the order of 5 VB.NET problems for each instance of a VB6 problem) in the migrated application, so it is worthwhile to spend some time 'fixing' the VB6 application in order to get it ready for migration.

While there are other considerations involved (is the new application to be web-enabled? is there integration with other legacy applications?, etc.), these items are the largest sources of cost associated with migrating from VB6 to .NET.

Looking for Mr (or Ms) GoodDeveloper

Business has been booming of late at ObjectSharp. I don't know whether it's the weather or the business cycle, but our recent company barbeque had more new faces than I've seen in many years. And we haven't lost any of the old faces either.

And yet it doesn't seem to end. At the moment, we're looking to add some consultants to our team. Specifically, we have the need for someone with Windows Forms experience, either in creating commercial-grade user interfaces on their own or with the CAB application block.

If you (or someone you know) has that experience and is looking to join a fun team of top-notch developers, drop me your (or your friend's) resume. You'll get a chance to work on projects that use cutting edge technology. You'll learn about (and sometimes use) technologies that aren't yet available. And, if you have the interest, we have six MVPs on staff to help you get your own designation.

If you don't fit into this skill set, fret not. There will be others coming along in the very near future. Just keep your eye tuned to this blog. 

How geeks pass the time

As I mentioned earlier, I was at the Visual Studio/SQL Server/BizTalk product launch in Ottawa yesterday. I was lucky enough to be included as one of the experts on hand, and I love getting a chance to talk to people who are just getting into .NET 2.0. I have been working with it on a daily basis for more than 6 months, to the point where I almost forget what VS 2003 is like. Answering questions helps put my world back into perspective.

But it's not all talk at this sort of event. There is a fair bit of down time while most of the people are in sessions and there is no one to ask us questions. And when a bunch of geeks get bored, you know the results are not going to be pretty.  One of my co-experts, Richard Lander, presented the following challenge: http://hoser.lander.ca/PermaLink,guid,20c75894-5947-4a62-a9c6-01b14516ecf8.aspx

Make sure that you give the code a try. I can pretty much guarantee that the number of people who guess correctly will be incredibly small. After all, Richard is on the CLR team (and smart to boot) and he wasn't jumping up and down with the answer. Even after seeing the results, those of us in the room were scratching our heads looking for the reason. So give it your best shot and let me know what you think and why.

Toronto Code Camp Registration Opens

The rumours have been swirling. Now the truth is out.

On Sat. Jan 14, there will be a Toronto Code Camp. You can register/find more information/hang out at http://www.torontocodecamp.com/

If you're a developer looking for in-depth content from the people who know (that would be other developers), then the code camp is the place to be. If you're a developer that has in-depth content that other developers could use, this is you're chance to shine. Regardless, it will be a blast. Clear your schedule now to be there.

The Dual Life of Code Behind

When you create an ASP.NET page using Visual Studio .NET, the default processing model is to use code-behind (the basics of which I described here). One of the more interesting aspects of code-behind is that you can specify the code for the code-behind assembly using two different techniques. The first, and most commonly used, is to build an assembly and deploy it to the bin directory on the web server. The second is to specify the file containing the source code in the Page directive for the ASPX file.

Compiled Assembly

<%@ Page Language="C#" Inherits="ObjectSharp.WebPageClass" %>

Source Code

<%@ Page Language="C#" Src="WebPageClass.cs" Inherits="ObjectSharp.WebPageClass" %>
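In either case, the class being referenced is an ordinary page class. As a minimal sketch (the namespace and class names match the directives above; the body is purely illustrative):

using System;
using System.Web.UI;

namespace ObjectSharp
{
    // The class named in the Inherits attribute must derive from Page.
    public class WebPageClass : Page
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            Response.Write("Hello from the code-behind class");
        }
    }
}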

There are pros and cons to both of these approaches. Functionally, they are equivalent: when the first request is made to the page, the source code file is compiled and the resulting assembly is loaded into memory. For every subsequent request, no compilation is required. This also means that there is no performance penalty for deploying source code instead of a compiled assembly.

The biggest downside of the source code technique is exactly that, however: the code file needs to be deployed onto the web server. This, naturally, has the potential to be a security problem. A deployed assembly doesn't have quite the same exposure, if only because it can be deployed into the Global Assembly Cache instead of directly into the application's virtual directory.

While it might seem that the security risk tips the scales entirely towards compiled assemblies, that isn't true. The problem with compiled assemblies has to do with updates to the web site. When a new version of a compiled assembly is deployed to the web server, IIS is smart enough to detect the change. The current web application is stopped and restarted so that the modified assembly can be loaded. Unfortunately, the stopping and starting of the web application means that every Application and, more importantly, Session variable is discarded. Depending on how the web application has been designed, this can be a significant problem.

Source code deployment doesn’t suffer from the same problem.  As with compiled assemblies, IIS monitors the source code files, so that when an update occurs, a recompilation takes place.  So the updates do get activated immediately.  The difference is that the web application does not have to be stopped and started in order to get the changes in place.

Choices, choices. The trick to ASP.NET, as with almost any discipline, is to understand not only the choices but when each can and should be used. This not only helps you design better web applications, but also helps explain those nagging times when the web application seems to restart for no apparent reason.