Next Generation Developer Training

 
I've been involved (in some manner) in the software developer training business for over 10 years now. Over the past 3 years, however, I've really been questioning the value and purpose of classroom training for software developers. So has Don Box. The internet has had a lot to do with that, I think, and the number of developers taking a week off work to sit in on a class has dropped in recent years. There was a buzz about e-learning for a while - but it hasn't really gone mainstream - and you hear about blended learning now too.
 
Vendor-based classroom training typically amounts to not much more than reference manuals. A component is introduced, a few demos or scenarios show how you can use it - and a lab follows. About 80% of what I see in these classes I could find on Google. And the best part about Google is that I can find it when I need it... just in time, on the job. After I learn something on Google, I get to use it in a real-life scenario, so retention is pretty high that way.
 
Classroom training has the advantage of taking you outside of your typical day (usually for a week) and forcing you to sit and spend some quality time with a new technology on a grand scale. The problem with googling for small bits of information is that you miss the bigger picture and a full architectural understanding of how best to accomplish something. The instructor is an important part of the equation and can make the difference between a good class and a great class. But the problem with traditional training remains: they are really just showing you how to swing their hammer. There is only a small amount of leeway for an instructor to add extra value above and beyond the curriculum. The good ones do, but there is never enough time.
 
Several months ago we took a hard look at what people really needed and what kind of value we could bring to bear above and beyond what people could learn from reading the online help or googling. That extra value is, of course, the experience of the instructor and the resulting set of best practices... stuff that you rarely find in any book.
 
The problem, of course, with relying on an instructor to make the difference is that sometimes they don't. And sometimes one instructor's experiences are different from another's. You end up with a very inconsistent delivery.
 
So we decided to create new courses based primarily around the best practices captured from the experiences of several developers. We still cover some fundamental tools & techniques but quickly move beyond that into the best practices for applying them. The idea is to have students spend less time on things they can learn on their own. How often do you get to spend a week with an expert who has been using a new technology for a few years? The goal is to maximize the value of that week.
 
We haven't relied on just our own experiences either. We've decided to lean heavily on the community in this regard - in particular, the content coming out of the Microsoft Patterns & Practices group. The culmination of all this work was the first delivery of our new "Best Practices" courseware a couple of weeks ago. It was also John Lam's first course with ObjectSharp. I had the opportunity to talk to a few students, including a couple of our own instructors who sat in on the course, and I even managed to drop in for about 30 minutes on the last day.
 
The comments on the evals were great too. Our evals are always good, but these were awesome: "The most professionally run course I have ever taken." "The best course I've ever taken." Our salesperson told me that she even had a student ask in the middle of the week if we were going to be handing out evals, because he wanted to make sure he had an opportunity to comment on how great the course was. I'm really proud of what we accomplished, but I'm even happier that we've touched a nerve with our customers and found a way to maximize the value they get for taking a full week out of their lives. I can't wait until I get to teach one of these new courses.

Risk: It's a four-letter word

Risk is bad. But it doesn't have to kill you if you acknowledge, plan for, and manage it. The most important part of risk management is to surface risks as early in your project as you can, so their evil consequences can be avoided. Having risks show up the day before a delivery date (or later) is really, really bad.

Both the Rational Unified Process and the Microsoft Solutions Framework (MSF) do a good job of addressing what is perhaps the most important project management practice. I recommend that clients make risk management a part of their team meetings - weekly, if not more often. As a team, we need to identify, analyze, and prioritize risks so that we can plan to deal with them effectively.

Part of identifying and analyzing a risk is accurately assessing its consequences should it happen and - while this might seem silly - writing an accurate description of how we will know the risk has turned into a problem. That may be a drop-dead date, or some other trigger condition.

A good way to prioritize risks (using MSF) is to rank the impact of each risk should it actually happen. Combine that with the probability of the risk occurring, multiply them, and you get a probable impact - in MSF terms, exposure. Ranking by exposure will help you quickly identify which risks you should spend resources on trying to mitigate.
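To make that concrete, here's a minimal sketch in Python of an exposure-ranked risk list. The Risk record, its field names, and the 1-10 impact scale are my own illustration - MSF doesn't prescribe them - but the exposure = probability x impact calculation is straight out of the discipline.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    condition: str      # what we are worried about
    consequence: str    # what it costs us if it happens
    trigger: str        # how we know it has turned into a problem
    impact: int         # severity if it occurs, say 1 (low) to 10 (high)
    probability: float  # chance of it occurring, 0.0 to 1.0

    @property
    def exposure(self) -> float:
        # MSF: exposure = probability x impact
        return self.probability * self.impact

risks = [
    Risk("Vendor API is still in beta", "Integration work is blocked",
         "No release build by the code-freeze date", impact=8, probability=0.3),
    Risk("Only one developer knows the billing module", "Schedule slips badly",
         "Resignation letter lands on my desk", impact=9, probability=0.1),
    Risk("Test environment is shared with another team", "Flaky tests mask real bugs",
         "Two unreproducible failures in one week", impact=4, probability=0.5),
]

# Rank by exposure and spend mitigation effort from the top of the list down.
for risk in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"{risk.exposure:4.1f}  {risk.condition}")
```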

All of this is described in more detail in the MSF Risk Management Discipline v1.1 PDF.

You can also download a couple of nice spreadsheets as part of the MSF Sample Project Lifecycle Deliverables, which include a huge array of other MSF-related documents. But I recommend starting with the Simple Risk Assessment Tool.xls at the very least.

Data Driven Development

So we have Test Driven Development, and Model Driven Development or Design by Contract (similar perspectives). But in the past, I've been a fan of Data Driven Development. This is a technique I haven't had the pleasure of using recently... because it relies on building new applications with new databases.

What is this technique, you ask? Well, for me it is designing the data model first. In the early days of client/server, PowerBuilder and ERwin were my tools of choice. New applications. New databases. My design process (and that of many of my associates) was not so much to design a database as to document the data that existed in the organization - and to do that in third normal form. ERwin still stands as one of the best modeling tools ever because it actually made the job of coding up a database schema easier and faster than any alternative. I could also use my model throughout the entire lifecycle since it did an excellent job at full round-trip engineering/synchronization.
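For anyone who hasn't worked data-model-first, here's a rough sketch of the kind of artifact that process produces - a tiny third-normal-form model where every non-key column depends on the key, the whole key, and nothing but the key. The tables and columns are hypothetical, and I'm using Python with SQLite only to make the schema runnable; back then this would have been an ERwin diagram generating DDL for Oracle or SQL Server.

```python
import sqlite3

# Repeating groups (order lines) and lookup facts (product price)
# each live in their own table, keyed independently - that's the 3NF part.
ddl = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE product (
    product_id  INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    unit_price  NUMERIC NOT NULL
);
CREATE TABLE sales_order (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    ordered_on  DATE NOT NULL
);
CREATE TABLE order_line (
    order_id    INTEGER NOT NULL REFERENCES sales_order(order_id),
    line_no     INTEGER NOT NULL,
    product_id  INTEGER NOT NULL REFERENCES product(product_id),
    quantity    INTEGER NOT NULL,
    PRIMARY KEY (order_id, line_no)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)  # the schema itself is the design artifact
```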

One of the cool features of PowerBuilder was the ability to annotate your database schema with UI hints. So you could say that a given column in your database should, by default, be shown as a checkbox, and that checked should be saved as "true" and unchecked as "false" - or whatever weird thing your DBA said it had to store. Whenever you designed a screen with that column, bam, you'd have it the way you'd expect - as a checkbox. The downside of PowerBuilder's DataWindows, of course, was that the data store/entity/container was pretty tightly tied to your database, and they made no attempt to hide that fact. But boy, productivity was really high - although I was producing tightly coupled, loosely typed code :( .NET lets me build better code now, but productivity is still lacking.
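To give a flavour of the idea, here's a little sketch in Python. The dictionary layout and the render function are my own invention, not PowerBuilder's API - the point is just that the schema metadata, not the screen designer, decides both the widget and the value mapping.

```python
# Hypothetical schema annotations: each column carries a default edit
# style and a mapping between what the UI shows and what the DB stores.
ui_hints = {
    "customer.is_active": {
        "widget": "checkbox",
        "checked_value": "true",    # or whatever weird thing the DBA requires
        "unchecked_value": "false",
    },
    "customer.region": {
        "widget": "dropdown",
        "choices": ["East", "West", "North", "South"],
    },
}

def render(column: str, db_value: str) -> str:
    """Pick a widget and its display state from the column's annotations."""
    hint = ui_hints.get(column, {"widget": "textbox"})
    if hint["widget"] == "checkbox":
        checked = db_value == hint["checked_value"]
        return f"[{'x' if checked else ' '}] {column}"
    return f"{column}: {db_value}"

print(render("customer.is_active", "true"))  # -> [x] customer.is_active
print(render("customer.region", "West"))     # -> customer.region: West
```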

At TechEd a couple of weeks ago, I stopped by the DeKlarit booth for a demo of their product by their lead architect, Andres Aguiar. I was happy to see a tool that builds upon the Data Driven Development process. Of course, you don't have to start with an empty database, but this tool does an excellent job of making your job easy when starting from scratch. Andres promised to send me an eval so I can play with it some more and see how it works with existing databases, so stay tuned. I could easily see this tool paying for itself within a couple of weeks.

As for ERwin, I'm still a fan, although it really hasn't changed much in the past 10 years. I remember that my first copy fit on a single floppy. So did the 200-table model I created with it. At the time I was using LBMS System Designer, which stored my model in some kind of 10 MB black hole and took 10 minutes to generate a schema. When I first installed ERwin, I had it installed, had reverse engineered my LBMS model, and had forward engineered it from Oracle to SQL Server, all inside of 10 minutes. I couldn't believe the schema generation took 20 seconds compared to LBMS's 10 minutes.

SOA Challenges: Entity Aggregation

Ramkumar Kothandaraman has a good article just released on MSDN discussing SOA Challenges: Entity Aggregation. "Aggregation" is a much better name than "composable entities," since its definition implies that the property set of an entity grows as more child entities are merged into it. This also implies that you need a mapping layer and conflict resolution to resolve duplicate property names - or just to rename them, for that matter.

This is becoming an important technique for passing XML documents up a stack of web services, with each one adding its own value to the entity - or aggregating in a master/slave hierarchy topology. Either way, one of the subtle things about entity aggregation is that you can also think of it as a lightweight form of multiple inheritance for the properties of your domain objects. Is that useful, or am I just bent?
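Here's a minimal sketch of the idea in Python, using plain dicts to stand in for the XML documents. The aggregate function and its qualify-by-source conflict policy are my own illustration of the mapping/conflict-resolution layer, not anything from the MSDN article.

```python
# Each service in the stack merges its own properties into the entity.
# On a duplicate property name with a different value, we rename the
# incoming property by qualifying it with its source - one simple
# conflict-resolution policy; a real mapping layer would be configurable.
def aggregate(entity: dict, addition: dict, source: str) -> dict:
    merged = dict(entity)
    for name, value in addition.items():
        if name in merged and merged[name] != value:
            merged[f"{source}.{name}"] = value
        else:
            merged[name] = value
    return merged

customer = {"id": 42, "name": "Contoso", "status": "active"}

# The billing service adds its view, including a conflicting "status".
customer = aggregate(customer, {"balance": 120.50, "status": "overdue"}, "billing")

# The CRM service adds more properties as the document moves up the stack.
customer = aggregate(customer, {"rep": "A. Smith"}, "crm")

print(customer)
# {'id': 42, 'name': 'Contoso', 'status': 'active',
#  'balance': 120.5, 'billing.status': 'overdue', 'rep': 'A. Smith'}
```

The "multiple inheritance" flavour shows up in that final dict: the customer entity has accumulated properties from every parent document that touched it on the way up.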