Getting the App Config Path

Continuing my weekend Internet cruising, I also came across a piece of information that I had wondered about but never found: how to get the path to the .config file being used by a .NET application. Apparently, this information is available through the AppDomain. Specifically, the following call retrieves said path:

AppDomain.CurrentDomain.GetData("APP_CONFIG_FILE")
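
A quick sketch of how you might use it (the cast and the console output are just for illustration):

using System;

class ConfigPathDemo
{
    static void Main()
    {
        // GetData returns an object; for this key the value is the full path to the .config file in use.
        string configPath = (string)AppDomain.CurrentDomain.GetData("APP_CONFIG_FILE");
        Console.WriteLine("Config file: " + configPath);

        // The same path is also exposed in a strongly typed way via the AppDomain's setup information.
        Console.WriteLine(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
    }
}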

The things that are available on the Internet if you know where to find them.  Or, in my case, just randomly stumble across them.

Legal Analysis from an Easy Chair

So I was cruising around the Internet, as I frequently do with my spare weekend time (as sad as that is), and I came across this post by John Dvorak.  He refers to a BBC article that describes how Microsoft is clamping down on sites using peer-to-peer software to distribute XP's SP2.  Specifically, he is of the opinion that this is "wrong on so many levels".  In fact, his sole opinion is pretty much that quote.  No details about the levels on which he believes the decision to be wrong. I guess he thinks they are 'obvious' to any 'reasonable' person.  Not to me.  In fact, to me, the decision seems quite reasonable. And, while he was sniping at Microsoft, perhaps Mr. Dvorak might mention the levels on which the decision is correct?  Does he really believe that there might not be a legitimate rationale for the choice?

This is actually an on-going pet peeve for me.  There are a group of people, many of whom are journalists, who believe that any legal action taken by a large corporation (say, for example, Microsoft) is an example of all that is wrong with capitalism. In this instance, all the Downhill Battle people were doing was trying to assist Microsoft in distributing the service pack. In another, a Canadian teenager, Mike Rowe, was served with a request to transfer ownership of the domain "mikerowesoft.com" to Microsoft.  In both cases, the extensive legal team at Microsoft mobilised to squash the dreams and ambitions of young and impressionable technologists.

This perspective annoys me.  It gives no weight to the possibility that there might be a rationale beyond world dominance for Microsoft's “heavy-handed” approach.  For example, it is a legal requirement that the holder of a trademark actively defend that trademark against any unlicensed use that it is aware of.  In other words, Microsoft had no choice but to defend the term "Microsoft", wherever it might be used.  If a group of baby seals started up a company called My Crow Soft, then regardless of the clubbing analogies that would appear in the press, Microsoft would still have to go after them.  They have no choice.  Let me repeat this to make sure that there is no misunderstanding: they have no choice.  Is that ever reported in the mainstream press?  Or in the majority of the computer trades?  No.  It's too easy to portray Microsoft as a juggernaut bent on crushing everything in its sight.

The SP2 peer-to-peer download situation is an offshoot of this same problem.  Let's say that someone were to somehow add a virus to an SP2 update that resides on a peer-to-peer platform.  That virus-laden update would then be installed, without a second thought, by anyone connected to that peer.  At some point in the future, the bad things related to that virus would start to happen and be covered by the press.  Because the virus came with the update, SP2 would be blamed, which, in turn, would slow the acceptance of an important upgrade.

So what is the solution?  In my ideal world, commentators would be much less biased and would provide a more thorough analysis of the issues they cover. Something a little more substantial than 'wrong on so many levels'. But unfortunately, it is easier, and much more popular, to take the 'bash-Microsoft' role.

Guess there's no chance this rant will be slash-dotted.

Smart Client seminar

We have scheduled the second in our series of Architect's Breakfast seminars.  It takes place on Sept 23 in downtown Toronto and will be digging into the architectural details surrounding Smart Client technology.  If you are trying to resolve the conflict between rich client experience and ease of deployment, our seminar is the place to be. For more information, including the wheres and hows of registration, click here.

Pair Programming for the Travel Challenged

I first heard of Facetop through Chris Sells' blog here.  My first thought was "genius".  I think it has the potential to make pair programming possible across distances, something that could make my life as a consultant easier.  And I'm always looking for that.  My second thought coincided with Chris', that being "I want it". 

DevCan 2004

I'm co-chairing two tracks of DevCan, coming up in Sept/Oct in Vancouver/Toronto (exact dates to follow) - see www.devcan.com for more.

I'm doing the architect track and the web track. If you have ideas for content you'd like to see, or have a topic you'd like to present in either of those categories, send them to me. You don't have to be Canadian, but it helps :)

New Service Packs for 1.0 & 1.1 .NET Frameworks imminent

.NET Framework 1.0 SP3 and 1.1 SP1 are in tech preview at the moment. Had a nagging bug and want to know if it's fixed?

The contents & links to Tech Preview Downloads can be found here:

http://msdn.microsoft.com/netframework/downloads/updates/sptechpreview/default.aspx

 

Versions in Services

One of my colleagues, John Lam, has been starting down the services road lately. We were kicking around some of the problems that can be encountered by SOA developers and John made an interesting comment. He said that the problems we were discussing had already been solved...in COM/DCOM.

This made me sit up and think. I'd been wrestling with how best to deal with versioning in services for a while now. So I asked him: how does COM handle versions? The answer I got surprised me. It doesn't. According to Chapter 3 of Essential COM (thank you, Mr. Box), new versions of COM interfaces are frequently given a different CLSID. This allows "clients to indicate explicitly which version is required." Even if the new COM interface is an extension of the old one. In fact, there is a function (CoTreatAsClass) whose purpose is to route instantiation requests from the old CLSID to the new one.

In other words, there is no real 'versioning' in COM. Each version is a class unto itself. It just happens to have the same interface or an extension of same.
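
For the interop-minded, here is a rough C# illustration of the idea (the interface names and GUIDs are made up, purely for illustration): the "new version" is really a brand new interface with its own identity that just happens to extend the old one.

using System;
using System.Runtime.InteropServices;

// The original interface -- its identity (the Guid) is fixed forever. (Illustrative GUID only.)
[ComVisible(true)]
[Guid("11111111-1111-1111-1111-111111111111")]
public interface ICalculator
{
    int Add(int a, int b);
}

// The "next version" is not a version at all; it is a new interface with a new identity
// that happens to extend the old contract. Old clients keep asking for ICalculator,
// new clients ask for ICalculator2.
[ComVisible(true)]
[Guid("22222222-2222-2222-2222-222222222222")]
public interface ICalculator2 : ICalculator
{
    int Multiply(int a, int b);
}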

So let's apply this to SOA. In the .NET world, an interface is roughly the equivalent of an ASMX file. So to create a different version of a service, create a different ASMX file, copying/changing/adding the web methods as needed.

The real challenge is ensuring that the second tenet of service orientation (services are autonomous, something I've blogged about here and here) is adhered to. Each 'version' of the service must have a certain level of autonomy over the underlying data. It is important (actually, critical) to eliminate any 'side effects' when designing the different versions. If a client executing the new version of the service causes the old version of the service to break in any way, shape or form, then autonomy is violated and you will have trouble on your hands.

So there you have it. No versions of services. Instead, just create a new service that implements the modified interface. If nothing else, this is a good reason to implement the web service class itself using the Facade design pattern, thus keeping the content of the class to a minimum.
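
Here is a minimal sketch of how that might look (the class, method and facade names are mine, purely illustrative): each "version" is its own .asmx endpoint, and each web service class is a thin facade over the shared logic.

using System.Web.Services;

// The shared underlying logic. Both endpoints below are thin facades over this class,
// so neither "version" can break the other as long as this logic keeps honouring both contracts.
public static class OrderFacade
{
    public static string GetStatus(int orderId)
    {
        return "Shipped";
    }

    public static OrderStatusInfo GetStatusDetailed(int orderId)
    {
        OrderStatusInfo info = new OrderStatusInfo();
        info.Status = "Shipped";
        info.TrackingNumber = "n/a";
        return info;
    }
}

public class OrderStatusInfo
{
    public string Status;
    public string TrackingNumber;
}

// OrderService.asmx -- the original service. It never changes once clients depend on it.
public class OrderService : WebService
{
    [WebMethod]
    public string GetOrderStatus(int orderId)
    {
        return OrderFacade.GetStatus(orderId);
    }
}

// OrderServiceV2.asmx -- not a new "version" of the old service, but a new service
// with its own endpoint and its own (extended) contract.
public class OrderServiceV2 : WebService
{
    [WebMethod]
    public OrderStatusInfo GetOrderStatus(int orderId)
    {
        return OrderFacade.GetStatusDetailed(orderId);
    }
}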

Next Generation Developer Training

 
I've been (in some manner) involved in the software developer training business for over 10 years now. Over the past 3 years, however, I've really been questioning the value and purpose of classroom training for software developers. So has Don Box. The Internet has had a lot to do with that, I think, and the number of developers taking a week off work to sit in on a class has dropped in recent years. There was a buzz about e-learning for a while - but it hasn't really gone mainstream - and you hear about blended learning now too.
 
Vendor-based classroom training typically amounts to not much more than reference manuals. A component is introduced, followed by a few demos or scenarios on how you can use it, and then a lab. About 80% of what I see in these classes I could find on Google. And the best part about Google is that I can find it when I need it... just in time, on the job. After I learn something on Google, I get to use it in a real-life scenario, so absorption is pretty high that way.
 
Classroom training has the advantage of taking you outside of your typical day (usually for a week) and forcing you to sit and spend some quality time with a new technology on a grand scale. The problem with googling for small bits of information is that you miss the bigger picture and a full architectural understanding of how best to accomplish something. The instructor is an important part and can make the difference between a good class and a great class. But the problem with traditional training remains that they are really just showing you how to swing their hammer. There is only a small amount of leeway for an instructor to add extra value above and beyond the curriculum. The good ones do, but there is never enough time.
 
Several months ago we took a hard look at what people really needed and what kind of value we could bring to bear above and beyond what people could learn from reading the online help or googling. That extra value is, of course, the experience of the instructor and the resulting set of best practices... stuff that you rarely find in any book.
 
The problem, of course, with relying on an instructor to make the difference is that sometimes they don't. And sometimes their experiences are different from others'. You end up with a very inconsistent delivery.
 
So we decided to create new courses based primarily around the best practices captured from the experiences of several developers. We still cover some fundamental tools & techniques but quickly move beyond that into the best practices for applying them. The idea is to have students spend less time on things they can learn on their own time. How often do you get to spend a week with an expert who has been using a new technology for a few years? The idea is to maximize the value of that week.
 
We haven't relied on just our own experiences either. We've decided to lean heavily on the community in this regard, in particular, the content coming out of the MS Patterns and Practices Group. The culmination of all this work was the first delivery of our new courseware based on "Best Practices" a couple of weeks ago. It was also John Lam's first course with ObjectSharp. I had the opportunity to talk to a few students, including a couple of our own instructors who sat in on the course, and I even managed to drop in for about 30 minutes on the last day.
 
The comments are great on the evals too. Our evals are always good, but these evals were awesome. "The most professionally run course I have ever taken." "The best course I've ever taken". Our salesperson told me that she even had a student ask in the middle of the week if we were going to be handing out evals because he wanted to make sure he had an opportunity to comment on how great the course was. I'm really proud of what we accomplished but I'm even happier that we've touched a nerve with our customers and found a way to maximize the value to them for taking a full week out of their lives. I can't wait until I get to teach one of these new courses.

Risk: It's a 4 letter word

Risk is bad. But it doesn't have to kill you if you acknowledge it, plan for it and manage it. The most important part of risk management is to head off the evil consequences as early as you can in your project. Having risks show up the day before a delivery date (or later) is really, really bad.

Both the Rational Unified Process and the Microsoft Solutions Framework do a good job of addressing what is perhaps one of the most important project management practices. I recommend that clients make risk management a part of their team meetings - weekly, if not more often. As a team, we need to identify, analyze and prioritize risks so that we can plan to deal with them effectively.

Part of identifying and analyzing risks is to accurately assess the consequences of each risk should it happen and, while this might seem silly, to write down an accurate description of how we will know the risk has turned into a problem. That may be a drop-dead date, or some other trigger.

A good way to prioritize risks (using MSF) is to rank the impact of each risk should it actually happen. Combine that with the probability of the risk occurring - multiply the two and you get a probable impact, or in MSF terms, Exposure. Ranking by Exposure will help you quickly identify which risks you should spend some resources on trying to mitigate.
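
To make the arithmetic concrete, here is a quick C# sketch (the risk names and numbers are made up, not taken from MSF) that ranks a handful of risks by exposure:

using System;
using System.Collections.Generic;

class Risk
{
    public string Name;
    public double Probability; // 0.0 to 1.0 -- how likely the risk is to occur
    public double Impact;      // 1 to 10    -- how bad it is if it does occur

    // MSF-style Exposure: probability multiplied by impact.
    public double Exposure
    {
        get { return Probability * Impact; }
    }
}

class RiskRanking
{
    static void Main()
    {
        List<Risk> risks = new List<Risk>();
        risks.Add(new Risk { Name = "Key developer leaves", Probability = 0.2, Impact = 9 });
        risks.Add(new Risk { Name = "Third-party API slips", Probability = 0.6, Impact = 5 });
        risks.Add(new Risk { Name = "Requirements keep churning", Probability = 0.8, Impact = 4 });

        // Rank by exposure, highest first -- these are the risks worth spending mitigation effort on.
        risks.Sort(delegate(Risk a, Risk b) { return b.Exposure.CompareTo(a.Exposure); });

        foreach (Risk r in risks)
        {
            Console.WriteLine("{0}: exposure {1:F1}", r.Name, r.Exposure);
        }
    }
}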

All of this is described in more detail in the MSF Risk Management Discipline v. 1.1 pdf.

You can also download a couple of nice spreadsheets as part of the MSF Sample Project Lifecycle Deliverables which includes a huge array of other types of documents related to MSF. But I recommend starting with the Simple Risk Assessment Tool.xls at the very least.

BizTalk 2004: New Training Course for Developers

We now have a course available for BizTalk 2004 in our Toronto office. I get so many requests about BTS 2004 these days. Matt Meleski, who is our BTS guru, is teaching the first one on July 5th. Matt's been using BTS 2004 right through the beta.  BTS has improved dramatically over 2002; it's quite amazing. I hope I have a chance to sit in on part of it.