Killer Tablet PC Application for Software Designers?

For those familiar with my past life, you know that I'm a supporter of model-driven development - as long as it helps me develop faster and doesn't constrain me throughout the entire SDLC.

A couple of years ago, I stumbled onto DENIM and was intrigued, but didn't really take it up. I've been thinking about getting a Tablet PC but have really been waiting for the killer app. I thought it might have been OneNote, but I'm not too sure about that yet. I used OneNote through the beta, lost all of my data on one occasion, and have been afraid of it ever since. I have a pretty low acceptance factor (LAF) for applications that hide the location of their data. Maybe DENIM is the killer app I'm looking for.

Do yourself a favour and watch the video. DENIM seems complicated enough that my mom won't be able to do anything meaningful with it. But I could easily see myself getting my mother to watch me draw a prototype application without being bored to tears.

Stored procedures vs Dynamic SQL

If you're locked in a battle with a DBA over the 'benefits' of using stored procedures, check out this blog entry by Frans Bouma. It's a cogent argument in favor of dynamic SQL, combined with the passion of his beliefs. Quite interesting reading, even if it goes too far in the other direction. My own personal belief is that there are instances where SPs are better than dynamic SQL in terms of overall performance (such as when a complex calculation is being performed on a large set of data across a relatively slow network connection).
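For what it's worth, “dynamic SQL” doesn't have to mean concatenated strings. Here's a minimal ADO.NET sketch of the two styles (table, column, and procedure names are made up; assumes an open SqlConnection):

Imports System.Data
Imports System.Data.SqlClient

Module SqlStyles
    Sub Demo(ByVal connection As SqlConnection)
        ' Dynamic SQL, but parameterized - no string concatenation needed.
        Dim dynamicCmd As New SqlCommand( _
            "SELECT Total FROM Orders WHERE OrderId = @id", connection)
        dynamicCmd.Parameters.Add("@id", SqlDbType.Int).Value = 42

        ' The stored procedure equivalent.
        Dim procCmd As New SqlCommand("GetOrderTotal", connection)
        procCmd.CommandType = CommandType.StoredProcedure
        procCmd.Parameters.Add("@id", SqlDbType.Int).Value = 42
    End Sub
End Module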

EIF Presentation at CTT User Group Meeting

For all of you fans out there, I will be giving a presentation on the Enterprise Instrumentation Framework at 6:30 on November 26. If you're interested in reading the abstract or registering for it, visit the CTTDNUG web site.

Share "source" files between projects.

By default, each project in your solution has an AssemblyInfo.?? file. Amongst other things, it contains an AssemblyVersion attribute that will end up stamping the dll or exe with its version number. This is the version number used by the CLR to make sure that when you reference a dll, it finds the correct version.

In a solution comprised of many projects, you may want them all to share the same build number. By default, VS.NET sets this version to 1.0.* - the * meaning that the compiler will generate the build and revision numbers for you. Sometimes you build just one project, sometimes all of the projects in a solution. Some developers may even have their own solution files to work on just a subset of the projects in the master solution. In all of these cases, the auto-generated numbers quickly drift out of sync from one assembly to the next.

What I'm saying here is that you really ought to take better care (and control) of this version number. It would be nice to have all of your dll's in the solution share the same assembly version. Sure you can hard code it, but manually incrementing it then becomes tedious. The secret to this tip is that the AssemblyVersion attribute doesn't have to be in the AssemblyInfo.?? file. It can be in its own file. In fact, that file doesn't even have to physically exist in the same subdirectory as the project, thanks to “Linked” files.

So follow these steps:

  1. Remove the “AssemblyVersion” attribute from the AssemblyInfo file in each of your projects.
  2. Create a “VersionInfo.cs” (or .vb) in the root of your solution - probably one level up from your projects. It should include an AssemblyVersion attribute like the one you took out of each of your AssemblyInfo files, along with a using System.Reflection (or Imports System.Reflection), since that is the required namespace. (See the sketch after these steps.)
  3. In each of your projects, make a shortcut or link reference to this new file. To do this, right click on each project and select “Add Existing Item”. Browse to the VersionInfo.cs (or .vb) file and, instead of clicking “Open”, select “Link” from the drop-down on the Open button in the File Open dialog.
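If you're wondering what goes in that file, here's a minimal sketch of the VB flavour (the version number is made up):

' VersionInfo.vb - the one and only place the solution-wide version lives.
Imports System.Reflection

<Assembly: AssemblyVersion("1.0.42.0")>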

Now you have only one place to increment the version for your entire solution. If you are using NAnt, you can have it do this for you with a couple of simple tasks:


<version path="VersionInfo.txt" startDate="2003-10-1" buildType="monthday" prefix="assembly." />
<asminfo output="VersionInfo.cs" language="CSharp">
    <imports>
        <import name="System" />
        <import name="System.Reflection" />
    </imports>
    <attributes>
        <attribute type="AssemblyVersionAttribute" value="${assembly.version}" />
    </attributes>
</asminfo>

The first task, version, increments the build number and stores it in both the assembly.version property and the VersionInfo.txt file. The second task, asminfo, then regenerates the VersionInfo.cs file using the assembly.version property.

Putting Attributes to Use

Whether you know it or not, attributes are already part of your development life. Creating a web service using ASP.NET? Attributes are used to define the methods that are exposed. Making a call to a non-COM DLL? Attributes are used to define the entry point for the DLL, as well as the parameters that are passed. In fact, closer examination shows that attributes play a large part in many cool (the programmer’s code word for ‘advanced’) features. For this reason, as well as the possibility of putting them to use in your own environment, a deeper understanding of attributes is worth gaining (a writer’s code phrase for ‘topic’).
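To make those two sightings concrete, here's a minimal sketch (the service, method, and DLL names are all made up):

Imports System.Web.Services
Imports System.Runtime.InteropServices

Public Class OrderService
    Inherits WebService

    ' The WebMethod attribute is what exposes this method via ASP.NET.
    <WebMethod()> _
    Public Function GetStatus(ByVal orderId As Integer) As String
        Return "Shipped"
    End Function

    ' DllImport identifies the entry point (and marshalling) for a non-COM DLL.
    <DllImport("legacy.dll", EntryPoint:="CalcTotal")> _
    Private Shared Function CalcTotal(ByVal orderId As Integer) As Integer
    End Function
End Class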

Why Are Attributes Useful?

One of the starting points for a discussion about the why and when of attributes is with objects. Not a surprising starting point, given the prevalence of objects within the .NET Framework. But before believing that objects are the be-all and end-all of programming, consider the effort that a developer goes through while creating a typical object-oriented application. The classes that are part of the hierarchy lie strewn about the workspace. When the situation calls for it, the developer grabs one of the classes and inserts it into the project, then surrounds the class with the code necessary to integrate it into the application and perform all of the common functions. To a certain extent, this code is the mortar that holds the various classes together.

But think about the problems associated with this development process. The common functions already mentioned include logging the method calls, validating that the inbound parameters are correct, and ensuring that the caller is authorized to make the call. Every time a class is created, all of that supporting code needs to be included in each method.

This problem results from the fact that the common functions don’t really follow the same flow as the class hierarchy. A class hierarchy assumes that common functions move up and down the inheritance path. In other words, a Person class shares methods with an Employee class, which in turn shares methods with an HourlyEmployee class. However, the functions described in the previous paragraph apply across all of the classes in a model, not just those in the same inheritance path, which means that the necessary code might very well have to be implemented more than once.

Is there a solution within the object-oriented world?  Yes and no.  The solution requires that a couple of things come together.  First, all of the classes in the hierarchy would need to be derived from the same ancestor.  Then the procedures required by the common functions would need to be baked into the ancestor.  Finally, each of the methods that want to implement the common functions (which could very well be all of them) would call the procedures in the ancestor.  Do you find this beginning to sound like the gluing together of pieces that we’re trying to avoid in the first place?  Me too.
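To see the problem in miniature, here's a sketch of that ancestor-based approach (all names made up): every method has to remember to call the inherited plumbing by hand.

Imports System.Diagnostics

' The common function is baked into a shared ancestor...
Public MustInherit Class InstrumentedBase
    Protected Sub LogCall(ByVal methodName As String)
        Trace.WriteLine("Entering " & methodName)
    End Sub
End Class

' ...but every method still has to invoke it explicitly - the very
' mortar we were trying to avoid.
Public Class Employee
    Inherits InstrumentedBase

    Public Sub Promote()
        LogCall("Promote")
        ' actual business logic goes here
    End Sub
End Class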

Separation of Concerns

The solution, at least at a conceptual level, revolves around separating concerns. When used in this context, a ‘concern’ is a set of related functions.  Going back to the examples mentioned in the previous section, the functions that log a method call would make up one concern. The validation of inbound parameters would be a second. By keeping the implementation of the concerns separate, a number of different benefits ensue.

First of all, keeping the concerns separate increases the level of abstraction for the entire application.  In general, the functionality required by the concern will be implemented in a single class.  So while in development, the methods can be implemented and tested without regard to the other elements of the application.

The second rationale for separation is also a justification for using an object-oriented architecture.  The level of coupling when a concern is maintained in a separate class is quite low.  Only the public interface is used by the various classes, meaning that the details of the implementation are hidden.  It also means that the details can be changed at the whim…er…judgment of the developer. 

To be completely up front, a concern does not have to be a set of functions that can be applied to multiple classes.  The functions in a single class can also be categorized as a concern.  The difference, as has already been noted, is that the method logging and parameter validation apply to different classes throughout the hierarchy.  For this reason, they are known as ‘cross-cutting concerns’.

One more thing before we get into how concerns are actually put together in .NET.  In many articles on this topic, the term ‘aspect’ is used to describe what we are calling a concern.  For the most part, ‘aspect’ and ‘concern’ can be used interchangeably.  And from this, the more commonly heard term ‘aspect oriented programming’ is derived. 

Creating Concerns in .NET

Actually, it’s not the creation of a concern that is the difficult part. After all, a concern is just a collection of related functions. In .NET (as well as in pretty much every object-oriented language), that is just a fancy term for a class. The difficult part is in integrating the functionality provided by the concern into existing classes without impacting the class itself. Accomplishing this feat requires the use of application domains.

In the .NET Framework, an application domain is a logical boundary that the common language runtime (CLR) creates within a single process.  The key to the application domain concept is one of isolation.  The CLR can load different .NET applications into different domains in a single process.  Each of the applications runs not only independently of one another, but cannot directly impact one another.  They don’t directly share memory.  They each have their own process stack.  They even can load and use different versions of the same assembly.  For all intents and purposes, the applications are isolated. 
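As a quick sketch of that isolation (the worker assembly name is made up), a host can spin up a second domain, run code in it, and tear the whole thing down without disturbing the default domain:

Module DomainDemo
    Sub Main()
        ' The second domain gets its own loader and stack; unloading it
        ' discards everything it loaded, leaving the default domain untouched.
        Dim domain As AppDomain = AppDomain.CreateDomain("IsolatedWork")
        domain.ExecuteAssembly("Worker.exe")
        AppDomain.Unload(domain)
    End Sub
End Module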


It might not immediately be apparent why an application domain is required.  Even though application domains are isolated from one another, programs running in one can still invoke methods running in another.  This is accomplished through a technique not that dissimilar to .NET Remoting.  And since we take advantage of this remoting to implement a cross-cutting concern, understanding it is worth a few more words.


Figure 1 – Cross-Domain Method Invocation Flow

Figure 1 shows the flow of a call between the client class and the receiving object. The assumption built into the diagram is that the client and the recipient are in different application domains. First, a transparent proxy is created. This proxy exposes an interface identical to the recipient’s, so that the caller is kept in the dark about the ultimate location of the callee. The transparent proxy calls the real proxy, whose job it is to marshal the parameters of the method across the application domain boundary. As it turns out, before the receiving object sees the call, there are zero or more message sink classes that get called. These classes can perform a variety of functions, as we shall see shortly. The last sink in the chain is the stack builder sink, which takes the parameters and places them onto the stack before invoking the method on the receiving object. By doing this, the recipient remains as oblivious to the mechanism used to make the call as the initiator is.

A review of this flow reveals the chain of message sinks. These are classes that get called every time a cross-domain call is made, and they would seem to be the perfect place to implement our cross-cutting concern functionality. So the question becomes one of how these message sinks get created and how we tell the CLR to use them.

The answer is two-fold.  First, in order to indicate that an object should be created in a separate application domain, the recipient class needs to be inherited from ContextBoundObject.  So much for the easy part.  The second step is to create an attribute which is used to decorate the recipient class. 
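To make this concrete, here's a minimal sketch (the business class and its method are hypothetical) of a recipient class decorated with the TraceLogger attribute we're about to build:

' Inheriting from ContextBoundObject lets the CLR intercept calls, and the
' TraceLogger attribute (defined below) forces the object into its own context.
<TraceLogger()> _
Public Class CustomerService
    Inherits ContextBoundObject

    Public Function GetBalance(ByVal customerId As Integer) As Decimal
        Return 0D ' placeholder business logic
    End Function
End Class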

Context Attributes

The attribute class which decorates the recipient class is one that we create ourselves. It allows us to provide the message sinks which will ultimately implement the cross-cutting concern. The class itself must be derived from the ContextAttribute class, which has two methods that need to be overridden. First, the IsContextOK function returns a Boolean value indicating whether the current context is acceptable for this instance of the class. Since the purpose of our attribute is to force the object into its own context, this function should always return False.

Imports System.Runtime.Remoting.Activation
Imports System.Runtime.Remoting.Contexts

<AttributeUsage(AttributeTargets.Class)> _
Public Class TraceLogger
    Inherits ContextAttribute

    Public Sub New()
        ' ContextAttribute requires a name for the context property.
        MyBase.New("TraceLogger")
    End Sub

    ' Returning False tells the CLR that the current context won't do,
    ' forcing the new object into its own context.
    Public Overrides Function IsContextOK(ByVal ctx As Context, _
            ByVal ctorMsg As IConstructionCallMessage) As Boolean
        Return False
    End Function
End Class

The second method required by the ContextAttribute base class is one called GetPropertiesForNewContext. The purpose of this method is to supply properties for the new context, and it is in this method that we indicate the message sinks that are to be included in the method invocation. We do so by adding a context property class of our own, TraceLoggerProperty, which implements both IContextProperty and IContributeObjectSink.

' (This method belongs in the TraceLogger class shown above.)
Public Overrides Sub GetPropertiesForNewContext( _
        ByVal ctorMsg As IConstructionCallMessage)
    ctorMsg.ContextProperties.Add(New TraceLoggerProperty())
End Sub

' Requires Imports System.Runtime.Remoting.Messaging for IMessageSink.
Public Class TraceLoggerProperty
    Implements IContextProperty, IContributeObjectSink

    Public ReadOnly Property Name() As String Implements IContextProperty.Name
        Get
            Return "TraceLoggerProperty"
        End Get
    End Property

    Public Function IsNewContextOK(ByVal NewContext As Context) As Boolean _
        Implements IContextProperty.IsNewContextOK
        Return True
    End Function

    Public Sub Freeze(ByVal NewContext As Context) _
        Implements IContextProperty.Freeze
        Return
    End Sub

    ' This is where our message sink gets chained into the call path.
    Public Function GetObjectSink(ByVal Obj As MarshalByRefObject, _
        ByVal NextSink As IMessageSink) As IMessageSink _
        Implements IContributeObjectSink.GetObjectSink
        Return New TraceLoggerObjectSink(NextSink)
    End Function
End Class

I know it seems like a long journey, but we’re almost at the end.  The TraceLoggerObjectSink class used in the GetObjectSink method is where the cross-cutting concern function actually gets implemented.  In order to be included in the chain of message sinks, this class needs to implement the IMessageSink interface.  As well, at least one constructor in the class needs to accept the next message sink in the chain as a parameter. 

Public Class TraceLoggerObjectSink
    Implements IMessageSink

    Private fNextSink As IMessageSink

    Public Sub New(ByVal ims As IMessageSink)
        fNextSink = ims
    End Sub

    ' IMessageSink also requires the NextSink property.
    Public ReadOnly Property NextSink() As IMessageSink _
        Implements IMessageSink.NextSink
        Get
            Return fNextSink
        End Get
    End Property
End Class


The other two methods required by IMessageSink are SyncProcessMessage and AsyncProcessMessage. These methods are called when the recipient method is invoked synchronously and asynchronously, respectively. For simplicity, we’ll just focus on SyncProcessMessage. Figure 2 illustrates how SyncProcessMessage comes into play.

Figure 2 – Process Flow within SyncProcessMessage

As part of the message sink chain, the SyncProcessMessage method of this class is called. Within the method, any preprocessing of the request is performed. The method then invokes the SyncProcessMessage method of the next sink, the one provided in the constructor. In its turn, each of the message sinks performs the same logic until the recipient object is called. On the way back, the flow is reversed. From the perspective of just this one sink, the flow returns through the same procedure: after the call to SyncProcessMessage, any post-processing of the response is done and control is allowed to pass back up the chain.

Public Overloads Function SyncProcessMessage(ByVal Message As IMessage) _
        As IMessage Implements IMessageSink.SyncProcessMessage

    Dim RetValue As IMessage

    ' PreProcess and PostProcess are our own private helpers; this is
    ' where the cross-cutting concern work actually happens.
    PreProcess(Message)
    RetValue = fNextSink.SyncProcessMessage(Message)
    PostProcess(Message, RetValue)

    Return RetValue
End Function

Within the PreProcess and PostProcess methods, any manner of processing can take place. The actual name of the method, the parameter values, and the return values are all available to be viewed and modified. And it is in these methods that the cross-cutting concern functions, such as tracing, validation, and authentication, can be performed.
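As a rough sketch of what a PreProcess helper might look like (PreProcess is our own method, not part of IMessageSink; assumes Imports System.Diagnostics and System.Runtime.Remoting.Messaging), the incoming message can be cast to IMethodCallMessage to get at the call details:

Private Sub PreProcess(ByVal Message As IMessage)
    ' IMethodCallMessage exposes the method name and arguments of the call.
    Dim McM As IMethodCallMessage = CType(Message, IMethodCallMessage)
    Trace.WriteLine("Calling " & McM.MethodName)
    Dim i As Integer
    For i = 0 To McM.ArgCount - 1
        Trace.WriteLine("  arg " & i.ToString() & " = " & _
            Convert.ToString(McM.Args(i)))
    Next
End Sub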

Summary

Is the use of a ContextBoundObject and cross-domain calls the best way to implement a cross-cutting concern? Probably not. The restriction that the class must be inherited from ContextBoundObject means that this technique is not suitable for every situation. For example, if you wanted to implement method tracing for an object that was being used in COM+, you have a problem. The COM+ class needs to inherit from System.EnterpriseServices.ServicedComponent, and there is no way to have a single class inherit from multiple bases. However, for relatively straightforward situations, this technique is not only effective, but also useful. After all, being able to add functionality without altering the underlying class is always a good way of improving programmer productivity and lowering the rate of defects. Quite a noble goal in its own right.

Another O#er is blogging...

Stewart Zanolla

.NET 2.0 Whidbey Presentation in Toronto

On December 9th, Dave Lloyd and I will present an in-depth sneak peek at the next release of .NET: Visual Studio .NET 2.0 (code-named "Whidbey"). This should be a really fun presentation, but seating is pretty limited. Adam Gallant from Microsoft is going to come by and also demo Longhorn and Avalon. He's getting daily builds, so hopefully he'll show us some cool stuff that wasn't in the PDC build and is maybe even newer than some of the stuff they showed us at PDC. Hope to see some of you there.

NAntContrib "slingshot" vs. NAnt "solution"

I was trying to build the latest NAntContrib project so I could take advantage of their Slingshot task, which automagically converts a Visual Studio solution (*.sln) and projects (*.csproj - not sure about *.vbproj) into a handy-dandy NAnt *.build file, complete with all references, source inclusions, dependencies, and debug, release, and clean targets. Nice.

The only problem with this approach, of course, is that you need to run it daily if you don't want your NAnt builds to break when a new project or reference is added to your solution. Fortunately, NAntContrib exposes slingshot not only as a command line tool but also as a native NAnt task. The pain is that NAntContrib doesn't have a “stable release”, only a “nightly build”, which of course is not built against the NAnt “stable release”. So I had to throw away the NAnt stable release that I'd been using and opt for the latest nightly build of it too. After I got both to compile successfully I started to battle goofy slingshot issues like having to map web projects to my local hard drive path, avoiding spaces in my paths, and ReallyReallyReallyLongPaths. I ended up doing a subst to map a directory to a drive letter to keep things short. All this to create a build file that will get thrown away each and every night.
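For the record, the subst mapping is a one-liner at the command prompt (the path here is made up):

subst W: C:\work\really\long\path\to\the\solution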

While browsing through the new things added to NAnt since the stable release I was accustomed to, I discovered a new task: “solution”. It compiles a whole solution straight from the .sln file. No generation of a build file. My build file literally goes from hundreds if not thousands of lines to this:

<target name="Debug">
    <solution solutionfile="ObjectSharp.Framework.sln" configuration="Debug" />
</target>

This compiles our entire framework, with all 8 projects as referenced in the sln file, in the right order, with the same dependencies used by a developer working in the VS.NET IDE. What a concept: share the build files used by your build server and IDE so that there are no surprises or impedance mismatches. Such a great idea that MS is doing this with MSBuild in Whidbey. I wonder if MSBuild will have add-on tasks like NAntContrib's, for things like Visual SourceSafe, sending email, running NUnit, and executing SQL. I like my NAnt - not sure if I'll be able to break free with MSBuild.

MSDN Experimental Annotations Service uses RSS

I miss Win95's WinHelp. In particular, I was sad to see in Win98 that HTML Help had not included the annotation feature - the ability to add your own notes to a help topic, any help topic. These were stored in a local .ann file next to the help file, if memory serves.

During his PDC keynote, Eric Rudder mentioned and briefly showed some stuff they were doing with the Longhorn SDK to enable threaded annotations, kind of like discussions attached to a help topic. So I've stumbled on what promises to be a cool site: lab.msdn.microsoft.com. One of the playthings is the MSDN Annotations Service. It requires the download of a small plug-in for your browser.

It basically works like this...

You visit any page (in theory) on the MSDN site (including the Longhorn SDK) and you get an annotations window at the bottom. This allows you to add your own comments. Nice. The cool thing is, you get to see other users' annotations as well. These annotations are not stored in a local .ann file; no, they are stored on the Microsoft site.

Maybe you don't want other people to see your goofy code snippets. Fortunately, you can subscribe to your own feeds, so long as they are exposed as RSS - like, say, this blog. If you want to attach an entry to a page you're visiting, simply paste that page's URL into your blog entry (like so: http://longhorn.msdn.microsoft.com/lhsdk/ref/ns/microsoft.build.buildengine/c/target/target.aspx).

The annotation service allows you to subscribe to a feed. While you are looking at a given page - like the one above - if the subscribed feed contains a URL to that page, then presto, it shows up as an annotation. Very cool. The stipulation here is that in the RSS XML feed, the tag has to contain an anchor with that URL.

So does MS listen to your subscribed feeds? No, that's what the small plug-in is for. It's done on the client.

Yet another creative use of RSS. I'm also told that the MS-provided annotations are also scraped from newsgroups.

Using a DLL from Web Service

I had the distinct pleasure of trying to incorporate a non-COM-compliant DLL into a web service yesterday. Along with the issues associated with marshalling parameters (which I'll mention in a separate blog entry), I also had to get the web service to find the DLL that needed to be loaded. I would have thought that simply placing it into the bin directory under the virtual root would be sufficient. Apparently not. Even after doing so, I still got a "could not load DLL" message.

The correct answer is that the DLL needs to be someplace in the directories listed in PATH.  And by default, the bin directory is not in this list. 

For those who are unfamiliar with the process, the PATH is built from a combination of sources. First, the system path, as defined under the System Environment Variables in the System control panel applet, is included. Then the user path, as defined in the same place, is appended. Finally, the directories added to the PATH through Autoexec.bat are included.

So if you plan on using a DLL in a web service, make sure that either the DLL is installed someplace along your PATH or your PATH is modified to include the web service's bin directory.
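For the second option, here is a hedged sketch (it assumes ASP.NET's HttpRuntime.BinDirectory, and note that Environment.SetEnvironmentVariable only arrived in .NET 2.0 - on 1.x you'd have to P/Invoke the Win32 SetEnvironmentVariable instead) of code that could run at application startup:

Imports System.Web

Public Class PathFixer
    ' Prepend the web service's bin directory to the process PATH so the
    ' native DLL can be located when it is loaded.
    Public Shared Sub EnsureBinOnPath()
        Dim bin As String = HttpRuntime.BinDirectory
        Dim current As String = Environment.GetEnvironmentVariable("PATH")
        Environment.SetEnvironmentVariable("PATH", bin & ";" & current)
    End Sub
End Class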