Visual Studio Team System and XBox/Halo 2

What do project management, test management, defect tracking, build servers, methodology, automated testing, code coverage, and software diagramming have to do with Halo 2? I'm not sure really, but if you want both - then you need to come to the Toronto Visual Basic User Group meeting tomorrow night. I'll be doing a "PowerPoint-free" drive-through of Visual Studio Team System AND raffling off an Xbox console and a copy of Halo 2, worth about $270.  More details here: http://www.tvbug.com/

Random ASP.NET AppDomain Restarts

Are you having a problem with apparently random application restarts in your ASP.NET application?  Does your session information mystically disappear with no discernible pattern?  Well, the following may help you out.

One of the things that ASP.NET does to help make it easier for developers to modify running web sites is to keep an eye on files that are part of the virtual directory.  If you drop a new version of a DLL into the bin directory, it takes effect from the next request on.  If you make a change to an ASPX file, it too is detected and becomes 'live' with any subsequent request.  No question that this is useful functionality.

A little-known fact about this process is that ASP.NET is also aware of how much memory it costs to keep two versions of the same assembly loaded.  If changes could be made indefinitely without a restart, the accumulated old versions would eventually become detrimental to the overall performance of ASP.NET. To combat this, ASP.NET also tracks the number of files that have changed and, after a certain number, performs an application restart.  As with any other restart, this causes the current session information to be lost.  By default, the number of changes before a restart is 15.  To modify this, you can change the numRecompilesBeforeAppRestart attribute in the machine.config file.
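For reference, the setting lives on the compilation element. A minimal sketch of the change (the value of 50 is purely illustrative, and you should check the attribute against your own machine.config):

```xml
<configuration>
  <system.web>
    <!-- Allow 50 file changes before ASP.NET recycles the AppDomain
         (the default is 15) -->
    <compilation numRecompilesBeforeAppRestart="50" />
  </system.web>
</configuration>
```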

While all of this is useful information, there is a small twist that can make our lives more difficult.  The files that ASP.NET watches for changes are not limited to those with an .aspx extension. In fact, it is any file in the virtual directory.  Also, the name of the setting implies that recompiles are what is being counted.  They're not.  What is being counted is the number of files that change.  Or, more accurately, the number of changes to any file in the virtual directory.

How can this cause problems?  Suppose you put a log file into the virtual directory of a web site, and that log file is opened and kept open while information is written to it.  When the log file fills up, it is closed and a new one is opened.  Of course, each close counts as a file change.  That means that after 15 log files are created (assuming no other changes to the web site), the application automatically restarts.  And since the amount of information written to the log file differs depending on which pages/web methods are called, the frequency of the restart depends on factors that are not immediately apparent.  Imagine the joy of trying to discover why ASP.NET applications are being randomly reset without the benefit of this blog.

The moral of this story:  Don't put any non-static files into an ASP.NET virtual directory. My good deed for the day is done. 

Assembly Reference Resolution in Visual Studio

I got a question today about problematic assembly references during the build of a large project.

Visual Studio projects have a user file (*.csproj.user, *.vbproj.user) that stores a "Reference Paths" list. You can edit this list of paths in your project properties dialog, and it works like a PATH variable.

The twist comes in when you add a reference to a project. The IDE creates a relative path to the assembly you picked in the Add Reference dialog and places that as a "hint path" in the .csproj/.vbproj file.
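From memory, the hint path ends up in the project file as a fragment roughly like this (the assembly name and path are purely illustrative; verify the exact shape against a real VS.NET 2003 .csproj):

```xml
<!-- Fragment of a VS.NET 2003 .csproj; names and paths are made up -->
<References>
    <Reference
        Name = "MyCompany.Utilities"
        AssemblyName = "MyCompany.Utilities"
        HintPath = "..\..\lib\MyCompany.Utilities.dll"
    />
</References>
```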

So imagine you have some carefully crafted hint paths in your project file references, and no reference paths in your user file. It's still possible for your assembly to not be found where you expect it.

Did you know that VS.NET will automatically add reference paths for you as part of its build process? Let's say you have a list of references in your project. As VS.NET iterates through that list one assembly at a time, it first checks the reference paths. If the assembly is not found there, it uses the hint path. If it finds the assembly via the hint path, it takes the fully qualified path to that assembly and puts it in the reference paths of your user file. When it gets to the next assembly, it checks all of those reference paths - including the one it just automatically added.

What happens if you have multiple copies of your referenced dll hanging around? It could find one other than the one you referenced in your hint path.

All of this is pretty rare, but if you are like some folks who use Visual Studio itself as their build server (rather than NAnt) and, as a matter of practice, delete the .user files (with their accumulated reference paths) as part of that build, you could find yourself in hot water. The only real fix in this mixed-up process is to make sure you don't have multiple copies/versions of your referenced assemblies lying around. Or better yet, use NAnt.

When is a cache not really a cache

I spent a large portion of the day inside the Caching Application Block.  Specifically, a colleague and I were tracking down what appeared to be a nasty threading bug triggered by the creation of two different cached objects that were related to one another.  As it turned out, the bug that existed couldn't explain away all of the behaviors that we observed.

As it turns out, there is a poorly documented aspect of the Caching Application Block that was causing us grief.  The problem is that the default number of objects the cache can store before scavenging begins is set very low.  Specifically, it is set to 5.  As well, there is a UtilizationForScavenging setting (80, by default) that lowers this number even further.  Once the cache contains (maximum * utilization / 100) items, the scavenger starts to remove the excess items, trying to keep the number of items below that calculated value.  With the default values, this means that no more than 3 elements will be kept in the cache at a time.  The scavenging class uses a least-recently-used algorithm; however, if you're using an absolute-time expiration, no 'last used' information is saved, so the scavenger appears to remove the last item added.

That's right.  Unless you make some changes to the default config, only three elements are kept in the cache.  Probably not the performance enhancer that you were looking for from a cache.  Fortunately, the values can easily be changed.  They can be found in the ScavengingInfo tag in app.config. And there is no reason not to set the maximum value much higher, as no allocations are performed until items are actually cached. It was just the initial surprise (and subsequent fallout) that caused me to, once again, question how closely related the parents of the designers were.  But only for a moment. ;)

As one further word of warning, if there is no ScavengingInfo tag in the config file, then the default class (the same LruScavenging class just described) is used. And instead of getting the maximum cache information from the config file, a file called CacheManagerText.resx is used.  In that file, the entries called RES_MaxCacheStorageSize and RES_CacheUtilizationToScavenge determine how many items to keep in the cache.  Out of the box, these values are set to the same 5 and 80 that the config file contains.
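To give a feel for the fix, the ScavengingInfo entry looks something like the sketch below. The attribute names here are my reconstruction from the settings described above - check them against the block's own documentation and sample config before using them; only the 5-and-80 defaults are from the text.

```xml
<!-- app.config sketch; attribute names are assumptions, values raised
     from the defaults of 5 and 80 described above -->
<ScavengingInfo
    ScavengingAlgorithmClass="Microsoft.ApplicationBlocks.Cache.Scavenging.LruScavenging"
    MaxCacheStorageSize="1000"
    UtilizationForScavenging="80"
/>
```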

Solving the "No such interface is supported" problem

I was asked a question today about a fairly common error that occurs when serviced components are used in conjunction with ASP.NET. Specifically, a COM+ component (one that is derived from ServicedComponent) was being used on an ASP.NET page.  When the page was loaded, an error of "No such interface is supported" was raised.

To understand the why of this error requires a little bit of knowledge about COM+.  When a serviced component is first instantiated, the CLR checks the COM+ catalog for information about the runtime requirements of the class. But if the class has not previously been registered, no information will be available.  To correct this discrepancy, the CLR automatically creates a type library for the class and uses that information to populate the COM+ catalog. This mechanism is called lazy registration.

But wait.  Registering a component in the COM+ catalog requires a privileged account.  In particular, you need to be a machine admin.  And, unless you have been silly enough to grant the ASP.NET user admin rights to your machine, the update of the COM+ catalog fails.  No catalog information, no instantiation.  An unsuccessful instantiation means an exception.  An exception that includes an error message of "No such interface is supported". Go figure.

So ultimately, the solution is not to depend upon lazy registration for COM+ components that are deployed for use in ASP.NET.  Instead, perform the registration manually with the regsvcs.exe utility.
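The registration itself is a one-liner run from an admin command prompt as part of deployment (the path and assembly name below are illustrative):

```
rem Register the serviced component up front, under an admin account,
rem so ASP.NET never has to fall back on lazy registration
regsvcs.exe C:\inetpub\wwwroot\MyApp\bin\MyServicedComponents.dll
```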

Cookieless Sessions and Security

In a previous blog, I pointed out that Microsoft had created an HttpModule that mitigates the ASP.NET canonicalization issue that was first described a couple of weeks ago. In one of the comments, Amir asked about the security issues surrounding the use of cookieless sessions.  Specifically, he was wondering if ASP.NET could tell if a request containing a cookieless session component was coming from a different browser instance or even a different location.  In brief, the answer is "No".

When used on a plain-text web site (i.e. no SSL), cookieless sessions are not secure at all. The session id is placed into the URL in every local link on a page, so the URLs take the following form:  http://domain.com/(sessionid)/Page.aspx. This session id is not automatically tied to a specific browser instance, or even to the IP address of the initial request.  There is no way that I'm aware of to tie the request to a browser (other than using cookies ;). And while a session can be associated with an IP address, this requires some additional work on your part, in the form of an HttpModule.  The association between session id and IP address could be made by hashing the IP address of the request (Request.ServerVariables["REMOTE_ADDR"]) together with the session id and using the resulting value to access the session variables.  However, this solution doesn't account for people behind a proxy that causes the IP address to change from request to request. For that case, you might want to use just the first two components of the IP address, since these are not normally varied by the proxy. But if the spoofer is on the same subnet...  As you can see, all in all, this is a difficult problem to solve.
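The hashing idea can be sketched roughly as below. This is illustrative code, not a complete HttpModule; the class and method names are my own, and the "first two octets" choice mirrors the proxy caveat above.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public class SessionAddressBinder
{
    // Combine the session id with the first two octets of the caller's
    // IP address and hash the result.  Keying session data on this value
    // means a hijacked session id presented from a different /16 network
    // produces a different key and misses the real session.
    public static string MakeSessionKey(string sessionId, string remoteAddr)
    {
        string[] octets = remoteAddr.Split('.');
        string subnet = octets[0] + "." + octets[1];

        byte[] data = Encoding.UTF8.GetBytes(sessionId + "|" + subnet);
        byte[] hash = new SHA1Managed().ComputeHash(data);
        return Convert.ToBase64String(hash);
    }
}
```

In a real HttpModule you would call something like MakeSessionKey(sessionId, Request.ServerVariables["REMOTE_ADDR"]) early in the request and use the result wherever session state is looked up.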

I should probably mention that cookie-based sessions suffer from the same problem.  There is nothing inherent in how cookied sessions work that makes them safer than their cookieless counterparts.  A hacker can easily create a request that includes a spoofed cookie containing a hijacked session identifier.  But at least a cookie-based session solution can use SSL to encrypt the entire request.  Since the session id is not included in the URL directly, it is not easily accessible if the request/response gets hijacked.

So to summarize: while cookieless sessions are nice in theory, in practice they should really only be used on sites where every page is accessed through SSL.  And the implementation should be customized to verify, using a portion of the IP address, that the originating IP address of each subsequent request corresponds to that of the original request.

Remove Those Annoying System Tray Balloons

If you're like me, the balloons that are automatically popped up by system tray applications are annoying.  This blog from Scott Howlett describes how to eliminate them from your life.

Poor operator overloading (or why I'm an evil developer)

In the project that I'm working on, I had the need to overload the equality operator for a custom class.   This is a fairly well-known problem, although if you haven't done it before, there are enough traps to make it challenging.  More accurately, it is tricky enough that it is better to borrow already-working, optimized code than to create it yourself. So naturally, I turn to Google.

I find a couple of MSDN pages, including this one, which describes how to do it.  Down towards the bottom of the article, there is a C# example as follows:

public static bool operator ==(Complex x, Complex y)
{
   return x.re == y.re && x.im == y.im;
}

Perfect.  I cut, paste, modify to fit my class, run a couple of basic tests and check the code in. 

A few hours later...boom.

All of a sudden some of the other unit tests are failing.  Specifically, statements like the following are throwing a NullReferenceException

if (testValue == null) 

Hold on a second. This is a test for null.  How can it throw an exception?

Here's the catch.  The example I took from MSDN assumes that the objects being compared are structs.  As such, they can never be null, which is why x.re == y.re can never throw a NullReferenceException.  However, if the objects being compared are reference types, then either side can be null, which means that the sample code can throw an exception.  The correct overload (taken from here, which also includes a wonderful description of how to optimize Equals) is as follows:

public static bool operator ==(SomeType o1, SomeType o2)
{
   return Object.Equals(o1, o2);
}

This form not only reuses the logic in the Equals method, it has one additional benefit: no NullReferenceExceptions.  Unless, of course, your Equals method throws one. But who would make that mistake? ;)
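Putting the pieces together, a reference type that overloads equality safely might look roughly like this (the class and field names are mine, not from the MSDN article):

```csharp
public class Money
{
    private readonly decimal amount;

    public Money(decimal amount) { this.amount = amount; }

    public override bool Equals(object obj)
    {
        // 'as' yields null for a null argument or a wrong-typed one,
        // so there is no cast exception and no null dereference
        Money other = obj as Money;
        return other != null && this.amount == other.amount;
    }

    // Overriding Equals without GetHashCode breaks hash-based
    // collections (and earns a compiler warning)
    public override int GetHashCode()
    {
        return amount.GetHashCode();
    }

    // Object.Equals(a, b) handles null on either side before
    // dispatching to the virtual Equals above
    public static bool operator ==(Money a, Money b)
    {
        return Object.Equals(a, b);
    }

    // C# requires != to be overloaded alongside ==
    public static bool operator !=(Money a, Money b)
    {
        return !Object.Equals(a, b);
    }
}
```

With this in place, a `money == null` check simply returns false for a non-null instance instead of throwing.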

So why am I evil? Well, in the brief period of time that the bad code was checked in, a colleague updated his sandbox, said update including this bug. Three days later, he runs into a NullReferenceException while checking for null. Wastes time scratching his head wondering why a check for null is throwing a null exception. Suggests that the runtime's parents were closely related. Finally, upon consultation with me (because, honestly, who *expects* that the equality operator will be overloaded), realizes that my old code is the source of the problem. Usually my mistakes impact only me. This one, unfortunately, spread its love just a little bit further.

Automatic mitigation for ASP.NET vulnerability

By now, most of you will have heard about the ASP.NET vulnerability that allows creatively formed URLs to bypass forms or Windows-based authentication.  And while there has been a piece of code that can be added to global.asax, Microsoft has released a more easily deployed mechanism for mitigating the security risk.  Check out http://www.microsoft.com/security/incident/aspnet.mspx to download an msi file that installs an HTTP Module that protects all of the sites on a web server.

CTTDNUG UG Tomorrow Night: Building Pocket PC Applications with the Compact Framework and SQL CE

As part of the continuing MSDN User Group Tour, I'll be speaking at the Canadian Technology Triangle .NET User Group in Waterloo on Thursday October 7th (tomorrow night). There is a new location for the meeting at Agfa (formerly Mitra) in Waterloo. All the details are here.

Correction: Adam Gallant is going to be the speaker at this event tomorrow night. Sorry for the confusion. It should be a good talk.