Coming at the debugger the other way.

When writing services, I often find myself attaching to processes manually from within VS.NET. When you can't simply run the code directly from VS.NET and step through it, this is the usual choice. Every time I have to do it, though, I cringe because I'm walking on thin ice: sometimes it doesn't work, sometimes the IDE hangs, and sometimes weird things just seem to happen.

I was having a particularly difficult time with a client yesterday who was debugging through some HTTP handlers when I remembered a question from one of the MCSD.NET exams that clued me into the fact that you can programmatically trigger a breakpoint - that's right - I said "programmatically".

System.Diagnostics.Debugger.Break();

That nifty little class and method call brings up a dialog box offering to attach to a new instance of VS.NET or to an existing one you might already have open - similar to what you see on an unhandled exception. For this to work, the user running the process requires UIPermission. Not surprisingly, the default ASPNET user that ASP.NET normally runs under (when the machine.config processModel section's user is set to "machine") does not have this permission. If you are a developer running IIS locally, consider temporarily changing it to "system" or some other user - but be careful, because doing so causes ASP.NET to run under the local system account, which is a security risk.
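
As an aside - this guard is my own habit, not part of the original tip - it's worth wrapping the call so a build that escapes to a server without a debugger doesn't sit waiting on the dialog. A minimal sketch:

#if DEBUG
// Only pop the attach dialog if no debugger is already attached.
if (!System.Diagnostics.Debugger.IsAttached)
{
    System.Diagnostics.Debugger.Break();
}
#endif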

Too bad there is no T-SQL version of this function - maybe in Yukon.

Creating an Absolute or Sliding Cache Dependency

This post is as much about making a mental note as it is about anything new.  As part of the project that is creating an RSS feed from a VSS database, I wanted to perform some caching of the feed.  If you haven't worked with the ActiveX component that provides the API to VSS, trust me when I say that it's slow.  And because the retrieval of the history is recursive, the slowness can be magnified by an order of magnitude.  So given the lack of volatility of the data, caching seemed in order.

Now there is a fair bit of information available about how to add items to the ASP.NET Cache object.  The Insert method is used, along with a number of parameters.  And because there are different combinations of criteria that can be used to decide when the cached item expires, the Insert method has a number of overloads.  The one that I'm interested in is

Cache.Insert(string, object, CacheDependency, DateTime, TimeSpan)

This particular overload is used to specify a cache dependency and either an absolute or a sliding expiration time.  Ignore the cache dependency, as it is not germane to the discussion.  The issue arises when trying to specify either an absolute expiration time or the sliding time span.  Since these choices are mutually exclusive, my problem was how to correctly provide a null value for the unimportant parameter.  In my case, that meant how to define a 'null' TimeSpan, as I wanted to use an absolute expiration. It took a little bit of searching before I found the answer (which, by the way, is the reason for this post... future documentation).

To define an absolute expiration, use TimeSpan.Zero as the last value.  For example:

Cache.Insert("key", value, null, DateTime.Now.AddMinutes(15), TimeSpan.Zero);

To define a sliding expiration, use DateTime.Now as the fourth parameter.  For example:

Cache.Insert("key", value, null, DateTime.Now, new TimeSpan(0, 15, 0));
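
For what it's worth, the Cache class also exposes named fields for these sentinel values - Cache.NoSlidingExpiration (which is just TimeSpan.Zero) and Cache.NoAbsoluteExpiration - that read a little more clearly. A sketch of the same two calls using them:

// Absolute expiration only - evict 15 minutes from now, no sliding window
Cache.Insert("key", value, null, DateTime.Now.AddMinutes(15), Cache.NoSlidingExpiration);

// Sliding expiration only - evict 15 minutes after the last access
Cache.Insert("key", value, null, Cache.NoAbsoluteExpiration, new TimeSpan(0, 15, 0));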

My instinct (although I have nothing to back it up) is that under the covers the Insert method uses both parameter values to determine when the expiration should take place.  But regardless, I'm left wondering why this particular overload exists.  Why is there not one overload for absolute expiration and another for sliding?  The signatures would be different, so conflicting with another overload isn't the reason.  I'm left scratching my head.  Hopefully someone out there in the blogosphere who's reading this post will have an answer.

Bueller?   Bueller?

Updating Performance Counters from ASP.NET

While there are a number of quite useful articles about how to access and increment PerformanceCounters through the .NET Framework (the PerformanceCounter class description on MSDN and Creating Custom Performance Counters, also on MSDN to name two), the actual deployment of a web service (or any ASP.NET application, as it turns out) is not so thoroughly covered. 

The biggest problem surrounding the move into production of an ASP.NET application that updates performance counters is permissions.  By default, in order to increment a performance counter, the user needs to have Administrator or Power User rights.  You could change the processModel value in machine.config to System, but that leaves a security hole wide enough to drive an 18-wheeler through.  Which is another way of saying “Don't do this!!!!!”.

For completeness, the event log entry that appears as a result of the lack of permissions is as follows:

Event ID:  1000
Source: Perflib
Access to performance data was denied to ASPNET as attempted from C:\WINNT\Microsoft.NET\Framework\v1.1.4322\aspnet_wp.exe

Also, on the actual call to increment the PerformanceCounter, the following exception is thrown:

System.ComponentModel.Win32Exception: Access is denied

with the stack trace pointing to the GetData method in the PerformanceMonitor class.

As it turns out, the permission set that is required is much smaller than running as “System”.  In the registry key HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib, set the Access Control List so that the ASPNET user has Full Control.  Voila, the problem goes away.
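
For reference, the incrementing code itself is nothing special. A minimal sketch - the category and counter names here are made up, and the custom category is assumed to already exist:

using System.Diagnostics;

// false = open the counter writable; this is what needs the Perflib ACL change above
PerformanceCounter counter = new PerformanceCounter("MyApp", "Requests Processed", false);
counter.Increment();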

WSDL and XSL

If this solution is familiar to experienced XSL users, forgive me.  I'm a casual XSL user, so the problem was difficult to identify and I didn't find much in the way of Google entries covering the answer.  So that (in my small, secluded little world) makes it blog-worthy.

The situation I found myself in was needing to convert the WSDL output from an ASP.NET page to a specific XML format.  The reason for the format is irrelevant.  Suffice it to say that I needed to create a list of the valid SOAP operations for a particular web service.  I took the WSDL that was generated by ASP.NET and started the trial-and-error process of identifying the correct XSL stylesheet to use.  This is where I ran into problems.

I do understand the basics of XPath enough to try simple queries.  My starting point was to list out the portTypes for the WSDL.  To do this, my initial XPath query was //definitions/portType.  This didn't actually return any nodes.  I thought this strange, so I dropped down to the more straightforward //definitions.  This too returned nothing.  Running out of drop-back room, I went with the wildcard //*.  Fortunately for what remains of my hair, this worked. So the question of why the other queries didn't work remained.

After more painful attempts (and a weekend to give my brain a reset), I finally came across the solution.  In XPath 1.0, an unprefixed element name only matches elements that are in no namespace at all; there is no way to address a default namespace without a prefix.  The WSDL generated by ASP.NET puts its elements in the WSDL namespace through a default (unprefixed) xmlns declaration, so my unprefixed queries matched nothing.  The XSL file needed to be modified to declare a prefix for the URI associated with the WSDL elements.  When done, the xsl:stylesheet element looked like the following:

<xsl:stylesheet version="1.0"
   xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" >

The key here is the xmlns:wsdl attribute.  This defines a namespace prefix called wsdl and associates it with the listed URI.  This URI needs to match exactly the URI of the default namespace in the WSDL file.  And I do mean exactly.  Character for character.  Byte for byte.

Once this has been added to the xsl:stylesheet element, the XPath queries can be changed to //wsdl:definitions/wsdl:portType and the results are as expected.
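
The same rule applies if you run the query from code rather than from a stylesheet. Here's a quick C# sketch (not part of the original transform, and the file name is made up) that uses XmlNamespaceManager to bind the wsdl prefix:

using System.Xml;

XmlDocument doc = new XmlDocument();
doc.Load("service.wsdl");   // hypothetical path to the saved WSDL

// Unprefixed names in an XPath 1.0 query only match elements in no namespace,
// so the WSDL namespace has to be bound to a prefix explicitly.
XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
ns.AddNamespace("wsdl", "http://schemas.xmlsoap.org/wsdl/");

XmlNodeList portTypes = doc.SelectNodes("//wsdl:definitions/wsdl:portType", ns);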

Identifying SOAP Requests in Http Modules

This might seem a little on the simplistic side, but given the difficulty that I had finding this information, I'm posting it in the hope that it helps others.

I have created an HTTP Module whose job it is to raise EIF (Enterprise Instrumentation Framework) events on the receipt and response to SOAP requests.  The key here is the SOAP requests.  I don't want to pay any attention to the non-SOAP requests that come through.  So I needed to find a way to separate the two classes of messages using the information that was available.

The answer is to use the HTTP_SOAPACTION header that is included with SOAP requests (but not with the normal GETs and POSTs that a web site sees).  For example, the following code simply skips processing any non-SOAP requests.

private void Application_BeginRequest(Object source, EventArgs e)
{
    HttpApplication application = (HttpApplication)source;

    // Don't do anything for non-SOAP requests
    if (application.Context.Request.ServerVariables["HTTP_SOAPACTION"] == null)
        return;
}

Naturally, I have tied this procedure into the BeginRequest event for the HttpApplication object in the Init method for the HttpModule.
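
For context, the wiring looks roughly like this - the module class name is made up, and the handler is the method shown above:

public class SoapEventModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Hook BeginRequest so the handler above sees every incoming request
        context.BeginRequest += new EventHandler(Application_BeginRequest);
    }

    public void Dispose()
    {
    }
}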

If someone has a better idea (or a reason why/when this won't work), I'm open to suggestions.  But for what I'm trying to accomplish it did the trick.

Share "source" files between projects.

By default, each project in your solution has an AssemblyInfo.?? file. Amongst other things, it contains an AssemblyVersion attribute that ends up stamping the dll or exe with its version number. This is the version number used by the CLR to make sure that when you reference a dll, it finds the correct version.

In a solution comprised of many projects, you may want them all to share the same build number. By default, VS.NET sets this version to 1.0.*, the * meaning that the number is generated automatically with each build. Sometimes you build just one project, sometimes all of the projects in a solution. Some developers may even have their own solution files to work on just a subset of the projects in the master solution.

What I'm saying here is that you really ought to take better care (and control) of this version number. It would be nice to have all of the dlls in the solution share the same assembly version. Sure, you can hard-code it, but manually incrementing it then becomes tedious. The secret to this tip is that the AssemblyVersion attribute doesn't have to be in the AssemblyInfo.?? file. It can be in its own file. In fact, that file doesn't even have to physically exist in the same subdirectory as the project, thanks to "linked" files.

So follow these steps.

  1. Remove the "AssemblyVersion" attribute from the AssemblyInfo file in each of your projects.
  2. Create a "VersionInfo.cs" (or .vb) in the root of your solution - probably one level up from your projects. It should include an AssemblyVersion attribute like the one you took out of each of your AssemblyInfo files, along with a using System.Reflection directive, since that is the required namespace. (A sketch of this file follows the list.)
  3. In each of your projects, make a shortcut or link reference to this new file. To do this, right-click on each project and select "Add Existing Item". Browse to the VersionInfo.cs (or .vb) file and, instead of clicking "Open", select "Link" from the drop-down on the Open button in the file dialog.
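
The linked file itself ends up being tiny. A sketch of what VersionInfo.cs might contain (the version number is just an example):

using System.Reflection;

[assembly: AssemblyVersion("1.0.1234.0")]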

Now you have only one place to increment the version for your entire solution. If you are using NAnt, you can have it do this for you with these simple tasks:

 

<version path="VersionInfo.txt" startDate="2003-10-1" buildType="monthday" prefix="assembly." />

<asminfo output="VersionInfo.cs" language="CSharp">
    <imports>
        <import name="System" />
        <import name="System.Reflection" />
    </imports>
    <attributes>
        <attribute type="AssemblyVersionAttribute" value="${assembly.version}" />
    </attributes>
</asminfo>

The first task (version) increments the build number and stores it in both the assembly.version property and the VersionInfo.txt file. The second task (asminfo) regenerates VersionInfo.cs using the assembly.version property.

NAntContrib "slingshot" vs. NAnt "solution"

I was trying to build the latest NAntContrib project so I could take advantage of its Slingshot task, which automagically converts a Visual Studio solution (*.sln) and projects (*.csproj - not sure about *.vbproj) into a handy-dandy NAnt *.build file complete with all references, source inclusions, dependencies, and debug, release, and clean targets. Nice.

The only problem with this approach, of course, is that you need to run it daily if you don't want your NAnt builds to break when a new project or reference is added to your solution. Fortunately, NAntContrib exposes Slingshot not only as a command line tool but also as a native NAnt task. The pain is that NAntContrib doesn't have a "stable release", only a "nightly build"... which of course is not built against the NAnt "stable release". So I had to throw away the NAnt stable release that I'd been using and opt for its latest nightly build too. After I got both to compile successfully, I started to battle goofy Slingshot issues like having to map web projects to a local hd path, avoiding spaces in my paths, and ReallyReallyReallyLongPaths.  I ended up doing a subst to map a directory to a drive letter to keep the paths short. All this to create a build file that will get thrown away each and every night.

While browsing through the new things added to NAnt since the stable release I was accustomed to, I discovered a new task: solution. It compiles a whole solution - the .sln file - with no generation of a build file. My build file literally goes from hundreds if not thousands of lines to this:

<target name="Debug">
    <solution solutionfile="ObjectSharp.Framework.sln" configuration="Debug" />
</target>

This compiles our entire framework, with all 8 projects as referenced in the .sln file, in the right order, with the same dependencies a developer gets while using the VS.NET IDE. What a concept: share the build files used by your build server and your IDE so that there are no surprises or impedance mismatches. Such a great idea that MS is doing it with MSBuild in Whidbey. I wonder if MSBuild will have add-on tasks like NAntContrib does - for Visual SourceSafe, sending email, running NUnit, and executing SQL. I like my NAnt - not sure if I'll be able to break free for MSBuild.

C# vs. VB.NET Interfaces

Visual Basic has an interesting syntax for implementing interface members.

Public Interface IAdd
    Function Execute(ByVal i As Integer, ByVal j As Integer) As Integer
End Interface

Public Interface ISubtract
    Function Execute(ByVal i As Integer, ByVal j As Integer) As Integer
End Interface

Public Class Calculator
    Implements IAdd, ISubtract

    Public Function Add(ByVal i As Integer, ByVal j As Integer) As Integer Implements IAdd.Execute
        Return i + j
    End Function

    Public Function Subtract(ByVal i As Integer, ByVal j As Integer) As Integer Implements ISubtract.Execute
        Return i - j
    End Function
End Class

The key here is that the function names don't have to be the same as they are in the interface definition. I have two interfaces that both expose an Execute method, and they are implemented separately as Add and Subtract. Thanks, VB team, for the "Implements" keyword. C# is a different story - and up until a few days ago I didn't think you could do this at all in C#. Here you go:

interface IAdd
{
    int Execute(int i, int j);
}

interface ISubtract
{
    int Execute(int i, int j);
}

public class Calculator : IAdd, ISubtract
{
    #region IAdd Members

    public int Execute(int i, int j)
    {
        return i + j;
    }

    #endregion

    #region ISubtract Members

    int InterfacesCS.ISubtract.Execute(int i, int j)
    {
        return i - j;
    }

    #endregion
}

I used the auto-magic interface template generator built into the VS.NET C# editor. After you type ISubtract on the class declaration, wait for the little tool tip that tells you to hit TAB for a default implementation template. You'll notice the fully qualified function name "InterfacesCS.ISubtract.Execute" on the second declaration of Execute. The whipper snappers out there will also notice that the second method is not public. So it's private, you might say - and the class view and object browser would agree with you. But don't try putting "private" in front of the declaration; the compiler won't like that. And don't even think of trying to call it directly. The only way to call this second method, either internally to the class or externally, is by casting the reference to the interface, like so:

Calculator calc = new Calculator();
int result = ((ISubtract)calc).Execute(5, 4);

If you really want to call the method directly without casting, you'll have to create a wrapper method and expose it publicly - or privately, if that suits your needs.

public int Subtract(int i, int j)
{
    return ((ISubtract)this).Execute(i, j);
}

Thanks to Alex Bershadski for pointing me to a few code snippets in the MS Press C# book about this. The book could really do a better job of identifying the limitations of C# in this regard. At least it can be done. Up until Tuesday, I didn't think it was possible.

ASP.NET and the Event Log

Today's tidbit revolves around enabling the ASP.NET user to generate entries into the event log.  In an ideal world (hint, hint Microsoft designers), this would be a relatively straightforward process.  Or at least one that didn't require a direct hack into the registry.  But that is not the case at the moment.  So without further ado, here are the steps involved in enabling the ASP.NET user to create event log entries.

1. Launch RegEdit.
2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\.
3. From the menu, select Edit -> Permissions.
4. Click the Add button and type ASPNET. (If ASP.NET is running under a different user id, use that id instead.)
5. Click OK.
6. Select the newly added user from the list (ASP.NET Machine User by default).
7. Click Full Control in the Allow column.
8. Click OK.

It is usually a good idea at this point to restart IIS with the IISReset command (Start | Run | IISReset).

For those concerned with the security hole that has been opened up: once these changes are implemented, the ASP.NET user has full control over the Application event log.  Worst case, a misbehaving process could fill up the event log or delete existing log entries.  But as far as security breaches go, these are fairly minor, especially when compared to the benefit of being able to write log entries.
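
Once the permissions are in place, the logging code itself is the easy part. A minimal sketch - the source name here is made up:

using System.Diagnostics;

string source = "MyWebApp";
if (!EventLog.SourceExists(source))
{
    // Creating the source also writes under the EventLog registry key,
    // which is exactly what the ACL change above allows.
    EventLog.CreateEventSource(source, "Application");
}
EventLog.WriteEntry(source, "Something worth logging", EventLogEntryType.Warning);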