Missing random files while serializing nested classes

One more post aimed at solving the mystery of a strange error message. The situation is as follows: a public class is defined so as to include a nested class. Within the public class, a method is defined that XML serializes the nested class, the reason being to convert the nested object into a string that can be passed across a service boundary. The code for the class resembles the following:

using System;
using System.Xml.Serialization;

public class OuterClass
{
   class InnerClass
   {
      private string innerProperty;
      public string InnerProperty
      {
         get { return innerProperty; }
         set { innerProperty = value; }
      }
   }

   public void SerializeThis()
   {
      try
      {
         XmlSerializer s = new XmlSerializer(typeof(OuterClass.InnerClass));
         Console.WriteLine("Serializer created");
      }
      catch (Exception ex)
      {
         Console.WriteLine("Serializer not created: " + ex.Message);
      }
      finally
      {
         Console.ReadLine();
      }
   }
}

When the SerializeThis method is executed, an exception is thrown when the XmlSerializer is instantiated, specifically a FileNotFoundException with a message of "File or assembly name hzi9lzkp.dll, or one of its dependencies, was not found."  Not an exceptionally helpful message, when it comes to figuring out what's going wrong. 

As it turns out, the problem is that InnerClass is not exposed as a public class. The randomly named .dll in the message is the temporary serialization assembly that XmlSerializer generates behind the scenes; when the type being serialized isn't public, that generated assembly can't be built, and the failure surfaces as a missing file. Once the nested class is made public, the exception disappears and the serialization works as expected.
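For completeness, here is roughly what the working version looks like once the nested class is public, including a helper that turns the object into a string for the service boundary. The SerializeInner name is my own, and the sketch assumes using directives for System.IO and System.Xml.Serialization.

public class OuterClass
{
   public class InnerClass
   {
      private string innerProperty;
      public string InnerProperty
      {
         get { return innerProperty; }
         set { innerProperty = value; }
      }
   }

   // Serialize the nested object to a string that can cross the service boundary.
   public string SerializeInner(InnerClass inner)
   {
      XmlSerializer s = new XmlSerializer(typeof(InnerClass));
      using (StringWriter writer = new StringWriter())
      {
         s.Serialize(writer, inner);
         return writer.ToString();
      }
   }
}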

IIS 6.0 Isolation Mode and ASP.NET worker process identity

A client ran into an intriguing problem the other day. The application under development has a number of web services that get deployed onto one server or another as different versions are released for client testing. Underneath the services, LDAP is used to store roles, preferences and other flotsam and jetsam as needed. In order to gain access to this information, the IIS_WPG group is given read access to LDAP. Relatively straightforward.

The problem arose when the current version was deployed onto a new machine. Instead of connecting to LDAP, an exception was thrown: the ASP.NET worker process didn't have access. We could connect to the LDAP server, but attempts to negotiate access were denied.

Four heads were now being scratched. We removed and re-added the IIS_WPG group in LDAP. We did a couple of IISResets to make sure the security context wasn't being cached. We checked the SID included as the DN for the IIS_WPG group to make sure nothing had been installed incorrectly. Nothing. We then gave Everyone access to LDAP. The service started working again. So we knew it had something to do with permissions.

In a fit of, well, desperation, we gave the ASPNET user permission to access LDAP. Wouldn't you know it. Things worked again. But this was unexpected. One of the things that changed with IIS 6.0 was that ASPNET was no longer the identity under which the worker process runs. Wasn't it?

As it turns out, the real answer is "it depends". If you install IIS 6.0 fresh, then IIS_WPG is the group to which the permissions you'd like ASP.NET to have should be assigned. That is to say, IIS 6.0 runs in Worker Process Isolation Mode. However, if (and this is the 'if' that caught us) you upgrade from IIS 5.0 to 6.0, the ASPNET user is still the security context for the ASP.NET process. This can be modified by changing the isolation mode in IIS. Quite easy, if you know how. The trick, as we found out, was knowing that we even had to.
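If you want to check which mode a server is in from code, a minimal sketch along these lines should do it. It assumes the IIS ADSI provider is available and uses the IIs5IsolationModeEnabled metabase property; verify the property name against the IIS 6.0 documentation before relying on it.

using System;
using System.DirectoryServices;

class IsolationModeCheck
{
   static void Main()
   {
      // Read the metabase flag that says whether IIS 6.0 is running in
      // IIS 5.0 isolation mode (true) or worker process isolation mode (false).
      using (DirectoryEntry w3svc = new DirectoryEntry("IIS://localhost/W3SVC"))
      {
         Console.WriteLine("IIS 5.0 isolation mode enabled: " +
            w3svc.Properties["IIs5IsolationModeEnabled"].Value);
      }
   }
}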

 

WS-Security vs. SSL

A blog post by Doug Reilly brought to mind discussions that I had with some service architects earlier in the year.  The question was whether it was better to use SSL or WS-Security to secure SOAP messages as they travel from the client to the server and back again.  While using SSL is certainly the easier of the two choices, there are a number of reasons why WS-Security is generally superior.

SSL Provides In-Transit Security Only

The basic mechanism behind SSL is that the client encrypts all of the requests based on a key retrieved from a third party.  When the request is received at the destination, it is decrypted and presented to the service.  This is a well understood process.  However, when you look a little deeper, you'll begin to realize that the request is only encrypted while it is travelling between the client and the server.  Once it hits the server, it is decrypted from that moment on.

To be completely accurate, it might not even need to hit the server to be decrypted. If, for example, you have a proxy server in front of your web server, it is possible that the decryption certificate has been installed there. That way the proxy can examine the message to determine the correct routing. However, the message may not be re-encrypted before it is sent to the web server that will actually handle the request. So now that 'secure' request is travelling along a network in clear text. Granted, the network it travels along is quite likely the internal one for the company hosting the server. Still, there is the possibility that sensitive data can be picked up.

Further, what if the web service logs all of the incoming requests into a database? Now not only does the request travel unencrypted across the wire, but it is also stored in a format for all to see.

WS-Security alleviates this problem by maintaining its encryption right up to the point where the request is being processed. Also, if the request is logged, the logged version will quite likely be encrypted (the logging portion of the service *could* log the message in unencrypted form, but it would have to do so explicitly).

Targeted Security

If SSL is used to encrypt a web service request, it's an all-or-nothing proposition. SSL secures the entire message, whether all of it is sensitive or not. WS-Security allows you to secure only the part (or parts) of the message that needs to be secured. Given that encryption/decryption is not a cheap operation, this can be a performance boost.

It is also possible with WS-Security to secure different parts of the message using different keys or even different algorithms. This allows separate parts of the message to be read by different people without exposing other, unneeded information.
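To give a feel for what element-level encryption looks like, here is a minimal sketch using the .NET EncryptedXml class, which implements the same XML Encryption standard that WS-Security builds on (it is not a full WS-Security stack). The soapXml string, the recipientCertificate (an X509Certificate2) and the CreditCardNumber element name are all stand-ins.

// Namespaces: System.Xml, System.Security.Cryptography.Xml and
// System.Security.Cryptography.X509Certificates (reference System.Security.dll).
XmlDocument doc = new XmlDocument();
doc.LoadXml(soapXml);

// Pick out just the element that carries sensitive data.
XmlElement sensitive = (XmlElement)doc.GetElementsByTagName("CreditCardNumber")[0];

// Encrypt only that element with the recipient's certificate and swap the
// ciphertext into the message; the rest of the document stays readable.
EncryptedXml exml = new EncryptedXml();
EncryptedData encryptedElement = exml.Encrypt(sensitive, recipientCertificate);
EncryptedXml.ReplaceElement(sensitive, encryptedElement, false);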

Faster Routing

Although not part of the mainstream yet, look for intelligent load balancing based on the content of incoming requests in the near future.  When this does happen, wouldn't it be better not to have the router decrypt the request before determining where it should go?

So given all of this, why would you still need to use SSL? Because there are still a lot of people for whom SSL is the ultimate in security over the web. Without the comfort of https, some companies feel that their information is being sent naked into the wild. Not true, but it's not always appropriate to get into screaming matches with clients. ;)  Sigh. I guess more educating is in order.

Update: My colleague, John Lam, pointed out that my comment about the key to SSL encryption being retrieved from a third party was inaccurate. In actuality, the SSL mechanism involves the following steps (taken from here):

  1. A browser requests a secure page (usually https://).

  2. The web server sends its public key with its certificate.

  3. The browser checks that the certificate was issued by a trusted party (usually a trusted root CA), that the certificate is still valid and that the certificate is related to the site contacted.

  4. The browser then uses the public key to encrypt a random symmetric encryption key and sends it to the server, along with the encrypted URL required as well as other encrypted http data.

  5. The web server decrypts the symmetric encryption key using its private key and uses the symmetric key to decrypt the URL and http data.

  6. The web server sends back the requested html document and http data encrypted with the symmetric key.

  7. The browser decrypts the http data and html document using the symmetric key and displays the information.

You will notice that, while there is a third party involved in validating the certificate, the key does not come from the third party but from the server.
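To make steps 4 and 5 concrete, here is a rough sketch of the wrap/unwrap pattern using the framework crypto classes. Both sides are collapsed into a single process purely for illustration, and the key sizes are arbitrary.

using System;
using System.Security.Cryptography;

class SslKeyExchangeSketch
{
   static void Main()
   {
      // Server: an RSA key pair; the public half is what the certificate carries (step 2).
      RSACryptoServiceProvider serverRsa = new RSACryptoServiceProvider(1024);

      // Client: generate a random symmetric session key and wrap it with the
      // server's public key (step 4).
      RijndaelManaged session = new RijndaelManaged();
      session.GenerateKey();
      byte[] wrappedKey = serverRsa.Encrypt(session.Key, false);

      // Server: unwrap the session key with the private key (step 5). From here on,
      // both sides encrypt traffic with the shared symmetric key (steps 6 and 7).
      byte[] sessionKeyOnServer = serverRsa.Decrypt(wrappedKey, false);
      Console.WriteLine("Recovered session key: " + sessionKeyOnServer.Length + " bytes");
   }
}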

John also mentioned that the availability of SSL Accelerators makes the performance argument moot. I don't agree with this. While SSL Accelerators certainly increase the throughput of secured sites, we need to compare apples to apples. There are also XML Accelerators available in hardware to decrypt incoming requests. While using hardware certainly makes it easier to justify staying with just SSL, all you're really doing is pushing the bottleneck further out. Ultimately, because encryption is a computationally expensive operation, the less that gets encrypted, the greater the overall throughput.

Finally, there is one further reason to choose WS-Security over SSL that I forgot to mention.  SSL is closely tied to HTTP.  Which is to say that SSL can't be used if the mechanism for transporting service requests is something other than HTTP.  At the moment, this isn't the case for the vast majority of requests.  But there are already SOA examples using UDP and SMTP as the transport.  WS-Security works independently of the underlying protocol, making it much easier to adapt to whatever the future requires.

Oshawa .NET: Building Mobile Applications

I'm doing a talk at the East of GTA .NET users group tonight in Oshawa. This is the same MSDN User Group tour event sweeping across Canada. I'll be talking about some of the limitations of the Compact Framework and SqlCE. Should be fun - hope to see you there.

Registration Links and slides (afterwards) can be found here.

VS.NET IDE Teaching an old Dog new Tricks

I love hanging out with new VS.NET developers. It's enlightening to hear the troubles they face and their newfound energy to solve them. I have to blog more about these - but in general, there are often things that I do out of habit in the IDE or things that I live with because I'm too lazy (or tired or busy) to find a way around them.

Two new tricks were brought to my attention by an associate of mine.

“How do I find all of the references to a class or usages of a member?” or “When I right click on a class and select Go To Reference, it goes to the first one it finds. How do I go to the next one?”.

The best answer is CTRL+SHIFT+1, which will jump you to the next reference. CTRL+SHIFT+2 will take you back to the previous one.

I couldn't find this shortcut anywhere in the menus. The complete list of shortcuts can be found here.

Which begged another question: can I put shortcuts or favorites in the IDE? Indeed, under View>Other Windows there is a Favorites window, which shows your machine's favorites. This is great to have docked right next to your Dynamic Help (if you have it turned on).

A similar question was “How do I find all of the descendants of a class or implementations of an interface?”. I had always used the online help for that, which doesn't help with your own code. One of the solutions I found (and maybe there is a better one) is to use Find Symbol under the Find and Replace menu (ALT+F12).

 

Illegal Variable Name/Number in Oracle

Once more my work took me into the bowels of Oracle.  Okay, not so much the bowels as just deep enough to be up to my ankles.  The following error took me about an hour of my time (and about 2 minutes of Marc Durand's), which is the reason for this post.

The OracleException coming out of ADO.NET includes in the description: ORA-01036: Illegal variable name/number. The statement being executed is an UPDATE that contains a large number of parameters, so I'm looking for a typo. After flailing around for a while, the useful kernel of knowledge is flung at me. In Oracle, there is a hard limit of 31 characters for parameter names. As it turns out, one of the parameter names is 32 characters long. Shorten the parameter name, rerun the unit test and life is allowed to continue normally. Would it really have been that hard to include a “Parameter name too long” error message?
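For anyone hitting the same wall, here is roughly what the offending pattern looks like with the System.Data.OracleClient provider. The table, column and parameter names are invented for illustration, and connectionString is assumed; the point is the 32-character bind variable name.

using (OracleConnection conn = new OracleConnection(connectionString))
{
   OracleCommand cmd = conn.CreateCommand();
   cmd.CommandText =
      "UPDATE customers " +
      "SET preferred_contact_method = :preferred_contact_method_code_xx " +
      "WHERE customer_id = :customer_id";

   // 32 characters - over the limit - and the only hint you get is ORA-01036.
   cmd.Parameters.Add("preferred_contact_method_code_xx", OracleType.VarChar).Value = "EMAIL";
   cmd.Parameters.Add("customer_id", OracleType.Number).Value = 42;

   conn.Open();
   cmd.ExecuteNonQuery();   // throws OracleException: ORA-01036 Illegal variable name/number
}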

Penguins are sneaking into my house and leaving the doors unlocked.

In the past 3 weeks I have purchased, installed and used 2 Linux systems in my house....accidentally. First, I purchased a Roku High Definition Photo Viewer and MP3/video player for my TV. This is a nice little device that acts as a screen saver for your TV/plasma screen to avoid burn-in....say of the DVD logo that you see from your DVD player when there is no disc inserted. The device sits between the TV and the rest of your home theatre video inputs - daisy-chain style. It monitors the video traffic for no signal or no motion and, after a time duration, kicks in with your family photos. The photos can be retrieved over the network jack from a series of shares on your home network, or via a plugged-in USB wireless adapter. It also has Compact Flash, SD/MMC, SmartMedia and Memory Stick slots. Not to mention, of course, I find out it's running Linux. There was a bit of novelty involved in telnetting into my TV and using VI. That soon wore off when I discovered that the root password was blank, and that the change-password binary was missing from this install of Linux, so I couldn't even change the password. Combine this with the fact that the setup wizard walks you through finding the network shares in your house and storing your userid/password credentials, and this becomes a rather obvious security hole that could have been fixed by the manufacturer fairly easily.

Is this security attitude prevalent in the Linux world? I hope not, because yesterday I discovered another Linux box in my house.

I also recently acquired a NetGear Media Router. It's a regular router with the addition of a USB host port. This allows you to plug in a memory stick or a USB external drive to share as NAS storage. I was a bit surprised to see it show up in my Network Neighbourhood as a UPnP device named “Linux Internet Gateway”. There is also a GPL license in the box, so I think that all points to it running Linux.

The device also has a nice feature: when you turn it on and it detects a network connection, it automatically decides to download and install updates to the flash BIOS. God forbid I turn the device off while it is doing this, unbeknownst to me. Bam, too late. I guess the power light goes from green to yellow when it's doing this. The 1-page card manual included with the device doesn't mention this nice “feature”. I found out the hard way. When you go to the web page to administer the device in this mode, you get to see the file system in its raw form.

Downloading the manual tells me that to reset to the factory BIOS I have to hold down the reset switch with a pin for 90 seconds. Nice. I was able to do that, but I can't seem to get an IP out of the device any more.

I'm still evaluating the security risks of this device. It is slightly more secure with my data (via USB storage) by including a password on the administration of the machine - which is “password”. There is no password on the share it exposes, and I can't see an option to put a password on the share, so everybody on my network (say, when my geek friends come over and plug in) will have access to my financial records and family photos. Nice.

So I have accidentally installed 2 Linux boxes in my house with major security holes. I'm savvy enough to discover this on my own, but I doubt the typical residential consumers of these products would realize the security hole they are introducing into their personal data stores.

With the proliferation of these types of Linux devices into the average home, I'm sure this will draw the attention of script kiddies. Wouldn't it be cool to take over somebody's television set?  Maybe they'd throw some porn up during daytime TV, or steal my personal data - or delete it. Scary.

GDI+ Security Vulnerability

There is a new critical security vulnerability that affects a wide range of software that can't be easily patched through Windows Update. The vulnerability lies inside of GDI+ and can allow a maliciously formed JPEG image file to create a buffer overrun and inject malicious code - even through a web page's graphics...no scripting or anything.

Windows Update will go ahead and update major components but you also need to go to the Office Update site as well as update a bunch of other software you might have on your machine.

In particular for developers, the .NET Framework (pre-latest service pack) and even Visual Studio.NET 2003 and 2002 are affected and need to be separately patched.

The full bulletin with links for all the various patches is available here: http://www.microsoft.com/technet/security/bulletin/MS04-028.mspx

If you go to Windows Update it will also provide you with a GDI+ detection tool that will scan your hard drive looking for affected components. I strongly recommend that everybody jump all over this one quickly.

Access Denied on Web Service Calls

This question has been asked of me enough that I feel it's worth a blog. It's not that this solution is unique, but I'm hoping that Google will do its thing with respect to getting the word out to people who need it.

First of all, the symptom we're addressing is an HTTP status code of 401 (Access denied) that is returned when making a call to a web service method through a proxy class. The solution is quite simple.  Actually, to be more precise, there are two solutions, with the best one depending (of course) on your goals. First, the virtual directory in which the web service is running can be modified to allow anonymous access.  Alternatively, the credentials associated with the current user can be attached to the web service call by including the following statement prior to the method call.

ws.Credentials = System.Net.CredentialCache.DefaultCredentials;

Now that the solution has been addressed, let's take a brief look at why the problem exists in the first place. When a request is made from a browser to a web server, the server may require some form of authentication. It is possible for the initial request to include the authentication information, but if the server doesn't see it, an HTTP 401 status code is returned. Included in the response is a WWW-Authenticate header, which indicates the type of authentication that is expected and the name of the realm being accessed. The browser then reissues the request, providing an Authorization header that includes your current credentials. If that request fails (returns another 401 status), the browser prompts you for a set of credentials. This is a sequence of events that all of you have seen before, even if the underlying rationale is new information.

However, when you make a web method call through the proxy class, the handshaking that goes on with respect to authentication doesn't take place. It's the browser that does this magic and the proxy class doesn't include that code. The result is that the client application receives the access denied status. So you need to configure the virtual directory to not reject unauthenticated requests (by turning anonymous access on) or provide your own set of credentials with the call (by populating the Credentials property on the proxy class).
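Putting that together, a typical call through a generated proxy looks something like the following. MyService and HelloWorld are placeholders for your own proxy class and web method; pick whichever of the two Credentials lines fits your situation.

MyService ws = new MyService();

// Option 1: pass along the Windows credentials of the calling user.
ws.Credentials = System.Net.CredentialCache.DefaultCredentials;

// Option 2: supply an explicit account instead (values are placeholders).
ws.Credentials = new System.Net.NetworkCredential("username", "password", "DOMAIN");

string result = ws.HelloWorld();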

VS Live Orlando: Building "Operations-Friendly" ASP.NET Applications with Instrumentation and Logging

Yes, it's the longest title of all VS Live Orlando presentations! It's a big topic and it deserves a big name.

I'm heading out Monday night to hurricane country to deliver this talk on Tuesday morning. I like this topic because when you get into it, it's like an onion. It doesn't look like something terribly sophisticated, but as you get into it you find there are more and more layers to peel back.