Tamper-Evident Configuration Files in ASP.NET

A couple weeks ago someone sent a message to one of our internal mailing lists. His message was pretty straightforward: how do you prevent modifications to a configuration file for an application [while the user has administrative rights on the machine]?

There were a couple responses, including mine, which was to cryptographically sign the configuration file with an asymmetric key. For a primer on digital signing, take a look here. Asymmetric signing is one possible way of signing a file: the configuration file is signed by an administrator before deploying the application, and all the application needs to validate the signature is the public key associated with the private key used to sign the file. This separates the private key from the application, preventing the configuration from being re-signed maliciously. It’s similar in theory to how code-signing works.
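To make the idea concrete, here is a bare-bones sketch of asymmetric signing using raw RSA (purely illustrative; the actual implementation later in this post uses XML signatures instead):

using System.Security.Cryptography;
using System.Text;

public static class AsymmetricSigningSketch
{
    public static bool Demo()
    {
        byte[] config = Encoding.UTF8.GetBytes("<configuration>...</configuration>");

        using (RSACryptoServiceProvider signingKey = new RSACryptoServiceProvider(2048))
        {
            // The administrator signs with the full key pair before deployment.
            byte[] signature = signingKey.SignData(config, "SHA1");

            // The application only ever holds the public half of the key.
            using (RSACryptoServiceProvider publicOnly = new RSACryptoServiceProvider())
            {
                publicOnly.ImportParameters(signingKey.ExportParameters(false));
                return publicOnly.VerifyData(config, "SHA1", signature);
            }
        }
    }
}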

In the event that validation of the configuration file failed, the application would refuse to load, or would gracefully fail and exit the next time the file was checked (alternatively, the application could hold an exclusive lock on the configuration file so it couldn’t be edited while running).

We are also saved the problem of figuring out the signature format, because there is a well-respected XML signature schema: http://www.w3.org/2000/09/xmldsig#. WCF uses this format to sign messages. For a good code walkthrough see Barry Dorrans’ Beginning ASP.NET Security. More on the code later, though.

Technically, this won’t prevent changes to the file, but it will prevent the application from accepting those changes. It’s kind of like those tamper-evident tags manufacturers stick on the enclosures of their equipment. They don’t prevent someone from opening the thing, but the tampering will get caught if someone checks. You’ll notice I didn’t call them “tamper-resistant” tags.

Given this problem, I went one step further and asked myself: how would I do this with a web application? A well-informed ASP.NET developer might suggest using aspnet_regiis to encrypt the configuration file. Encrypting the configuration does protect against certain things, like being able to read configuration data. However, there are a couple problems with this.

  • If I’m an administrator on that server I can easily decrypt the file by calling aspnet_regiis (see the example just after this list)
  • If I’ve found a way to exploit the site, I can potentially overwrite the contents of the file and make the application behave differently
  • The encryption/decryption keys need to be shared in web farms
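To illustrate that first point: undoing the protection is a single command for anyone who can run the tool on the server (the section name and path below are just examples):

aspnet_regiis -pef "connectionStrings" C:\inetpub\wwwroot\MySite
aspnet_regiis -pdf "connectionStrings" C:\inetpub\wwwroot\MySite

The first command encrypts the connectionStrings section of the web.config at that path; the second decrypts it again.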

Consider our goal. We want to prevent a user with administrative privileges from modifying the configuration. Encryption does not help us in this case. Signing the configuration will help though (as an aside, for more protection you can encrypt the file and then sign it, but that’s out of scope here), because the web application will stop working if a change is made that invalidates the signature.

Of course, there’s one little problem. You can’t stick the signature in the configuration file, because ASP.NET will complain about the foreign XML tag. I had assumed the original application in question used a custom XML file for its configuration, but in reality it doesn’t, so this problem applies there too.

There are three possible solutions to this:

  • Create a custom ConfigurationSection class for the signature
  • Create a custom configuration file and handler, and intercept all calls to web.config
  • Stick the signature of the configuration file into a different file

The first option isn’t a bad idea, but I really didn’t want to muck about with the configuration classes. The second option is, well, pretty much a stupid idea in almost all cases, and I’m not entirely sure you can even intercept all calls to the configuration classes.

I went with option three.

The other file has two important parts: the signature of the web.config file, and a signature for itself. This second signature prevents someone from modifying the signature for the web.config file. Our code becomes a bit more complicated because now we need to validate both signatures.
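Based on the signer code later in this post, the signature file ends up looking roughly like this (signature contents elided):

<Configuration>
    <Files>
        <File>
            <FileName>Web.config</FileName>
            <FileSignature>
                <Signature xmlns="http://www.w3.org/2000/09/xmldsig#"><!-- signature of Web.config --></Signature>
            </FileSignature>
        </File>
    </Files>
    <Signature xmlns="http://www.w3.org/2000/09/xmldsig#"><!-- signature of the signature file itself --></Signature>
</Configuration>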

This raises the question: where is the validation handled? It needs to happen early enough in the request lifecycle, so I decided to stick it into an HTTP Module for the sake of modularity.

Hold it, you say. If the code is in an HTTP Module, then it needs to be added to the web.config. If you are adding it to the web.config, and protecting the web.config with this module, then removing said module from the web.config will prevent the validation from occurring.

Yep.

There are two ways around this:

  • Add the validation call into Global.asax
  • Hard code the addition of the HTTP Module

It’s very rare that I take the easy approach, so I’ve decided to hard code the addition of the HTTP Module, because sticking the code into a module is cleaner.

In older versions of ASP.NET you had to make some pretty ugly hacks to get the module in, because registration needs to happen very early in the startup of the web application. With ASP.NET 4.0, an assembly attribute was added that allows you to run code almost immediately at startup:

[assembly: PreApplicationStartMethod(typeof(Syfuhs.Security.Web.Startup), "Go")]


Within the Startup class there is a public static method called Go(). This method calls Register() on an instance of my HTTP module. The module inherits from an abstract class called DynamicallyLoadedHttpModule, which implements IHttpModule. This class looks like:

public abstract class DynamicallyLoadedHttpModule : IHttpModule
{
    public void Register()
    {
        DynamicHttpApplication.RegisterModule(delegate(HttpApplication app) { return this; });
    }

    public abstract void Init(HttpApplication context);

    public abstract void Dispose();
}
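For completeness, the Startup class referenced by the attribute is tiny. The original isn’t shown in this post, but based on the description above it would look something like this:

public static class Startup
{
    public static void Go()
    {
        new SignedConfigurationHttpModule().Register();
    }
}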


The DynamicHttpApplication class inherits from HttpApplication and allows you to load HTTP modules in code. This code was not written by me. It was originally written by Nikhil Kothari:

using HttpModuleFactory = System.Func<System.Web.HttpApplication, System.Web.IHttpModule>;

public abstract class DynamicHttpApplication : HttpApplication
{
    private static readonly Collection<HttpModuleFactory> Factories = new Collection<HttpModuleFactory>();
    private static object _sync = new object();
    private static bool IsInitialized = false;

    private List<IHttpModule> modules;

    public override void Init()
    {
        base.Init();

        if (Factories.Count == 0)
            return;

        List<IHttpModule> dynamicModules = new List<IHttpModule>();

        lock (_sync)
        {
            if (Factories.Count == 0)
                return;

            foreach (HttpModuleFactory factory in Factories)
            {
                IHttpModule m = factory(this);

                if (m != null)
                {
                    m.Init(this);
                    dynamicModules.Add(m);
                }
            }
        }

        if (dynamicModules.Count != 0)
            modules = dynamicModules;

        IsInitialized = true;
    }

    public static void RegisterModule(HttpModuleFactory factory)
    {
        if (IsInitialized)
            throw new InvalidOperationException(Exceptions.CannotRegisterModuleLate);

        if (factory == null)
            throw new ArgumentNullException("factory");

        Factories.Add(factory);
    }

    public override void Dispose()
    {
        if (modules != null)
            modules.ForEach(m => m.Dispose());

        modules = null;

        base.Dispose();
        GC.SuppressFinalize(this);
    }
}


Finally, to get this all wired up we modify the Global.asax to inherit from DynamicHttpApplication:

public class Global : DynamicHttpApplication
{ … }

Like I said, you could just add the validation code into Global (but where’s the fun in that?)…

So, now that we’ve made it possible to add the HTTP Module, let’s actually look at the module:

public sealed class SignedConfigurationHttpModule : DynamicallyLoadedHttpModule
{
    public override void Init(HttpApplication context)
    {
        if (context == null)
            throw new ArgumentNullException("context");

        context.BeginRequest += new EventHandler(context_BeginRequest);
        context.Error += new EventHandler(context_Error);
    }

    private void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        SignatureValidator validator = new SignatureValidator(app.Request.PhysicalApplicationPath);

        validator.ValidateConfigurationSignatures(CertificateLocator.LocateSigningCertificate());
    }

    private void context_Error(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        foreach (var exception in app.Context.AllErrors)
        {
            if (exception is XmlSignatureValidationFailedException)
            {
                // Maybe do something
                // Or don't...
                break;
            }
        }
    }

    public override void Dispose() { }
}


Nothing special here. Just hooking into the context.BeginRequest event so validation occurs on each request. There would be some performance impact as a result.

The core validation is contained within the SignatureValidator class, and there is a public method that we call to validate the signature file, ValidateConfigurationSignatures(…). This method accepts an X509Certificate2 to compare the signature against.

The specification for the schema we are using for the signature will actually encode the public key of the signing key pair into the signature element; however, we want to go one step further and make sure the file is signed by a particular certificate. This prevents someone from modifying the configuration file and re-signing it with a different private key. Validation of the signature alone is not enough; we need to make sure it’s signed by someone we trust.
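The X509CertificateCompare class isn’t shown in this post; a minimal sketch of what it needs to do (my assumption of its behavior) is compare the two certificates’ raw data:

using System.Linq;
using System.Security.Cryptography.X509Certificates;

internal static class X509CertificateCompare
{
    public static bool Compare(X509Certificate2 a, X509Certificate2 b)
    {
        if (a == null || b == null)
            return false;

        // Comparing the raw certificate bytes catches any difference;
        // comparing thumbprints alone would also be reasonable.
        return a.RawData.SequenceEqual(b.RawData);
    }
}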

The validator first validates the schema of the signature file. Is the XML well formed? Does the signature file conform to a schema we defined (the schema lives in a Constants class)? Following that, it validates the signature of the signature file itself. Has that file been tampered with? Following that, it validates the signature of the web.config file. Has the web.config file been tampered with?

Before it can do all of this though, it needs to check that the signature file exists. The variable passed into the constructor is the physical path of the web application. The validator knows that the signature file should be in the App_Data folder within the root. The file needs to be there because by default ASP.NET will not serve anything from that folder, and we don’t want anyone downloading the file. The path is also hardcoded specifically so changes to the configuration cannot bypass the signature file validation.

Here is the validator:

internal sealed class SignatureValidator
{
    public SignatureValidator(string physicalApplicationPath)
    {
        this.physicalApplicationPath = physicalApplicationPath;
        this.signatureFilePath = Path.Combine(this.physicalApplicationPath, "App_Data\\Signature.xml");
    }

    private string physicalApplicationPath;
    private string signatureFilePath;

    public void ValidateConfigurationSignatures(X509Certificate2 cert)
    {
        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, this.signatureFilePath);

        if (cert == null)
            throw new ArgumentNullException("cert");

        if (cert.HasPrivateKey)
            throw new SecurityException(Exceptions.ValidationCertificateHasPrivateKey);

        if (!File.Exists(signatureFilePath))
            throw new SecurityException(Exceptions.CouldNotLoadSignatureFile);

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        doc.Load(signatureFilePath);

        ValidateXmlSchema(doc);

        CheckForUnsignedConfig(doc);

        if (!X509CertificateCompare.Compare(cert, ValidateSignature(doc)))
            throw new XmlSignatureValidationFailedException(Exceptions.SignatureFileNotSignedByExpectedCertificate);

        List<XmlSignature> signatures = ParseSignatures(doc);

        ValidateSignatures(signatures, cert);
    }

    private void CheckForUnsignedConfig(XmlDocument doc)
    {
        List<string> signedFiles = new List<string>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            signedFiles.Add(fileName.ToUpperInvariant());
        }

        CheckConfigFiles(signedFiles);
    }

    private void CheckConfigFiles(List<string> signedFiles)
    {
        foreach (string file in Directory.EnumerateFiles(this.physicalApplicationPath, "*.config", SearchOption.AllDirectories))
        {
            string path = Path.Combine(this.physicalApplicationPath, file);

            if (!signedFiles.Contains(path.ToUpperInvariant()))
                throw new XmlSignatureValidationFailedException(
                    string.Format(CultureInfo.CurrentCulture, Exceptions.ConfigurationFileWithoutSignature, path));
        }
    }

    private void ValidateXmlSchema(XmlDocument doc)
    {
        using (StringReader fileReader = new StringReader(Constants.SignatureFileSchema))
        using (StringReader signatureReader = new StringReader(Constants.SignatureSchema))
        {
            XmlSchema fileSchema = XmlSchema.Read(fileReader, null);
            XmlSchema signatureSchema = XmlSchema.Read(signatureReader, null);

            doc.Schemas.Add(fileSchema);
            doc.Schemas.Add(signatureSchema);

            doc.Validate(Schemas_ValidationEventHandler);
        }
    }

    void Schemas_ValidationEventHandler(object sender, ValidationEventArgs e)
    {
        throw new XmlSignatureValidationFailedException(Exceptions.InvalidSchema, e.Exception);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument xml)
    {
        if (xml == null)
            throw new ArgumentNullException("xml");

        XmlElement signature = ExtractSignature(xml.DocumentElement);

        return ValidateSignature(xml, signature);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument doc, XmlElement signature)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (signature == null)
            throw new ArgumentNullException("signature");

        X509Certificate2 signingCert = null;

        SignedXml signed = new SignedXml(doc);
        signed.LoadXml(signature);

        foreach (KeyInfoClause clause in signed.KeyInfo)
        {
            KeyInfoX509Data key = clause as KeyInfoX509Data;

            if (key == null || key.Certificates.Count != 1)
                continue;

            signingCert = (X509Certificate2)key.Certificates[0];
        }

        if (signingCert == null)
            throw new CryptographicException(Exceptions.SigningKeyNotFound);

        if (!signed.CheckSignature())
            throw new CryptographicException(Exceptions.SignatureValidationFailed);

        return signingCert;
    }

    private static void ValidateSignatures(List<XmlSignature> signatures, X509Certificate2 cert)
    {
        foreach (XmlSignature signature in signatures)
        {
            X509Certificate2 signingCert = ValidateSignature(signature.Document, signature.Signature);

            if (!X509CertificateCompare.Compare(cert, signingCert))
                throw new XmlSignatureValidationFailedException(
                    string.Format(CultureInfo.CurrentCulture,
                        Exceptions.SignatureForFileNotSignedByExpectedCertificate, signature.FileName));
        }
    }

    private List<XmlSignature> ParseSignatures(XmlDocument doc)
    {
        List<XmlSignature> signatures = new List<XmlSignature>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            Permissions.DemandFilePermission(FileIOPermissionAccess.Read, fileName);

            if (!File.Exists(fileName))
                throw new FileNotFoundException(
                    string.Format(CultureInfo.CurrentCulture, Exceptions.FileNotFound, fileName));

            XmlDocument fileDoc = new XmlDocument() { PreserveWhitespace = true };
            fileDoc.Load(fileName);

            XmlElement sig = file["FileSignature"] as XmlElement;

            signatures.Add(new XmlSignature()
            {
                FileName = fileName,
                Document = fileDoc,
                Signature = ExtractSignature(sig)
            });
        }

        return signatures;
    }

    private static XmlElement ExtractSignature(XmlElement xml)
    {
        XmlNodeList xmlSignatureNode = xml.GetElementsByTagName("Signature");

        if (xmlSignatureNode.Count <= 0)
            throw new CryptographicException(Exceptions.SignatureNotFound);

        return xmlSignatureNode[xmlSignatureNode.Count - 1] as XmlElement;
    }
}


You’ll notice there is a bit of functionality I didn’t mention. Checking that the web.config file hasn’t been modified isn’t enough. We also need to check if any *other* configuration file has been modified. It’s no good if you leave the root configuration file alone, but modify the <authorization> tag within the administration folder to allow anonymous access, right?

So there is code that looks through the site for any files with the “config” extension, and if a file isn’t listed in the signature file, it throws an exception.

There is also a check done at the very beginning of the validation. If you pass an X509Certificate2 with a private key it will throw an exception. This is absolutely by design. You sign the file with the private key. You validate with the public key. If the private key is present during validation that means you are not separating the keys, and all of this has been a huge waste of time because the private key is not protected. Oops.

Finally, it’s important to know how to sign the files. I’m not a fan of generating XML properly, partially because I’m lazy and partially because it’s a pain to do, so mind the StringBuilder:

public sealed class XmlSigner
{
    public XmlSigner(string appPath)
    {
        this.physicalApplicationPath = appPath;
    }

    string physicalApplicationPath;

    public XmlDocument SignFiles(string[] paths, X509Certificate2 cert)
    {
        if (paths == null || paths.Length == 0)
            throw new ArgumentNullException("paths");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentNullException("cert");

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        StringBuilder sb = new StringBuilder();

        sb.Append("<Configuration>");
        sb.Append("<Files>");

        foreach (string p in paths)
        {
            sb.Append("<File>");

            sb.AppendFormat("<FileName>{0}</FileName>", p.Replace(this.physicalApplicationPath, ""));
            sb.AppendFormat("<FileSignature><Signature xmlns=\"http://www.w3.org/2000/09/xmldsig#\">{0}</Signature></FileSignature>", SignFile(p, cert).InnerXml);

            sb.Append("</File>");
        }

        sb.Append("</Files>");
        sb.Append("</Configuration>");

        doc.LoadXml(sb.ToString());

        doc.DocumentElement.AppendChild(doc.ImportNode(SignXmlDocument(doc, cert), true));

        return doc;
    }

    public static XmlElement SignFile(string path, X509Certificate2 cert)
    {
        if (string.IsNullOrWhiteSpace(path))
            throw new ArgumentNullException("path");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);

        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, path);

        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = true;
        doc.Load(path);

        return SignXmlDocument(doc, cert);
    }

    public static XmlElement SignXmlDocument(XmlDocument doc, X509Certificate2 cert)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);

        SignedXml signed = new SignedXml(doc) { SigningKey = cert.PrivateKey };

        Reference reference = new Reference() { Uri = "" };

        XmlDsigC14NTransform transform = new XmlDsigC14NTransform();
        reference.AddTransform(transform);

        XmlDsigEnvelopedSignatureTransform envelope = new XmlDsigEnvelopedSignatureTransform();
        reference.AddTransform(envelope);

        signed.AddReference(reference);

        KeyInfo keyInfo = new KeyInfo();
        keyInfo.AddClause(new KeyInfoX509Data(cert));
        signed.KeyInfo = keyInfo;

        signed.ComputeSignature();

        XmlElement xmlSignature = signed.GetXml();

        return xmlSignature;
    }
}


To write this to a file you can call it like this:

XmlWriter writer = XmlWriter.Create(
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\App_Data\Signature.xml");

XmlSigner signer = new XmlSigner(Request.PhysicalApplicationPath);

XmlDocument xml = signer.SignFiles(new string[]
{
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.config",
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.debug.config",
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.release.config",
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Account\Web.config",
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\test.config"
},
new X509Certificate2(
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\cert.pfx", "1"));

xml.WriteTo(writer);
writer.Flush();


Now, within this code you have to pass in an X509Certificate2 with a private key, otherwise you can’t sign the files.

These processes should occur on different machines. The private key should never be on the server hosting the site. The basic steps for deployment would go something like:

1. Compile web application.
2. Configure site and configuration files on staging server.
3. Run application that signs the configuration and generates the signature file.
4. Drop the signature.xml file into the App_Data folder.
5. Deploy configured and signed application to production.

There is one final note (I think I’ve made that note a few times by now…) and that is the CertificateLocator class. At the moment it just returns an X509Certificate2 from a particular path on my file system. This isn’t necessarily the best approach because it may be possible to overwrite that file. You should store that certificate in a safe place and make a secure call to get it. For instance, a web service call might make sense. If you have a Hardware Security Module (HSM) to store secret bits in, even better.
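As an example of a slightly safer locator, here is a sketch that pulls the certificate from the local machine store by thumbprint instead of from a file on disk (the thumbprint value is hypothetical):

using System.Security;
using System.Security.Cryptography.X509Certificates;

public static class CertificateLocator
{
    public static X509Certificate2 LocateSigningCertificate()
    {
        X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);

        try
        {
            // Look up the public-only certificate by its thumbprint.
            X509Certificate2Collection found = store.Certificates.Find(
                X509FindType.FindByThumbprint, "0123456789abcdef0123456789abcdef01234567", false);

            if (found.Count == 0)
                throw new SecurityException("Signing certificate not found.");

            return found[0];
        }
        finally
        {
            store.Close();
        }
    }
}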

Concluding Bits

What have we accomplished by signing our configuration files? We’ve added a degree of trust that our application hasn’t been compromised. In the event that the configuration has been modified, the application stops working. The modification could come from malicious intent or from careless administrators. This is a great way to prevent one-off changes to configuration files in web farms. It is also a great way to prevent customers from mucking up the configuration file you’ve deployed with your application.

This solution was designed to mitigate quite a few attacks. An attacker cannot modify configuration files. An attacker cannot modify the signature file. An attacker cannot view the signature file. An attacker cannot remove the signature file. An attacker cannot remove the HTTP Module that validates the signature without changing the underlying code. An attacker cannot change the underlying code because it’s been compiled before being deployed.

Is it necessary to use on every deployment? No, probably not.

Does it go a little overboard with regard to complexity? Yeah, a little.

Does it protect against a real problem? Absolutely.

Unfortunately it also requires full trust.

Overall it’s a fairly robust solution and shows how you can mitigate certain types of risks seen in the real world.

And of course, it works with both WebForms and MVC.

You can download the full source here.

Adjusting the Home Realm Discovery page in ADFS to support Email Addresses

Over on the Geneva forums a question was asked:

Does anyone have an example of how to change the HomeRealmDiscovery Page in ADFSv2 to accept an e-mail address in a text field and based upon that (actually the domain suffix) select the correct Claims/Identity Provider?

It's pretty easy to modify the HomeRealmDiscovery page, so I thought I'd give it a go.

Based on the question, two things need to be known: the email address and the home realm URI.  Then we need to translate the email address to a home realm URI and pass it on to ADFS.

This could be done a couple ways.  It could be done by keeping a list of email addresses and their related home realms, or a list of email domains and their related home realms.  For the sake of this being an example, let's do both.

I've created a simple SQL database with three tables:

[Image: the database schema, showing the EmailAddress, Domain, and HomeRealm tables]

Each entry in the EmailAddress and Domain tables has a pointer to the home realm URI (you can find the schema in the zip file below).

Then I created a new ADFS web project and added a new entity model to it:

[Image: the entity model generated from the database]

From there I modified the HomeRealmDiscovery page to do the check:

//------------------------------------------------------------
// Copyright (c) Microsoft Corporation.  All rights reserved.
//------------------------------------------------------------

using System;

using Microsoft.IdentityServer.Web.Configuration;
using Microsoft.IdentityServer.Web.UI;
using AdfsHomeRealm.Data;
using System.Linq;

public partial class HomeRealmDiscovery : Microsoft.IdentityServer.Web.UI.HomeRealmDiscoveryPage
{
    protected void Page_Init(object sender, EventArgs e)
    {
    }

    protected void PassiveSignInButton_Click(object sender, EventArgs e)
    {
        string email = txtEmail.Text;

        if (string.IsNullOrWhiteSpace(email))
        {
            SetError("Please enter an email address");
            return;
        }

        try
        {
            SelectHomeRealm(FindHomeRealmByEmail(email));
        }
        catch (ApplicationException)
        {
            SetError("Cannot find home realm based on email address");
        }
    }

    private string FindHomeRealmByEmail(string email)
    {
        using (AdfsHomeRealmDiscoveryEntities en = new AdfsHomeRealmDiscoveryEntities())
        {
            var emailRealms = from e in en.EmailAddresses where e.EmailAddress1.Equals(email) select e;

            if (emailRealms.Any()) // email address exists
                return emailRealms.First().HomeRealm.HomeRealmUri;

            // email address does not exist
            string domain = ParseDomain(email);

            var domainRealms = from d in en.Domains where d.DomainAddress.Equals(domain) select d;

            if (domainRealms.Any()) // domain exists
                return domainRealms.First().HomeRealm.HomeRealmUri;

            // neither email nor domain exist
            throw new ApplicationException();
        }
    }

    private string ParseDomain(string email)
    {
        if (!email.Contains("@"))
            return email;

        return email.Substring(email.IndexOf("@") + 1);
    }

    private void SetError(string p)
    {
        lblError.Text = p;
    }
}


If you compare this to the original code, there are some changes.  I removed the code that loaded the original home realm drop-down list, and removed the code that chose the home realm based on the drop-down list's selected value.

You can find my code here: http://www.syfuhs.net/AdfsHomeRealm.zip

Talking about Security Article Series

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications.  It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities.

Obviously it's not a series on everything you need to know about security, but hopefully it's a starting point.  My goal is to get people to at least start talking about security in their applications.

This is the series:

SAML Protocol Extension CTP for Windows Identity Foundation

Earlier this morning the Geneva (WIF/ADFS) Product Team announced a CTP for supporting the SAML protocol within WIF.  WIF has supported SAML tokens since its inception; however, it hasn't supported the SAML protocol until now.  According to the team:

This WIF extension allows .NET developers to easily create claims-based SP-Lite compliant Service Provider applications that use SAML 2.0 conformant identity providers such as AD FS 2.0.

This is the first I've seen this CTP, so I decided to jump into the Quick Start solution to get a feel for what's going on.  Here is the solution hierarchy:

[Image: the Quick Start solution hierarchy]

There isn't much to it.  We have the sample identity provider that generates a token for us, a relying party application (service provider), and a utilities project to help with some sample-related duties.

In most cases, we really only need to worry about the Service Provider, as the IdP probably already exists.  I think creating an IdP using this framework is for a different post.

If we consider that WIF mostly works via configuration changes to the web.config, it stands to reason that the SAML extensions will too.  Let's take a look at the web.config file.

There are three new things in the web.config that are different from a default-configured WIF application.

First we see a new configSection declaration:

<section name="microsoft.identityModel.saml" type="Microsoft.IdentityModel.Web.Configuration.MicrosoftIdentityModelSamlSection, Microsoft.IdentityModel.Protocols"/>

This creates a new configuration section called microsoft.identityModel.saml.

Interestingly, this doesn't actually contain much.  Just pointers to metadata:

<microsoft.identityModel.saml metadata="bin\App_Data\serviceprovider.xml">
    <identityProviders>
        <metadata file="bin\App_Data\identityprovider.xml"/>
    </identityProviders>
</microsoft.identityModel.saml>

Now this is a step away from WIF-ness.  These metadata documents are consumed by the extension.  They contain certificates and endpoint references:

<SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://localhost:6010/IdentityProvider/saml/redirect/sso"/>

I can see some extensibility options here.

Finally, an HTTP Module is added to handle the token response:

<add name="Saml2AuthenticationModule" type="Microsoft.IdentityModel.Web.Saml2AuthenticationModule"/>

This module works similarly to the WSFederationAuthenticationModule used by WIF out of the box.

It then uses the SessionAuthenticationModule to handle session creation and management, which is the same module used by WIF.

As you start digging through the rest of the project, there isn't actually anything too surprising to see.  The default.aspx page just grabs a claim from the IClaimsIdentity object and adds a control used by the sample to display SAML data.  There is a sign-out button though, which calls the following line of code:

Saml2AuthenticationModule.Current.SignOut( "~/Login.aspx" );

In the Login.aspx page there is a sign in button that calls a similar line of code:

Saml2AuthenticationModule.Current.SignIn( "~/Default.aspx" );

All in all, this SAML protocol extension seems to make federating with a SAML IdP fairly simple and straightforward.

Making the X509Store more Friendly

When you need to grab a certificate out of a Windows Certificate Store, you can use a class called X509Store.  It's very simple to use:

X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);

store.Open(OpenFlags.ReadOnly);

X509Certificate2Collection myCerts = store.Certificates.Find(X509FindType.FindByThumbprint, "...", false);

store.Close();

However, I don't like this open/close mechanism.  It reminds me too much of Dispose(), except I can't use a using statement.  There are lots of arguments around whether a using statement is a good way of doing things, and I'm in the camp of yes, it is.  When used properly they make code a lot more logical because they create an explicit scope for an object.  I want to do something like this:

using (X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser, OpenFlags.ReadOnly))
{
    X509Certificate2Collection myCerts = store.Certificates.Find(X509FindType.FindByThumbprint, "...", false);
}

The simple solution would be to subclass this, implement IDisposable, and override some of the internals.  The problem though is that someone on the .NET team thought it would be wise to seal the class.  Crap.  Okay, let's create a new class:

public class X509Store2 : IDisposable
{
    private X509Store store;

    public X509Store2(IntPtr storeHandle, OpenFlags flags)
    {
        store = new X509Store(storeHandle);
        store.Open(flags);
    }

    public X509Store2(StoreLocation storeLocation, OpenFlags flags)
    {
        store = new X509Store(storeLocation);
        store.Open(flags);
    }

    public X509Store2(StoreName storeName, OpenFlags flags)
    {
        store = new X509Store(storeName);
        store.Open(flags);
    }

    public X509Store2(string storeName, OpenFlags flags)
    {
        store = new X509Store(storeName);
        store.Open(flags);
    }

    public X509Store2(StoreName storeName, StoreLocation storeLocation, OpenFlags flags)
    {
        store = new X509Store(storeName, storeLocation);
        store.Open(flags);
    }

    public X509Store2(string storeName, StoreLocation storeLocation, OpenFlags flags)
    {
        store = new X509Store(storeName, storeLocation);
        store.Open(flags);
    }

    public X509Certificate2Collection Certificates { get { return store.Certificates; } }

    public StoreLocation Location { get { return store.Location; } }

    public string Name { get { return store.Name; } }

    public IntPtr StoreHandle { get { return store.StoreHandle; } }

    public void Add(X509Certificate2 certificate)
    {
        store.Add(certificate);
    }

    public void AddRange(X509Certificate2Collection certificates)
    {
        store.AddRange(certificates);
    }

    private void Close()
    {
        store.Close();
    }

    private void Open(OpenFlags flags)
    {
        store.Open(flags);
    }

    public void Remove(X509Certificate2 certificate)
    {
        store.Remove(certificate);
    }
    public void RemoveRange(X509Certificate2Collection certificates)
    {
        store.RemoveRange(certificates);
    }

    public void Dispose()
    {
        this.Close();
    }
}

At this point I've copied all the public members of the X509Store class and called their counterparts in the store.  I've also set Open() and Close() to private so they can't be called.  In theory I could just remove them, but I didn't.

Enjoy!

PrairieDevCon Identity and Security Presentations on June 13th and 14th

Sometime last week I got confirmation that my sessions were accepted for PrairieDevCon!  The schedule has not yet been announced, but here are the two sessions I will be presenting:

Changing the Identity Game with the Windows Identity Foundation

Identity is a tricky thing to manage. These days every application requires some knowledge of the user, which inevitably requires users to log in and out of the applications to prove they are who they are as well as requiring the application to keep record of the accounts. There is a fundamental shift in the way we manage these users and their accounts in a Claims Based world. The Windows Identity Foundation builds on top of a Claim based architecture and helps solve some real world problems. This session will be a discussion on Claims as well as how WIF fits into the mix.
Track: Microsoft, Security
Style: Lecture
Speaker: Steve Syfuhs

Building a Security Token Service In the Time It Takes to Brew a Pot of Coffee

One of the fundamental pieces of a Claims Based Authentication model is the Security Token Service. Using the Windows Identity Framework it is deceivingly simple to build one, so in this session we will.
Track: Microsoft, Security
Style: Lecture
Speaker: Steve Syfuhs

What is PrairieDevCon?

The Prairie Developer Conference is the conference event for software professionals in the Canadian prairies!

Featuring more than 30 presenters, over 60 sessions, and including session styles such as hands-on coding, panel discussions, and lectures, Prairie Developer Conference is an exceptional learning opportunity!
Register for our June 2011 event today!

Okay, how much $$$?

Register early and take advantage of Early Bird pricing!
Get 50% off the post-conference price when you bundle it with your conference registration!

                                Conference    Conference + Post-Conf Workshop Bundle
Until February 28               $299.99       $449.99
Until March 31                  $399.99       $549.99
Until April 30                  $499.99       $649.99
May and June                    $599.99       $749.99

Post-Conference Workshop Only: $299.99

For more information check out the registration section.

Vote for my Mix 2011 Session on Identity!

Mix 2011 has opened voting for public session submissions, and I submitted one!  Here is the abstract:

Identity Bests – Managing User Identity in the new Decade

Presenter: Steve Syfuhs

Identity is a tricky thing to manage. These days every website requires some knowledge of the user, which inevitably requires users to log in to identify themselves. Over the next few years we will start seeing a shift toward a centralized identity model, removing the need to manage users and their credentials for each website. This session will cover the fundamentals of Claims Based Authentication using the Windows Identity Foundation and how you can easily manage user identities across multiple websites as well as across organizational boundaries.

If you think this session should be presented please vote: http://live.visitmix.com/OpenCall/Vote/Session/182.

(Please vote even if you don’t! Winking smile)

Claims, MEF, and Parallelization, Oh My

One of the projects I’ve been working on for the last couple months has a requirement to aggregate a set of claims from multiple data sources for an identity and return the collection.  It all seems pretty straightforward as long as you know what the data sources are at development time as well as how you want to transform the data to claims. 

In the real world though, chances are you will need to modify how that transformation happens or modify the data sources in some way.  There are lots of ways this can be accomplished, and I’m going to look at how you can do it with the Managed Extensibility Framework (MEF).

Whenever I think of MEF, this is the best way I can describe how it works:

[Image: inputs going in one side, "magic" in the middle, results coming out the other side]

MEF being the magical part.  In actual fact, it is pretty straightforward how the underlying pieces work, but here is the sales bit:

Application requirements change frequently and software is constantly evolving. As a result, such applications often become monolithic making it difficult to add new functionality. The Managed Extensibility Framework (MEF) is a new library in .NET Framework 4 and Silverlight 4 that addresses this problem by simplifying the design of extensible applications and components.

The architecture of it can be explained on the Codeplex site:

[Image: MEF_Diagram.png, the MEF architecture diagram from the CodePlex site]

The composition container is designed to discover ComposableParts that have Export attributes, and assign these Parts to an object with an Import attribute.

Think of it this way (this is just one possible way it could work).  Let's say I have a bunch of classes that are plugins for some system.  I will attach an Export attribute to each of those classes.  Then within the system itself I have a class that manages these plugins.  That class will contain an object that is a collection of the plugin class type, and it will have an attribute of ImportMany.  Within this manager class is some code that will discover the exported classes and instantiate each of them into a collection.  You can then iterate through the collection and do something with those plugins.  Some code might help.

First, we need something to tie the Import/Export attributes together.  For a plugin-type situation I prefer to use an interface.

namespace PluginInterfaces
{
    public interface IPlugin
    {
        string PlugInName { get; set; }
    }
}

Then we need to create a plugin.

using PluginInterfaces;

namespace SomePlugin
{
    class MyAwesomePlugin : IPlugin
    {
        public string PlugInName
        {
            get
            {
                return "Steve is Awesome!";
            }
            set { }
        }
    };
}

Then we need to actually Export the plugin.  Notice the namespace addition.  The namespace can be found in the System.ComponentModel.Composition assembly in .NET 4.

using PluginInterfaces;
using System.ComponentModel.Composition;

namespace SomePlugin
{
    [Export(typeof(IPlugin))]
    class MyAwesomePlugin : IPlugin
    {
        public string PlugInName
        {
            get
            {
                return "Steve is Awesome!";
            }
            set { }
        }
    };
}

The [Export(typeof(IPlugin))] is a way of tying the Export to the Import.

Importing the plugins requires a little bit more code.  First we need to create a collection to import into:

[ImportMany(typeof(IPlugin))]
List<IPlugin> plugins = new List<IPlugin>();

Notice the typeof(IPlugin).

Next we need to compose the pieces:

using (DirectoryCatalog catalog = new DirectoryCatalog(pathToPluginDlls))
using (CompositionContainer container = new CompositionContainer(catalog))
{
    container.ComposeParts(this);
}

The ComposeParts() method looks at the passed object and finds anything with the Import or ImportMany attributes, then looks into the DirectoryCatalog to find any classes with the Export attribute, and tries to tie everything together based on the typeof(IPlugin).

At this point we should now have a collection of plugins that we could iterate through and do whatever we want with each plugin.
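From there, doing something with each plugin is just a loop over the composed collection:

foreach (IPlugin plugin in plugins)
{
    Console.WriteLine(plugin.PlugInName);
}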

So what does that have to do with Claims?

If you continue down the Claims Model path, eventually you will get tired of having to modify the STS every time you want to change what data is returned from the RST (Request for Security Token).  Imagine a plugin model where all you had to do was create a new plugin for any new data source, or modify the plugins instead of the STS itself.  You could even build a transformation engine similar to Active Directory Federation Services and create a DSL that is executed at runtime.  It would make for simpler deployment, that's for sure.
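To support that, the IPlugin interface from the MEF example would need to grow a claims method.  Something like this (my assumption; the GetClaims signature below just matches the call used in the next section):

using System.Collections.Generic;
using Microsoft.IdentityModel.Claims;

namespace PluginInterfaces
{
    public interface IPlugin
    {
        string PlugInName { get; set; }

        // Each data source plugin returns whatever claims it knows about for the identity.
        IEnumerable<Claim> GetClaims(IClaimsIdentity identity);
    }
}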

And what about Parallelization?

If you have a large collection of plugins, it may be beneficial to run some things in parallel, such as a GetClaims([identity]) type call.

Using the Parallel libraries within .NET 4, you could very easily do something like:

Parallel.ForEach<IPlugin>(plugins, (plugin) =>
{
    plugin.GetClaims(identity);
});

The basic idea for this method is to take a collection, and do an action on each item in the collection, potentially in parallel.   The ForEach method is described as:

ForEach<TSource>(IEnumerable<TSource> source, Action<TSource> action)

When all is said and done, you now have a basic parallelized plugin model for your Security Token Service.  Pretty cool, I think.

Authentication in an Active Claims Model

When working with Claims Based Authentication a lot of things are similar between the two different models, Active and Passive.  However, there are a few cases where things differ… a lot.  The biggest of course being how a Request for Security Token (RST) is authenticated.  In a passive model the user is given a web page where they can essentially have free rein over how credentials are handled.  Once the credentials have been received and authenticated by the web server, the server generates an identity and passes it off to SecurityTokenService.Issue(…), which does its thing by gathering claims, packaging them up into a token, and POSTing the token back to the Relying Party.

Basically we handle authentication the way any other ASP.NET application would: by using the Membership provider, funnelling all anonymous users to the login page, and then redirecting back to the STS.  To hand off to the STS, we can just call:

FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
    HttpContext.Current.Request,
    HttpContext.Current.User,
    MyTokenServiceConfiguration.Current.CreateSecurityTokenService(),
    HttpContext.Current.Response);

However, it’s a little different with the active model.

Web services manage identity via tokens, but they differ from passive models because everything is passed via tokens, including credentials.  The client takes the credentials and packages them into a SecurityToken object, which is serialized and passed to the STS.  The STS deserializes the token and passes it off to a SecurityTokenHandler.  This security token handler validates the credentials, generates an identity, and pushes it up the call stack to the STS.

Much like with ASP.NET, there is a built-in token handler that validates username/password combinations against the Membership provider, but you are limited to the basic functionality of the provider.  90% of the time this is probably just fine.  Other times you may need to create your own SecurityTokenHandler.  It's actually not that hard to do.

First you need to know what sort of token is being passed across the wire.  The big three are:

  • UserNameSecurityToken – Has a username and password pair
  • WindowsSecurityToken – Used for Windows authentication using NTLM or Kerberos
  • X509SecurityToken – Uses x509 certificate for authentication

Each is pretty self-explanatory.

Some others out of the box are:

[Image: Reflector showing the other SecurityToken types that ship out of the box]

Reflector is an awesome tool.  Just sayin’.

Now that we know what type of token we are expecting we can build the token handler.  For the sake of simplicity let’s create one for the UserNameSecurityToken.

To do that we create a new class derived from Microsoft.IdentityModel.Tokens.UserNameSecurityTokenHandler.  We could start at SecurityTokenHandler, but it’s an abstract class and requires a lot to get it working.  Suffice to say it’s mostly boilerplate code.

We now need to override a method and property: ValidateToken(SecurityToken token) and TokenType.

TokenType is used later on to tell what kind of token the handler can actually validate.  More on that in a minute.

Overriding ValidateToken is fairly trivial*.  This is where we actually handle the authentication.  However, it returns a ClaimsIdentityCollection instead of bool, so if the credentials are invalid we need to throw an exception.  I would recommend the SecurityTokenValidationException.  Once the authentication is done we get the identity for the credentials and bundle them up into a ClaimsIdentityCollection.  We can do that by creating an IClaimsIdentity and passing it into the constructor of a ClaimsIdentityCollection.

public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
{
    UserNameSecurityToken userToken = token as UserNameSecurityToken;

    if (userToken == null)
        throw new ArgumentNullException("token");

    string username = userToken.UserName;
    string pass = userToken.Password;

    if (!Membership.ValidateUser(username, pass))
        throw new SecurityTokenValidationException("Username or password is wrong.");

    IClaimsIdentity ident = new ClaimsIdentity();
    ident.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, username));

    return new ClaimsIdentityCollection(new IClaimsIdentity[] { ident });
}

Next we need to set the TokenType:

public override Type TokenType
{
    get
    {
        return typeof(UserNameSecurityToken);
    }
}

This property is used as a way to tell its calling parent that it can validate/authenticate any tokens of the type it returns.  The web service that acts as the STS loads a collection of SecurityTokenHandlers as part of its initialization, and when it receives a token it iterates through the collection looking for one that can handle it.

To add the handler to the collection you add it via configuration, or, if you are crazy and doing a lot of low-level work, you can add it to the SecurityTokenServiceConfiguration in the HostFactory for the service:

securityTokenServiceConfiguration.SecurityTokenHandlers.Add(new MyAwesomeUserNameSecurityTokenHandler())

To add it via configuration you first need to remove any other handlers that can validate the same type of token:

<microsoft.identityModel>
  <service>
    <securityTokenHandlers>
      <remove type="Microsoft.IdentityModel.Tokens.WindowsUserNameSecurityTokenHandler,
          Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <remove type="Microsoft.IdentityModel.Tokens.MembershipUserNameSecurityTokenHandler,
          Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add type="Syfuhs.IdentityModel.Tokens.MyAwesomeUserNameSecurityTokenHandler, Syfuhs.IdentityModel" />
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>

That’s pretty much all there is to it.  Here is the class for the sake of completeness:

using System;
using System.IdentityModel.Tokens;
using System.Web.Security;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSIdentity;
using Microsoft.IdentityModel.Tokens;

namespace Syfuhs.IdentityModel.Tokens
{
    public class MyAwesomeUserNameSecurityTokenHandler : UserNameSecurityTokenHandler
    {
        public override bool CanValidateToken { get { return true; } }

        public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
        {
            UserNameSecurityToken userToken = token as UserNameSecurityToken;

            if (userToken == null)
                throw new ArgumentNullException("token");

            string username = userToken.UserName;
            string pass = userToken.Password;

            if (!Membership.ValidateUser(username, pass))
                throw new SecurityTokenValidationException("Username or password is wrong.");

            IClaimsIdentity ident = new ClaimsIdentity();
            ident.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, username));

            return new ClaimsIdentityCollection(new IClaimsIdentity[] { ident });
        }
    }
}

* Trivial in the development sense, not trivial in the security sense.

Preventing Frame Exploits in a Passive Claims Model

At a presentation a few weeks ago someone asked me about capturing session details during authentication at an STS by way of frames and JavaScript.  To paraphrase the question: “What prevents a malicious developer from sticking an RP within an iframe, cause a redirect to an STS, get some user to log in, and then capture the details through JavaScript from the parent page?”  There are a couple of ways this problem can be solved.  It’s a defense-in-depth problem where on their own, each piece won’t close every attack vector, but when used together you end up with a pretty solid solution.

  • First, a lot of new browsers will actually prevent cross-frame JavaScript calls when SSL is involved.  Depending on the browser, the JavaScript will throw the equivalent of an Access Denied exception.  This is not the case with all browser versions though.  Older browsers may not do this.
  • Second, some browsers will not allow you to host an SSL page in a frame if the parent page is not using SSL.  The easy fix for the malicious developer is to simply use SSL for the parent site, but that could be problematic, as the CAs theoretically verify the sites requesting certificates.
  • Third, you could write some JavaScript for the STS to bust out of the frame.  It would look something like this:

if (top != self)
{
    try
    {
        top.location.replace(self.location.href);
    }
    catch (e)
    {
    }
}

The problem with this is that it wouldn’t work if the browser has JavaScript disabled.

  • Fourth, there is a new HTTP header that Microsoft introduced in IE 8 that tells the browser that if the requested page is hosted in a frame to simply stop processing the request.  Safari and Chrome support it natively, and Firefox supports it with the NoScript add-on.  The header is called X-Frame-Options and it can have two values: “DENY”, which prevents all framed requests, and “SAMEORIGIN”, which allows the page to be rendered only if the parent page is from the same origin, e.g. the parent is on somesite.com and the framed page is also on somesite.com.

There are a couple of ways to add this header to your page.  First you can add it via ASP.NET:

Context.Response.AddHeader("x-frame-options", "DENY");

Or you could add it to all pages via IIS.  To do this open the IIS Manager and select the site in question.  Then select the Feature “HTTP Response Headers”:

[Image: IIS Manager with the HTTP Response Headers feature selected]

Select Add… and then set the name to x-frame-options and the value to DENY:

[Image: the Add Custom HTTP Response Header dialog with the name x-frame-options and the value DENY]
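If you prefer configuration over clicking, those IIS Manager steps end up writing the equivalent of this into the web.config (IIS 7+ customHeaders section):

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="X-Frame-Options" value="DENY" />
    </customHeaders>
  </httpProtocol>
</system.webServer>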

By keeping these options in mind, you can do a lot to prevent exploits that use frames.