Tamper-Evident Configuration Files in ASP.NET

A couple of weeks ago someone sent a message to one of our internal mailing lists. His question was pretty straightforward: how do you prevent modifications to a configuration file for an application [while the user has administrative rights on the machine]?

There were a couple of responses, including mine, which was to cryptographically sign the configuration file with an asymmetric key. For a primer on digital signing, take a look here. Asymmetric signing is one possible way of signing a file. With this approach an administrator signs the configuration file before deploying the application, and all the application needs to validate the signature is the public key associated with the private key used to sign the file. This separates the private key from the application, preventing the configuration from being re-signed maliciously. It’s similar in theory to how code-signing works.
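
As a rough illustration of that separation (this is just a sketch of the sign-with-private-key, verify-with-public-key idea, not the XML signature format used later in this post):

using System;
using System.Security.Cryptography;
using System.Text;

class AsymmetricSigningPrimer
{
    static void Main()
    {
        byte[] configBytes = Encoding.UTF8.GetBytes("<configuration>...</configuration>");

        using (RSACryptoServiceProvider keyPair = new RSACryptoServiceProvider(2048))
        {
            // The administrator signs with the private key before deployment.
            byte[] signature = keyPair.SignData(configBytes, "SHA1");

            // The application only ever gets the public half of the key...
            using (RSACryptoServiceProvider publicOnly = new RSACryptoServiceProvider())
            {
                publicOnly.ImportParameters(keyPair.ExportParameters(false));

                // ...so it can verify the signature, but it can never re-sign a modified file.
                Console.WriteLine(publicOnly.VerifyData(configBytes, "SHA1", signature));
            }
        }
    }
}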

If validation of the configuration file failed, the application would not load, or would gracefully fail and exit the next time the file was checked (or the application could hold an exclusive lock on the configuration file so it couldn’t be edited while running).

We are also saved the problem of figuring out the signature format because there is a well-respected XML signature schema: http://www.w3.org/2000/09/xmldsig#. WCF uses this format to sign messages. For a good code walkthrough see Barry Dorrans’ Beginning ASP.NET Security. More on the code later in this post.

Technically, this won’t prevent changes to the file, but it will prevent the application from accepting those changes. It’s kind of like those tamper-evident tags manufacturers stick on the enclosures of their equipment. The tag doesn’t prevent someone from opening the thing, but they will get caught if someone checks it. You’ll notice I didn’t call them “tamper-resistant” tags.

Given this problem, I went one step further and asked myself: how would I do this with a web application? A well-informed ASP.NET developer might suggest using aspnet_regiis to encrypt the configuration file. Encrypting the configuration does protect against certain things, like being able to read configuration data. However, there are a couple of problems with this.

  • If I’m an administrator on that server I can easily decrypt the file by calling aspnet_regiis
  • If I’ve found a way to exploit the site, I can potentially overwrite the contents of the file and make the application behave differently
  • The encryption/decryption keys need to be shared in web farms
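
To make the first point concrete, here is a sketch of how little it takes for someone with access to the box to undo the encryption programmatically (the "/MyApp" virtual path is a placeholder; aspnet_regiis -pd does the same thing from the command line):

using System.Configuration;
using System.Web.Configuration;

public static class ConfigDecryptionSketch
{
    // A sketch of why section encryption doesn't stop a local administrator: anyone who
    // can run code on the box (or call aspnet_regiis -pd) can decrypt the section again.
    public static void DecryptConnectionStrings()
    {
        // "/MyApp" is a hypothetical virtual path for the site.
        Configuration config = WebConfigurationManager.OpenWebConfiguration("/MyApp");
        ConfigurationSection connectionStrings = config.GetSection("connectionStrings");

        if (connectionStrings.SectionInformation.IsProtected)
        {
            connectionStrings.SectionInformation.UnprotectSection();
            config.Save();
        }
    }
}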

Consider our goal. We want to prevent a user with administrative privileges from modifying the configuration. Encryption does not help us in this case. Signing the configuration will help though (as an aside, for more protection you can encrypt the file and then sign it, but that’s out of scope here), because the web application will stop working if a change is made that invalidates the signature.

Of course, there’s one little problem. You can’t stick the signature in the configuration file, because ASP.NET will complain about the foreign XML tag. The original application in question was assumed to have a custom XML file for its configuration, but in reality it doesn’t, so this problem applies there too.

There are three possible solutions to this:

  • Create a custom ConfigurationSection class for the signature
  • Create a custom configuration file and handler, and intercept all calls to web.config
  • Stick the signature of the configuration file into a different file

The first option isn’t a bad idea, but I really didn’t want to muck about with the configuration classes. The second option is, well, pretty much a stupid idea in almost all cases, and I’m not entirely sure you can even intercept all calls to the configuration classes.

I went with option three.

The other file has two important parts: the signature of the web.config file, and a signature for itself. This second signature prevents someone from modifying the signature for the web.config file. Our code becomes a bit more complicated because now we need to validate both signatures.
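
To make that concrete, the signature file ends up looking roughly like this (the element names come from the signing code later in this post; the actual Signature contents are generated by SignedXml, so this is just a sketch of the shape):

<Configuration>
  <Files>
    <File>
      <FileName>Web.config</FileName>
      <FileSignature>
        <Signature xmlns="http://www.w3.org/2000/09/xmldsig#"><!-- signature over Web.config --></Signature>
      </FileSignature>
    </File>
    <!-- one <File> element per signed configuration file -->
  </Files>
  <Signature xmlns="http://www.w3.org/2000/09/xmldsig#"><!-- enveloped signature over this whole document --></Signature>
</Configuration>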

This raises the question: where is the validation handled? It needs to happen early enough in the request lifecycle, so I decided to stick it into an HTTP module, for the sake of modularity.

Hold it, you say. If the code is in an HTTP module, then it needs to be added to the web.config. If you are adding it to the web.config, and protecting the web.config with this module, then removing said module from the web.config will prevent the validation from occurring.

Yep.

There are two ways around this:

  • Add the validation call into Global.asax
  • Hard code the addition of the HTTP Module

It’s very rare that I take the easy approach, so I’ve decided to hard code the addition of the HTTP Module, because sticking the code into a module is cleaner.

In older versions of ASP.NET you had to make some pretty ugly hacks to get the module in, because it needs to happen very early in the startup of the web application. With ASP.NET 4.0, an assembly attribute was added that lets you call code very early in application startup:

[assembly: PreApplicationStartMethod(typeof(Syfuhs.Security.Web.Startup), "Go")]

 

Within the Startup class there is a public static method called Go(). This method calls Register() on an instance of my HTTP module (a sketch of Startup follows the next code block). This module inherits from an abstract class called DynamicallyLoadedHttpModule, which implements IHttpModule. This class looks like:

public abstract class DynamicallyLoadedHttpModule : IHttpModule
{
    public void Register()
    {
        DynamicHttpApplication.RegisterModule(delegate(HttpApplication app) { return this; });
    }

    public abstract void Init(HttpApplication context);

    public abstract void Dispose();
}
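
The Startup class referenced by the attribute isn’t shown above; based on the description, a minimal sketch of it would look something like this (the module name comes from the module shown later in this post):

namespace Syfuhs.Security.Web
{
    public static class Startup
    {
        // Called by ASP.NET 4.0 via PreApplicationStartMethod, before any HTTP modules are initialized.
        public static void Go()
        {
            new SignedConfigurationHttpModule().Register();
        }
    }
}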

 

The DynamicHttpApplication class inherits from HttpApplication and allows you to load HTTP modules in code. This code was not written by me. It was originally written by Nikhil Kothari:

using HttpModuleFactory = System.Func<System.Web.HttpApplication, System.Web.IHttpModule>;

public abstract class DynamicHttpApplication : HttpApplication
{
    private static readonly Collection<HttpModuleFactory> Factories = new Collection<HttpModuleFactory>();
    private static object _sync = new object();
    private static bool IsInitialized = false;

    private List<IHttpModule> modules;

    public override void Init()
    {
        base.Init();

        if (Factories.Count == 0)
            return;

        List<IHttpModule> dynamicModules = new List<IHttpModule>();

        lock (_sync)
        {
            if (Factories.Count == 0)
                return;

            foreach (HttpModuleFactory factory in Factories)
            {
                IHttpModule m = factory(this);

                if (m != null)
                {
                    m.Init(this);
                    dynamicModules.Add(m);
                }
            }
        }

        if (dynamicModules.Count != 0)
            modules = dynamicModules;

        IsInitialized = true;
    }

    public static void RegisterModule(HttpModuleFactory factory)
    {
        if (IsInitialized)
            throw new InvalidOperationException(Exceptions.CannotRegisterModuleLate);

        if (factory == null)
            throw new ArgumentNullException("factory");

        Factories.Add(factory);
    }

    public override void Dispose()
    {
        if (modules != null)
            modules.ForEach(m => m.Dispose());

        modules = null;

        base.Dispose();

        GC.SuppressFinalize(this);
    }
}

 

Finally, to get this all wired up we modify the Global.asax to inherit from DynamicHttpApplication:

public class Global : DynamicHttpApplication
{ … }

Like I said, you could just add the validation code into Global (but where’s the fun in that?)…

So, now that we’ve made it possible to add the HTTP module, let’s actually look at the module:

public sealed class SignedConfigurationHttpModule : DynamicallyLoadedHttpModule
{
    public override void Init(HttpApplication context)
    {
        if (context == null)
            throw new ArgumentNullException("context");

        context.BeginRequest += new EventHandler(context_BeginRequest);
        context.Error += new EventHandler(context_Error);
    }

    private void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        SignatureValidator validator = new SignatureValidator(app.Request.PhysicalApplicationPath);

        validator.ValidateConfigurationSignatures(CertificateLocator.LocateSigningCertificate());
    }

    private void context_Error(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        foreach (var exception in app.Context.AllErrors)
        {
            if (exception is XmlSignatureValidationFailedException)
            {
                // Maybe do something
                // Or don't...
                break;
            }
        }
    }

    public override void Dispose() { }
}

 

Nothing special here. Just hooking into the context.BeginRequest event so validation occurs on each request. There would be some performance impact as a result.

The core validation is contained within the SignatureValidator class, and there is a public method that we call to validate the signature file, ValidateConfigurationSignatures(…). This method accepts an X509Certificate2 to compare the signature against.

The specification for the schema we are using for the signature encodes the public key of the signing key pair into the signature element; however, we want to go one step further and make sure it’s signed by a particular certificate. This will prevent someone from modifying the configuration file and re-signing it with a different private key. Validation of the signature alone is not enough; we need to make sure it’s signed by someone we trust.
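
The X509CertificateCompare class used by the validator below isn’t included in this post; a minimal sketch of it only needs to compare the certificates themselves, for example by thumbprint, rather than by subject name:

using System;
using System.Security.Cryptography.X509Certificates;

internal static class X509CertificateCompare
{
    // A sketch: treat two certificates as the same only if their thumbprints
    // (the hash of the encoded certificate) match.
    public static bool Compare(X509Certificate2 left, X509Certificate2 right)
    {
        if (left == null || right == null)
            return false;

        return string.Equals(left.Thumbprint, right.Thumbprint, StringComparison.OrdinalIgnoreCase);
    }
}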

The validator first validates the schema of the signature file. Is the XML well-formed? Does the signature file conform to a schema we defined (the schema is defined in a Constants class)? Following that it validates the signature of the file itself. Has the file been tampered with? Following that it validates the signature of the web.config file. Has the web.config file been tampered with?

Before it can do all of this though, it needs to check to see if the signature file exists. The variable passed into the constructor is the physical path of the web application. The validator knows that the signature file should be in the App_Data folder within the root. This file needs to be here because the folder by default will not let you access anything in it, and we don’t want anyone downloading the file. The path is also hardcoded specifically so changes to the configuration cannot bypass the signature file validation.

Here is the validator:

internal sealed class SignatureValidator
{
    public SignatureValidator(string physicalApplicationPath)
    {
        this.physicalApplicationPath = physicalApplicationPath;
        this.signatureFilePath = Path.Combine(this.physicalApplicationPath, "App_Data\\Signature.xml");
    }

    private string physicalApplicationPath;
    private string signatureFilePath;

    public void ValidateConfigurationSignatures(X509Certificate2 cert)
    {
        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, this.signatureFilePath);

        if (cert == null)
            throw new ArgumentNullException("cert");

        if (cert.HasPrivateKey)
            throw new SecurityException(Exceptions.ValidationCertificateHasPrivateKey);

        if (!File.Exists(signatureFilePath))
            throw new SecurityException(Exceptions.CouldNotLoadSignatureFile);

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        doc.Load(signatureFilePath);

        ValidateXmlSchema(doc);

        CheckForUnsignedConfig(doc);

        if (!X509CertificateCompare.Compare(cert, ValidateSignature(doc)))
            throw new XmlSignatureValidationFailedException(Exceptions.SignatureFileNotSignedByExpectedCertificate);

        List<XmlSignature> signatures = ParseSignatures(doc);

        ValidateSignatures(signatures, cert);
    }

    private void CheckForUnsignedConfig(XmlDocument doc)
    {
        List<string> signedFiles = new List<string>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            signedFiles.Add(fileName.ToUpperInvariant());
        }

        CheckConfigFiles(signedFiles);
    }

    private void CheckConfigFiles(List<string> signedFiles)
    {
        foreach (string file in Directory.EnumerateFiles(this.physicalApplicationPath, "*.config", SearchOption.AllDirectories))
        {
            string path = Path.Combine(this.physicalApplicationPath, file);

            if (!signedFiles.Contains(path.ToUpperInvariant()))
                throw new XmlSignatureValidationFailedException(
                    string.Format(CultureInfo.CurrentCulture, Exceptions.ConfigurationFileWithoutSignature, path));
        }
    }

    private void ValidateXmlSchema(XmlDocument doc)
    {
        using (StringReader fileReader = new StringReader(Constants.SignatureFileSchema))
        using (StringReader signatureReader = new StringReader(Constants.SignatureSchema))
        {
            XmlSchema fileSchema = XmlSchema.Read(fileReader, null);
            XmlSchema signatureSchema = XmlSchema.Read(signatureReader, null);

            doc.Schemas.Add(fileSchema);
            doc.Schemas.Add(signatureSchema);

            doc.Validate(Schemas_ValidationEventHandler);
        }
    }

    void Schemas_ValidationEventHandler(object sender, ValidationEventArgs e)
    {
        throw new XmlSignatureValidationFailedException(Exceptions.InvalidSchema, e.Exception);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument xml)
    {
        if (xml == null)
            throw new ArgumentNullException("xml");

        XmlElement signature = ExtractSignature(xml.DocumentElement);

        return ValidateSignature(xml, signature);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument doc, XmlElement signature)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (signature == null)
            throw new ArgumentNullException("signature");

        X509Certificate2 signingCert = null;

        SignedXml signed = new SignedXml(doc);
        signed.LoadXml(signature);

        foreach (KeyInfoClause clause in signed.KeyInfo)
        {
            KeyInfoX509Data key = clause as KeyInfoX509Data;

            if (key == null || key.Certificates.Count != 1)
                continue;

            signingCert = (X509Certificate2)key.Certificates[0];
        }

        if (signingCert == null)
            throw new CryptographicException(Exceptions.SigningKeyNotFound);

        if (!signed.CheckSignature())
            throw new CryptographicException(Exceptions.SignatureValidationFailed);

        return signingCert;
    }

    private static void ValidateSignatures(List<XmlSignature> signatures, X509Certificate2 cert)
    {
        foreach (XmlSignature signature in signatures)
        {
            X509Certificate2 signingCert = ValidateSignature(signature.Document, signature.Signature);

            if (!X509CertificateCompare.Compare(cert, signingCert))
                throw new XmlSignatureValidationFailedException(
                    string.Format(CultureInfo.CurrentCulture, Exceptions.SignatureForFileNotSignedByExpectedCertificate, signature.FileName));
        }
    }

    private List<XmlSignature> ParseSignatures(XmlDocument doc)
    {
        List<XmlSignature> signatures = new List<XmlSignature>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            Permissions.DemandFilePermission(FileIOPermissionAccess.Read, fileName);

            if (!File.Exists(fileName))
                throw new FileNotFoundException(string.Format(CultureInfo.CurrentCulture, Exceptions.FileNotFound, fileName));

            XmlDocument fileDoc = new XmlDocument() { PreserveWhitespace = true };
            fileDoc.Load(fileName);

            XmlElement sig = file["FileSignature"] as XmlElement;

            signatures.Add(new XmlSignature()
            {
                FileName = fileName,
                Document = fileDoc,
                Signature = ExtractSignature(sig)
            });
        }

        return signatures;
    }

    private static XmlElement ExtractSignature(XmlElement xml)
    {
        XmlNodeList xmlSignatureNode = xml.GetElementsByTagName("Signature");

        if (xmlSignatureNode.Count <= 0)
            throw new CryptographicException(Exceptions.SignatureNotFound);

        return xmlSignatureNode[xmlSignatureNode.Count - 1] as XmlElement;
    }
}

 

You’ll notice there is a bit of functionality I didn’t mention. Checking that the web.config file hasn’t been modified isn’t enough. We also need to check if any *other* configuration file has been modified. It’s no good if you leave the root configuration file alone, but modify the <authorization> tag within the administration folder to allow anonymous access, right?

So there is code that looks through the site for any files with the “config” extension, and if a file isn’t in the signature file, it throws an exception.

There is also a check done at the very beginning of the validation. If you pass an X509Certificate2 with a private key it will throw an exception. This is absolutely by design. You sign the file with the private key. You validate with the public key. If the private key is present during validation that means you are not separating the keys, and all of this has been a huge waste of time because the private key is not protected. Oops.

Finally, it’s important to know how to sign the files. I’m not a fan of generating XML properly, partially because I’m lazy and partially because it’s a pain to do, so mind the StringBuilder:

public sealed class XmlSigner
{
    public XmlSigner(string appPath)
    {
        this.physicalApplicationPath = appPath;
    }

    string physicalApplicationPath;

    public XmlDocument SignFiles(string[] paths, X509Certificate2 cert)
    {
        if (paths == null || paths.Length == 0)
            throw new ArgumentNullException("paths");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentNullException("cert");

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        StringBuilder sb = new StringBuilder();

        sb.Append("<Configuration>");
        sb.Append("<Files>");

        foreach (string p in paths)
        {
            sb.Append("<File>");

            sb.AppendFormat("<FileName>{0}</FileName>", 
p.Replace(this.physicalApplicationPath, "")); sb.AppendFormat("<FileSignature><Signature xmlns=\"http://www.w3.org/2000/09/xmldsig#\">{0}</Signature></FileSignature>",
SignFile(p, cert).InnerXml); sb.Append("</File>"); } sb.Append("</Files>"); sb.Append("</Configuration>"); doc.LoadXml(sb.ToString()); doc.DocumentElement.AppendChild(doc.ImportNode(SignXmlDocument(doc, cert), true)); return doc; } public static XmlElement SignFile(string path, X509Certificate2 cert) { if (string.IsNullOrWhiteSpace(path)) throw new ArgumentNullException("path"); if (cert == null || !cert.HasPrivateKey) throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey); Permissions.DemandFilePermission(FileIOPermissionAccess.Read, path); XmlDocument doc = new XmlDocument(); doc.PreserveWhitespace = true; doc.Load(path); return SignXmlDocument(doc, cert); } public static XmlElement SignXmlDocument(XmlDocument doc, X509Certificate2 cert) { if (doc == null) throw new ArgumentNullException("doc"); if (cert == null || !cert.HasPrivateKey) throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey); SignedXml signed = new SignedXml(doc) { SigningKey = cert.PrivateKey }; Reference reference = new Reference() { Uri = "" }; XmlDsigC14NTransform transform = new XmlDsigC14NTransform(); reference.AddTransform(transform); XmlDsigEnvelopedSignatureTransform envelope = new XmlDsigEnvelopedSignatureTransform(); reference.AddTransform(envelope); signed.AddReference(reference); KeyInfo keyInfo = new KeyInfo(); keyInfo.AddClause(new KeyInfoX509Data(cert)); signed.KeyInfo = keyInfo; signed.ComputeSignature(); XmlElement xmlSignature = signed.GetXml(); return xmlSignature; } }

 

To write this to a file you can call it like this:

XmlWriter writer = XmlWriter.Create(
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\App_Data\Signature.xml");

XmlSigner signer = new XmlSigner(Request.PhysicalApplicationPath);

XmlDocument xml = signer.SignFiles(new string[]
{
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.config",
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.debug.config",
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.release.config",
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Account\Web.config",
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\test.config"
},
new X509Certificate2(
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\cert.pfx", "1"));

xml.WriteTo(writer);
writer.Flush();

 

Now within this code, you have to pass in an X509Certificate2 with a private key, otherwise you can’t sign the files.

These processes should occur on different machines. The private key should never be on the server hosting the site. The basic steps for deployment would go something like:

1. Compile web application.
2. Configure site and configuration files on staging server.
3. Run application that signs the configuration and generates the signature file.
4. Drop the signature.xml file into the App_Data folder.
5. Deploy configured and signed application to production.

There is one final note (I think I’ve made that note a few times by now…) and that is the CertificateLocator class. At the moment it just returns an X509Certificate2 from a particular path on my file system. This isn’t necessarily the best approach because it may be possible to overwrite that file. You should store that certificate in a safe place, and make a secure call to get it. For instance a web service call might make sense. If you have a Hardware Security Module (HSM) to store secret bits in, even better.
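
As a sketch of a slightly safer CertificateLocator (this is an assumption about its shape, not the code from the download), you could pull the public certificate from the local machine certificate store by thumbprint instead of from a file on disk:

using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

public static class CertificateLocator
{
    // Hypothetical thumbprint; in practice this would come from a protected setting,
    // a web service call, or an HSM-backed lookup.
    private const string Thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";

    public static X509Certificate2 LocateSigningCertificate()
    {
        X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);

        try
        {
            store.Open(OpenFlags.ReadOnly);

            X509Certificate2Collection matches = store.Certificates.Find(
                X509FindType.FindByThumbprint, Thumbprint, false);

            if (matches.Count == 0)
                throw new CryptographicException("Validation certificate not found.");

            return matches[0];
        }
        finally
        {
            store.Close();
        }
    }
}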

Concluding Bits

What have we accomplished by signing our configuration files? We add a degree of trust that our application hasn’t been compromised. In the event that the configuration has been modified, the application stops working. This could be from malicious intent, or careless administrators. This is a great way to prevent one-off changes to configuration files in web farms. It is also a great way to prevent customers from mucking up the configuration file you’ve deployed with your application.

This solution was designed in a way that mitigates quite a few attacks. An attacker cannot modify configuration files. An attacker cannot modify the signature file. An attacker cannot view the signature file. An attacker cannot remove the signature file. An attacker cannot remove the HTTP module that validates the signature without changing the underlying code. An attacker cannot change the underlying code because it’s been compiled before being deployed.

Is it necessary to use on every deployment? No, probably not.

Does it go a little overboard with regard to complexity? Yeah, a little.

Does it protect against a real problem? Absolutely.

Unfortunately it also requires full trust.

Overall it’s a fairly robust solution and shows how you can mitigate certain types of risks seen in the real world.

And of course, it works with both WebForms and MVC.

You can download the full source here.

The Importance of Elevating Privilege

The biggest detractor to Single Sign On is the same thing that makes it so appealing – you only need to prove your identity once. This scares the hell out of some people, because if you can compromise a user's session in one application it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.

  • You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
  • You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
  • At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
  • So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.

There are certainly ways to stop this, since part of the attack is fairly trivial to prevent. For instance you could pop up an OK/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think of this at a high level.

The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.

This problem is similar to the problem many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems because any application running could do whatever it pleased on the system. Malware was rampant, and worse, users were just doing all-around stupid things because they didn't know what they were doing but they had the permissions necessary to do it.

The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges you re-authenticate and temporarily run with the higher privileges. The key here is that you are only running temporarily with higher privileges. However, security lost the argument and Microsoft caved while developing Windows Vista, creating User Account Control (UAC). By default a user is an administrator, but they don't have administrative privileges. Their user token is a stripped-down administrator token, so you only have non-administrative privileges. In order to take full advantage of the administrator token, a user has to elevate and request the full token temporarily. This is a stop-gap solution though, because it's theoretically possible to circumvent UAC since the administrative token exists. It also doesn't require you to re-authenticate – you just have to approve the elevation.

As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.

Some web applications are requiring elevation. For instance consider online banking sites. When I sign in I have a default set of privileges. I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate myself by entering a private pin. So for instance I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.

There are a couple of ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims Based Authentication and WIF.

First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter of wauth. This parameter defines how authentication should happen. The values for it are mostly specific to each STS; however, there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are some of the most often used:

Authentication Type/Credential     wauth Value
Password                           urn:oasis:names:tc:SAML:1.0:am:password
Kerberos                           urn:ietf:rfc:1510
TLS                                urn:ietf:rfc:2246
PKI/X509                           urn:oasis:names:tc:SAML:1.0:am:X509-PKI
Default                            urn:oasis:names:tc:SAML:1.0:am:unspecified

When you pass one of these values to the STS during the sign-in request, the STS should then request that particular type of credential. The wauth parameter supports arbitrary values, so you can use whatever you like. Therefore we can create a value that tells the STS that we want to re-authenticate because of an elevation request.

All you have to do is redirect to the STS with the wauth parameter:

https://yoursts/authenticate?wa=wsignin1.0&wtrealm=uri:myrp&wauth=urn:super:secure:elevation:method
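
In code, the redirect from the relying party might look something like this sketch, using the placeholder STS address, realm, and elevation URN from the URL above (it assumes it runs somewhere an HttpResponse is available):

// A sketch of kicking off the elevation redirect from the relying party; the STS address,
// realm, and elevation URN are the same placeholder values as in the URL above.
string stsUrl = "https://yoursts/authenticate";
string realm = Uri.EscapeDataString("uri:myrp");
string wauth = Uri.EscapeDataString("urn:super:secure:elevation:method");

Response.Redirect(string.Format("{0}?wa=wsignin1.0&wtrealm={1}&wauth={2}", stsUrl, realm, wauth));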

Once the user has re-authenticated you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:

http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod

Just add the claim to the output identity:

protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;
    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));
    // finish filling claims...
    return ident;
}

At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:

public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}

And then have a bit of code to check:

var p = Thread.CurrentPrincipal as IClaimsPrincipal;
if (p != null && p.IsElevated())
{
    DoSomethingRequiringElevation();
}

This satisfies half the requirements for elevating privilege. We need to make it so the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP.  In Global.asax we could do something like:

void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived
        += new EventHandler<SessionSecurityTokenReceivedEventArgs>(SessionAuthenticationModule_SessionSecurityTokenReceived);
}

void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (e.SessionToken.ClaimsPrincipal.IsElevated())
    {
        SessionSecurityToken token = new SessionSecurityToken(
            e.SessionToken.ClaimsPrincipal,
            e.SessionToken.Context,
            e.SessionToken.ValidFrom,
            e.SessionToken.ValidFrom.AddMinutes(15));

        e.SessionToken = token;
    }
}

This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.

There are other places where this could occur, like within the STS itself; however, this value may need to be independent of the STS.

As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.

Creating a Claims Provider Trust in ADFS 2

One of the cornerstones of ADFS is the concept of federation (one would hope anyway, given the name), which is defined as a user's authentication process across applications, organizations, or companies.  Or simply put, my company Contoso is a partner with Fabrikam.  Fabrikam employees need access to one of my applications, so we create a federated trust between my application and their user store, so they can log into my application using their internal Active Directory.  In this case, via ADFS.

So let's break this down into manageable bits.

First we have our application.  This application is a relying party to my ADFS instance.  By now hopefully this is relatively routine.

Next we have the trust between our ADFS and our partner company's STS.  If the company had ADFS installed, we could just create a trust between the two, but let's go one step further and give anyone with a Live ID access to this application.  Therefore we need to create a trust between the Live ID STS and our ADFS server.

This is easier than most people may think.  We can just use Windows Azure Access Control Services (v2).  ACS can be set up very easily to federate with Live ID (or Google, Yahoo, Facebook, etc), so we just need to federate with ACS, and ACS needs to federate with Live ID.

Creating a trust between ADFS and ACS requires two parts.  First we need to tell ADFS about ACS, and second we need to tell ACS about ADFS.

To explain a bit further, we need to make ACS a Claims Provider to ADFS, so ADFS can call on ACS for authentication.  Then we need to make ADFS a relying party to ACS, so ADFS can consume the token from ACS.  Or rather, so ACS doesn't freak out when it sees a request for a token for ADFS.

This may seem a bit confusing at first, but it will become clearer when we walk through the process.

First we need to get the Federation Metadata for our ACS instance.  In this case I've created an ACS namespace called "syfuhs2".  The metadata can be found here: https://syfuhs2.accesscontrol.windows.net/FederationMetadata/2007-06/FederationMetadata.xml.

Next I need to create a relying party in ACS, telling it about ADFS.  To do that browse to the Relying party applications section within the ACS management portal and create a new relying party:

[screenshot]

Because ADFS natively supports trusts, I can just pass in the metadata for ADFS to ACS, and it will pull out the requisite pieces:

[screenshot]

Once that is saved you can create a rule for the transform under the Rule Groups section:

[screenshot]

For this I'm just going to generate a default set of rules.

[screenshot]

This should take care of the ACS side of things.  Next we move into ADFS.

Within ADFS we want to browse to the Claims Provider Trusts section:

[screenshot]

And then we right-click > Add Claims Provider Trust

This should open a Wizard:

[screenshot]

Follow through the wizard and fill in the metadata field:

[screenshot]

Having Token Services that properly generate metadata is a godsend.  Just sayin'.

Once the wizard has finished, it will open a Claims Transform wizard for incoming claims.  This is just a set of claims rules that get applied to any tokens received by ADFS.  In other words, what should happen to the claims within the token we receive from ACS?

In this case I'm just going to pass any claims through:

[screenshot]

In practice, you should write a rule that filters out any extraneous claims that you don't necessarily trust.  For instance, if I were to receive a role claim with a value "Administrator" I may not want to let it through because that could give administrative access to the user, even though it wasn't explicitly set by someone managing the application.

Once all is said and done, you can browse to the RP, redirect for authentication, and be presented with this screen:

[screenshot]

After you've made your first selection, a cookie will be generated and you won't be redirected to this screen again.  If you select ACS, you then get redirected to the ACS Home Realm selection page (or directly to Live ID if you only have Live ID).

Making the Internet Single Sign On Capable

Every couple of weeks I start up Autoruns to see what new stuff has added itself to Windows startup and whatnot (screw you Adobe – you as a software company make me want to swear endlessly).  Anyway, a few months ago, around the time the latest version of Windows Live Messenger and its suite RTM’ed, I poked around to see if anything new was added.  Turns out there was:

[screenshot]

A new credential provider was added!

[screenshot]

Interesting.

Not only that, it turns out a couple of Winsock providers were added too:

[screenshot]

I started poking around the DLLs and noticed that they don’t do much.  Apparently you can use smart cards for WLID authentication.  I suspect that’s what the credential provider and associated Winsock provider are for, as well as part of WLID’s sign-on helper so credentials can be managed via the Credential Manager:

[screenshot]

Ah well, nothing too exciting here.

Skip a few months and something occurred to me.  Microsoft was able to solve part of the Claims puzzle.  How do you bridge the gap between desktop application identities and web application identities?  They did part of what CardSpace was unable to do because CardSpace as a whole didn’t really solve a problem people were facing.  The problem Windows Live ran into was how do you share credentials between desktop and web applications without constantly asking for the credentials?  I.e. how do you do Single Sign On…

This got me thinking.

What if I wanted to step this up a smidge: instead of logging into Windows Live Messenger with my credentials, why not log into Windows itself with my Windows Live credentials?

Yes, Windows.  I want to change this:

[screenshot: Windows 7 logon screen]

Question: What would this solve?

Answer: At present, nothing ground-breakingly new.  For the sake of argument, lets look at how this would be done, and I’ll (hopefully) get to my point.

First off, we need to know how to modify the Windows logon screen.  In older versions of Windows (versions older than 2003 R2) you had to do a lot of heavy lifting to make any changes to the screen.  You had to write your own GINA which involved essentially creating your own UI.  Talk about painful.

With the introduction of Vista, Microsoft changed the game when it came to custom credentials.  Their reasoning was simple: they didn’t want you to muck up the basic look and feel.  You had to follow their guidelines.

As a result we are left with something along the lines of these controls to play with:

[screenshot]

The logon screen is now controlled by Credential Providers instead of the GINA.  There are two providers built into Windows by default, one for Kerberos or NTLM authentication, and one for Smart Card authentication.

The architecture looks like:

[diagram: credential provider architecture]

When the Secure Attention Sequence (CTRL + ALT + DEL / SAS) is called, Winlogon switches to a different desktop and instantiates a new instance of LogonUI.exe.  LogonUI enumerates all the credential provider DLLs from the registry and displays their controls on the desktop.

When I enter in my credentials they are serialized and supposed to be passed to the LSA.

Once the LSA has these credentials it can then do the authentication.

I say “supposed” to be passed to the LSA because there are two frames of thought here.  The first frame is to handle authentication within the Credential Provider itself.  This can cause problems later on down the road.  I’ll explain why in the second frame.

The second frame of thought is when you need to use custom credentials, need to do some funky authentication, and then save the associated identity token somewhere.  This becomes important when other applications need your identity.

You can accomplish this via what’s called an Authentication Package.

[diagram: authentication package architecture]

When a custom authentication package is created, it has to be designed in such a way that applications cannot access stored credentials directly.  The applications must go through the pre-canned MSV1_0 package to receive a token.

To do what I asked about earlier and use Windows Live for authentication, we would need to develop two things: a Credential Provider, and a custom Authentication Package.

The logon process would work something like this:

  • Select Live ID Credential Provider
  • Type in Live ID and Password and submit
  • Credential Provider passes serialized credential structure to Winlogon
  • Winlogon passes credentials to LSA
  • LSA passes credential to Custom Authentication Package
  • Package connects to Live ID STS and requests a token with given credentials
  • Token is returned
  • Authentication Package validates the token and saves it to a local cache
  • Package returns authentication result back up call stack to Winlogon
  • Winlogon initializes user’s profile and desktop

I asked before: What would this solve?

This isn’t really a ground-breaking idea.  I’ve just described a domain environment similar to what half a million companies have already done with Active Directory, except the credential store is Live ID.

On its own we’ve just simplified the authentication process for every home user out there.  No more disparate accounts across multiple machines.  Passwords are in sync, and identity information is always up to date.

What if Live ID set up a new service that let you create access groups for things like home and friends, and you could create file shares as appropriate?  Then you could extend the Windows 7 Homegroup sharing based on those access groups.

Wait, they already have something like that with Skydrive (sans Homegroup stuff anyway).

Maybe they want to use a different token service.

Imagine if the user was able to select the “Federated User” credential provider that would give you a drop down box listing a few Security Token Services.  Azure ACS can hook you up.

Imagine if one of these STS’s was something everyone used *cough* Facebook *cough*.

Imagine the STS was one that a lot of sites on the internet use *cough* Facebook *cough*.

Imagine if the associated protocol used by the STS and websites were modified slightly to add a custom set of headers sent to the browser.  Maybe it looked like this:

Relying-Party-Accepting-Token-Type: urn:sometokentype:www.somests.com
Relying-Party-Token-Reply-Url: https://login.myawesomesite.com/auth

Finally, imagine if your browser was smart enough to intercept those headers, look up the user’s token, check if it matched the “Relying-Party-Accepting-Token-Type” header, and then POST the token to the given reply URL.

Hmm.  We’ve just made the internet SSO capable.

Now to just move everyone’s cheese to get this done.

Patent Pending. Winking smile

The Problem with Claims-Based Authentication

Homer Simpson was once quoted as saying “To alcohol! The cause of, and solution to, all of life's problems”.  I can’t help but borrow from it and say that Claims-Based Authentication is the cause of, and solution to, most problems with identity consumption in applications.

When people first come across Claims-Based Authentication there are two extremes of responses:

  • Total amazement at the architectural simplicity and brilliance
  • Fear and hatred of the idea (don’t you dare take away my control of the passwords)

Each has a valid truth to them, but over time you realize all the problems sit somewhere between both extremes.  It’s this middle ground where people run into the biggest problems. 

Over the last few months there have been quite a few people talking about the pains of OpenID/OpenAuth, which, when you get right down to the principle of it, is CBA.  There are some differences such as terminology and implementation, but both follow the Trusted Third Party Authentication model, and that’s really what CBA is all about.

Rob Conery wrote what some people now see as an infamous post on why he hates OpenID.  He thinks it’s a nightmare for various reasons.  The basic list is as follows:

  • As a customer, since you can have multiple OpenID providers that the relying party doesn’t necessarily know about, how do you know which one you originally used to setup an account?
  • If a customer logs in with the wrong OpenID, they can’t access whatever they’ve paid for.  This pisses off said customer.
  • If your customer used the wrong OpenID, how do you, as the business owner, fix that problem? 
    • Is it worth fixing? 
    • Is it worth the effort of writing code to make this a simpler process?
  • “I'll save you the grumbling rant, but coding up Open ID stuff is utterly mind-numbing frustration”.  This says it all.
  • Since you don’t want to write the code, you get someone else to do it.  You find a SaaS provider.  The provider WILL go down.  Laws of averages, Murphy, and simple irony will cause it to go down.
  • The standard is dying.  Facebook, Google, Microsoft, Twitter, and Joe-Blow all have their own particular ways of implementing the standard.  Do you really want to keep up with that?
  • Dealing with all of this hassle means you aren’t spending your time creating content, which does nothing for the customer.

The end result is that he is looking to drop support, and bring back traditional authentication models.  E.g. storing usernames and passwords in a database that you control.

Following the Conery kerfuffle, 37signals made an announcement that they were going to drop OpenID support for their products.  They had a pretty succinct reason for doing so:

Fewer than 1% of all 37signals users are currently using OpenID. After consulting with a fair share of them, it seems that most were doing so only because that used to be the only way to get single sign-on for our applications.

I don’t know how many customers they have, but 1% is nowhere near a high enough number to justify keeping something alive in any case.

So we have a problem now, don’t we?  On paper Claims-Based Authentication is awesome, but in practice it’s a pain in the neck.  Well, I suppose that’s the case with most technologies. 

I think one of the problems with implementations of new technologies is the lack of guidance.  Trusted Third Party authentication isn’t really all that new.  Kerberos does it, and Kerberos has been around for more than 30 years.  OpenID, OpenAuth, and WS-Auth/WS-Federation, on the other hand, haven't been around all that long.  Given that, I have a bit of guidance that I’ve learned from the history of Kerberos.

First: Don’t trust random providers.

The biggest problem with OpenID is what’s known as the NASCAR problem.  This is another way of referring to Rob’s first problem.  How do you know which provider to use?  Most people recognize logos, so show them a bunch of logos and hopefully they will pick the one that they used.  Hoping your customer chooses the right one every time is like hoping you can hit a bullseye from 1000 yards, blindfolded.  It could happen.  It won’t.  But it could.

The solution to this is simple: do not trust every provider.  Have a select few providers you will accept, and have them sufficiently distinguishable.  My bank as a provider is going to be WAY different than using Google as a provider.  At least, I would hope that’s the case.

Second: Don’t let the user log in with the wrong account.

While you are at it, try moving the oceans using this shot glass.  Seriously though, if you follow the first step, this one is a by product.  Think about it.  Would a customer be more likely to log into their ISP billing system with their Google account, or their bank’s account?  That may be a bad example in practice because I would never use my bank as a provider, but it’s a great example of being sufficiently distinguishable.  You will always have customers that choose wrong, but the harder you make it for them to choose the wrong thing, the closer you are to hitting that bullseye.

Third: Use Frameworks.  Don’t roll your own.

One of the most important axioms in computer security is don’t roll your own [framework/authn/authz/crypto/etc].  Seriously.  Stop it.  You WILL do it wrong.  I will too.  Use a trusted OpenID/OpenAuth framework, or use WIF.

Fourth: Choose a standard that won’t change on you at the whim of a vendor.

WS-Trust/Auth and SAML are great examples of standards that don’t change willy-nilly.  OpenID/OpenAuth are not.

Fifth: Adopt a provider that already has a large user base, and then keep it simple.

This is an extension of the first rule.  Pick a provider that has a massive number of users already.  Live ID is a great example.  Google Accounts is another.  Stick to Twitter or Facebook.  If you are going to choose which providers to accept, make sure you pick the ones that people actually use.  This may seem obvious, but just remember it when you are presented with Joe’s Fish and Chips and Federated Online ID provider.

Finally: Perhaps the biggest thing I can recommend is to keep it simple.  Start small.  Know your providers, and trust your providers.

Keep in mind that everything I’ve said above does not pertain to any particular technology, but of any technology that uses a Trusted Third Party Authentication model.

It is really easy to get wide-eyed and believe you can develop a working system that accepts every form of identification under the sun, all the while keeping it manageable.  Don’t.  Keep it simple and start small.

PrairieDevCon Identity and Security Presentations on June 13th and 14th

Sometime last week I got confirmation that my sessions were accepted for PrairieDevCon!  The schedule has not yet been announced, but here are the two sessions I will be presenting:

Changing the Identity Game with the Windows Identity Foundation

Identity is a tricky thing to manage. These days every application requires some knowledge of the user, which inevitably requires users to log in and out of the applications to prove they are who they are as well as requiring the application to keep record of the accounts. There is a fundamental shift in the way we manage these users and their accounts in a Claims Based world. The Windows Identity Foundation builds on top of a Claim based architecture and helps solve some real world problems. This session will be a discussion on Claims as well as how WIF fits into the mix.
Track: Microsoft, Security
Style: Lecture
Speaker: Steve Syfuhs

Building a Security Token Service In the Time It Takes to Brew a Pot of Coffee

One of the fundamental pieces of a Claims Based Authentication model is the Security Token Service. Using the Windows Identity Framework it is deceivingly simple to build one, so in this session we will.
Track: Microsoft, Security
Style: Lecture
Speaker: Steve Syfuhs

What is PrairieDevCon?

The Prairie Developer Conference is the conference event for software professionals in the Canadian prairies!

Featuring more than 30 presenters, over 60 sessions, and including session styles such as hands-on coding, panel discussions, and lectures, Prairie Developer Conference is an exceptional learning opportunity!
Register for our June 2011 event today!

Okay, how much $$$?

Register early and take advantage of Early Bird pricing!
Get 50% off the post-conference price when you bundle it with your conference registration!

                                   Conference   Conference + Post-Conf Workshop Bundle
Until February 28                  $299.99      $449.99
Until March 31                     $399.99      $549.99
Until April 30                     $499.99      $649.99
May and June                       $599.99      $749.99
Post-Conference Workshop Only      $299.99

For more information check out the registration section.

AzureFest Follow-Up Links &amp; Videos

[photo: Cory Fowler stands beside the big screen in Microsoft Canada's MPR room]

This past Saturday, December 11th, Microsoft and ObjectSharp hosted AzureFest, a community event to raise interest and learn a little bit about Microsoft’s cloud platform, Windows Azure.

My colleague Cory Fowler gave an introductory run down on the Azure platform and pricing, and then demonstrated for those in attendance how to go about Creating an Account and Deploying an Azure Application.

The best part is that our good friends at Microsoft Canada offered $25 in User Group funding for each person in attendance who followed along on their laptop to activate an Azure account and deploy the sample application.

Now Held Over!

The even better part is that MS Canada is extending the offer online until December 31st for anybody who goes through this process to activate an account and deploy a sample application. We’ve got the instructions for you here and it will take you approximately 15 minutes to go through the videos and deploy the sample.

  1. Download the application package that you’ll need for the deployment here.
  2. Create an Azure Introductory Account (5 minutes). You’ll need
    • a Windows Live id. (if you don’t have one, click here for instructions)
    • a Valid Credit Card (don’t worry, in step 4 we’ll show you how to shut down your instance before you get charged).
    • navigate to www.Azure.com and follow along with these instructions
      Signing up for Windows Azure
  3. Deploy the Nerd Dinner Application (8 minutes)
    • follow along with these instructions
      Deploying the Nerd Dinner Package to Azure
    • email a screenshot of your deployed application showing the URL and the name of your user group to cdnazure@microsoft.com
    • Specify TVBug, Metro Toronto .NET UG, CTTDNUG, Architecture UG, East of Toronto .NET UG, Markham .NET UG, etc.
  4. Tear down the application to avoid any further charges (2 minutes)
    • Tearing down a Windows Azure Service

Here are the slides from Azure Fest

Stay tuned here for the next part of our video blogs where we will review:

  • Deploying a SQL Database to Azure
  • Installing the Azure Tools for Visual Studio and SDK
  • Deploying ASP.NET Applications from within Visual Studio

Windows Phone 7 App Challenge: TFS Mobile Client

I have an MSDN Ultimate Subscription to give away to the winner of this challenge! The winner will have successfully built a showcase TFS Client for Windows Phone 7. I will accept entries until November 30th.

There are boundless opportunities for features and functionality. Creativity counts here, but to get you started, here are some ideas:

    • Build Server Status
      Would love to see the status of a given build type, the latest build, success/fail, the offending people who checked in code on a broken build. Would be nice to kick off a QA or Production build (or any type for that matter) from my phone once I’ve got the all clear from QA.
    • Iteration Dashboard
      What’s the status of the current build? What exit items are still open? What’s our current velocity? What’s the burn down look like?
    • Work Items
      Would love to edit a work item, reassign it to somebody else, close it, reactivate it, etc. Log a bug perhaps?
    • Opportunities with the Phone
      Being able to look up a work item owner, or build breaker in my address book and phone or email them seems obvious.

 

Remember the highlights of this challenge:

  • Entries due by Nov 30th. Email me some code including a link to video demonstration ideally. bgervin@objectsharp.com
  • Let me know if you are planning on entering. I’d be excited to provide some coaching and guidance along the way.
  • On the line is 1 year MSDN Ultimate Subscription. I think that is worth like a million dollars or something Smile
  • I’m sure you will also win a free date with a MS Developer Evangelist and be featured on the CDN Dev Blog.
  • I’m pretty sure you aren’t allowed to win if you live in Quebec or work for Microsoft or the Chinese government, but please don’t let that stop you from submitting an entry!

Gentlecoders, start your engines! Best of luck.

The Basics of Building a Security Token Service

Last week at TechDays in Toronto I ran into a fellow I worked with while I was at Woodbine.  He works with a consulting firm Woodbine uses, and he caught my session on Windows Identity Foundation.  His thoughts were (essentially—paraphrased) that the principle of Claims Authentication was sound and a good idea; however, implementing it requires a major investment.  Yes.  Absolutely.  You will essentially be adding a new tier to the application.  Hmm.  I’m not sure if I can get away with that analogy.  It will certainly feel like you are adding a new tier anyway.

What strikes me as the main investment is the Security Token Service.  When you break it down, there are a lot of moving parts in an STS.  In a previous post I asked what it would take to create something similar to ADFS 2.  I said it would be fairly straightforward, and broke down the parts as well as what would be required of them.  I listed:

  • Token Services
  • A Windows Authentication end-point
  • An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
  • An application management tool (MMC snap-in and PowerShell cmdlets)
  • Proxy Services (Allows requests to pass NAT’ed zones)

These aren’t all that hard to develop.  With the exception of the proxy services and token service itself, there’s a good chance we have created something similar to each one if user authentication is part of an application.  We have the authentication endpoint: a login form to do SQL Authentication, or the Windows Authentication Provider for ASP.NET.  We have the attribute store and something like a claims mapper: Active Directory, SQL databases, etc.  We even have an application management tool: anything you used to manage users in the first place.  This certainly doesn’t get us all the way there, but they are good starting points.

Going back to my first point, the STS is probably the biggest investment.  However, it’s kind of trivial to create an STS using WIF.  I say that with a big warning though: an STS is a security system.  Securing such a system is NOT trivial.  Writing your own STS probably isn’t the best way to approach this.  You would probably be better off to use an STS like ADFS.  With that being said it’s good to know what goes into building an STS, and if you really do have the proper resources to develop one, as well as do proper security testing (you probably wouldn’t be reading this article on how to do it in that case…), go for it.

For the sake of simplicity I’ll be going through the Fabrikam Shipping demo code since they did a great job of creating a simple STS.  The fun bits are in the Fabrikam.IPSts project under the Identity folder.  The files we want to look at are CustomSecurityTokenService.cs, CustomSecurityTokenServiceConfiguration.cs, and the default.aspx code file.  I’m not sure I like the term “configuration”, as the way this is built strikes me as factory-ish.

[screenshot]

The process is pretty simple.  A request is made to default.aspx, which passes the request to FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(), along with a newly instantiated CustomSecurityTokenService object obtained by calling CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService().
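
In the default.aspx code-behind that hand-off looks something like the following sketch (modelled on the description above and the general shape of the WIF passive STS samples; the actual Fabrikam code may differ slightly):

using System;
using Microsoft.IdentityModel.Web;

namespace Microsoft.Samples.DPE.Fabrikam.IPSts
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Hand the WS-Federation request off to WIF along with a freshly created STS instance.
            FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
                Request,
                User,
                CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService(),
                Response);
        }
    }
}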

The configuration class contains configuration data for the STS (hence the name), like the signing certificate, but it also instantiates an instance of the STS using the configuration.  The code for it is simple:

namespace Microsoft.Samples.DPE.Fabrikam.IPSts
{
    using Microsoft.IdentityModel.Configuration;
    using Microsoft.IdentityModel.SecurityTokenService;

    internal class CustomSecurityTokenServiceConfiguration
: SecurityTokenServiceConfiguration
    {
        private static CustomSecurityTokenServiceConfiguration current;

        private CustomSecurityTokenServiceConfiguration()
        {
            this.SecurityTokenService = typeof(CustomSecurityTokenService);
            this.SigningCredentials =
new X509SigningCredentials(this.ServiceCertificate);
            this.TokenIssuerName = "https://ipsts.fabrikam.com/";
        }

        public static CustomSecurityTokenServiceConfiguration Current
        {
            get
            {
                if (current == null)
                {
                    current = new CustomSecurityTokenServiceConfiguration();
                }

                return current;
            }
        }
    }
}

It has a base type of SecurityTokenServiceConfiguration and all it does is set the custom type for the new STS, the certificate used for signing, and the issuer name.  It then lets the base class handle the rest.  Then there is the STS itself.  It’s dead simple.  The custom class has a base type of SecurityTokenService and overrides a couple methods.  The important method it overrides is GetOutputClaimsIdentity():

protected override IClaimsIdentity GetOutputClaimsIdentity(
IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    Claim name = inputIdentity.Claims.Single(claim =>
claim.ClaimType == ClaimTypes.Name);
    Claim email = new Claim(ClaimTypes.Email,
Membership.Provider.GetUser(name.Value, false).Email);
    string[] roles = Roles.Provider.GetRolesForUser(name.Value);

    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);
    issuedIdentity.Claims.Add(email);

    foreach (var role in roles)
    {
        var roleClaim = new Claim(ClaimTypes.Role, role);
        issuedIdentity.Claims.Add(roleClaim);
    }

    return issuedIdentity;
}

It gets the authenticated user, grabs all the roles from the RolesProvider, and generates a bunch of claims then returns the identity.  Pretty simple.

At this point you’ve just moved the authentication and Roles stuff away from the application.  Nothing has really changed data-wise.  If you only cared about roles, name, and email you are done.  If you needed something more you could easily add in the logic to grab the values you needed. 

By no means is this production ready, but it is a good basis for how the STS creates claims.

Kerberos: Very Claims-y

I’ve always found Kerberos to be an interesting protocol.  It works by way of a trusted third party which issues secured tickets based on an authentication or previous session.  These tickets are used as proof of identity by asserting that the subject is who they claim to be.  Claims authentication works on a similar principle, except instead of a ticket you have a token.  There are some major differences in implementation, but the theory is the same.  One of the reasons I find it interesting is that Kerberos was originally developed in 1983, and the underlying protocol, called the Needham-Schroeder protocol, was originally published in 1978.

There have been major updates over the years, as well as a change to fix a man-in-the-middle attack in the Needham-Schroeder protocol in 1995, but the theory is still sound.  Kerberos is the main protocol used in Windows networks to authenticate against Active Directory.

The reason I bring it up is because of a comment I made in a previous post.  I made an assertion that we don’t necessarily abstract out the identity portion of our applications and services. 

Well, it occurred to me that up until a certain period of time, we did.  In many environments there was only one trusted authority for identity.  Whether it was at a school, in a business, or within the government, there was no concept of federation.  The walls we created were for a very good reason.  The applications and websites we created were siloed and the information didn’t need to be shared.  As such, we created our own identity stores in databases and LDAP directories.

This isn’t necessarily a problem because we built these applications on top of a foundation that wasn’t designed for identity.  The internet was for all intents and purposes designed for anonymity.  But here is where the foundation became tricky: it boomed.

People wanted to share information between websites and applications, but the data couldn’t be correlated back to the user across applications.  We are starting to catch up, but it’s a slow process.

So here is the question: we started with a relatively abstract process of authentication by way of the Kerberos third party, and then moved to siloed identity data.  Why did we lose the abstraction?  Or more precisely, during this boom, why did we allow our applications to lose this abstraction?

Food for thought on this early Monday.