The Importance of Elevating Privilege

The biggest drawback of Single Sign On is the same thing that makes it so appealing – you only need to prove your identity once. This scares the hell out of some people, because if you can compromise a user's session in one application it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.

  • You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
  • You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
  • At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
  • So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.

There are certainly ways to stop this, and parts of the attack are admittedly a bit trivial. For instance you could pop up an Ok/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think about this at a high level.

The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.

This problem is similar to the one many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems because any running application could do whatever it pleased on the system. Malware was rampant, and worse, users were doing all-around stupid things because they didn't know what they were doing, but they had the permissions necessary to do it.

The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges, make them re-authenticate and temporarily run with those higher privileges. The key here is that you are only running with higher privileges temporarily. However, security lost the argument and Microsoft caved while developing Windows Vista, creating User Account Control (UAC). By default a user is an administrator, but they don't have administrative privileges; their user token is a stripped-down administrator token, so they effectively run with non-administrative privileges. In order to take full advantage of the administrator token, a user has to elevate and request the full token temporarily. This is a stop-gap solution though, because it's theoretically possible to circumvent UAC since the administrative token still exists. It also doesn't require you to re-authenticate – you just have to approve the elevation.

As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.

Some web applications already require elevation. For instance, consider online banking sites. When I sign in I have a default set of privileges: I can view my accounts and transfer money between my own accounts. Anything else requires that I re-authenticate myself by entering a private PIN. So, for instance, I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.

There are a couple of ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims Based Authentication and WIF.

First off, let's look at the protocol. Out of the box, WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter called wauth, which defines how authentication should happen. The values for it are mostly specific to each STS; however, there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are the ones used most often:

Authentication Type/Credential   wauth Value
Password                         urn:oasis:names:tc:SAML:1.0:am:password
Kerberos                         urn:ietf:rfc:1510
TLS                              urn:ietf:rfc:2246
PKI/X509                         urn:oasis:names:tc:SAML:1.0:am:X509-PKI
Default                          urn:oasis:names:tc:SAML:1.0:am:unspecified

When you pass one of these values to the STS during the sign-in request, the STS should then request that particular type of credential. The wauth parameter also supports arbitrary values, so you can use whatever you like. Therefore we can create a value that tells the STS we want to re-authenticate because of an elevation request.

All you have to do is redirect to the STS with the wauth parameter:

https://yoursts/authenticate?wa=wsignin1.0&wtrealm=uri:myrp&wauth=urn:super:secure:elevation:method
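
In an ASP.NET relying party, triggering that could be as simple as building the sign-in URL and redirecting. Here is a minimal sketch; the STS address, realm, and wauth value are the same placeholders used above:

// Minimal sketch: build the WS-Federation sign-in URL with the custom wauth value
// and send the browser there (requires System.Web for HttpUtility)
string signInUrl = string.Format(
    "https://yoursts/authenticate?wa=wsignin1.0&wtrealm={0}&wauth={1}",
    HttpUtility.UrlEncode("uri:myrp"),
    HttpUtility.UrlEncode("urn:super:secure:elevation:method"));

Response.Redirect(signInUrl);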

Once the user has re-authenticated, you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:

http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod

Just add the claim to the output identity:

protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;

    // Record how the user authenticated so the relying party can check for elevation
    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));

    // finish filling claims...

    return ident;
}

At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:

public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}

And then have a bit of code to check:

var p = Thread.CurrentPrincipal as IClaimsPrincipal;
if (p != null && p.IsElevated())
{
    DoSomethingRequiringElevation();
}

This satisfies half the requirements for elevating privilege. We also need to make sure the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP. In Global.asax we could do something like:

void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived
        += new EventHandler<SessionSecurityTokenReceivedEventArgs>(
            SessionAuthenticationModule_SessionSecurityTokenReceived);
}

void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (e.SessionToken.ClaimsPrincipal.IsElevated())
    {
        // Reissue the session token with a 15 minute lifetime for elevated sessions
        SessionSecurityToken token = new SessionSecurityToken(
            e.SessionToken.ClaimsPrincipal, e.SessionToken.Context,
            e.SessionToken.ValidFrom, e.SessionToken.ValidFrom.AddMinutes(15));

        e.SessionToken = token;
    }
}

This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.

There are other places where this could occur, like within the STS itself; however, this value may need to be independent of the STS.
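
If the lifetime can live at the STS, one option is to shorten the issued token's lifetime there. A rough sketch, assuming WIF's SecurityTokenService exposes the GetTokenLifetime override (check the exact signature against your version of WIF):

// Rough sketch: have the STS issue short-lived tokens
// (in practice you would only do this when the request used the elevation wauth value)
protected override Lifetime GetTokenLifetime(Lifetime requestLifetime)
{
    return new Lifetime(DateTime.UtcNow, DateTime.UtcNow.AddMinutes(15));
}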

As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.

Adjusting the Home Realm Discovery page in ADFS to support Email Addresses

Over on the Geneva forums a question was asked:

Does anyone have an example of how to change the HomeRealmDiscovery Page in ADFSv2 to accept an e-mail address in a text field and based upon that (actually the domain suffix) select the correct Claims/Identity Provider?

It's pretty easy to modify the HomeRealmDiscovery page, so I thought I'd give it a go.

Based on the question, two things need to be known: the email address and the home realm URI.  Then we need to translate the email address to a home realm URI and pass it on to ADFS.

This could be done a couple of ways.  It could be done by keeping a list of email addresses and their related home realms, or a list of email domains and their related home realms.  For the sake of this being an example, let's do both.

I've created a simple SQL database with three tables:

image

Each entry in the EmailAddress and Domain tables has a pointer to the home realm URI (you can find the schema in the zip file below).

Then I created a new ADFS web project and added a new entity model to it:

image

From there I modified the HomeRealmDiscovery page to do the check:

//------------------------------------------------------------
// Copyright (c) Microsoft Corporation.  All rights reserved.
//------------------------------------------------------------

using System;

using Microsoft.IdentityServer.Web.Configuration;
using Microsoft.IdentityServer.Web.UI;
using AdfsHomeRealm.Data;
using System.Linq;

public partial class HomeRealmDiscovery : Microsoft.IdentityServer.Web.UI.HomeRealmDiscoveryPage
{
    protected void Page_Init(object sender, EventArgs e)
    {
    }

    protected void PassiveSignInButton_Click(object sender, EventArgs e)
    {
        string email = txtEmail.Text;

        if (string.IsNullOrWhiteSpace(email))
        {
            SetError("Please enter an email address");
            return;
        }

        try
        {
            SelectHomeRealm(FindHomeRealmByEmail(email));
        }
        catch (ApplicationException)
        {
            SetError("Cannot find home realm based on email address");
        }
    }

    private string FindHomeRealmByEmail(string email)
    {
        using (AdfsHomeRealmDiscoveryEntities en = new AdfsHomeRealmDiscoveryEntities())
        {
            var emailRealms = from e in en.EmailAddresses where e.EmailAddress1.Equals(email) select e;

            if (emailRealms.Any()) // email address exists
                return emailRealms.First().HomeRealm.HomeRealmUri;

            // email address does not exist
            string domain = ParseDomain(email);

            var domainRealms = from d in en.Domains where d.DomainAddress.Equals(domain) select d;

            if (domainRealms.Any()) // domain exists
                return domainRealms.First().HomeRealm.HomeRealmUri;

            // neither email nor domain exist
            throw new ApplicationException();
        }
    }

    private string ParseDomain(string email)
    {
        if (!email.Contains("@"))
            return email;

        return email.Substring(email.IndexOf("@") + 1);
    }

    private void SetError(string p)
    {
        lblError.Text = p;
    }
}

 

If you compare this to the original code, there were some changes.  I removed the code that loaded the original home realm drop-down list, and removed the code that chose the home realm based on the drop-down list's selected value.

You can find my code here: http://www.syfuhs.net/AdfsHomeRealm.zip

Creating a Claims Provider Trust in ADFS 2

One of the cornerstones of ADFS is the concept of federation (one would hope anyway, given the name), which here means letting a user authenticate across applications, organizations, or companies.  Or simply put: my company Contoso is a partner with Fabrikam.  Fabrikam employees need access to one of my applications, so we create a federated trust between my application and their user store, so they can log into my application using their internal Active Directory.  In this case, via ADFS.

So let's break this down into manageable bits.

First we have our application.  This application is a relying party to my ADFS instance.  By now hopefully this is relatively routine.

Next we have the trust between our ADFS and our partner company's STS.  If the company had ADFS installed, we could just create a trust between the two, but let's go one step further and give anyone with a Live ID access to this application.  Therefore we need to create a trust between the Live ID STS and our ADFS server.

This is easier than most people may think.  We can just use Windows Azure Access Control Services (v2).  ACS can be set up very easily to federate with Live ID (or Google, Yahoo, Facebook, etc), so we just need to federate with ACS, and ACS needs to federate with Live ID.

Creating a trust between ADFS and ACS requires two parts.  First we need to tell ADFS about ACS, and second we need to tell ACS about ADFS.

To explain a bit further, we need to make ACS a Claims Provider to ADFS, so ADFS can call on ACS for authentication.  Then we need to make ADFS a relying party to ACS, so ADFS can consume the token from ACS.  Or rather, so ACS doesn't freak out when it sees a request for a token for ADFS.

This may seem a bit confusing at first, but it will become clearer when we walk through the process.

First we need to get the Federation Metadata for our ACS instance.  In this case I've created an ACS namespace called "syfuhs2".  The metadata can be found here: https://syfuhs2.accesscontrol.windows.net/FederationMetadata/2007-06/FederationMetadata.xml.

Next I need to create a relying party in ACS, telling it about ADFS.  To do that browse to the Relying party applications section within the ACS management portal and create a new relying party:

image

Because ADFS natively supports trusts, I can just pass in the metadata for ADFS to ACS, and it will pull out the requisite pieces:

image

Once that is saved you can create a rule for the transform under the Rule Groups section:

image

For this I'm just going to generate a default set of rules.

image

This should take care of the ACS side of things.  Next we move into ADFS.

Within ADFS we want to browse to the Claims Provider Trusts section:

image

And then we right-click > Add Claims Provider Trust

This should open a Wizard:

image

Follow through the wizard and fill in the metadata field:

image

Having Token Services that properly generate metadata is a godsend.  Just sayin'.

Once the wizard has finished, it will open a Claims Transform wizard for incoming claims.  This is just a set of claims rules that get applied to any tokens received by ADFS.  In other words, what should happen to the claims within the token we receive from ACS?

In this case I'm just going to pass any claims through:

image

In practice, you should write a rule that filters out any extraneous claims that you don't necessarily trust.  For instance, if I were to receive a role claim with a value "Administrator" I may not want to let it through because that could give administrative access to the user, even though it wasn't explicitly set by someone managing the application.

Once all is said and done, you can browse to the RP, get redirected for authentication, and be presented with this screen:

image

After you've made your first selection, a cookie will be generated and you won't be redirected to this screen again.  If you select ACS, you then get redirected to the ACS Home Realm selection page (or directly to Live ID if you only have Live ID).

Windows Azure ACS v2 Mix Announcement

Part of the Mix11 announcement was that ACS v2 was released to production.  It was actually released last Thursday but we were told to keep as quiet as possible so they could announce it at Mix.  Here is the marketing speak:

The new ACS includes a plethora of new features that customers and partners have been asking with enthusiasm: single sign on from business and web identity providers, easy integration with our development tools, support for both enterprise-grade and web friendly protocols, out of the box integration with Facebook, Windows Live ID, Google and Yahoo, and many others.

Those features respond to such fundamental needs in modern cloud based systems that ACS has already become a key asset in many of our own offerings.

There is a substantial difference between v1 and v2.  In v2, we now see:

Federation provider and Security Token Service (FINALLY!)

  • Out of box federation with Active Directory Federation Services 2.0, Windows Live ID, Google, Yahoo, Facebook

New authorization scenarios

  • Delegation using OAuth 2.0

Improved developer experience

  • New web-based management portal
  • Fully programmatic management using OData
  • Works with Windows Identity Foundation

Additional protocol support

  • WS-Federation, WS-Trust, OpenID 2.0, OAuth 2.0 (Draft 13)

That's a lot of stuff to keep up with, but luckily Microsoft has made it easier for us by giving us a whole whack of content to learn from.

First off, all of the training kits have now been updated to support v2:

Second, there are a bunch of new Channel9 videos just released:

Third, and finally, the Claims Based Identity and Access Control Guide was updated!

Talk about a bunch of awesome stuff.

Goodbye CardSpace; Hello U-Prove

Other possible titles:

  • So Long and Thanks for all the Identity
  • Goodbye awesome technology; Hello Awesomer Technology
  • CardSpace? What’s CardSpace?

Over on the Claims Based Identity Blog they made an announcement that they have stopped development of CardSpace v2.  CardSpace was an excellent technology, but nobody used it.  Some of us saw the writing on the wall when Microsoft paused development last year, and kept quiet about why.  For better or for worse, Microsoft stopped development and moved on to a different technology affectionately called U-Prove.

U-Prove is an advanced cryptographic technology that, combined with existing standards-based identity solutions, overcomes this long-standing dilemma between identity assurance and privacy. This unlocks a broad range of scenarios that have historically been out of the reach of both the private and public sectors - cases where both verified identity information and privacy are required.

So what exactly does this mean?  Central to U-Prove is something called an Agent:

Specifically, the Agent provides a mechanism to separate the retrieval of identity information from trusted organizations from the release of this information to destination sites. The underlying mechanisms help prevent the issuing organizations from tracking where or when this information is used, and to help prevent different destination sites from trivially linking users’ actions together.

Alright, what does that really mean?

Short answer: it’s kind of like CardSpace, except you—the developer—manage the application that controls the flow of claims from IdP to RP.

The goal is to enable stronger control of the release of private data to relying parties.

For more information check out the FAQ section on Connect.

The Problem with Claims-Based Authentication

Homer Simpson was once quoted as saying “To alcohol! The cause of, and solution to, all of life's problems”.  I can’t help but borrow from it and say that Claims-Based Authentication is the cause of, and solution to, most problems with identity consumption in applications.

When people first come across Claims-Based Authentication there are two extremes of responses:

  • Total amazement at the architectural simplicity and brilliance
  • Fear and hatred of the idea (don’t you dare take away my control of the passwords)

Each has a valid truth to them, but over time you realize all the problems sit somewhere between both extremes.  It’s this middle ground where people run into the biggest problems. 

Over the last few months there have been quite a few people talking about the pains of OpenID/OpenAuth, which, when you get right down to the principle of it, is CBA.  There are some differences, such as terminology and implementation, but both follow the Trusted Third Party Authentication model, and that’s really what CBA is all about.

Rob Conery wrote what some people now see as an infamous post on why he hates OpenID.  He thinks it’s a nightmare for various reasons.  The basic list is as follows:

  • As a customer, since you can have multiple OpenID providers that the relying party doesn’t necessarily know about, how do you know which one you originally used to setup an account?
  • If a customer logs in with the wrong OpenID, they can’t access whatever they’ve paid for.  This pisses off said customer.
  • If your customer used the wrong OpenID, how do you, as the business owner, fix that problem? 
    • Is it worth fixing? 
    • Is it worth the effort of writing code to make this a simpler process?
  • “I'll save you the grumbling rant, but coding up Open ID stuff is utterly mind-numbing frustration”.  This says it all.
  • Since you don’t want to write the code, you get someone else to do it.  You find a SaaS provider.  The provider WILL go down.  Laws of averages, Murphy, and simple irony will cause it to go down.
  • The standard is dying.  Facebook, Google, Microsoft, Twitter, and Joe-Blow all have their own particular ways of implementing the standard.  Do you really want to keep up with that?
  • Dealing with all of this hassle means you aren’t spending your time creating content, which does nothing for the customer.

The end result is that he is looking to drop support, and bring back traditional authentication models.  E.g. storing usernames and passwords in a database that you control.

Following the Conery kerfuffle, 37signals made an announcement that they were going to drop OpenID support for their products.  They had a pretty succinct reason for doing so:

Fewer than 1% of all 37signals users are currently using OpenID. After consulting with a fair share of them, it seems that most were doing so only because that used to be the only way to get single sign-on for our applications.

I don’t know how many customers they have, but 1% is nowhere near a high enough number to justify keeping something alive in any case.

So we have a problem now, don’t we?  On paper Claims-Based Authentication is awesome, but in practice it’s a pain in the neck.  Well, I suppose that’s the case with most technologies. 

I think one of the problems with implementations of new technologies is the lack of guidance.  Trusted third-party authentication isn’t really all that new.  Kerberos does it, and Kerberos has been around for decades.  OpenID, OpenAuth, and WS-Trust/WS-Federation, on the other hand, haven't been around all that long.  Given that, I have a bit of guidance that I’ve learned from the history of Kerberos.

First: Don’t trust random providers.

The biggest problem with OpenID is what’s known as the NASCAR problem.  This is another way of referring to Rob’s first problem: how do you know which provider to use?  Most people recognize logos, so show them a bunch of logos and hopefully they will pick the one they used.  Hoping your customer chooses the right one every time is like hoping you can hit a bullseye from 1000 yards, blindfolded.  It could happen.  It won’t.  But it could.

The solution to this is simple: do not trust every provider.  Have a select few providers you will accept, and have them sufficiently distinguishable.  My bank as a provider is going to be WAY different than using Google as a provider.  At least, I would hope that’s the case.

Second: Don’t let the user log in with the wrong account.

While you are at it, try moving the oceans using this shot glass.  Seriously though, if you follow the first step, this one is a by product.  Think about it.  Would a customer be more likely to log into their ISP billing system with their Google account, or their bank’s account?  That may be a bad example in practice because I would never use my bank as a provider, but it’s a great example of being sufficiently distinguishable.  You will always have customers that choose wrong, but the harder you make it for them to choose the wrong thing, the closer you are to hitting that bullseye.

Third: Use Frameworks.  Don’t roll your own.

One of the most important axioms in computer security is don’t roll your own [framework/authn/authz/crypto/etc].  Seriously.  Stop it.  You WILL do it wrong.  I will too.  Use a trusted OpenID/OpenAuth framework, or use WIF.

Fourth: Choose a standard that won’t change on you at the whim of a vendor.

WS-Trust, WS-Federation, and SAML are great examples of standards that don’t change willy-nilly.  OpenID/OpenAuth are not.

Fifth: Adopt a provider that already has a large user base, and then keep it simple.

This is an extension of the first rule.  Pick a provider that has a massive number of users already.  Live ID is a great example.  Google Accounts is another.  Stick to Twitter or Facebook.  If you are going to choose which providers to accept, make sure you pick the ones that people actually use.  This may seem obvious, but just remember it when you are presented with Joe’s Fish and Chips and Federated Online ID provider.

Finally: Perhaps the biggest thing I can recommend is to keep it simple.  Start small.  Know your providers, and trust your providers.

Keep in mind that everything I’ve said above does not pertain to any particular technology, but of any technology that uses a Trusted Third Party Authentication model.

It is really easy to get wide-eyed and believe you can develop a working system that accepts every form of identification under the sun, all the while keeping it manageable.  Don’t.  Keep it simple and start small.

Claims, MEF, and Parallelization, Oh My

One of the projects I’ve been working on for the last couple months has a requirement to aggregate a set of claims from multiple data sources for an identity and return the collection.  It all seems pretty straightforward as long as you know what the data sources are at development time as well as how you want to transform the data to claims. 

In the real world though, chances are you will need to modify how that transformation happens or modify the data sources in some way.  There are lots of ways this can be accomplished, and I’m going to look at how you can do it with the Managed Extensibility Framework (MEF).

Whenever I think of MEF, this is the best way I can describe how it works:

image

MEF being the magical part.  In actual fact, it is pretty straightforward how the underlying pieces work, but here is the sales bit:

Application requirements change frequently and software is constantly evolving. As a result, such applications often become monolithic making it difficult to add new functionality. The Managed Extensibility Framework (MEF) is a new library in .NET Framework 4 and Silverlight 4 that addresses this problem by simplifying the design of extensible applications and components.

The architecture is explained on the CodePlex site:

MEF_Diagram.png

The composition container is designed to discover ComposableParts that have Export attributes, and assign these Parts to an object with an Import attribute.

Think of it this way (this is just one possible way it could work).  Let’s say I have a bunch of classes that are plugins for some system.  I will attach an Export attribute to each of those classes.  Then within the system itself I have a class that manages these plugins.  That class will contain an object that is a collection of the plugin class type, and it will have an attribute of ImportMany.  Within this manager class is some code that will discover the Exported classes, and generate a collection of them instantiated.  You can then iterate through the collection and do something with those plugins.  Some code might help.

First, we need something to tie the Import/Export attributes together.  For a plugin-type situation I prefer to use an interface.

namespace PluginInterfaces
{
    public interface IPlugin
    {
        string PlugInName { get; set; }
    }
}

Then we need to create a plugin.

using PluginInterfaces;

namespace SomePlugin
{
    class MyAwesomePlugin : IPlugin
    {
        public string PlugInName
        {
            get
            {
                return "Steve is Awesome!";
            }
            set { }
        }
    };
}

Then we need to actually Export the plugin.  Notice the namespace addition.  The namespace can be found in the System.ComponentModel.Composition assembly in .NET 4.

using PluginInterfaces;
using System.ComponentModel.Composition;

namespace SomePlugin
{
    [Export(typeof(IPlugin))]
    class MyAwesomePlugin : IPlugin
    {
        public string PlugInName
        {
            get
            {
                return "Steve is Awesome!";
            }
            set { }
        }
    };
}

The [Export(typeof(IPlugin))] is a way of tying the Export to the Import.

Importing the plugin’s requires a little bit more code.  First we need to create a collection to import into:

[ImportMany(typeof(IPlugin))]
List<IPlugin> plugins = new List<IPlugin>();

Notice the typeof(IPlugin).

Next we need to compose the pieces:

using (DirectoryCatalog catalog = new DirectoryCatalog(pathToPluginDlls))
using (CompositionContainer container = new CompositionContainer(catalog))
{
    container.ComposeParts(this);
}

The ComposeParts() method is looking at the passed object and finds anything with the Import or ImportMany attributes and then looks into the DirectoryCatalog to find any classes with the Export attribute, and then tries to tie everything together based on the typeof(IPlugin).

At this point we should now have a collection of plugins that we could iterate through and do whatever we want with each plugin.
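
From there, using the composed plugins is just a matter of looping over the collection, for example:

// Every class exported with [Export(typeof(IPlugin))] is now instantiated in the list
foreach (IPlugin plugin in plugins)
{
    Console.WriteLine(plugin.PlugInName);
}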

So what does that have to do with Claims?

If you continue down the Claims Model path, eventually you will get tired of having to modify the STS every time you wanted to change what data is returned from the RST (Request for Security Token).  Imagine if you could create a plugin model that all you had to do was create a new plugin for any new data source, or all you had to do was modify the plugins instead of the STS itself.  You could even build a transformation engine similar to Active Directory Federation Services and create a DSL that is executed at runtime.  It would make for simpler deployment, that’s for sure.
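
As a rough sketch of what such a plugin contract might look like (the interface name, method signature, and claim type below are assumptions for illustration, not anything WIF defines):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using Microsoft.IdentityModel.Claims;

// Hypothetical contract for a claims data source
public interface IClaimsPlugin
{
    IEnumerable<Claim> GetClaims(IClaimsIdentity identity);
}

// One plugin per data source; MEF discovers it through the Export attribute
[Export(typeof(IClaimsPlugin))]
public class SqlClaimsPlugin : IClaimsPlugin
{
    public IEnumerable<Claim> GetClaims(IClaimsIdentity identity)
    {
        // ... query the data source for this identity and map the results to claims ...
        yield return new Claim("http://schemas.fabrikam.com/claims/example", "value");
    }
}

The STS (or whatever builds the output identity) would then ImportMany IClaimsPlugin, compose the catalog exactly as shown above, and merge whatever each plugin returns into the output identity.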

And what about Parallelization?

If you have a large collection of plugins, it may be beneficial to run some things in parallel, such as a GetClaims([identity]) type call.

Using the Parallel libraries within .NET 4, you could very easily do something like:

Parallel.ForEach<IPlugin>(plugins, (plugin) =>
{
    plugin.GetClaims(identity);
});

The basic idea for this method is to take a collection, and do an action on each item in the collection, potentially in parallel.   The ForEach method is described as:

ForEach<TSource>(IEnumerable<TSource> source, Action<TSource> action)
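
Since each plugin presumably returns its claims, you would also want to collect the results in a thread-safe way.  A sketch, using the hypothetical IClaimsPlugin contract from above:

// Requires System.Collections.Concurrent and System.Threading.Tasks
// Each plugin runs potentially in parallel; ConcurrentBag keeps the collection thread-safe
var allClaims = new ConcurrentBag<Claim>();

Parallel.ForEach(plugins, plugin =>
{
    foreach (Claim claim in plugin.GetClaims(identity))
    {
        allClaims.Add(claim);
    }
});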

When everything is all said and done, you now have a basic parallelized plugin model for your Security Token Service.  Pretty cool, I think.

Preventing Frame Exploits in a Passive Claims Model

At a presentation a few weeks ago someone asked me about capturing session details during authentication at an STS by way of frames and JavaScript.  To paraphrase the question: “What prevents a malicious developer from sticking an RP within an iframe, causing a redirect to an STS, getting some user to log in, and then capturing the details through JavaScript from the parent page?”  There are a couple of ways this problem can be solved.  It’s a defense-in-depth problem where on their own, each piece won’t close every attack vector, but when used together you end up with a pretty solid solution.

  • First, a lot of new browsers will actually prevent cross-frame JavaScript calls when SSL is involved.  Depending on the browser, the JavaScript will throw the equivalent of an Access Denied exception.  This is not the case with all browser versions though.  Older browsers may not do this.
  • Second, some browsers will not allow you to host an SSL page in a frame if the parent page is not using SSL.  The easy fix for the malicious developer is to simply use SSL for the parent site, but that could be problematic as the CAs theoretically verify the sites requesting certificates.
  • Third, you could write some JavaScript for the STS to bust out of the frame.  It would look something like this:

if (top != self)
{
    try
    {
        top.location.replace(self.location.href);
    }
    catch (e)
    {
    }
}

The problem with this is that it wouldn’t work if the browser has JavaScript disabled.

  • Fourth, there is a new HTTP header that Microsoft introduced in IE 8 that tells the browser that if the requested page is hosted in a frame it should simply stop processing the request.  Safari and Chrome support it natively, and Firefox supports it with the NoScript add-on.  The header is called X-Frame-Options and it can have two values: “DENY”, which prevents all framing, and “SAMEORIGIN”, which allows the page to be rendered only if the parent page comes from the same origin.  E.g. the parent is somesite.com/parent and the framed page is somesite.com/page.

There are a couple of ways to add this header to your page.  First you can add it via ASP.NET:

Context.Response.AddHeader("x-frame-options", "DENY");
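
If you would rather set it for every page from code instead of page-by-page, one option (just a sketch) is to add it in Global.asax:

// Sketch: add the header to every response the application serves
void Application_BeginRequest(object sender, EventArgs e)
{
    Response.AddHeader("x-frame-options", "DENY");
}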

Or you could add it to all pages via IIS.  To do this open the IIS Manager and select the site in question.  Then select the Feature “HTTP Response Headers”:

image

Select Add… and then set the name to x-frame-options and the value to DENY:

image

By keeping in mind these options you can do a lot to prevent any exploits that use frames.

Generating Federation Metadata Dynamically

In a previous post we looked at what it takes to actually write a Security Token Service.  If we knew what the STS offered and required already, we could set up a relying party relatively easily with that setup.  However, we don’t always know what is going on.  That’s the purpose of federation metadata.  It gives us a basic breakdown of the STS so we can interact with it.

Now, if we are building a custom STS we don’t have anything that is creating this metadata.  We could do it manually by hardcoding stuff in an xml file and then signing it, but that gets ridiculously tedious after you have to make changes for the third or fourth time – which will happen.  A lot.  The better approach is to generate the metadata automatically.  So in this post we will do just that.

The first thing you need to do is create an endpoint.  There is a well-known path of /FederationMetadata/2007-06/FederationMetadata.xml that is generally used, so let’s use that.  There are a lot of options for generating dynamic content, and in Programming Windows Identity Foundation, Vittorio uses a WCF Service:

[ServiceContract]
public interface IFederationMetadata
{
    [OperationContract]
    [WebGet(UriTemplate = "2007-06/FederationMetadata.xml")]
    XElement FederationMetadata();
}

It’s a great approach, but for some reason I prefer the way that Dominick Baier creates the endpoint in StarterSTS.  He uses an IHttpHandler and a web.config entry to create a handler:

<location path="FederationMetadata/2007-06">
  <system.webServer>
    <handlers>
      <add
        name="MetadataGenerator"
        path="FederationMetadata.xml"
        verb="GET"
        type="Syfuhs.TokenService.WSTrust.FederationMetadataHandler" />
    </handlers>
  </system.webServer>
  <system.web>
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
</location>

As such, I’m going to go that route.  Let’s take a look at the implementation for the handler:

using System.Web;

namespace Syfuhs.TokenService.WSTrust
{
    public class FederationMetadataHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ClearHeaders();

            context.Response.Clear();
            context.Response.ContentType = "text/xml";

            // Write the signed metadata document directly to the response stream
            MyAwesomeTokenServiceConfiguration.Current
                .SerializeMetadata(context.Response.OutputStream);
        }

        public bool IsReusable { get { return false; } }
    }
}

All the handler is doing is writing metadata out to a stream, which in this case is the response stream.  You can see that it is doing this through the MyAwesomeTokenServiceConfiguration class which we created in the previous article.  The SerializeMetadata method creates an instance of a MetadataSerializer and writes an entity to the stream:

public void SerializeMetadata(Stream stream)
{
    MetadataSerializer serializer = new MetadataSerializer();
    serializer.WriteMetadata(stream, GenerateEntities());
}

The entities are generated through a collection of tasks:

private EntityDescriptor GenerateEntities()
{
    if (entity != null)
        return entity;

    SecurityTokenServiceDescriptor sts = new SecurityTokenServiceDescriptor();

    FillOfferedClaimTypes(sts.ClaimTypesOffered);

    FillEndpoints(sts);
    FillSupportedProtocols(sts);
    FillSigningKey(sts);

    entity = new EntityDescriptor(new EntityId(string.Format("https://{0}", host)))
    {
        SigningCredentials = this.SigningCredentials
    };

    entity.RoleDescriptors.Add(sts);

    return entity;
}

The entity is generated, and an object is created to describe the STS called a SecurityTokenServiceDescriptor.  At this point it’s just a matter of sticking in the data and defining the credentials used to sign the metadata:

private void FillSigningKey(SecurityTokenServiceDescriptor sts)
{
    KeyDescriptor signingKey = new KeyDescriptor(this.SigningCredentials.SigningKeyIdentifier)
    {
        Use = KeyType.Signing
    };

    sts.Keys.Add(signingKey);
}

private void FillSupportedProtocols(SecurityTokenServiceDescriptor sts)
{
    sts.ProtocolsSupported.Add(new System.Uri(WSFederationConstants.Namespace));
}

private void FillEndpoints(SecurityTokenServiceDescriptor sts)
{
    EndpointAddress activeEndpoint = new EndpointAddress(
        string.Format("https://{0}/TokenService/activeSTS.svc", host));

    sts.SecurityTokenServiceEndpoints.Add(activeEndpoint);
    sts.TargetScopes.Add(activeEndpoint);
}


private void FillOfferedClaimTypes(ICollection<DisplayClaim> claimTypes)
{
    claimTypes.Add(new DisplayClaim(ClaimTypes.Name, "Name", ""));
    claimTypes.Add(new DisplayClaim(ClaimTypes.Email, "Email", ""));
    claimTypes.Add(new DisplayClaim(ClaimTypes.Role, "Role", ""));
}

That in a nutshell is how to create a basic metadata document as well as sign it.  There is a lot more information you can put into this, and you can find more things to work with in the Microsoft.IdentityModel.Protocols.WSFederation.Metadata namespace.

The Basics of Building a Security Token Service

Last week at TechDays in Toronto I ran into a fellow I worked with while I was at Woodbine.  He works with a consulting firm Woodbine uses, and he caught my session on Windows Identity Foundation.  His thoughts were (essentially—paraphrased) that the principle of Claims Authentication was sound and a good idea, however implementing it requires a major investment.  Yes.  Absolutely.  You will essentially be adding a new tier to the application.  Hmm.  I’m not sure if I can get away with that analogy.  It will certainly feel like you are adding a new tier anyway.

What strikes me as the main investment is the Security Token Service.  When you break it down, there are a lot of moving parts in an STS.  In a previous post I asked what it would take to create something similar to ADFS 2.  I said it would be fairly straightforward, and broke down the parts as well as what would be required of them.  I listed:

  • Token Services
  • A Windows Authentication end-point
  • An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
  • An application management tool (MMC snap-in and PowerShell cmdlets)
  • Proxy Services (Allows requests to pass NAT’ed zones)

These aren’t all that hard to develop.  With the exception of the proxy services and token service itself, there’s a good chance we have created something similar to each one if user authentication is part of an application.  We have the authentication endpoint: a login form to do SQL Authentication, or the Windows Authentication Provider for ASP.NET.  We have the attribute store and something like a claims mapper: Active Directory, SQL databases, etc.  We even have an application management tool: anything you used to manage users in the first place.  This certainly doesn’t get us all the way there, but they are good starting points.

Going back to my first point, the STS is probably the biggest investment.  However, it’s kind of trivial to create an STS using WIF.  I say that with a big warning though: an STS is a security system.  Securing such a system is NOT trivial.  Writing your own STS probably isn’t the best way to approach this.  You would probably be better off to use an STS like ADFS.  With that being said it’s good to know what goes into building an STS, and if you really do have the proper resources to develop one, as well as do proper security testing (you probably wouldn’t be reading this article on how to do it in that case…), go for it.

For the sake of simplicity I’ll be going through the Fabrikam Shipping demo code since they did a great job of creating a simple STS.  The fun bits are in the Fabrikam.IPSts project under the Identity folder.  The files we want to look at are CustomSecurityTokenService.cs, CustomSecurityTokenServiceConfiguration.cs, and the default.aspx code file.  I’m not sure I like the term “configuration”, as the way this is built strikes me as factory-ish.

image

The process is pretty simple.  A request is made to default.aspx which passes the request to FederatedPassiveSecurityTokenServiceOperations.ProcessRequest() as well as a newly instantiated CustomSecurityTokenService object by calling CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService().
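
Roughly, the page code-behind boils down to something like this (a sketch based on the standard WIF passive STS wiring; the Fabrikam sample differs in the details):

protected void Page_PreRender(object sender, EventArgs e)
{
    // Build the STS from its configuration
    SecurityTokenService sts =
        CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService();

    // WIF parses the WS-Federation request, calls into the STS to issue the token,
    // and writes the sign-in response back to the browser
    FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
        Request, User as IClaimsPrincipal, sts, Response);
}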

The configuration class contains configuration data for the STS (hence the name) like the signing certificate, but it also instantiates an instance of the STS using that configuration.  The code for it is simple:

namespace Microsoft.Samples.DPE.Fabrikam.IPSts
{
    using Microsoft.IdentityModel.Configuration;
    using Microsoft.IdentityModel.SecurityTokenService;

    internal class CustomSecurityTokenServiceConfiguration
        : SecurityTokenServiceConfiguration
    {
        private static CustomSecurityTokenServiceConfiguration current;

        private CustomSecurityTokenServiceConfiguration()
        {
            this.SecurityTokenService = typeof(CustomSecurityTokenService);
            this.SigningCredentials =
                new X509SigningCredentials(this.ServiceCertificate);
            this.TokenIssuerName = "https://ipsts.fabrikam.com/";
        }

        public static CustomSecurityTokenServiceConfiguration Current
        {
            get
            {
                if (current == null)
                {
                    current = new CustomSecurityTokenServiceConfiguration();
                }

                return current;
            }
        }
    }
}

It has a base type of SecurityTokenServiceConfiguration and all it does is set the custom type for the new STS, the certificate used for signing, and the issuer name.  It then lets the base class handle the rest.  Then there is the STS itself.  It’s dead simple.  The custom class has a base type of SecurityTokenService and overrides a couple methods.  The important method it overrides is GetOutputClaimsIdentity():

protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    Claim name = inputIdentity.Claims.Single(claim =>
        claim.ClaimType == ClaimTypes.Name);
    Claim email = new Claim(ClaimTypes.Email,
        Membership.Provider.GetUser(name.Value, false).Email);
    string[] roles = Roles.Provider.GetRolesForUser(name.Value);

    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);
    issuedIdentity.Claims.Add(email);

    foreach (var role in roles)
    {
        var roleClaim = new Claim(ClaimTypes.Role, role);
        issuedIdentity.Claims.Add(roleClaim);
    }

    return issuedIdentity;
}

It gets the authenticated user, grabs all the roles from the RolesProvider, and generates a bunch of claims then returns the identity.  Pretty simple.

At this point you’ve just moved the authentication and Roles stuff away from the application.  Nothing has really changed data-wise.  If you only cared about roles, name, and email you are done.  If you needed something more you could easily add in the logic to grab the values you needed. 
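
For example, inside GetOutputClaimsIdentity you could add something along these lines (the repository and claim type URI here are made up for illustration):

// Hypothetical: pull an extra attribute from another store and surface it as a claim
string department = new EmployeeRepository().GetDepartment(name.Value);
issuedIdentity.Claims.Add(
    new Claim("http://schemas.fabrikam.com/claims/department", department));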

By no means is this production ready, but it is a good basis for how the STS creates claims.