Goodbye CardSpace; Hello U-Prove

Other possible titles:

  • So Long and Thanks for all the Identity
  • Goodbye awesome technology; Hello Awesomer Technology
  • CardSpace? What’s CardSpace?

Over on the Claims Based Identity Blog, Microsoft announced that they have stopped development of CardSpace v2.  CardSpace was an excellent technology, but nobody used it.  Some of us saw the writing on the wall when Microsoft paused development last year, and kept quiet about why.  For better or for worse, Microsoft has stopped development and moved on to a different technology affectionately called U-Prove.

U-Prove is an advanced cryptographic technology that, combined with existing standards-based identity solutions, overcomes this long-standing dilemma between identity assurance and privacy. This unlocks a broad range of scenarios that have historically been out of the reach of both the private and public sectors - cases where both verified identity information and privacy are required.

So what exactly does this mean?  Central to U-Prove is something called an Agent:

Specifically, the Agent provides a mechanism to separate the retrieval of identity information from trusted organizations from the release of this information to destination sites. The underlying mechanisms help prevent the issuing organizations from tracking where or when this information is used, and to help prevent different destination sites from trivially linking users’ actions together.

Alright, what does that really mean?

Short answer: it’s kind of like CardSpace, except you—the developer—manage the application that controls the flow of claims from IdP to RP.

The goal is to enable stronger control of the release of private data to relying parties.

For more information check out the FAQ section on Connect.

Salesforce.com Single Sign On using ADFS v2

For the last few years ObjectSharp has been using Salesforce.com to help manage parts of the business.  As business increased, our reliance on Salesforce increased.  More and more users started getting added, and as all stories go, these accounts became one more burden to manage.

This is the universal identity problem – too many user accounts for the same person.  As such, one of my internal goals here is to simplify identity at ObjectSharp.

While working on another internal project with Salesforce, I got to thinking about how it manages users.  It turns out Salesforce allows you to set it up as a SAML relying party, and ADFS v2 supports acting as a SAML IdP.  Theoretically we have both sides of the puzzle, but how does it work?

Well, first things first.  I checked out the security section of the configuration portal:

[screenshot: the security section of the Salesforce configuration portal]

There was a Single Sign-On section, so I followed that and was given a pretty simple screen:

[screenshot: the Salesforce Single Sign-On settings page]

There isn’t much here to setup.  Going down the options, here is what I came up with:

SAML Version

I know from previous experience that ADFS supports version 2 of the SAML Protocol.

Issuer

We need the URI of the IdP, which in this case is going to be ADFS.  Within the ADFS MMC snap-in, if you right-click the Service node you can access the properties:

[screenshot: right-clicking the Service node in the ADFS MMC snap-in]

In the properties dialog there is a textbox allowing you to change the Federation Service Identifier:

[screenshot: the Federation Service Identifier in the Federation Service properties dialog]

We want that URI.

Within Salesforce we set the Issuer to the identifier URI.

Identity Provider Certificate

Salesforce can’t just go and accept any token.  It needs to only accept tokens from my organization.  Therefore I upload the public key of the certificate ADFS uses to sign my tokens.  You can access that certificate by going to ADFS and selecting the Certificates node:

[screenshot: the Certificates node in the ADFS MMC snap-in]

Once in there you can select the signing certificate:

[screenshot: the token-signing certificate]

Just export the certificate and upload it to Salesforce.
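If you’d rather script the export than click through the MMC, something along these lines works too.  This is only a sketch using the standard .NET certificate APIs; the store location, subject name, and output path are assumptions about your environment:

using System.IO;
using System.Security.Cryptography.X509Certificates;

public static class CertExporter
{
    public static void ExportSigningCert()
    {
        // Store location and subject name are assumptions - adjust for your ADFS box
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);

        try
        {
            var certs = store.Certificates.Find(
                X509FindType.FindBySubjectName, "adfs.example.com", false);

            // X509ContentType.Cert exports only the public portion of the certificate
            if (certs.Count > 0)
                File.WriteAllBytes(@"C:\temp\adfs-signing.cer",
                    certs[0].Export(X509ContentType.Cert));
        }
        finally
        {
            store.Close();
        }
    }
}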

Custom Error URL

If the login fails for some reason, what URL should it go to?  If you leave it blank, it redirects to a generic Salesforce error page.

SAML User ID Type

This option is asking what information we are giving to Salesforce so it can correlate that information to one of its internal IDs.  Since for this demo I was just using my email address, I will leave it set to “Assertion contains User’s salesforce.com username”.

SAML User ID Location

This option is asking where the above ID is located within the SAML token.  By default it will accept the NameIdentifier, but I don’t really want to pass my email as a name, so I will select “User ID is in an Attribute element”.

Now I have to specify what claim type the email address is.  In this case I will go with the default for ADFS, which is http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.
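For reference, once everything is wired up the email should come across in the assertion looking roughly like this.  This is a sketch only; the exact namespace prefixes and NameFormat attributes depend on how ADFS emits the token, and the address is a placeholder:

<saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress">
    <saml:AttributeValue>someone@objectsharp.com</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>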

On to Active Directory Federation Services

We are about halfway done.  Now we just need to tell ADFS about Salesforce.  It’s surprisingly simple.

Once you’ve saved the Salesforce settings, you are given a button to download the metadata:

[screenshot: the Download Metadata button in Salesforce]

Selecting that will let you download an XML document containing metadata about Salesforce as a relying party.

Telling ADFS about a relying party is pretty straightforward, and you can find the detailed steps in a previous post I wrote about halfway through the article.

Once you’ve added the relying party, all you need to do is create a rule that returns the user’s email address as the above claim type:

[screenshot: the claim rule returning the user’s email address]
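By the way, if you prefer writing the rule by hand instead of using the LDAP attribute template, the equivalent custom rule looks something like this.  A sketch in the ADFS claim rule language; the incoming claim and the mail attribute assume a standard Active Directory setup:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory",
          types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"),
          query = ";mail;{0}",
          param = c.Value);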

Everything should be properly configured at this point.  Now we need to test it.

When I first started out with ADFS and SAML early last year, I couldn’t figure out how to get ADFS to post the token to a relying party.  SAML is not a protocol that I’m very familiar with, so I felt kinda dumb when I realized there is an ADFS URL you need to hit.  In this case it’s https://[adfs.fqdn]/adfs/ls/IdpInitiatedSignOn.aspx.

It brings you to a form page to select which application to post a token to:

[screenshot: the IdP-initiated sign-on page with a relying party dropdown]

Select your relying party and then go.

It will POST back to an ADFS endpoint, and then POST the token to the Salesforce URL from the metadata provided earlier.  Once the POSTing has quieted down, you end up on your Salesforce dashboard:

[screenshot: the Salesforce dashboard after sign-on]

All in all, it took about 10 minutes to get everything working.

The Problem with Claims-Based Authentication

Homer Simpson was once quoted as saying “To alcohol! The cause of, and solution to, all of life's problems”.  I can’t help but borrow from it and say that Claims-Based Authentication is the cause of, and solution to, most problems with identity consumption in applications.

When people first come across Claims-Based Authentication there are two extremes of responses:

  • Total amazement at the architectural simplicity and brilliance
  • Fear and hatred of the idea (don’t you dare take away my control of the passwords)

Each has some truth to it, but over time you realize all the problems sit somewhere between the two extremes.  It’s this middle ground where people run into the biggest problems.

Over the last few months there have been quite a few people talking about the pains of OpenID/OAuth, which, when you get right down to the principle of it, is CBA.  There are some differences such as terminology and implementation, but both follow the Trusted Third Party Authentication model, and that’s really what CBA is all about.

Rob Conery wrote what some people now see as an infamous post on why he hates OpenID.  He thinks it’s a nightmare for various reasons.  The basic list is as follows:

  • As a customer, since you can have multiple OpenID providers that the relying party doesn’t necessarily know about, how do you know which one you originally used to setup an account?
  • If a customer logs in with the wrong OpenID, they can’t access whatever they’ve paid for.  This pisses off said customer.
  • If your customer used the wrong OpenID, how do you, as the business owner, fix that problem? 
    • Is it worth fixing? 
    • Is it worth the effort of writing code to make this a simpler process?
  • “I'll save you the grumbling rant, but coding up Open ID stuff is utterly mind-numbing frustration”.  This says it all.
  • Since you don’t want to write the code, you get someone else to do it.  You find a SaaS provider.  The provider WILL go down.  The law of averages, Murphy, and simple irony will cause it to go down.
  • The standard is dying.  Facebook, Google, Microsoft, Twitter, and Joe-Blow all have their own particular ways of implementing the standard.  Do you really want to keep up with that?
  • Dealing with all of this hassle means you aren’t spending your time creating content, which does nothing for the customer.

The end result is that he is looking to drop support and bring back traditional authentication models, i.e. storing usernames and passwords in a database that you control.

Following the Conery kerfuffle, 37signals made an announcement that they were going to drop OpenID support for their products.  They had a pretty succinct reason for doing so:

Fewer than 1% of all 37signals users are currently using OpenID. After consulting with a fair share of them, it seems that most were doing so only because that used to be the only way to get single sign-on for our applications.

I don’t know how many customers they have, but 1% is nowhere near a high enough number to justify keeping something alive in any case.

So we have a problem now, don’t we?  On paper Claims-Based Authentication is awesome, but in practice it’s a pain in the neck.  Well, I suppose that’s the case with most technologies. 

I think one of the problems with implementations of new technologies is the lack of guidance.  Trusted third party authentication isn’t really all that new.  Kerberos does it, and Kerberos has been around for nearly 30 years.  OpenID, OAuth, and WS-Trust/WS-Federation, on the other hand, haven’t been around all that long.  Given that, I have a bit of guidance that I’ve learned from the history of Kerberos.

First: Don’t trust random providers.

The biggest problem with OpenID is what’s known as the NASCAR problem.  This is another way of referring to Rob’s first problem: how do you know which provider to use?  Most people recognize logos, so show them a bunch of logos and hopefully they will pick the one they used.  Hoping your customer chooses the right one every time is like hoping you can hit a bullseye from 1000 yards, blindfolded.  It could happen.  It won’t.  But it could.

The solution to this is simple: do not trust every provider.  Have a select few providers you will accept, and make them sufficiently distinguishable.  My bank as a provider is going to be WAY different than using Google as a provider.  At least, I would hope that’s the case.

Second: Don’t let the user log in with the wrong account.

While you are at it, try moving the oceans using this shot glass.  Seriously though, if you follow the first step, this one is a byproduct.  Think about it: would a customer be more likely to log into their ISP billing system with their Google account, or their bank’s account?  That may be a bad example in practice because I would never use my bank as a provider, but it’s a great example of being sufficiently distinguishable.  You will always have customers who choose wrong, but the harder you make it for them to choose the wrong thing, the closer you are to hitting that bullseye.

Third: Use Frameworks.  Don’t roll your own.

One of the most important axioms in computer security is don’t roll your own [framework/authn/authz/crypto/etc].  Seriously.  Stop it.  You WILL do it wrong.  I will too.  Use a trusted OpenID/OpenAuth framework, or use WIF.

Fourth: Choose a standard that won’t change on you at the whim of a vendor.

WS-Trust/WS-Federation and SAML are great examples of standards that don’t change willy-nilly.  OpenID/OAuth are not.

Fifth: Adopt a provider that already has a large user base, and then keep it simple.

This is an extension of the first rule.  Pick a provider that already has a massive number of users.  Live ID is a great example.  Google Accounts is another.  Stick to the likes of Twitter or Facebook.  If you are going to choose which providers to accept, make sure you pick the ones that people actually use.  This may seem obvious, but just remember it when you are presented with Joe’s Fish and Chips and Federated Online ID Provider.

Finally: Perhaps the biggest thing I can recommend is to keep it simple.  Start small.  Know your providers, and trust your providers.

Keep in mind that everything I’ve said above does not pertain to any particular technology, but to any technology that uses a Trusted Third Party Authentication model.

It is really easy to get wide-eyed and believe you can develop a working system that accepts every form of identification under the sun, all the while keeping it manageable.  Don’t.  Keep it simple and start small.

Authentication in an Active Claims Model

When working with Claims Based Authentication, a lot of things are similar between the two different models, Active and Passive.  However, there are a few cases where things differ… a lot.  The biggest of course is how a Request for Security Token (RST) is authenticated.  In a passive model the user is given a web page where they essentially have free rein over how credentials are handled.  Once the credentials have been received and authenticated by the web server, the server generates an identity and passes it off to SecurityTokenService.Issue(…), which does its thing by gathering claims, packaging them up into a token, and POSTing the token back to the Relying Party.

Basically we are handling authentication the same way any other ASP.NET application would: using the Membership provider, funnelling all anonymous users to the login page, and then redirecting back to the STS.  To hand off to the STS, we can just call:

FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
    HttpContext.Current.Request,
    HttpContext.Current.User,
    MyTokenServiceConfiguration.Current.CreateSecurityTokenService(),
    HttpContext.Current.Response);

However, it’s a little different with the active model.

Web services manage identity via tokens, but they differ from the passive model because everything is passed via tokens, including credentials.  The client gathers the credentials and packages them into a SecurityToken object, which is serialized and passed to the STS.  The STS deserializes the token and passes it off to a SecurityTokenHandler.  This security token handler validates the credentials, generates an identity, and pushes it up the call stack to the STS.
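To make that a bit more concrete, here is roughly what the client side looks like using WIF’s WSTrustChannelFactory.  This is a minimal sketch: the binding choice, URLs, and credentials are placeholders, and error handling is omitted.

// Sketch: requesting a token from an active STS with username/password credentials.
// Uses Microsoft.IdentityModel.Protocols.WSTrust(.Bindings) and System.ServiceModel.
// The credentials below end up packaged as a UserNameSecurityToken on the wire.
var factory = new WSTrustChannelFactory(
    new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
    new EndpointAddress("https://sts.example.com/TokenService/activeSTS.svc"));

factory.TrustVersion = TrustVersion.WSTrust13;
factory.Credentials.UserName.UserName = "someuser";
factory.Credentials.UserName.Password = "notarealpassword";

var rst = new RequestSecurityToken
{
    RequestType = WSTrust13Constants.RequestTypes.Issue,
    AppliesTo = new EndpointAddress("https://rp.example.com/")
};

// The STS authenticates the credentials via its token handler and returns a token
var channel = (WSTrustChannel)factory.CreateChannel();
SecurityToken token = channel.Issue(rst);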

Much like with ASP.NET, there is a built-in handler that validates username/password combinations against the Membership provider, but you are limited to the basic functionality of the provider.  90% of the time this is probably just fine.  Other times you may need to create your own SecurityTokenHandler.  It’s actually not that hard to do.

First you need to know what sort of token is being passed across the wire.  The big three are:

  • UserNameSecurityToken – Has a username and password pair
  • WindowsSecurityToken – Used for Windows authentication using NTLM or Kerberos
  • X509SecurityToken – Uses an X.509 certificate for authentication

Each is pretty self explanatory.

Some others out of the box are:

[screenshot: the SecurityToken-derived types in Microsoft.IdentityModel, as seen in Reflector]

Reflector is an awesome tool.  Just sayin’.

Now that we know what type of token we are expecting we can build the token handler.  For the sake of simplicity let’s create one for the UserNameSecurityToken.

To do that we create a new class derived from Microsoft.IdentityModel.Tokens.UserNameSecurityTokenHandler.  We could start at SecurityTokenHandler, but it’s an abstract class and requires a lot to get it working.  Suffice it to say, it’s mostly boilerplate code.

We now need to override a method and a property: ValidateToken(SecurityToken token) and TokenType.

TokenType is used later on to tell what kind of token the handler can actually validate.  More on that in a minute.

Overriding ValidateToken is fairly trivial*.  This is where we actually handle the authentication.  However, it returns a ClaimsIdentityCollection instead of a bool, so if the credentials are invalid we need to throw an exception; I would recommend the SecurityTokenValidationException.  Once the authentication is done we get the identity for the credentials and bundle it up into a ClaimsIdentityCollection.  We can do that by creating an IClaimsIdentity and passing it into the constructor of a ClaimsIdentityCollection.

public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
{
    if (token == null)
        throw new ArgumentNullException("token");

    // Make sure we actually received a username/password token
    UserNameSecurityToken userToken = token as UserNameSecurityToken;

    if (userToken == null)
        throw new SecurityTokenException("Expected a UserNameSecurityToken.");

    string username = userToken.UserName;
    string pass = userToken.Password;

    // Validate the credentials against the ASP.NET Membership provider
    if (!Membership.ValidateUser(username, pass))
        throw new SecurityTokenValidationException("Username or password is wrong.");

    // Bundle the authenticated identity into a claims collection for the STS
    IClaimsIdentity ident = new ClaimsIdentity();
    ident.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, username));

    return new ClaimsIdentityCollection(new IClaimsIdentity[] { ident });
}

Next we need to set the TokenType property:

public override Type TokenType
{
    get
    {
        return typeof(UserNameSecurityToken);
    }
}

This property is used as a way to tell its calling parent that it can validate/authenticate any tokens of the type it returns.  The web service that acts as the STS loads a collection of SecurityTokenHandlers as part of its initialization, and when it receives a token it iterates through the collection looking for one that can handle it.
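Conceptually, that lookup boils down to something like this.  To be clear, this is not WIF’s actual internal code, just a sketch of the idea; GetTokenFromRequest() is a made-up helper.

// Conceptual sketch only - WIF's real plumbing is more involved
SecurityToken incoming = GetTokenFromRequest(); // hypothetical helper

foreach (SecurityTokenHandler handler in configuration.SecurityTokenHandlers)
{
    // TokenType (and CanValidateToken) is how a handler advertises what it understands
    if (handler.CanValidateToken && handler.TokenType == incoming.GetType())
    {
        ClaimsIdentityCollection identities = handler.ValidateToken(incoming);
        // ... the STS continues issuing a token based on these identities
        break;
    }
}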

To add the handler to the collection you can add it via configuration, or if you are doing a lot of low-level work you can add it to the SecurityTokenServiceConfiguration in the HostFactory for the service:

securityTokenServiceConfiguration.SecurityTokenHandlers.Add(new MyAwesomeUserNameSecurityTokenHandler());

To add it via configuration you first need to remove any other handlers that can validate the same type of token:

<microsoft.identityModel>
  <service>
    <securityTokenHandlers>
      <remove type="Microsoft.IdentityModel.Tokens.WindowsUserNameSecurityTokenHandler,
                    Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <remove type="Microsoft.IdentityModel.Tokens.MembershipUserNameSecurityTokenHandler,
                    Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add type="Syfuhs.IdentityModel.Tokens.MyAwesomeUserNameSecurityTokenHandler, Syfuhs.IdentityModel" />
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>

That’s pretty much all there is to it.  Here is the class for the sake of completeness:

using System;
using System.IdentityModel.Tokens;
using System.Web.Security;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSIdentity;
using Microsoft.IdentityModel.Tokens;

namespace Syfuhs.IdentityModel.Tokens
{
    public class MyAwesomeUserNameSecurityTokenHandler : UserNameSecurityTokenHandler
    {
        public override bool CanValidateToken { get { return true; } }

        public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
        {
            if (token == null)
                throw new ArgumentNullException("token");

            UserNameSecurityToken userToken = token as UserNameSecurityToken;

            if (userToken == null)
                throw new SecurityTokenException("Expected a UserNameSecurityToken.");

            string username = userToken.UserName;
            string pass = userToken.Password;

            if (!Membership.ValidateUser(username, pass))
                throw new SecurityTokenValidationException("Username or password is wrong.");

            IClaimsIdentity ident = new ClaimsIdentity();
            ident.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, username));

            return new ClaimsIdentityCollection(new IClaimsIdentity[] { ident });
        }
    }
}

* Trivial in the development sense, not trivial in the security sense.

Preventing Frame Exploits in a Passive Claims Model

At a presentation a few weeks ago someone asked me about capturing session details during authentication at an STS by way of frames and JavaScript.  To paraphrase the question: “What prevents a malicious developer from sticking an RP within an iframe, causing a redirect to an STS, getting some user to log in, and then capturing the details through JavaScript from the parent page?”  There are a couple of ways this problem can be solved.  It’s a defense-in-depth problem: on their own, each piece won’t close every attack vector, but when used together you end up with a pretty solid solution.

  • First, a lot of newer browsers will actually prevent cross-frame JavaScript calls when SSL is involved.  Depending on the browser, the JavaScript will throw the equivalent of an Access Denied exception.  Older browser versions may not do this though.
  • Second, some browsers will not allow you to host an SSL page in a frame if the parent page is not using SSL.  The easy fix for the malicious developer is to simply use SSL for the parent site, but that could be problematic as the CAs theoretically verify the sites requesting certificates.
  • Third, you could write some JavaScript for the STS to bust out of the frame.  It would look something like this:

if (top != self)
{
    try
    {
        top.location.replace(self.location.href);
    }
    catch (e)
    {
    }
}

The problem with this is that it wouldn’t work if the browser has JavaScript disabled.

  • Fourth, there is a new HTTP header that Microsoft introduced in IE 8 that tells the browser that if the requested page is hosted in a frame it should simply stop processing the request.  Safari and Chrome support it natively, and Firefox supports it with the NoScript add-on.  The header is called X-Frame-Options and it can have two values: “DENY”, which prevents all framing, and “SAMEORIGIN”, which allows the page to be rendered only if the parent page comes from the same origin, e.g. the parent is somesite.com/parent and the framed page is somesite.com/login.

There are a couple of ways to add this header to your page.  First you can add it via ASP.NET:

Context.Response.AddHeader("x-frame-options", "DENY");

Or you could add it to all pages via IIS.  To do this, open the IIS Manager and select the site in question.  Then select the “HTTP Response Headers” feature:

[screenshot: the HTTP Response Headers feature in IIS Manager]

Select Add… and then set the name to x-frame-options and the value to DENY:

[screenshot: adding the x-frame-options header in IIS Manager]

By keeping these options in mind, you can do a lot to prevent exploits that use frames.

Generating Federation Metadata Dynamically

In a previous post we looked at what it takes to actually write a Security Token Service.  If we already knew what the STS offered and required, we could set up a relying party relatively easily.  However, we don’t always know what is going on.  That’s the purpose of federation metadata: it gives us a basic breakdown of the STS so we can interact with it.

Now, if we are building a custom STS we don’t have anything that creates this metadata.  We could do it manually by hardcoding stuff in an XML file and then signing it, but that gets ridiculously tedious after you have to make changes for the third or fourth time – which will happen.  A lot.  The better approach is to generate the metadata automatically.  So in this post we will do just that.

The first thing you need to do is create an endpoint.  There is a well-known path of /FederationMetadata/2007-06/FederationMetadata.xml that is generally used, so let’s use that.  There are a lot of options for generating dynamic content, and in Programming Windows Identity Foundation, Vittorio uses a WCF service:

[ServiceContract]
public interface IFederationMetadata
{
    [OperationContract]
    [WebGet(UriTemplate = "2007-06/FederationMetadata.xml")]
    XElement FederationMetadata();
}

It’s a great approach, but for some reason I prefer the way that Dominick Baier creates the endpoint in StarterSTS.  He uses an IHttpHandler and a web.config entry to create a handler:

<location path="FederationMetadata/2007-06">
  <system.webServer>
    <handlers>
      <add
        name="MetadataGenerator"
        path="FederationMetadata.xml"
        verb="GET"
        type="Syfuhs.TokenService.WSTrust.FederationMetadataHandler" />
    </handlers>
  </system.webServer>
  <system.web>
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
</location>

As such, I’m going to go that route.  Let’s take a look at the implementation for the handler:

using System.Web;

namespace Syfuhs.TokenService.WSTrust
{
    public class FederationMetadataHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ClearHeaders();

            context.Response.Clear();
            context.Response.ContentType = "text/xml";

            MyAwesomeTokenServiceConfiguration
                .Current.SerializeMetadata(context.Response.OutputStream);
        }

        public bool IsReusable { get { return false; } }
    }
}

All the handler is doing is writing metadata out to a stream, which in this case is the response stream.  You can see that it is doing this through the MyAwesomeTokenServiceConfiguration class, which we created in the previous article.  The SerializeMetadata method creates an instance of a MetadataSerializer and writes an entity to the stream:

public void SerializeMetadata(Stream stream)
{
    MetadataSerializer serializer = new MetadataSerializer();
    serializer.WriteMetadata(stream, GenerateEntities());
}

The entities are generated through a collection of tasks:

private EntityDescriptor GenerateEntities()
{
    if (entity != null)
        return entity;

    SecurityTokenServiceDescriptor sts = new SecurityTokenServiceDescriptor();

    FillOfferedClaimTypes(sts.ClaimTypesOffered);

    FillEndpoints(sts);
    FillSupportedProtocols(sts);
    FillSigningKey(sts);

    entity = new EntityDescriptor(new EntityId(string.Format("https://{0}", host)))
    {
        SigningCredentials = this.SigningCredentials
    };

    entity.RoleDescriptors.Add(sts);

    return entity;
}

An entity is generated, and an object called a SecurityTokenServiceDescriptor is created to describe the STS.  At this point it’s just a matter of sticking in the data and defining the credentials used to sign the metadata:

private void FillSigningKey(SecurityTokenServiceDescriptor sts)
{
    KeyDescriptor signingKey =
        new KeyDescriptor(this.SigningCredentials.SigningKeyIdentifier)
        {
            Use = KeyType.Signing
        };

    sts.Keys.Add(signingKey);
}

private void FillSupportedProtocols(SecurityTokenServiceDescriptor sts)
{
    sts.ProtocolsSupported.Add(new System.Uri(WSFederationConstants.Namespace));
}

private void FillEndpoints(SecurityTokenServiceDescriptor sts)
{
    EndpointAddress activeEndpoint =
        new EndpointAddress(string.Format("https://{0}/TokenService/activeSTS.svc", host));

    sts.SecurityTokenServiceEndpoints.Add(activeEndpoint);
    sts.TargetScopes.Add(activeEndpoint);
}

private void FillOfferedClaimTypes(ICollection<DisplayClaim> claimTypes)
{
    claimTypes.Add(new DisplayClaim(ClaimTypes.Name, "Name", ""));
    claimTypes.Add(new DisplayClaim(ClaimTypes.Email, "Email", ""));
    claimTypes.Add(new DisplayClaim(ClaimTypes.Role, "Role", ""));
}

That in a nutshell is how to create a basic metadata document as well as sign it.  There is a lot more information you can put into this, and you can find more things to work with in the Microsoft.IdentityModel.Protocols.WSFederation.Metadata namespace.
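For example, if your STS also supports passive (browser-based) clients, you could advertise a WS-Federation endpoint in the same descriptor.  A sketch, assuming the descriptor exposes a passive endpoint collection as in WIF’s metadata object model; the page path is a placeholder:

private void FillPassiveEndpoints(SecurityTokenServiceDescriptor sts)
{
    // Hypothetical: advertise a passive (WS-Federation) sign-in endpoint as well
    EndpointAddress passiveEndpoint =
        new EndpointAddress(string.Format("https://{0}/TokenService/passiveSTS.aspx", host));

    sts.PassiveRequestorEndpoints.Add(passiveEndpoint);
}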

The Basics of Building a Security Token Service

Last week at TechDays in Toronto I ran into a fellow I worked with while I was at Woodbine.  He works with a consulting firm Woodbine uses, and he caught my session on Windows Identity Foundation.  His thoughts, paraphrased, were that the principle of Claims Authentication was sound and a good idea, but that implementing it requires a major investment.  Yes.  Absolutely.  You will essentially be adding a new tier to the application.  Hmm.  I’m not sure if I can get away with that analogy.  It will certainly feel like you are adding a new tier anyway.

What strikes me as the main investment is the Security Token Service.  When you break it down, there are a lot of moving parts in an STS.  In a previous post I asked what it would take to create something similar to ADFS 2.  I said it would be fairly straightforward, and broke down the parts as well as what would be required of them.  I listed:

  • Token Services
  • A Windows Authentication end-point
  • An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
  • An application management tool (MMC snap-in and PowerShell cmdlets)
  • Proxy Services (allows requests to pass through NAT’ed zones)

These aren’t all that hard to develop.  With the exception of the proxy services and the token service itself, there’s a good chance we have already created something similar to each one in any application where user authentication is involved.  We have the authentication endpoint: a login form doing SQL authentication, or the Windows Authentication provider for ASP.NET.  We have the attribute store and something like a claims mapper: Active Directory, SQL databases, etc.  We even have an application management tool: whatever you used to manage users in the first place.  This certainly doesn’t get us all the way there, but they are good starting points.

Going back to my first point, the STS is probably the biggest investment.  However, it’s kind of trivial to create an STS using WIF.  I say that with a big warning though: an STS is a security system, and securing such a system is NOT trivial.  Writing your own STS probably isn’t the best way to approach this; you would probably be better off using an STS like ADFS.  With that being said, it’s good to know what goes into building an STS, and if you really do have the proper resources to develop one, as well as do proper security testing (you probably wouldn’t be reading this article on how to do it in that case…), go for it.

For the sake of simplicity I’ll be going through the Fabrikam Shipping demo code since they did a great job of creating a simple STS.  The fun bits are in the Fabrikam.IPSts project under the Identity folder.  The files we want to look at are CustomSecurityTokenService.cs, CustomSecurityTokenServiceConfiguration.cs, and the default.aspx code file.  I’m not sure I like the term “configuration”, as the way this is built strikes me as factory-ish.

[screenshot: the Fabrikam.IPSts project, Identity folder]

The process is pretty simple.  A request is made to default.aspx, which passes the request to FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(), along with a newly instantiated CustomSecurityTokenService object obtained by calling CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService().
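In other words, the code-behind for default.aspx boils down to something like this (paraphrased as a sketch rather than the demo code verbatim):

protected void Page_Load(object sender, EventArgs e)
{
    // Hand the sign-in request off to WIF along with a freshly created STS instance
    FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
        Request,
        User,
        CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService(),
        Response);
}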

The configuration class contains configuration data for the STS (hence the name), like the signing certificate, but it also instantiates an instance of the STS using that configuration.  The code for it is simple:

namespace Microsoft.Samples.DPE.Fabrikam.IPSts
{
    using Microsoft.IdentityModel.Configuration;
    using Microsoft.IdentityModel.SecurityTokenService;

    internal class CustomSecurityTokenServiceConfiguration
        : SecurityTokenServiceConfiguration
    {
        private static CustomSecurityTokenServiceConfiguration current;

        private CustomSecurityTokenServiceConfiguration()
        {
            this.SecurityTokenService = typeof(CustomSecurityTokenService);
            this.SigningCredentials =
                new X509SigningCredentials(this.ServiceCertificate);
            this.TokenIssuerName = "https://ipsts.fabrikam.com/";
        }

        public static CustomSecurityTokenServiceConfiguration Current
        {
            get
            {
                if (current == null)
                {
                    current = new CustomSecurityTokenServiceConfiguration();
                }

                return current;
            }
        }
    }
}

It has a base type of SecurityTokenServiceConfiguration, and all it does is set the custom type for the new STS, the certificate used for signing, and the issuer name.  It then lets the base class handle the rest.  Then there is the STS itself.  It’s dead simple.  The custom class has a base type of SecurityTokenService and overrides a couple of methods.  The important method it overrides is GetOutputClaimsIdentity():

protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    Claim name = inputIdentity.Claims.Single(claim =>
        claim.ClaimType == ClaimTypes.Name);
    Claim email = new Claim(ClaimTypes.Email,
        Membership.Provider.GetUser(name.Value, false).Email);
    string[] roles = Roles.Provider.GetRolesForUser(name.Value);

    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);
    issuedIdentity.Claims.Add(email);

    foreach (var role in roles)
    {
        var roleClaim = new Claim(ClaimTypes.Role, role);
        issuedIdentity.Claims.Add(roleClaim);
    }

    return issuedIdentity;
}

It gets the authenticated user, grabs all the roles from the Roles provider, generates a bunch of claims, and then returns the identity.  Pretty simple.

At this point you’ve just moved the authentication and roles logic away from the application.  Nothing has really changed data-wise.  If you only cared about roles, name, and email, you are done.  If you needed something more, you could easily add in the logic to grab the values you need.
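For instance, an extra value pulled from your own data store is just one more claim on the identity.  A quick sketch that would slot into GetOutputClaimsIdentity(); CustomerRepository and the claim type URI are made up for illustration:

// Hypothetical: add a claim sourced from your own database
string tier = CustomerRepository.GetTierFor(name.Value); // made-up helper

issuedIdentity.Claims.Add(
    new Claim("http://schemas.fabrikam.com/claims/customertier", tier));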

By no means is this production ready, but it is a good basis for how the STS creates claims.

Token Request Validation in ASP.NET

Earlier this week during my TechDays presentation on Windows Identity Foundation, there was a part of the demo that I said would fail miserably after the user was authenticated and the token was POSTed back to the relying party.  Out of the box, ASP.NET does request validation: if a user has submitted content through request parameters, it goes through a validation step, and by default this step breaks on anything funky, such as angle brackets.  This helps deter things like cross-site scripting attacks.  However, we were passing XML, so we needed to turn off this validation.  There are two approaches to doing this.

The first approach, which is what I did in the demo, was to set the validation mode to “2.0”.  All this did was tell ASP.NET to use a less strict validation scheme.  To do that you need to add a line to the web.config file:

<system.web>
  <httpRuntime requestValidationMode="2.0" />
</system.web>

This is not the best way to do things though.  It creates a new vector for attack, as you’ve just relaxed validation for the entire application.  What is preferred is to create a custom request validator.  You can find a great example in the Fabrikam Shipping demo.

It’s pretty straightforward to create a validator.  First you create a class that inherits from System.Web.Util.RequestValidator, and then you override the method IsValidRequestString(…).  At that point you can do anything you want to validate, but the demo code tries to build a SignInResponseMessage object from the wresult parameter.  If it creates the object successfully, the request is valid; otherwise it passes the request to the base implementation of IsValidRequestString(…).

The code to handle this validation is pretty straightforward:

    public class WSFederationRequestValidator : RequestValidator
    {
        protected override bool IsValidRequestString(HttpContext context,
            string value, RequestValidationSource requestValidationSource, 
            string collectionKey, out int validationFailureIndex)
        {
            validationFailureIndex = 0;

            if (requestValidationSource == RequestValidationSource.Form
                && collectionKey.Equals(WSFederationConstants.Parameters.Result, 
                   StringComparison.Ordinal))
            {
                SignInResponseMessage message =
                     WSFederationMessage.CreateFromFormPost(context.Request) 
                     as SignInResponseMessage;

                if (message != null)
                {
                    return true;
                }
            }

            return base.IsValidRequestString(context, value, requestValidationSource,
                   collectionKey, out validationFailureIndex);
        }
    }

Once you’ve created your request validator, you need to update the web.config file to tell .NET to use the validator.  You can do that by adding the following XML:

<system.web>
  <httpRuntime requestValidationType="Microsoft.Samples.DPE.FabrikamShipping.Web.Security.WSFederationRequestValidator" />
</system.web>

You can find the validation code in FabrikamShipping.Web\Security\WSFederationRequestValidator.cs within the FabrikamShipping solution.

Kerberos: Very Claims-y

I’ve always found Kerberos to be an interesting protocol.  It works by way of a trusted third party which issues secured tickets based on an authentication or a previous session.  These tickets are used as proof of identity by asserting that the subject is who they claim to be.  Claims authentication works on a similar principle, except instead of a ticket you have a token.  There are some major differences in implementation, but the theory is the same.  One of the reasons I find it interesting is that Kerberos was originally developed in 1983, and the underlying protocol, the Needham-Schroeder protocol, was originally published in 1978.

There have been major updates over the years, as well as a change to fix a man-in-the-middle attack in the Needham-Schroeder protocol in 1995, but the theory is still sound.  Kerberos is the main protocol used in Windows networks to authenticate against Active Directory.

The reason I bring it up is because of a comment I made in a previous post.  I made an assertion that we don’t necessarily abstract out the identity portion of our applications and services. 

Well, it occurred to me that up until a certain period of time, we did.  In many environments there was only one trusted authority for identity.  Whether it was at a school, in a business, or within the government, there was no concept of federation.  The walls we created were there for a very good reason.  The applications and websites we created were siloed and the information didn’t need to be shared.  As such, we created our own identity stores in databases and LDAP directories.

This wasn’t necessarily a problem, because we built these applications on top of a foundation that wasn’t designed for identity.  The internet was, for all intents and purposes, designed for anonymity.  But here is where the foundation became tricky: it boomed.

People wanted to share information between websites and applications, but the data couldn’t be correlated back to the user across applications.  We are starting to catch up, but it’s a slow process.

So here is the question: we started with a relatively abstract process of authentication by way of the Kerberos third party, and then moved to siloed identity data.  Why did we lose the abstraction?  Or more precisely, during this boom, why did we allow our applications to lose this abstraction?

Food for thought on this early Monday.

Whitepaper on ADFS 2 Federation with Shibboleth and the InCommon Federation

Over on the Claims-Based Identity blog, they announced a whitepaper was just released on using ADFS 2 to federate with Shibboleth 2 and the InCommon Federation.  I just started reading through it, but it looks really well written.

Here is the abstract of the paper itself:

Through its support for the WS-Federation and Security Assertion Markup Language (SAML) 2.0 protocols, Microsoft® Active Directory® Federation Services 2.0 (AD FS 2.0) provides claims-based, cross-domain, Web single sign-on (SSO) interoperability with non-Microsoft federation solutions. Shibboleth® 2, through its support for SAML 2.0, enables cross-domain, federated SSO between environments that are running Microsoft and Shibboleth 2 federation infrastructures.

You can download the whitepaper in .docx format.

What is Shibboleth?

The Shibboleth System is a standards based, open source software package for web single sign-on across or within organizational boundaries. It allows sites to make informed authorization decisions for individual access of protected online resources in a privacy-preserving manner.

What is InCommon?

InCommon eliminates the need for researchers, students, and educators to maintain multiple passwords and usernames. Online service providers no longer need to maintain user accounts. Identity providers manage the levels of their users' privacy and information exchange. InCommon uses SAML-based authentication and authorization systems (such as Shibboleth®) to enable scalable, trusted collaborations among its community of participants.