Making the Internet Single Sign On Capable

Every couple of weeks I start up Autoruns to see what new stuff has added itself to Windows startup and whatnot (screw you Adobe – you as a software company make me want to swear endlessly).  Anyway, a few months ago, around the time the latest version of Windows Live Messenger and its suite RTM'ed, I poked around to see if anything new was added.  Turns out there was:

[Image: Autoruns showing the new entry]

A new credential provider was added!

[Image: the new credential provider entry]

Interesting.

Not only that, it turns out a couple of Winsock providers were added too:

[Image: the new Winsock providers]

I started poking around the DLLs and noticed that they don't do much.  Apparently you can use smart cards for WLID authentication.  I suspect that's what the credential provider and associated Winsock provider are for, as well as part of WLID's sign-on helper, so credentials can be managed via the Credential Manager:

[Image: Windows Live credentials in Credential Manager]

Ah well, nothing too exciting here.

Fast forward a few months, and something occurred to me.  Microsoft was able to solve part of the Claims puzzle: how do you bridge the gap between desktop application identities and web application identities?  They did part of what CardSpace was unable to do, because CardSpace as a whole didn't really solve a problem people were facing.  The problem Windows Live ran into was how to share credentials between desktop and web applications without constantly asking for them.  In other words, how do you do Single Sign On…

This got me thinking.

What if I wanted to step this up a smidge?  Instead of logging into Windows Live Messenger with my Windows Live credentials, why not log into Windows itself with them?

Yes, Windows.  I want to change this:

[Image: the Windows 7 logon screen]

Question: What would this solve?

Answer: At present, nothing ground-breakingly new.  For the sake of argument, let's look at how this would be done, and I'll (hopefully) get to my point.

First off, we need to know how to modify the Windows logon screen.  In older versions of Windows (anything before Vista) you had to do a lot of heavy lifting to make any changes to it.  You had to write your own GINA, which involved essentially creating your own UI.  Talk about painful.

With the introduction of Vista, Microsoft changed the game when it came to custom credentials.  Their reasoning was simple: they didn’t want you to muck up the basic look and feel.  You had to follow their guidelines.

As a result we are left with something along the lines of these controls to play with:

[Image: the controls available to credential providers]

The logon screen is now controlled by Credential Providers instead of the GINA.  There are two providers built into Windows by default, one for Kerberos or NTLM authentication, and one for Smart Card authentication.

The architecture looks like:

[Image: the credential provider architecture]

When the Secure Attention Sequence (CTRL + ALT + DEL / SAS) is invoked, Winlogon switches to a separate desktop and instantiates a new instance of LogonUI.exe.  LogonUI enumerates all the credential provider DLLs listed in the registry (under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers) and displays their controls on the desktop.

When I enter my credentials they are serialized and are supposed to be passed to the LSA.

Once the LSA has these credentials it can do the authentication.

I say “supposed to” because there are two frames of thought here.  The first is to handle authentication within the Credential Provider itself.  This can cause problems later on down the road; I'll explain why in the second frame.

The second frame of thought applies when you need to use custom credentials, do some funky authentication, and then save the associated identity token somewhere.  This becomes important when other applications need your identity.

You can accomplish this via what’s called an Authentication Package.

[Image: the authentication package architecture]

When a custom authentication package is created, it has to be designed in such a way that applications cannot access stored credentials directly.  The applications must go through the pre-canned MSV1_0 package to receive a token.

To do what I asked about earlier, using Windows Live for authentication, we would need to develop two things: a Credential Provider and a custom Authentication Package.

The logon process would work something like this:

  • Select Live ID Credential Provider
  • Type in Live ID and Password and submit
  • Credential Provider passes serialized credential structure to Winlogon
  • Winlogon passes credentials to LSA
  • LSA passes credential to Custom Authentication Package
  • Package connects to Live ID STS and requests a token with given credentials
  • Token is returned
  • Authentication Package validates the token and saves it to a local cache
  • Package returns authentication result back up call stack to Winlogon
  • Winlogon initializes user’s profile and desktop
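None of this exists, and a real authentication package is native code loaded by the LSA, not managed code.  But to make steps 6 through 8 concrete, here is a rough sketch of the token request expressed with WIF's WS-Trust client classes; the STS address and applies-to URI are made up for illustration:

using System;
using System.IdentityModel.Tokens;
using System.ServiceModel;
using System.ServiceModel.Security;
using Microsoft.IdentityModel.Protocols.WSTrust;
using Microsoft.IdentityModel.Protocols.WSTrust.Bindings;

public static class LiveIdTokenRequestor
{
    public static SecurityToken RequestToken(string liveId, string password)
    {
        // Steps 6 and 7: send the credentials to the STS and get a token back.
        WSTrustChannelFactory factory = new WSTrustChannelFactory(
            new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
            new EndpointAddress("https://sts.live.example.com/issue.svc")); // hypothetical

        factory.TrustVersion = TrustVersion.WSTrust13;
        factory.Credentials.UserName.UserName = liveId;
        factory.Credentials.UserName.Password = password;

        RequestSecurityToken rst = new RequestSecurityToken
        {
            RequestType = WSTrust13Constants.RequestTypes.Issue,
            AppliesTo = new EndpointAddress("urn:windows:logon") // hypothetical scope
        };

        WSTrustChannel channel = (WSTrustChannel)factory.CreateChannel();

        // Steps 8 and 9, validating and caching the token, would happen here
        // before reporting the result back up the call stack.
        return channel.Issue(rst);
    }
}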

I asked before: What would this solve?

This isn’t really a ground-breaking idea.  I’ve just described a domain environment similar to what half a million companies have already done with Active Directory, except the credential store is Live ID.

On its own, we've just simplified the authentication process for every home user out there.  No more disparate accounts across multiple machines.  Passwords are in sync, and identity information is always up to date.

What if Live ID set up a new service that let you create access groups for things like home and friends, and you could create file shares as appropriate?  Then you could extend Windows 7 Homegroup sharing based on those access groups.

Wait, they already have something like that with SkyDrive (sans the Homegroup stuff, anyway).

Maybe they want to use a different token service.

Imagine if the user was able to select a “Federated User” credential provider that gave them a drop-down box listing a few Security Token Services.  Azure ACS can hook you up.

Imagine if one of these STSes was something everyone used *cough* Facebook *cough*.

Imagine if that STS was one that a lot of sites on the internet already rely on *cough* Facebook *cough*.

Imagine if the protocol used by the STS and those websites were modified slightly to add a custom set of headers sent to the browser.  Maybe it would look like this:

Relying-Party-Accepting-Token-Type: urn:sometokentype:www.somests.com
Relying-Party-Token-Reply-Url: https://login.myawesomesite.com/auth

Finally, imagine if your browser was smart enough to intercept those headers, look up the user's token, check that it matched the “Relying-Party-Accepting-Token-Type” header, and then POST the token to the given reply URL.
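To ground the idea, here is a minimal sketch of what that browser logic might look like.  Everything in it is hypothetical: neither header exists anywhere, and the token cache is imagined.

using System;
using System.Collections.Generic;
using System.Net;

public static class SsoAwareBrowser
{
    // Imagined local token cache, keyed by token type URN.
    private static readonly Dictionary<string, string> tokenCache =
        new Dictionary<string, string>();

    public static void InspectResponse(HttpWebResponse response)
    {
        string tokenType = response.Headers["Relying-Party-Accepting-Token-Type"];
        string replyUrl = response.Headers["Relying-Party-Token-Reply-Url"];

        if (tokenType == null || replyUrl == null)
            return; // Not an SSO-capable response.

        string token;
        if (!tokenCache.TryGetValue(tokenType, out token))
            return; // No token of the requested type; fall back to a normal login.

        // POST the matching token to the relying party's reply URL.
        using (WebClient client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
            client.UploadString(replyUrl, "token=" + Uri.EscapeDataString(token));
        }
    }
}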

Hmm.  We’ve just made the internet SSO capable.

Now to just move everyone’s cheese to get this done.

Patent pending. ;)

Salesforce.com Single Sign On using ADFS v2

For the last few years ObjectSharp has been using Salesforce.com to help manage parts of the business.  As business increased, our reliance on Salesforce increased.  More and more users started getting added, and as all stories go, these accounts became one more burden to manage.

This is the universal identity problem – too many user accounts for the same person.  As such, one of my internal goals here is to simplify identity at ObjectSharp.

While working on another internal project with Salesforce, I got to thinking about how it manages users.  It turns out Salesforce allows you to set it up as a SAML relying party, and ADFS v2 supports being a SAML IdP.  Theoretically we have both sides of the puzzle, but how does it work?

Well, first things first.  I checked out the security section of the configuration portal:

[Image: the Salesforce security configuration section]

There was a Single Sign-On section, so I followed that and was given a pretty simple screen:

[Image: the Salesforce Single Sign-On settings]

There isn’t much here to setup.  Going down the options, here is what I came up with:

SAML Version

I know from previous experience that ADFS supports version 2 of the SAML Protocol.

Issuer

What is the URI of the IdP, which in this case is going to be ADFS?  Within the ADFS MMC snap-in, if you right-click the Service node you can access the properties:

[Image: the Service node properties menu]

In the properties dialog there is a textbox allowing you to change the Federation Service Identifier:

[Image: the Federation Service properties dialog]

We want that URI.

Within Salesforce we set the Issuer to the identifier URI.

Identity Provider Certificate

Salesforce can't just go and accept any token.  It needs to accept tokens only from my organization, so I upload the public key of the certificate ADFS uses to sign my tokens.  You can access that certificate by going to ADFS and selecting the Certificates node:

[Image: the ADFS Certificates node]

Once in there you can select the signing certificate:

[Image: the token-signing certificate]

Just export the certificate and upload it to Salesforce.

Custom Error URL

If the login fails for some reason, what URL should it go to?  If you leave it blank, it redirects to a generic Salesforce error page.

SAML User ID Type

This option is asking what information we are giving to Salesforce so it can correlate that information to one of its internal IDs.  Since for this demo I was just using my email address, I will leave it as “Assertion contains User's salesforce.com username”.

SAML User ID Location

This option is asking where the above ID is located within the SAML token.  By default it will accept the NameIdentifier, but I don't really want to pass my email as a name, so I will select “User ID is in an Attribute element”.

Now I have to specify what claim type the email address is.  In this case I will go with the default for ADFS, which is http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.

On to Active Directory Federation Services

We are about half way done.  Now we just need to tell ADFS about Salesforce.  It’s surprisingly simple.

Once you’ve saved the Salesforce settings, you are given a button to download the metadata:

[Image: the metadata download button]

Selecting that will let you download an XML document containing metadata about Salesforce as a relying party.

Telling ADFS about a relying party is pretty straightforward; you can find the detailed steps about halfway through a previous post I wrote.

Once you’ve added the relying party, all you need to do is create a rule that returns the user’s email address as the above claim type:

[Image: the claim rule returning the email address]

Everything should be properly configured at this point.  Now we need to test it.

When I first started out with ADFS and SAML early last year, I couldn’t figure out how to get ADFS to post the token to a relying party.  SAML is not a protocol that I’m very familiar with, so I felt kinda dumb when I realized there is an ADFS URL you need to hit.  In this case it’s https://[adfs.fqdn]/adfs/ls/IdpInitiatedSignOn.aspx.

It brings you to a form page to select which application to post a token to:

[Image: the IdP-initiated sign-on page]

Select your relying party and then go.

It will POST back to an ADFS endpoint, and then POST the information to the URL within the metadata provided earlier.  Once the POSTing has quieted down, you end up on your Salesforce dashboard:

[Image: the Salesforce dashboard]

All in all, it took about 10 minutes to get everything working.

Authentication in an Active Claims Model

When working with Claims Based Authentication, a lot of things are similar between the two models, Active and Passive.  However, there are a few cases where things differ… a lot.  The biggest, of course, is how a Request for Security Token (RST) is authenticated.  In a passive model the user is given a web page where they can essentially have free rein over how credentials are handled.  Once the credentials have been received and authenticated by the web server, the server generates an identity, passes it off to SecurityTokenService.Issue(…), and does its thing by gathering claims, packaging them up into a token, and POSTing the token back to the Relying Party.

Basically we are handling authentication the way any other ASP.NET application would: using the Membership provider, funnelling all anonymous users to the login page, and then redirecting back to the STS.  To hand off to the STS, we can just call:

FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
    HttpContext.Current.Request,
    HttpContext.Current.User,
    MyTokenServiceConfiguration.Current.CreateSecurityTokenService(),
    HttpContext.Current.Response);
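For completeness, the login page that anonymous users get funnelled to is just standard ASP.NET forms authentication.  A minimal sketch, with hypothetical control names:

using System;
using System.Web.Security;
using System.Web.UI;

public partial class Login : Page
{
    protected void LoginButton_Click(object sender, EventArgs e)
    {
        if (Membership.ValidateUser(UserName.Text, Password.Text))
        {
            // Issue the forms auth cookie and bounce back to the STS page,
            // which then calls ProcessRequest as shown above.
            FormsAuthentication.RedirectFromLoginPage(UserName.Text, false);
        }
        else
        {
            ErrorLabel.Text = "Invalid username or password.";
        }
    }
}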

However, it’s a little different with the active model.

Web services manage identity via tokens, but they differ from the passive model because everything is passed via tokens, including credentials.  The client takes the credentials and packages them into a SecurityToken object, which is serialized and passed to the STS.  The STS deserializes the token and passes it off to a SecurityTokenHandler.  The security token handler validates the credentials, generates an identity, and pushes it up the call stack to the STS.
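From the client's perspective, the packaging is mostly transparent.  As a rough sketch, assume a WCF client endpoint named "MyFederatedEndpoint" in config whose binding sends UserName credentials to the STS (the endpoint name and service contract here are made up); setting the credentials on the factory is all it takes:

using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string DoSomething();
}

public static class ActiveModelClient
{
    public static void Call()
    {
        // WCF serializes these credentials into a UserNameSecurityToken
        // inside the RST it sends to the STS.
        ChannelFactory<IMyService> factory =
            new ChannelFactory<IMyService>("MyFederatedEndpoint");

        factory.Credentials.UserName.UserName = "jsmith";
        factory.Credentials.UserName.Password = "p@ssw0rd";

        IMyService proxy = factory.CreateChannel();
        proxy.DoSomething(); // Token issuance and validation happen under the covers.
    }
}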

Much like with ASP.NET, there is a built-in handler that validates username/password combinations against the Membership provider, but you are limited to the basic functionality of the provider.  90% of the time this is probably just fine.  Other times you may need to create your own SecurityTokenHandler.  It's actually not that hard to do.

First you need to know what sort of token is being passed across the wire.  The big three are:

  • UserNameSecurityToken – Has a username and password pair
  • WindowsSecurityToken – Used for Windows authentication using NTLM or Kerberos
  • X509SecurityToken – Uses an X.509 certificate for authentication

Each is pretty self explanatory.

Some others out of the box are:

[Image: the other token types, as seen in Reflector]

Reflector is an awesome tool.  Just sayin’.

Now that we know what type of token we are expecting we can build the token handler.  For the sake of simplicity let’s create one for the UserNameSecurityToken.

To do that we create a new class derived from Microsoft.IdentityModel.Tokens.UserNameSecurityTokenHandler.  We could start at SecurityTokenHandler, but it's an abstract class and requires a lot to get it working.  Suffice it to say, it's mostly boilerplate code.

We now need to override a method and a property: ValidateToken(SecurityToken token) and TokenType.

TokenType is used later on to tell what kind of token the handler can actually validate.  More on that in a minute.

Overriding ValidateToken is fairly trivial*.  This is where we actually handle the authentication.  However, it returns a ClaimsIdentityCollection instead of a bool, so if the credentials are invalid we need to throw an exception; I would recommend SecurityTokenValidationException.  Once the authentication is done we get the identity for the credentials and bundle it up into a ClaimsIdentityCollection.  We can do that by creating an IClaimsIdentity and passing it into the constructor of a ClaimsIdentityCollection.

public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
{
    UserNameSecurityToken userToken = token as UserNameSecurityToken;

    if (userToken == null)
        throw new ArgumentNullException("token");

    string username = userToken.UserName;
    string pass = userToken.Password;

    if (!Membership.ValidateUser(username, pass))
        throw new SecurityTokenValidationException("Username or password is wrong.");

    IClaimsIdentity ident = new ClaimsIdentity();
    ident.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, username));

    return new ClaimsIdentityCollection(new IClaimsIdentity[] { ident });
}

Next we need to set the TokenType:

public override Type TokenType
{
    get
    {
        return typeof(UserNameSecurityToken);
    }
}

This property is used as a way to tell its calling parent that it can validate/authenticate any tokens of the type it returns.  The web service that acts as the STS loads a collection of SecurityTokenHandlers as part of its initialization, and when it receives a token it iterates through the collection looking for one that can handle it.

To add the handler to the collection, you can add it via configuration, or, if you are doing a lot of crazy low-level work, you can add it to the SecurityTokenServiceConfiguration in the host factory for the service:

securityTokenServiceConfiguration.SecurityTokenHandlers.Add(new MyAwesomeUserNameSecurityTokenHandler());

To add it via configuration you first need to remove any other handlers that can validate the same type of token:

<microsoft.identityModel>
  <service>
    <securityTokenHandlers>
      <remove type="Microsoft.IdentityModel.Tokens.WindowsUserNameSecurityTokenHandler,
          Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <remove type="Microsoft.IdentityModel.Tokens.MembershipUserNameSecurityTokenHandler,
          Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add type="Syfuhs.IdentityModel.Tokens.MyAwesomeUserNameSecurityTokenHandler, Syfuhs.IdentityModel" />
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>

That’s pretty much all there is to it.  Here is the class for the sake of completeness:

using System;
using System.IdentityModel.Tokens;
using System.Web.Security;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSIdentity;
using Microsoft.IdentityModel.Tokens;

namespace Syfuhs.IdentityModel.Tokens
{
    public class MyAwesomeUserNameSecurityTokenHandler : UserNameSecurityTokenHandler
    {
        public override bool CanValidateToken { get { return true; } }

        public override Type TokenType
        {
            get { return typeof(UserNameSecurityToken); }
        }

        public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
        {
            UserNameSecurityToken userToken = token as UserNameSecurityToken;

            if (userToken == null)
                throw new ArgumentNullException("token");

            string username = userToken.UserName;
            string pass = userToken.Password;

            if (!Membership.ValidateUser(username, pass))
                throw new SecurityTokenValidationException("Username or password is wrong.");

            IClaimsIdentity ident = new ClaimsIdentity();
            ident.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, username));

            return new ClaimsIdentityCollection(new IClaimsIdentity[] { ident });
        }
    }
}

* Trivial in the development sense, not trivial in the security sense.

Generating Federation Metadata Dynamically

In a previous post we looked at what it takes to actually write a Security Token Service.  If we knew what the STS offered and required already, we could set up a relying party relatively easily with that setup.  However, we don’t always know what is going on.  That’s the purpose of federation metadata.  It gives us a basic breakdown of the STS so we can interact with it.

Now, if we are building a custom STS we don't have anything that creates this metadata.  We could do it manually by hardcoding stuff in an XML file and then signing it, but that gets ridiculously tedious after you have to make changes for the third or fourth time – which will happen.  A lot.  The better approach is to generate the metadata automatically.  So in this post we will do just that.

The first thing you need to do is create an endpoint.  There is a well-known path of /FederationMetadata/2007-06/FederationMetadata.xml that is generally used, so let's use that.  There are a lot of options for generating dynamic content, and in Programming Windows Identity Foundation, Vittorio uses a WCF service:

[ServiceContract]
public interface IFederationMetadata
{
    [OperationContract]
    [WebGet(UriTemplate = "2007-06/FederationMetadata.xml")]
    XElement FederationMetadata();
}

It’s a great approach, but for some reason I prefer the way that Dominick Baier creates the endpoint in StarterSTS.  He uses an IHttpHandler and a web.config entry to create a handler:

<location path="FederationMetadata/2007-06">
  <system.webServer>
    <handlers>
      <add name="MetadataGenerator"
           path="FederationMetadata.xml"
           verb="GET"
           type="Syfuhs.TokenService.WSTrust.FederationMetadataHandler" />
    </handlers>
  </system.webServer>
  <system.web>
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
</location>

As such, I’m going to go that route.  Let’s take a look at the implementation for the handler:

using System.Web;

namespace Syfuhs.TokenService.WSTrust
{
    public class FederationMetadataHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ClearHeaders();

            context.Response.Clear();
            context.Response.ContentType = "text/xml";

            MyAwesomeTokenServiceConfiguration.Current.SerializeMetadata(context.Response.OutputStream);
        }

        public bool IsReusable { get { return false; } }
    }
}

All the handler is doing is writing metadata out to a stream, which in this case is the response stream.  You can see that it is doing this through the MyAwesomeTokenServiceConfiguration class, which we created in the previous article.  The SerializeMetadata method creates an instance of a MetadataSerializer and writes the entity to the stream:

public void SerializeMetadata(Stream stream)
{
    MetadataSerializer serializer = new MetadataSerializer();
    serializer.WriteMetadata(stream, GenerateEntities());
}

The entities are generated through a collection of tasks:

private EntityDescriptor GenerateEntities()
{
    if (entity != null)
        return entity;

    SecurityTokenServiceDescriptor sts = new SecurityTokenServiceDescriptor();

    FillOfferedClaimTypes(sts.ClaimTypesOffered);

    FillEndpoints(sts);
    FillSupportedProtocols(sts);
    FillSigningKey(sts);

    entity = new EntityDescriptor(new EntityId(string.Format("https://{0}", host)))
    {
        SigningCredentials = this.SigningCredentials
    };

    entity.RoleDescriptors.Add(sts);

    return entity;
}

The entity is generated, and an object called a SecurityTokenServiceDescriptor is created to describe the STS.  At this point it's just a matter of sticking in the data and defining the credentials used to sign the metadata:

private void FillSigningKey(SecurityTokenServiceDescriptor sts)
{
    KeyDescriptor signingKey = new KeyDescriptor(this.SigningCredentials.SigningKeyIdentifier)
    {
        Use = KeyType.Signing
    };

    sts.Keys.Add(signingKey);
}

private void FillSupportedProtocols(SecurityTokenServiceDescriptor sts)
{
    sts.ProtocolsSupported.Add(new System.Uri(WSFederationConstants.Namespace));
}

private void FillEndpoints(SecurityTokenServiceDescriptor sts)
{
    EndpointAddress activeEndpoint = new EndpointAddress(string.Format("https://{0}/TokenService/activeSTS.svc", host));

    sts.SecurityTokenServiceEndpoints.Add(activeEndpoint);
    sts.TargetScopes.Add(activeEndpoint);
}


private void FillOfferedClaimTypes(ICollection<DisplayClaim> claimTypes)
{
    claimTypes.Add(new DisplayClaim(ClaimTypes.Name, "Name", ""));
    claimTypes.Add(new DisplayClaim(ClaimTypes.Email, "Email", ""));
    claimTypes.Add(new DisplayClaim(ClaimTypes.Role, "Role", ""));
}

That, in a nutshell, is how to create a basic metadata document as well as sign it.  There is a lot more information you can put into the metadata, and you can find more things to work with in the Microsoft.IdentityModel.Protocols.WSFederation.Metadata namespace.

The Basics of Building a Security Token Service

Last week at TechDays in Toronto I ran into a fellow I worked with while I was at Woodbine.  He works with a consulting firm Woodbine uses, and he caught my session on Windows Identity Foundation.  His thoughts were (essentially, paraphrasing) that the principle of Claims Authentication was sound and a good idea, but implementing it requires a major investment.  Yes.  Absolutely.  You will essentially be adding a new tier to the application.  Hmm.  I'm not sure if I can get away with that analogy.  It will certainly feel like you are adding a new tier anyway.

What strikes me as the main investment is the Security Token Service.  When you break it down, there are a lot of moving parts in an STS.  In a previous post I asked what it would take to create something similar to ADFS 2.  I said it would be fairly straightforward, and broke down the parts as well as what would be required of them.  I listed:

  • Token Services
  • A Windows Authentication end-point
  • An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
  • An application management tool (MMC snap-in and PowerShell cmdlets)
  • Proxy Services (Allows requests to pass NAT’ed zones)

These aren’t all that hard to develop.  With the exception of the proxy services and token service itself, there’s a good chance we have created something similar to each one if user authentication is part of an application.  We have the authentication endpoint: a login form to do SQL Authentication, or the Windows Authentication Provider for ASP.NET.  We have the attribute store and something like a claims mapper: Active Directory, SQL databases, etc.  We even have an application management tool: anything you used to manage users in the first place.  This certainly doesn’t get us all the way there, but they are good starting points.

Going back to my first point, the STS is probably the biggest investment.  However, it’s kind of trivial to create an STS using WIF.  I say that with a big warning though: an STS is a security system.  Securing such a system is NOT trivial.  Writing your own STS probably isn’t the best way to approach this.  You would probably be better off to use an STS like ADFS.  With that being said it’s good to know what goes into building an STS, and if you really do have the proper resources to develop one, as well as do proper security testing (you probably wouldn’t be reading this article on how to do it in that case…), go for it.

For the sake of simplicity I’ll be going through the Fabrikam Shipping demo code since they did a great job of creating a simple STS.  The fun bits are in the Fabrikam.IPSts project under the Identity folder.  The files we want to look at are CustomSecurityTokenService.cs, CustomSecurityTokenServiceConfiguration.cs, and the default.aspx code file.  I’m not sure I like the term “configuration”, as the way this is built strikes me as factory-ish.

[Image: the Fabrikam.IPSts project structure]

The process is pretty simple.  A request is made to default.aspx which passes the request to FederatedPassiveSecurityTokenServiceOperations.ProcessRequest() as well as a newly instantiated CustomSecurityTokenService object by calling CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService().

The configuration class contains configuration data for the STS (hence the name), like the signing certificate, but it also instantiates an instance of the STS using that configuration.  The code for it is simple:

namespace Microsoft.Samples.DPE.Fabrikam.IPSts
{
    using Microsoft.IdentityModel.Configuration;
    using Microsoft.IdentityModel.SecurityTokenService;

    internal class CustomSecurityTokenServiceConfiguration : SecurityTokenServiceConfiguration
    {
        private static CustomSecurityTokenServiceConfiguration current;

        private CustomSecurityTokenServiceConfiguration()
        {
            this.SecurityTokenService = typeof(CustomSecurityTokenService);
            this.SigningCredentials = new X509SigningCredentials(this.ServiceCertificate);
            this.TokenIssuerName = "https://ipsts.fabrikam.com/";
        }

        public static CustomSecurityTokenServiceConfiguration Current
        {
            get
            {
                if (current == null)
                {
                    current = new CustomSecurityTokenServiceConfiguration();
                }

                return current;
            }
        }
    }
}

It has a base type of SecurityTokenServiceConfiguration, and all it does is set the custom type for the new STS, the certificate used for signing, and the issuer name.  It then lets the base class handle the rest.  Then there is the STS itself.  It's dead simple.  The custom class has a base type of SecurityTokenService and overrides a couple of methods.  The important method it overrides is GetOutputClaimsIdentity():

protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    Claim name = inputIdentity.Claims.Single(claim => claim.ClaimType == ClaimTypes.Name);
    Claim email = new Claim(ClaimTypes.Email, Membership.Provider.GetUser(name.Value, false).Email);
    string[] roles = Roles.Provider.GetRolesForUser(name.Value);

    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);
    issuedIdentity.Claims.Add(email);

    foreach (var role in roles)
    {
        var roleClaim = new Claim(ClaimTypes.Role, role);
        issuedIdentity.Claims.Add(roleClaim);
    }

    return issuedIdentity;
}

It gets the authenticated user, grabs all the roles from the Roles provider, generates a bunch of claims, and returns the identity.  Pretty simple.

At this point you've just moved the authentication and Roles stuff away from the application.  Nothing has really changed data-wise.  If you only cared about roles, name, and email you are done.  If you needed something more, you could easily add in the logic to grab the values you need, as in the sketch below.
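For example, adding one more claim inside GetOutputClaimsIdentity is just a matter of grabbing the value and appending it.  The repository class and claim type URI below are made up for illustration:

// Inside GetOutputClaimsIdentity, after the role claims are added:
string department = CustomerRepository.GetDepartment(name.Value); // hypothetical data access

issuedIdentity.Claims.Add(
    new Claim("http://schemas.fabrikam.com/claims/department", department));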

By no means is this production ready, but it is a good basis for how the STS creates claims.

Whitepaper on ADFS 2 Federation with Shibboleth and the InCommon Federation

Over on the Claims-Based Identity blog, they announced a whitepaper was just released on using ADFS 2 to federate with Shibboleth 2 and the InCommon Federation.  I just started reading through it, but it looks really well written.

Here is the abstract of the paper itself:

Through its support for the WS-Federation and Security Assertion Markup Language (SAML) 2.0 protocols, Microsoft® Active Directory® Federation Services 2.0 (AD FS 2.0) provides claims-based, cross-domain, Web single sign-on (SSO) interoperability with non-Microsoft federation solutions. Shibboleth® 2, through its support for SAML 2.0, enables cross-domain, federated SSO between environments that are running Microsoft and Shibboleth 2 federation infrastructures.

You can download the whitepaper in .docx format.

What is Shibboleth?

The Shibboleth System is a standards based, open source software package for web single sign-on across or within organizational boundaries. It allows sites to make informed authorization decisions for individual access of protected online resources in a privacy-preserving manner.

What is InCommon?

InCommon eliminates the need for researchers, students, and educators to maintain multiple passwords and usernames. Online service providers no longer need to maintain user accounts. Identity providers manage the levels of their users' privacy and information exchange. InCommon uses SAML-based authentication and authorization systems (such as Shibboleth®) to enable scalable, trusted collaborations among its community of participants.

Managing Identity in SharePoint

Yet another presentation on the docket!  I submitted an abstract to SharePoint Summit 2011 and they accepted!  I will be presenting on SharePoint and how it manages identity.  More specifically, how SharePoint 2010 uses WIF to handle claims-based authentication and federation.

Here are the details:

Event: SharePoint Summit 2011, January 31st 2011 – February 2nd, 2011

When: 11:30 a.m. - 12:45 p.m. February 1st, 2011

Where: Four Seasons Hotel, Toronto

Abstract: Managing identities within an organization is relatively easy. However, as business changes, we need to be able to adapt quickly. Identity is something that often gets overlooked in adaptation. In this session we will discuss the Windows Identity Foundation and how SharePoint uses it to adapt easily to change.

Link: http://www.sharepointsummit2011.com/Toronto/conference_day2.htm#session_7_3

Presentation: Changing the Identity Game with the Windows Identity Foundation

Rob Windsor and TVBUG are letting me present on November 8th on Claims-Based Authentication and Identification.  Here are the details:

Location: Room 1, Library, 2nd floor, North York Public Library

Date: Monday, November 8, 2010

Time:
6:30 to 6:50 (Pizza - Meet and Greet)
6:50 to 7:00 (Group Business)
7:00 to 9:00 (Presentation)

Topic: Changing the Identity Game with the Windows Identity Foundation

Abstract: Identity is a tricky thing to manage. These days every application requires some knowledge of the user, which inevitably requires users to log in and out of the applications to prove they are who they are as well as keep a record of their accounts. With the Windows Identity Foundation, there is a fundamental shift in the way we manage these users and their accounts. In this presentation we will take a look at the why's and dig into the how's of the Windows Identity Foundation by building an Identity aware application from scratch.

Directions

All TVBUG meetings are held at the North York Public Library or North York Memorial Hall. Both are located in the same building at Yonge Street and Park Home Avenue (North of the 401 between Sheppard and Finch across from Empress Walk). If you are taking the Subway get off at the North York Centre Station. The library meeting rooms are on the 2nd Floor. Memorial Hall meeting rooms are on the Concourse Level near the food court.

Using Claims Based Identities with SharePoint 2010

When SharePoint 2010 was developed, Microsoft took extra care to include support for a claims-based identity model.  There are quite a few benefits to doing it this way, one of which is that it simplifies managing identities across organizational structures.  So let's take a look at adding a Security Token Service as an Authentication Provider to SharePoint 2010.

First, Some Prerequisites

  • You have to use PowerShell for most of this.  You wouldn’t/shouldn’t be adding too many Providers to SharePoint all that often so there isn’t a GUI for this.
  • The claims that SharePoint will know about must be known during setup.  This isn’t that big a deal, but…

Telling SharePoint about the STS

Once you’ve collected all the information you need, open up PowerShell as an Administrator and add the SharePoint snap-in on the server.

Add-PSSnapin Microsoft.SharePoint.PowerShell

Next we need to create the certificate and claim mapping objects:

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("d:\path\to\adfsCert.cer")

$claim1 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming

$claim2 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming

Those are three separate commands; they are just word-wrapped here.

The certificate is pretty straightforward: it is the public key of the STS.  The claims are also pretty straightforward.  There are two claims: the roles of the identity, and the email address of the identity.  You can add as many as the STS will support.

Next we define the realm of the Relying Party, i.e. the SharePoint server.

$realm = "urn:" + $env:ComputerName + ":adfs"

By using a URN value you can mitigate future changes to addresses.  This becomes especially useful in an intranet/extranet scenario.

Then we define the sign-in URL for the STS.  In this case, we are using ADFS:

$signinurl = "https://[myAdfsServer.fullyqualified.domainname]/adfs/ls/"

Mind the SSL.

And finally we put it all together:

New-SPTrustedIdentityTokenIssuer -Name "MyDomainADFS2" -Description "ADFS 2 Federated Server for MyDomain" -Realm $realm -ImportTrustCertificate $cert -ClaimsMappings $claim1,$claim2 -SignInUrl $signinurl -IdentifierClaim $claim2.InputClaimType

This should be a single line, word wrapped.  If you wanted to you could just call New-SPTrustedIdentityTokenIssuer and then fill in the values one at a time.  This might be useful for debugging.

At this point SharePoint now knows about the STS but none of the sites are set up to use it.

Authenticating SharePoint sites using the STS

For good measure, restart SharePoint/IIS.  Go into SharePoint Central Administration and create a new web application, selecting Claims Based Authentication at the top:

[Image: the Claims Based Authentication option]

Fill out the rest of the details and then when you get to Claims Authentication Types select Trusted Identity Provider and then select your STS.  In this case it is my ADFS Server:

[Image: selecting the Trusted Identity Provider]

Save the site and you are done.  Try navigating to the site; it should redirect you to your STS.  You can then manage users as you would normally with Active Directory accounts.

Claims Transformation and Custom Attribute Stores in Active Directory Federation Services 2

Active Directory Federation Services 2 has an amazing amount of power when it comes to claims transformation.  To understand how it works, let's take a look at a set of claims rules and the flow of data from ADFS to the Relying Party:

[Image: the flow of claims from ADFS to the Relying Party]

We can have multiple rules to transform claims, and each one takes precedence via an Order:

[Image: transform rules and their order]

In the case above, Transform Rule 2 transformed the claims that Rule 1 requested from the attribute store, which in this case was Active Directory.  This becomes extremely useful because there are times when some of the data you need to pull out of Active Directory isn't in a usable format.  There are a couple of options to fix this:

  • Make the receiving application deal with it
  • Modify it before sending it off
  • Ignore it

Let's take a look at the second option (imagine an entire blog post on just ignoring it…).  ADFS allows us to transform claims before they are sent off in the token by way of the Claims Rule Language.  It follows the pattern: "If a set of conditions is true, issue one or more claims."  As such, it's a big Boolean system.  Syntactically, it's pretty straightforward.

To issue a claim by implicitly passing true:

=> issue(Type = "http://MyAwesomeUri/claims/AwesomeRole", Value = "Awesome Employee");

That ignored the fact that there weren't any conditions and will always issue the claim with that value.

To check a condition:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", Value == "SomeRole"]
    => issue(Type = "http://MyAwesomeUri/claims/AwesomeRole", Value = "AwesomeRole");

Breaking down the query, we are checking that a claim created in a previous step has a specific type, in this case role, and that the claim's value is SomeRole.  Based on that, we append a new claim to the outgoing list with a new type and a new value.

That's pretty useful in its own right, but ADFS can actually go even further by allowing you to pull data out of custom data stores by way of Custom Attribute Stores.  There are four options to choose from when getting data:

  1. Active Directory (default)
  2. LDAP (Any directory that you can query via LDAP)
  3. SQL Server (awesome)
  4. Custom built store via a custom .NET assembly (a quick sketch follows below)
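The fourth option deserves that quick sketch.  A custom attribute store is a .NET class implementing the IAttributeStore interface from Microsoft.IdentityServer.ClaimsPolicy.dll.  The sketch below just returns canned data, and it assumes the interface shape from the AD FS 2 SDK samples, so verify the signatures against your installation:

using System;
using System.Collections.Generic;
using Microsoft.IdentityModel.Threading;
using Microsoft.IdentityServer.ClaimsPolicy.Engine.AttributeStore;

namespace Syfuhs.IdentityModel.Stores
{
    public class StaticAttributeStore : IAttributeStore
    {
        public void Initialize(Dictionary<string, string> config)
        {
            // Name/value pairs entered when creating the store arrive here.
        }

        public IAsyncResult BeginExecuteQuery(
            string query, string[] parameters, AsyncCallback callback, object state)
        {
            // Each inner array is a row; each element lines up with a claim
            // type in the rule's "types = (...)" list.
            string[][] rows = new string[][] { new[] { "AwesomeValue" } };

            TypedAsyncResult<string[][]> result =
                new TypedAsyncResult<string[][]>(callback, state);
            result.Complete(rows, true);
            return result;
        }

        public string[][] EndExecuteQuery(IAsyncResult result)
        {
            return TypedAsyncResult<string[][]>.End(result);
        }
    }
}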

Let’s get some data from a SQL database.  First we need to create the attribute store.  Go to Trust Relationships/Attribute Stores in the ADFS MMC Console (or you could also use PowerShell):

[Image: the Attribute Stores node]

Then add an attribute store:

[Image: adding an attribute store]

All you need is a connection string to the database in question:

[Image: the attribute store connection string]

The next step is to create the query to pull data from the database.  It's surprisingly straightforward.  This is a bit of a contrived example, but let's grab the certificate name and the certificate hash from a database table where the certificate name is equal to the value of the http://MyCertUri/UserCertName claim type:

c:[Type == "http://MyCertUri/UserCertName"]
    => issue(store = "MyAttributeStore",
             types = ("http://test/CertName", "http://test/CertHash"),
             query = "SELECT CertificateName, CertificateHash FROM UserCertificates WHERE CertificateName='{0}'",
             param = c.Value);

For each column you request in the SQL query, you need a corresponding claim type.  Also, unlike most SQL queries, parameters are passed using a format similar to String.Format's {0} placeholders instead of the usual @MyVariable syntax.

In a nutshell, this is how you deal with claims transformation.  For a more in-depth article on how to do this, check out TechNet: http://technet.microsoft.com/en-us/library/dd807118(WS.10).aspx.