The Importance of Elevating Privilege

The biggest drawback to Single Sign-On is the same thing that makes it so appealing – you only need to prove your identity once. This scares the hell out of some people because if you can compromise a user's session in one application, it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.

  • You just signed into Facebook and checked your [insert something to check here] from some friend, which contained a link to something malicious.
  • You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your online store's administration portal, with a couple of parameters in the query string telling the store to delete all the incoming orders.
  • At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
  • So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.

There are certainly ways to stop this, as part of the attack is a bit trivial. For instance you could pop up an OK/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think about this at a high level.

The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.

This problem is similar to the problem many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems because any application running could do whatever it pleased on the system. Malware was rampant, and worse, users were just doing all-around stupid things because they didn't know what they were doing but they had the permissions necessary to do it.

The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges, make them re-authenticate and temporarily run with the higher privileges. The key here is that you are only running with higher privileges temporarily. However, security lost the argument and Microsoft caved while developing Windows Vista, creating User Account Control (UAC). By default a user is an administrator, but they don't have administrative privileges. Their user token is a stripped-down administrator token, so they only have non-administrative privileges. In order to take full advantage of the administrator token, a user has to elevate and request the full token temporarily. This is a stop-gap solution though, because it's theoretically possible to circumvent UAC since the administrative token exists. It also doesn't require you to re-authenticate – you just have to approve the elevation.

As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.

Some web applications already require elevation. For instance, consider online banking sites. When I sign in I have a default set of privileges. I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate myself by entering a private PIN. So, for instance, I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.

There are a couple of ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims Based Authentication and WIF.

First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter of wauth. This parameter defines how authentication should happen. The values for it are mostly specific to each STS, however there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are some of the most often used authentication types and their corresponding wauth values:

  • Password – urn:oasis:names:tc:SAML:1.0:am:password
  • Kerberos – urn:ietf:rfc:1510
  • TLS – urn:ietf:rfc:2246
  • PKI/X509 – urn:oasis:names:tc:SAML:1.0:am:X509-PKI
  • Default (unspecified) – urn:oasis:names:tc:SAML:1.0:am:unspecified

When you pass one of these values to the STS during the sign-in request, the STS should then request that particular type of credential. The wauth parameter supports arbitrary values, so you can use whatever you like. Therefore we can create a value that tells the STS that we want to re-authenticate because of an elevation request.

All you have to do is redirect to the STS with the wauth parameter:

https://yoursts/authenticate?wa=wsignin1.0&wtrealm=uri:myrp&wauth=urn:super:secure:elevation:method
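
If you would rather not build the query string by hand, WIF can construct it for you. Here is a minimal sketch using the SignInRequestMessage class from Microsoft.IdentityModel.Protocols.WSFederation; the STS address, realm, and elevation URN are the same placeholders as above:

// Redirect to the STS asking for our custom elevation authentication method.
SignInRequestMessage signIn = new SignInRequestMessage(new Uri("https://yoursts/authenticate"), "uri:myrp")
{
    // AuthenticationType maps to the wauth query string parameter
    AuthenticationType = "urn:super:secure:elevation:method"
};

// Send the browser off to the STS to re-authenticate
HttpContext.Current.Response.Redirect(signIn.WriteQueryString(), false);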

Once the user has re-authenticated you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:

http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod

Just add the claim to the output identity:

protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;
    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));
    // finish filling claims...
    return ident;
}

At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:

public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}

And then have a bit of code to check:

var p = Thread.CurrentPrincipal as IClaimsPrincipal;
if (p != null && p.IsElevated())
{
    DoSomethingRequiringElevation();
}

This satisfies half the requirements for elevating privilege. We need to make it so the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP.  In Global.asax we could do something like:

void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived
        += new EventHandler<SessionSecurityTokenReceivedEventArgs>(SessionAuthenticationModule_SessionSecurityTokenReceived);
}

void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (e.SessionToken.ClaimsPrincipal.IsElevated())
    {
        SessionSecurityToken token = new SessionSecurityToken(
            e.SessionToken.ClaimsPrincipal,
            e.SessionToken.Context,
            e.SessionToken.ValidFrom,
            e.SessionToken.ValidFrom.AddMinutes(15));

        e.SessionToken = token;
    }
}

This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.

There are other places where this could occur like within the STS itself, however this value may need to be independent of the STS.

As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.

Windows Azure Access Control Services Federation with Facebook

Sometime in the last few years Facebook has gotten stupidly popular.  Given the massive user base, it actually makes a little bit of sense to take advantage of the fact that you can use them as an identity provider.  Everyone has a Facebook account (except… me), and you can get a fair bit of information about the user out of it.

The problem though is that it uses OAuth, and I, of course, don't like OAuth.  This makes it very unlikely for me to spend any amount of time working with the protocol, and as such I wouldn't jump at the chance to add it into an application.  Luckily ACS supports Facebook natively – AND it's easy to set up.

First things first, we need to log into our ACS management portal, and select Identity Providers under Trust Relationships.  Then we need to add a new Identity Provider:

image

Then we need to select Facebook as the type we want to add:

image

Once we start filling out the details for the federation we need to get some things from Facebook directly.

image

There are three fields we need to worry about, Application ID, Application secret, and Application permissions.  We can get the first two from the settings page of our Facebook application, which you can get to at www.facebook.com/developers/.

You should create a separate application for each instance you create, and I'll explain why in a minute.

You then need the Application permissions.  This is a list of claims to request access to from Facebook.  The full list can be found here: http://developers.facebook.com/docs/authentication/permissions/, but for now email will suffice.

Once you have saved this identity provider you need to create a rule for each relying party.  This will define how the claims are transformed before being sent to your relying party. If you already have rules set up you can modify one:

image

I'm pretty content with just using the default rules, which is to just pass everything, but you need to generate them first:

image

image

Once the rules have been generated you can save them.

Now you can test the federation.

It should fail.

If you watched everything in Fiddler you will see a chunk of JSON returned that looks something like:

{
   "error": {
      "type": "OAuthException",
      "message": "Invalid redirect_uri: Given URL is not allowed by the Application configuration."
   }
}

This relates to my warning earlier about creating a separate application for each ACS namespace.  Basically, Facebook doesn't like the request for authentication because it has no idea who the requestor is.  Therefore I need to tell Facebook about my application.

To do this you need to get into the Web site settings for your Facebook application:

image

You will need to set the Site URL property to the ACS namespace:

image

Given the requirement for the FQDN, you need to create an application for each namespace you decide to create.

At this point federation with Facebook should now work.  If you are using the default login page you should see something like this:

image

And if you sign in you should get a token from Facebook which ACS will normalize and then return to your relying party.  Based on the permissions you requested above you should see something like this:

image

** UPDATE **

Some of you may be wondering about this AccessToken claim.  Part of the ACS configuration asks for a set of permissions to request, and these permissions are tied to this access token.  Instead of receiving everything within claims, you need to make a separate call to Facebook to get these details by using the access token.
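
As a rough sketch, the call might look something like this (the claim type URI for the access token is an assumption; check the claims you actually receive from ACS):

// Hypothetical sketch: grab the access token claim issued by ACS and ask the
// Facebook Graph API for the extra details. Requires System.Linq and System.Net.
IClaimsIdentity identity = (IClaimsIdentity)((IClaimsPrincipal)Thread.CurrentPrincipal).Identity;

string accessToken = identity.Claims
    .First(c => c.ClaimType == "http://www.facebook.com/claims/AccessToken")
    .Value;

using (WebClient client = new WebClient())
{
    // Returns a JSON blob containing whatever the requested permissions expose (e.g. email)
    string json = client.DownloadString(
        "https://graph.facebook.com/me?access_token=" + Uri.EscapeDataString(accessToken));
}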

Dominick Baier has a good article explaining how to accomplish this: http://www.leastprivilege.com/AccessControlServiceV2AndFacebookIntegration.aspx.

** END UPDATE **

For those of you who want to federate with Facebook but don't like the idea of writing OAuth goo, ACS easily simplifies the process.

Authentication in an Active Claims Model

When working with Claims Based Authentication a lot of things are similar between the two different models, Active and Passive.  However, there are a few cases where things differ… a lot.  The biggest of course being how a Request for Security Token (RST) is authenticated.  In a passive model the user is given a web page where they essentially have free rein over how credentials are handled.  Once the credentials have been received and authenticated by the web server, the server generates an identity and passes it off to SecurityTokenService.Issue(…), which does its thing by gathering claims, packaging them up into a token, and POST’ing the token back to the Relying Party.

Basically we are handling authentication the same way any other ASP.NET application would: by using the Membership provider and funnelling all anonymous users to the login page, and then redirecting back to the STS.  To hand off to the STS, we can just call:

FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
HttpContext.Current.Request, 
HttpContext.Current.User, 
MyTokenServiceConfiguration.Current.CreateSecurityTokenService(), 
HttpContext.Current.Response); 

However, it’s a little different with the active model.

Web services manage identity via tokens, but they differ from the passive model because everything is passed via tokens, including credentials.  The client collects the credentials and packages them into a SecurityToken object, which is serialized and passed to the STS.  The STS deserializes the token and passes it off to a SecurityTokenHandler.  This security token handler validates the credentials, generates an identity, and pushes it up the call stack to the STS.
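
For context, here is a rough sketch of what the client side of that exchange could look like using WIF’s WS-Trust classes (the addresses are placeholders, and treat the exact binding and constant names as assumptions that may vary between WIF versions):

// Client-side sketch of an active RST carrying a username/password credential.
// Namespaces assumed: Microsoft.IdentityModel.Protocols.WSTrust (and .Bindings),
// System.ServiceModel, System.ServiceModel.Security, System.IdentityModel.Tokens.
UserNameWSTrustBinding binding = new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential);

WSTrustChannelFactory factory = new WSTrustChannelFactory(
    binding, new EndpointAddress("https://yoursts/TokenService/activeSTS.svc"))
{
    TrustVersion = TrustVersion.WSTrust13
};

factory.Credentials.UserName.UserName = "someuser";
factory.Credentials.UserName.Password = "somepassword";

RequestSecurityToken rst = new RequestSecurityToken
{
    RequestType = WSTrust13Constants.RequestTypes.Issue,
    AppliesTo = new EndpointAddress("https://myrp/")
};

// The credentials travel inside the serialized request; the STS's token handler validates them
SecurityToken issuedToken = factory.CreateChannel().Issue(rst);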

Much like with ASP.NET, there is a built-in token handler that validates username/password pairs against the Membership provider, but you are limited to the basic functionality of the provider.  90% of the time, this is probably just fine.  Other times you may need to create your own SecurityTokenHandler.  It’s actually not that hard to do.

First you need to know what sort of token is being passed across the wire.  The big three are:

  • UserNameSecurityToken – Has a username and password pair
  • WindowsSecurityToken – Used for Windows authentication using NTLM or Kerberos
  • X509SecurityToken – Uses x509 certificate for authentication

Each is pretty self explanatory.

Some others out of the box are:

image

Reflector is an awesome tool.  Just sayin’.

Now that we know what type of token we are expecting we can build the token handler.  For the sake of simplicity let’s create one for the UserNameSecurityToken.

To do that we create a new class derived from Microsoft.IdentityModel.Tokens.UserNameSecurityTokenHandler.  We could start at SecurityTokenHandler, but it’s an abstract class and requires a lot to get it working.  Suffice to say it’s mostly boilerplate code.

We now need to override a method and property: ValidateToken(SecurityToken token) and TokenType.

TokenType is used later on to tell what kind of token the handler can actually validate.  More on that in a minute.

Overriding ValidateToken is fairly trivial*.  This is where we actually handle the authentication.  However, it returns a ClaimsIdentityCollection instead of bool, so if the credentials are invalid we need to throw an exception.  I would recommend the SecurityTokenValidationException.  Once the authentication is done we get the identity for the credentials and bundle them up into a ClaimsIdentityCollection.  We can do that by creating an IClaimsIdentity and passing it into the constructor of a ClaimsIdentityCollection.

public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
{
    UserNameSecurityToken userToken = token as UserNameSecurityToken;

    if (userToken == null)
        throw new ArgumentNullException("token");

    string username = userToken.UserName;
    string pass = userToken.Password;

    if (!Membership.ValidateUser(username, pass))
        throw new SecurityTokenValidationException("Username or password is wrong.");

    IClaimsIdentity ident = new ClaimsIdentity();
    ident.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, username));

    return new ClaimsIdentityCollection(new IClaimsIdentity[] { ident });
}

Next we need to set the TokenType:

public override Type TokenType
{
    get
    {
        return typeof(UserNameSecurityToken);
    }
}

This property is used as a way to tell its calling parent that it can validate/authenticate any tokens of the type it returns.  The web service that acts as the STS loads a collection of SecurityTokenHandlers as part of its initialization, and when it receives a token it iterates through the collection looking for one that can handle it.

To add the handler to the collection you add it via configuration, or, if you are crazy enough to be doing a lot of low-level work, you can add it to the SecurityTokenServiceConfiguration in the HostFactory for the service:

securityTokenServiceConfiguration.SecurityTokenHandlers.Add(new MyAwesomeUserNameSecurityTokenHandler());

To add it via configuration you first need to remove any other handlers that can validate the same type of token:

<microsoft.identityModel>
  <service>
    <securityTokenHandlers>
      <remove type="Microsoft.IdentityModel.Tokens.WindowsUserNameSecurityTokenHandler,
        Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <remove type="Microsoft.IdentityModel.Tokens.MembershipUserNameSecurityTokenHandler,
        Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      <add type="Syfuhs.IdentityModel.Tokens.MyAwesomeUserNameSecurityTokenHandler, Syfuhs.IdentityModel" />
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>

That’s pretty much all there is to it.  Here is the class for the sake of completeness:

using System;
using System.IdentityModel.Tokens;
using System.Web.Security;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSIdentity;
using Microsoft.IdentityModel.Tokens;

namespace Syfuhs.IdentityModel.Tokens
{
    public class MyAwesomeUserNameSecurityTokenHandler : UserNameSecurityTokenHandler
    {
        public override bool CanValidateToken { get { return true; } }

        public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
        {
            UserNameSecurityToken userToken = token as UserNameSecurityToken;

            if (userToken == null)
                throw new ArgumentNullException("token");

            string username = userToken.UserName;
            string pass = userToken.Password;

            if (!Membership.ValidateUser(username, pass))
                throw new SecurityTokenValidationException("Username or password is wrong.");

            IClaimsIdentity ident = new ClaimsIdentity();
            ident.Claims.Add(new Claim(WSIdentityConstants.ClaimTypes.Name, username));

            return new ClaimsIdentityCollection(new IClaimsIdentity[] { ident });
        }
    }
}

* Trivial in the development sense, not trivial in the security sense.

Generating Federation Metadata Dynamically

In a previous post we looked at what it takes to actually write a Security Token Service.  If we knew what the STS offered and required already, we could set up a relying party relatively easily with that setup.  However, we don’t always know what is going on.  That’s the purpose of federation metadata.  It gives us a basic breakdown of the STS so we can interact with it.

Now, if we are building a custom STS we don’t have anything that is creating this metadata.  We could do it manually by hardcoding stuff in an xml file and then signing it, but that gets ridiculously tedious after you have to make changes for the third or fourth time – which will happen.  A lot.  The better approach is to generate the metadata automatically.  So in this post we will do just that.

The first thing you need to do is create an endpoint.  There is a well-known path of /FederationMetadata/2007-06/FederationMetadata.xml that is generally used, so let’s use that.  There are a lot of options for generating dynamic content, and in Programming Windows Identity Foundation, Vittorio uses a WCF Service:

[ServiceContract]
public interface IFederationMetadata
{
    [OperationContract]
    [WebGet(UriTemplate = "2007-06/FederationMetadata.xml")]
    XElement FederationMetadata();
}

It’s a great approach, but for some reason I prefer the way that Dominick Baier creates the endpoint in StarterSTS.  He uses an IHttpHandler and a web.config entry to create a handler:

<location path="FederationMetadata/2007-06">
  <system.webServer>
    <handlers>
      <add
        name="MetadataGenerator"
        path="FederationMetadata.xml"
        verb="GET"
        type="Syfuhs.TokenService.WSTrust.FederationMetadataHandler" />
    </handlers>
  </system.webServer>
  <system.web>
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
</location>

As such, I’m going to go that route.  Let’s take a look at the implementation for the handler:

using System.Web;

namespace Syfuhs.TokenService.WSTrust
{
    public class FederationMetadataHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ClearHeaders();

            context.Response.Clear();
            context.Response.ContentType = "text/xml";

            MyAwesomeTokenServiceConfiguration.Current.SerializeMetadata(context.Response.OutputStream);
        }

        public bool IsReusable { get { return false; } }
    }
}

All the handler is doing is writing metadata out to a stream, which in this case is the response stream.  You can see that it is doing this through the MyAwesomeTokenServiceConfiguration class which we created in the previous article.  The SerializeMetadata method creates an instance of a MetadataSerializer and writes an entity to the stream:

public void SerializeMetadata(Stream stream)
{
    MetadataSerializer serializer = new MetadataSerializer();
    serializer.WriteMetadata(stream, GenerateEntities());
}

The entities are generated through a collection of tasks:

private EntityDescriptor GenerateEntities()
{
    if (entity != null)
        return entity;

    SecurityTokenServiceDescriptor sts = new SecurityTokenServiceDescriptor();

    FillOfferedClaimTypes(sts.ClaimTypesOffered);

    FillEndpoints(sts);
    FillSupportedProtocols(sts);
    FillSigningKey(sts);

    entity = new EntityDescriptor(new EntityId(string.Format("https://{0}", host)))
    {
        SigningCredentials = this.SigningCredentials
    };

    entity.RoleDescriptors.Add(sts);

    return entity;
}

The entity is generated, and an object is created to describe the STS called a SecurityTokenServiceDescriptor.  At this point it’s just a matter of sticking in the data and defining the credentials used to sign the metadata:

private void FillSigningKey(SecurityTokenServiceDescriptor sts)
{
    KeyDescriptor signingKey = new KeyDescriptor(this.SigningCredentials.SigningKeyIdentifier)
    {
        Use = KeyType.Signing
    };

    sts.Keys.Add(signingKey);
}

private void FillSupportedProtocols(SecurityTokenServiceDescriptor sts)
{
    sts.ProtocolsSupported.Add(new System.Uri(WSFederationConstants.Namespace));
}

private void FillEndpoints(SecurityTokenServiceDescriptor sts)
{
    EndpointAddress activeEndpoint = new EndpointAddress(string.Format("https://{0}/TokenService/activeSTS.svc", host));

    sts.SecurityTokenServiceEndpoints.Add(activeEndpoint);
    sts.TargetScopes.Add(activeEndpoint);
}

private void FillOfferedClaimTypes(ICollection<DisplayClaim> claimTypes)
{
    claimTypes.Add(new DisplayClaim(ClaimTypes.Name, "Name", ""));
    claimTypes.Add(new DisplayClaim(ClaimTypes.Email, "Email", ""));
    claimTypes.Add(new DisplayClaim(ClaimTypes.Role, "Role", ""));
}

That in a nutshell is how to create a basic metadata document as well as sign it.  There is a lot more information you can put into this, and you can find more things to work with in the Microsoft.IdentityModel.Protocols.WSFederation.Metadata namespace.
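
For example, if your STS also had a passive sign-in page, you could advertise it in the metadata as well. A hedged sketch, added inside FillEndpoints (the URL is a placeholder):

// Advertise a passive (browser-based) endpoint alongside the active one.
EndpointAddress passiveEndpoint = new EndpointAddress(string.Format("https://{0}/TokenService/passiveSTS.aspx", host));
sts.PassiveRequestorEndpoints.Add(passiveEndpoint);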

The Basics of Building a Security Token Service

Last week at TechDays in Toronto I ran into a fellow I worked with while I was at Woodbine.  He works with a consulting firm Woodbine uses, and he caught my session on Windows Identity Foundation.  His thoughts were (essentially—paraphrased) that the principle of Claims Authentication was sound and a good idea, however implementing it requires a major investment.  Yes.  Absolutely.  You will essentially be adding a new tier to the application.  Hmm.  I’m not sure if I can get away with that analogy.  It will certainly feel like you are adding a new tier anyway.

What strikes me as the main investment is the Security Token Service.  When you break it down, there are a lot of moving parts in an STS.  In a previous post I asked what it would take to create something similar to ADFS 2.  I said it would be fairly straightforward, and broke down the parts as well as what would be required of them.  I listed:

  • Token Services
  • A Windows Authentication end-point
  • An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
  • An application management tool (MMC snap-in and PowerShell cmdlets)
  • Proxy Services (Allows requests to pass NAT’ed zones)

These aren’t all that hard to develop.  With the exception of the proxy services and token service itself, there’s a good chance we have created something similar to each one if user authentication is part of an application.  We have the authentication endpoint: a login form to do SQL Authentication, or the Windows Authentication Provider for ASP.NET.  We have the attribute store and something like a claims mapper: Active Directory, SQL databases, etc.  We even have an application management tool: anything you used to manage users in the first place.  This certainly doesn’t get us all the way there, but they are good starting points.

Going back to my first point, the STS is probably the biggest investment.  However, it’s kind of trivial to create an STS using WIF.  I say that with a big warning though: an STS is a security system.  Securing such a system is NOT trivial.  Writing your own STS probably isn’t the best way to approach this.  You would probably be better off to use an STS like ADFS.  With that being said it’s good to know what goes into building an STS, and if you really do have the proper resources to develop one, as well as do proper security testing (you probably wouldn’t be reading this article on how to do it in that case…), go for it.

For the sake of simplicity I’ll be going through the Fabrikam Shipping demo code since they did a great job of creating a simple STS.  The fun bits are in the Fabrikam.IPSts project under the Identity folder.  The files we want to look at are CustomSecurityTokenService.cs, CustomSecurityTokenServiceConfiguration.cs, and the default.aspx code file.  I’m not sure I like the term “configuration”, as the way this is built strikes me as factory-ish.

image

The process is pretty simple.  A request is made to default.aspx which passes the request to FederatedPassiveSecurityTokenServiceOperations.ProcessRequest() as well as a newly instantiated CustomSecurityTokenService object by calling CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService().
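
As a rough sketch (paraphrasing the sample rather than quoting it), the code-behind of default.aspx boils down to something like this:

// Hand the incoming WS-Federation request to WIF along with a freshly created STS instance.
// Mirrors the FederatedPassiveSecurityTokenServiceOperations call shown in the active/passive discussion above.
protected void Page_Load(object sender, EventArgs e)
{
    FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
        Request,
        User,
        CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService(),
        Response);
}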

The configuration class contains configuration data for the STS (hence the name), like the signing certificate, but it also instantiates an instance of the STS using the configuration.  The code for it is simple:

namespace Microsoft.Samples.DPE.Fabrikam.IPSts
{
    using Microsoft.IdentityModel.Configuration;
    using Microsoft.IdentityModel.SecurityTokenService;

    internal class CustomSecurityTokenServiceConfiguration : SecurityTokenServiceConfiguration
    {
        private static CustomSecurityTokenServiceConfiguration current;

        private CustomSecurityTokenServiceConfiguration()
        {
            this.SecurityTokenService = typeof(CustomSecurityTokenService);
            this.SigningCredentials = new X509SigningCredentials(this.ServiceCertificate);
            this.TokenIssuerName = "https://ipsts.fabrikam.com/";
        }

        public static CustomSecurityTokenServiceConfiguration Current
        {
            get
            {
                if (current == null)
                {
                    current = new CustomSecurityTokenServiceConfiguration();
                }

                return current;
            }
        }
    }
}

It has a base type of SecurityTokenServiceConfiguration and all it does is set the custom type for the new STS, the certificate used for signing, and the issuer name.  It then lets the base class handle the rest.  Then there is the STS itself.  It’s dead simple.  The custom class has a base type of SecurityTokenService and overrides a couple methods.  The important method it overrides is GetOutputClaimsIdentity():

protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    Claim name = inputIdentity.Claims.Single(claim => claim.ClaimType == ClaimTypes.Name);
    Claim email = new Claim(ClaimTypes.Email, Membership.Provider.GetUser(name.Value, false).Email);
    string[] roles = Roles.Provider.GetRolesForUser(name.Value);

    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);
    issuedIdentity.Claims.Add(email);

    foreach (var role in roles)
    {
        var roleClaim = new Claim(ClaimTypes.Role, role);
        issuedIdentity.Claims.Add(roleClaim);
    }

    return issuedIdentity;
}

It gets the authenticated user, grabs all the roles from the RolesProvider, and generates a bunch of claims then returns the identity.  Pretty simple.

At this point you’ve just moved the authentication and Roles stuff away from the application.  Nothing has really changed data-wise.  If you only cared about roles, name, and email you are done.  If you needed something more you could easily add in the logic to grab the values you needed. 
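
If you did need more, a hypothetical extra claim pulled from your own data store could be added inside GetOutputClaimsIdentity like this (the repository and claim type URI below are made up for illustration):

// Hypothetical: look up extra data for the user and issue it as a claim.
// CustomerRepository and the claim type URI are not part of the Fabrikam sample.
string region = CustomerRepository.GetRegionFor(name.Value);
issuedIdentity.Claims.Add(new Claim("http://schemas.fabrikam.com/claims/region", region));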

By no means is this production ready, but it is a good basis for how the STS creates claims.

Token Request Validation in ASP.NET

Earlier this week during my TechDays presentation on Windows Identity Foundation, there was a part during the demo that I said would fail miserably after the user was authenticated and the token was POST’ed back to the relying party.  Out of the box, ASP.NET does request validation.  If a user has submitted content through request parameters it goes through a validation step, and by default this step is to break on anything funky such as angle brackets.  This helps to deter things like cross site scripting attacks.  However, we were passing XML so we needed to turn off this validation.  There are two approaches to doing this.

The first approach, which is what I did in the demo, was to set the validation mode to “2.0”.  All this did was tell ASP.NET to use a less strict validation scheme.  To do that you need to add a line to the web.config file:

<system.web>
  <httpRuntime requestValidationMode="2.0" />
</system.web>

This is not the best way to do things though.  It creates a new attack vector, as you’ve just allowed an endpoint to accept unvalidated data.  The preferred approach is to create a custom request validator.  You can find a great example in the Fabrikam Shipping demo.

It’s pretty straightforward to create a validator.  First you create a class that inherits System.Web.Util.RequestValidator, and then you override the method IsValidRequestString(…).  At that point you can do anything you want to validate, but the demo code tries to build a SignInResponseMessage object from the wresult parameter.  If it creates the object successfully the request is valid.  Otherwise it passes the request to the base implementation of IsValidRequestString(…).

The code to handle this validation is pretty straightforward:

    public class WSFederationRequestValidator : RequestValidator
    {
        protected override bool IsValidRequestString(HttpContext context,
            string value, RequestValidationSource requestValidationSource, 
            string collectionKey, out int validationFailureIndex)
        {
            validationFailureIndex = 0;

            if (requestValidationSource == RequestValidationSource.Form
                && collectionKey.Equals(WSFederationConstants.Parameters.Result, 
                   StringComparison.Ordinal))
            {
                SignInResponseMessage message =
                     WSFederationMessage.CreateFromFormPost(context.Request) 
                     as SignInResponseMessage;

                if (message != null)
                {
                    return true;
                }
            }

            return base.IsValidRequestString(context, value, requestValidationSource,
                   collectionKey, out validationFailureIndex);
        }
    }

Once you’ve created your request validator, you need to update the web.config file to tell .NET to use the validator.  You can do that by adding the following xml:

<system.web>
  <httpRuntime requestValidationType="Microsoft.Samples.DPE.FabrikamShipping.Web.Security.WSFederationRequestValidator" />
</system.web>

You can find the validation code in FabrikamShipping.Web\Security\WSFederationRequestValidator.cs within the FabrikamShipping solution.

Using Claims Based Identities with SharePoint 2010

When SharePoint 2010 was developed, Microsoft took extra care to include support for a claims-based identity model.  There are quite a few benefits to doing it this way, one of which is that it simplifies managing identities across organizational structures.  So let’s take a look at adding a Security Token Service as an Authentication Provider to SharePoint 2010.

First, Some Prerequisites

  • You have to use PowerShell for most of this.  You wouldn’t/shouldn’t be adding too many Providers to SharePoint all that often so there isn’t a GUI for this.
  • The claims that SharePoint will know about must be known during setup.  This isn’t that big a deal, but…

Telling SharePoint about the STS

Once you’ve collected all the information you need, open up PowerShell as an Administrator and add the SharePoint snap-in on the server.

Add-PSSnapin Microsoft.SharePoint.PowerShell

Next we need to create the certificate and claim mapping objects:

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("d:\path\to\adfsCert.cer")

$claim1 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming

$claim2 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming

There should be three lines.  They will be word-wrapped.

The certificate is pretty straightforward.  It is the public key of the STS.  The claims are also pretty straightforward.  There are two claims: the roles of the identity, and the email address of the identity.  You can add as many as the STS will support.

Next is to define the realm of the Relying Party; i.e. the SharePoint server.

$realm = "urn:" + $env:ComputerName + ":adfs"

By using a URN value you can mitigate future changes to addresses.  This becomes especially useful in an intranet/extranet scenario.

Then we define the sign-in URL for the STS.  In this case, we are using ADFS:

$signinurl = "https://[myAdfsServer.fullyqualified.domainname]/adfs/ls/"

Mind the SSL.

And finally we put it all together:

New-SPTrustedIdentityTokenIssuer -Name "MyDomainADFS2" -Description "ADFS 2 Federated Server for MyDomain" -Realm $realm -ImportTrustCertificate $cert -ClaimsMappings $claim1,$claim2 -SignInUrl $signinurl -IdentifierClaim $claim2.InputClaimType

This should be a single line, word wrapped.  If you wanted to you could just call New-SPTrustedIdentityTokenIssuer and then fill in the values one at a time.  This might be useful for debugging.

At this point SharePoint now knows about the STS but none of the sites are set up to use it.

Authenticating SharePoint sites using the STS

For a good measure restart SharePoint/IIS.  Go into SharePoint Administration and create a new website and select Claims Based Authentication at the top:

image

Fill out the rest of the details and then when you get to Claims Authentication Types select Trusted Identity Provider and then select your STS.  In this case it is my ADFS Server:

image

Save the site and you are done.  Try navigating to the site and it should redirect you to your STS.  You can then manage users as you would normally with Active Directory accounts.

Modifying and Securing the ADFS 2 Web Application

When you install an instance of Active Directory Federation Services v2, amongst other things it will create a website within IIS to use as its Security Token Service.  This is sort of fundamental to the whole design.  There are some interesting things to note about the situation though.

When Microsoft (or any ISV really) releases a new application or server that has a website attached to it, they usually deliver it in a precompiled form, so all we do is point IIS to the binaries and config files and we go from there.  This serves a number of purposes usually along the lines of performance, Intellectual Property protection, defense in depth protection, etc.  Interestingly though, when the installer creates the application for us in IIS, it drops source code instead of a bunch of assemblies.

There is a valid reason for this.

It gives us the opportunity to do a couple things.  First, we can inspect the code.  Second, we can easily modify the code.  Annoyingly, they don’t give us a Visual Studio project to do so.  Let’s create one then.

First off, let’s take a look at what was created by the installer.  By default it drops the files in c:\inetpub\adfs\ls.  We are given a few files and folders:

image

There isn’t much to it.  These files only contain a few lines of code.  Next we create the actual project.

DISCLAIMER:  I will not be held responsible if things break or the server steals your soul.  Please do NOT (I REPEAT) do NOT do this with production servers please!  (Notice I said please twice?)

Since we want to create a Visual Studio project, and since ADFS cannot be installed on a workstation, we have two options:

  1. Install Visual Studio on the server running ADFS
  2. Copy the files to your local machine

Each option has its tradeoffs.  The first requires a bit of a major overhaul of your development environment.  It’s very similar to SharePoint 2007 development.  The second option makes developing a lot easier, but testing is a pain because the thing won’t actually work properly without the Windows Services running.  You would need to deploy the code to a test server with ADFS installed.

Since I have little interest in rebuilding my development box, I went with the second option.

Okay, back to Visual Studio.  The assemblies referenced were all built on Framework 3.5, so for the sake of simplicity let’s create a 3.5 Web Application:

image

I haven’t tested 4.0 yet.

Since this is a Web Application and not a Web Site within Visual Studio, we need to generate the *.designer.cs files for all the *.aspx pages.  Right-click your project and select Convert to Web Application:

image

At this point if you tried to compile the application it wouldn’t work.  We are missing a few assembly references.  First, add Microsoft.IdentityModel.  This should be in the GAC or the Reference Assemblies folder in Program Files.  Next, go back to the ADFS server and navigate to C:\Program Files\Active Directory Federation Services 2.0 and copy the following files:

  • Microsoft.IdentityServer.dll
  • Microsoft.IdentityServer.Compression.dll

Add these assemblies as references.  The web application should compile successfully.

Next we need to sign the web application’s assemblies.  If you have internal policies on assembly signing, follow those.  Otherwise double-click the properties section in Solution Explorer and navigate to Signing:

image

Choose a key file or create a new one.  Rebuild the web application.

So far we haven’t touched a line of code.  This is all general deployment stuff.  You can deploy the web application back to the ADFS server and it should work as if nothing had changed.  You have a few options for this.  The Publishing Features in Visual Studio 2010 are awesome.  Right click the project and Publish it:

image

Since I set up a test box for ADFS development, I’m just going to overwrite the files on the server:

image

Pro Tip: If you do something terrible and need to revert back to original code (what part of don’t do this on a production box didn’t make sense?) you can access the original files from C:\Program Files\Active Directory Federation Services 2.0\WSFederationPassive.Web.

At this point we haven’t done much, but we now have a stepping point to modify the default behavior of ADFS.  This could range from simple theme changes to better suit corporate policy, or to completely redefine the authentication workflow.

This also gives us the ability to better protect our code in the event that IIS craps out and shows contents of files, not to mention the (albeit minor) performance boost we get because the website doesn’t need to be recompiled.

Have fun!

Converting Bootstrap Tokens to SAML Tokens

There comes a point where using an eavesdropping application to catch packets as they fly between Security Token Services and Relying Parties becomes tiresome.  For me it came when I decided to give up on creating a man-in-the-middle between SSL sessions between ADFS and applications.  Mainly because ADFS doesn’t like that.  At all.

Needless to say I wanted to see the tokens.  Luckily, Windows Identity Foundation has the solution by way of the bootstrap token.  To understand what it is, consider how this whole process works.  Once you’ve authenticated, the STS will POST a chunk of XML (the SAML Token) back to the RP.  WIF will interpret it as necessary and do its magic, generating a new principal with the payload.  However, in some instances you need to keep this token intact.  This would be the case if you were creating a web service and needed to forward the token.  What WIF does is generate a bootstrap token from the SAML token, in the event you need to forward it off to somewhere.

Before taking a look at it, let's add in some useful using statements:

using System;
using System.IdentityModel.Tokens;
using System.Text;
using System.Threading;
using System.Xml;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Tokens;
using Microsoft.IdentityModel.Tokens.Saml11;

The bootstrap token is attached to the identity on the IClaimsPrincipal:

SecurityToken bootstrapToken = ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities[0].BootstrapToken;

However if you do this out of the box, BootstrapToken will be null.  By default, WIF will not save the token.  We need to explicitly enable this in the web.config file.  Add this line under <microsoft.IdentityModel><service><securityTokenHandlers>:

<securityTokenHandlerConfiguration saveBootstrapTokens="true" />

Once you’ve done that, WIF will load the token.

The properties are fairly straightforward, but you can’t just get a blob from it:

image

Luckily we have some code to convert from the bootstrap token to a chunk of XML:

SecurityToken bootstrapToken = ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities[0].BootstrapToken;

StringBuilder sb = new StringBuilder();

using (var writer = XmlWriter.Create(sb))
{
     new Saml11SecurityTokenHandler(new SamlSecurityTokenRequirement()).WriteToken(writer, bootstrapToken);
}

string theXml = sb.ToString();

We get a proper XML document:

image

That’s all there is to it.

Installing ADFS 2 and Federating an Application

From Microsoft Marketing, ADFS 2.0 is:

Active Directory Federation Services 2.0 helps IT enable users to collaborate across organizational boundaries and easily access applications on-premises and in the cloud, while maintaining application security. Through a claims-based infrastructure, IT can enable a single sign-on experience for end-users to applications without requiring a separate account or password, whether applications are located in partner organizations or hosted in the cloud.

So, it’s a Token Service plus some.  In a previous post I had said:

In other words it is a method for centralizing user Identity information, very much like how the Windows Live and OpenID systems work.  The system is reasonably simple.  I have a Membership data store that contains user information.  I want (n) number of websites to use that membership store, EXCEPT I don’t want each application to have direct access to membership data such as passwords.  The way around it is through claims.

The membership store in this case being Active Directory.

I thought it would be a good idea to run through how to install ADFS and set up an application to use it.  Since we already discussed how to federate an application using FedUtil.exe, I will let you go through the steps in the previous post.  I will provide information on where to find the Metadata later on in this post.

But First: The Prerequisites

  1. Join the Server to the Domain. (I’ve started the installation of ADFS three times on non-domain joined systems.  Doh!)
  2. Install the latest .NET Framework.  I’m kinda partial to using SmallestDotNet.com created by Scott Hanselman.  It’s easy.
  3. Install IIS.  If you are running Server 2008 R2 you can follow these steps in another post, or just go through the wizards.  FYI: The post installs EVERY feature.  Just remember that when you move to production.  Surface Area and what not…
  4. Install PowerShell.
  5. Install the Windows Identity Foundation: http://www.microsoft.com/downloads/details.aspx?FamilyID=eb9c345f-e830-40b8-a5fe-ae7a864c4d76&displaylang=en
  6. Install SQL Server.  This is NOT required.  You only need to install it if you want to use a SQL Database to get custom Claims data.  You could also use a SQL Server on another server…
  7. Download ADFS 2.0 RTW: http://www.microsoft.com/downloads/details.aspx?familyid=118c3588-9070-426a-b655-6cec0a92c10b&displaylang=en

The Installation

image

Read the terms and accept them.  If you notice, you only have to read half of what you see because the rest is in French.  Maybe the lawyers are listening…these things are getting more readable.

image

Select Federation Server.  A Server Proxy allows you to use ADFS on a web server not joined to the domain.

image

We already installed all of these things.  When you click next it will check for latest hotfixes and ask if you want to open the configuration MMC snap-in.  Start it.

image

We want to start the configuration Wizard and then create a new Federation Service:

image

Next we want to create a Stand-alone federation server:

image

We need to select a certificate for ADFS to use.  By default it uses the SSL certificate of the default site in IIS.  So let’s add one.  In the IIS Manager select the server and then select Server Certificates:

image

We have a couple options when it comes to adding a certificate.  For the sake of this post I’ll just create a self-signed certificate, but if you have a domain Certificate Authority you could go that route, or if this is a public facing service create a request and get a certificate from a 3rd party CA.

image

Once we’ve created the certificate we assign it to the web site.  Go to the website and select Bindings…

image

Add a site binding for https:

image

Now that we’ve done that we can go back to the Configuration Wizard:

image

Click next and it will install the service.  It will stop IIS so be aware of that.

image

You may receive this error if you are installing on Server 2008:

image

The fix for this is here: http://www.syfuhs.net/2010/07/23/ADFS20WindowsServiceNotStartingOnServer2008.aspx

You will need to re-run the configuration wizard if you do this.  It may complain about the virtual applications already existing.  You have two options: 1) delete the applications in IIS as well as the folder C:\inetpub\adfs; or 2) ignore the warning.

Back to the installation, it will create two new Virtual Applications in IIS:

image

Once the wizard finishes you can go back to the MMC snap-in and fiddle around.  The first thing we need to do is create an entry for a Relying Party.  This will allow us to create a web application to work with it.

image

When creating an RP we have a couple options to provide configuration data.

image

Since we are going to create a web application from scratch we will enter in manual data.  If you already have the application built and have Federation Metadata available for it, by all means just use that.

We need a name:

image

Very original, eh?

Next we need to decide on what profile we will be using.  Since we are building an application from scratch we can take advantage of the 2.0 profile, but if we needed backwards compatibility for a legacy application we should select the 1.0/1.1 profile.

image

Next we specify the certificate to encrypt our claims sent to the application.  We only need the public key of the certificate.  When we run FedUtil.exe we can specify which certificate we want to use to decrypt the incoming tokens.  This will be the private key of the same certificate.  For the sake of this, we’ll skip it.

image

The next step gets a little confusing.  It asks which protocols we want to use if we are federating with a separate STS.  In this case since we aren’t doing anything that crazy we can ignore them and continue:

image

We next need to specify the RP’s identifying URI.

image

Allow anyone and everyone, or deny everyone and add specific users later?  Allow everyone…

image

When we finish we want to edit the claim rules:

image

This dialog will allow us to add mappings between claims and the data within Active Directory:

image

So let’s add a rule.  We want to Send LDAP Attributes as Claims:

image

First we specify what data in Active Directory we want to provide:

image

Then we specify which claim type to use:

image

And ADFS is configured!  Let’s create our Relying Party.  You can follow these steps: Making an ASP.NET Website Claims Aware with the Windows Identity Foundation.  To get the Federation Metadata for ADFS navigate to the URL that the default website is mapped to + /FederationMetadata/2007-06/FederationMetadata.xml.  In my case it’s https://web1.nexus.internal.test/FederationMetadata/2007-06/FederationMetadata.xml.

Once you finish the utility it’s important that we tell ADFS that our new RP has Metadata available.  Double click on the RP to get to the properties.  Select Monitoring:

image

Add the URL for the Metadata and select Monitor relying party.  This will periodically call up the URL and download the metadata in the event that it changes.

At this point we can test.  Hit F5 and we will redirect to the ADFS page.  It will ask for domain credentials and redirect back to our page.  Since I tested it with a domain admin account I got this back:

image

It works!

For more information on ADFS 2.0 check out http://www.microsoft.com/windowsserver2008/en/us/ad-fs-2-overview.aspx or the WIF Blog at http://blogs.msdn.com/b/card/

Happy coding!