Salesforce.com Single Sign On using ADFS v2

For the last few years ObjectSharp has been using Salesforce.com to help manage parts of the business.  As business increased, our reliance on Salesforce increased.  More and more users started getting added, and as all stories go, these accounts became one more burden to manage.

This is the universal identity problem – too many user accounts for the same person.  As such, one of my internal goals here is to simplify identity at ObjectSharp.

While working on another internal project with Salesforce I got to thinking about how it manages users.  It turns out Salesforce allows you to set it up as a SAML relying party, and ADFS v2 supports being a SAML IdP.  Theoretically we have both sides of the puzzle, but how does it work?

Well, first things first.  I checked out the security section of the configuration portal:

[Screenshot: the security section of the Salesforce configuration portal]

There was a Single Sign-On section, so I followed that and was given a pretty simple screen:

[Screenshot: the Salesforce Single Sign-On settings screen]

There isn’t much here to set up.  Going down the options, here is what I came up with:

SAML Version

I know from previous experience that ADFS supports version 2 of the SAML Protocol.

Issuer

What is the URI of the IdP, which in this case is going to be ADFS?  Within the ADFS MMC snap-in, if you right-click the Service node you can access the properties:

[Screenshot: the context menu for the Service node in the ADFS MMC snap-in]

In the properties dialog there is a textbox allowing you to change the Federation Service Identifier:

[Screenshot: the Federation Service Properties dialog with the Federation Service Identifier field]

We want that URI.

Within Salesforce we set the Issuer to the identifier URI.

Identity Provider Certificate

Salesforce can’t just go and accept any token; it should only accept tokens from my organization.  Therefore I upload the public key of the certificate ADFS uses to sign my tokens.  You can access that certificate by going to ADFS and selecting the Certificates node:

[Screenshot: the Certificates node in the ADFS MMC snap-in]

Once in there you can select the signing certificate:

[Screenshot: the token-signing certificate selected under the Certificates node]

Just export the certificate and upload it to Salesforce.

Custom Error URL

If the login fails for some reason, what URL should it go to?  If you leave it blank, it redirects to a generic Salesforce error page.

SAML User ID Type

This option is asking what information we are giving to Salesforce so it can correlate that information to one of its internal IDs.  Since for this demo I was just using my email address, I will leave it set to “Assertion contains User’s salesforce.com username”.

SAML User ID Location

This option is asking where the above ID is located within the SAML token.  By default it will accept the name identifier, but I don’t really want to pass my email address as a name, so I will select “User ID is in an Attribute element”.

Now I have to specify which claim type contains the email address.  In this case I will go with the default for ADFS, which is http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress.

On to Active Directory Federation Services

We are about half way done.  Now we just need to tell ADFS about Salesforce.  It’s surprisingly simple.

Once you’ve saved the Salesforce settings, you are given a button to download the metadata:

[Screenshot: the button in Salesforce to download the SAML metadata]

Selecting that will let you download an XML document containing metadata about Salesforce as a relying party.

Telling ADFS about a relying party is pretty straightforward, and you can find the detailed steps in a previous post I wrote about halfway through the article.

Once you’ve added the relying party, all you need to do is create a rule that returns the user’s email address as the above claim type:

[Screenshot: the ADFS claim rule that returns the user’s email address]

Everything should be properly configured at this point.  Now we need to test it.

When I first started out with ADFS and SAML early last year, I couldn’t figure out how to get ADFS to post the token to a relying party.  SAML is not a protocol that I’m very familiar with, so I felt kinda dumb when I realized there is an ADFS URL you need to hit.  In this case it’s https://[adfs.fqdn]/adfs/ls/IdpInitiatedSignOn.aspx.

It brings you to a form page to select which application to post a token to:

[Screenshot: the IdP-initiated sign-on page with a drop-down of relying parties]

Select your relying party and then go.

It will POST back to an ADFS endpoint, and then POST the information to the URL within the metadata provided earlier.  Once the POSTing has quieted down, you end up on your Salesforce dashboard:

[Screenshot: the Salesforce dashboard after signing in]

All in all, it took about 10 minutes to get everything working.

The Anatomy of a Security Breach

Without going into too much detail, there is a guy who the security industry collectively hates.  When you hear a statement like that, the happy parts of our brains think this guy must be an underdog.  He must be awesome at what he does, and the big corporations hate him for it.  Or maybe he’s a world-renowned  hacker that nobody can catch.

Suffice to say, neither are the case.

Earlier today it appears that Greg Evans of LIGATT was, for lack of a better word, pwned.  His twitter account was broken into, his email was ransacked, and by the looks of it, his reputation has been ruined.  I think it’s time to look into how and why this happened.

Side Note: I have zero affiliation with whoever broke into his accounts and stole his data.

The Impetus

Before going into the how, it might help to explain why this happened.

[My opinion doesn’t necessarily reflect that of the attackers, nor that of my employer.  This is strictly an interpretation of the messages.]

Good people get hacked.  It’s a fact of life.  Bad people get hacked.  It’s a fact of life. 

Part of this attack left a note explaining why all of this happened, and explaining the contents of the data.  You can find the original post linked from his twitter account.

As it happens, the people who did this see it as retribution for all the things Evans has done in the name of the InfoSec industry.

"Do not meddle in the affairs of hackers, for they are subtle and quick to anger"

The first argument being made is that he tries really hard to be a major representative of the industry.

He's been on TV, he's been on radio, he's trying to draw as much attention to himself as possible.

This I would argue isn’t too much of a problem.  The industry needs spokespeople.  However, it goes on:

This man in no way represents this industry. […] He's gone after people at their home to intimidate them and their family. He's gone after them at their work to discredit them with their employer. And as everyone knows, he recklessly sues anyone who speaks negatively of him on the internet.

Nobody likes it when someone says something mean about you, or when they correct you in public.  However, sometimes it happens.  We are all sometimes wrong.  Evans doesn’t appear to agree with that statement, and will try to sue or slander anyone who disagrees with him. 

Don’t poke the bear.  It pisses the bear off, and gets you attacked.

Especially when you have a secret or two to hide:

Finally, to Gregory D Evans: it is done. All your lies are out in the open. Your investors will know. Your lawyers will know. Your employees will know. Your mother will know. Your lovers will know. Just step away and move on. Stop the stock scams. Stop the lawsuits. Stop the harassment. Stop robbing your employees. Stop embezzling. Stop deceiving every person in your life.

If you were someone who wanted to take justice into your own hands, I guess this is reason enough.

So how did this breach happen?

The Attack

It looks like an inside job:

To the brave soul who helped make this possible: thank you. You took great personal risk to bring this information forward, and none of it would be possible without you. It's unclear how you tolerate his lies day after day, but you've redeemed yourself by supporting this cause.

I can only speculate, but there are two-and-a-half basic ways this could have gone down.

  • Insider has administrator access to email servers and downloads the contents of Evans’ inbox
  • Insider gets Evans’ password and downloads the contents of the inbox
  • Insider gives the attacker administrative access, and the attacker does one of the first two things

Once they had the contents of the email they had to get access to his twitter account.  This leads me to believe that the insider had administrative access, because they could then reset the twitter account’s password and catch the reset email before it got to the inbox.  Seems like the simplest approach.

Either that, or Evans used really weak passwords.

Once the attacker had their package, they just needed to distribute it and the message explaining their actions.  From his twitter account they posted the anonymous message to pastebin.com:

[…]

Enough is enough. He must be stopped by any means necessary. To that end, at the end of this message is a torrent of the inbox of [Evans’ redacted email address]; the only condition of receipt is that you not talk about the spool or this email release on twitter until after you have the full copy and are seeding it. He may be an idiot but his staff watch twitter for any mention of him, and it's imperative that this file be distributed as much as possible before takedown begins.

[…]

The message had a final, succinct message:

Happy Birthday Mr. Evans
[Redacted link to torrent]
archive password will be released shortly

I haven’t downloaded the torrent, so I don’t know what’s in the package.  I suspect the contents will be publicly disclosed shortly on a number of anonymous sites once there are enough seeders.

This could potentially just be a hoax.

The Fallout

Lots of people get hurt when security is breached.  In this case, quite a number of people will have some of their most private information disclosed.

Contained within his inbox is personal information of many, many people. Social security numbers, bank account routing numbers, credit reports, and other reports by private investigators. It was completely impractical to redact all of this information in any effective manner […].

Some people say that justice comes at the price of people’s privacy.  The attackers feel guilty about this:

This release immediately follows with a small regret. Apologies must be given to all the bystanders, innocent or otherwise. […] and for that: sadness. If in your search through this release you find personal information, please contact the person and notify them.

They also don’t have much faith in the likelihood of Evans properly disclosing the breach:

Even when GDE finds out of this breach, it's quite unlikely that he will follow proper breach notification procedures.

Once enough people have downloaded the torrent and started seeding the content, there isn’t any real way to remove the data from public access.  That means every one of those SSNs, bank account numbers, credit reports, and whatever else is in the archive will be publicly available for the foreseeable future.

Conclusion

Breaches occur all the time for reasons of profit.  This particular breach, on the other hand, was done in the name of justice and retribution.  While the motives may be different, the moving pieces work the same way, and there are still three basic parts to a breach: the motive to do it, the attack itself, and the fallout after the attack.

Hopefully everyone learns a little something from this particular breach.

My guess is that Evans will.

PrairieDevCon Identity and Security Presentations on June 13th and 14th

Sometime last week I got confirmation that my sessions were accepted for PrairieDevCon!  The schedule has not yet been announced, but here are the two sessions I will be presenting:

Changing the Identity Game with the Windows Identity Foundation

Identity is a tricky thing to manage. These days every application requires some knowledge of the user, which inevitably requires users to log in and out of the applications to prove they are who they are, as well as requiring the application to keep a record of the accounts. There is a fundamental shift in the way we manage these users and their accounts in a Claims Based world. The Windows Identity Foundation builds on top of a Claims based architecture and helps solve some real world problems. This session will be a discussion on Claims as well as how WIF fits into the mix.
Track: Microsoft, Security
Style: Lecture
Speaker: Steve Syfuhs

Building a Security Token Service In the Time It Takes to Brew a Pot of Coffee

One of the fundamental pieces of a Claims Based Authentication model is the Security Token Service. Using the Windows Identity Foundation it is deceptively simple to build one, so in this session we will.
Track: Microsoft, Security
Style: Lecture
Speaker: Steve Syfuhs

What is PrairieDevCon?

The Prairie Developer Conference is the conference event for software professionals in the Canadian prairies!

Featuring more than 30 presenters, over 60 sessions, and including session styles such as hands-on coding, panel discussions, and lectures, Prairie Developer Conference is an exceptional learning opportunity!
Register for our June 2011 event today!

Okay, how much $$$?

Register early and take advantage of Early Bird pricing!
Get 50% off the post-conference price when you bundle it with your conference registration!

                              Conference    Conference + Post-Conf Workshop Bundle
Until February 28             $299.99       $449.99
Until March 31                $399.99       $549.99
Until April 30                $499.99       $649.99
May and June                  $599.99       $749.99

Post-Conference Workshop only: $299.99

For more information check out the registration section.

Vote for my Mix 2011 Session on Identity!

Mix 2011 has opened voting for public session submissions, and I submitted one!  Here is the abstract:

Identity Bests – Managing User Identity in the new Decade

Presenter: Steve Syfuhs

Identity is a tricky thing to manage. These days every website requires some knowledge of the user, which inevitably requires users to log in to identify themselves. Over the next few years we will start seeing a shift toward a centralized identity model, removing the need to manage users and their credentials for each website. This session will cover the fundamentals of Claims Based Authentication using the Windows Identity Foundation and how you can easily manage user identities across multiple websites as well as across organizational boundaries.

If you think this session should be presented please vote: http://live.visitmix.com/OpenCall/Vote/Session/182.

(Please vote even if you don’t!)

Find my Windows Phone 7

For the last month and a half I’ve been playing around with my new Windows Phone 7.  Needless to say, I really like it.  There are a few things that are still a little rough – side-loading applications is a good example – but overall I’m really impressed with this platform.  It may be version 7 technically, but realistically it’s a v1 product.  I say that in a good way though – Microsoft reinvented the product.

Part of this reinvention is a cloud-oriented platform.  Today’s Dilbert cartoon was a perfect tongue-in-cheek explanation of the evolution of computing, and the mobile market is no exception.  Actually, when you think about it, mobile phones and the cloud go together like peanut butter and chocolate.  If you have to ask, they go together really well.  Also, if you have to ask, are you living under a rock?

This whole cloud/phone comingling is central to the Windows Phone 7, and you can realize the potential immediately.

When you start syncing your phone via the Zune software, you will eventually get to the sync page for the phone.  The first thing I noticed was the link “do more with windows live”.

[Screenshot: the phone sync page in Zune with the “do more with windows live” link]

What does that do?

Well, once you have set up your phone with your Live ID, a new application is added to your Windows Live home.  This app is for all devices, and when you click on the above link in Zune, it will take you to the section for the particular phone you are syncing.

[Screenshot: the Windows Live devices page for the phone]

The first thing that caught my attention was the “Find my Phone” feature.  It brings up a list of actions for when you have lost your phone.

[Screenshot: the Find my Phone list of actions]

Each action is progressively bolder than the previous – and each action is very straightforward.

Map it

If the device is on, use the Location services on the phone to find it and display it on a Bing Map.

Ring it

If you have a basic idea of where the phone is and the phone is on, ringing it will make the phone ring with a distinct tone even if you have it set to silent or vibrate.  Use this wisely.

Lock it

Now it gets a little more complicated.  When you lock the phone you are given an option to provide a message on the lock screen:

[Screenshot: the dialog for setting a message on the lock screen]

If someone comes across your phone, you can set a message telling them what they can do with it.  Word of advice though: if you leave a phone number, don’t leave your mobile number.

Erase it

Finally we have the last option.  The nuclear option if you will.  Once you set the phone to be erased, the next time the phone is turned on and tries to connect to the Live Network, the phone will be wiped and set to factory defaults.

A side effect of wiping your phone is that the next time you set it up and sync with the same Live ID, most settings will remain intact.  You will have to add your email and Facebook accounts, and set all the device settings, but once you sync with Zune, all of your apps will be reinstalled.  Now that is a useful little feature.

Finally

Overall I’m really happy with how the phone turned out.  It’s a strong platform and it’s growing quickly.  The Find my Phone feature is a relatively small thing, but it showcases the potential of a phone/cloud mash up and adds so much value for consumers when they lose their phone.

In a previous post I talked about the security of the Windows Phone 7.  This post was all about how consumers can quickly mitigate any risks from losing their phone.  For more information on using this phone in the enterprise, check out the Windows Phone 7 Guides for IT Professionals.

Security Leaks, Software Updates

[My opinions are probably different than those of my employer.  Have a grain of salt handy, please.]

Sorry, but when a twitter client starts making fun of a politically charged security issue (WikiLeaks), the problem is a bit more trivial than most people are willing to admit.

[Screenshot: the MetroTwit update notes poking fun at WikiLeaks]

At least, from the technology perspective.  When you get right down to it, technology is not what caused the breach of classified diplomatic cables.  People had to leak the stuff.  Period.

That’s what I find so funny about the screenshot.  So many companies are bound to find an in because they can now market their product as a way to deter these types of leaks, and they’ll market it as a full solution meaning it will solve every problem under the sun.

MetroTwit obviously had their tongues planted firmly in cheek when they labelled their update, but they made fun of what so many companies will do in the future: label their product/feature/service as a security solution.

There is no practical solution to security when people are involved.  Just lots of little fixes and maybe a little bit of planning. 

Actually, a lot more planning might solve a number of those little fixes.

Preventing Frame Exploits in a Passive Claims Model

At a presentation a few weeks ago someone asked me about capturing session details during authentication at an STS by way of frames and JavaScript.  To paraphrase the question: “What prevents a malicious developer from sticking an RP within an iframe, causing a redirect to an STS, getting some user to log in, and then capturing the details through JavaScript from the parent page?”  There are a couple of ways this problem can be solved.  It’s a defense-in-depth problem where on their own, each piece won’t close every attack vector, but when used together you end up with a pretty solid solution.

  • First, a lot of newer browsers will actually prevent cross-frame JavaScript calls when SSL is involved.  Depending on the browser, the JavaScript will throw the equivalent of an Access Denied exception.  This isn’t the case for all browser versions though; older browsers may not do this.
  • Second, some browsers will not allow you to host an SSL page in a frame if the parent page is not using SSL.  The easy fix for the malicious developer is to simply use SSL for the parent site, but that could be problematic as the CAs theoretically verify the sites requesting certificates.
  • Third, you could write some JavaScript for the STS to bust out of the frame.  It would look something like this:

// If this page isn't the top-most window, it has been framed.
if (top != self)
{
    try
    {
        // Bust out of the frame by replacing the parent window with this page.
        top.location.replace(self.location.href);
    }
    catch (e)
    {
        // Some browsers deny cross-frame access to top.location; there isn't
        // much more the script can do at that point.
    }
}

The problem with this is that it wouldn’t work if the browser has JavaScript disabled.

  • Fourth, there is a new HTTP header that Microsoft introduced in IE 8 that tells the browser to simply stop processing the request if the requested page is hosted in a frame.  Safari and Chrome support it natively, and Firefox supports it with the NoScript add-on.  The header is called X-Frame-Options and it can have two values: “DENY”, which prevents all framing, and “SAMEORIGIN”, which allows the page to be rendered in a frame only if the parent page comes from the same site, e.g. the parent is a page on somesite.com and the framed page is also on somesite.com.

There are a couple of ways to add this header to your page.  First you can add it via ASP.NET:

Context.Response.AddHeader("x-frame-options", "DENY");
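
If you would rather not remember to do that on every page, you could also add the header to every response in code.  A minimal sketch, assuming a Global.asax is an option for your application (this is just the standard ASP.NET pattern, nothing specific to WIF):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Stamp every response so no page in the application can be framed.
        HttpContext.Current.Response.AddHeader("x-frame-options", "DENY");
    }
}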

Or you could add it to all pages via IIS.  To do this open the IIS Manager and select the site in question.  Then select the Feature “HTTP Response Headers”:

[Screenshot: the HTTP Response Headers feature in IIS Manager]

Select Add… and then set the name to x-frame-options and the value to DENY:

[Screenshot: the Add Custom HTTP Response Header dialog with x-frame-options set to DENY]

By keeping these options in mind you can do a lot to prevent exploits that use frames.

The Basics of Building a Security Token Service

Last week at TechDays in Toronto I ran into a fellow I worked with while I was at Woodbine.  He works with a consulting firm Woodbine uses, and he caught my session on Windows Identity Foundation.  His thoughts were (essentially—paraphrased) that the principle of Claims Authentication was sound and a good idea, however implementing it requires a major investment.  Yes.  Absolutely.  You will essentially be adding a new tier to the application.  Hmm.  I’m not sure if I can get away with that analogy.  It will certainly feel like you are adding a new tier anyway.

What strikes me as the main investment is the Security Token Service.  When you break it down, there are a lot of moving parts in an STS.  In a previous post I asked what it would take to create something similar to ADFS 2.  I said it would be fairly straightforward, and broke down the parts as well as what would be required of them.  I listed:

  • Token Services
  • A Windows Authentication end-point
  • An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
  • An application management tool (MMC snap-in and PowerShell cmdlets)
  • Proxy Services (Allows requests to pass NAT’ed zones)

These aren’t all that hard to develop.  With the exception of the proxy services and token service itself, there’s a good chance we have created something similar to each one if user authentication is part of an application.  We have the authentication endpoint: a login form to do SQL Authentication, or the Windows Authentication Provider for ASP.NET.  We have the attribute store and something like a claims mapper: Active Directory, SQL databases, etc.  We even have an application management tool: anything you used to manage users in the first place.  This certainly doesn’t get us all the way there, but they are good starting points.
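
To make the attribute-store-property-to-claim mapper a little more concrete, here is a rough sketch of the idea: look the user up in Active Directory and turn a couple of LDAP properties into claims.  The LDAP path and property names are placeholders, and this is only an illustration of the concept, not how ADFS implements it:

using System.Collections.Generic;
using System.DirectoryServices;
using Microsoft.IdentityModel.Claims;

public static class LdapClaimMapper
{
    public static IEnumerable<Claim> MapClaims(string samAccountName)
    {
        // Placeholder LDAP path; point this at your own directory.
        using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
        using (var searcher = new DirectorySearcher(root))
        {
            searcher.Filter = "(sAMAccountName=" + samAccountName + ")";
            searcher.PropertiesToLoad.Add("displayName");
            searcher.PropertiesToLoad.Add("mail");

            SearchResult result = searcher.FindOne();
            if (result == null)
            {
                yield break;
            }

            // Each LDAP property maps to whichever claim type we choose.
            yield return new Claim(ClaimTypes.Name,
                (string)result.Properties["displayName"][0]);
            yield return new Claim(ClaimTypes.Email,
                (string)result.Properties["mail"][0]);
        }
    }
}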

Going back to my first point, the STS is probably the biggest investment.  However, it’s kind of trivial to create an STS using WIF.  I say that with a big warning though: an STS is a security system.  Securing such a system is NOT trivial.  Writing your own STS probably isn’t the best way to approach this.  You would probably be better off to use an STS like ADFS.  With that being said it’s good to know what goes into building an STS, and if you really do have the proper resources to develop one, as well as do proper security testing (you probably wouldn’t be reading this article on how to do it in that case…), go for it.

For the sake of simplicity I’ll be going through the Fabrikam Shipping demo code since they did a great job of creating a simple STS.  The fun bits are in the Fabrikam.IPSts project under the Identity folder.  The files we want to look at are CustomSecurityTokenService.cs, CustomSecurityTokenServiceConfiguration.cs, and the default.aspx code file.  I’m not sure I like the term “configuration”, as the way this is built strikes me as factory-ish.

[Screenshot: the files of interest in the Fabrikam.IPSts project]

The process is pretty simple.  A request is made to default.aspx, which passes the request to FederatedPassiveSecurityTokenServiceOperations.ProcessRequest() along with a newly instantiated CustomSecurityTokenService object created by calling CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService().
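
In code, the default.aspx code-behind boils down to something like this.  I’m paraphrasing the sample from memory, so treat it as a sketch rather than a verbatim copy:

using System;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Web;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_PreRender(object sender, EventArgs e)
    {
        // Hand the incoming request, the current principal, and a freshly
        // created STS instance over to WIF and let it build the response.
        FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
            Request,
            User as IClaimsPrincipal,
            CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService(),
            Response);
    }
}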

The configuration class contains configuration data for the STS (hence the name), like the signing certificate, but it also creates an instance of the STS using that configuration.  The code for it is simple:

namespace Microsoft.Samples.DPE.Fabrikam.IPSts
{
    using Microsoft.IdentityModel.Configuration;
    using Microsoft.IdentityModel.SecurityTokenService;

    internal class CustomSecurityTokenServiceConfiguration
        : SecurityTokenServiceConfiguration
    {
        private static CustomSecurityTokenServiceConfiguration current;

        private CustomSecurityTokenServiceConfiguration()
        {
            this.SecurityTokenService = typeof(CustomSecurityTokenService);
            this.SigningCredentials =
                new X509SigningCredentials(this.ServiceCertificate);
            this.TokenIssuerName = "https://ipsts.fabrikam.com/";
        }

        public static CustomSecurityTokenServiceConfiguration Current
        {
            get
            {
                if (current == null)
                {
                    current = new CustomSecurityTokenServiceConfiguration();
                }

                return current;
            }
        }
    }
}

It has a base type of SecurityTokenServiceConfiguration and all it does is set the custom type for the new STS, the certificate used for signing, and the issuer name.  It then lets the base class handle the rest.  Then there is the STS itself.  It’s dead simple.  The custom class has a base type of SecurityTokenService and overrides a couple methods.  The important method it overrides is GetOutputClaimsIdentity():

protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    Claim name = inputIdentity.Claims.Single(claim =>
        claim.ClaimType == ClaimTypes.Name);
    Claim email = new Claim(ClaimTypes.Email,
        Membership.Provider.GetUser(name.Value, false).Email);
    string[] roles = Roles.Provider.GetRolesForUser(name.Value);

    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);
    issuedIdentity.Claims.Add(email);

    foreach (var role in roles)
    {
        var roleClaim = new Claim(ClaimTypes.Role, role);
        issuedIdentity.Claims.Add(roleClaim);
    }

    return issuedIdentity;
}

It gets the authenticated user, grabs the email address from the membership provider and the roles from the roles provider, generates a bunch of claims, and then returns the identity.  Pretty simple.
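
The other method the STS has to override is GetScope(), which decides which relying party the token is being issued for, which certificate signs it, and which certificate encrypts it.  A rough sketch of the idea follows; the GetEncryptionCertificateFor() helper is made up for illustration, and a real STS would also validate the relying party against a whitelist here:

protected override Scope GetScope(
    IClaimsPrincipal principal, RequestSecurityToken request)
{
    // Issue the token for the RP that asked for it, signed with the
    // certificate from the configuration class above.
    var scope = new Scope(
        request.AppliesTo.Uri.OriginalString,
        SecurityTokenServiceConfiguration.SigningCredentials);

    // Encrypt the token so only this RP can read it.  The helper below is
    // hypothetical; resolve the RP's certificate however you see fit.
    scope.EncryptingCredentials = new X509EncryptingCredentials(
        GetEncryptionCertificateFor(request.AppliesTo.Uri));

    // Send the token back to the address the RP asked for, or fall back to
    // the AppliesTo address.
    scope.ReplyToAddress = string.IsNullOrEmpty(request.ReplyTo)
        ? scope.AppliesToAddress
        : request.ReplyTo;

    return scope;
}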

At this point you’ve just moved the authentication and Roles stuff away from the application.  Nothing has really changed data-wise.  If you only cared about roles, name, and email you are done.  If you needed something more you could easily add in the logic to grab the values you needed. 

By no means is this production ready, but it is a good basis for how the STS creates claims.

Kerberos: Very Claims-y

I’ve always found Kerberos to be an interesting protocol.  It works by way of a trusted third party which issues secured tickets based on an authentication or previous session.  These tickets are used as proof of identity by asserting that the subject is who they claim to be.  Claims authentication works on a similar principle, except instead of a ticket you have a token.  There are some major differences in implementation, but the theory is the same.  One of the reasons I find it interesting is that Kerberos was originally developed in 1983, and the underlying protocol, the Needham-Schroeder protocol, was originally published in 1978.

There have been major updates over the years, as well as a change to fix a man-in-the-middle attack in the Needham-Schroeder protocol in 1995, but the theory is still sound.  Kerberos is the main protocol used in Windows networks to authenticate against Active Directory.

The reason I bring it up is because of a comment I made in a previous post.  I made an assertion that we don’t necessarily abstract out the identity portion of our applications and services. 

Well, it occurred to me that up until a certain period of time, we did.  In many environments there was only one trusted authority for identity.  Whether it was at a school, in a business, or within the government, there was no concept of federation.  The walls we created were for a very good reason.  The applications and websites we created were siloed and the information didn’t need to be shared.  As such, we created our own identity stores in databases and LDAP directories.

This wasn’t necessarily a problem, because we built these applications on top of a foundation that wasn’t designed for identity.  The internet was, for all intents and purposes, designed for anonymity.  But here is where that foundation became tricky: it boomed.

People wanted to share information between websites and applications, but the data couldn’t be correlated back to the user across applications.  We are starting to catch up, but it’s a slow process.

So here is the question: we started with a relatively abstract process of authentication by way of the Kerberos third party, and then moved to siloed identity data.  Why did we lose the abstraction?  Or more precisely, during this boom, why did we allow our applications to lose this abstraction?

Food for thought on this early Monday.

What makes Claims Based Authentication Secure?

Update: I should have mentioned this when I first posted, but some of these thoughts are the result of me reading Programming Windows Identity Foundation.  While I hope I haven’t copied the ideas outright, I believe the interpretation is unique-ish.

One of the main reasons we as developers shy away from new technologies is because we are afraid of them.  As we learned in elementary school, the reason we are afraid usually boils down to not having enough information about the topic.  I’ve found this especially true with anything security related.  So, let’s think about something for a minute.

I’m not entirely sure how valid a measure this is, but I like to think that as developers we gauge our understanding of something by how much we abstract away the problems it creates.  Now let me ask you this question:

How much of an abstraction layer do we create for identity?

Arguably very little because in most cases we half-ass it.

I say this knowing full well I’m extremely guilty of it.  Sure, I’d create a User class and populate it with application-specific data, but to populate the object I would call Active Directory or SQL directly.  That created a tightly coupled dependency between the application and the user store.  That works perfectly up until you need to migrate those users in a SQL database to Active Directory.  Oops.

So why do we do this?

My reason for doing this is pretty simple.  I didn’t know any better.  The reason I didn’t know better was also pretty simple: of the available options for abstracting away identity, I didn’t understand how the technology worked or, more likely, I didn’t trust it.  Claims based authentication is a perfect example of this.  I thought to myself when I first came across it: “are you nuts?  You want me to hand over authentication to someone else and then I have to trust them that what they give me is valid?  I don’t think so.”

Well, yes actually.

Authentication, identification, and authorization are simply processes in the grand scheme of an application lifecycle.  They are privileged, but that just means we need to be careful about it.  Fear, as it turns out, is the number one reason why we don’t abstract this part out.*

With that, I thought it would be a perfect opportunity to take a look at a few of the reasons why Claims based authentication is reasonably secure.  I would also like to take this time to compare some of these reasons to why our current methods of user authentication are usually done wrong.

Source

First and foremost we trust the source.  Obviously a bank isn’t going to accept a handwritten piece of paper with my name on it as proof that I am me.  It stands to reason that you aren’t going to accept an identity from some random 3rd party provider for important proof of identity.

Encryption + SSL

The connection between the RP and the STS is over SSL, so no man-in-the-middle attacks.  Then you encrypt the token: much like the SSL connection, the STS encrypts the payload with the RP’s public key, which only the RP can decrypt with its private key.  Even if you don’t use SSL, anyone eavesdropping on the connection still can’t read the payload.  The STS usually keeps a local copy of the RP’s certificate for token encryption.
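
To put that public/private key asymmetry into code terms, here is a toy sketch.  It has nothing to do with WIF’s actual token encryption, which wraps a symmetric key rather than RSA-encrypting the whole payload, but it shows the property the model relies on: anyone can encrypt for the RP, only the RP can decrypt:

using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Text;

public static class EncryptionConcept
{
    // Anyone holding the RP's public certificate can produce this.
    public static byte[] EncryptForRelyingParty(string payload, X509Certificate2 rpPublicCert)
    {
        var rsa = (RSACryptoServiceProvider)rpPublicCert.PublicKey.Key;
        return rsa.Encrypt(Encoding.UTF8.GetBytes(payload), true);
    }

    // Only the RP, which holds the private key, can reverse it.
    public static string DecryptAtRelyingParty(byte[] cipher, X509Certificate2 rpCertWithPrivateKey)
    {
        var rsa = (RSACryptoServiceProvider)rpCertWithPrivateKey.PrivateKey;
        return Encoding.UTF8.GetString(rsa.Decrypt(cipher, true));
    }
}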

How many of us encrypt our SQL connections when verifying  the user’s password?  How many of us use secured LDAP queries to Active Directory?  How many of us encrypt our web services?  I usually forget to.

Audience whitelist

Most commercial STS applications require that each request come from an approved Relying Party.  Moreover, most of those applications require that the endpoint that it responds to also be on an approved list.  You could probably fake it through DNS poisoning, but the certificates used for encryption and SSL would prevent you from doing anything meaningful since you couldn’t decrypt the token.

Do we verify the identity of the application requesting information from the SQL database?  Not usually the application.  However, we could do it via Kerberos impersonation.  E.g. lock down the specific data to the currently logged in/impersonated user.

Expiration and Duplication Prevention

All tokens have authentication timestamps.  They also normally have expiration timestamps.  Therefore they have a window of time that defines how long they are valid.  It is up to the application accepting the token to make sure the window is still acceptable, but it is still an opportunity for verification.  This also gives us the opportunity to prevent replay attacks.  All we have to do is keep track of all incoming tokens within the valid time window and see if the tokens repeat.  If so, we reject them.
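
As a minimal sketch of what that bookkeeping might look like (this is not WIF’s built-in replay detection, and it assumes you can pull a unique ID and an expiry time off each incoming token):

using System;
using System.Collections.Concurrent;

public class TokenReplayCache
{
    private readonly ConcurrentDictionary<string, DateTime> seen =
        new ConcurrentDictionary<string, DateTime>();

    // Returns false if the token ID has already been seen inside its
    // validity window, i.e. the token is a replay and should be rejected.
    public bool TryAdd(string tokenId, DateTime expiresUtc)
    {
        // Purge entries that have passed their window so the cache
        // doesn't grow forever.
        foreach (var entry in seen)
        {
            if (entry.Value < DateTime.UtcNow)
            {
                DateTime ignored;
                seen.TryRemove(entry.Key, out ignored);
            }
        }

        return seen.TryAdd(tokenId, expiresUtc);
    }
}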

There isn’t much we can do in a traditional setting to prevent this from happening.  If someone eavesdrops on the connection and grabs the username/password between the browser and your application, game over.  They don’t need to spoof anything.  They have the credentials.  SSL can fix this problem pretty easily though.

Integrity

Once the token has been created by the STS, it will be signed by the STS’s private key.  If the token is modified in any way the signature won’t match.  Since it is signed by the private key of the STS, only the STS can re-sign it; however, anyone can verify the signature through the STS’s public key.  And since it’s a certificate for the STS, we can use it as strong proof that the STS is who they say they are.  For a good primer on public key/private key stuff check out Wikipedia.
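
As a rough illustration of that verification step (not WIF’s actual signature validation pipeline, just the raw primitive underneath it):

using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

public static class SignatureCheck
{
    // Verify that a payload was signed by the STS, using nothing but the
    // STS's public certificate.  Only the holder of the matching private
    // key could have produced a signature that passes this check.
    public static bool IsValid(byte[] payload, byte[] signature, X509Certificate2 stsCertificate)
    {
        var rsa = (RSACryptoServiceProvider)stsCertificate.PublicKey.Key;
        return rsa.VerifyData(payload, new SHA1CryptoServiceProvider(), signature);
    }
}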

It’s pretty tricky to modify payloads between SQL and an application, but it is certainly possible.  Since we don’t usually encrypt the connections (I am guilty of this daily – it’s something I need to work on), intercepting packets and modifying them on the fly is possible.  There isn’t really a way to verify whether the payload has been tampered with.

Sure, there is a level of trust between the data source and the application if they are both within the same datacenter, but what if it’s being hosted offsite by a 3rd party?  There is always going to be a situation where integrity can become an issue.  The question at that point then is: how much do you trust the source, as well as the connection to the source?

Authentication Level

Finally, if we are willing to accept that each item above increases the security and validity of the identity, there is really only one thing left to make sure is acceptable.  How was the user authenticated?  Username/password, Kerberos, smart card/certificates, etc.  If we aren’t happy with how they were authenticated, we don’t accept the token.
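
A quick sketch of what that gate might look like in code, checking the authentication method claim on the incoming identity (the claim type URI is the standard one WIF uses, but the exact values you accept will depend on your STS and your requirements):

using System.Linq;
using Microsoft.IdentityModel.Claims;

public static class AuthenticationLevelCheck
{
    private const string AuthenticationMethodClaimType =
        "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod";

    // Returns true only if the user authenticated with a method we trust,
    // e.g. a client certificate rather than a plain username/password.
    public static bool IsAcceptable(IClaimsIdentity identity)
    {
        string method = identity.Claims
            .Where(c => c.ClaimType == AuthenticationMethodClaimType)
            .Select(c => c.Value)
            .FirstOrDefault();

        // "TLSClient" is the SAML authentication context class for
        // certificate-based logins; adjust to whatever you consider strong.
        return method != null && method.EndsWith("TLSClient");
    }
}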

So now that we have a pretty strong basis for what makes the tokens containing claims, as well as the relationships between RPs and STSs, secure, we don’t really need to fear the Claims model.

Now we just need to figure out how to replace our old code with the identity abstraction.

* Strictly anecdotal evidence, mind you.