So. I guess I wasn't the only one with this idea: http://blogs.objectsharp.com/cs/blogs/steve/archive/2011/02/28/making-the-internet-single-sign-on-capable.aspx
Announced earlier today at the Build conference, Microsoft is creating a tighter integration between Windows 8 and Windows Live. More details to come when I download the bits later tonight.
The biggest drawback of Single Sign On is the same thing that makes it so appealing – you only need to prove your identity once. This scares the hell out of some people, because if you can compromise a user's session in one application it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.
- You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
- You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
- At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
- So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.
There are certainly ways to stop this, as parts of the attack are a bit trivial to defend against. For instance you could pop up an OK/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think of this at a high level.
The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.
This problem is similar to the one many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems, because any application running could do whatever it pleased on the system. Malware was rampant, and worse, users were doing all-around stupid things because they didn't know what they were doing, yet they had the permissions necessary to do it.
The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges, to make them re-authenticate and temporarily run with those higher privileges. The key here is that you run with higher privileges only temporarily. However, security lost the argument and Microsoft caved while developing Windows Vista, creating User Account Control (UAC). By default a user is an administrator, but their user token is a stripped-down administrator token, so they effectively have only non-administrative privileges. In order to take full advantage of the administrator token, the user has to elevate and request the full token temporarily. This is a stop-gap solution though, because it's theoretically possible to circumvent UAC since the administrative token still exists. It also doesn't require you to re-authenticate – you just have to approve the elevation.
As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.
Some web applications are requiring elevation. For instance consider online banking sites. When I sign in I have a default set of privileges. I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate myself by entering a private pin. So for instance I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.
There are a couple of ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims Based Authentication and WIF.
First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter called wauth, which defines how authentication should happen. Its values are mostly specific to each STS; however, there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are some of the most often used:
- urn:oasis:names:tc:SAML:1.0:am:password – username/password authentication
- urn:ietf:rfc:2246 – SSL/TLS client certificate authentication
- urn:oasis:names:tc:SAML:1.0:am:X509-PKI – X.509 public key authentication
- urn:ietf:rfc:1510 – Kerberos
When you pass one of these values to the STS during the sign-in request, the STS should then request that particular type of credential. The wauth parameter supports arbitrary values, though, so you can use whatever you like. We can therefore create a value that tells the STS we want to re-authenticate because of an elevation request.
All you have to do is redirect to the STS with the wauth parameter:
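A sketch of that redirect, using WIF's SignInRequestMessage helper (the STS and realm URLs here are placeholders, and the wauth value is our made-up elevation method):

```csharp
// Build a WS-Federation sign-in request that carries our custom wauth value.
// SignInRequestMessage lives in Microsoft.IdentityModel.Protocols.WSFederation;
// the URLs below are placeholders for your STS and relying party.
var signIn = new SignInRequestMessage(
    new Uri("https://sts.example.com/"),   // STS endpoint (placeholder)
    "https://rp.example.com/");            // realm (placeholder)

signIn.AuthenticationType = "urn:super:secure:elevation:method"; // becomes wauth

Response.Redirect(signIn.WriteQueryString());
```

WriteQueryString() produces the full sign-in URL, including wa, wtrealm, and our wauth parameter.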
Once the user has re-authenticated, you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:
Just add the claim to the output identity:
protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;
    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));

    // finish filling claims...

    return ident;
}
At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:
public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}
And then have a bit of code to check:
var p = Thread.CurrentPrincipal as IClaimsPrincipal;

if (p != null && p.IsElevated())
{
    // perform the privileged operation
}
This satisfies half the requirements for elevating privilege. We need to make it so the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP. In Global.asax we could do something like:
void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived
        += new EventHandler<SessionSecurityTokenReceivedEventArgs>(SessionAuthenticationModule_SessionSecurityTokenReceived);
}

void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (!e.SessionToken.ClaimsPrincipal.IsElevated())
        return;

    var token = new SessionSecurityToken(e.SessionToken.ClaimsPrincipal, e.SessionToken.Context,
        DateTime.UtcNow, DateTime.UtcNow.AddMinutes(15));
    e.SessionToken = token;
}
This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.
There are other places where this could occur, like within the STS itself; however, this value may need to be independent of the STS.
As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.
Every couple of weeks I start up Autoruns to see what new stuff has added itself to Windows startup and whatnot (screw you, Adobe – you as a software company make me want to swear endlessly). Anyway, a few months ago, around the time the latest version of Windows Live Messenger and its suite RTM'd, I poked around to see if anything new was added. Turns out there was:
A new credential provider was added!
Not only that, it turns out a couple Winsock providers were added too:
I started poking around the DLLs and noticed that they don't do much. Apparently you can use smart cards for WLID authentication. I suspect that's what the credential provider and associated Winsock provider are for, as well as part of WLID's sign-on helper so credentials can be managed via the Credential Manager:
Ah well, nothing too exciting here.
Skip a few months and something occurred to me. Microsoft was able to solve part of the Claims puzzle. How do you bridge the gap between desktop application identities and web application identities? They did part of what CardSpace was unable to do because CardSpace as a whole didn’t really solve a problem people were facing. The problem Windows Live ran into was how do you share credentials between desktop and web applications without constantly asking for the credentials? I.e. how do you do Single Sign On…
This got me thinking.
What if I stepped this up a smidge? Instead of logging into Windows Live Messenger with my credentials, why not log into Windows itself with my Windows Live credentials?
Yes, Windows. I want to change this:
Question: What would this solve?
Answer: At present, nothing ground-breakingly new. For the sake of argument, let's look at how this would be done, and I'll (hopefully) get to my point.
First off, we need to know how to modify the Windows logon screen. In versions of Windows before Vista you had to do a lot of heavy lifting to make any changes to the screen. You had to write your own GINA, which involved essentially creating your own UI. Talk about painful.
With the introduction of Vista, Microsoft changed the game when it came to custom credentials. Their reasoning was simple: they didn’t want you to muck up the basic look and feel. You had to follow their guidelines.
As a result we are left with something along the lines of these controls to play with:
The logon screen is now controlled by Credential Providers instead of the GINA. There are two providers built into Windows by default, one for Kerberos or NTLM authentication, and one for Smart Card authentication.
The architecture looks like:
When the Secure Attention Sequence (CTRL + ALT + DEL / SAS) is invoked, Winlogon switches to a different desktop and instantiates a new instance of LogonUI.exe. LogonUI enumerates all the credential provider DLLs from the registry and displays their controls on the desktop.
When I enter in my credentials they are serialized and supposed to be passed to the LSA.
Once the LSA has these credentials it can then do the authentication.
I say "supposed" to be passed to the LSA because there are two schools of thought here. The first is to handle authentication within the Credential Provider itself. This can cause problems later on down the road; I'll explain why in the second.
The second school of thought applies when you need to use custom credentials, do some funky authentication, and then save the associated identity token somewhere. This becomes important when other applications need your identity.
You can accomplish this via what’s called an Authentication Package.
When a custom authentication package is created, it has to be designed in such a way that applications cannot access stored credentials directly. The applications must go through the pre-canned MSV1_0 package to receive a token.
Going back to my earlier question about using Windows Live for authentication: we would need to develop two things, a Credential Provider and a custom Authentication Package.
The logon process would work something like this:
- Select Live ID Credential Provider
- Type in Live ID and Password and submit
- Credential Provider passes serialized credential structure to Winlogon
- Winlogon passes credentials to LSA
- LSA passes credential to Custom Authentication Package
- Package connects to Live ID STS and requests a token with given credentials
- Token is returned
- Authentication Package validates the token and saves it to a local cache
- Package returns authentication result back up call stack to Winlogon
- Winlogon initializes user’s profile and desktop
I asked before: What would this solve?
This isn’t really a ground-breaking idea. I’ve just described a domain environment similar to what half a million companies have already done with Active Directory, except the credential store is Live ID.
On its own, we've just simplified the authentication process for every home user out there. No more disparate accounts across multiple machines. Passwords are in sync, and identity information is always up to date.
What if Live ID set up a new service that let you create access groups for things like home and friends, and you could create file shares as appropriate? Then you could extend the Windows 7 Homegroup sharing based on those access groups.
Wait, they already have something like that with Skydrive (sans Homegroup stuff anyway).
Maybe they want to use a different token service.
Imagine if the user was able to select the “Federated User” credential provider that would give you a drop down box listing a few Security Token Services. Azure ACS can hook you up.
Imagine if one of these STS’s was something everyone used *cough* Facebook *cough*.
Imagine if that same STS was one that a lot of sites on the internet already use *cough* Facebook *cough*.
Imagine if the associated protocol used by the STS and websites were modified slightly to add a custom set of headers sent to the browser. Maybe it looked like this:
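Something along these lines, perhaps (the header names are entirely hypothetical, and the reply URL is a placeholder):

```
HTTP/1.1 200 OK
Relying-Party-Accepting-Token-Type: urn:oasis:names:tc:SAML:2.0:assertion
Relying-Party-Token-Reply-Url: https://rp.example.com/tokenhandler
```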
Finally, imagine if your browser was smart enough to intercept those headers, look up the user's token, check whether it matched the "Relying-Party-Accepting-Token-Type" header, and then POST the token to the given reply URL.
Hmm. We’ve just made the internet SSO capable.
Now to just move everyone’s cheese to get this done.
Just came across this on Alik Levin's blog. I just started reading it, but so far so good! Here is the abstract:
This paper contains step-by-step instructions for using Windows® Identity Foundation, Windows Azure, and Active Directory Federation Services (AD FS) 2.0 for achieving SSO across web applications that are deployed both on premises and in the cloud. Previous knowledge of these products is not required for completing the proof of concept (POC) configuration. This document is meant to be an introductory document, and it ties together examples from each component into a single, end-to-end example.
You can download it here in either docx or pdf: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=1296e52c-d869-4f73-a112-8a37314a1632
- Second, some browsers will not allow you to host an SSL page in a frame if the parent page is not using SSL. The easy fix for the malicious developer is to simply use SSL for the parent site, but that could be problematic, as the CAs theoretically verify the sites requesting certificates.
- Third, you can add a bit of frame-busting JavaScript to the page itself (a common, if imperfect, trick):

if (top != self)
    top.location = self.location;
- Fourth, there is a new HTTP header that Microsoft introduced with IE 8 that tells the browser that if the requested page is hosted in a frame, it should simply stop processing the request. Safari and Chrome support it natively, and Firefox supports it with the NoScript add-on. The header is called X-Frame-Options and it can have two values: "DENY", which prevents all framing, and "SAMEORIGIN", which allows a page to be rendered only if the parent page comes from the same origin, e.g. the parent is somesite.com/pageA and the framed page is somesite.com/pageB.
There are a couple of ways to add this header to your page. First you can add it via ASP.NET:
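A minimal sketch of the per-page approach (assuming a Web Forms page; you could equally do this for every request in Global.asax):

```csharp
// Tell the browser to refuse to render this page inside a frame.
protected void Page_Load(object sender, EventArgs e)
{
    Response.AddHeader("X-Frame-Options", "DENY");
}
```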
Or you could add it to all pages via IIS. To do this open the IIS Manager and select the site in question. Then select the Feature “HTTP Response Headers”:
Select Add… and then set the name to x-frame-options and the value to DENY:
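The web.config equivalent of that IIS setting looks something like this:

```xml
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="X-Frame-Options" value="DENY" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
```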
By keeping in mind these options you can do a lot to prevent any exploits that use frames.
Last week at TechDays in Toronto I ran into a fellow I worked with while I was at Woodbine. He works with a consulting firm Woodbine uses, and he caught my session on Windows Identity Foundation. His thoughts were (essentially, paraphrased) that the principle of Claims Authentication is sound and a good idea, but implementing it requires a major investment. Yes. Absolutely. You will essentially be adding a new tier to the application. Hmm. I'm not sure if I can get away with that analogy. It will certainly feel like you are adding a new tier anyway.
What strikes me as the main investment is the Security Token Service. When you break it down, there are a lot of moving parts in an STS. In a previous post I asked what it would take to create something similar to ADFS 2. I said it would be fairly straightforward, and broke down the parts as well as what would be required of them. I listed:
- Token Services
- A Windows Authentication end-point
- An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
- An application management tool (MMC snap-in and PowerShell cmdlets)
- Proxy Services (Allows requests to pass NAT’ed zones)
These aren’t all that hard to develop. With the exception of the proxy services and token service itself, there’s a good chance we have created something similar to each one if user authentication is part of an application. We have the authentication endpoint: a login form to do SQL Authentication, or the Windows Authentication Provider for ASP.NET. We have the attribute store and something like a claims mapper: Active Directory, SQL databases, etc. We even have an application management tool: anything you used to manage users in the first place. This certainly doesn’t get us all the way there, but they are good starting points.
Going back to my first point, the STS is probably the biggest investment. However, it’s kind of trivial to create an STS using WIF. I say that with a big warning though: an STS is a security system. Securing such a system is NOT trivial. Writing your own STS probably isn’t the best way to approach this. You would probably be better off to use an STS like ADFS. With that being said it’s good to know what goes into building an STS, and if you really do have the proper resources to develop one, as well as do proper security testing (you probably wouldn’t be reading this article on how to do it in that case…), go for it.
For the sake of simplicity I’ll be going through the Fabrikam Shipping demo code since they did a great job of creating a simple STS. The fun bits are in the Fabrikam.IPSts project under the Identity folder. The files we want to look at are CustomSecurityTokenService.cs, CustomSecurityTokenServiceConfiguration.cs, and the default.aspx code file. I’m not sure I like the term “configuration”, as the way this is built strikes me as factory-ish.
The process is pretty simple. A request is made to default.aspx, which passes the request to FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(), along with a newly instantiated CustomSecurityTokenService object obtained by calling CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService().
The configuration class contains configuration data for the STS (hence the name), like the signing certificate, but it also instantiates an instance of the STS using that configuration. The code for it is simple:
internal class CustomSecurityTokenServiceConfiguration : SecurityTokenServiceConfiguration
{
    private static CustomSecurityTokenServiceConfiguration current;

    private CustomSecurityTokenServiceConfiguration()
    {
        this.SecurityTokenService = typeof(CustomSecurityTokenService);
        this.TokenIssuerName = "https://ipsts.fabrikam.com/";
        // the signing certificate is also set here
    }

    public static CustomSecurityTokenServiceConfiguration Current
    {
        get
        {
            if (current == null)
                current = new CustomSecurityTokenServiceConfiguration();

            return current;
        }
    }
}
It has a base type of SecurityTokenServiceConfiguration, and all it does is set the custom type for the new STS, the certificate used for signing, and the issuer name; the base class handles the rest. Then there is the STS itself. It's dead simple. The custom class has a base type of SecurityTokenService and overrides a couple of methods. The important method it overrides is GetOutputClaimsIdentity():
protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    Claim name = inputIdentity.Claims.Single(claim =>
        claim.ClaimType == ClaimTypes.Name);
    Claim email = new Claim(ClaimTypes.Email,
        Membership.GetUser(name.Value).Email);
    string[] roles = Roles.Provider.GetRolesForUser(name.Value);

    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);
    issuedIdentity.Claims.Add(email);

    foreach (var role in roles)
    {
        var roleClaim = new Claim(ClaimTypes.Role, role);
        issuedIdentity.Claims.Add(roleClaim);
    }

    return issuedIdentity;
}
It gets the authenticated user, grabs all the roles from the Roles provider, generates a bunch of claims, and then returns the identity. Pretty simple.
At this point you’ve just moved the authentication and Roles stuff away from the application. Nothing has really changed data-wise. If you only cared about roles, name, and email you are done. If you needed something more you could easily add in the logic to grab the values you needed.
By no means is this production ready, but it is a good basis for how the STS creates claims.
Update: I should have mentioned this when I first posted, but some of these thoughts are the result of me reading Programming Windows Identity Foundation. While I hope I haven’t copied the ideas outright, I believe the interpretation is unique-ish.
One of the main reasons we as developers shy away from new technologies is that we are afraid of them. As we learned in elementary school, being afraid usually boils down to not having enough information about the topic. I've found this especially true with anything security related. So, let's think about something for a minute.
I'm not entirely sure how valid a method of measurement this is, but I like to think that as developers we measure our understanding of something by how much we abstract away the problems it creates. Now let me ask you this question:
How much of an abstraction layer do we create for identity?
Arguably very little because in most cases we half-ass it.
I say this knowing full well I'm extremely guilty of it. Sure, I'd create a User class and populate it with application-specific data, but to populate the object I would call Active Directory or SQL directly. That created a tightly coupled dependency between the application and the user store. That works perfectly, right up until you need to migrate those users from a SQL database to Active Directory. Oops.
So why do we do this?
My reason for doing this is pretty simple. I didn’t know any better. The reason I didn’t know better was also pretty simple. Of the available options to abstract away the identity I didn’t understand how the technology worked, or more likely, I didn’t trust it. Claims based authentication is a perfect example of this. I thought to myself when I first came across this: “are you nuts? You want me to hand over authentication to someone else and then I have to trust them that what they give me is valid? I don’t think so.”
Well, yes actually.
Authentication, identification, and authorization are simply processes in the grand scheme of an application's lifecycle. They are privileged processes, but that just means we need to be careful about them. Fear, as it turns out, is the number one reason why we don't abstract this part out.*
With that, I thought it would be a perfect opportunity to take a look at a few of the reasons why Claims based authentication is reasonably secure. I would also like to take this time to compare some of these reasons to why our current methods of user authentication are usually done wrong.
First and foremost, we trust the source. Obviously a bank isn't going to accept a handwritten piece of paper with my name on it as proof that I am me. It stands to reason that you aren't going to accept an identity from some random 3rd-party provider as important proof of identity.
Encryption + SSL
The connection between the RP and the STS is over SSL, so no man-in-the-middle attacks there. Then you encrypt the token: much like the SSL connection, the STS encrypts the payload with the RP's public key, which only the RP can decrypt with its private key. Even if you don't use SSL, anyone eavesdropping on the connection still can't read the payload. The STS also usually keeps a local copy of the certificate used for token encryption.
How many of us encrypt our SQL connections when verifying the user’s password? How many of us use secured LDAP queries to Active Directory? How many of us encrypt our web services? I usually forget to.
Most commercial STS applications require that each request come from an approved relying party. Moreover, most of those applications require that the endpoint they respond to also be on an approved list. You could probably fake it through DNS poisoning, but the certificates used for encryption and SSL would prevent you from doing anything meaningful, since you couldn't decrypt the token.
Do we verify the identity of the application requesting information from the SQL database? Not usually. However, we could do it via Kerberos impersonation, e.g. lock down specific data to the currently logged-in/impersonated user.
Expiration and Duplication Prevention
All tokens have authentication timestamps, and they normally have expiration timestamps as well, so each token has a window of time in which it is valid. It is up to the application accepting the token to make sure the window is still acceptable, but it is still an opportunity for verification. This also gives us the opportunity to prevent replay attacks: all we have to do is keep track of the incoming tokens within the valid time window and check whether any token repeats. If one does, we reject it.
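As a naive sketch of that bookkeeping (a hypothetical in-memory helper, not a WIF API; a real implementation would need thread safety and storage shared across servers):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Remembers token IDs until their validity window passes; a repeat
// within the window is treated as a replay and rejected.
public class NaiveReplayCache
{
    private readonly Dictionary<string, DateTime> seen
        = new Dictionary<string, DateTime>();

    public bool IsReplay(string tokenId, DateTime validTo)
    {
        // Drop entries whose window has already passed.
        var expired = seen.Where(kv => kv.Value < DateTime.UtcNow)
                          .Select(kv => kv.Key)
                          .ToList();
        foreach (var key in expired)
            seen.Remove(key);

        if (seen.ContainsKey(tokenId))
            return true; // same token seen within its window: reject it

        seen[tokenId] = validTo;
        return false;
    }
}
```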
There isn’t much we can do in a traditional setting to prevent this from happening. If someone eavesdrops on the connection and grabs the username/password between the browser and your application, game over. They don’t need to spoof anything. They have the credentials. SSL can fix this problem pretty easily though.
Once the token has been created by the STS, it is signed with the STS's private key. If the token is modified in any way the signature won't match. Since it is signed by the private key of the STS, only the STS can re-sign it; however, anyone can verify the signature through the STS's public key. And since it's a certificate for the STS, we can use it as strong proof that the STS is who it says it is. For a good primer on public key/private key stuff, check out Wikipedia.
It's pretty tricky to modify payloads between SQL and an application, but it is certainly possible. Since we don't usually encrypt the connections (I am guilty of this daily – it's something I need to work on), intercepting packets and modifying them on the fly is possible. There isn't really a way to verify whether the payload has been tampered with.
Sure, there is a level of trust between the data source and the application if they are both within the same datacenter, but what if it’s being hosted offsite by a 3rd party? There is always going to be a situation where integrity can become an issue. The question at that point then is: how much do you trust the source, as well as the connection to the source?
Finally, if we are willing to accept that each item above increases the security and validity of the identity, there is really only one thing left to make sure is acceptable. How was the user authenticated? Username/password, Kerberos, smart card/certificates, etc. If we aren’t happy with how they were authenticated, we don’t accept the token.
So now that we have a pretty strong basis for what makes the tokens containing claims, as well as the relationship between RPs and STSs, secure, we don't really need to fear the Claims model.
Now we just need to figure out how to replace our old code with the identity abstraction.
* Strictly anecdotal evidence, mind you.
Yet another presentation on the docket! I submitted an abstract to SharePoint Summit 2011 and they accepted! I will be presenting on SharePoint and how it manages Identity. More specifically, how SharePoint 2010 uses WIF to handle Claims based authentication and Federation.
Here are the details:
Event: SharePoint Summit 2011, January 31st 2011 – February 2nd, 2011
When: 11:30 a.m. - 12:45 p.m. February 1st, 2011
Where: Four Seasons Hotel, Toronto
Abstract: Managing identities within an organization is relatively easy. However, as business changes, we need to be able to adapt quickly. Identity is something that often gets overlooked in adaptation. In this session we will discuss the Windows Identity Foundation and how SharePoint uses it to adapt easily to change.
Similar to the TVBUG presentation, I will be presenting on the Windows Identity Foundation to the Metro Toronto .NET User Group.
Here are the details:
When: November 10th, 2010
Where: KPMG, 333 Bay Street, 10th Floor, Toronto
Abstract: Identity is a tricky thing to manage. These days every application requires some knowledge of the user, which inevitably requires users to log in and out of the applications to prove they are who they are as well as requiring the application to keep record of the accounts. With the Windows Identity Foundation, built on top of a Claims-based architecture, there is a fundamental shift in the way we manage these users and their accounts. In this presentation we will take a look at the why's and dig into the how's of the Windows Identity Foundation by building an Identity aware application from scratch.
Earlier this morning I got an email from John Bristowe congratulating me on being selected to present a session for the local flavours track at TechDays in Toronto! That brings my count up to 2. Needless to say I am REALLY excited.
I was a little disappointed to find out there weren't any sessions on the Windows Identity Foundation, so that just meant I had to submit my own to the local flavours track…and they accepted it! Here are the details:
When: October 27, 3:40 PM to 4:45 PM
Session: Breakout | LFT330: Windows Identity Foundation Simplified: All the Scary Things
Abstract: The Windows Identity Foundation helps simplify user access for developers by externalizing user access from applications via claims and reducing development effort with pre-built security logic and integrated .NET tools. This presentation is an intimate discussion on the basics of the Windows Identity Foundation and its claims model. In this session, you'll learn how to refactor an existing sample set of applications to use WIF, to connect identities to the Cloud, and to remove the burden of managing multiple disparate identity stores.
Where: Metro Toronto Convention Centre - South Building (255 Front Street West, Toronto)