The biggest drawback of Single Sign-On is the same thing that makes it so appealing – you only need to prove your identity once. This scares the hell out of some people because if an attacker can compromise a user's session in one application, it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.
- You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
- You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
- At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
- So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.
There are certainly ways to stop this, as part of the attack is a bit trivial. For instance, you could pop up an OK/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think about this at a high level.
The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.
This problem is similar to the problem many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems because any running application could do whatever it pleased on the system. Malware was rampant, and worse, users were doing all-around stupid things because they didn't know what they were doing but had the permissions necessary to do it.
The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges, make them re-authenticate and temporarily run with those higher privileges. The key here is that you are only running with higher privileges temporarily. However, security lost the argument and Microsoft caved while developing Windows Vista, creating User Account Control (UAC). By default a user is an administrator, but they don't have administrative privileges; their user token is a stripped-down administrator token, so they effectively have only non-administrative privileges. To take full advantage of the administrator token, a user has to elevate and request the full token temporarily. This is a stop-gap solution though, because it's theoretically possible to circumvent UAC since the administrative token exists. It also doesn't require you to re-authenticate – you just have to approve the elevation.
As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.
Some web applications already require elevation. For instance, consider online banking sites. When I sign in I have a default set of privileges: I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate myself by entering a private PIN. So, for instance, I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.
There are a couple ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims Based Authentication and WIF.
First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter called wauth, which defines how authentication should happen. The values for it are mostly specific to each STS; however, there are a few well-defined values that the SAML specification defines. These values are passed to the STS to tell it to authenticate using a particular method. Here are some of the most often used:
Authentication Type/Credential | wauth Value
Password | urn:oasis:names:tc:SAML:1.0:am:password
Kerberos | urn:ietf:rfc:1510
TLS | urn:ietf:rfc:2246
PKI/X509 | urn:oasis:names:tc:SAML:1.0:am:X509-PKI
Default | urn:oasis:names:tc:SAML:1.0:am:unspecified
When you pass one of these values to the STS during the sign-in request, the STS should then request that particular type of credential. The wauth parameter also supports arbitrary values, so you can use whatever you like. Therefore we can create a value that tells the STS we want to re-authenticate because of an elevation request.
All you have to do is redirect to the STS with the wauth parameter:
https://yoursts/authenticate?wa=wsignin1.0&wtrealm=uri:myrp&wauth=urn:super:secure:elevation:method
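If the relying party is built on WIF, you could construct that redirect URL with the SignInRequestMessage class rather than concatenating strings by hand. This is a sketch; the STS address, the realm URI, and the elevation URN are all placeholder values carried over from the example above:

```
// Sketch: build a WS-Federation sign-in URL that requests our custom
// elevation method. The STS address, realm, and URN are placeholders.
using System;
using Microsoft.IdentityModel.Protocols.WSFederation;

public static class ElevationRedirect
{
    public static Uri BuildSignInUrl()
    {
        var message = new SignInRequestMessage(
            new Uri("https://yoursts/authenticate"), // STS endpoint
            "uri:myrp");                             // becomes wtrealm

        // Ask the STS to authenticate using our elevation method (wauth).
        message.AuthenticationType = "urn:super:secure:elevation:method";

        // Produces the full sign-in URL, query string included.
        return new Uri(message.WriteQueryString());
    }
}
```

From there it's just a Response.Redirect to the resulting URL.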
Once the user has re-authenticated, you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:
http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod
Just add the claim to the output identity:
protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;

    // Tell the relying party which authentication method was used.
    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));

    // finish filling claims...

    return ident;
}
At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:
public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}
And then have a bit of code to check:
var p = Thread.CurrentPrincipal as IClaimsPrincipal;

if (p != null && p.IsElevated())
{
    DoSomethingRequiringElevation();
}
This satisfies half the requirements for elevating privilege. We also need to make sure the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP. In Global.asax we could do something like:
void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived
        += new EventHandler<SessionSecurityTokenReceivedEventArgs>(SessionAuthenticationModule_SessionSecurityTokenReceived);
}

void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (e.SessionToken.ClaimsPrincipal.IsElevated())
    {
        // Re-issue the session token with a short 15 minute lifetime.
        SessionSecurityToken token = new SessionSecurityToken(
            e.SessionToken.ClaimsPrincipal, e.SessionToken.Context,
            e.SessionToken.ValidFrom, e.SessionToken.ValidFrom.AddMinutes(15));

        e.SessionToken = token;
    }
}
This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.
There are other places where this could occur, like within the STS itself; however, this value may need to be independent of the STS.
As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.
In a previous post we looked at what it takes to actually write a Security Token Service. If we knew what the STS offered and required already, we could set up a relying party relatively easily with that setup. However, we don’t always know what is going on. That’s the purpose of federation metadata. It gives us a basic breakdown of the STS so we can interact with it.
Now, if we are building a custom STS we don’t have anything that is creating this metadata. We could do it manually by hardcoding stuff in an xml file and then signing it, but that gets ridiculously tedious after you have to make changes for the third or fourth time – which will happen. A lot. The better approach is to generate the metadata automatically. So in this post we will do just that.
The first thing you need to do is create an endpoint. There is a well-known path of /FederationMetadata/2007-06/FederationMetadata.xml that is generally used, so let's use that. There are a lot of options for generating dynamic content, and in Programming Windows Identity Foundation, Vittorio uses a WCF service:
[ServiceContract]
public interface IFederationMetadata
{
    [OperationContract]
    [WebGet(UriTemplate = "2007-06/FederationMetadata.xml")]
    XElement FederationMetadata();
}
It’s a great approach, but for some reason I prefer the way that Dominick Baier creates the endpoint in StarterSTS. He uses an IHttpHandler and a web.config entry to create a handler:
<location path="FederationMetadata/2007-06">
  <system.webServer>
    <handlers>
      <add
        name="MetadataGenerator"
        path="FederationMetadata.xml"
        verb="GET"
        type="Syfuhs.TokenService.WSTrust.FederationMetadataHandler" />
    </handlers>
  </system.webServer>

  <system.web>
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
</location>
As such, I’m going to go that route. Let’s take a look at the implementation for the handler:
using System.Web;

namespace Syfuhs.TokenService.WSTrust
{
    public class FederationMetadataHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ClearHeaders();
            context.Response.Clear();
            context.Response.ContentType = "text/xml";

            MyAwesomeTokenServiceConfiguration.Current.SerializeMetadata(context.Response.OutputStream);
        }

        public bool IsReusable { get { return false; } }
    }
}
All the handler is doing is writing metadata out to a stream, which in this case is the response stream. You can see that it does this through the MyAwesomeTokenServiceConfiguration class, which we created in the previous article. The SerializeMetadata method creates an instance of a MetadataSerializer and writes an entity to the stream:
public void SerializeMetadata(Stream stream)
{
    MetadataSerializer serializer = new MetadataSerializer();
    serializer.WriteMetadata(stream, GenerateEntities());
}
The entities are generated through a collection of tasks:
private EntityDescriptor GenerateEntities()
{
    if (entity != null)
        return entity;

    SecurityTokenServiceDescriptor sts = new SecurityTokenServiceDescriptor();

    FillOfferedClaimTypes(sts.ClaimTypesOffered);
    FillEndpoints(sts);
    FillSupportedProtocols(sts);
    FillSigningKey(sts);

    entity = new EntityDescriptor(new EntityId(string.Format("https://{0}", host)))
    {
        SigningCredentials = this.SigningCredentials
    };

    entity.RoleDescriptors.Add(sts);

    return entity;
}
The entity is generated, and an object is created to describe the STS called a SecurityTokenServiceDescriptor. At this point it’s just a matter of sticking in the data and defining the credentials used to sign the metadata:
private void FillSigningKey(SecurityTokenServiceDescriptor sts)
{
    KeyDescriptor signingKey = new KeyDescriptor(this.SigningCredentials.SigningKeyIdentifier)
    {
        Use = KeyType.Signing
    };

    sts.Keys.Add(signingKey);
}

private void FillSupportedProtocols(SecurityTokenServiceDescriptor sts)
{
    sts.ProtocolsSupported.Add(new System.Uri(WSFederationConstants.Namespace));
}

private void FillEndpoints(SecurityTokenServiceDescriptor sts)
{
    EndpointAddress activeEndpoint = new EndpointAddress(string.Format("https://{0}/TokenService/activeSTS.svc", host));
    sts.SecurityTokenServiceEndpoints.Add(activeEndpoint);
    sts.TargetScopes.Add(activeEndpoint);
}

private void FillOfferedClaimTypes(ICollection<DisplayClaim> claimTypes)
{
    claimTypes.Add(new DisplayClaim(ClaimTypes.Name, "Name", ""));
    claimTypes.Add(new DisplayClaim(ClaimTypes.Email, "Email", ""));
    claimTypes.Add(new DisplayClaim(ClaimTypes.Role, "Role", ""));
}
That in a nutshell is how to create a basic metadata document as well as sign it. There is a lot more information you can put into this, and you can find more things to work with in the Microsoft.IdentityModel.Protocols.WSFederation.Metadata namespace.
Last week at TechDays in Toronto I ran into a fellow I worked with while I was at Woodbine. He works with a consulting firm Woodbine uses, and he caught my session on Windows Identity Foundation. His thoughts were, essentially (paraphrased), that the principle of Claims Authentication was sound and a good idea; however, implementing it requires a major investment. Yes. Absolutely. You will essentially be adding a new tier to the application. Hmm. I'm not sure if I can get away with that analogy. It will certainly feel like you are adding a new tier anyway.
What strikes me as the main investment is the Security Token Service. When you break it down, there are a lot of moving parts in an STS. In a previous post I asked what it would take to create something similar to ADFS 2. I said it would be fairly straightforward, and broke down the parts as well as what would be required of them. I listed:
- Token Services
- A Windows Authentication end-point
- An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
- An application management tool (MMC snap-in and PowerShell cmdlets)
- Proxy Services (Allows requests to pass NAT’ed zones)
These aren’t all that hard to develop. With the exception of the proxy services and token service itself, there’s a good chance we have created something similar to each one if user authentication is part of an application. We have the authentication endpoint: a login form to do SQL Authentication, or the Windows Authentication Provider for ASP.NET. We have the attribute store and something like a claims mapper: Active Directory, SQL databases, etc. We even have an application management tool: anything you used to manage users in the first place. This certainly doesn’t get us all the way there, but they are good starting points.
Going back to my first point, the STS is probably the biggest investment. However, it's kind of trivial to create an STS using WIF. I say that with a big warning though: an STS is a security system, and securing such a system is NOT trivial. Writing your own STS probably isn't the best way to approach this; you would probably be better off using an STS like ADFS. With that being said, it's good to know what goes into building an STS, and if you really do have the proper resources to develop one, as well as do proper security testing (you probably wouldn't be reading this article on how to do it in that case…), go for it.
For the sake of simplicity I’ll be going through the Fabrikam Shipping demo code since they did a great job of creating a simple STS. The fun bits are in the Fabrikam.IPSts project under the Identity folder. The files we want to look at are CustomSecurityTokenService.cs, CustomSecurityTokenServiceConfiguration.cs, and the default.aspx code file. I’m not sure I like the term “configuration”, as the way this is built strikes me as factory-ish.

The process is pretty simple. A request is made to default.aspx which passes the request to FederatedPassiveSecurityTokenServiceOperations.ProcessRequest() as well as a newly instantiated CustomSecurityTokenService object by calling CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService().
The configuration class contains configuration data for the STS (hence the name), like the signing certificate, but it also instantiates an instance of the STS using the configuration. The code for it is simple:
namespace Microsoft.Samples.DPE.Fabrikam.IPSts
{
    using Microsoft.IdentityModel.Configuration;
    using Microsoft.IdentityModel.SecurityTokenService;

    internal class CustomSecurityTokenServiceConfiguration
        : SecurityTokenServiceConfiguration
    {
        private static CustomSecurityTokenServiceConfiguration current;

        private CustomSecurityTokenServiceConfiguration()
        {
            this.SecurityTokenService = typeof(CustomSecurityTokenService);
            this.SigningCredentials =
                new X509SigningCredentials(this.ServiceCertificate);
            this.TokenIssuerName = "https://ipsts.fabrikam.com/";
        }

        public static CustomSecurityTokenServiceConfiguration Current
        {
            get
            {
                if (current == null)
                {
                    current = new CustomSecurityTokenServiceConfiguration();
                }

                return current;
            }
        }
    }
}
It has a base type of SecurityTokenServiceConfiguration and all it does is set the custom type for the new STS, the certificate used for signing, and the issuer name. It then lets the base class handle the rest. Then there is the STS itself. It’s dead simple. The custom class has a base type of SecurityTokenService and overrides a couple methods. The important method it overrides is GetOutputClaimsIdentity():
protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    Claim name = inputIdentity.Claims.Single(claim =>
        claim.ClaimType == ClaimTypes.Name);
    Claim email = new Claim(ClaimTypes.Email,
        Membership.Provider.GetUser(name.Value, false).Email);
    string[] roles = Roles.Provider.GetRolesForUser(name.Value);

    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);
    issuedIdentity.Claims.Add(email);

    foreach (var role in roles)
    {
        var roleClaim = new Claim(ClaimTypes.Role, role);
        issuedIdentity.Claims.Add(roleClaim);
    }

    return issuedIdentity;
}
It gets the authenticated user, grabs all the roles from the RolesProvider, and generates a bunch of claims then returns the identity. Pretty simple.
At this point you’ve just moved the authentication and Roles stuff away from the application. Nothing has really changed data-wise. If you only cared about roles, name, and email you are done. If you needed something more you could easily add in the logic to grab the values you needed.
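For example, if you also wanted to issue a claim from your own profile data, the override above extends naturally. This is a sketch; ProfileStore and its GetDepartment method are made-up stand-ins for whatever data access layer you already have, and the department claim URI is invented for illustration:

```
// Sketch: issuing an extra claim from a hypothetical profile store.
// ProfileStore.GetDepartment and the claim URI are placeholders.
protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    Claim name = inputIdentity.Claims.Single(claim =>
        claim.ClaimType == ClaimTypes.Name);

    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);

    // Pull a value out of our own store and issue it as a custom claim.
    string department = ProfileStore.GetDepartment(name.Value);
    issuedIdentity.Claims.Add(new Claim("http://myclaims/department", department));

    return issuedIdentity;
}
```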
By no means is this production ready, but it is a good basis for how the STS creates claims.
Active Directory Federation Services 2 has an amazing amount of power when it comes to claims transformation. To understand how it works, let's take a look at a set of claims rules and the flow of data from ADFS to the Relying Party:
We can have multiple rules to transform claims, and each one takes precedence via an Order:

In the case above, Transform Rule 2 transformed the claims that Rule 1 requested from the attribute store, which in this case was Active Directory.
This becomes extremely useful because there are times when some of the data you need to pull out of Active Directory isn't in a usable format. There are a couple options to fix this:

- Make the receiving application deal with it
- Modify it before sending it off
- Ignore it
Let's take a look at the second option (imagine an entire blog post on just ignoring it…). ADFS allows us to transform claims before they are sent off in the token by way of the Claims Rule Language. It follows the pattern: "If a set of conditions is true, issue one or more claims." As such, it's a big Boolean system. Syntactically, it's pretty straightforward.
To issue a claim by implicitly passing true:

=> issue(Type = "http://MyAwesomeUri/claims/AwesomeRole", Value = "Awesome Employee");

What that did was ignore the fact that there weren't any conditions, so it will always issue the claim.
To check a condition:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", Value == "SomeRole"]
=> issue(Type = "http://MyAwesomeUri/claims/AwesomeRole", Value = "AwesomeRole");
Breaking down the query: we are checking that a claim created in a previous step has a specific type (in this case, role) and that the claim's value is SomeRole. Based on that, we append a new claim to the outgoing list with a new type and a new value.
That's pretty useful in its own right, but ADFS can actually go even further by allowing you to pull data out of custom data stores by way of Custom Attribute Stores. There are four options to choose from when getting data:

- Active Directory (default)
- LDAP (any directory that you can query via LDAP)
- SQL Server (awesome)
- A custom store built via a custom .NET assembly
Let's get some data from a SQL database. First we need to create the attribute store. Go to Trust Relationships/Attribute Stores in the ADFS MMC Console (or you could also use PowerShell):

Then add an attribute store:

All you need is a connection string to the database in question:

The next step is to create the query to pull data from the database. It's surprisingly straightforward. This is a bit of a contrived example, but let's grab the certificate name and the certificate hash from a database table where the certificate name is equal to the value of the http://MyCertUri/UserCertName claim type:
c:[Type == "http://MyCertUri/UserCertName"]
=> issue(store = "MyAttributeStore",
   types = ("http://test/CertName", "http://test/CertHash"),
   query = "SELECT CertificateName, CertificateHash FROM UserCertificates WHERE CertificateName='{0}'", param = c.Value);
For each column you request in the SQL query, you need a corresponding claim type. Also, unlike most SQL queries, parameters use a format similar to String.Format instead of the @MyVariable syntax.
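For completeness, a table shaped like this would satisfy the query above. This is a made-up sketch; the post doesn't define the actual schema, so the column types are guesses:

```sql
-- Hypothetical schema backing the attribute store query above.
CREATE TABLE UserCertificates
(
    CertificateName nvarchar(256) NOT NULL PRIMARY KEY,
    CertificateHash varbinary(64) NOT NULL
);
```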
In a nutshell, this is how you deal with claims transformation. For a more in-depth article on how to do this, check out TechNet: http://technet.microsoft.com/en-us/library/dd807118(WS.10).aspx.
There comes a point where using an eavesdropping application to catch packets as they fly between Security Token Services and Relying Parties becomes tiresome. For me it came when I decided to give up on creating a man-in-the-middle between SSL sessions between ADFS and applications. Mainly because ADFS doesn't like that. At all.
Needless to say, I wanted to see the tokens. Luckily, Windows Identity Foundation has the solution by way of the bootstrap token. To understand what it is, consider how this whole process works. Once you've authenticated, the STS will POST a chunk of XML (the SAML token) back to the RP. WIF will interpret it as necessary and do its magic, generating a new principal with the payload. However, in some instances you need to keep this token intact. This would be the case if you were creating a web service and needed to forward the token. What WIF does is generate a bootstrap token from the SAML token, in the event you need to forward it off to somewhere.
Before taking a look at it, let's add in some useful using statements:
using System;
using System.IdentityModel.Tokens;
using System.Text;
using System.Threading;
using System.Xml;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Tokens;
using Microsoft.IdentityModel.Tokens.Saml11;
The bootstrap token is attached to the IClaimsPrincipal's identity:
SecurityToken bootstrapToken = ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities[0].BootstrapToken;
However, if you do this out of the box, BootstrapToken will be null. By default, WIF will not save the token; we need to explicitly enable this in the web.config file. Add this line under <microsoft.identityModel><service><securityTokenHandlers>:
<securityTokenHandlerConfiguration saveBootstrapTokens="true" />
Once you’ve done that, WIF will load the token.
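For context, the relevant chunk of web.config ends up looking something like this (a sketch; your securityTokenHandlers element may already carry other settings):

```xml
<microsoft.identityModel>
  <service>
    <securityTokenHandlers>
      <!-- Keep the original SAML token around as the bootstrap token. -->
      <securityTokenHandlerConfiguration saveBootstrapTokens="true" />
    </securityTokenHandlers>
  </service>
</microsoft.identityModel>
```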
The properties are fairly straightforward, but you can't just get a blob from it. Luckily, we have some code to convert the bootstrap token to a chunk of XML:
SecurityToken bootstrapToken = ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities[0].BootstrapToken;

StringBuilder sb = new StringBuilder();

using (var writer = XmlWriter.Create(sb))
{
    new Saml11SecurityTokenHandler(new SamlSecurityTokenRequirement()).WriteToken(writer, bootstrapToken);
}

string theXml = sb.ToString();
We get a proper XML document. That's all there is to it.
From Microsoft Marketing, ADFS 2.0 is:

Active Directory Federation Services 2.0 helps IT enable users to collaborate across organizational boundaries and easily access applications on-premises and in the cloud, while maintaining application security. Through a claims-based infrastructure, IT can enable a single sign-on experience for end-users to applications without requiring a separate account or password, whether applications are located in partner organizations or hosted in the cloud.
So, it's a Token Service plus some. In a previous post I had said:

In other words it is a method for centralizing user Identity information, very much like how the Windows Live and OpenID systems work. The system is reasonably simple. I have a Membership data store that contains user information. I want (n) number of websites to use that membership store, EXCEPT I don't want each application to have direct access to membership data such as passwords. The way around it is through claims.

The membership store in this case being Active Directory.
I thought it would be a good idea to run through how to install ADFS and set up an application to use it. Since we already discussed how to federate an application using FedUtil.exe, I will let you go through the steps in the previous post. I will provide information on where to find the Metadata later on in this post.
But First: The Prerequisites

- Join the server to the domain. (I've started the installation of ADFS three times on non-domain-joined systems. Doh!)
- Install the latest .NET Framework. I'm kinda partial to using SmallestDotNet.com created by Scott Hanselman. It's easy.
- Install IIS. If you are running Server 2008 R2 you can follow these steps in another post, or just go through the wizards. FYI: the post installs EVERY feature. Just remember that when you move to production. Surface area and what not…
- Install PowerShell.
- Install the Windows Identity Foundation: http://www.microsoft.com/downloads/details.aspx?FamilyID=eb9c345f-e830-40b8-a5fe-ae7a864c4d76&displaylang=en
- Install SQL Server. This is NOT required. You only need to install it if you want to use a SQL database to get custom claims data. You could also use a SQL Server on another server…
- Download ADFS 2.0 RTW: http://www.microsoft.com/downloads/details.aspx?familyid=118c3588-9070-426a-b655-6cec0a92c10b&displaylang=en
The Installation
Read the terms and accept them. If you notice, you only have to read half of what you see because the rest is in French. Maybe the lawyers are listening… these things are getting more readable.

Select Federation Server. A Server Proxy allows you to use ADFS on a web server not joined to the domain.

We already installed all of these things. When you click next it will check for the latest hotfixes and ask if you want to open the configuration MMC snap-in. Start it.

We want to start the configuration wizard and then create a new Federation Service:

Next we want to create a stand-alone federation server:

We need to select a certificate for ADFS to use. By default it uses the SSL certificate of the default site in IIS. So let's add one. In the IIS Manager select the server and then select Server Certificates:

We have a couple options when it comes to adding a certificate. For the sake of this post I'll just create a self-signed certificate, but if you have a domain Certificate Authority you could go that route, or if this is a public-facing service, create a request and get a certificate from a 3rd party CA.

Once we've created the certificate we assign it to the web site. Go to the website and select Bindings…

Add a site binding for https:

Now that we've done that we can go back to the Configuration Wizard:

Click next and it will install the service. It will stop IIS, so be aware of that.
You may receive this error if you are installing on Server 2008:

The fix for this is here: http://www.syfuhs.net/2010/07/23/ADFS20WindowsServiceNotStartingOnServer2008.aspx

You will need to re-run the configuration wizard if you do this. It may complain about the virtual applications already existing. You have two options: 1) delete the applications in IIS as well as the folder C:\inetpub\adfs; or 2) ignore the warning.
Back to the installation: it will create two new virtual applications in IIS:

Once the wizard finishes you can go back to the MMC snap-in and fiddle around. The first thing we need to do is create an entry for a Relying Party. This will allow us to create a web application to work with it.

When creating an RP we have a couple options to provide configuration data. Since we are going to create a web application from scratch, we will enter manual data. If you already have the application built and have Federation Metadata available for it, by all means just use that.
We need a name:

Very original, eh?

Next we need to decide on which profile we will be using. Since we are building an application from scratch we can take advantage of the 2.0 profile, but if we needed backwards compatibility for a legacy application we should select the 1.0/1.1 profile.

Next we specify the certificate to encrypt the claims sent to the application. We only need the public key of the certificate. When we run FedUtil.exe we can specify which certificate we want to use to decrypt the incoming tokens; this will be the private key of the same certificate. For the sake of this, we'll skip it.

The next step gets a little confusing. It asks which protocols we want to use if we are federating with a separate STS. In this case, since we aren't doing anything that crazy, we can ignore them and continue:

We next need to specify the RP's identifying URI.

Allow anyone and everyone, or deny everyone and add specific users later? Allow everyone…
When we finish we want to edit the claim rules:

This dialog will allow us to add mappings between claims and the data within Active Directory:

So let's add a rule. We want to Send LDAP Attributes as Claims.

First we specify what data in Active Directory we want to provide:

Then we specify which claim type to use:

And ADFS is configured! Let's create our Relying Party. You can follow these steps: Making an ASP.NET Website Claims Aware with the Windows Identity Foundation. To get the Federation Metadata for ADFS, navigate to the URL that the default website is mapped to + /FederationMetadata/2007-06/FederationMetadata.xml. In my case it's https://web1.nexus.internal.test/FederationMetadata/2007-06/FederationMetadata.xml.

Once you finish the utility it's important that we tell ADFS that our new RP has Metadata available. Double click on the RP to get to the properties and select Monitoring:

Add the URL for the Metadata and select Monitor relying party. This will periodically call up the URL and download the metadata in the event that it changes.
At this point we can test. Hit F5 and we will redirect to the ADFS page. It will ask for domain credentials and redirect back to our page. Since I tested it with a domain admin account I got this back:

It works!
For more information on ADFS 2.0 check out http://www.microsoft.com/windowsserver2008/en/us/ad-fs-2-overview.aspx or the WIF blog at http://blogs.msdn.com/b/card/

Happy coding!