A couple of weeks ago someone sent a message to one of our internal mailing lists. His question was pretty straightforward: how do you prevent modifications to a configuration file for an application [while the user has administrative rights on the machine]?
There were a couple of responses, including mine, which was to cryptographically sign the configuration file with an asymmetric key. For a primer on digital signing, take a look here. Asymmetric signing is one possible way of signing a file. With this approach the configuration file is signed by an administrator before deploying the application, and all the application needs to validate the signature is the public key associated with the private key used to sign the file. This separates the private key from the application, preventing the configuration from being re-signed maliciously. It’s similar in theory to how code-signing works.
In the event that validation of the configuration file failed, the application would not load, or would gracefully fail and exit the next time the file was checked (or the application had an exclusive lock on the configuration file so it couldn’t be edited while running).
We are also saved the problem of figuring out a signature format, because there is a well-respected XML signature schema: http://www.w3.org/2000/09/xmldsig#. WCF uses this format to sign messages. For a good code walkthrough see Barry Dorrans’ Beginning ASP.NET Security. More on the code later in this post though.
Technically, this won’t prevent changes to the file, but it will prevent the application from accepting those changes. It’s kind of like those tamper-evident tags manufacturers stick on the enclosures of their equipment. They don’t prevent someone from opening the thing, but they will get caught if someone checks. You’ll notice I didn’t call them “tamper-resistant” tags.
Given this problem, I went one step further and asked myself: how would I do this with a web application? A well-informed ASP.NET developer might suggest using aspnet_regiis to encrypt the configuration file. Encrypting the configuration does protect against certain things, like being able to read configuration data. However, there are a couple problems with this.
- If I’m an administrator on that server I can easily decrypt the file by calling aspnet_regiis (or with a few lines of code – see the sketch after this list)
- If I’ve found a way to exploit the site, I can potentially overwrite the contents of the file and make the application behave differently
- The encryption/decryption keys need to be shared in web farms
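To illustrate that first point, decrypting a protected section doesn’t even require aspnet_regiis; any code running with sufficient rights on the box can do it through the configuration API. A minimal sketch (the virtual path "/MyApp" is an assumption):

using System.Configuration;
using System.Web.Configuration;

public static class ConfigDecryptionDemo
{
    public static void DecryptConnectionStrings()
    {
        // Open the site's configuration and find the protected section.
        Configuration config = WebConfigurationManager.OpenWebConfiguration("/MyApp");
        ConfigurationSection section = config.GetSection("connectionStrings");

        if (section.SectionInformation.IsProtected)
        {
            // Remove the protection and write the plaintext back to disk.
            section.SectionInformation.UnprotectSection();
            config.Save();
        }
    }
}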
Consider our goal. We want to prevent a user with administrative privileges from modifying the configuration. Encryption does not help us in this case. Signing the configuration will help though (as an aside, for more protection you can encrypt the file and then sign it, but that’s out of the scope of this post), because the web application will stop working if a change is made that invalidates the signature.
Of course, there’s one little problem. You can’t stick the signature in the configuration file, because ASP.NET will complain about the foreign XML tag. The original application in question was assumed to have a custom XML file for its configuration, but in reality it doesn’t, so this problem applies there too.
There are three possible solutions to this:
- Create a custom ConfigurationSection class for the signature
- Create a custom configuration file and handler, and intercept all calls to web.config
- Stick the signature of the configuration file into a different file
The first option isn’t a bad idea, but I really didn’t want to muck about with the configuration classes. The second option is, well, pretty much a stupid idea in almost all cases, and I’m not entirely sure you can even intercept all calls to the configuration classes.
I went with option three.
The other file has two important parts: the signature of the web.config file, and a signature for itself. This second signature prevents someone from modifying the signature for the web.config file. Our code becomes a bit more complicated because now we need to validate both signatures.
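To make the layout concrete, the signing code shown later in this post produces a signature file shaped roughly like this (signature contents abbreviated):

<Configuration>
  <Files>
    <File>
      <FileName>Web.config</FileName>
      <FileSignature>
        <Signature xmlns="http://www.w3.org/2000/09/xmldsig#"><!-- signature of Web.config --></Signature>
      </FileSignature>
    </File>
    <!-- one File element per signed configuration file -->
  </Files>
  <Signature xmlns="http://www.w3.org/2000/09/xmldsig#"><!-- signature of this document --></Signature>
</Configuration>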
This makes us ask the question: where is the validation handled? It needs to happen early enough in the request lifecycle, so I decided to stick it into an HTTP Module, for the sake of modularity.
Hold it, you say. If the code is in an HTTP Module, then it needs to be added to the web.config. If you are adding it to the web.config, and protecting the web.config with this module, then removing said module from the web.config will prevent the validation from occurring.
Yep.
There are two ways around this:
- Add the validation call into Global.asax
- Hard code the addition of the HTTP Module
It’s very rare that I take the easy approach, so I’ve decided to hard code the addition of the HTTP Module, because sticking the code into a module is cleaner.
In older versions of ASP.NET you had to make some pretty ugly hacks to get the module in, because it needs to happen very early in the startup of the web application. With ASP.NET 4.0, an assembly attribute was added that lets you run code very early on, before the application itself starts:
[assembly: PreApplicationStartMethod(typeof(Syfuhs.Security.Web.Startup), "Go")]
Within the Startup class there is a public static method called Go(). This method calls Register() on an instance of my HTTP Module. The module inherits from an abstract class called DynamicallyLoadedHttpModule, which implements IHttpModule. This class looks like (a sketch of the Startup class itself follows it):
public abstract class DynamicallyLoadedHttpModule : IHttpModule
{
public void Register()
{
DynamicHttpApplication.RegisterModule(delegate(HttpApplication app) { return this; });
}
public abstract void Init(HttpApplication context);
public abstract void Dispose();
}
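The Startup class itself isn’t shown here; a minimal sketch of it, assuming the module is the SignedConfigurationHttpModule shown further down, would be:

namespace Syfuhs.Security.Web
{
    public static class Startup
    {
        // Called by ASP.NET because of the PreApplicationStartMethod attribute above.
        public static void Go()
        {
            new SignedConfigurationHttpModule().Register();
        }
    }
}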
The DynamicHttpApplication class inherits from HttpApplication and allows you to load HTTP modules in code. This code was not written by me. It was originally written by Nikhil Kothari:
using HttpModuleFactory = System.Func<System.Web.HttpApplication, System.Web.IHttpModule>;
public abstract class DynamicHttpApplication : HttpApplication
{
private static readonly Collection<HttpModuleFactory> Factories
= new Collection<HttpModuleFactory>();
private static object _sync = new object();
private static bool IsInitialized = false;
private List<IHttpModule> modules;
public override void Init()
{
base.Init();
if (Factories.Count == 0)
return;
List<IHttpModule> dynamicModules = new List<IHttpModule>();
lock (_sync)
{
if (Factories.Count == 0)
return;
foreach (HttpModuleFactory factory in Factories)
{
IHttpModule m = factory(this);
if (m != null)
{
m.Init(this);
dynamicModules.Add(m);
}
}
}
if (dynamicModules.Count != 0)
modules = dynamicModules;
IsInitialized = true;
}
public static void RegisterModule(HttpModuleFactory factory)
{
if (IsInitialized)
throw new InvalidOperationException(Exceptions.CannotRegisterModuleLate);
if (factory == null)
throw new ArgumentNullException("factory");
Factories.Add(factory);
}
public override void Dispose()
{
if (modules != null)
modules.ForEach(m => m.Dispose());
modules = null;
base.Dispose();
GC.SuppressFinalize(this);
}
}
Finally, to get this all wired up we modify the Global.asax to inherit from DynamicHttpApplication:
public class Global : DynamicHttpApplication
{ … }
Like I said, you could just add the validation code into Global (but where’s the fun in that?)…
So, now that we’ve made it possible to add the HTTP Module, let’s actually look at the module:
public sealed class SignedConfigurationHttpModule : DynamicallyLoadedHttpModule
{
public override void Init(HttpApplication context)
{
if (context == null)
throw new ArgumentNullException("context");
context.BeginRequest += new EventHandler(context_BeginRequest);
context.Error += new EventHandler(context_Error);
}
private void context_BeginRequest(object sender, EventArgs e)
{
HttpApplication app = (HttpApplication)sender;
SignatureValidator validator
= new SignatureValidator(app.Request.PhysicalApplicationPath);
validator.ValidateConfigurationSignatures(
CertificateLocator.LocateSigningCertificate());
}
private void context_Error(object sender, EventArgs e)
{
HttpApplication app = (HttpApplication)sender;
foreach (var exception in app.Context.AllErrors)
{
if (exception is XmlSignatureValidationFailedException)
{
// Maybe do something
// Or don't...
break;
}
}
}
public override void Dispose() { }
}
Nothing special here. Just hooking into the context.BeginRequest event so validation occurs on each request. There would be some performance impact as a result.
The core validation is contained within the SignatureValidator class, and there is a public method that we call to validate the signature file, ValidateConfigurationSignatures(…). This method accepts an X509Certificate2 to compare the signature against.
The schema we are using for the signature actually encodes the public key of the signing key pair into the signature element; however, we want to go one step further and make sure the file is signed by a particular certificate. This prevents someone from modifying the configuration file and re-signing it with a different private key. Validating the signature is not enough; we need to make sure it’s signed by someone we trust.
The validator first validates the schema of the signature file. Is the XML well formed? Does the signature file conform to a schema we defined (the schema is defined in a Constants class)? Following that, it validates the signature of the signature file itself. Has it been tampered with? Following that, it validates the signature of the web.config file. Has the web.config file been tampered with?
Before it can do all of this though, it needs to check to see if the signature file exists. The variable passed into the constructor is the physical path of the web application. The validator knows that the signature file should be in the App_Data folder within the root. This file needs to be here because the folder by default will not let you access anything in it, and we don’t want anyone downloading the file. The path is also hardcoded specifically so changes to the configuration cannot bypass the signature file validation.
Here is the validator:
internal sealed class SignatureValidator
{
public SignatureValidator(string physicalApplicationPath)
{
this.physicalApplicationPath = physicalApplicationPath;
this.signatureFilePath = Path.Combine(this.physicalApplicationPath,
"App_Data\\Signature.xml");
}
private string physicalApplicationPath;
private string signatureFilePath;
public void ValidateConfigurationSignatures(X509Certificate2 cert)
{
Permissions.DemandFilePermission(FileIOPermissionAccess.Read, this.signatureFilePath);
if (cert == null)
throw new ArgumentNullException("cert");
if (cert.HasPrivateKey)
throw new SecurityException(Exceptions.ValidationCertificateHasPrivateKey);
if (!File.Exists(signatureFilePath))
throw new SecurityException(Exceptions.CouldNotLoadSignatureFile);
XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
doc.Load(signatureFilePath);
ValidateXmlSchema(doc);
CheckForUnsignedConfig(doc);
if (!X509CertificateCompare.Compare(cert, ValidateSignature(doc)))
throw new XmlSignatureValidationFailedException(
Exceptions.SignatureFileNotSignedByExpectedCertificate);
List<XmlSignature> signatures = ParseSignatures(doc);
ValidateSignatures(signatures, cert);
}
private void CheckForUnsignedConfig(XmlDocument doc)
{
List<string> signedFiles = new List<string>();
foreach (XmlElement file in doc.GetElementsByTagName("File"))
{
string fileName = Path.Combine(this.physicalApplicationPath,
file["FileName"].InnerText);
signedFiles.Add(fileName.ToUpperInvariant());
}
CheckConfigFiles(signedFiles);
}
private void CheckConfigFiles(List<string> signedFiles)
{
foreach (string file in Directory.EnumerateFiles(
this.physicalApplicationPath, "*.config", SearchOption.AllDirectories))
{
string path = Path.Combine(this.physicalApplicationPath, file);
if (!signedFiles.Contains(path.ToUpperInvariant()))
throw new XmlSignatureValidationFailedException(
string.Format(CultureInfo.CurrentCulture, Exceptions.ConfigurationFileWithoutSignature, path));
}
}
private void ValidateXmlSchema(XmlDocument doc)
{
using (StringReader fileReader = new StringReader(Constants.SignatureFileSchema))
using (StringReader signatureReader = new StringReader(Constants.SignatureSchema))
{
XmlSchema fileSchema = XmlSchema.Read(fileReader, null);
XmlSchema signatureSchema = XmlSchema.Read(signatureReader, null);
doc.Schemas.Add(fileSchema);
doc.Schemas.Add(signatureSchema);
doc.Validate(Schemas_ValidationEventHandler);
}
}
void Schemas_ValidationEventHandler(object sender, ValidationEventArgs e)
{
throw new XmlSignatureValidationFailedException(Exceptions.InvalidSchema, e.Exception);
}
public static X509Certificate2 ValidateSignature(XmlDocument xml)
{
if (xml == null)
throw new ArgumentNullException("xml");
XmlElement signature = ExtractSignature(xml.DocumentElement);
return ValidateSignature(xml, signature);
}
public static X509Certificate2 ValidateSignature(XmlDocument doc, XmlElement signature)
{
if (doc == null)
throw new ArgumentNullException("doc");
if (signature == null)
throw new ArgumentNullException("signature");
X509Certificate2 signingCert = null;
SignedXml signed = new SignedXml(doc);
signed.LoadXml(signature);
foreach (KeyInfoClause clause in signed.KeyInfo)
{
KeyInfoX509Data key = clause as KeyInfoX509Data;
if (key == null || key.Certificates.Count != 1)
continue;
signingCert = (X509Certificate2)key.Certificates[0];
}
if (signingCert == null)
throw new CryptographicException(Exceptions.SigningKeyNotFound);
if (!signed.CheckSignature())
throw new CryptographicException(Exceptions.SignatureValidationFailed);
return signingCert;
}
private static void ValidateSignatures(List<XmlSignature> signatures, X509Certificate2 cert)
{
foreach (XmlSignature signature in signatures)
{
X509Certificate2 signingCert
= ValidateSignature(signature.Document, signature.Signature);
if (!X509CertificateCompare.Compare(cert, signingCert))
throw new XmlSignatureValidationFailedException(
string.Format(CultureInfo.CurrentCulture,
Exceptions.SignatureForFileNotSignedByExpectedCertificate, signature.FileName));
}
}
private List<XmlSignature> ParseSignatures(XmlDocument doc)
{
List<XmlSignature> signatures = new List<XmlSignature>();
foreach (XmlElement file in doc.GetElementsByTagName("File"))
{
string fileName
= Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);
Permissions.DemandFilePermission(FileIOPermissionAccess.Read, fileName);
if (!File.Exists(fileName))
throw new FileNotFoundException(
string.Format(CultureInfo.CurrentCulture, Exceptions.FileNotFound, fileName));
XmlDocument fileDoc = new XmlDocument() { PreserveWhitespace = true };
fileDoc.Load(fileName);
XmlElement sig = file["FileSignature"] as XmlElement;
signatures.Add(new XmlSignature()
{
FileName = fileName,
Document = fileDoc,
Signature = ExtractSignature(sig)
});
}
return signatures;
}
private static XmlElement ExtractSignature(XmlElement xml)
{
XmlNodeList xmlSignatureNode = xml.GetElementsByTagName("Signature");
if (xmlSignatureNode.Count <= 0)
throw new CryptographicException(Exceptions.SignatureNotFound);
return xmlSignatureNode[xmlSignatureNode.Count - 1] as XmlElement;
}
}
You’ll notice there is a bit of functionality I didn’t mention. Checking that the web.config file hasn’t been modified isn’t enough. We also need to check if any *other* configuration file has been modified. It’s no good if you leave the root configuration file alone, but modify the <authorization> tag within the administration folder to allow anonymous access, right?
So there is code that looks through the site for any files with the “config” extension, and if a file isn’t in the signature file, it throws an exception.
There is also a check done at the very beginning of the validation. If you pass an X509Certificate2 with a private key it will throw an exception. This is absolutely by design. You sign the file with the private key. You validate with the public key. If the private key is present during validation that means you are not separating the keys, and all of this has been a huge waste of time because the private key is not protected. Oops.
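Two small helpers referenced in the validator, X509CertificateCompare and Permissions.DemandFilePermission, aren’t shown in this post. Minimal sketches of what they might look like (comparing certificates by thumbprint is an assumption on my part):

internal static class X509CertificateCompare
{
    // Two certificates with the same thumbprint are the same certificate.
    public static bool Compare(X509Certificate2 left, X509Certificate2 right)
    {
        if (left == null || right == null)
            return false;

        return string.Equals(left.Thumbprint, right.Thumbprint,
            StringComparison.OrdinalIgnoreCase);
    }
}

internal static class Permissions
{
    // Demand the requested access to the file before touching it.
    public static void DemandFilePermission(FileIOPermissionAccess access, string path)
    {
        new FileIOPermission(access, path).Demand();
    }
}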
Finally, it’s important to know how to sign the files. I’m not a fan of generating XML properly, partially because I’m lazy and partially because it’s a pain to do, so mind the StringBuilder:
public sealed class XmlSigner
{
public XmlSigner(string appPath)
{
this.physicalApplicationPath = appPath;
}
string physicalApplicationPath;
public XmlDocument SignFiles(string[] paths, X509Certificate2 cert)
{
if (paths == null || paths.Length == 0)
throw new ArgumentNullException("paths");
if (cert == null || !cert.HasPrivateKey)
throw new ArgumentNullException("cert");
XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
StringBuilder sb = new StringBuilder();
sb.Append("<Configuration>");
sb.Append("<Files>");
foreach (string p in paths)
{
sb.Append("<File>");
sb.AppendFormat("<FileName>{0}</FileName>",
p.Replace(this.physicalApplicationPath, ""));
sb.AppendFormat("<FileSignature><Signature xmlns=\"http://www.w3.org/2000/09/xmldsig#\">{0}</Signature></FileSignature>",
SignFile(p, cert).InnerXml);
sb.Append("</File>");
}
sb.Append("</Files>");
sb.Append("</Configuration>");
doc.LoadXml(sb.ToString());
doc.DocumentElement.AppendChild(doc.ImportNode(SignXmlDocument(doc, cert), true));
return doc;
}
public static XmlElement SignFile(string path, X509Certificate2 cert)
{
if (string.IsNullOrWhiteSpace(path))
throw new ArgumentNullException("path");
if (cert == null || !cert.HasPrivateKey)
throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);
Permissions.DemandFilePermission(FileIOPermissionAccess.Read, path);
XmlDocument doc = new XmlDocument();
doc.PreserveWhitespace = true;
doc.Load(path);
return SignXmlDocument(doc, cert);
}
public static XmlElement SignXmlDocument(XmlDocument doc, X509Certificate2 cert)
{
if (doc == null)
throw new ArgumentNullException("doc");
if (cert == null || !cert.HasPrivateKey)
throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);
SignedXml signed = new SignedXml(doc) { SigningKey = cert.PrivateKey };
Reference reference = new Reference() { Uri = "" };
XmlDsigC14NTransform transform = new XmlDsigC14NTransform();
reference.AddTransform(transform);
XmlDsigEnvelopedSignatureTransform envelope = new XmlDsigEnvelopedSignatureTransform();
reference.AddTransform(envelope);
signed.AddReference(reference);
KeyInfo keyInfo = new KeyInfo();
keyInfo.AddClause(new KeyInfoX509Data(cert));
signed.KeyInfo = keyInfo;
signed.ComputeSignature();
XmlElement xmlSignature = signed.GetXml();
return xmlSignature;
}
}
To write this to a file you can call it like this:
XmlWriter writer = XmlWriter.Create(
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\App_Data\Signature.xml");
XmlSigner signer = new XmlSigner(Request.PhysicalApplicationPath);
XmlDocument xml = signer.SignFiles(new string[] {
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.debug.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.release.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Account\Web.config",
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\test.config"
},
new X509Certificate2(
@"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\cert.pfx", "1"));
xml.WriteTo(writer);
writer.Flush();
Within this code, you have to pass in an X509Certificate2 with a private key, otherwise you can’t sign the files.
These processes should occur on different machines. The private key should never be on the server hosting the site. The basic steps for deployment would go something like:
1. Compile web application.
2. Configure site and configuration files on staging server.
3. Run application that signs the configuration and generates the signature file.
4. Drop the signature.xml file into the App_Data folder.
5. Deploy configured and signed application to production.
There is one final note (I think I’ve made that note a few times by now…) and that is the CertificateLocator class. At the moment it just returns an X509Certificate2 from a particular path on my file system. This isn’t necessarily the best approach because it may be possible to overwrite that file. You should store the certificate in a safe place and make a secure call to get it. For instance, a web service call might make sense. If you have a Hardware Security Module (HSM) to store secret bits in, even better.
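As one example of a safer call, the locator could pull the validation certificate (public key only) from the Windows certificate store instead of from a path on disk. A rough sketch, with the store name and thumbprint as placeholders:

internal static class CertificateLocator
{
    public static X509Certificate2 LocateSigningCertificate()
    {
        // Look the certificate up by thumbprint in the machine store rather than
        // reading it from a file that could be overwritten.
        X509Store store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);

        try
        {
            store.Open(OpenFlags.ReadOnly);

            X509Certificate2Collection found = store.Certificates.Find(
                X509FindType.FindByThumbprint, "PUT-THUMBPRINT-HERE", false);

            if (found.Count == 0)
                throw new SecurityException("Signing certificate could not be located.");

            return found[0];
        }
        finally
        {
            store.Close();
        }
    }
}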
Concluding Bits
What have we accomplished by signing our configuration files? We add a degree of trust that our application hasn’t been compromised. In the event that the configuration has been modified, the application stops working. This could be from malicious intent, or careless administrators. This is a great way to prevent one-off changes to configuration files in web farms. It is also a great way to prevent customers from mucking up the configuration file you’ve deployed with your application.
This solution was designed to mitigate quite a few attacks. An attacker cannot modify configuration files. An attacker cannot modify the signature file. An attacker cannot view the signature file. An attacker cannot remove the signature file. An attacker cannot remove the HTTP Module that validates the signature without changing the underlying code. An attacker cannot change the underlying code because it’s been compiled before being deployed.
Is it necessary to use on every deployment? No, probably not.
Does it go a little overboard with regard to complexity? Yeah, a little.
Does it protect against a real problem? Absolutely.
Unfortunately it also requires full trust.
Overall it’s a fairly robust solution and shows how you can mitigate certain types of risks seen in the real world.
And of course, it works with both WebForms and MVC.
You can download the full source here.
The biggest detractor to Single Sign On is the same thing that makes it so appealing – you only need to prove your identity once. This scares the hell out of some people, because if you can compromise a user's session in one application it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.
- You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
- You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
- At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
- So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.
There are certainly ways to stop this, as parts of the attack are fairly trivial to defend against. For instance you could pop up an OK/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think about this at a high level.
The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.
This problem is similar to the problem many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems because any application running could do whatever it pleased on the system. Malware was rampant, and worse, users were just doing all-around stupid things because they didn't know what they were doing but they had the permissions necessary to do it.
The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges, make them re-authenticate and temporarily run with the higher privileges. The key here is that you only run with higher privileges temporarily. However, security lost the argument and Microsoft caved while developing Windows Vista, creating User Account Control (UAC). By default a user is an administrator, but they don't have administrative privileges; their user token is a stripped-down administrator token, so they only have non-administrative privileges. In order to take full advantage of the administrator token, a user has to elevate and request the full token temporarily. This is a stop-gap solution though, because it's theoretically possible to circumvent UAC since the administrative token still exists. It also doesn't require you to re-authenticate – you just have to approve the elevation.
As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.
Some web applications are requiring elevation. For instance consider online banking sites. When I sign in I have a default set of privileges. I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate myself by entering a private pin. So for instance I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.
There are a couple of ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims Based Authentication and WIF.
First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter called wauth. This parameter defines how authentication should happen. The values for it are mostly specific to each STS, however there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are some of the most often used:
| Authentication Type/Credential | wauth Value |
|---|---|
| Password | urn:oasis:names:tc:SAML:1.0:am:password |
| Kerberos | urn:ietf:rfc:1510 |
| TLS | urn:ietf:rfc:2246 |
| PKI/X509 | urn:oasis:names:tc:SAML:1.0:am:X509-PKI |
| Default | urn:oasis:names:tc:SAML:1.0:am:unspecified |
When you pass one of these values to the STS during the sign-in request, the STS should then request that particular type of credential. The wauth parameter supports arbitrary values, so you can use whatever you like. We can therefore create a value that tells the STS we want to re-authenticate because of an elevation request.
All you have to do is redirect to the STS with the wauth parameter:
https://yoursts/authenticate?wa=wsignin1.0&wtrealm=uri:myrp&wauth=urn:super:secure:elevation:method
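On the relying party side, triggering that request can be as simple as building the sign-in URL and redirecting; a rough sketch using the placeholder STS address and wauth value from above:

public static void RequestElevation(HttpContext context)
{
    // Send the user back to the STS, asking for the custom elevation method.
    string signInUrl = string.Format(
        CultureInfo.InvariantCulture,
        "https://yoursts/authenticate?wa=wsignin1.0&wtrealm={0}&wauth={1}",
        HttpUtility.UrlEncode("uri:myrp"),
        HttpUtility.UrlEncode("urn:super:secure:elevation:method"));

    context.Response.Redirect(signInUrl);
}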
Once the user has re-authenticated you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:
http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod
Just add the claim to the output identity:
protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
IClaimsIdentity ident = principal.Identity as IClaimsIdentity;
ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));
// finish filling claims...
return ident;
}
At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:
public static bool IsElevated(this IClaimsPrincipal principal)
{
return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}
And then have a bit of code to check:
var p = Thread.CurrentPrincipal as IClaimsPrincipal;
if (p != null && p.IsElevated())
{
DoSomethingRequiringElevation();
}
This satisfies half the requirements for elevating privilege. We need to make it so the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP. In Global.asax we could do something like:
void Application_Start(object sender, EventArgs e)
{
FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived
+= new EventHandler<SessionSecurityTokenReceivedEventArgs>
(SessionAuthenticationModule_SessionSecurityTokenReceived);
}
void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender,
SessionSecurityTokenReceivedEventArgs e)
{
if (e.SessionToken.ClaimsPrincipal.IsElevated())
{
SessionSecurityToken token
= new SessionSecurityToken(e.SessionToken.ClaimsPrincipal, e.SessionToken.Context,
e.SessionToken.ValidFrom, e.SessionToken.ValidFrom.AddMinutes(15));
e.SessionToken = token;
}
}
This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.
There are other places where this could occur like within the STS itself, however this value may need to be independent of the STS.
As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.
Earlier this morning the Geneva (WIF/ADFS) Product Team announced a CTP for supporting the SAML protocol within WIF. WIF has supported SAML tokens since its inception; however, it hasn't supported the SAML protocol until now. According to the team:
This WIF extension allows .NET developers to easily create claims-based SP-Lite compliant Service Provider applications that use SAML 2.0 conformant identity providers such as AD FS 2.0.
This is the first I've seen this CTP, so I decided to jump into the Quick Start solution to get a feel for what's going on. Here is the solution hierarchy:

There isn't much to it. We have the sample identity provider that generates a token for us, a relying party application (service provider), and a utilities project to help with some sample-related duties.
In most cases, we really only need to worry about the Service Provider, as the IdP probably already exists. I think creating an IdP using this framework is for a different post.
If we consider that WIF mostly works via configuration changes to the web.config, it stands to reason that the SAML extensions will too. Let's take a look at the web.config file.
There are three new things in the web.config that are different from a default-configured WIF application.
First we see a new configSection declaration:
<section name="microsoft.identityModel.saml" type="Microsoft.IdentityModel.Web.Configuration.MicrosoftIdentityModelSamlSection, Microsoft.IdentityModel.Protocols"/>
This creates a new configuration section called microsoft.identityModel.saml.
Interestingly, this doesn't actually contain much. Just pointers to metadata:
<microsoft.identityModel.saml metadata="bin\App_Data\serviceprovider.xml">
<identityProviders>
<metadata file="bin\App_Data\identityprovider.xml"/>
</identityProviders>
</microsoft.identityModel.saml>
Now this is a step away from WIF-ness. These metadata documents are consumed by the extension. They contain certificates and endpoint references:
<SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://localhost:6010/IdentityProvider/saml/redirect/sso"/>
I can see some extensibility options here.
Finally, an HTTP Module is added to handle the token response:
<add name="Saml2AuthenticationModule" type="Microsoft.IdentityModel.Web.Saml2AuthenticationModule"/>
This module works similarly to the WSFederationAuthenticationModule used by WIF out of the box.
It then uses the SessionAuthenticationModule to handle session creation and management, which is the same module used by WIF.
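Putting the two modules side by side, the relevant part of the quick start's web.config ends up looking roughly like this (assembly names are my assumption and trimmed for brevity):

<!-- under system.web/httpModules (or system.webServer/modules in IIS integrated mode) -->
<add name="SessionAuthenticationModule"
     type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, Microsoft.IdentityModel" />
<add name="Saml2AuthenticationModule"
     type="Microsoft.IdentityModel.Web.Saml2AuthenticationModule, Microsoft.IdentityModel.Protocols" />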
As you start digging through the rest of the project, there isn't actually anything too surprising to see. The default.aspx page just grabs a claim from the IClaimsIdentity object and adds a control used by the sample to display SAML data. There is a sign-out button though, which calls the following line of code:
Saml2AuthenticationModule.Current.SignOut( "~/Login.aspx" );
In the Login.aspx page there is a sign in button that calls a similar line of code:
Saml2AuthenticationModule.Current.SignIn( "~/Default.aspx" );
All in all, this SAML protocol extension seems to make federating with a SAML IdP fairly simple and straightforward.
When you start working with Windows Azure in your spare time there are quite a few things that you miss.
I knew that it was possible to manage Windows Azure with multiple accounts, but since I was the only one logging into my instance, I never bothered to look into it. Well as it turns out, I needed to be able to manage Azure from a separate Live ID. It's pretty simple to do. You get into your subscription, navigate to User Management under the Hosted Services tab, and then you add a new Co-Admin.
Turns out that you can't manage ACS this way though. You don't have access to the namespaces as the Co-Admin. Crap. That's really what I wanted to manage with the separate account. After a minute of swearing at the control panel, I logged into ACS with my original account and looked around.
Portal Administrators
Aha! It was staring me right in the face:

There is a full MSDN article on how to deal with Portal Administrators.
Upon clicking the link you are given a list of current administrators. I wanted to add one.
When you add an administrator you are given a list of Identity Providers to choose from. Interesting.

This means that I can manage this ACS namespace using any IdP that I want. I already have ADFS created as an IdP, so I'm going to use it. Getting Single Sign-On is always a bonus.
It asks for a claim type. When the ACS management portal receives a token, it will look for this claim type and compare its value to the Identity claim value. If it matches, you are authorized to manage the namespace. I chose email address. It seemed simple enough. To log in I just navigate to https://syfuhs2.accesscontrol.windows.net/ which then gives me the default Home Realm Discovery page:

I've already preconfigured ACS to redirect any email addresses with the objectsharp.com domain to our ADFS instance. Once I click submit it redirects to ADFS, I authenticate using Windows Authentication, and then I'm back at the ACS Control Panel. The next time I go to log in, a cookie will be there and the Home Realm Discovery page will see that I logged in with ADFS last time, so it will list that option first:

It just so happens that ObjectSharp is Awesome.
Now how cool is that?
One of the cornerstones of ADFS is the concept of federation (one would hope anyway, given the name), which is defined as a user's authentication process across applications, organizations, or companies. Or simply put, my company Contoso is a partner with Fabrikam. Fabrikam employees need access to one of my applications, so we create a federated trust between my application and their user store, so they can log into my application using their internal Active Directory. In this case, via ADFS.
So let's break this down into manageable bits.
First we have our application. This application is a relying party to my ADFS instance. By now hopefully this is relatively routine.
Next we have the trust between our ADFS and our partner company's STS. If the company had ADFS installed, we could just create a trust between the two, but let's go one step further and give anyone with a Live ID access to this application. Therefore we need to create a trust between the Live ID STS and our ADFS server.
This is easier than most people may think. We can just use Windows Azure Access Control Services (v2). ACS can be set up very easily to federate with Live ID (or Google, Yahoo, Facebook, etc), so we just need to federate with ACS, and ACS needs to federate with Live ID.
Creating a trust between ADFS and ACS requires two parts. First we need to tell ADFS about ACS, and second we need to tell ACS about ADFS.
To explain a bit further, we need to make ACS a Claims Provider to ADFS, so ADFS can call on ACS for authentication. Then we need to make ADFS a relying party to ACS, so ADFS can consume the token from ACS. Or rather, so ACS doesn't freak out when it sees a request for a token for ADFS.
This may seem a bit confusing at first, but it will become clearer when we walk through the process.
First we need to get the Federation Metadata for our ACS instance. In this case I've created an ACS namespace called "syfuhs2". The metadata can be found here: https://syfuhs2.accesscontrol.windows.net/FederationMetadata/2007-06/FederationMetadata.xml.
Next I need to create a relying party in ACS, telling it about ADFS. To do that browse to the Relying party applications section within the ACS management portal and create a new relying party:

Because ADFS natively supports trusts, I can just pass in the metadata for ADFS to ACS, and it will pull out the requisite pieces:

Once that is saved you can create a rule for the transform under the Rule Groups section:

For this I'm just going to generate a default set of rules.

This should take care of the ACS side of things. Next we move into ADFS.
Within ADFS we want to browse to the Claims Provider Trusts section:

And then we right-click > Add Claims Provider Trust
This should open a Wizard:

Follow through the wizard and fill in the metadata field:

Having Token Services that properly generate metadata is a godsend. Just sayin'.
Once the wizard has finished, it will open a Claims Transform wizard for incoming claims. This is just a set of claims rules that get applied to any tokens received by ADFS. In other words, what should happen to the claims within the token we receive from ACS?
In this case I'm just going to pass any claims through:

In practice, you should write a rule that filters out any extraneous claims that you don't necessarily trust. For instance, if I were to receive a role claim with a value "Administrator" I may not want to let it through because that could give administrative access to the user, even though it wasn't explicitly set by someone managing the application.
Once all is said and done, you can browse to the RP, get redirected for authentication, and be presented with this screen:

After you've made your first selection, a cookie will be generated and you won't be redirected to this screen again. If you select ACS, you then get redirected to the ACS Home Realm selection page (or directly to Live ID if you only have Live ID).
Sometime in the last few years Facebook has gotten stupidly popular. Given the massive user base, it actually makes a little bit of sense to take advantage of the fact that you can use them as an identity provider. Everyone has a Facebook account (except… me), and you can get a fair bit of information out of it on the user.
The problem though is that it uses OAuth, and I, of course, don't like OAuth. This makes it very unlikely for me to spend any amount of time working with the protocol, and as such I wouldn't jump at the chance to add it into an application. Luckily ACS supports Facebook natively – AND it's easy to set up.
First things first, we need to log into our ACS management portal, and select Identity Providers under Trust Relationships. Then we need to add a new Identity Provider:

Then we need to select Facebook as the type we want to add:

Once we start filling out the details for the federation we need to get some things from Facebook directly.

There are three fields we need to worry about, Application ID, Application secret, and Application permissions. We can get the first two from the settings page of our Facebook application, which you can get to at www.facebook.com/developers/.
You should create a separate application for each instance you create, and I'll explain why in a minute.
You then need the Application permissions. This is a list of claims to request access to from Facebook. The full list can be found here: http://developers.facebook.com/docs/authentication/permissions/, but for now email will suffice.
Once you have saved this identity provider you need to create a rule for each relying party. This will define how the claims are transformed before being sent to your relying party. If you already have rules set up you can modify one:

I'm pretty content with just using the default rules, which is to just pass everything, but you need to generate them first:


Once the rules have been generated you can save the rule.
Now you can test the federation.
It should fail.
If you watched everything in Fiddler you will see a chunk of JSON returned that looks something like:
{
"error": {
"type": "OAuthException",
"message": "Invalid redirect_uri: Given URL is not allowed by the Application configuration."
}
}
This goes back to my earlier warning about creating a separate application for each ACS namespace. Basically, Facebook doesn't like the request for authentication because it has no idea who the requestor is. Therefore I need to tell Facebook about my application.
To do this you need to get into the Web site settings for your application in Facebook:

You will need to set the Site URL property to the ACS namespace:

Given the requirement for the FQDN, you need to create an application for each namespace you decide to create.
At this point federation with Facebook should now work. If you are using the default login page you should see something like this:

And if you sign in you should get a token from Facebook which ACS will normalize, and then return to your relying party. Based on the permissions request you set above you should see something like this:

** UPDATE **
Some of you may be wondering about this AccessToken claim. Part of the ACS configuration asks for a set of permissions to request, and these permissions are tied to this access token. Instead of receiving everything within claims, you need to make a separate call to Facebook to get these details by using the access token.
Dominick Baier has a good article explaining how to accomplish this: http://www.leastprivilege.com/AccessControlServiceV2AndFacebookIntegration.aspx.
** END UPDATE **
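To give a rough idea of what that separate call looks like from the relying party, here's a sketch that pulls the access token out of the claim set and calls the Graph API (the claim type check is an assumption on my part; inspect the claims ACS actually issues):

IClaimsIdentity identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;

// Find the AccessToken claim that ACS copied through from Facebook.
string accessToken = identity.Claims
    .Where(c => c.ClaimType.EndsWith("AccessToken", StringComparison.OrdinalIgnoreCase))
    .Select(c => c.Value)
    .FirstOrDefault();

if (!string.IsNullOrEmpty(accessToken))
{
    using (WebClient client = new WebClient())
    {
        // Returns a JSON description of the signed-in user.
        string json = client.DownloadString(
            "https://graph.facebook.com/me?access_token=" + HttpUtility.UrlEncode(accessToken));
    }
}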
For those of you who want to federate with Facebook but don't like the idea of writing OAuth goo, ACS easily simplifies the process.
Part of the Mix11 announcement was that ACS v2 was released to production. It was actually released last Thursday but we were told to keep as quiet as possible so they could announce it at Mix. Here is the marketing speak:
The new ACS includes a plethora of new features that customers and partners have been asking with enthusiasm: single sign on from business and web identity providers, easy integration with our development tools, support for both enterprise-grade and web friendly protocols, out of the box integration with Facebook, Windows Live ID, Google and Yahoo, and many others.
Those features respond to such fundamental needs in modern cloud based systems that ACS has already become a key asset in many of our own offerings.
There is a substantial difference between v1 and v2. In v2, we now see:
Federation provider and Security Token Service (FINALLY!)
- Out of box federation with Active Directory Federation Services 2.0, Windows Live ID, Google, Yahoo, Facebook
New authorization scenarios
- Delegation using OAuth 2.0
Improved developer experience
- New web-based management portal
- Fully programmatic management using OData
- Works with Windows Identity Foundation
Additional protocol support
- WS-Federation, WS-Trust, OpenID 2.0, OAuth 2.0 (Draft 13)
That's a lot of stuff to keep up with, but luckily Microsoft has made it easier for us by giving us a whole whack of content to learn from.
First off, all of the training kits have now been updated to support v2:
Second, there are a bunch of new Channel9 videos just released:
Third, and finally, the Claims Based Identity and Access Control Guide was updated!
Talk about a bunch of awesome stuff.
When you set up ADFS as an IdP for SAML relying parties, you are given a page that allows you to log into the relying parties. There is nothing particularly interesting about this fact, except that it could be argued that the page allows for information leakage. Take a look at it:

There are two important things to note:
- I'm not signed in
- I can see every application that uses this IdP
I'm on the fence about this one. To some degree I don't care that people know we use ADFS to log into Salesforce. Frankly, I blogged about it. However, this could potentially be bad because it can tell an attacker about the applications you use, and the mechanisms you use to authenticate into them.
This is definitely something you should consider when developing your threat models.
Luckily, if you do decide that you don't want the applications to be visible, you can make a quick modification to the IdpInitiatedSignOn.aspx.cs page.
There is a method called SetRpListState:
protected void SetRpListState( object sender, EventArgs e )
{
RelyingPartyDropDownList.Enabled = OtherRpRadioButton.Checked;
ConsentDropDownList.Enabled = OtherRpRadioButton.Checked;
}
To get things working I made two quick modifications. First I added the following line of code to that method:
OtherRpPanel.Visible = this.IsAuthenticated;
Then I added a line to the Page_Init method:
SetRpListState(null, null);
Now unauthenticated users just see this:

And authenticated users see everything as expected:

You could extend this further and add some logic to look into the App Settings in the web.config to quickly and easily switch between modes.
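A hypothetical version of that (the appSettings key name here is made up) might look like:

protected void SetRpListState( object sender, EventArgs e )
{
    RelyingPartyDropDownList.Enabled = OtherRpRadioButton.Checked;
    ConsentDropDownList.Enabled = OtherRpRadioButton.Checked;

    // Only hide the relying party list from anonymous users if the switch is on.
    bool hideFromAnonymous;
    if (bool.TryParse(ConfigurationManager.AppSettings["HideRpListFromAnonymousUsers"],
        out hideFromAnonymous) && hideFromAnonymous)
    {
        OtherRpPanel.Visible = this.IsAuthenticated;
    }
    else
    {
        OtherRpPanel.Visible = true;
    }
}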
One of the projects that’s been kicking around in the back of my head is how to make Windows Phone 7 applications able to authenticate against a Windows domain. This is a must have for enterprise developers if they want to use the new platform.
There were a couple ways I could do this, but keeping with my Claims-shtick I figured I would use an STS. Given that ADFS is designed specifically for Active Directory authentication, I figured it would work nicely. It should work like this:

Nothing too spectacularly interesting about the process. In order to use ADFS though, I need the correct endpoint. In this case I’m using
https://[external.exampledomain.com]/adfs/services/Trust/13/usernamemixed
That takes care of half of the problem. Now I actually need to make my application call that web service endpoint.
This is kind of a pain because WP7/Silverlight don’t support the underlying protocol, WS-Trust.
Theoretically I could just add that endpoint as a service reference and build up all the pieces, but that is a nightmare scenario because of all the boiler-plating around security. It would be nice if there was a library that supported WS-Trust on the phone.
As it turns out Dominick Baier came across a solution. He converted the project that came from the Identity training kit initially designed for Silverlight. As he mentions there were a few gotchas, but overall it worked nicely. You can download his source code and play around.
I decided to take it a step further though. I didn’t really like the basic flow of token requests, and I didn’t like how I couldn’t work with IPrincipal/IIdentity objects.
First things first though. I wanted to start from scratch, so I opened the identity training kit and looked for the Silverlight project. You can find it here: [wherever you installed the kit]\IdentityTrainingKitVS2010\Labs\SilverlightAndIdentity\Source\Assets\SL.IdentityModel.
Initially I thought I could just add it to a phone project, but that was a bad idea; there were too many build errors. I could convert the project file to a phone library, but frankly I was lazy, so I just created a new phone library and copied the source files between projects.
There were a couple references missing, so I added System.Runtime.Serialization, System.ServiceModel, and System.Xml.Linq.
This got the project built, but will it work?
I copied Dominick’s code:
WSTrustClient _client;
private void button1_Click(object sender, RoutedEventArgs e)
{
_client = GetWSTrustClient(
"https://[...]/adfs/services/Trust/13/usernamemixed",
new UsernameCredentials("username", "password"));
var rst = new RequestSecurityToken(WSTrust13Constants.KeyTypes.Bearer)
{
AppliesTo = new EndpointAddress("[…]")
};
_client.IssueCompleted += client_IssueCompleted;
_client.IssueAsync(rst);
}
void client_IssueCompleted(object sender, IssueCompletedEventArgs e)
{
_client.IssueCompleted -= client_IssueCompleted;
if (e.Error != null)
throw e.Error;
var token = e.Result;
button2.IsEnabled = true;
}
private WSTrustClient
GetWSTrustClient(string stsEndpoint, IRequestCredentials credentials)
{
var client = new WSTrustClient(new WSTrustBindingUsernameMixed(),
new EndpointAddress(stsEndpoint), credentials);
return client;
}
To my surprise it worked. Sweet.
This left me wanting more though. In order to access any of the claims within the token I had to do something with the RequestSecurityTokenResponse (RSTR) object. Also, how do I make this identity stick around within the application?
The next thing I decided to do was figure out how to convert the RSTR object to an IClaimsIdentity. Unfortunately this requires a bit of XML parsing. Talk about a pain. Helper class it is:
public static class TokenHandler
{
private static XNamespace ASSERTION_NAMESPACE
= "urn:oasis:names:tc:SAML:1.0:assertion";
private const string CLAIM_VALUE_TYPE
= "http://www.w3.org/2001/XMLSchema#string"; // bit of a hack
public static IClaimsPrincipal Convert(RequestSecurityTokenResponse rstr)
{
return new ClaimsPrincipal(GetClaimsIdentity(rstr));
}
private static ClaimsIdentity GetClaimsIdentity(RequestSecurityTokenResponse rstr)
{
XDocument responseDoc = XDocument.Parse(rstr.RequestedSecurityToken.RawToken);
XElement attStatement = responseDoc.Element(ASSERTION_NAMESPACE + "Assertion")
.Element(ASSERTION_NAMESPACE + "AttributeStatement");
var issuer = responseDoc.Root.Attribute("Issuer").Value;
ClaimCollection claims = new ClaimCollection();
foreach (var c in attStatement.Elements(ASSERTION_NAMESPACE + "Attribute"))
{
string attrName = c.Attribute("AttributeName").Value;
string attrNamespace = c.Attribute("AttributeNamespace").Value;
string claimType = attrNamespace + "/" + attrName;
foreach (var val in c.Elements(ASSERTION_NAMESPACE + "AttributeValue"))
{
claims.Add(new Claim(issuer, issuer, claimType,
val.Value, CLAIM_VALUE_TYPE));
}
}
return new ClaimsIdentity(claims);
}
}
Most of this is just breaking apart the SAML-goo. Once I got all the SAML assertions I generated a claim for each one and created a ClaimsIdentity object. This gets me a step closer to how I wanted things, but keeping the identity around within the application is still up in the air. How can I keep the identity for the lifetime of the application? I wanted something like Thread.CurrentPrincipal but the phone platform doesn’t let you access it.
There was a class, TokenCache, that was part of the original Silverlight project. This sounded useful. It turns out it’s a Get/Add wrapper around a Dictionary<>. It’s almost useful, but I want to be able to access this cache at any time. A singleton sort of solves the problem, so let’s try that. I added this within the TokenCache class:
public static TokenCache Cache
{
get
{
if (_cache != null)
return _cache;
lock (_sync)
{
// Double-check inside the lock so two threads can't both create the cache.
if (_cache == null)
_cache = new TokenCache();
}
return _cache;
}
}
private static TokenCache _cache;
private static object _sync = new object();
Now I can theoretically get access to the tokens at any time, but I want to make the access part of the base Application object. I created a static class called ApplicationExtensions:
public static class ApplicationExtensions
{
public static IClaimsPrincipal
GetPrincipal(this Application app, string appliesTo)
{
if (!TokenCache.Cache.HasTokenInCache(appliesTo))
throw new ArgumentException("Token cannot be found to generate principal.");
return TokenHandler.Convert(TokenCache.Cache.GetTokenFromCache(appliesTo));
}
public static RequestSecurityTokenResponse
GetPrincipalToken(this Application app, string appliesTo)
{
return TokenCache.Cache.GetTokenFromCache(appliesTo);
}
public static void
SetPrincipal(this Application app, RequestSecurityTokenResponse rstr)
{
TokenCache.Cache.AddTokenToCache(rstr.AppliesTo.ToString(), rstr);
}
}
It adds three extension methods to the base Application object. Now it’s sort of like Thread.CurrentPrincipal.
How does this work? When the RSTR is returned I can call:
Application.Current.SetPrincipal(rstr);
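In context, that means the IssueCompleted handler from earlier can cache the token as soon as it arrives; a small sketch:

void client_IssueCompleted(object sender, IssueCompletedEventArgs e)
{
    _client.IssueCompleted -= client_IssueCompleted;

    if (e.Error != null)
        throw e.Error;

    // Cache the RSTR so the rest of the app can build a principal from it later.
    Application.Current.SetPrincipal(e.Result);

    button2.IsEnabled = true;
}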
Accessing the identity is a two-part process.
If I just want to get the identity and its claims I can call:
var principal = Application.Current.GetPrincipal("https://troymcclure/webapplication3/");
IClaimsIdentity ident = principal.Identity as IClaimsIdentity;
If I want to reuse the token as part of web service call I can get the token via:
var token = Application.Current.GetPrincipalToken("https://troymcclure/webapplication3/");
There is still quite a lot to do in order for this to be production ready code, but it does a pretty good job of solving all the problems I had with domain authentication on the Windows Phone 7 platform.
Every couple of weeks I start up Autoruns to see what new stuff has added itself to Windows startup and what not (screw you Adobe – you as a software company make me want to swear endlessly). Anyway, a few months ago around the time the latest version of Windows Live Messenger and its suite RTM’ed, I poked around to see if anything new was added. Turns out there was:

A new credential provider was added!

Interesting.
Not only that, it turns out a couple Winsock providers were added too:

I started poking around the DLLs and noticed that they don’t do much. Apparently you can use smart cards for WLID authentication. I suspect that’s what the credential provider and associated Winsock Provider is for, as well as part of WLID’s sign-on helper so credentials can be managed via the Credential Manager:

Ah well, nothing too exciting here.
Skip a few months and something occurred to me. Microsoft was able to solve part of the Claims puzzle. How do you bridge the gap between desktop application identities and web application identities? They did part of what CardSpace was unable to do because CardSpace as a whole didn’t really solve a problem people were facing. The problem Windows Live ran into was how do you share credentials between desktop and web applications without constantly asking for the credentials? I.e. how do you do Single Sign On…
This got me thinking.
What if I wanted to step this up a smidge and instead of logging into Windows Live Messenger with my credentials, why not log into Windows with my Windows Live Credentials?
Yes, Windows. I want to change this:

Question: What would this solve?
Answer: At present, nothing ground-breakingly new. For the sake of argument, lets look at how this would be done, and I’ll (hopefully) get to my point.
First off, we need to know how to modify the Windows logon screen. In older versions of Windows (versions older than 2003 R2) you had to do a lot of heavy lifting to make any changes to the screen. You had to write your own GINA which involved essentially creating your own UI. Talk about painful.
With the introduction of Vista, Microsoft changed the game when it came to custom credentials. Their reasoning was simple: they didn’t want you to muck up the basic look and feel. You had to follow their guidelines.
As a result we are left with something along the lines of these controls to play with:

The logon screen is now controlled by Credential Providers instead of the GINA. There are two providers built into Windows by default, one for Kerberos or NTLM authentication, and one for Smart Card authentication.
The architecture looks like:

When the Secure Attention Sequence (CTRL + ALT + DEL / SAS) is called, Winlogon switches to a different desktop and instantiates a new instance of LogonUI.exe. LogonUI enumerates all the credential provider DLLs from the registry and displays their controls on the desktop.
When I enter in my credentials they are serialized and supposed to be passed to the LSA.
Once the LSA has these credentials it can then do the authentication.
I say “supposed” to be passed to the LSA because there are two frames of thought here. The first frame is to handle authentication within the Credential Provider itself. This can cause problems later on down the road. I’ll explain why in the second frame.
The second frame of thought is when you need to use custom credentials, need to do some funky authentication, and then save the associated identity token somewhere. This becomes important when other applications need your identity.
You can accomplish this via what’s called an Authentication Package.

When a custom authentication package is created, it has to be designed in such a way that applications cannot access stored credentials directly. The applications must go through the pre-canned MSV1_0 package to receive a token.
To answer my earlier question about using Windows Live for authentication, we would need to develop two things: a Credential Provider, and a custom Authentication Package.
The logon process would work something like this:
- Select Live ID Credential Provider
- Type in Live ID and Password and submit
- Credential Provider passes serialized credential structure to Winlogon
- Winlogon passes credentials to LSA
- LSA passes credential to Custom Authentication Package
- Package connects to Live ID STS and requests a token with given credentials
- Token is returned
- Authentication Package validates the token and saves it to the local cache
- Package returns authentication result back up call stack to Winlogon
- Winlogon initializes user’s profile and desktop
I asked before: What would this solve?
This isn’t really a ground-breaking idea. I’ve just described a domain environment similar to what half a million companies have already done with Active Directory, except the credential store is Live ID.
On its own, we’ve just simplified the authentication process for every home user out there. No more disparate accounts across multiple machines. Passwords are in sync, and identity information is always up to date.
What if Live ID sets up a new service that lets you create access groups for things like home and friends and you can create file shares as appropriate. Then you can extend the Windows 7 Homegroup sharing based on those access groups.
Wait, they already have something like that with Skydrive (sans Homegroup stuff anyway).
Maybe they want to use a different token service.
Imagine if the user was able to select the “Federated User” credential provider that would give you a drop down box listing a few Security Token Services. Azure ACS can hook you up.
Imagine if one of these STS’s was something everyone used *cough* Facebook *cough*.
Imagine the STS was one that a lot of sites on the internet use *cough* Facebook *cough*.
Imagine if the associated protocol used by the STS and websites were modified slightly to add a custom set of headers sent to the browser. Maybe it looked like this:
Relying-Party-Accepting-Token-Type: urn:sometokentype:www.somests.com
Relying-Party-Token-Reply-Url: https://login.myawesomesite.com/auth
Finally, imagine if your browser was smart enough to intercept those headers and look up the user’s token, check if they matched the header ”Relying-Party-Accepting-Token-Type” and then POST the token to the given reply URL.
Hmm. We’ve just made the internet SSO capable.
Now to just move everyone’s cheese to get this done.
Patent Pending. 