Tamper-Evident Configuration Files in ASP.NET

A couple of weeks ago someone sent a message to one of our internal mailing lists. His question was pretty straightforward: how do you prevent modifications to an application's configuration file when the user has administrative rights on the machine?

There were a couple of responses, including mine, which was to cryptographically sign the configuration file with an asymmetric key. (For a primer on digital signing, take a look here.) Asymmetric signing is one possible way of signing a file. By signing it this way, an administrator could sign the configuration file before deploying the application, and all the application needs to validate the signature is the public key associated with the private key used to sign the file. This separates the private key from the application, preventing the configuration from being maliciously re-signed. It's similar in theory to how code-signing works.
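As a minimal sketch of the idea (plain RSA over raw bytes here, not the XML signature format used later in this post):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

byte[] config = Encoding.UTF8.GetBytes("<configuration>...</configuration>");

// The administrator signs with the private key...
using RSA signingKey = RSA.Create(2048);
byte[] signature = signingKey.SignData(
    config, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

// ...and the application validates with only the public key.
using RSA publicOnly = RSA.Create();
publicOnly.ImportParameters(signingKey.ExportParameters(false));

bool valid = publicOnly.VerifyData(
    config, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

// Any modification to the signed bytes invalidates the signature.
config[0] ^= 1;
bool tampered = publicOnly.VerifyData(
    config, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

Console.WriteLine($"valid={valid}, tampered accepted={tampered}");
```

Because the application only ever holds the public half, nothing on the server can produce a new valid signature.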

In the event that validation of the configuration file failed, the application would refuse to load, or would gracefully fail and exit the next time the file was checked (alternatively, the application could hold an exclusive lock on the configuration file so it couldn't be edited while running).

We are also spared the problem of inventing a signature format, because there is a well-respected XML signature standard; WCF uses this format to sign messages. For a good code walkthrough see Barry Dorrans' Beginning ASP.NET Security. More on the code later though.

Technically, this won't prevent changes to the file, but it will prevent the application from accepting those changes. It's kind of like those tamper-evident tags manufacturers stick on the enclosures of their equipment. The tag doesn't prevent someone from opening the thing, but they will get caught if someone checks it. You'll notice I didn't call them "tamper-resistant" tags.

Given this problem, I went one step further and asked myself: how would I do this with a web application? A well-informed ASP.NET developer might suggest using aspnet_regiis to encrypt the configuration file. Encrypting the configuration does protect against certain things, like reading the configuration data. However, there are a couple of problems with this approach.

  • If I’m an administrator on that server I can easily decrypt the file by calling aspnet_regiis
  • If I’ve found a way to exploit the site, I can potentially overwrite the contents of the file and make the application behave differently
  • The encryption/decryption keys need to be shared in web farms

Consider our goal: we want to prevent a user with administrative privileges from modifying the configuration, and encryption does not help us there. Signing the configuration will help, though (as an aside, for more protection you can encrypt the file and then sign it, but that's out of scope here), because the web application will stop working if a change is made that invalidates the signature.

Of course, there's one little problem. You can't stick the signature in the configuration file itself, because ASP.NET will complain about the foreign XML tag. I had assumed the original application in question used a custom XML file for its configuration, but in reality it doesn't, so this problem applies there too.

There are three possible solutions to this:

  • Create a custom ConfigurationSection class for the signature
  • Create a custom configuration file and handler, and intercept all calls to web.config
  • Stick the signature of the configuration file into a different file

The first option isn’t a bad idea, but I really didn’t want to muck about with the configuration classes. The second option is, well, pretty much a stupid idea in almost all cases, and I’m not entirely sure you can even intercept all calls to the configuration classes.

I went with option three.

The other file has two important parts: the signature of the web.config file, and a signature for itself. This second signature prevents someone from modifying the signature for the web.config file. Our code becomes a bit more complicated because now we need to validate both signatures.
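To make this concrete, the signature file might look roughly like this. This is a sketch inferred from the element names the code below reads; the real schema lives in the Constants class:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Configuration>
  <Files>
    <File>
      <FileName>Web.config</FileName>
      <FileSignature>
        <Signature xmlns="">...signature over Web.config...</Signature>
      </FileSignature>
    </File>
  </Files>
  <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
    ...signature over this file itself...
  </Signature>
</Configuration>
```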

This raises the question: where is the validation handled? It needs to happen early enough in the request lifecycle, so I decided to stick it into an HTTP Module, for the sake of modularity.

Hold it, you say. If the code is in an HTTP Module, then it needs to be added to the web.config. If you are adding it to the web.config, and protecting the web.config with this module, then removing said module from the web.config will prevent the validation from occurring.


There are two ways around this:

  • Add the validation call into Global.asax
  • Hard code the addition of the HTTP Module

It's very rare that I take the easy approach, so I've decided to hard-code the addition of the HTTP Module, because sticking the code into a module is cleaner.

In older versions of ASP.NET you had to resort to some pretty ugly hacks to load such a module, because the loading needs to happen very early in the startup of the web application. ASP.NET 4.0 added an assembly attribute that lets you call code almost immediately at startup:

[assembly: PreApplicationStartMethod(typeof(Syfuhs.Security.Web.Startup), "Go")]


Within the Startup class there is a public static method called Go(). This method calls Register() on an instance of my HttpModule. This module inherits from an abstract class called DynamicallyLoadedHttpModule, which implements IHttpModule. This class looks like:

public abstract class DynamicallyLoadedHttpModule : IHttpModule
{
    public void Register()
    {
        DynamicHttpApplication.RegisterModule(delegate(HttpApplication app) { return this; });
    }

    public abstract void Init(HttpApplication context);

    public abstract void Dispose();
}


The DynamicHttpApplication class inherits from HttpApplication and allows you to load HTTP modules in code. This code was not written by me. It was originally written by Nikhil Kothari:

using HttpModuleFactory = System.Func<System.Web.HttpApplication, System.Web.IHttpModule>;

public abstract class DynamicHttpApplication : HttpApplication
{
    private static readonly Collection<HttpModuleFactory> Factories
        = new Collection<HttpModuleFactory>();

    private static object _sync = new object();
    private static bool IsInitialized = false;

    private List<IHttpModule> modules;

    public override void Init()
    {
        base.Init();

        if (Factories.Count == 0)
            return;

        List<IHttpModule> dynamicModules = new List<IHttpModule>();

        lock (_sync)
        {
            if (Factories.Count == 0)
                return;

            foreach (HttpModuleFactory factory in Factories)
            {
                IHttpModule m = factory(this);

                if (m != null)
                {
                    m.Init(this);
                    dynamicModules.Add(m);
                }
            }
        }

        if (dynamicModules.Count != 0)
            modules = dynamicModules;

        IsInitialized = true;
    }

    public static void RegisterModule(HttpModuleFactory factory)
    {
        if (IsInitialized)
            throw new InvalidOperationException(Exceptions.CannotRegisterModuleLate);

        if (factory == null)
            throw new ArgumentNullException("factory");

        Factories.Add(factory);
    }

    public override void Dispose()
    {
        if (modules != null)
            modules.ForEach(m => m.Dispose());

        modules = null;

        base.Dispose();
        GC.SuppressFinalize(this);
    }
}


Finally, to get this all wired up we modify the Global.asax to inherit from DynamicHttpApplication:

public class Global : DynamicHttpApplication

{ … }

Like I said, you could just add the validation code into Global (but where’s the fun in that?)…

So, now that we've made it possible to add the HTTP Module, let's actually look at the module:

public sealed class SignedConfigurationHttpModule : DynamicallyLoadedHttpModule
{
    public override void Init(HttpApplication context)
    {
        if (context == null)
            throw new ArgumentNullException("context");

        context.BeginRequest += new EventHandler(context_BeginRequest);
        context.Error += new EventHandler(context_Error);
    }

    private void context_BeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        SignatureValidator validator
            = new SignatureValidator(app.Request.PhysicalApplicationPath);

        validator.ValidateConfigurationSignatures(
            CertificateLocator.LocateSigningCertificate());
    }

    private void context_Error(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        foreach (var exception in app.Context.AllErrors)
        {
            if (exception is XmlSignatureValidationFailedException)
            {
                // Maybe do something
                // Or don't...
                break;
            }
        }
    }

    public override void Dispose() { }
}


Nothing special here. Just hooking into the context.BeginRequest event so validation occurs on each request. There would be some performance impact as a result.

The core validation is contained within the SignatureValidator class, and there is a public method that we call to validate the signature file, ValidateConfigurationSignatures(…). This method accepts an X509Certificate2 to compare the signature against.

The signature schema we are using actually encodes the public key of the signing key pair into the signature element; however, we want to go one step further and make sure the file is signed by a particular certificate. This will prevent someone from modifying the configuration file and re-signing it with a different private key. Validation of the signature is not enough; we need to make sure it's signed by someone we trust.

The validator first validates the schema of the signature file. Is the XML well formed? Does the signature file conform to the schema we defined (the schema is defined in a Constants class)? Following that, it validates the signature of the signature file itself. Has that file been tampered with? Then it validates the signature of the web.config file. Has the web.config file been tampered with?

Before it can do all of this though, it needs to check to see if the signature file exists. The variable passed into the constructor is the physical path of the web application. The validator knows that the signature file should be in the App_Data folder within the root. This file needs to be here because the folder by default will not let you access anything in it, and we don’t want anyone downloading the file. The path is also hardcoded specifically so changes to the configuration cannot bypass the signature file validation.

Here is the validator:

internal sealed class SignatureValidator
{
    public SignatureValidator(string physicalApplicationPath)
    {
        this.physicalApplicationPath = physicalApplicationPath;
        this.signatureFilePath = Path.Combine(this.physicalApplicationPath,
            "App_Data\\Signature.xml");
    }

    private string physicalApplicationPath;
    private string signatureFilePath;

    public void ValidateConfigurationSignatures(X509Certificate2 cert)
    {
        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, this.signatureFilePath);

        if (cert == null)
            throw new ArgumentNullException("cert");

        if (cert.HasPrivateKey)
            throw new SecurityException(Exceptions.ValidationCertificateHasPrivateKey);

        if (!File.Exists(signatureFilePath))
            throw new SecurityException(Exceptions.CouldNotLoadSignatureFile);

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        doc.Load(signatureFilePath);

        ValidateXmlSchema(doc);

        CheckForUnsignedConfig(doc);

        if (!X509CertificateCompare.Compare(cert, ValidateSignature(doc)))
            throw new XmlSignatureValidationFailedException(
                Exceptions.SignatureFileNotSignedByExpectedCertificate);

        List<XmlSignature> signatures = ParseSignatures(doc);

        ValidateSignatures(signatures, cert);
    }

    private void CheckForUnsignedConfig(XmlDocument doc)
    {
        List<string> signedFiles = new List<string>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName = Path.Combine(this.physicalApplicationPath,
                file["FileName"].InnerText);

            signedFiles.Add(fileName.ToUpperInvariant());
        }

        CheckConfigFiles(signedFiles);
    }

    private void CheckConfigFiles(List<string> signedFiles)
    {
        foreach (string file in Directory.EnumerateFiles(
            this.physicalApplicationPath, "*.config", SearchOption.AllDirectories))
        {
            string path = Path.Combine(this.physicalApplicationPath, file);

            if (!signedFiles.Contains(path.ToUpperInvariant()))
                throw new XmlSignatureValidationFailedException(
                    string.Format(CultureInfo.CurrentCulture,
                        Exceptions.ConfigurationFileWithoutSignature, path));
        }
    }

    private void ValidateXmlSchema(XmlDocument doc)
    {
        using (StringReader fileReader = new StringReader(Constants.SignatureFileSchema))
        using (StringReader signatureReader = new StringReader(Constants.SignatureSchema))
        {
            XmlSchema fileSchema = XmlSchema.Read(fileReader, null);
            XmlSchema signatureSchema = XmlSchema.Read(signatureReader, null);

            doc.Schemas.Add(fileSchema);
            doc.Schemas.Add(signatureSchema);

            doc.Validate(Schemas_ValidationEventHandler);
        }
    }

    void Schemas_ValidationEventHandler(object sender, ValidationEventArgs e)
    {
        throw new XmlSignatureValidationFailedException(Exceptions.InvalidSchema, e.Exception);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument xml)
    {
        if (xml == null)
            throw new ArgumentNullException("xml");

        XmlElement signature = ExtractSignature(xml.DocumentElement);

        return ValidateSignature(xml, signature);
    }

    public static X509Certificate2 ValidateSignature(XmlDocument doc, XmlElement signature)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (signature == null)
            throw new ArgumentNullException("signature");

        X509Certificate2 signingCert = null;

        SignedXml signed = new SignedXml(doc);
        signed.LoadXml(signature);

        foreach (KeyInfoClause clause in signed.KeyInfo)
        {
            KeyInfoX509Data key = clause as KeyInfoX509Data;

            if (key == null || key.Certificates.Count != 1)
                continue;

            signingCert = (X509Certificate2)key.Certificates[0];
        }

        if (signingCert == null)
            throw new CryptographicException(Exceptions.SigningKeyNotFound);

        if (!signed.CheckSignature())
            throw new CryptographicException(Exceptions.SignatureValidationFailed);

        return signingCert;
    }

    private static void ValidateSignatures(List<XmlSignature> signatures, X509Certificate2 cert)
    {
        foreach (XmlSignature signature in signatures)
        {
            X509Certificate2 signingCert
                = ValidateSignature(signature.Document, signature.Signature);

            if (!X509CertificateCompare.Compare(cert, signingCert))
                throw new XmlSignatureValidationFailedException(
                    string.Format(CultureInfo.CurrentCulture,
                        Exceptions.SignatureForFileNotSignedByExpectedCertificate,
                        signature.FileName));
        }
    }

    private List<XmlSignature> ParseSignatures(XmlDocument doc)
    {
        List<XmlSignature> signatures = new List<XmlSignature>();

        foreach (XmlElement file in doc.GetElementsByTagName("File"))
        {
            string fileName
                = Path.Combine(this.physicalApplicationPath, file["FileName"].InnerText);

            Permissions.DemandFilePermission(FileIOPermissionAccess.Read, fileName);

            if (!File.Exists(fileName))
                throw new FileNotFoundException(
                    string.Format(CultureInfo.CurrentCulture, Exceptions.FileNotFound, fileName));

            XmlDocument fileDoc = new XmlDocument() { PreserveWhitespace = true };
            fileDoc.Load(fileName);

            XmlElement sig = file["FileSignature"] as XmlElement;

            signatures.Add(new XmlSignature()
            {
                FileName = fileName,
                Document = fileDoc,
                Signature = ExtractSignature(sig)
            });
        }

        return signatures;
    }

    private static XmlElement ExtractSignature(XmlElement xml)
    {
        XmlNodeList xmlSignatureNode = xml.GetElementsByTagName("Signature");

        if (xmlSignatureNode.Count <= 0)
            throw new CryptographicException(Exceptions.SignatureNotFound);

        return xmlSignatureNode[xmlSignatureNode.Count - 1] as XmlElement;
    }
}


You’ll notice there is a bit of functionality I didn’t mention. Checking that the web.config file hasn’t been modified isn’t enough. We also need to check if any *other* configuration file has been modified. It’s no good if you leave the root configuration file alone, but modify the <authorization> tag within the administration folder to allow anonymous access, right?

So there is code that looks through the site for any files with the "config" extension, and if a file isn't listed in the signature file, it throws an exception.
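A self-contained sketch of that check, using throwaway paths and the same case-insensitive comparison the validator uses:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Create a throwaway "site" with two config files.
string root = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
Directory.CreateDirectory(Path.Combine(root, "Admin"));

string webConfig = Path.Combine(root, "Web.config");
string adminConfig = Path.Combine(root, "Admin", "Web.config");
File.WriteAllText(webConfig, "<configuration/>");
File.WriteAllText(adminConfig, "<configuration/>");

// Only the root Web.config is listed in the (hypothetical) signature file.
var signed = new HashSet<string>(new[] { webConfig.ToUpperInvariant() });

// Flag every *.config under the root that was never signed.
List<string> unsigned = Directory
    .EnumerateFiles(root, "*.config", SearchOption.AllDirectories)
    .Where(path => !signed.Contains(path.ToUpperInvariant()))
    .ToList();

Console.WriteLine(unsigned.Count);  // 1: the Admin copy was not signed
```

The real validator throws an XmlSignatureValidationFailedException instead of returning a list, but the enumeration logic is the same.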

There is also a check done at the very beginning of the validation. If you pass in an X509Certificate2 with a private key, it will throw an exception. This is absolutely by design. You sign the file with the private key. You validate with the public key. If the private key is present during validation, that means you are not separating the keys, and all of this has been a huge waste of time because the private key is not protected. Oops.

Finally, it’s important to know how to sign the files. I’m not a fan of generating XML properly, partially because I’m lazy and partially because it’s a pain to do, so mind the StringBuilder:

public sealed class XmlSigner
{
    public XmlSigner(string appPath)
    {
        this.physicalApplicationPath = appPath;
    }

    string physicalApplicationPath;

    public XmlDocument SignFiles(string[] paths, X509Certificate2 cert)
    {
        if (paths == null || paths.Length == 0)
            throw new ArgumentNullException("paths");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentNullException("cert");

        XmlDocument doc = new XmlDocument() { PreserveWhitespace = true };
        StringBuilder sb = new StringBuilder();

        // The opening-tag appends here are reconstructed to match the closing
        // tags below and the element names the validator reads.
        sb.Append("<Configuration>");
        sb.Append("<Files>");

        foreach (string p in paths)
        {
            sb.Append("<File>");

            sb.AppendFormat("<FileName>{0}</FileName>",
                p.Replace(this.physicalApplicationPath, ""));

            sb.AppendFormat("<FileSignature><Signature xmlns=\"\">{0}</Signature></FileSignature>",
                SignFile(p, cert).InnerXml);

            sb.Append("</File>");
        }

        sb.Append("</Files>");
        sb.Append("</Configuration>");

        doc.LoadXml(sb.ToString());

        doc.DocumentElement.AppendChild(doc.ImportNode(SignXmlDocument(doc, cert), true));

        return doc;
    }

    public static XmlElement SignFile(string path, X509Certificate2 cert)
    {
        if (string.IsNullOrWhiteSpace(path))
            throw new ArgumentNullException("path");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);

        Permissions.DemandFilePermission(FileIOPermissionAccess.Read, path);

        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = true;
        doc.Load(path);

        return SignXmlDocument(doc, cert);
    }

    public static XmlElement SignXmlDocument(XmlDocument doc, X509Certificate2 cert)
    {
        if (doc == null)
            throw new ArgumentNullException("doc");

        if (cert == null || !cert.HasPrivateKey)
            throw new ArgumentException(Exceptions.CertificateDoesNotContainPrivateKey);

        SignedXml signed = new SignedXml(doc) { SigningKey = cert.PrivateKey };

        Reference reference = new Reference() { Uri = "" };

        XmlDsigC14NTransform transform = new XmlDsigC14NTransform();
        reference.AddTransform(transform);

        XmlDsigEnvelopedSignatureTransform envelope = new XmlDsigEnvelopedSignatureTransform();
        reference.AddTransform(envelope);

        signed.AddReference(reference);

        KeyInfo keyInfo = new KeyInfo();
        keyInfo.AddClause(new KeyInfoX509Data(cert));
        signed.KeyInfo = keyInfo;

        signed.ComputeSignature();

        XmlElement xmlSignature = signed.GetXml();

        return xmlSignature;
    }
}


To write this to a file you can call it like this:

XmlWriter writer = XmlWriter.Create(
    @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\App_Data\Signature.xml");

XmlSigner signer = new XmlSigner(Request.PhysicalApplicationPath);

XmlDocument xml = signer.SignFiles(
    new string[]
    {
        @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.config",
        @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.debug.config",
        @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Web.release.config",
        @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\Account\Web.config",
        @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\test.config"
    },
    new X509Certificate2(
        @"C:\Dev\Projects\Syfuhs.Security.Web\Syfuhs.Security.Web.WebTest\cert.pfx", "1"));

xml.WriteTo(writer);
writer.Flush();


Now, within this code you have to pass in an X509Certificate2 with a private key, otherwise you can't sign the files.

These processes should occur on different machines. The private key should never be on the server hosting the site. The basic steps for deployment would go something like:

1. Compile web application.
2. Configure site and configuration files on staging server.
3. Run application that signs the configuration and generates the signature file.
4. Drop the signature.xml file into the App_Data folder.
5. Deploy configured and signed application to production.

There is one final note (I think I've made that note a few times by now…) and that is the CertificateLocator class. At the moment it just returns an X509Certificate2 from a particular path on my file system. This isn't necessarily the best approach because it may be possible to overwrite that file. You should store the certificate in a safe place and make a secure call to get it; for instance, a web service call might make sense. If you have a Hardware Security Module (HSM) to store secret bits in, even better.

Concluding Bits

What have we accomplished by signing our configuration files? We add a degree of trust that our application hasn’t been compromised. In the event that the configuration has been modified, the application stops working. This could be from malicious intent, or careless administrators. This is a great way to prevent one-off changes to configuration files in web farms. It is also a great way to prevent customers from mucking up the configuration file you’ve deployed with your application.

This solution was designed in a way to mitigate quite a few attacks. An attacker cannot modify configuration files. An attacker cannot modify the signature file. An attacker cannot view the signature file. An attacker cannot remove the signature file. An attacker cannot remove the HTTP Module that validates the signature without changing the underlying code. And an attacker cannot change the underlying code, because it was compiled before being deployed.

Is it necessary to use on every deployment? No, probably not.

Does it go a little overboard with regard to complexity? Yeah, a little.

Does it protect against a real problem? Absolutely.

Unfortunately it also requires full trust.

Overall it’s a fairly robust solution and shows how you can mitigate certain types of risks seen in the real world.

And of course, it works with both WebForms and MVC.

You can download the full source here.

Protecting Against BEAST and its Man-in-the-Middle Attack on SSL/TLS 1.0

Last week some researchers announced they figured out how to exploit a vulnerability in the SSL 3.0/TLS 1.0 protocol.

This particular vulnerability has been known for a few years, and our friends on the OpenSSL project had written a paper discussing it in 2004. This has been a theoretical problem for a while, but nobody was really able to weaponize the vulnerability.

In theory these researchers have.

The vulnerability is only found in the encryption algorithms that support CBC, because of the nature of block ciphers (as opposed to stream ciphers); for more information read the OpenSSL paper above. The attack uses a method called plaintext recovery. Basically, if I know a particular value of the plaintext, I can mangle the encrypted data in such a way that makes it possible to guess the rest of the plaintext. The reason this stayed theoretical was that there wasn't an easy way to guess the plaintext. The researchers have potentially found an easier way to guess.
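A toy illustration of the underlying weakness (this is not the BEAST attack itself, just the determinism that predictable IVs create in CBC mode):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

using Aes aes = Aes.Create();
aes.Mode = CipherMode.CBC;
aes.Key = new byte[16];  // fixed key, for the demo only
aes.IV = new byte[16];   // a predictable IV, analogous to TLS 1.0's chained IVs

byte[] Encrypt(byte[] plaintext)
{
    using ICryptoTransform enc = aes.CreateEncryptor();
    return enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
}

byte[] guess = Encoding.ASCII.GetBytes("secret-cookie=42");
byte[] c1 = Encrypt(guess);
byte[] c2 = Encrypt(guess);

// Same plaintext + same key + predictable IV => identical ciphertext,
// which is what lets an attacker confirm a guessed plaintext block.
bool identical = c1.SequenceEqual(c2);
Console.WriteLine(identical);  // True
```

TLS 1.1 fixed this by giving every record a fresh, unpredictable IV, which is why it is not affected.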

Now, this vulnerability is only found under certain conditions:

  • You must be running SSL 3.0/TLS 1.0 (TLS 1.1 and 1.2 are not affected)
  • You must be using a block cipher for encryption

Without making this sound like FUD, it should be noted that this hasn’t necessarily become a problem. The security industry hasn’t really made a decision on whether this is a serious problem yet. It’s kind of a wait and see scenario.

Now, why is this a problem? Most web servers are still stuck on TLS 1.0. And by most, I really mean most. The reason for this is that most web browsers only support TLS 1.0.

You can thank Mozilla Network Security Services (NSS). This is the crypto library that Firefox and Chrome use, and NSS doesn't support TLS 1.1 or 1.2.

Chrome’s latest developer release now has a fix for this vulnerability, by switching to a stream cipher instead of block cipher.

Curiously Internet Explorer does support TLS 1.1 and 1.2 (and has since IE7). IIS 7 also supports it.

However, IIS only supports TLS v1 by default. You need to enable TLS 1.1 and 1.2.

It’s pretty straightforward, just add the following registry keys:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]

While you are at it, it might not be a bad idea to disable SSL v2 as well:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]

EDIT: I guess I should clarify that you should also disable TLS 1.0, otherwise the browser can request that TLS 1.0 be used instead.

Set DisabledByDefault=1 under TLS 1.0\Server.
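The keys above are just the containers; in practice SChannel also reads Enabled and DisabledByDefault DWORD values under each Client/Server subkey. A sketch of a .reg file covering both steps (double-check the value names against your Windows version):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server]
"DisabledByDefault"=dword:00000001
```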

Next, reboot the server.

If you want to automate the procedure for multiple servers, Derek Seaman has a great little PowerShell script that sets everything up for you. He also shows you how you can test to make sure it actually worked, which is nice.

Windows Live and Windows 8

So. I guess I wasn't the only one with this idea:

Sweet. :)

Earlier today at the Build conference, Microsoft announced it is creating a tighter integration between Windows 8 and Windows Live. More details to come when I download the bits later tonight.

The Importance of Elevating Privilege

The biggest detractor to Single Sign-On is the same thing that makes it so appealing: you only need to prove your identity once. This scares the hell out of some people, because if you can compromise a user's session in one application, it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.

  • You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
  • You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
  • At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
  • So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.

There are certainly ways to stop this, as part of the attack is a bit trivial. For instance, you could pop up an OK/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think of this at a high level.

The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.

This problem is similar to the problem many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems, because any running application could do whatever it pleased on the system. Malware was rampant, and worse, users were doing all-around stupid things because they didn't know what they were doing but had the permissions necessary to do it.

The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges, to re-authenticate and temporarily run with those higher privileges. The key here is that you are only temporarily running with higher privileges. However, security lost the argument, and Microsoft caved while developing Windows Vista, creating User Account Control (UAC). By default a user is an administrator, but they don't have administrative privileges: their user token is a stripped-down administrator token, so they have only non-administrative privileges. In order to take full advantage of the administrator token, a user has to elevate and request the full token temporarily. This is a stop-gap solution, though, because it's theoretically possible to circumvent UAC since the administrative token exists. It also doesn't require you to re-authenticate; you just have to approve the elevation.

As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.

Some web applications are requiring elevation. For instance consider online banking sites. When I sign in I have a default set of privileges. I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate myself by entering a private pin. So for instance I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.

There are a couple of ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims-Based Authentication and WIF.

First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter called wauth, which defines how authentication should happen. The values for it are mostly specific to each STS; however, there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are some of the most often used:

Authentication Type/Credential    Wauth Value
------------------------------    -----------
Password                          urn:oasis:names:tc:SAML:1.0:am:password
Kerberos                          urn:ietf:rfc:1510
TLS                               urn:ietf:rfc:2246
PKI/X509                          urn:oasis:names:tc:SAML:1.0:am:X509-PKI
Default                           urn:oasis:names:tc:SAML:1.0:am:unspecified

When you pass one of these values to the STS during the sign-in request, the STS should then request that particular type of credential. The wauth parameter also supports arbitrary values, so you can use whatever you like. We can therefore create a value that tells the STS we want to re-authenticate because of an elevation request.

All you have to do is redirect to the STS with the wauth parameter:
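A sketch of what building that redirect could look like; the STS and realm URLs are placeholders, and the wauth value is the custom one used in this post:

```csharp
using System;

// Both URLs are placeholders for your STS and relying party.
string stsUrl = "https://sts.example.com/";
string realm = "https://rp.example.com/";
string wauth = "urn:super:secure:elevation:method";

// Standard WS-Federation passive sign-in request, plus the custom wauth value.
string signInUrl = string.Format(
    "{0}?wa=wsignin1.0&wtrealm={1}&wauth={2}",
    stsUrl,
    Uri.EscapeDataString(realm),
    Uri.EscapeDataString(wauth));

Console.WriteLine(signInUrl);
// In the relying party you would then call Response.Redirect(signInUrl).
```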


Once the user has re-authenticated, you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:

Just add the claim to the output identity:

protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;

    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));

    // finish filling claims...

    return ident;
}

At that point the relying party can then check to see whether the method satisfies the request. You could write an extension method like:

public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}

And then have a bit of code to check:

var p = Thread.CurrentPrincipal as IClaimsPrincipal;
if (p != null && p.IsElevated())
{
    // do the privileged operation
}

This satisfies half the requirements for elevating privilege. We need to make it so the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP.  In Global.asax we could do something like:

void Application_Start(object sender, EventArgs e)
{
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived
        += new EventHandler<SessionSecurityTokenReceivedEventArgs>(SessionAuthenticationModule_SessionSecurityTokenReceived);
}

void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (e.SessionToken.ClaimsPrincipal.IsElevated())
    {
        SessionSecurityToken token = new SessionSecurityToken(
            e.SessionToken.ClaimsPrincipal,
            e.SessionToken.Context,
            e.SessionToken.ValidFrom,
            e.SessionToken.ValidFrom.AddMinutes(15));

        e.SessionToken = token;
    }
}

This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.

There are other places where this could occur, like within the STS itself; however, this value may need to be independent of the STS.

As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.

Dear Recipient

Got this email just now.  I was amused (mainly because it got past the spam filters)…

Dear Email user,

This message is from Administration center Maintenance Policy verified that your mailbox exceeds (10.9GB) its limit, you may be unable to receive new email or send mails, To re- set your SPACE on our database prior to maintain your IN BOX, You must Reply to this email by Confirming your account details below.


First Name:



Date Of Birth:

Failure to do this will immediately render your Web-email address deactivated from our database.


Admin Help Desk© 2011

And my Reply…

Dear Admin Help Desk© 2011,

Thank you for the warning that my SPACE on your database prior to maintain my IN BOX exceeds its limit. Please find the following information handy for future reference.

Surname: Jobs

First Name: Steve

User-Name: Jobbers1

Password: CenterOfTheUniverse

Date Of Birth: December 25th, 0001

If for whatever reason you cannot reset my SPACE please email me back at and I will be sure to help you.

Adjusting the Home Realm Discovery page in ADFS to support Email Addresses

Over on the Geneva forums a question was asked:

Does anyone have an example of how to change the HomeRealmDiscovery Page in ADFSv2 to accept an e-mail address in a text field and based upon that (actually the domain suffix) select the correct Claims/Identity Provider?

It's pretty easy to modify the HomeRealmDiscovery page, so I thought I'd give it a go.

Based on the question, two things need to be known: the email address and the home realm URI.  Then we need to translate the email address to a home realm URI and pass it on to ADFS.

This could be done a couple ways.  First, it could be done by keeping a list of email addresses and their related home realms, or a list of email domains and their related home realms.  For the sake of this being an example, let's do both.

I've created a simple SQL database with three tables:


Each entry in the EmailAddress and Domain tables has a pointer to the home realm URI (you can find the schema in the zip file below).

Then I created a new ADFS web project and added a new entity model to it:


From there I modified the HomeRealmDiscovery page to do the check:

// Copyright (c) Microsoft Corporation.  All rights reserved.

using System;

using Microsoft.IdentityServer.Web.Configuration;
using Microsoft.IdentityServer.Web.UI;
using AdfsHomeRealm.Data;
using System.Linq;

public partial class HomeRealmDiscovery : Microsoft.IdentityServer.Web.UI.HomeRealmDiscoveryPage
{
    protected void Page_Init(object sender, EventArgs e)
    {
    }

    protected void PassiveSignInButton_Click(object sender, EventArgs e)
    {
        string email = txtEmail.Text;

        if (string.IsNullOrWhiteSpace(email))
        {
            SetError("Please enter an email address");
            return;
        }

        try
        {
            SelectHomeRealm(FindHomeRealmByEmail(email));
        }
        catch (ApplicationException)
        {
            SetError("Cannot find home realm based on email address");
        }
    }

    private string FindHomeRealmByEmail(string email)
    {
        using (AdfsHomeRealmDiscoveryEntities en = new AdfsHomeRealmDiscoveryEntities())
        {
            var emailRealms = from e in en.EmailAddresses where e.EmailAddress1.Equals(email) select e;

            if (emailRealms.Any()) // email address exists
                return emailRealms.First().HomeRealm.HomeRealmUri;

            // email address does not exist; try the domain
            string domain = ParseDomain(email);

            var domainRealms = from d in en.Domains where d.DomainAddress.Equals(domain) select d;

            if (domainRealms.Any()) // domain exists
                return domainRealms.First().HomeRealm.HomeRealmUri;

            // neither email nor domain exists
            throw new ApplicationException();
        }
    }

    private string ParseDomain(string email)
    {
        if (!email.Contains("@"))
            return email;

        return email.Substring(email.IndexOf("@") + 1);
    }

    private void SetError(string p)
    {
        lblError.Text = p;
    }
}

If you compare this to the original code, there were some changes.  I removed the code that loaded the original home realm drop-down list, and removed the code that chose the home realm based on the drop-down list's selected value.

You can find my code here:

Part 5: Incident Response Management with Team Foundation Server

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part five of the series, unedited for all to enjoy.

There are only a few certainties in life: death, taxes, me getting this post in late, and one of your applications getting attacked.  Throughout the lifetime of an application it will undergo a barrage of attacks – especially if it's public facing.  If you followed the SDL, tested properly, coded securely, and managed well, you will have gotten most of the bugs out.


There will always be bugs in production code, and there will very likely always be a security bug in production code.  Further, if there is a security bug in production code, an attacker will probably find it.  Perhaps the best metric for security is along the lines of mean-time-to-failure.  Or rather, mean-time-to-breach.  All safes for storing valuables are rated by how long they can withstand certain types of attacks – not whether they can, but how long they can.  There is no one single thing we can do to prevent an attack, and we cannot prevent all attacks.  It's just not in the cards.  So, it stands to reason that we should prepare for something bad happening.  The final stage of the SDL requires that an Incident Response Plan is created.  This is the procedure to follow in the event of a vulnerability being found.

In security parlance, there are protocols and procedures.  The majority of the SDL is all protocol.  A protocol is the usual way to do things.  It's the list of steps you follow to accomplish a task that is associated with a normal working condition, e.g. fuzzing a file parser during development.  You follow a set of steps to fuzz something, and you really don't deviate from those steps.  A procedure is when something is different.  A procedure is reactive.  How you respond to a security breach is a procedure.  It's a set of steps, but it's not a normal condition.

An Incident Response Plan (IRP - the procedure) serves a few functions:

  • It has the list of people to contact in the event of the emergency
  • It is the actual list of steps to follow when bad things happen
  • It includes references to other procedures for code written by other teams

This may be one of the more painful parts of the SDL, because it's mostly process over anything else.  Luckily there is a wonderful product by Microsoft that helps: Team Foundation Server.  For those of you who just cringed, bear with me.

Microsoft released the MSF-Agile plus Security Development Lifecycle Process Template for VS 2010 (it also takes second place in the longest product name contest) to make the entire SDL process easier for developers.  There is the SDL Process Template for 2008 as well.

It's useful for each stage of the SDL, but we want to take a look at how it can help with managing the IRP.  First though, let's define the IRP.

Emergency Contacts (Incident Response Team)

The contacts usually need to be available 24 hours a day, seven days a week.  These people have a range of functions depending on the severity of the breach:

  • Developer – Someone to comprehend and/or triage the problem
  • Tester – Someone to test and verify any changes
  • Manager – Someone to approve changes that need to be made
  • Marketing/PR – Someone to make a public announcement (if necessary)

Each plan is different for each application and for each organization, so there may be ancillary people involved as well (perhaps an end user to verify data).  Each person isn't necessarily required at each stage of the response, but they still need to be available in the event that something changes.

The Incident Response Plan

Over the years I've written a few Incident Response Plans (never mind that most times I was asked to do it after an attack – you WILL go out and create one after reading this, right?).  Each plan was unique in its own way, but there were commonalities as well.

Each plan should provide the steps to answer a few questions about the vulnerability:

  • How was the vulnerability disclosed?  Did someone attack, or did someone let you know about it?
  • Was the vulnerability found in something you host, or an application that your customers host?
  • Is it an ongoing attack?
  • What was breached?
  • How do you notify your customers about the vulnerability?
  • When do you notify them about the vulnerability?

And each plan should provide the steps to answer a few questions about the fix:

  • If it's an ongoing attack, how do you stop it?
  • How do you test the fix?
  • How do you deploy the fix?
  • How do you notify the public about the fix?

Some of these questions may not be answerable immediately – you may need to wait until a postmortem to answer them. 

Here is an example of a high-level IRP:

  • The Attack – It's already happened
  • Evaluate the state of the systems or products to determine the extent of the vulnerability
    • What was breached?
    • What is the vulnerability?
  • Define the first step to mitigate the threat
    • How do you stop the threat?
    • Design the bug fix
  • Isolate the vulnerabilities if possible
    • Disconnect targeted machine from network
    • Complete forensic backup of system
    • Turn off the targeted machine if hosted
  • Initiate the mitigation plan
    • Develop the bug fix
    • Test the bug fix
  • Alert the necessary people
    • Get Marketing/PR to inform clients of breach (don't forget to tell them about the fix too!)
    • If necessary, inform the proper legal/governmental bodies
  • Deploy any fixes
    • Rebuild any affected systems
    • Deploy patch(es)
    • Reconnect to network
  • Follow up with legal/governmental bodies if prosecution of the attacker is necessary
    • Analyze forensic backups of systems
  • Do a postmortem of the attack/vulnerability
    • What went wrong?
    • Why did it go wrong?
    • What went right?
    • Why did it go right?
    • How can this class of attack be mitigated in the future?
    • Are there any other products/systems that would be affected by the same class?

Some of these procedures can be done in parallel, hence the need for people to be on call.

Team Foundation Server

So now that we have a basic plan created, we should make it easy to implement.  The SDL Process Template (mentioned above) creates a set of task lists and bug types within TFS projects that are used to define things like security bugs, SDL-specific tasks, exit criteria, etc.


While these can (and should) be used throughout the lifetime of the project, they can also be used to map out the procedures in the IRP.  In fact, a new project creates an entry in Open SDL Tasks to create an Incident Response Team:


A bug works well to manage incident responses.


Once a bug is created we can link a new task with the bug.


And then we can assign a user to the task:


Each bug and task are now visible in the Security Exit Criteria query:


Once all the items in the Exit Criteria have been met, you can release the patch.


Security is a funny thing. A lot of times you don't think about it until it's too late. Other times you follow the SDL completely, and you still get attacked.

In the last four posts we looked at writing secure software from a pretty high level.  We touched on common vulnerabilities and their mitigations, tools you can use to test for vulnerabilities, some thoughts on architecting the application securely, and finally we looked at how to respond to problems after release.  By no means will these posts automatically make you write secure code, but hopefully they have given you guidance to start understanding what goes into writing secure code.  It's a lot of work, and sometimes it's hard work.

Finally, there is an idea I like to put into the first section of every Incident Response Plan I've written, and I think it applies to writing software securely in general:

Something bad just happened.  This is not the time to panic, nor the time to place blame.  Your goal is to make sure the affected system or application is secured and in working order, and your customers are protected.

Something bad may not have happened yet, and it may not in the future, but it's important to plan accordingly because your goal should be to protect the application, the system, and most importantly, the customer.

Part 4: Secure Architecture

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part four of the series, unedited for all to enjoy.

Before you start to build an application you need to start with a design.  In the last article I stated that bugs introduced at this stage of the process are the most expensive to fix throughout the lifetime of the project.  For this reason we need a good foundation for security; otherwise it'll be expensive (not to mention a pain) to fix later.  Keep in mind this isn't an agile-versus-the-world discussion, because no matter what, at some point you still need to have a design of the application.

Before we can design an application, we need to know that there are two basic types of code/modules:

  • That which is security related; e.g. authentication, crypto, etc.
  • That which is not security related (but should still be secure nonetheless); e.g. CRUD operations.

They can be described as privileged or unprivileged.


Whenever a piece of code is written that deals with things like authentication or cryptography, we have to be very careful with it because it should be part of the secure foundation of the application. Privileged code is the authoritative core of the application.  Careless design here will render your application highly vulnerable.  Needless to say, we don't want that.  We want a foundation we can trust.  So, we have three options:

  • Don't write secure code, and be plagued with potential security vulnerabilities.  Less cost, but you cover the risk.
  • Write secure code, test it, have it verified by an outside source.  More cost, you still cover the risk.
  • Make someone else write the secure code.  Range of cost, but you don't cover the risk.

In general, from a cost/risk perspective, our costs and risks decrease as we move from the top of the list to the bottom.  This should therefore be a no-brainer: DO NOT BUILD YOUR OWN PRIVILEGED SECURITY MODULES.  Do not invent a new way of doing things if you don't need to, and do not rewrite modules that have already been vetted by security experts.  This may sound harsh, but seriously, don't.  If you think you might need to, stop thinking.  If you still think you need to, contact a security expert. PLEASE!

This applies to both coding and architecture.  In part 2 we did not come up with a novel way of protecting our inputs, we used well known libraries or methods.  Well now we want to apply this to the application architecture. 


Let's start with authentication.

A lot of times an application has a need for user authentication, but its core function has nothing to do with user authentication.  Yes, you might need to authenticate users for your mortgage calculator, but the core function of the application is calculating mortgages, and that has very little to do with users.  So why would you put that application in charge of authenticating users?  It seems like a fairly simple argument, but whenever you let your application use something like a SqlMembershipProvider you are letting the application manage authentication.  Not only that, you are letting the application manage the entire identity for the user.  How much of that identity information is duplicated across multiple databases?  Is this really the right way to do things?  Probably not.

From an architectural perspective, we want to create an abstract relationship between the identity of the user and the application.  Everything this application needs to know about this user is part of this identity, and (for the most part) the application is not an authority on any of this information because it's not the job of the application to be the authority.

Let's think about this another way.

Imagine for a moment that you want to get a beer at the bar. In theory the bartender should ask you for proof of age. How do you prove it? Well, one option is to have the bartender cut you in half and count the number of rings, but there could be some problems with that. Another option is to write your birthday on a piece of paper, which the bartender either accepts or rejects. The third option is to go to the government, get an ID card, and then present the ID to the bartender.

Some may have laughed at the idea of just writing your birthday on a piece of paper, but this is essentially what is happening when you are authenticating users within your application because it is up to the bartender (or your application) to trust the piece of paper. However, we trust the government's assertion that the birthday on the ID is valid, and the ID is for the person requesting the drink.  The bartender knows nothing about you except your date of birth because that's all the bartender needs to know.  Now, the bartender could store information that they think is important to them, like your favorite drink, but the government doesn't care (as it isn't the authoritative source), so the bartender stores that information in his own way.

Now this raises the question: how do you prove your identity/age to the government, or how do you authenticate against this external service?  Frankly, it doesn't matter, because that's the core function of the external service, not our application.  Our application just needs to trust that the assertion is valid, and that the authentication mechanism is secure.

In developer speak, this is called Claims Based Authentication.  A claim is an arbitrary piece of information about an identity, such as age, and is bundled into a collection of claims, to be part of a token.  A Security Token Service (STS) generates the token, and our application consumes it.  It is up to the STS to handle authentication.  Both Claims Based Authentication and the Kerberos Protocol are built around the same model, although they use different terms.  If you are looking for examples, Windows Live/Hotmail use Claims via the WS-Federation protocol.  Google, Facebook, and Twitter use Claims via the OAuth protocol.  Claims are everywhere.

Alright, less talking, more diagramming:


The process goes something like this:

  • Go to STS and authenticate (this is usually a web page redirect + the user entering their credentials)
  • The STS tells the user's browser to POST the token to the application
  • The application verifies the token and verifies whether it trusts the STS
  • The Application consumes the token and uses the claims as it sees fit

Now we get back to asking: how the heck does the STS handle authentication?  The answer is that it depends (ah, the consultant's answer).  The best case scenario is that you use an STS and identity store that already exist.  If you are in an intranet scenario, use Active Directory and Active Directory Federation Services (a free STS for Active Directory).  If your application is on the internet, use something like Live ID or Google ID, or even Facebook, simplified with Windows Azure Access Control Services.  If you are really in a bind and need to create your own STS, you can do so with the Windows Identity Foundation (WIF).  In fact, WIF is the identity library in the diagram above.  Making a web application claims-aware involves a process called federation.  With WIF it's really easy to do.
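Much of that federation work with WIF is configuration rather than code. As a hedged sketch (the issuer and realm URLs are placeholders, and a real application would also configure certificate validation and issuer name registries), the relevant section of web.config might look something like:

```xml
<microsoft.identityModel>
  <service>
    <audienceUris>
      <add value="https://rp.example.com/" />
    </audienceUris>
    <federatedAuthentication>
      <!-- Redirect unauthenticated users to the STS automatically -->
      <wsFederation passiveRedirectEnabled="true"
                    issuer="https://sts.example.com/"
                    realm="https://rp.example.com/"
                    requireHttps="true" />
      <cookieHandler requireSsl="true" />
    </federatedAuthentication>
  </service>
</microsoft.identityModel>
```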

Accessing claims within the token is straightforward because you are only accessing a single object, the identity within the CurrentPrincipal:

private static TimeSpan GetAge()
{
    IClaimsIdentity ident = Thread.CurrentPrincipal.Identity as IClaimsIdentity;

    if (ident == null)
        throw new ApplicationException("Isn't a claims based identity");

    var dobClaims = ident.Claims.Where(c => c.ClaimType == ClaimTypes.DateOfBirth);

    if (!dobClaims.Any())
        throw new ApplicationException("There are no date of birth claims");

    string dob = dobClaims.First().Value;

    TimeSpan age = DateTime.Now - DateTime.Parse(dob);

    return age;
}

There is a secondary benefit to Claims Based Authentication: you can also use it for authorization.  WIF supports the concept of a ClaimsAuthorizationManager, which you can use to authorize access to site resources.  Instead of writing your own authorization module, you simply define the rules for access, which is very much a business problem, not a technical one.

Once authentication and authorization are dealt with, the two final architectural nightmares revolve around privacy and cryptography.


Privacy is the control of Personally Identifiable Information (PII), which is defined as anything you can use to personally identify someone (good definition, huh?).  This can include information like SIN numbers, addresses, phone numbers, etc.  The easiest solution is to simply not use the information.  Don't ask for it and don't store it anywhere.  Since this isn't always possible, the goal should be to use (and request) as little as possible.  Once you have no more uses for the information, delete it.

This is a highly domain-specific problem and it can't be solved in a general discussion on architecture and design.  Microsoft Research has an interesting solution to this problem by using a new language designed specifically for defining the privacy policies for an application:

Preferences and policies are specified in terms of granted rights and required obligations, expressed as assertions and queries in an instance of SecPAL (a language originally developed for decentralized authorization). This paper further presents a formal definition of satisfaction between a policy and a preference, and a satisfaction checking algorithm. Based on the latter, a protocol is described for disclosing PIIs between users and services, as well as between third-party services.

Privacy is also a measure of access to information in a system.  Authentication and authorization are a core component of proper privacy controls.  There needs to be access control on user information.  Further, access to this information needs to be audited.  Anytime someone reads, updates, or deletes personal information, it should be recorded somewhere for review later.  There are quite a number of logging frameworks available, such as log4net or ELMAH.
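As a hedged sketch of what such an audit trail might look like with log4net (the logger name, repository class, and data-access helper are illustrative assumptions, not part of any prescribed API):

```csharp
using log4net;

public class CustomerRepository
{
    // Dedicated audit logger; its appender/output is configured in log4net config.
    private static readonly ILog AuditLog = LogManager.GetLogger("AuditLog");

    public Customer GetCustomer(int customerId, string requestingUser)
    {
        // Record every read of personal information for later review.
        AuditLog.InfoFormat("User {0} read PII for customer {1}", requestingUser, customerId);

        return LoadCustomer(customerId); // hypothetical data-access call
    }

    private Customer LoadCustomer(int customerId)
    {
        // ... actual data access elided ...
        return null;
    }
}

public class Customer { }
```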


Finally there is cryptography.

For the love of all things holy, do not do any custom crypto work.  Rely on publicly vetted libraries like Bouncy Castle and formats like OpenPGP.  There are special circles of hell devoted to those who try to write their own crypto algorithms for production systems.  Actually, this is true for anything security related.

  • Be aware of how you are storing your private keys.
  • Don't store them in the application as magic strings, or store them with the application at all.
  • If possible, store them in a Hardware Security Module (HSM).
  • Make sure you have proper access control policies for the private keys.
  • Centralize all crypto functions so different modules aren't using their own implementations.

Finally, if you have to write custom encryption wrappers, make sure your code is capable of switching encryption algorithms without requiring recompilation.  The .NET platform makes this easy: you can specify the algorithm by name as a string:

public static byte[] SymmetricEncrypt(byte[] plainText, byte[] initVector, byte[] keyBytes)
{
    if (plainText == null || plainText.Length == 0)
        throw new ArgumentNullException("plainText");

    if (initVector == null || initVector.Length == 0)
        throw new ArgumentNullException("initVector");

    if (keyBytes == null || keyBytes.Length == 0)
        throw new ArgumentNullException("keyBytes");

    using (SymmetricAlgorithm symmetricKey = SymmetricAlgorithm.Create("algorithm")) // e.g.: 'AES'
    {
        return CryptoTransform(plainText, symmetricKey.CreateEncryptor(keyBytes, initVector));
    }
}

private static byte[] CryptoTransform(byte[] payload, ICryptoTransform transform)
{
    using (MemoryStream memoryStream = new MemoryStream())
    using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, CryptoStreamMode.Write))
    {
        cryptoStream.Write(payload, 0, payload.Length);
        cryptoStream.FlushFinalBlock();
        return memoryStream.ToArray();
    }
}

Microsoft provides a list of all supported algorithms, as well as how to specify new algorithms for future use.
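As a hedged illustration of that flexibility (the appSettings key name here is made up for the example), the algorithm name could come from configuration rather than being hard-coded, so swapping algorithms is a config change instead of a recompile:

```csharp
using System.Configuration;
using System.Security.Cryptography;

// Hypothetical appSettings entry: <add key="EncryptionAlgorithm" value="AES" />
string algorithmName = ConfigurationManager.AppSettings["EncryptionAlgorithm"];

using (SymmetricAlgorithm algorithm = SymmetricAlgorithm.Create(algorithmName))
{
    // use algorithm.CreateEncryptor(...) exactly as the wrapper above does
}
```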

By following these design guidelines you should have a fairly secure foundation for your application.  Now let's look at unprivileged modules.


In Part 2 there was a single, all-encompassing, hopefully self-evident solution to most of the vulnerabilities: sanitize your inputs.  Most vulnerabilities, one way or another, are the result of bad input.  This is therefore going to be a very short section.

Don't let input touch queries directly.  Build business objects around data and encode any strings.  Parse all incoming data.  Fail gracefully on bad input.
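A hedged sketch of those two rules in practice (the table, column names, and method are placeholders): parameterize the query so input never touches the SQL text, and encode the value before it touches markup.

```csharp
using System.Data.SqlClient;
using System.Web;

public static class UserLookup
{
    // Input goes into a parameter, never concatenated into the query;
    // output is HTML-encoded before rendering.
    public static string FindUserHtml(SqlConnection connection, string rawName)
    {
        using (SqlCommand cmd = new SqlCommand(
            "SELECT DisplayName FROM Users WHERE UserName = @name", connection))
        {
            cmd.Parameters.AddWithValue("@name", rawName);

            object result = cmd.ExecuteScalar();

            return result == null ? null : HttpUtility.HtmlEncode((string)result);
        }
    }
}
```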

Properly lock down access to resources through authorization mechanisms in privileged modules, and audit all authorization requests.

If encryption is required, call into a privileged module.

Finally, validate the need for SSL.  If necessary, force it.
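One way to force SSL, sketched here as an assumption rather than the only approach, is a Global.asax handler that redirects any non-HTTPS request:

```csharp
using System;
using System.Web;

// Hedged sketch: redirect plain-HTTP requests to their HTTPS equivalent.
// Port handling and exclusions (health checks, etc.) are omitted.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (!Request.IsSecureConnection)
    {
        UriBuilder secure = new UriBuilder(Request.Url)
        {
            Scheme = Uri.UriSchemeHttps,
            Port = 443
        };

        Response.Redirect(secure.Uri.AbsoluteUri);
    }
}
```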

Final Thoughts

Bugs introduced during this phase of development are the most costly and hardest to fix, which makes design and architecture the most critical steps in making sure the application isn't vulnerable to attack.

Throughout the first four articles in this series we've looked at how to develop a secure application.  In the last article, we will look at how to respond to threats and mitigate the damage done.

Part 3: Secure Design and Analysis in Visual Studio 2010

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part three of the series, unedited for all to enjoy.

Every once in a while someone sends me a message along the lines of

Hey Steve, got any recommendations on tools for testing security in our applications?

It's a pretty straightforward question, but aside from my usual response of "well, some people like to call me a tool…" I find it difficult to answer because a lot of times people are looking for something that will solve all of their security problems.  Or more likely, solve all the problems they know they don't know about because, well, as explained in the previous article, security is hard to get right.  There are a lot of things we don't necessarily think about when building applications.

Last time we only touched on four out of ten types of vulnerabilities that can be mitigated through the use of frameworks.  That means that the other six need to be dealt with another way.  Let's take a look at what we have left:

  • Injection
  • Cross-Site Scripting (XSS)
  • Broken Authentication and Session Management
  • Insecure Direct Object References
  • Cross-Site Request Forgery (CSRF)
  • Security Misconfiguration
  • Insecure Cryptographic Storage
  • Failure to Restrict URL Access
  • Insufficient Transport Layer Protection
  • Unvalidated Redirects and Forwards

None of these can actually be fixed by tools either.  Well, that's a bit of a downer.  Let me rephrase that a bit: these vulnerabilities cannot be fixed by tools, but some can be found by tools.  This brings up an interesting point: there is a major difference between identifying vulnerabilities with tools and fixing vulnerabilities with tools.  We fix vulnerabilities by writing secure code, but we can find vulnerabilities with the use of tools.

Remember what I said in the first article (paraphrased): the SDL is about defense in depth.  This is the concept of providing security measures at multiple layers of an application, so that if one measure fails, another is still in place to protect the application or data.  It stands to reason, then, that there is no one single thing we can do to secure our applications, and by extension, no one single tool that can find all of our vulnerabilities.

There are different types of tools to use at different points in the development lifecycle. In this article we will look at tools we can use within three of the seven stages of the lifecycle: design, implementation, and verification:


Within these stages we will look at some of their processes, and some tools to simplify the processes.


Next to training, design is probably the most critical stage of developing a secure application because bugs that show up here are the most expensive to fix throughout the lifetime of the project.

Once we've defined our secure design specifications, e.g. the privacy or cryptographic requirements, we need to create a threat model that defines all the different ways the application can be attacked.  It's important for us to create such a model because an attacker will absolutely create one, and we don't want to be caught with our pants down.  This usually goes hand in hand with architecting the application, as changes to either will usually affect the other.  Here is a simple threat model:


It shows the flow of data to and from different pieces of the overall system, with a security boundary (red dotted line) separating the user's access and the process. This boundary could, for example, show the endpoint of a web service. 

However, this is only half of it.  We need to actually define the threat.  This is done through what's called the STRIDE approach.  STRIDE is an acronym that is defined as:

Name                    Security Property  Huh?
Spoofing                Authentication     Are you really who you say you are?
Tampering               Integrity          Has this data been compromised or modified without knowledge?
Repudiation             Non-repudiation    The ability to prove (or disprove) the validity of something like a transaction.
Information Disclosure  Confidentiality    Are you only seeing data that you are allowed to see?
Denial of Service       Availability       Can you access the data or service reliably?
Elevation of Privilege  Authorization      Are you allowed to actually do something?

We then analyze each point on the threat model for STRIDE.  For instance:

        Spoofing  Tampering  Repudiation  Info Disclosure  DoS    Elevation
  Data  True      True       True         True             False  True

It's a bit of a contrived model because it's fairly vague, but it makes us ask these questions:

  • Is it possible to spoof a user when modifying data?  Yes: There is no authentication mechanism.
  • Is it possible to tamper with the data? Yes: the data can be modified directly.
  • Can you prove the change was done by someone in particular? No: there is no audit trail.
  • Can information be leaked? Yes: you can read the data directly.
  • Can you disrupt service of the application? No: the data is always available (actually, this isn't well described – a big problem with threat models which is a discussion for another time).
  • Can you access more than you are supposed to? Yes: there is no authorization mechanism.

For more information you can check out the Patterns and Practices Team article on creating a threat model.

This can get tiresome very quickly.  Thankfully, Microsoft came out with a tool called the SDL Threat Modeling Tool.  You start with a drawing board to design your model (like the one shown above), and then you define its STRIDE characteristics:


This is basically a fill-in-the-blanks grid that opens up into a form:


Once you create the model, the tool will auto-generate a few well-known characteristics for the common types of interactions between elements, and provide a set of useful questions to ask about the less common interactions.  It's at this point that we can start to get a feel for where the application's weaknesses exist, or will exist.  If we compare this to our six vulnerabilities above, we've done a good job of finding a couple: we've already come across an authentication problem and, potentially, a problem with direct access to data.  In the next article we will look at ways of fixing these problems.

After a model has been created we need to define the attack surface of the application, and then figure out how to reduce it.  This is a fancy way of saying that we need to know all the different ways other things can interact with our application.  This could be a set of APIs, or generic endpoints like port 80 for HTTP:



Attack surface could also include changes in permissions to core Windows components.  Essentially, we need to figure out the changes our application introduces to the known state of the computer it's running on.  Aside from analyzing the threat model, there is no easy way to do this before the application is built.  Don't fret just yet though, because there are quite a few tools available to us during the verification phase, which we will discuss in a moment.  Before we do that, though, we need to actually write the code.


Visual Studio 2010 has some pretty useful features that help with writing secure code. First up are the code analysis tools:


There is a rule set specifically for security:


If you open it you are given a list of 51 rules to validate whenever you run the analysis. These cover a few of the OWASP Top 10, as well as some .NET specifics.

When we run the analysis we are given a set of warnings:


They are a good sanity check whenever you build the application, or check it into source control.

In prior versions of Visual Studio you had to run FxCop separately to analyze your application, but Visual Studio 2010 calls into FxCop directly.  These rules were migrated from CAT.NET, a plugin for Visual Studio 2008.  A V2 of CAT.NET is in beta, and will hopefully be released shortly.

If you are writing unmanaged code, there are a couple of extras that are available to you too.

One major source of security bugs in unmanaged code is a well-known set of functions that manipulate memory and are easy to call incorrectly.  These are collectively called the banned functions, and Microsoft has released a header file (banned.h) that deprecates them:

#pragma deprecated (_mbscpy, _mbccpy)
#pragma deprecated (strcatA, strcatW, _mbscat, StrCatBuff, StrCatBuffA, StrCatBuffW, StrCatChainW, _tccat, _mbccat)

Microsoft also released an ATL template that allows you to restrict the domains or security zones in which your ActiveX control can execute.

Finally, there is also a set of rules that you can run for unmanaged code too.  You can enable these to run on build:


These tools should be run often, and the warnings should be fixed as they appear because when we get to verification, things can get ugly.


In theory, if we have followed the guidelines for the phases of the SDL, verification of the security of the application should be fairly tame.  In the previous section I said things can get ugly; what gives?  Well, verification is sort of the Rinse-and-Repeat phase.  It's testing.  We write code, we test it, we get someone else to test it, we fix the bugs, and repeat.

The SDL has certain requirements for testing.  If we don't meet these requirements, it gets ugly.  Therefore we want to get as close as possible to secure code during the implementation phase.  For instance, if you are writing a file format parser, you have to run it through 100,000 different files of varying integrity.  If something catastrophically bad happens on one of these malformed files (which means it does anything other than fail gracefully), you need to fix the bug and re-run the 100,000 files, preferably with a mix of new files as well.  This may not seem too bad, until you realize how long it takes to process 100,000 files.  Imagine it takes 1 second to process a single file:

100,000 files * 1 second = 100,000 seconds

100,000 seconds / 60 seconds = ~1667 minutes

1667 minutes / 60 minutes = ~28 hours

This is a time-consuming process.  Luckily, Microsoft has provided some tools to help.  First up is MiniFuzz:


Fuzzing is the process of taking a good copy of a file and manipulating its bits in different ways; some random, some specific.  Each mutated file is then passed through your custom parser… 100,000 times.  To use the tool, set the path to your application in the "process to fuzz" path, and point the "template files" path at a set of files to fuzz.  MiniFuzz will go through each file in the templates folder, randomly manipulate it based on the aggressiveness setting, and pass it to the process.

Regular expressions also run into a similar testing requirement, so there is a Regex Fuzzer too.

An often-overlooked part of the development process is the compiler.  Sometimes we compile our applications with the wrong settings.  The BinScope Binary Analyzer can help here: it verifies a number of things like compiler versions, compiler/linker flags, proper ATL headers, use of strongly-named assemblies, etc.  You can find more information on these tools on the SDL Tools site.

Finally, we get back to attack surface analysis.  Remember how I said there aren't really any tools to help with analysis during the design phase?  Luckily, there is a pretty useful tool available to us during the verification phase, aptly named the Attack Surface Analyzer.  It's still in beta, but it works well.

The analyzer goes through a set of tests and collects data on different aspects of the operating system:


The analyzer is run a couple of times: once before your application is installed or deployed, and again after.  It then does a delta on all of the data it collected and returns a report showing the changes.

The goal is to reduce attack surface. Therefore we can use the report as a baseline to modify the default settings to reduce the number of changes.


It turns out that our friends on the SDL team released a new tool just minutes ago: the Web Application Configuration Analyzer v2.  So while not technically a new tool, it's a new version of a tool.  Cool!  Let's take a look:


It's essentially a rules engine that checks for certain conditions in the web.config files for your applications.  It works by looking at all the web sites and web applications in IIS and scans through the web.config files for each one.

The best part is that you can scan multiple machines at once, so you know your entire farm is running with the same configuration files.  It doesn't take very long to do the scan of a single system, and when you are done you can view the results:


It's interesting to note that this was a scan of my development machine (don't ask why it's called TroyMcClure), and there are quite a number of failed tests.  All developers should run this tool on their development machines, not only for their own security, but so that your application is developed in the most secure environment possible.


Final Thoughts

It's important to remember that tools will not secure our applications.  They can definitely help find vulnerabilities, but we cannot rely on them to solve all of our problems.