A workaround for Windows Store app download issue on Windows 8

Every time I try to purchase or download something from the Windows Store I get the following error: "Your purchase couldn't be completed. Something happened and your purchase can't be completed." This error doesn't really tell us why it failed, only that it failed. Pretty useless. Don't you hate errors like that?

Anyway, I searched online for a solution to this problem, but none of the suggested solutions helped me resolve the issue. Then I remembered when this problem started happening. I started experiencing it after I associated my Live ID with my Windows 8 installation and began logging in to my laptop using my Live ID instead of a local Windows user account. So, I thought it was worth a shot to reverse that change. After I switched back to a Local Account, the problem disappeared. I am now able to download apps from the Windows Store without any issues. To me, this seems like a bug in Windows 8 Release Preview. So, until Microsoft fixes this bug, I am going to stick with Local Accounts. ;)

Oh yes, almost forgot. To switch to Local Account:

  • Go to PC Settings
  • Select Users
  • Click the Switch to Local Account button. That's all.

The Importance of Elevating Privilege

The biggest detractor to Single Sign-On is the same thing that makes it so appealing – you only need to prove your identity once. This scares the hell out of some people, because if you can compromise a user's session in one application, it's possible to affect other applications. Congratulations: checking your Facebook profile just caused your online store to delete all its orders. Let's break that attack down a little.

  • You just signed into Facebook and checked your [insert something to check here] from some friend. That contained a link to something malicious.
  • You click the link, and it opens a page that contains an iframe. The iframe points to a URL for your administration portal of the online store with a couple parameters in the query string telling the store to delete all the incoming orders.
  • At this point you don't have a session with the administration portal and in a pre-SSO world it would redirect you to a login page. This would stop most attacks because either a) the iframe is too small to show the page, or b) (hopefully) the user is smart enough to realize that a link from a friend on Facebook shouldn't redirect you to your online store's administration portal. In a post-SSO world, the portal would redirect you to the STS of choice and that STS already has you signed in (imagine what else could happen in this situation if you were using Facebook as your identity provider).
  • So you've signed into the STS already, and it doesn't prompt for credentials. It redirects you to the administration page you were originally redirected away from, but this time with a session. The page is pulled up, the query string parameters are parsed, and the orders are deleted.

There are certainly ways to stop this, as part of the attack is a bit trivial. For instance you could pop up an OK/Cancel dialog asking "are you sure you want to delete these?", but for the sake of discussion let's think about this at a high level.

The biggest problem with this scenario is that deleting orders doesn't require anything more than being signed in. By default you had the highest privileges available.

This problem is similar to the problem many users of Windows XP had. They were, by default, running with administrative privileges. This led to a bunch of problems, because any running application could do whatever it pleased on the system. Malware was rampant, and worse, users were doing all-around stupid things because they didn't know what they were doing, but had the permissions necessary to do it.

The solution to that problem is to give users non-administrative privileges by default, and when something requires higher privileges, make them re-authenticate and temporarily run with the higher privileges. The key here is that you run with higher privileges only temporarily. However, security lost the argument, and Microsoft caved while developing Windows Vista, creating User Account Control (UAC). By default a user is an administrator, but they don't have administrative privileges. Their user token is a stripped-down administrator token, so they only have non-administrative privileges. In order to take full advantage of the administrator token, a user has to elevate and request the full token temporarily. This is a stop-gap solution though, because it's theoretically possible to circumvent UAC since the administrative token still exists. It also doesn't require you to re-authenticate – you just have to approve the elevation.

As more and more things are moving to the web it's important that we don't lose control over privileges. It's still very important that you don't have administrative privileges by default because, frankly, you probably don't need them all the time.

Some web applications already require elevation. For instance, consider online banking sites. When I sign in I have a default set of privileges: I can view my accounts and transfer money between my accounts. Anything else requires that I re-authenticate by entering a private PIN. So, for instance, I cannot transfer money to an account that doesn't belong to me without proving that it really is me making the transfer.

There are a couple ways you can design a web application that requires privilege elevation. Let's take a look at how to do it with Claims Based Authentication and WIF.

First off, let's look at the protocol. Out of the box WIF supports the WS-Federation protocol. The passive version of the protocol supports a query parameter called wauth. This parameter defines how authentication should happen. The values for it are mostly specific to each STS, however there are a few well-defined values that the SAML protocol specifies. These values are passed to the STS to tell it to authenticate using a particular method. Here are some of the most often used:

Authentication Type/Credential    wauth Value
Password                          urn:oasis:names:tc:SAML:1.0:am:password
Kerberos                          urn:ietf:rfc:1510
TLS                               urn:ietf:rfc:2246
PKI/X509                          urn:oasis:names:tc:SAML:1.0:am:X509-PKI
Default                           urn:oasis:names:tc:SAML:1.0:am:unspecified

When you pass one of these values to the STS during the sign-in request, the STS should then request that particular type of credential. The wauth parameter also supports arbitrary values, so you can use whatever you like. Therefore we can create a value that tells the STS that we want to re-authenticate because of an elevation request.

All you have to do is redirect to the STS with the wauth parameter:
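
Here's a rough sketch of what that redirect could look like with WIF's WS-Federation classes (a sketch only: it assumes the Microsoft.IdentityModel.Protocols.WSFederation namespace, and the STS and relying party addresses are placeholders):

SignInRequestMessage signInRequest = new SignInRequestMessage(
    new Uri("https://sts.example.com/"),  // the STS address (placeholder)
    "https://store.example.com/");        // wtrealm: the relying party (placeholder)

// AuthenticationType maps to the wauth query string parameter
signInRequest.AuthenticationType = "urn:super:secure:elevation:method";

// send the browser off to the STS to re-authenticate
Response.Redirect(signInRequest.WriteQueryString());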


Once the user has re-authenticated, you need to tell the relying party somehow. This is where the Authentication Method claim comes in handy:
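
(For reference, ClaimTypes.AuthenticationMethod maps to the claim type URI http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod.)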


Just add the claim to the output identity:

protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity ident = principal.Identity as IClaimsIdentity;

    ident.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "urn:super:secure:elevation:method"));

    // finish filling claims...

    return ident;
}

At that point the relying party can check whether the method satisfies the request. You could write an extension method like:

public static bool IsElevated(this IClaimsPrincipal principal)
{
    return principal.Identity.AuthenticationType == "urn:super:secure:elevation:method";
}

And then have a bit of code to check:

var p = Thread.CurrentPrincipal as IClaimsPrincipal;

if (p != null && p.IsElevated())
{
    // do the thing that requires elevation
}

This satisfies half the requirements for elevating privilege. We also need to make it so the user is only elevated for a short period of time. We can do this in an event handler after the token is received by the RP. In Global.asax we could do something like:

void Application_Start(object sender, EventArgs e)
{
    // FederatedAuthentication.SessionAuthenticationModule is the usual WIF event
    // source here (the left-hand side was lost in the original snippet)
    FederatedAuthentication.SessionAuthenticationModule.SessionSecurityTokenReceived
        += new EventHandler<SessionSecurityTokenReceivedEventArgs>(SessionAuthenticationModule_SessionSecurityTokenReceived);
}

void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    if (e.SessionToken.ClaimsPrincipal.IsElevated())
    {
        SessionSecurityToken token = new SessionSecurityToken(
            e.SessionToken.ClaimsPrincipal,
            e.SessionToken.Context,
            e.SessionToken.ValidFrom,
            e.SessionToken.ValidFrom.AddMinutes(15));

        e.SessionToken = token;
    }
}

This will check to see if the incoming token has been elevated, and if it has, set the lifetime of the token to 15 minutes.

There are other places where this could occur, like within the STS itself; however, this value may need to be independent of the STS.

As I said earlier, as more and more things are moving to the web it's important that we don't lose control of privileges. By requiring certain types of authentication in our relying parties, we can easily support elevation by requiring the STS to re-authenticate.

Dear Recipient

Got this email just now.  I was amused (mainly because it got past the spam filters)…

Dear Email user,

This message is from Administration center Maintenance Policy verified that your mailbox exceeds (10.9GB) its limit, you may be unable to receive new email or send mails, To re- set your SPACE on our database prior to maintain your IN BOX, You must Reply to this email by Confirming your account details below.


First Name:

Surname:

User-Name:

Password:

Date Of Birth:

Failure to do this will immediately render your Web-email address deactivated from our database.


Admin Help Desk© 2011

And my Reply…

Dear Admin Help Desk© 2011,

Thank you for the warning that my SPACE on your database prior to maintain my IN BOX exceeds its limit. Please find the following information handy for future reference.

Surname: Jobs

First Name: Steve

User-Name: Jobbers1

Password: CenterOfTheUniverse

Date Of Birth: December 25th, 0001

If for whatever reason you cannot reset my SPACE please email me back at tips@fbi.gov and I will be sure to help you.

Part 5: Incident Response Management with Team Foundation Server

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part five of the series, unedited for all to enjoy.

There are only a few certainties in life: death, taxes, me getting this post in late, and one of your applications getting attacked.  Throughout the lifetime of an application it will undergo a barrage of attacks – especially if it's public facing.  If you followed the SDL, tested properly, coded securely, and managed well, you will have gotten most of the bugs out.


There will always be bugs in production code, and there will very likely always be a security bug in production code.  Further, if there is a security bug in production code, an attacker will probably find it.  Perhaps the best metric for security is along the lines of mean-time-to-failure – or rather, mean-time-to-breach.  All safes for storing valuables are rated by how long they can withstand certain types of attacks – not whether they can, but how long they can.  There is no single thing we can do to prevent an attack, and we cannot prevent all attacks.  It's just not in the cards.  So, it stands to reason that we should prepare for something bad happening.  The final stage of the SDL requires that an Incident Response Plan be created.  This is the procedure to follow in the event of a vulnerability being found.

In security parlance, there are protocols and procedures.  The majority of the SDL is all protocol.  A protocol is the usual way to do things.  It's the list of steps you follow to accomplish a task that is associated with a normal working condition, e.g. fuzzing a file parser during development.  You follow a set of steps to fuzz something, and you really don't deviate from those steps.  A procedure is when something is different.  A procedure is reactive.  How you respond to a security breach is a procedure.  It's a set of steps, but it's not a normal condition.

An Incident Response Plan (IRP - the procedure) serves a few functions:

  • It has the list of people to contact in the event of the emergency
  • It is the actual list of steps to follow when bad things happen
  • It includes references to other procedures for code written by other teams

This may be one of the more painful parts of the SDL, because it's mostly process over anything else.  Luckily there are two wonderful things from Microsoft that help: Team Foundation Server and the SDL Process Template.  For those of you who just cringed, bear with me.

Microsoft released the MSF-Agile plus Security Development Lifecycle Process Template for VS 2010 (it also takes second place in the longest-product-name contest) to make the entire SDL process easier for developers.  There is an SDL Process Template for Visual Studio 2008 as well.

It's useful for each stage of the SDL, but we want to take a look at how it can help with managing the IRP.  First though, let's define the IRP.

Emergency Contacts (Incident Response Team)

The contacts usually need to be available 24 hours a day, seven days a week.  These people have a range of functions depending on the severity of the breach:

  • Developer – Someone to comprehend and/or triage the problem
  • Tester – Someone to test and verify any changes
  • Manager – Someone to approve changes that need to be made
  • Marketing/PR – Someone to make a public announcement (if necessary)

Each plan is different for each application and for each organization, so there may be ancillary people involved as well (perhaps an end user to verify data).  Each person isn't necessarily required at each stage of the response, but they still need to be available in the event that something changes.

The Incident Response Plan

Over the years I've written a few Incident Response Plans (never mind that most times I was asked to do it after an attack – you WILL go out and create one after reading this, right?).  Each plan was unique in its own way, but there were commonalities as well. 

Each plan should provide the steps to answer a few questions about the vulnerability:

  • How was the vulnerability disclosed?  Did someone attack, or did someone let you know about it?
  • Was the vulnerability found in something you host, or an application that your customers host?
  • Is it an ongoing attack?
  • What was breached?
  • How do you notify your customers about the vulnerability?
  • When do you notify them about the vulnerability?

And each plan should provide the steps to answer a few questions about the fix:

  • If it's an ongoing attack, how do you stop it?
  • How do you test the fix?
  • How do you deploy the fix?
  • How do you notify the public about the fix?

Some of these questions may not be answerable immediately – you may need to wait until a postmortem to answer them. 

Here, for example, is a high-level IRP:

  • The Attack – It's already happened
  • Evaluate the state of the systems or products to determine the extent of the vulnerability
    • What was breached?
    • What is the vulnerability?
  • Define the first step to mitigate the threat
    • How do you stop the threat?
    • Design the bug fix
  • Isolate the vulnerabilities if possible
    • Disconnect targeted machine from network
    • Complete forensic backup of system
    • Turn off the targeted machine if hosted
  • Initiate the mitigation plan
    • Develop the bug fix
    • Test the bug fix
  • Alert the necessary people
    • Get Marketing/PR to inform clients of breach (don't forget to tell them about the fix too!)
    • If necessary, inform the proper legal/governmental bodies
  • Deploy any fixes
    • Rebuild any affected systems
    • Deploy patch(es)
    • Reconnect to network
  • Follow up with legal/governmental bodies if prosecution of attacker is necessary
    • Analyze forensic backups of systems
  • Do a postmortem of the attack/vulnerability
    • What went wrong?
    • Why did it go wrong?
    • What went right?
    • Why did it go right?
    • How can this class of attack be mitigated in the future?
    • Are there any other products/systems that would be affected by the same class?

Some of these procedures can be done in parallel, hence the need for people to be on call.

Team Foundation Server

So now that we have a basic plan created, we should make it easy to implement.  The SDL Process Template (mentioned above) creates a set of task lists and bug types within TFS projects that are used to define things like security bugs, SDL-specific tasks, exit criteria, etc.


While these can (and should) be used throughout the lifetime of the project, they can also be used to map out the procedures in the IRP.  In fact, a new project creates an entry in Open SDL Tasks to create an Incident Response Team:


A bug works well to manage incident responses.
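
If you'd rather raise that bug from code, here's a rough sketch using the TFS 2010 client object model (it assumes the Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client assemblies; the collection URL, project name, and bug details are all made up):

TfsTeamProjectCollection tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://tfs.example.com:8080/tfs/DefaultCollection"));

WorkItemStore store = tfs.GetService<WorkItemStore>();
WorkItemType bugType = store.Projects["OnlineStore"].WorkItemTypes["Bug"];

// raise the incident as a bug so it flows through the template's queries
WorkItem bug = new WorkItem(bugType);
bug.Title = "Incident: CSRF in the administration portal deletes orders";
bug.Description = "Reported by a customer; see the IRP for response steps.";
bug.Save();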


Once a bug is created we can link a new task with the bug.


And then we can assign a user to the task:


Each bug and task are now visible in the Security Exit Criteria query:
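
You can run the same kind of check from code with a work item query.  This is just a sketch reusing the WorkItemStore from the sketch above – the WIQL text is illustrative, not the template's exact query:

// anything security-related that is still open blocks the release
WorkItemCollection openItems = store.Query(
    "SELECT [System.Id], [System.Title] FROM WorkItems " +
    "WHERE [System.TeamProject] = 'OnlineStore' " +
    "AND [System.WorkItemType] IN ('Bug', 'Task') " +
    "AND [System.State] <> 'Closed'");

bool criteriaMet = openItems.Count == 0;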


Once all the items in the Exit Criteria have been met, you can release the patch.


Security is a funny thing. A lot of times you don't think about it until it's too late. Other times you follow the SDL completely, and you still get attacked.

In the last four posts we looked at writing secure software from a pretty high level.  We touched on common vulnerabilities and their mitigations, tools you can use to test for vulnerabilities, some thoughts on architecting the application securely, and finally we looked at how to respond to problems after release.  By no means will these posts automatically make you write secure code, but hopefully they have given you guidance to start understanding what goes into writing secure code.  It's a lot of work, and sometimes it's hard work.

Finally, there is an idea I like to put into the first section of every Incident Response Plan I've written, and I think it applies to writing software securely in general:

Something bad just happened.  This is not the time to panic, nor the time to place blame.  Your goal is to make sure the affected system or application is secured and in working order, and your customers are protected.

Something bad may not have happened yet, and it may not in the future, but it's important to plan accordingly because your goal should be to protect the application, the system, and most importantly, the customer.

Part 4: Secure Architecture

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part four of the series, unedited for all to enjoy.

Before you start to build an application you need to start with a design.  In the last article I stated that bugs introduced at this stage of the process are the most expensive to fix throughout the lifetime of the project.  It is for this reason that we need a good foundation for security; otherwise it'll be expensive (not to mention a pain) to fix.  Keep in mind this isn't an agile-versus-the-world discussion, because no matter what, at some point you still need to have a design for the application.

Before we can design an application, we need to know that there are two basic types of code/modules:

  • That which is security related; e.g. authentication, crypto, etc.
  • That which is not security related (but should still be secure nonetheless); e.g. CRUD operations.

They can be described as privileged or unprivileged.


Whenever a piece of code is written that deals with things like authentication or cryptography, we have to be very careful with it because it should be part of the secure foundation of the application. Privileged code is the authoritative core of the application.  Careless design here will render your application highly vulnerable.  Needless to say, we don't want that.  We want a foundation we can trust.  So, we have three options:

  • Don't write secure code, and be plagued with potential security vulnerabilities.  Less cost, but you cover the risk.
  • Write secure code, test it, have it verified by an outside source.  More cost, you still cover the risk.
  • Make someone else write the secure code.  Range of cost, but you don't cover the risk.

In general, from a cost/risk perspective, our costs and risks decrease as we move from the top of the list to the bottom.  This should therefore be a no-brainer: DO NOT BUILD YOUR OWN PRIVILEGED SECURITY MODULES.  Do not invent a new way of doing things if you don't need to, and do not rewrite modules that have already been vetted by security experts.  This may sound harsh, but seriously, don't.  If you think you might need to, stop thinking.  If you still think you need to, contact a security expert. PLEASE!

This applies to both coding and architecture.  In Part 2 we did not come up with a novel way of protecting our inputs; we used well-known libraries and methods.  Now we want to apply the same idea to the application architecture. 


Let's start with authentication.

A lot of times an application has a need for user authentication, but its core function has nothing to do with user authentication.  Yes, you might need to authenticate users for your mortgage calculator, but the core function of the application is calculating mortgages, which has very little to do with users.  So why would you put that application in charge of authenticating users?  It seems like a fairly simple argument, but whenever you let your application use something like a SqlMembershipProvider you are letting the application manage authentication.  Not only that, you are letting the application manage the entire identity of the user.  How much of that identity information is duplicated in multiple databases?  Is this really the right way to do things?  Probably not.

From an architectural perspective, we want to create an abstract relationship between the identity of the user and the application.  Everything this application needs to know about this user is part of this identity, and (for the most part) the application is not an authority on any of this information because it's not the job of the application to be the authority.

Let's think about this another way.

Imagine for a moment that you want to get a beer at the bar. In theory the bartender should ask you for proof of age. How do you prove it? Well, one option is to have the bartender cut you in half and count the number of rings, but there could be some problems with that. Another option is for you to write down your birthday on a piece of paper, which the bartender approves or disapproves. The third option is to go to the government, get an ID card, and then present the ID to the bartender.

Some may have laughed at the idea of just writing your birthday on a piece of paper, but this is essentially what is happening when you authenticate users within your application, because it is up to the bartender (or your application) to trust the piece of paper. By contrast, we trust the government's assertion that the birthday on the ID is valid, and that the ID belongs to the person requesting the drink.  The bartender knows nothing about you except your date of birth, because that's all the bartender needs to know.  Now, the bartender could store information that they think is important to them, like your favorite drink, but the government doesn't care (as it isn't the authoritative source for that), so the bartender stores that information in his own way.

Now this raises the question of how you prove your identity/age to the government, or how you authenticate against this external service.  Frankly, it doesn't matter: that's the core function of the external service, not of our application.  Our application just needs to trust that the assertion is valid, and trust that it is a secure authentication mechanism.

In developer speak, this is called Claims Based Authentication.  A claim is an arbitrary piece of information about an identity, such as age, and is bundled into a collection of claims, to be part of a token.  A Security Token Service (STS) generates the token, and our application consumes it.  It is up to the STS to handle authentication.  Both Claims Based Authentication and the Kerberos Protocol are built around the same model, although they use different terms.  If you are looking for examples, Windows Live/Hotmail use Claims via the WS-Federation protocol.  Google, Facebook, and Twitter use Claims via the OAuth protocol.  Claims are everywhere.

Alright, less talking, more diagramming:


The process goes something like this:

  • Go to STS and authenticate (this is usually a web page redirect + the user entering their credentials)
  • The STS tells the user's browser to POST the token to the application
  • The application verifies the token and verifies whether it trusts the STS
  • The Application consumes the token and uses the claims as it sees fit

Now we get back to asking: how the heck does the STS handle authentication?  The answer is that it depends (ah, the consultant's answer).  The best-case scenario is that you use an STS and identity store that already exist.  If you are in an intranet scenario, use Active Directory and Active Directory Federation Services (a free STS for Active Directory).  If your application is on the internet, use something like Live ID or Google ID, or even Facebook, simplified with Windows Azure Access Control Services.  If you are really in a bind and need to create your own STS, you can do so with Windows Identity Foundation (WIF).  In fact, WIF is the identity library in the diagram above.  Making a web application claims-aware involves a process called Federation.  With WIF it's really easy to do.

Accessing claims within the token is straightforward because you are only accessing a single object, the identity within the CurrentPrincipal:

private static TimeSpan GetAge()
{
    IClaimsIdentity ident = Thread.CurrentPrincipal.Identity as IClaimsIdentity;

    if (ident == null)
        throw new ApplicationException("Isn't a claims based identity");

    var dobClaims = ident.Claims.Where(c => c.ClaimType == ClaimTypes.DateOfBirth);

    if (!dobClaims.Any())
        throw new ApplicationException("There are no date of birth claims");

    string dob = dobClaims.First().Value;

    TimeSpan age = DateTime.Now - DateTime.Parse(dob);

    return age;
}

There is a secondary benefit to Claims Based Authentication: you can also use it for authorization.  WIF supports the concept of a ClaimsAuthorizationManager, which you can use to authorize access to site resources.  Instead of writing your own authorization module, you are simply defining the rules for access, which is very much a business problem, not a technical one.
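
Here's a rough sketch of what a custom ClaimsAuthorizationManager might look like (it assumes the Microsoft.IdentityModel.Claims namespace, and the resource, action, and role values are made-up examples):

public class StoreAuthorizationManager : ClaimsAuthorizationManager
{
    // CheckAccess is called for each resource request
    public override bool CheckAccess(AuthorizationContext context)
    {
        string resource = context.Resource.First().Value;
        string action = context.Action.First().Value;

        // business rule: only order administrators may delete orders
        if (resource == "/admin/orders" && action == "DELETE")
        {
            return context.Principal.Identities[0].Claims
                .Any(c => c.ClaimType == ClaimTypes.Role && c.Value == "OrderAdministrator");
        }

        return true;
    }
}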

Once authentication and authorization are dealt with, the two final architectural nightmares revolve around privacy and cryptography.


Privacy is the control of Personally Identifiable Information (PII), which is defined as anything you can use to personally identify someone (good definition, huh?).  This can include information like SINs, addresses, phone numbers, etc.  The easiest solution is to simply not use the information: don't ask for it and don't store it anywhere.  Since this isn't always possible, the goal should be to use (and request) as little as possible.  Once you have no more use for the information, delete it.

This is a highly domain-specific problem and it can't be solved in a general discussion on architecture and design.  Microsoft Research has an interesting solution to this problem by using a new language designed specifically for defining the privacy policies for an application:

Preferences and policies are specified in terms of granted rights and required obligations, expressed as assertions and queries in an instance of SecPAL (a language originally developed for decentralized authorization). This paper further presents a formal definition of satisfaction between a policy and a preference, and a satisfaction checking algorithm. Based on the latter, a protocol is described for disclosing PIIs between users and services, as well as between third-party services.

Privacy is also a measure of access to information in a system.  Authentication and authorization are a core component of proper privacy controls.  There needs to be access control on user information.  Further, access to this information needs to be audited.  Anytime someone reads, updates, or deletes personal information, it should be recorded somewhere for review later.  There are quite a number of logging frameworks available, such as log4net or ELMAH.
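
An audit entry on every read of personal information might look something like this with log4net (a sketch only; the logger name, repository, and fields are made up):

private static readonly ILog Audit = LogManager.GetLogger("PiiAuditLog");

public Customer GetCustomer(int customerId)
{
    Customer customer = repository.FindCustomer(customerId); // hypothetical data access

    // record who touched which record, and when
    Audit.InfoFormat("{0} read customer record {1} at {2:o}",
        Thread.CurrentPrincipal.Identity.Name, customerId, DateTime.UtcNow);

    return customer;
}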


Finally there is cryptography.

For the love of all things holy, do not do any custom crypto work.  Rely on publicly vetted libraries like Bouncy Castle and formats like OpenPGP.  There are special circles of hell devoted to those who try to write their own crypto algorithms for production systems.  Actually, this is true for anything security related.

  • Be aware of how you are storing your private keys.
  • Don't store them in the application as magic strings, or store them with the application at all.
  • If possible store them in a Hardware Security Module (HSM).
  • Make sure you have proper access control policies for the private keys.
  • Centralize all crypto functions so different modules aren't using their own implementations.

Finally, if you have to write custom encryption wrappers, make sure your code is capable of switching encryption algorithms without requiring recompilation.  The .NET platform has made this easy: you can specify the algorithm by name as a string:

public static byte[] SymmetricEncrypt(byte[] plainText, byte[] initVector, byte[] keyBytes)
{
    if (plainText == null || plainText.Length == 0)
        throw new ArgumentNullException("plainText");

    if (initVector == null || initVector.Length == 0)
        throw new ArgumentNullException("initVector");

    if (keyBytes == null || keyBytes.Length == 0)
        throw new ArgumentNullException("keyBytes");

    using (SymmetricAlgorithm symmetricKey = SymmetricAlgorithm.Create("algorithm")) // e.g.: 'AES'
    {
        return CryptoTransform(plainText, symmetricKey.CreateEncryptor(keyBytes, initVector));
    }
}

private static byte[] CryptoTransform(byte[] payload, ICryptoTransform transform)
{
    using (MemoryStream memoryStream = new MemoryStream())
    using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, CryptoStreamMode.Write))
    {
        cryptoStream.Write(payload, 0, payload.Length);
        cryptoStream.FlushFinalBlock();
        return memoryStream.ToArray();
    }
}

Microsoft provides a list of all supported algorithms, as well as how to specify new algorithms for future use.
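
For instance, on .NET 4 you can map a friendly name to an algorithm at runtime with CryptoConfig (a sketch; the name "MyApp.Symmetric" is made up):

// map a friendly name to AES; swapping algorithms later only changes this line
CryptoConfig.AddAlgorithm(typeof(AesCryptoServiceProvider), "MyApp.Symmetric");

// SymmetricAlgorithm.Create now resolves the friendly name to AES
using (SymmetricAlgorithm algorithm = SymmetricAlgorithm.Create("MyApp.Symmetric"))
{
    // encrypt away...
}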

By following these design guidelines you should have a fairly secure foundation for your application.  Now let's look at unprivileged modules.


In Part 2 there was a single, all-encompassing, hopefully self-evident solution to most of the vulnerabilities: sanitize your inputs.  Most vulnerabilities, one way or another, are the result of bad input.  This is therefore going to be a very short section.

Don't let input touch queries directly.  Build business objects around data and encode any strings.  Parse all incoming data.  Fail gracefully on bad input.
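
As a trivial sketch of those last two points (the quantity field and its bounds are made up):

// parse input into a typed value and fail gracefully on garbage
public static int ParseQuantity(string input)
{
    int quantity;

    if (!int.TryParse(input, out quantity))
        throw new ArgumentException("Quantity must be a whole number.");

    if (quantity < 1 || quantity > 100)
        throw new ArgumentException("Quantity must be between 1 and 100.");

    return quantity;
}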

Properly lock down access to resources through authorization mechanisms in privileged modules, and audit all authorization requests.

If encryption is required, call into a privileged module.

Finally, validate the need for SSL.  If necessary, force it.

Final Thoughts

Bugs introduced during this phase of development are the most costly and hardest to fix, which makes the design and architecture of the application the most critical step in making sure it isn't vulnerable to attack.

Throughout the first four articles in this series we've looked at how to develop a secure application.  In the last article, we will look at how to respond to threats and mitigate the damage done.

Part 2: Vulnerability Deep Dive

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part two of the series, unedited for all to enjoy.

Know your enemy.

In the previous post I stated that knowledge is key to writing secure code:

Perhaps the most important aspect of the SDL is that it's important to have a good foundation of knowledge of security vulnerabilities.

In order to truly protect our applications and the data within, we need to know about the threats in the real world. The OWASP top 10 list gives us a good starting point in understanding some of the more common vulnerabilities that malicious users can use for attack:

  • Injection
  • Cross-Site Scripting (XSS)
  • Broken Authentication and Session Management
  • Insecure Direct Object References
  • Cross-Site Request Forgery (CSRF)
  • Security Misconfiguration
  • Insecure Cryptographic Storage
  • Failure to Restrict URL Access
  • Insufficient Transport Layer Protection
  • Unvalidated Redirects and Forwards

It's important to realize that this list is strictly for the web – we aren't talking about client applications, though some run into the same problems.  I've chosen this list for one reason: most of us really only develop web-based applications.  It's also helpful to keep in mind that we aren't talking about just Microsoft-centric vulnerabilities.  These also exist in applications running on the LAMP (Linux/Apache/MySQL/PHP) stack and variants in between.  Finally, it's very important to note that just following these instructions won't automatically give you a secure code base – these are just primers on some ways of writing secure code.

In the 4th part of this series we'll dig into some of the architectural options we have to mitigate a few of the above vulnerabilities, but in this part we are going to go through a few items and discuss some of the frameworks available to us to help reduce or mitigate the vulnerabilities.


Injection is a way of changing the flow of a procedure by introducing arbitrary changes. An example is SQL injection. Hopefully by now everyone has heard of SQL injection, but let's take a look at this bit of code for those who don't know about it:

string query = string.Format("SELECT * FROM UserStore WHERE UserName = '{0}' AND PasswordHash = '{1}'", username, password);

If we passed it into a SqlCommand we could use it to see whether or not a user exists, and whether or not their hashed password matches the one in the table. If so, they are authenticated. Well what happens if I enter something other than my username? What if I enter this:

'; --

It would modify the SQL Query to be this:

SELECT * FROM UserStore WHERE UserName = ''; -- AND PasswordHash = 'TXlQYXNzd29yZA=='

This has essentially broken the query into a single WHERE clause, asking for a user with a blank username because the single quote closed the parameter, the semicolon finished the executing statement, and the double dash made anything following it into a comment.

Hopefully your user table doesn't contain any blank records, so let's extend that a bit:

' OR 1=1; --

We've now appended a new clause, so the query looks for records with a blank username OR where 1=1. Since 1 always equals 1, the condition is always true, and since the query returns every record matching the filter, it returns every record in the table.

If our SqlCommand just looked for at least one record in the query set, the user is authenticated. Needless to say, this is bad.

We could go one step further and log in as a specific user:

administrator';  --

We've now modified the query in such a way that it is just looking for a user with a particular username, such as the administrator.  It only took four characters to bypass a password and log in as the administrator.

Injection can also work in a number of other places such as when you are querying Active Directory or WMI. It doesn't just have to be for authentication either. Imagine if you have a basic report query that returns a large query set. If the attacker can manipulate the query, they could read data they shouldn't be allowed to read, or worse yet they could modify or delete the data.

Essentially our problem is that we don't sanitize our inputs.  If a user is allowed to enter any value they want into the system, they could potentially cause unexpected things to occur.  The solution is simple: sanitize the inputs!

If we use a SqlCommand object to execute our query above, we can use parameters.  We can write something like:

string query = "SELECT * FROM UserStore WHERE UserName = @username AND PasswordHash = @passwordHash";
SqlCommand c = new SqlCommand(query);
c.Parameters.Add(new SqlParameter("@username", username));
c.Parameters.Add(new SqlParameter("@passwordHash", passwordHash));

This does two things: one, it makes .NET handle the string manipulation, and two, it makes .NET properly sanitize the parameters, so

' OR 1=1; --

is converted to

'' OR 1=1; --

In the SQL language, two single quote characters act as an escape sequence for a single quote, so in effect the query looks for the value as-is, quote included.

The other option is to use a commercially available Object Relational Mapper (ORM) like the Entity Framework or NHibernate, where you don't have to write error-prone SQL queries.  You could write something like this with LINQ:

var users = from u in entities.UserStore where u.UserName == username && u.PasswordHash == passwordHash select u;

It looks like a SQL query, but it's compilable C#.  It solves our problem by abstracting away the ungodly mess that is SqlCommands, DataReaders, and DataSets.

Cross-Site Scripting (XSS)

XSS is a way of adding a chunk of malicious JavaScript into a page via flaws in the website. This JavaScript could do a number of different things such as read the contents of your session cookie and send it off to a rogue server. The person in control of the server can then use the cookie and browse the affected site with your session. In 2007, 80% of the reported vulnerabilities on the web were from XSS.

XSS is generally the result of not properly validating user input. Conceptually it usually works this way:

  1. A query string parameter contains a value: ?q=blah
  2. This value is outputted on the page
  3. A malicious user notices this
  4. The malicious user inserts a chunk of JavaScript into the URL parameter: ?q=<script>alert("pwned");</script>
  5. This script is outputted without change to the page, and the JavaScript is executed
  6. The malicious user sends the URL to an unsuspecting user
  7. The user clicks on it while logged into the website
  8. The JavaScript reads the cookie and sends it to the malicious user's server
  9. The malicious user now has the unsuspecting user's authenticated session

This occurs because we don't sanitize user input: we don't remove or encode the script so it can't execute.  We therefore need to encode the inputted data.  It's all about the sanitized inputs.

The basic problem is that we want to display the content the user submitted on a page, but the content can be potentially executable.  Well, how do we display JavaScript textually?  We encode each character with HtmlEncode, so the < (left angle bracket) is output as &lt; and the > (right angle bracket) is output as &gt;.  In .NET you have some helpers in the HttpUtility class:
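
A minimal example, using the script from the list above as input:

// HtmlEncode turns markup characters into entities so the browser
// renders them as text instead of executing them
string userInput = "<script>alert(\"pwned\");</script>";
string encoded = HttpUtility.HtmlEncode(userInput);
// encoded == "&lt;script&gt;alert(&quot;pwned&quot;);&lt;/script&gt;"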


This works fairly well, but it can be bypassed with multiple layers of encoding (encoding an encoded value that was encoded with another formula).  This problem exists because HtmlEncode uses a blacklist of characters, so whenever it comes across a specific character it will encode it.  We want it to do the opposite – use a whitelist – so whenever it comes across a known-good character, such as the letter 'a', it leaves it alone, and otherwise it encodes everything else.  It's generally far easier to protect something if you only allow known good things instead of blocking known threats (because threats are constantly changing).

Microsoft released a toolkit to solve this encoding problem, called the AntiXss toolkit.  It's now part of the Microsoft Web Protection Library, which also actually contains some bits to help solve the SQL injection problem.  To use this encoder, you just need to do something like this:

string encodedValue = Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(userInput);

There is another step, which is to set the cookie to server-only, meaning that client-side scripts cannot read the contents of the cookie.  Only newer browsers support this, but all we have to do is write something like this:

HttpCookie cookie = new HttpCookie("name", "value");
cookie.HttpOnly = true;

For added benefit while we are dealing with cookies, we can also do this:

cookie.Secure = true;

Setting Secure to true requires that the cookie only be sent over HTTPS.

This should be the last step in the output.  There shouldn't be any tweaking to the text or cookie after this point.  Call it the last line of defense on the server-side.

Cross-Site Request Forgery (CSRF)

Imagine a web form that has a couple fields on it – sensitive fields, say money transfer fields: account to, amount, transaction date, etc. You need to log in, fill in the details, and click submit. That submit POSTs the data back to the server, and the server processes it. In ASP.NET WebForms, the only validation that goes on is checking whether the ViewState has been tampered with.  Other web frameworks skip the ViewState bit, because, well, they don't have a ViewState.

Now consider that you are still logged in to that site, and someone sends you a link to a funny picture of a cat. Yay, kittehs! Anyway, on that page is a simple set of hidden form tags with malicious data in it – something like their account number, and an obscene number for the cash transfer. On page load, JavaScript POSTs that form data to the transfer page, and since you are already logged in, the server accepts it. Sneaky.

There is actually a pretty elegant way of solving this problem.  We need to create a value that changes on every page request, and send it as part of the response.  Once the server receives the response, it validates the value, and if it's bad it throws an exception.  In the cryptography world, this is called a nonce.  In ASP.NET WebForms we can solve this problem by encrypting the ViewState.  We just need a bit of code like this in the page (or master page):

void Page_Init(object sender, EventArgs e)
{
    ViewStateUserKey = Session.SessionID;
}

When we set the ViewStateUserKey property on the page, the ViewState is encrypted based on this key.  This key is only valid for the length of the session, so this does two things.  First, since the ViewState is encrypted, the malicious user cannot modify their version of the ViewState since they don't know the key.  Second, if they use an unmodified version of a ViewState, the server will throw an exception since the victim's UserKey doesn't match the key used to encrypt the initial ViewState, and the ViewState parser doesn't understand the value that was decrypted with the wrong key.

Using this piece of code depends entirely on whether or not you have properly set up session state though.  To get around that, we need to set the key to a cryptographically random value that is only valid for the length of the session, and is only known on the server side.  We could, for instance, use the modifications we made to the cookie in the XSS section, and store the key in there.  It gets passed to the client, but client script can't access it.  This places a VERY high risk on the user though, because this security depends entirely on the browser version.  It also means that any malware installed on the client can potentially read the cookie – though the user has bigger problems if they have a virus. 

Security is complex, huh?  Anyway…

In MVC we can do something similar, except we use Html.AntiForgeryToken().

This is a two-step process.  First we need to update the action method(s) by adding the ValidateAntiForgeryToken attribute to the method:

[ValidateAntiForgeryToken]
public ActionResult Transfer(WireTransfer transfer)
{
    if (!ModelState.IsValid)
        return View(transfer);

    try
    {
        // process the transfer...
        return RedirectToAction("Transfers");
    }
    catch
    {
        return View(transfer);
    }
}

Then we need to add the AntiForgeryToken to the page:

<%= Html.AntiForgeryToken() %>

This helper will output a nonce that gets checked by the ValidateAntiForgeryToken attribute.

Insecure Cryptographic Storage

I think it's safe to say that most of us get cryptography-related stuff wrong most of the time at first. I certainly do, mainly because crypto is fricken hard to do properly. If you noticed above in my SQL injection query, I used this value for my hashed password: TXlQYXNzd29yZA==.

It's not actually hashed; it's encoded using Base64 (the double equals is a dead giveaway). The decoded value is 'MyPassword'. The difference is that hashing is a one-way process: once I hash something (with a cryptographic hash), I can't de-hash it. Second, if I happen to get hold of someone else's user table, I can look for hashed passwords that look the same. So for anyone else that has "TXlQYXNzd29yZA==" as a password in the table, I know their password is 'MyPassword'. This is where a salt comes in handy. A salt is just a chunk of data appended to the unhashed password, which is then hashed. Each user has a unique salt, and therefore will have a unique hash.
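
Here's a minimal sketch of salting and hashing (SHA-256 is used purely for illustration; in practice a deliberately slow scheme like PBKDF2, via Rfc2898DeriveBytes, is a better choice):

byte[] salt = new byte[16];

using (var rng = new RNGCryptoServiceProvider())
    rng.GetBytes(salt); // a unique, random salt per user

byte[] password = Encoding.UTF8.GetBytes("MyPassword");
byte[] saltedPassword = salt.Concat(password).ToArray();

using (var sha = SHA256.Create())
{
    // store the salt and the hash; two users with the same password
    // now produce different hashes
    byte[] hash = sha.ComputeHash(saltedPassword);
}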

Then in the last section on CSRF I talked about using a nonce.  Nonces are valuable to the authenticity of a request: they prevent replay attacks, where an encrypted message that looks the same as a previous one is simply a copy of that message being resent.  It is extremely important that the attacker not know how this nonce is generated.

Which leads to the question of how you properly secure encryption keys.  I actually sighed a little just now, because Jonathan gave me a strict word limit, and I can't even touch on properly securing encryption keys because that is a blog series on its own.

Properly using cryptography in an application is really hard to do. Proper cryptography in an application is a topic fit for a book.

Final Thoughts

In this article we touched on only four of the items in the OWASP top 10 list, as they are directly solvable using publicly available frameworks.  The other six items in the list can be solved through the use of tools as well as designing a secure architecture, both of which we will talk about in future posts.

Talking about Security Article Series

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications.  It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities.

Obviously it's not a series on everything you need to know about security, but hopefully it's a starting point.  My goal is to get people to at least start talking about security in their applications.

This is the series:

An unhandled exception occurred in the Silverlight Application in SharePoint 2010

If you get the error "An unhandled exception occurred in the Silverlight Application" in SharePoint 2010 when you try to create a document library, list, site, or anything else (using the fancy new Silverlight control), it could be that Web Page Security Validation has been disabled under Web Application General Settings in SharePoint Central Administration. Apparently, the Silverlight application is unable to connect to the WCF endpoint the product configures for the Client Object Model if security validation is set to Off. Enable it, and you should be fine…

Part 1: Development Security Basics

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities.  This is part one of the series, unedited for all to enjoy.

Every year or so a software security advocacy group creates a top 10 list of the security flaws developers introduce into their software.  This is something I affectionately refer to as the "stupid things we do when building applications" list.  The group is OWASP (the Open Web Application Security Project) and the list is the OWASP Top 10 Project (I have no affiliation with either).  In this article we will dig into some of the ways we can combat the ever-growing list of security flaws in our applications.

Security is a trade off.  We need to balance the requirements of the application with the time and budget constraints of the project.

A lot of times though, nobody has enough forethought to think that security should be a feature, or more importantly, that security should just be a central design requirement for the application regardless of what the time or budget constraints may be (do I sound bitter?).

This of course leads to a funny problem.  What happens when your application gets attacked?

There is no easy way to say it: the developers get blamed.  Or if it's a seriously heinous breach the boss gets arrested because they were accountable for the breach.  In any case it doesn't end well for the organization.

Microsoft had this problem for years – although with a twist, because security has always been important to the company. Windows NT 3.5 was Microsoft's first OS successfully evaluated under the TCSEC regime at C2 – government-speak for "it met a rigorous set of requirements designed to evaluate the security of a product in high-security environments".  However, this didn't really jibe with what the news was saying, since so many vulnerabilities were plaguing Windows 2000 and XP.  There was proof through the evaluation that security was important, but reality looked different because of the bugs.

This led to a major change in Microsoft's ways.  In January 2002 Bill Gates sent out a company-wide memorandum stating the need for a new way of doing things. 

The events of last year - from September's terrorist attacks to a number of malicious and highly publicized computer viruses - reminded every one of us how important it is to ensure the integrity and security of our critical infrastructure, whether it's the airlines or computer systems.

The creation of the Trustworthy Computing Initiative was the result.  In short order, Microsoft did a complete 180 on how it developed software.

Part of the problem with writing secure code is that you just can't look for the bugs at the end of a development cycle, fix them, and move on.  It just doesn't work.  Microsoft introduced the Security Development Lifecycle to combat this problem, as it introduced processes during the development lifecycle to aid the developers in writing secure code.

Conceptually it's pretty simple: defense in depth.


In order to develop secure applications, we need to adapt our development model to include security requirements from the beginning – from developer training all the way up to application release – as well as in how we respond to vulnerabilities after launch.

It's important to include security at the beginning of the development process; otherwise we run into the problem Windows had.

A good chunk of the codebase for Windows Vista was scrapped because too many bugs were found over the course of testing.  Once the Windows team started introducing the SDL into their development process, the number of security bugs found dropped considerably.  This is the reason it took six years to release Windows Vista.  Funny enough, this is partly the reason Windows 7 only took two years to release – fewer security bugs!

Now, Microsoft had a vested interest in writing secure code, so it was an all-or-nothing kind of thing with the SDL.  Companies that haven't made this decision may have considerably more trouble implementing the SDL, simply because it costs money to do so.  Luckily we don't have to implement the entire process all at once.

As we move through this series, we'll touch on some of the key aspects of the SDL and how we can fit it into the development lifecycle.

Perhaps the most important aspect of the SDL is that it's important to have a good foundation of knowledge of security vulnerabilities.  This is where the top 10 list from OWASP comes in handy: 

  • Injection
  • Cross-Site Scripting (XSS)
  • Broken Authentication and Session Management
  • Insecure Direct Object References
  • Cross-Site Request Forgery (CSRF)
  • Security Misconfiguration
  • Insecure Cryptographic Storage
  • Failure to Restrict URL Access
  • Insufficient Transport Layer Protection
  • Unvalidated Redirects and Forwards

In the next article in this series we'll take a look at a few of these vulnerabilities up close and some of the libraries available to us to help combat our attackers. Throughout this series we'll also show how different steps in the SDL process can help find and mitigate these vulnerabilities.

In Part III of this series we'll take a look at some of the tools Microsoft has created to aid the process of secure design and analysis.

In Part IV of this series, we'll dig into some of the architectural considerations of developing secure applications.

Finally, to conclude this series we'll take a look at how we can use Team Foundation Server to help us manage incident responses for future vulnerabilities.

Making the X509Store more Friendly

When you need to grab a certificate out of a Windows Certificate Store, you can use a class called X509Store.  It's very simple to use:

X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);

store.Open(OpenFlags.ReadOnly);

X509Certificate2Collection myCerts = store.Certificates.Find(X509FindType.FindByThumbprint, "...", false);

store.Close();


However, I don't like this open/close mechanism.  It reminds me too much of Dispose(), except I can't use a using statement.  There are lots of arguments about whether a using statement is a good way of doing things, and I'm in the camp of yes, it is.  When they are used properly they make code a lot more logical.  A using statement creates a scope for an object explicitly.  I want to do something like this:

using (X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser, OpenFlags.ReadOnly))
{
    X509Certificate2Collection myCerts = store.Certificates.Find(X509FindType.FindByThumbprint, "...", false);
}

The simple solution would be to subclass X509Store, implement IDisposable, and override some of the internals.  The problem though is that someone on the .NET team thought it would be wise to seal the class.  Crap.  Okay, let's create a new class:

public class X509Store2 : IDisposable
{
    private X509Store store;

    public X509Store2(IntPtr storeHandle, OpenFlags flags)
    {
        store = new X509Store(storeHandle);
        Open(flags);
    }

    public X509Store2(StoreLocation storeLocation, OpenFlags flags)
    {
        store = new X509Store(storeLocation);
        Open(flags);
    }

    public X509Store2(StoreName storeName, OpenFlags flags)
    {
        store = new X509Store(storeName);
        Open(flags);
    }

    public X509Store2(string storeName, OpenFlags flags)
    {
        store = new X509Store(storeName);
        Open(flags);
    }

    public X509Store2(StoreName storeName, StoreLocation storeLocation, OpenFlags flags)
    {
        store = new X509Store(storeName, storeLocation);
        Open(flags);
    }

    public X509Store2(string storeName, StoreLocation storeLocation, OpenFlags flags)
    {
        store = new X509Store(storeName, storeLocation);
        Open(flags);
    }

    public X509Certificate2Collection Certificates { get { return store.Certificates; } }

    public StoreLocation Location { get { return store.Location; } }

    public string Name { get { return store.Name; } }

    public IntPtr StoreHandle { get { return store.StoreHandle; } }

    public void Add(X509Certificate2 certificate) { store.Add(certificate); }

    public void AddRange(X509Certificate2Collection certificates) { store.AddRange(certificates); }

    private void Close() { store.Close(); }

    private void Open(OpenFlags flags) { store.Open(flags); }

    public void Remove(X509Certificate2 certificate) { store.Remove(certificate); }

    public void RemoveRange(X509Certificate2Collection certificates) { store.RemoveRange(certificates); }

    public void Dispose() { Close(); }
}

At this point I've copied all the public members of the X509Store class and called their counterparts in the wrapped store.  I've also made Open() and Close() private so they can't be called externally: the constructor opens the store, and Dispose() closes it.  In theory I could just remove them, but I didn't.