Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part four of the series, unedited for all to enjoy.
Before you start to build an application, you need to start with a design. In the last article I stated that bugs introduced at this stage of the process are the most expensive to fix over the lifetime of the project. For this reason we need a good foundation for security; otherwise it will be expensive (not to mention a pain) to fix later. Keep in mind this isn't an agile-versus-the-world discussion, because no matter what, at some point you still need a design for the application.
Before we can design an application, we need to know that there are two basic types of code/modules:
- That which is security related; e.g. authentication, crypto, etc.
- That which is not security related (but should still be secure nonetheless); e.g. CRUD operations.
These can be described as privileged and unprivileged code, respectively.
Whenever a piece of code is written that deals with things like authentication or cryptography, we have to be very careful with it, because it should be part of the secure foundation of the application. Privileged code is the authoritative core of the application. Careless design here will render your application highly vulnerable. Needless to say, we don't want that. We want a foundation we can trust. So, we have three options:
- Write the security-related code yourself without vetting it, and be plagued with potential security vulnerabilities. Less cost up front, but you carry the risk.
- Write secure code, test it, and have it verified by an outside source. More cost, and you still carry the risk.
- Make someone else write the secure code. A range of cost, but you don't carry the risk.
In general, from a cost/risk perspective, the risk we carry decreases as we move from the top of the list to the bottom. This should therefore be a no-brainer: DO NOT BUILD YOUR OWN PRIVILEGED SECURITY MODULES. Do not invent a new way of doing things if you don't need to, and do not rewrite modules that have already been vetted by security experts. This may sound harsh, but seriously, don't. If you think you might need to, stop thinking. If you still think you need to, contact a security expert. PLEASE!
This applies to both coding and architecture. In part 2 we did not come up with a novel way of protecting our inputs; we used well-known libraries and methods. Now we want to apply the same principle to the application architecture.
Let's start with authentication.
Often an application needs user authentication, but its core function has nothing to do with user authentication. Yes, you might need to authenticate users for your mortgage calculator, but the core function of the application is calculating mortgages, which has very little to do with users. So why would you put that application in charge of authenticating them? It seems like a fairly simple argument, but whenever you let your application use something like a SqlMembershipProvider, you are letting the application manage authentication. Not only that, you are letting the application manage the entire identity of the user. How much of that identity information is duplicated across multiple databases? Is this really the right way to do things? Probably not.
From an architectural perspective, we want to create an abstract relationship between the identity of the user and the application. Everything this application needs to know about this user is part of this identity, and (for the most part) the application is not an authority on any of this information because it's not the job of the application to be the authority.
Let's think about this another way.
Imagine for a moment that you want to get a beer at the bar. In theory the bartender should ask you for proof of age. How do you prove it? Well, one option is to have the bartender cut you in half and count the number of rings, but there could be some problems with that. Another option is for you to write your birthday down on a piece of paper, which the bartender either accepts or rejects. The third option is to go to the government, get an ID card, and then present the ID to the bartender.
Some may have laughed at the idea of just writing your birthday on a piece of paper, but this is essentially what is happening when you are authenticating users within your application because it is up to the bartender (or your application) to trust the piece of paper. However, we trust the government's assertion that the birthday on the ID is valid, and the ID is for the person requesting the drink. The bartender knows nothing about you except your date of birth because that's all the bartender needs to know. Now, the bartender could store information that they think is important to them, like your favorite drink, but the government doesn't care (as it isn't the authoritative source), so the bartender stores that information in his own way.
Now this raises the question of how you prove your identity/age to the government, or how you authenticate against this external service. Frankly, it doesn't matter, as that's the core function of the external service and not of our application. Our application just needs to trust that the assertion is valid, and that the authentication mechanism is secure.
In developer speak, this is called Claims Based Authentication. A claim is an arbitrary piece of information about an identity, such as age, and is bundled into a collection of claims, to be part of a token. A Security Token Service (STS) generates the token, and our application consumes it. It is up to the STS to handle authentication. Both Claims Based Authentication and the Kerberos Protocol are built around the same model, although they use different terms. If you are looking for examples, Windows Live/Hotmail use Claims via the WS-Federation protocol. Google, Facebook, and Twitter use Claims via the OAuth protocol. Claims are everywhere.
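To make those terms concrete, here is a minimal sketch of what a token full of claims might look like. This is illustrative only and not the WIF API: real tokens (SAML, SWT) are signed XML or encoded strings issued by the STS, and all names and URLs below are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-ins for the real token/claim types an STS would issue.
class Claim
{
    public string ClaimType { get; set; }  // usually a URI, e.g. ".../dateofbirth"
    public string Value { get; set; }
    public string Issuer { get; set; }     // who asserted this claim
}

class Token
{
    public string Issuer { get; set; }
    public DateTime Expires { get; set; }
    public List<Claim> Claims { get; set; }
}

class Program
{
    static void Main()
    {
        var token = new Token
        {
            Issuer = "https://sts.example.com",   // hypothetical STS
            Expires = DateTime.UtcNow.AddHours(1),
            Claims = new List<Claim>
            {
                new Claim { ClaimType = "dateofbirth", Value = "1985-06-15", Issuer = "https://sts.example.com" },
                new Claim { ClaimType = "name",        Value = "Alice",      Issuer = "https://sts.example.com" }
            }
        };

        // The application consumes claims; it never sees a password.
        var dob = token.Claims.First(c => c.ClaimType == "dateofbirth").Value;
        Console.WriteLine(dob); // prints 1985-06-15
    }
}
```

Notice that the application only ever reads assertions made by the issuer; it holds no credentials of its own.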
Alright, less talking, more diagramming:
The process goes something like this:
- Go to STS and authenticate (this is usually a web page redirect + the user entering their credentials)
- The STS tells the user's browser to POST the token to the application
- The application verifies the token and verifies whether it trusts the STS
- The Application consumes the token and uses the claims as it sees fit
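The verification in step three can be sketched as follows. This is a simplified stand-in: a real WS-Federation implementation validates the token's signature against the STS's X.509 certificate (WIF does this for you), whereas this sketch uses an HMAC over the payload, and the trusted-issuer list is hypothetical.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class TokenVerifier
{
    // Hypothetical list of STS issuers this application trusts.
    static readonly string[] TrustedIssuers = { "https://sts.example.com" };

    public static bool Verify(string issuer, string payload, byte[] signature,
                              byte[] key, DateTime expiresUtc)
    {
        // 1. Was the token issued by an STS we trust?
        if (Array.IndexOf(TrustedIssuers, issuer) < 0)
            return false;

        // 2. Has the token expired?
        if (expiresUtc < DateTime.UtcNow)
            return false;

        // 3. Is the signature valid, i.e. has the token been tampered with?
        using (var hmac = new HMACSHA256(key))
        {
            byte[] expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
            return ConstantTimeEquals(expected, signature);
        }
    }

    // Compare without short-circuiting, to avoid leaking timing information.
    static bool ConstantTimeEquals(byte[] a, byte[] b)
    {
        if (a.Length != b.Length) return false;
        int diff = 0;
        for (int i = 0; i < a.Length; i++) diff |= a[i] ^ b[i];
        return diff == 0;
    }
}
```

Only after all three checks pass should the application consume the claims inside the token.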
Now we get back to asking how the heck the STS handles authentication. The answer is that it depends (ah, the consultant's answer). The best-case scenario is that you use an STS and identity store that already exist. If you are in an intranet scenario, use Active Directory and Active Directory Federation Services (a free STS for Active Directory). If your application is on the internet, use something like Live ID or Google ID, or even Facebook, simplified with Windows Azure Access Control Services. If you are really in a bind and need to create your own STS, you can do so with the Windows Identity Foundation (WIF). In fact, WIF is the identity library in the diagram above. Making a web application claims-aware involves a process called Federation, and with WIF it's really easy to do.
Accessing claims within the token is straightforward because you are only accessing a single object, the identity within the CurrentPrincipal:
```csharp
private static TimeSpan GetAge()
{
    IClaimsIdentity ident = Thread.CurrentPrincipal.Identity as IClaimsIdentity;

    if (ident == null)
        throw new ApplicationException("Isn't a claims based identity");

    var dobClaims = ident.Claims.Where(c => c.ClaimType == ClaimTypes.DateOfBirth);

    if (!dobClaims.Any())
        throw new ApplicationException("There are no date of birth claims");

    string dob = dobClaims.First().Value;

    TimeSpan age = DateTime.Now - DateTime.Parse(dob);
    return age;
}
```
There is a secondary benefit to Claims Based Authentication: you can also use it for authorization. WIF supports the concept of a ClaimsAuthorizationManager, which you can use to authorize access to site resources. Instead of writing your own authorization module, you simply define the rules for access, which is very much a business problem, not a technical one.
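In WIF the real type to derive from is Microsoft.IdentityModel.Claims.ClaimsAuthorizationManager, overriding its CheckAccess method. The self-contained sketch below only illustrates the idea of expressing access as rules over claims rather than as code scattered through the application; all names, resources, and rules are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class AccessRule
{
    public string Resource { get; set; }            // e.g. a URL within the site
    public string RequiredClaimType { get; set; }
    public Func<string, bool> ValueCheck { get; set; }
}

class SimpleAuthorizationManager
{
    readonly List<AccessRule> rules = new List<AccessRule>();

    public void AddRule(string resource, string claimType, Func<string, bool> check)
    {
        rules.Add(new AccessRule { Resource = resource, RequiredClaimType = claimType, ValueCheck = check });
    }

    // Access is granted only if every rule for the resource is satisfied
    // by the presented claims. (For brevity, a resource with no rules is
    // allowed by default; a real system would deny by default.)
    public bool CheckAccess(string resource, IDictionary<string, string> claims)
    {
        return rules.Where(r => r.Resource == resource)
                    .All(r => claims.ContainsKey(r.RequiredClaimType)
                           && r.ValueCheck(claims[r.RequiredClaimType]));
    }
}

class Program
{
    static void Main()
    {
        var authz = new SimpleAuthorizationManager();

        // Business rule, not technical: only users 19 or older may access /bar.
        authz.AddRule("/bar", "dateofbirth",
            dob => (DateTime.Now - DateTime.Parse(dob)).TotalDays >= 19 * 365.25);

        var claims = new Dictionary<string, string> { { "dateofbirth", "1985-06-15" } };
        Console.WriteLine(authz.CheckAccess("/bar", claims)); // True
    }
}
```

The point is that the rule ("must be 19") lives in one place and is stated in business terms, while enforcement stays in the privileged module.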
Once authentication and authorization are dealt with, the final two architectural problems revolve around privacy and cryptography.
Privacy is the control of Personally Identifiable Information (PII), which is defined as anything you can use to personally identify someone (good definition, huh?). This can include information like Social Insurance Numbers, addresses, phone numbers, etc. The easiest solution is simply not to use the information: don't ask for it, and don't store it anywhere. Since that isn't always possible, the goal should be to use (and request) as little as possible. Once you have no more use for the information, delete it.
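One common minimization technique, sketched below under the assumption that you only need the SIN to correlate records (never to recover it): store a keyed hash of the number instead of the number itself. All names here are illustrative, not from any particular library.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class PiiMinimizer
{
    // Store this token instead of the raw SIN. Equal SINs produce equal
    // tokens (so records can still be correlated), but the number itself
    // is never persisted. CAVEAT: SINs are low-entropy, so the key must
    // itself be kept secret or the tokens can be brute-forced.
    public static string CorrelationToken(string sin, byte[] secretKey)
    {
        using (var hmac = new HMACSHA256(secretKey))
        {
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(sin)));
        }
    }
}

class Program
{
    static void Main()
    {
        byte[] key = Encoding.UTF8.GetBytes("keep-this-secret"); // hypothetical key material
        string t1 = PiiMinimizer.CorrelationToken("123 456 789", key);
        string t2 = PiiMinimizer.CorrelationToken("123 456 789", key);
        Console.WriteLine(t1 == t2); // True -- same SIN correlates without being stored
    }
}
```

If the business process ever needs the raw number back, this technique doesn't apply; that's why minimization decisions are domain-specific.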
This is a highly domain-specific problem and it can't be solved in a general discussion on architecture and design. Microsoft Research has an interesting solution to this problem by using a new language designed specifically for defining the privacy policies for an application:
Preferences and policies are specified in terms of granted rights and required obligations, expressed as assertions and queries in an instance of SecPAL (a language originally developed for decentralized authorization). This paper further presents a formal definition of satisfaction between a policy and a preference, and a satisfaction checking algorithm. Based on the latter, a protocol is described for disclosing PIIs between users and services, as well as between third-party services.
Privacy is also a measure of access to information in a system. Authentication and authorization are a core component of proper privacy controls. There needs to be access control on user information. Further, access to this information needs to be audited. Anytime someone reads, updates, or deletes personal information, it should be recorded somewhere for review later. There are quite a number of logging frameworks available, such as log4net or ELMAH.
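The audit requirement is easiest to meet by routing every access to personal information through a single choke point. A minimal sketch, where the audit sink is a plain delegate standing in for a real framework like log4net, and all type names are hypothetical:

```csharp
using System;
using System.Collections.Generic;

class AuditedUserStore
{
    readonly Dictionary<string, string> phoneNumbers = new Dictionary<string, string>();
    readonly Action<string> audit;   // sink: log4net, ELMAH, a database table, etc.

    public AuditedUserStore(Action<string> auditSink) { audit = auditSink; }

    public void SetPhoneNumber(string user, string accessedBy, string number)
    {
        // Every write of PII is recorded: who, what, when.
        audit(string.Format("{0:o} UPDATE phone of {1} by {2}", DateTime.UtcNow, user, accessedBy));
        phoneNumbers[user] = number;
    }

    public string GetPhoneNumber(string user, string accessedBy)
    {
        // Reads are audited too -- looking at PII is itself an event of interest.
        audit(string.Format("{0:o} READ phone of {1} by {2}", DateTime.UtcNow, user, accessedBy));
        return phoneNumbers[user];
    }
}

class Program
{
    static void Main()
    {
        var entries = new List<string>();
        var store = new AuditedUserStore(entries.Add);

        store.SetPhoneNumber("alice", "admin", "555-0100");
        store.GetPhoneNumber("alice", "support");

        Console.WriteLine(entries.Count); // 2 -- both the update and the read were audited
    }
}
```

Because there is exactly one path to the data, nobody can read or change it without leaving a record behind.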
Finally there is cryptography.
For the love of all things holy, do not do any custom crypto work. Rely on publicly vetted libraries like Bouncy Castle and formats like OpenPGP. There are special circles of hell reserved for those who try to write their own crypto algorithms for production systems. Actually, this is true for anything security related.
Be aware of how you are storing your private keys:
- Don't store them in the application as magic strings, or with the application at all.
- If possible, store them in a Hardware Security Module (HSM).
- Make sure you have proper access control policies for the private keys.
- Centralize all crypto functions so different modules aren't using their own implementations.
Finally, if you have to write custom encryption wrappers, make sure your code can switch encryption algorithms without requiring recompilation. The .NET platform makes this easy: you can specify the algorithm by name as a string:
```csharp
public static byte[] SymmetricEncrypt(byte[] plainText, byte[] initVector, byte[] keyBytes)
{
    if (plainText == null || plainText.Length == 0)
        throw new ArgumentNullException("plainText");

    if (initVector == null || initVector.Length == 0)
        throw new ArgumentNullException("initVector");

    if (keyBytes == null || keyBytes.Length == 0)
        throw new ArgumentNullException("keyBytes");

    using (SymmetricAlgorithm symmetricKey
        = SymmetricAlgorithm.Create("algorithm")) // e.g. "AES"; ideally read from config
    {
        return CryptoTransform(plainText, symmetricKey.CreateEncryptor(keyBytes, initVector));
    }
}

private static byte[] CryptoTransform(byte[] payload, ICryptoTransform transform)
{
    using (MemoryStream memoryStream = new MemoryStream())
    {
        using (CryptoStream cryptoStream
            = new CryptoStream(memoryStream, transform, CryptoStreamMode.Write))
        {
            cryptoStream.Write(payload, 0, payload.Length);
            cryptoStream.FlushFinalBlock(); // flush the final padded block
        }

        return memoryStream.ToArray();
    }
}
```
Microsoft provides a list of all supported algorithms, as well as how to specify new algorithms for future use.
By following these design guidelines you should have a fairly secure foundation for your application. Now let's look at unprivileged modules.
In Part 2 there was a single, all-encompassing, hopefully self-evident solution to most of the vulnerabilities: sanitize your inputs. Most vulnerabilities, one way or another, are the result of bad input. This is therefore going to be a very short section.
- Don't let input touch queries directly. Build business objects around the data, encode any strings, parse all incoming data, and fail gracefully on bad input.
- Lock down access to resources through the authorization mechanisms in privileged modules, and audit all authorization requests.
- If encryption is required, call into a privileged module.
- Finally, validate the need for SSL; if it's necessary, force it.
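"Parse all incoming data, fail gracefully" can be sketched as follows; the mortgage-amount limits are hypothetical business rules, not anything prescribed:

```csharp
using System;

class InputValidation
{
    // Parse untrusted input into a typed value instead of passing the
    // raw string onward; reject anything outside business limits.
    public static bool TryParseAmount(string input, out decimal amount)
    {
        // TryParse fails gracefully -- no exception on garbage input.
        if (!decimal.TryParse(input, out amount))
            return false;

        // Hypothetical business-rule bounds for a mortgage amount.
        if (amount <= 0 || amount > 10000000m)
        {
            amount = 0;
            return false;
        }

        return true;
    }
}

class Program
{
    static void Main()
    {
        decimal a;
        Console.WriteLine(InputValidation.TryParseAmount("250000", out a));      // True
        Console.WriteLine(InputValidation.TryParseAmount("' OR 1=1 --", out a)); // False
        Console.WriteLine(InputValidation.TryParseAmount("-5", out a));          // False
    }
}
```

Combined with parameterized queries (SqlParameter) instead of string concatenation, this keeps input from ever touching a query directly.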
Bugs introduced during this phase of development are the most costly and the hardest to fix, which is why design and architecture are the most critical steps in making sure an application isn't vulnerable to attack.
Throughout the first four articles in this series we've looked at how to develop a secure application. In the final article, we will look at how to respond to threats and mitigate the damage done.