Part 5: Incident Response Management with Team Foundation Server

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part five of the series, unedited for all to enjoy.

There are only a few certainties in life: death, taxes, me getting this post in late, and one of your applications getting attacked.  Throughout its lifetime an application will undergo a barrage of attacks – especially if it's public facing.  If you followed the SDL, tested properly, coded securely, and managed well, you will have gotten most of the bugs out.

Most.

There will always be bugs in production code, and there will very likely always be a security bug in production code.  Further, if there is a security bug in production code, an attacker will probably find it.  Perhaps the best metric for security is along the lines of mean-time-to-failure.  Or rather, mean-time-to-breach.  All safes for storing valuables are rated by how long they can withstand certain types of attacks – not whether they can, but how long they can.  There is no one single thing we can do to prevent an attack, and we cannot prevent all attacks.  It's just not in the cards.  So, it stands to reason that we should prepare for something bad happening.  The final stage of the SDL requires that an Incident Response Plan is created.  This is the procedure to follow in the event of a vulnerability being found.

In security parlance, there are protocols and procedures.  The majority of the SDL is all protocol.  A protocol is the usual way to do things.  It's the list of steps you follow to accomplish a task that is associated with a normal working condition, e.g. fuzzing a file parser during development.  You follow a set of steps to fuzz something, and you really don't deviate from those steps.  A procedure is when something is different.  A procedure is reactive.  How you respond to a security breach is a procedure.  It's a set of steps, but it's not a normal condition.

An Incident Response Plan (IRP - the procedure) serves a few functions:

  • It has the list of people to contact in the event of the emergency
  • It is the actual list of steps to follow when bad things happen
  • It includes references to other procedures for code written by other teams

This may be one of the more painful parts of the SDL, because it's mostly process over anything else.  Luckily there are two wonderful things from Microsoft that help: Team Foundation Server and the SDL Process Template.  For those of you who just cringed, bear with me.

Microsoft released the MSF-Agile plus Security Development Lifecycle Process Template for VS 2010 (it also takes second place in the longest product name contest) to make the entire SDL process easier for developers.  There is the SDL Process Template for 2008 as well.

It's useful for each stage of the SDL, but we want to take a look at how it can help with managing the IRP.  First though, let's define the IRP.

Emergency Contacts (Incident Response Team)

The contacts usually need to be available 24 hours a day, seven days a week.  These people have a range of functions depending on the severity of the breach:

  • Developer – Someone to comprehend and/or triage the problem
  • Tester – Someone to test and verify any changes
  • Manager – Someone to approve changes that need to be made
  • Marketing/PR – Someone to make a public announcement (if necessary)

Each plan is different for each application and for each organization, so there may be ancillary people involved as well (perhaps an end user to verify data).  Each person isn't necessarily required at each stage of the response, but they still need to be available in the event that something changes.

The Incident Response Plan

Over the years I've written a few Incident Response Plans (never mind that most times I was asked to do it after an attack – you WILL go out and create one after reading this, right?).  Each plan was unique in its own way, but there were commonalities as well.

Each plan should provide the steps to answer a few questions about the vulnerability:

  • How was the vulnerability disclosed?  Did someone attack, or did someone let you know about it?
  • Was the vulnerability found in something you host, or an application that your customers host?
  • Is it an ongoing attack?
  • What was breached?
  • How do you notify your customers about the vulnerability?
  • When do you notify them about the vulnerability?

And each plan should provide the steps to answer a few questions about the fix:

  • If it's an ongoing attack, how do you stop it?
  • How do you test the fix?
  • How do you deploy the fix?
  • How do you notify the public about the fix?

Some of these questions may not be answerable immediately – you may need to wait until a postmortem to answer them. 

This is the high level IRP for example:

  • The Attack – It's already happened
  • Evaluate the state of the systems or products to determine the extent of the vulnerability
    • What was breached?
    • What is the vulnerability?
  • Define the first step to mitigate the threat
    • How do you stop the threat?
    • Design the bug fix
  • Isolate the vulnerabilities if possible
    • Disconnect targeted machine from network
    • Complete forensic backup of system
    • Turn off the targeted machine if hosted
  • Initiate the mitigation plan
    • Develop the bug fix
    • Test the bug fix
  • Alert the necessary people
    • Get Marketing/PR to inform clients of breach (don't forget to tell them about the fix too!)
    • If necessary, inform the proper legal/governmental bodies
  • Deploy any fixes
    • Rebuild any affected systems
    • Deploy patch(es)
    • Reconnect to network
  • Follow up with legal/governmental bodies if prosecution of attacker is necessary
    • Analyze forensic backups of systems
  • Do a postmortem of the attack/vulnerability
    • What went wrong?
    • Why did it go wrong?
    • What went right?
    • Why did it go right?
    • How can this class of attack be mitigated in the future?
    • Are there any other products/systems that would be affected by the same class?

Some of these procedures can be done in parallel, hence the need for people to be on call.

Team Foundation Server

So now that we have a basic plan created, we should make it easy to implement.  The SDL Process Template (mentioned above) creates a set of task lists and bug types within TFS projects that are used to define things like security bugs, SDL-specific tasks, exit criteria, etc..

image

While these can (and should) be used throughout the lifetime of the project, they can also be used to map out the procedures in the IRP.  In fact, a new project creates an entry in Open SDL Tasks to create an Incident Response Team:

image

A bug works well to manage incident responses.

image

Once a bug is created we can link a new task with the bug.

image

And then we can assign a user to the task:

image

Each bug and task is now visible in the Security Exit Criteria query:

image

Once all the items in the Exit Criteria have been met, you can release the patch.

Conclusion

Security is a funny thing. A lot of times you don't think about it until it's too late. Other times you follow the SDL completely, and you still get attacked.

In the last four posts we looked at writing secure software from a pretty high level.  We touched on common vulnerabilities and their mitigations, tools you can use to test for vulnerabilities, some thoughts to apply to architecting the application securely, and finally we looked at how to respond to problems after release.  By no means will these posts automatically make you write secure code, but hopefully they have given you guidance to start understanding what goes into writing secure code.  It's a lot of work, and sometimes it's hard work.

Finally, there is an idea I like to put into the first section of every Incident Response Plan I've written, and I think it applies to writing software securely in general:

Something bad just happened.  This is not the time to panic, nor the time to place blame.  Your goal is to make sure the affected system or application is secured and in working order, and your customers are protected.

Something bad may not have happened yet, and it may not in the future, but it's important to plan accordingly because your goal should be to protect the application, the system, and most importantly, the customer.

Part 4: Secure Architecture

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part four of the series, unedited for all to enjoy.

Before you start to build an application you need to start with a design.  In the last article I stated that bugs introduced at this stage of the process are the most expensive to fix throughout the lifetime of the project.  It is for this reason that we need a good foundation for security, otherwise it'll be expensive (not to mention a pain) to fix later. Keep in mind this isn't an agile-versus-the-world discussion, because no matter what, at some point you still need a design for the application.

Before we can design an application, we need to know that there are two basic types of code/modules:

  • That which is security related; e.g. authentication, crypto, etc.
  • That which is not security related (but should still be secure nonetheless); e.g. CRUD operations.

They can be described as privileged or unprivileged.

Privileged

Whenever a piece of code is written that deals with things like authentication or cryptography, we have to be very careful with it because it should be part of the secure foundation of the application. Privileged code is the authoritative core of the application.  Careless design here will render your application highly vulnerable.  Needless to say, we don't want that.  We want a foundation we can trust.  So, we have three options:

  • Don't write secure code, and be plagued with potential security vulnerabilities.  Less cost, but you cover the risk.
  • Write secure code, test it, have it verified by an outside source.  More cost, you still cover the risk.
  • Make someone else write the secure code.  Range of cost, but you don't cover the risk.

In general, from a cost/risk perspective, our costs and risks decrease as we move from the top of the list to the bottom.  This should therefore be a no-brainer: DO NOT BUILD YOUR OWN PRIVILEGED SECURITY MODULES.  Do not invent a new way of doing things if you don't need to, and do not rewrite modules that have already been vetted by security experts.  This may sound harsh, but seriously, don't.  If you think you might need to, stop thinking.  If you still think you need to, contact a security expert. PLEASE!

This applies to both coding and architecture.  In part 2 we did not come up with a novel way of protecting our inputs; we used well-known libraries or methods.  Now we want to apply the same thinking to the application architecture.

Authentication

Let's start with authentication.

A lot of times an application has a need for user authentication, but its core function has nothing to do with user authentication.  Yes, you might need to authenticate users for your mortgage calculator, but the core function of the application is calculating mortgages, and has very little to do with users.  So why would you put that application in charge of authenticating users?  It seems like a fairly simple argument, but whenever you let your application use something like a SqlMembershipProvider you are letting the application manage authentication.  Not only that, you are letting the application manage the entire identity for the user.  How much of that identity information is duplicated in multiple databases?  Is this really the right way to do things?  Probably not.

From an architectural perspective, we want to create an abstract relationship between the identity of the user and the application.  Everything this application needs to know about this user is part of this identity, and (for the most part) the application is not an authority on any of this information because it's not the job of the application to be the authority.

Let's think about this another way.

Imagine for a moment that you want to get a beer at the bar. In theory the bartender should ask you for proof of age. How do you prove it? Well, one option is to have the bartender cut you in half and count the number of rings, but there could be some problems with that. The other option is for you to write down your birthday on a piece of paper to which the bartender approves or disapproves. The third option is to go to the government, get an ID card, and then present the ID to the bartender.

Some may have laughed at the idea of just writing your birthday on a piece of paper, but this is essentially what is happening when you are authenticating users within your application because it is up to the bartender (or your application) to trust the piece of paper. However, we trust the government's assertion that the birthday on the ID is valid, and the ID is for the person requesting the drink.  The bartender knows nothing about you except your date of birth because that's all the bartender needs to know.  Now, the bartender could store information that they think is important to them, like your favorite drink, but the government doesn't care (as it isn't the authoritative source), so the bartender stores that information in his own way.

Now this raises the question of how you prove your identity/age to the government, or how you authenticate against this external service.  Frankly, it doesn't matter to our application: authentication is the core function of that external service, not of our application.  Our application just needs to trust that the assertion is valid, and trust that the authentication mechanism is secure.

In developer speak, this is called Claims Based Authentication.  A claim is an arbitrary piece of information about an identity, such as age, and is bundled into a collection of claims, to be part of a token.  A Security Token Service (STS) generates the token, and our application consumes it.  It is up to the STS to handle authentication.  Both Claims Based Authentication and the Kerberos Protocol are built around the same model, although they use different terms.  If you are looking for examples, Windows Live/Hotmail use Claims via the WS-Federation protocol.  Google, Facebook, and Twitter use Claims via the OAuth protocol.  Claims are everywhere.

Alright, less talking, more diagramming:

image

The process goes something like this:

  • Go to STS and authenticate (this is usually a web page redirect + the user entering their credentials)
  • The STS tells the user's browser to POST the token to the application
  • The application verifies the token and verifies whether it trusts the STS
  • The Application consumes the token and uses the claims as it sees fit

Now we get back to asking how the heck the STS handles authentication.  The answer is that it depends (ah, the consultant's answer).  The best case scenario is that you use an STS and identity store that already exist.  If you are in an intranet scenario use Active Directory and Active Directory Federation Services (a free STS for Active Directory).  If your application is on the internet use something like Live ID or Google ID, or even Facebook, simplified with Windows Azure Access Control Services.  If you are really in a bind and need to create your own STS, you can do so with the Windows Identity Foundation (WIF).  In fact, use WIF as the identity library in the diagram above.  Making a web application claims-aware involves a process called Federation.  With WIF it's really easy to do.

Accessing claims within the token is straightforward because you are only accessing a single object, the identity within the CurrentPrincipal:

// Assumes WIF (Microsoft.IdentityModel.Claims) plus System.Linq and System.Threading are referenced.
private static TimeSpan GetAge()
{
    // The claims-aware identity hangs off the current principal.
    IClaimsIdentity ident = Thread.CurrentPrincipal.Identity as IClaimsIdentity;

    if (ident == null)
        throw new ApplicationException("Isn't a claims based identity");

    var dobClaims = ident.Claims.Where(c => c.ClaimType == ClaimTypes.DateOfBirth);

    if(!dobClaims.Any())
        throw new ApplicationException("There are no date of birth claims");

    string dob = dobClaims.First().Value;

    TimeSpan age = DateTime.Now - DateTime.Parse(dob);

    return age;
}

There is secondary benefit to Claims Based Authentication.  You can also use it for authorization.  WIF supports the concept of a ClaimsAuthorizationManager, which you can use to authorize access to site resources.  Instead of writing your own authorization module, you are simply defining the rules for access, which is very much a business problem, not technical.
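As a rough illustration (not the definitive way to wire this up), a custom authorization manager might look something like the sketch below.  The class name and the 18-year age rule are made-up examples, and the claim types you check will depend on what your STS issues; the manager itself is registered in the WIF configuration in web.config.

public class MinimumAgeAuthorizationManager : ClaimsAuthorizationManager
{
    // Sketch only: deny access unless the caller has a date-of-birth claim
    // proving they are at least 18. Requires Microsoft.IdentityModel.Claims and System.Linq.
    public override bool CheckAccess(AuthorizationContext context)
    {
        IClaimsIdentity identity = context.Principal.Identity as IClaimsIdentity;

        if (identity == null)
            return false;

        Claim dobClaim = identity.Claims
            .Where(c => c.ClaimType == ClaimTypes.DateOfBirth)
            .FirstOrDefault();

        if (dobClaim == null)
            return false;

        return DateTime.Parse(dobClaim.Value).AddYears(18) <= DateTime.Now;
    }
}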

Once authentication and authorization are dealt with, the two final architectural nightmares revolve around privacy and cryptography.

Privacy

Privacy is the control of Personally Identifiable Information (PII), which is defined as anything you can use to personally identify someone (good definition, huh?).  This can include information like SIN numbers, addresses, phone numbers, etc.  The easiest solution is to simply not use the information.  Don't ask for it and don't store it anywhere.  Since this isn't always possible, the goal should be to use (and request) as little as possible.  Once you have no more uses for the information, delete it.

This is a highly domain-specific problem and it can't be solved in a general discussion on architecture and design.  Microsoft Research has an interesting solution to this problem by using a new language designed specifically for defining the privacy policies for an application:

Preferences and policies are specified in terms of granted rights and required obligations, expressed as assertions and queries in an instance of SecPAL (a language originally developed for decentralized authorization). This paper further presents a formal definition of satisfaction between a policy and a preference, and a satisfaction checking algorithm. Based on the latter, a protocol is described for disclosing PIIs between users and services, as well as between third-party services.

Privacy is also a measure of access to information in a system.  Authentication and authorization are a core component of proper privacy controls.  There needs to be access control on user information.  Further, access to this information needs to be audited.  Anytime someone reads, updates, or deletes personal information, it should be recorded somewhere for review later.  There are quite a number of logging frameworks available, such as log4net or ELMAH.
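To make the auditing idea concrete, here is a minimal sketch using log4net.  The repository shape, the "AuditLog" logger name, and the Customer type are hypothetical stand-ins for the example; only the log4net calls themselves are real API.

using System;
using System.Threading;
using log4net;

public class CustomerRepository
{
    // Hypothetical example: a dedicated audit logger for reads of personal information.
    private static readonly ILog Audit = LogManager.GetLogger("AuditLog");

    public Customer GetCustomer(int customerId)
    {
        // Record who accessed which record, and when, so the access can be reviewed later.
        Audit.InfoFormat("{0} read customer {1} at {2:u}",
            Thread.CurrentPrincipal.Identity.Name, customerId, DateTime.UtcNow);

        return LoadCustomerFromDatabase(customerId);
    }

    private Customer LoadCustomerFromDatabase(int customerId)
    {
        // Actual data access omitted from the sketch.
        throw new NotImplementedException();
    }
}

public class Customer { }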

Cryptography

Finally there is cryptography.

For the love of all things holy, do not do any custom crypto work.  Rely on publicly vetted libraries like Bouncy Castle and formats like OpenPGP.  There are special circles of hell devoted to those who try to write their own crypto algorithms for production systems.  Actually, this is true for anything security related.

  • Be aware of how you are storing your private keys.
  • Don't store them in the application as magic strings, or store them with the application at all.
  • If possible, store them in a Hardware Security Module (HSM).
  • Make sure you have proper access control policies for the private keys.
  • Centralize all crypto functions so different modules aren't using their own implementations.

Finally, if you have to write custom encryption wrappers, make sure your code is capable of switching encryption algorithms without requiring recompilation.  The .NET platform has made it easy to change.  You can specify a string:

public static byte[] SymmetricEncrypt(byte[] plainText, byte[] initVector, byte[] keyBytes)
{
    if (plainText == null || plainText.Length == 0)
        throw new ArgumentNullException("plainText");

    if (initVector == null || initVector.Length == 0)
        throw new ArgumentNullException("initVector");

    if (keyBytes == null || keyBytes.Length == 0)
        throw new ArgumentNullException("keyBytes");

    using (SymmetricAlgorithm symmetricKey = SymmetricAlgorithm.Create("algorithm")) // e.g. "AES"
    {
        return CryptoTransform(plainText, symmetricKey.CreateEncryptor(keyBytes, initVector));
    }
}

private static byte[] CryptoTransform(byte[] payload, ICryptoTransform transform)
{
    using (MemoryStream memoryStream = new MemoryStream())
    using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, CryptoStreamMode.Write))
    {
        cryptoStream.Write(payload, 0, payload.Length);
        cryptoStream.FlushFinalBlock();
        return memoryStream.ToArray();
    }
}

Microsoft provides a list of all supported algorithms, as well as how to specify new algorithms for future use.

By following these design guidelines you should have a fairly secure foundation for your application.  Now let's look at unprivileged modules.

Unprivileged

In Part 2 there was a single, all-encompassing, hopefully self-evident solution to most of the vulnerabilities: sanitize your inputs.  Most vulnerabilities, one way or another, are the result of bad input.  This is therefore going to be a very short section.

Don't let input touch queries directly.  Build business objects around data and encode any strings.  Parse all incoming data.  Fail gracefully on bad input.

Properly lock down access to resources through authorization mechanisms in privileged modules, and audit all authorization requests.

If encryption is required, call into a privileged module.

Finally, validate the need for SSL.  If necessary, force it.
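As a quick, hedged sketch of that last point: in ASP.NET MVC the [RequireHttps] attribute forces an action onto SSL, and in WebForms you can check the connection yourself and redirect rather than fail hard.  The action name and page below are just examples.

// Forcing SSL on a sensitive MVC action ("Transfer" is an example name).
[RequireHttps]
public ActionResult Transfer()
{
    return View();
}

// Or, in WebForms, check the connection yourself and redirect to the HTTPS version.
protected void Page_Load(object sender, EventArgs e)
{
    if (!Request.IsSecureConnection)
    {
        Response.Redirect(Request.Url.ToString().Replace("http://", "https://"), true);
    }
}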

Final Thoughts

Bugs introduced during this phase of development are the most costly and hardest to fix, because the design and architecture of the application is the most critical step in making sure it isn't vulnerable to attack.

Throughout the first four articles in this series we've looked at how to develop a secure application.  In the last article, we will look at how to respond to threats and mitigate the damage done.

Part 2: Vulnerability Deep Dive

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities. This is part two of the series, unedited for all to enjoy.

Know your enemy.

In the previous post I stated that knowledge is key to writing secure code:

Perhaps the most important aspect of the SDL is that it's important to have a good foundation of knowledge of security vulnerabilities.

In order to truly protect our applications and the data within, we need to know about the threats in the real world. The OWASP top 10 list gives us a good starting point in understanding some of the more common vulnerabilities that malicious users can use for attack:

  • Injection
  • Cross-Site Scripting (XSS)
  • Broken Authentication and Session Management
  • Insecure Direct Object References
  • Cross-Site Request Forgery (CSRF)
  • Security Misconfiguration
  • Insecure Cryptographic Storage
  • Failure to Restrict URL Access
  • Insufficient Transport Layer Protection
  • Unvalidated Redirects and Forwards

It's important to realize that this list is strictly for the web – we aren't talking client applications, though some do run into the same problems.  I've chosen this list for one reason: most of us really only develop web-based applications.  It's also helpful to keep in mind that we aren't talking just Microsoft-centric vulnerabilities.  These also exist in applications running on the LAMP (Linux/Apache/MySQL/PHP) stack and variants in between.  Finally, it's very important to note that just following these instructions won't automatically give you a secure code base – these are just primers in some ways of writing secure code.

In the 4th part of this series we'll dig into some of the architectural options we have to mitigate a few of the above vulnerabilities, but in this part we are going to go through a few items and discuss some of the frameworks available to us to help reduce or mitigate the vulnerabilities.

Injection

Injection is a way of changing the flow of a procedure by introducing arbitrary changes. An example is SQL injection. Hopefully by now everyone has heard of SQL injection, but let's take a look at this bit of code for those who don't know about it:

string query = string.Format("SELECT * FROM UserStore WHERE UserName = '{0}' AND PasswordHash = '{1}'", username, password);

If we passed it into a SqlCommand we could use it to see whether or not a user exists, and whether or not their hashed password matches the one in the table. If so, they are authenticated. Well what happens if I enter something other than my username? What if I enter this:

'; --

It would modify the SQL Query to be this:

SELECT * FROM UserStore WHERE UserName = ''; -- AND PasswordHash = 'TXlQYXNzd29yZA=='

This has essentially broken the query into a single WHERE clause, asking for a user with a blank username because the single quote closed the parameter, the semicolon finished the executing statement, and the double dash made anything following it into a comment.

Hopefully your user table doesn't contain any blank records, so let's extend that a bit:

' OR 1=1; --

We've now appended a new clause, so the query looks for records with a blank username OR where 1=1. Since 1 always equals 1, it will return true, and since the query looks for any filter that returns true, it returns every record in the table.

If our SqlCommand just looked for at least one record in the query set, the user is authenticated. Needless to say, this is bad.

We could go one step further and log in as a specific user:

administrator';  --

We've now modified the query in such a way that it is just looking for a user with a particular username, such as the administrator.  It only took four characters to bypass a password and log in as the administrator.

Injection can also work in a number of other places such as when you are querying Active Directory or WMI. It doesn't just have to be for authentication either. Imagine if you have a basic report query that returns a large query set. If the attacker can manipulate the query, they could read data they shouldn't be allowed to read, or worse yet they could modify or delete the data.

Essentially our problem is that we don't sanitize our inputs.  If a user is allowed to enter any value they want into the system, they could potentially cause unexpected things to occur.  The solution is simple: sanitize the inputs!

If we use a SqlCommand object to execute our query above, we can use parameters.  We can write something like:

string query = "SELECT * FROM UserStore WHERE UserName = @username AND PasswordHash = @passwordHash";
            
SqlCommand c = new SqlCommand(query);
c.Parameters.Add(new SqlParameter("@username", username));
c.Parameters.Add(new SqlParameter("@passwordHash", passwordHash));

This does two things.  One, it makes .NET handle the string manipulation, and two, it makes .NET properly sanitize the parameters, so

' OR 1=1; --

is converted to

' '' OR 1=1; --'

In the SQL language, two single quote characters act as an escape sequence for a single quote, so in effect the query looks for the literal value as entered, quote included.

The other option is to use a commercially available Object Relational Mapper (ORM) like the Entity Framework or NHibernate where you don't have to write error-prone SQL queries.  You could write something like this with LINQ:

var users = from u in entities.UserStore where u.UserName == username && u.PasswordHash == passwordHash select u;

It looks like a SQL query, but it's compilable C#.  It solves our problem by abstracting away the ungodly mess that is SqlCommands, DataReaders, and DataSets.

Cross-Site Scripting (XSS)

XSS is a way of adding a chunk of malicious JavaScript into a page via flaws in the website. This JavaScript could do a number of different things such as read the contents of your session cookie and send it off to a rogue server. The person in control of the server can then use the cookie and browse the affected site with your session. In 2007, 80% of the reported vulnerabilities on the web were from XSS.

XSS is generally the result of not properly validating user input. Conceptually it usually works this way:

  1. A query string parameter contains a value: ?q=blah
  2. This value is outputted on the page
  3. A malicious user notices this
  4. The malicious user inserts a chunk of JavaScript into the URL parameter: ?q=<script>alert("pwned");</script>
  5. This script is outputted without change to the page, and the JavaScript is executed
  6. The malicious user sends the URL to an unsuspecting user
  7. The user clicks on it while logged into the website
  8. The JavaScript reads the cookie and sends it to the malicious user's server
  9. The malicious user now has the unsuspecting user's authenticated session

This occurs because we don't sanitize user input. We don't remove or encode the script so it can't execute.  We therefore need to encode the inputted data.  It's all about the sanitized inputs.

The basic problem is that we want to display the content the user submitted on a page, but the content can be potentially executable.  Well, how do we display JavaScript textually?  We encode each character with HtmlEncode, so the < (left angle bracket) is outputted as &lt; and the > (right angle bracket) is outputted as &gt;.  In .NET you have some helpers in the HttpUtility class:

HttpUtility.HtmlEncode("<script>alert(\"hello!\");</script>");

This works fairly well, but you can bypass it by doing multiple layers of encoding (encoding an encoded value that was encoded with another formula).  This problem exists because HtmlEncode uses a blacklist of characters, so whenever it comes across a specific character it will encode it.  We want it to do the opposite – use a whitelist.  So whenever it comes across a known character it doesn't encode it, such as the letter 'a', otherwise it encodes everything else.  It's generally far easier to protect something if you only allow known good things instead of blocking known threats (because threats are constantly changing).

Microsoft released a toolkit to solve this encoding problem, called the AntiXss toolkit.  It's now part of the Microsoft Web Protection Library, which also actually contains some bits to help solve the SQL injection problem.  To use this encoder, you just need to do something like this:

string encodedValue = Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(userInput);

There is another step, which is to set the cookie to server-only, meaning that client side scripts cannot read the contents of the cookie.  Only newer browsers support this, but all we have to do is write something like this:

HttpCookie cookie = new HttpCookie("name", "value");
cookie.HttpOnly = true;

For added benefit while we are dealing with cookies, we can also do this:

cookie.Secure = true;

Setting Secure to true requires that the cookie only be sent over HTTPS.

This should be the last step in the output.  There shouldn't be any tweaking to the text or cookie after this point.  Call it the last line of defense on the server-side.

Cross-Site Request Forgery (CSRF)

Imagine a web form that has a couple fields on it – sensitive fields, say money transfer fields: account to, amount, transaction date, etc. You need to log in, fill in the details, and click submit. That submit POSTs the data back to the server, and the server processes it. In ASP.NET WebForms, the only validation that goes on is whether the ViewState has been tampered with.  Other web frameworks skip the ViewState bit, because, well, they don't have a ViewState.

Now consider that you are still logged in to that site, and someone sends you a link to a funny picture of a cat. Yay, kittehs! Anyway, on that page is a simple set of hidden form tags with malicious data in it. Something like their account number, and an obscene number for the cash transfer. On page load, JavaScript POSTs that form data to the transfer page, and since you are already logged in, the server accepts it. Sneaky.

There is actually a pretty elegant way of solving this problem.  We need to create a value that changes on every page request, and send it as part of the response.  Once the server receives the response, it validates the value and if it's bad it throws an exception.  In the cryptography world, this is called a nonce.  In ASP.NET WebForms we can solve this problem by encrypting the ViewState.  We just need a bit of code like this in the page (or masterpage):

void Page_Init (object sender, EventArgs e) 
{ 
    ViewStateUserKey = Session.SessionID; 
}

When we set the ViewStateUserKey property on the page, the ViewState is encrypted based on this key.  This key is only valid for the length of the session, so this does two things.  First, since the ViewState is encrypted, the malicious user cannot modify their version of the ViewState since they don't know the key.  Second, if they use an unmodified version of a ViewState, the server will throw an exception since the victim's UserKey doesn't match the key used to encrypt the initial ViewState, and the ViewState parser doesn't understand the value that was decrypted with the wrong key.  Using this piece of code depends entirely on whether or not you have properly set up session state though.  To get around that, we need to set the key to a cryptographically random value that is only valid for the length of the session, and is only known on the server side.  We could for instance use the modifications we made to the cookie in the XSS section, and store the key in there.  It gets passed to the client, but client script can't access it.  This places a VERY high risk on the user though, because this security depends entirely on the browser version.  It also means that any malware installed on the client can potentially read the cookie – though the user has bigger problems if they have a virus. 
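For illustration only, a rough sketch of that cookie-backed approach might look like the following.  The cookie name and the 16-byte key size are arbitrary choices for the example, and you would want to weigh the browser-support caveats above before relying on it.

void Page_Init(object sender, EventArgs e)
{
    // Sketch: back ViewStateUserKey with a random, HttpOnly cookie instead of session state.
    // Requires System.Web and System.Security.Cryptography.
    HttpCookie keyCookie = Request.Cookies["__AntiCsrfKey"];

    if (keyCookie == null)
    {
        byte[] key = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(key); // cryptographically random, server-generated

        keyCookie = new HttpCookie("__AntiCsrfKey", Convert.ToBase64String(key));
        keyCookie.HttpOnly = true;  // client script can't read it
        keyCookie.Secure = true;    // only sent over HTTPS

        Response.Cookies.Add(keyCookie);
    }

    ViewStateUserKey = keyCookie.Value;
}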

Security is complex, huh?  Anyway…

In MVC we can do something similar except we use the Html.AntiForgeryToken().

This is a two step process.  First we need to update the Action method(s) by adding the ValidateAntiForgeryToken attribute to the method:

[AcceptVerbs(HttpVerbs.Post)]
[ValidateAntiForgeryToken]
public ActionResult Transfer(WireTransfer transfer)
{
    try
    {
        if (!ModelState.IsValid)
            return View(transfer); 

        context.WireTransfers.Add(transfer);
        context.SubmitChanges();

        return RedirectToAction("Transfers");
    }
    catch
    {
        return View(transfer);
    }
}

Then we need to add the AntiForgeryToken to the page:

<%= Html.AntiForgeryToken() %>

This helper will output a nonce that gets checked by the ValidateAntiForgeryToken attribute.

Insecure Cryptographic Storage

I think it's safe to say that most of us get cryptography-related stuff wrong most of the time at first. I certainly do. Mainly because crypto is fricken hard to do properly. If you noticed above in my SQL Injection query, I used this value for my hashed password: TXlQYXNzd29yZA==.

It's not actually hashed. It's encoded using Base64 (the double-equals is a dead giveaway). The decoded value is 'MyPassword'. The difference is that hashing is a one-way process: once I hash something (with a cryptographic hash), I can't de-hash it. Second, if I happen to get hold of someone else's user table, I can look for hashed passwords that look the same. If anyone else has "TXlQYXNzd29yZA==" in the table, I know their password is 'MyPassword'. This is where a salt comes in handy. A salt is just a chunk of data appended to the unhashed password, and then hashed. Each user has a unique salt, and therefore will have a unique hash.
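As a minimal sketch (not a complete password storage scheme), salting and hashing with the framework's PBKDF2 implementation, Rfc2898DeriveBytes, might look like this; the salt size, output size, and iteration count are illustrative values only.

using System;
using System.Security.Cryptography;

public static class PasswordHasher
{
    // Generate a unique, random salt for each user.
    public static byte[] GenerateSalt()
    {
        byte[] salt = new byte[16];
        new RNGCryptoServiceProvider().GetBytes(salt);
        return salt;
    }

    // Derive a one-way hash from the password and salt using PBKDF2.
    // The 10,000 iterations and 32-byte output are illustrative values.
    public static string HashPassword(string password, byte[] salt)
    {
        Rfc2898DeriveBytes pbkdf2 = new Rfc2898DeriveBytes(password, salt, 10000);
        return Convert.ToBase64String(pbkdf2.GetBytes(32));
    }
}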

Then in the last section on CSRF I talked about using a nonce.  Nonces are valuable to the authenticity of a request.  They prevent replay attacks: without a nonce, a repeated request looks identical to the original, so an attacker can simply resend a captured message.  It is extremely important that the attacker not know how this nonce is generated.

Which leads to the question of how you properly secure encryption keys.  I actually sighed a little just now because Jonathan gave me a strict word limit, and I can't even touch on properly securing encryption keys because that is a blog series on its own.

Properly using cryptography in an application is really hard to do. Proper cryptography in an application is a topic fit for a book.

Final Thoughts

In this article we touched on only four of the items in the OWASP top 10 list as they are directly solvable using publicly available frameworks.  The other six items in the list can be solved through the use of tools as well as designing a secure architecture, both of which we will talk about in future posts.

Talking about Security Article Series

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications.  It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities.

Obviously it's not a series on everything you need to know about security, but hopefully it's a starting point.  My goal is to get people to at least start talking about security in their applications.

This is the series:

Part 1: Development Security Basics

Over on the Canadian Solution Developer's blog I have a series on the basics of writing secure applications. It's a bit of an introduction to all the things we should know in order to write software that doesn't contain too many vulnerabilities.  This is part one of the series, unedited for all to enjoy.

Every year or so a software security advocacy group creates a top 10 list of the security flaws developers introduce into their software.  This is something I affectionately refer to as the "stupid things we do when building applications" list.  The group is OWASP (Open Web Application Security Project) and the list is the OWASP Top 10 Project (I have no affiliation with either).  In this article we will dig into some of the ways we can combat the ever-growing list of security flaws in our applications.

Security is a trade off.  We need to balance the requirements of the application with the time and budget constraints of the project.

A lot of times though, nobody has enough forethought to think that security should be a feature, or more importantly, that security should just be a central design requirement for the application regardless of what the time or budget constraints may be (do I sound bitter?).

This of course leads to a funny problem.  What happens when your application gets attacked?

There is no easy way to say it: the developers get blamed.  Or if it's a seriously heinous breach the boss gets arrested because they were accountable for the breach.  In any case it doesn't end well for the organization.

Microsoft had this problem for years – although with a twist, because security has always been important to the company. Windows NT 3.5 was Microsoft's first OS that was successfully evaluated under the TCSEC regime at C2 – government speak for "it met a rigorous set of requirements designed to evaluate the security of a product in high-security environments".  However, this didn't really square with what the news was saying, since so many vulnerabilities were plaguing Windows 2000 and XP.  There was proof through the evaluation that security was important, but reality looked different because of the bugs.

This led to a major change in Microsoft's ways.  In January 2002 Bill Gates sent out a company-wide memorandum stating the need for a new way of doing things. 

The events of last year - from September's terrorist attacks to a number of malicious and highly publicized computer viruses - reminded every one of us how important it is to ensure the integrity and security of our critical infrastructure, whether it's the airlines or computer systems.

The creation of the Trustworthy Computing Initiative was the result.  In short order, Microsoft did a complete 180 on how it developed software.

Part of the problem with writing secure code is that you just can't look for the bugs at the end of a development cycle, fix them, and move on.  It just doesn't work.  Microsoft introduced the Security Development Lifecycle to combat this problem, as it introduced processes during the development lifecycle to aid the developers in writing secure code.

Conceptually it's pretty simple: defense in depth.

Training → Requirements → Design → Implementation → Verification → Release → Response

In order to develop secure applications, we need to adapt our development model to include security requirements from the initial training, all the way through to application release, as well as in how we respond to vulnerabilities after launch.

It's important to include security at the beginning of the development process, otherwise we run into the problem Windows had.

A good chunk of the codebase for Windows Vista was scrapped because too many bugs were found over the course of testing.  Once the Windows team started introducing the SDL into their development process, the number of security bugs found dropped considerably.  This is the reason it took six years to release Windows Vista.  Funny enough, this is partly the reason Windows 7 only took two years to release – fewer security bugs!

Now, Microsoft had a vested interest in writing secure code, so it was an all-or-nothing kind of thing with the SDL.  Companies that haven't made this decision may have considerably more trouble implementing the SDL simply because it costs money to do so.  Luckily we don't have to implement the entire process all at once.

As we move through this series, we'll touch on some of the key aspects of the SDL and how we can fit it into the development lifecycle.

Perhaps the most important aspect of the SDL is that it's important to have a good foundation of knowledge of security vulnerabilities.  This is where the top 10 list from OWASP comes in handy: 

  • Injection
  • Cross-Site Scripting (XSS)
  • Broken Authentication and Session Management
  • Insecure Direct Object References
  • Cross-Site Request Forgery (CSRF)
  • Security Misconfiguration
  • Insecure Cryptographic Storage
  • Failure to Restrict URL Access
  • Insufficient Transport Layer Protection
  • Unvalidated Redirects and Forwards

In the next article in this series we'll take a look at a few of these vulnerabilities up close and some of the libraries available to us to help combat our attackers. Throughout this series we'll also show how different steps in the SDL process can help find and mitigate these vulnerabilities.

In Part III of this series we'll take a look at some of the tools Microsoft has created to aid the process of secure design and analysis.

In Part IV of this series, we'll dig into some of the architectural considerations of developing secure applications.

Finally, to conclude this series we'll take a look at how we can use Team Foundation Server to help us manage incident responses for future vulnerabilities.

Visual Studio TFS Lab Management

One of my ongoing projects is to dive deeply into Visual Studio Team Foundation Server 2010.  TFS is pretty easy to get up and running, but as you get into some of the advanced features like Build Services and Lab Management, it gets kind of tricky.  Luckily there’s a fair bit of guidance from our favorite blue badged company.

On the Lab Management Team Blog there is a 4-part walkthrough on Getting Started with Lab Management in TFS.  Since they were using the RC build of TFS, the walkthrough was pretty spot-on for the RTM build.  Here is the walkthrough:

  1. http://blogs.msdn.com/b/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-1.aspx
  2. http://blogs.msdn.com/b/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-2.aspx
  3. http://blogs.msdn.com/b/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-3.aspx
  4. http://blogs.msdn.com/b/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-4.aspx

If you are looking for test code to try out deployments and testing check out part 3, as it contains a working project.

AntiXss vs HttpUtility – So What?

Earlier today, Cory Fowler suggested I write up a post discussing the differences between the AntiXss library and the methods found in HttpUtility, and how it helps defend against cross site scripting (XSS).  As I was thinking about what to write, it occurred to me that I really had no idea how it did what it did, and why it differed from HttpUtility.  <side-track>I’m kinda wondering how many other people out there run into the same thing?  We are told to use some technology because it does xyz better than abc, but when it comes right down to it, we aren’t quite sure of the internals.  Just a thought for later I suppose. </side-track>

A Quick Refresher

To quickly summarize what XSS is: if you have a textbox on your website that someone can enter text into, and then on another page you display that same text, the user could maliciously add in <script> tags to do anything they want with JavaScript.  This usually results in redirecting to another website that shows advertisements or tries to install malware.

The way to stop this is to not trust any input, and encode any character that could be part of a tag to an HtmlEncode’d entity.

HttpUtility does this though, right?

The HttpUtility class definitely does do this.  However, it is relatively limited in how it encodes possibly malicious text.  It works by encoding specific characters, like the brackets < > to &lt; and &gt;.  This can get tricky because you could theoretically bypass these characters (somehow – speculative).

Enter AntiXss

The AntiXss library works in essentially the opposite manner.  It has a whitelist of allowed characters, and encodes everything else.  These characters are the usual a-z, 0-9, etc.
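As a quick sketch of what that looks like in code (hedging on the library version – older AntiXSS releases expose the static AntiXss class used below, while v4+ moves the same methods onto an Encoder class):

string userInput = "<script>alert('pwned');</script>";

// Whitelist-based encoding: known-safe characters pass through untouched,
// everything else is encoded, so the script renders as harmless text.
string safeOutput = Microsoft.Security.Application.AntiXss.HtmlEncode(userInput);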

Further Reading

I’m not really helping you, dear reader, by reiterating what dozens of people have said before me (and probably said it better), so here are a couple of links that contain loads of information on actually using the AntiXss library and protecting your website from cross site scripting:

WinFS

WinFS has been puttering around my idle thoughts lately. 

Yep, weird.

Why is it still available on MSDN and TechNet subscriptions?

Food for thought.

Visual Studio Step Up Promotion...The Headache

A few months ago some friends of mine at Microsoft told me about a step-up promotion that was going on for the release of Visual Studio 2010.  If you purchased a license for Visual Studio 2008 through Volume Licensing, it would translate into the next version up in the 2010 lineup.  Seems fairly straightforward, but here is the actual process:

vsStepUp

So we upgraded our licenses to benefit from the step up.  Problem was, we couldn’t access any of the applications we were licensed to use (after RTM, obviously).  After a week or so of back and forth with Microsoft we finally got it squared away.  A lot of manual cajoling in the MSDN Sales system, I suspect, took place.  It turns out a lot of people were running into this issue.

Someone told me this issue got elevated to Steve B (not our specific issue, but the step-up issue in general).  I’m curious where things actually went wrong.  I suspect the workflow that was in place at the business level wasn’t in place at the technical level, so everything ended up becoming a manual process.  However, that is purely speculative.  Talk with Steve if you have questions.

In the end, everything worked out.  I got Visual Studio 2010 installed (which fricken rocks, btw), and my productivity will go up immensely once we get TFS deployed.  After, of course, the necessary drop while I’m downloading and playing with the new MSDN subscription.

For those that are interested in the promotion, it’s still valid until the end of April.  Contact your account reps if you are interested.

Visual Studio 2010 RTM!

Earlier this morning, Microsoft launched Visual Studio 2010.  Woohoo!  Here’s the gist:

Watch the Keynote and Channel 9 Live here: http://www.microsoft.com/visualstudio/en-us/watch-it-live

Get the real bits here (if you have an MSDN license): http://msdn.microsoft.com/en-ca/subscriptions/default.aspx

Get the trial bits here:

Get the Express versions here: http://www.microsoft.com/express/

All the important stuff you want/need to know about Visual Studio 2010 development: http://msdn.microsoft.com/en-ca/ff625297.aspx

Enjoy!