Six Simple Development Rules (for Writing Secure Code)

I wish I could say that I came up with this list, but alas I did not.  I came across it this morning on the Assessment, Consulting & Engineering Team blog from Microsoft.  They are a core part of Microsoft's internal IT security group, and they provide resources for internal and external software developers.  These six rules are key to developing secure applications, and they should be followed at all times.

Personally, I try to follow the rules closely, and I am working hard at creating an SDL for our department.  Aside from Rule 1, you could treat each rule as a checklist item to run through when you design an application or, at the latest, when you sign it off for production.

--

Rule #1: Implement a Secure Development Lifecycle in your organization.

This includes the following activities:

  • Train your developers and testers in secure development and secure testing, respectively
  • Establish a team of security experts to be the ‘go to’ group when people want advice on security
  • Implement Threat Modeling in your development process. If you do nothing else, do this!
  • Implement Automatic and Manual Code Reviews for your in-house written applications
  • Ensure you have ‘Right to Inspect’ clauses in your contracts with vendors and third parties that are producing software for you
  • Have your testers include basic security testing in their standard testing practices
  • Do deployment reviews and hardening exercises for your systems
  • Have an emergency response process in place and keep it updated

If you want some good information on doing this, email me and check out this link:
http://www.microsoft.com/sdl

Rule #2: Implement a centralized input validation system (CIVS) in your organization.

A CIVS is designed to perform common validation on commonly accepted input values. Let's face it: as much as we'd all like to believe that we are the only ones doing things like registering users or recording data from visitors, it's actually all the same thing.

When you receive data it will very likely be an integer, decimal, phone number, date, URI, email address, post code, or string. The values and formats of the first seven of those are very predictable. Strings are a bit harder to deal with, but they can still be validated against known good values. Always remember to check for the three F's: Form, Fit and Function (a quick sketch of these checks in code follows the list below).

  • Form: Is the data the type of data you expect? If you are expecting a quantity, is the data an integer? Always cast data to a strong type as soon as possible to help determine this.
  • Fit: Is the data the right length/size? Will the data fit in the buffer you allocated (including any trailing nulls, if applicable)? If you are expecting an Int32 or a Short, make sure you didn't get an Int64 value. Did you get a positive integer for a quantity rather than a negative one?
  • Function: Can the data you received be used for the purpose it was intended? If you receive a date, is the date value in the right range? If you received an integer to be used as an index, is it in the right range? If you received an int as a value for an Enum, does it match a legitimate Enum value?
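
Here is what those three checks might look like in C#. This is only a rough sketch; the quantity bounds and the OrderStatus enum are made-up examples, not part of any particular CIVS:

using System;

public enum OrderStatus { Pending = 1, Shipped = 2, Cancelled = 3 }   // illustrative enum

public static class QuantityValidator
{
    public static bool TryValidate(string rawQuantity, string rawStatus,
                                   out int quantity, out OrderStatus status)
    {
        quantity = 0;
        status = default(OrderStatus);

        // Form: cast to a strong type as early as possible. Is it an integer at all?
        if (!int.TryParse(rawQuantity, out quantity))
            return false;

        // Fit: a quantity should be positive and within a sensible upper bound.
        if (quantity < 1 || quantity > 10000)
            return false;

        // Form again for the status value...
        int statusValue;
        if (!int.TryParse(rawStatus, out statusValue))
            return false;

        // Function: does the value map to a legitimate OrderStatus member?
        if (!Enum.IsDefined(typeof(OrderStatus), statusValue))
            return false;

        status = (OrderStatus)statusValue;
        return true;
    }
}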

In the vast majority of cases, string data being sent to an application will be 0-9, a-z, A-Z. In some cases, such as names or currencies, you may want to allow characters like -, $, % and '. You will almost never need characters like < > { } or [ ], unless you have a special use case such as http://www.regexlib.com, in which case see Rule #3.
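
For the string cases, a white-list regular expression does most of the heavy lifting. A minimal sketch, assuming a hypothetical name field; the allowed character set is just an example and should be tightened per field:

using System.Text.RegularExpressions;

public static class NameValidator
{
    // White list: letters, digits, spaces, apostrophes and hyphens only,
    // capped at 100 characters. Everything else is rejected outright.
    private static readonly Regex AllowedName =
        new Regex(@"^[a-zA-Z0-9 '\-]{1,100}$", RegexOptions.Compiled);

    public static bool IsValidName(string input)
    {
        return !string.IsNullOrEmpty(input) && AllowedName.IsMatch(input);
    }
}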

You want to build this as a centralized library so that all of the applications in your organization can use it. This means that if you have to fix your phone number validator, everyone gets the fix. By the same token, you have to inspect and scrutinize the crap out of the CIVS to ensure it is not prone to errors and vulnerabilities, because everyone will be relying on it. But applying heavy scrutiny to one centralized library is far better than having to apply that same scrutiny to every single input value of every single application.  You can be fairly confident that as long as applications are using the CIVS, they are doing the right thing.

Fortunately, implementing a CIVS is easy if you start with the Enterprise Library Validation Application Block, a free download from Microsoft that you can use in all of your applications.

Rule #3: Implement input/output encoding for all externally supplied values.

Due to the prevalence of cross-site scripting vulnerabilities, you need to encode any values that came from an outside source and that you may display back to the browser (including embedded browsers in thick-client applications). The encoding takes potentially dangerous characters like < or > and converts them into their HTML, HTTP, or URL-encoded equivalents.

For example, if you were to HTML encode <script>alert('XSS Bug')</script> it would look like: &lt;script&gt;alert('XSS Bug')&lt;/script&gt;. A lot of this functionality is built into the .NET Framework. For example, the code to do the above looks like:

Server.HtmlEncode("<script>alert('XSS Bug')</script>");

However, it is important to know that Server.HtmlEncode only encodes about four of the nasty characters you might encounter. It's better to use a more 'industrial strength' library like the Anti-Cross Site Scripting (AntiXSS) library, another free download from Microsoft. This library does a lot more encoding and will do HTML and URL encoding based on a white list. The above encoding would look like this with AntiXSS:

using Microsoft.Security.Application;
AntiXss.HtmlEncode("<script>alert('XSS Bug')</script>");
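
The other habit to build is picking the encoder that matches the output context. As a rough sketch, and assuming the HtmlAttributeEncode and UrlEncode methods exposed by the AntiXSS versions I have used, it might look like this inside an ASP.NET page:

using Microsoft.Security.Application;

string userInput = Request.QueryString["q"];                   // untrusted external value

string safeHtml      = AntiXss.HtmlEncode(userInput);          // going into HTML body text
string safeAttribute = AntiXss.HtmlAttributeEncode(userInput); // going into an attribute value
string safeUrlPart   = AntiXss.UrlEncode(userInput);           // going into a URL you build yourself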

You can also run a neat test system that a friend of mine developed to test your application for XSS vulnerabilities in its outputs. It is aptly named XSS Attack Tool.

Rule #4: Abandon Dynamic SQL

There is no reason you should be using dynamic SQL in your applications anymore. If your database does not support parameterized stored procedures in one form or another, get a new database.

Dynamic SQL is when developers build a SQL query in code and then submit it to the database to be executed as a string, rather than calling a stored procedure and feeding it the values. It usually looks something like this:

(for you VB fans)

dim sql
sql = "Select ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = "
sql = sql & request.querystring("ArticleID")   ' untrusted query-string value concatenated straight into the SQL
set results = objConn.execute(sql)

In fact, this article from 2001 is chock full of what NOT to do, including dynamic SQL in a stored procedure.

Here is an example of a stored procedure that is vulnerable to SQL Injection:

Create Procedure GenericTableSelect @TableName VarChar(100)
AS
Declare @SQL VarChar(1000)
SELECT @SQL = 'SELECT * FROM '
SELECT @SQL = @SQL + @TableName
Exec ( @SQL)
GO

See this article for a look at using Parameterized Stored Procedures.
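
For comparison, here is a minimal sketch of the safe pattern from C# with ADO.NET. The stored procedure name (dbo.GetArticle) and the connectionString variable are illustrative; the point is that the value travels to the database as a typed parameter rather than as part of the SQL text:

using System.Data;
using System.Data.SqlClient;

// Validate the incoming value first (see Rule #2), then pass it as a parameter.
int articleId = int.Parse(Request.QueryString["ArticleID"]);

using (SqlConnection conn = new SqlConnection(connectionString))   // connectionString defined elsewhere, e.g. web.config
using (SqlCommand cmd = new SqlCommand("dbo.GetArticle", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@ArticleID", SqlDbType.Int).Value = articleId;

    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            string title = (string)reader["ArticleTitle"];
            string body  = (string)reader["ArticleBody"];
            // ... render the article ...
        }
    }
}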

Rule #5: Properly architect your applications for scalability and failover

Applications can be brought down by a simple crash. Or a not so simple one. Architecting your applications so that they can scale easily, vertically or horizontally, and so that they are fault tolerant will give you a lot of breathing room.

Keep in mind that fault tolerance is not just another way of saying the application restarts when it crashes. It means you have a proper exception-handling hierarchy built into the application, and that the application can handle situations that result in a server failover. This is usually where session management comes in.
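
As a rough illustration of what an exception-handling hierarchy means in practice (SaveOrder, Logger and ShowRetryMessage are hypothetical names): catch the specific failures you actually know how to handle, and let everything else bubble up to a global handler that logs it and fails the request gracefully.

using System.Data.SqlClient;

try
{
    SaveOrder(order);                       // hypothetical data-access call
}
catch (SqlException ex)
{
    // A specific, expected failure: log it and give the user a friendly
    // retry message instead of letting the whole application fall over.
    Logger.Error("Order save failed", ex);  // hypothetical logging helper
    ShowRetryMessage();
}
// Anything unexpected is NOT swallowed here; it bubbles up to a global
// handler (e.g. Application_Error) that logs the details and fails safely.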

The most fault-tolerant session-management solution is to store session state in SQL Server. This also helps avoid the server-affinity issues some applications have.

You will also want a good load balancer up front. This will help distribute load evenly so that, with any luck, you won't run into the failover scenario very often.

And by all means do NOT do what they did on the site at the beginning of this article. Set up your routers and switches to properly shunt bad traffic or DoS traffic, then let your applications handle the input filtering.

Rule #6: Always check the configuration of your production servers

Configuration mistakes are all too common. When you consider that proper server hardening plus a standard out-of-the-box deployment gives you a reasonably secure default, there are clearly a lot of people out there changing settings that shouldn't be changed. You may remember when Bing went down for about 45 minutes; that was due to configuration issues.

To help address this, we have released the Web Application Configuration Analyzer (WACA). This is a free download that you can use on your servers to see if they are configured according to best practice. You can download it at this link.

You should establish a standard operating environment (SOE) for your web servers that is hardened and properly configured. Any variation to that SOE should be scrutinised and go through a very thorough change-control process. Test changes first before turning them loose on the production environment…please.

So, with all that being said, you will be well on your way to stopping the majority of attacks you are likely to encounter on your web applications. Most of the attacks that occur are SQL injection, XSS, and improper-configuration issues, and the above rules will knock out most of them. In fact, input validation is your best friend. Regardless of how many inspecting firewalls and other devices you put in front of it, the application is the only link in the chain that can make an intelligent and informed decision about whether the incoming data is actually legitimate. So put your effort where it will do you the most good.