AntiXss vs HttpUtility – So What?

Earlier today, Cory Fowler suggested I write up a post discussing the differences between the AntiXss library and the methods found in HttpUtility, and how AntiXss helps defend against cross-site scripting (XSS).  As I was thinking about what to write, it occurred to me that I really had no idea how it did what it did, or why it differed from HttpUtility.  <side-track>I’m kinda wondering how many other people out there run into the same thing?  We are told to use some technology because it does xyz better than abc, but when it comes right down to it, we aren’t quite sure of the internals.  Just a thought for later I suppose.</side-track>

A Quick Refresher

To quickly summarize what XSS is: if you have a textbox on your website that someone can enter text into, and that text is then displayed on another page, a malicious user could add <script> tags to do anything they wanted with JavaScript.  This usually results in redirecting to another website that shows advertisements or tries to install malware.

The way to stop this is to not trust any input, and to encode any character that could be part of a tag as an HTML-encoded entity.

HttpUtility does this though, right?

The HttpUtility class definitely does do this.  However, it is relatively limited in how it encodes potentially malicious text.  It works by encoding specific characters, like the brackets < and >, to &lt; and &gt;.  This can get tricky, because you could theoretically bypass these specific characters (somehow – speculative).
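
To make the difference concrete, here’s a rough sketch of that behaviour using HttpUtility.HtmlEncode.  The class and input string are made up for illustration, and the exact output can vary slightly between framework versions:

using System;
using System.Web;   // requires a reference to the System.Web assembly

class HttpUtilityDemo
{
    static void Main()
    {
        string input = "<script>alert('xss')</script>";

        // HttpUtility only rewrites a handful of known-dangerous characters
        // (angle brackets, ampersands, quotes); everything else passes through as-is.
        string encoded = HttpUtility.HtmlEncode(input);

        Console.WriteLine(encoded);
        // &lt;script&gt;alert('xss')&lt;/script&gt;
        // (newer framework versions may also encode the apostrophes)
    }
}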

Enter AntiXss

The AntiXss library works in essentially the opposite manner.  It has a white-list of allowed characters, and encodes everything else.  The allowed characters are the usual a-z, 0-9, etc.
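
For comparison, here’s a sketch using the AntiXss encoder.  I’m assuming the Microsoft.Security.Application namespace from the AntiXSS library; in the older releases the static class is AntiXss, while the 4.x releases expose the same idea through Encoder.HtmlEncode.  The point is only that anything outside the white-list gets encoded:

using System;
using Microsoft.Security.Application;   // from the AntiXSS library (assumed reference)

class AntiXssDemo
{
    static void Main()
    {
        string input = "<script>alert('xss')</script>";

        // White-list approach: a-z, A-Z, 0-9 and a few known-safe characters are
        // left alone; every other character is encoded.
        // (In AntiXSS 4.x this call is Encoder.HtmlEncode instead.)
        string encoded = AntiXss.HtmlEncode(input);

        Console.WriteLine(encoded);
    }
}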

Further Reading

I’m not really doing you, dear reader, any help by reiterating what dozens of people have said before me (and probably said it better), so here are a couple of links that contain loads of information on actually using the AntiXss library and protecting your website from cross-site scripting:

WinFS

WinFS has been puttering around my idle thoughts lately. 

Yep, weird.

Why is it still available on MSDN and TechNet subscriptions?

Food for thought.

Visual Studio Step Up Promotion...The Headache

A few months ago some friends of mine at Microsoft told me about a step-up promotion that was going on for the release of Visual Studio 2010.  If you purchased a license for Visual Studio 2008 through Volume Licensing, it would translate into the next edition up in the 2010 lineup.  Seems fairly straightforward, but here is the actual process:

[Image: vsStepUp]

So we upgraded our licenses to benefit from the step up.  The problem was, we couldn’t access any of the applications we were licensed to use (after RTM, obviously).  After a week or so of back and forth with Microsoft we finally got it squared away.  A lot of manual cajoling in the MSDN Sales system took place, I suspect.  It turns out a lot of people were running into this issue.

Someone told me this issue got elevated to Steve B (not our specific issue, but the step-up issue in general).  I’m curious where things actually went wrong.  I suspect the workflow that was in place at the business level wasn’t in place at the technical level, so everything ended up becoming a manual process.  However, that is purely speculative.  Talk with Steve if you have questions.

In the end, everything worked out.  I got Visual Studio 2010 installed (which fricken rocks, btw), and my productivity will go up immensely once we get TFS deployed.  After, of course, it takes the necessary dip while I’m downloading and playing with the new MSDN subscription.

For those who are interested, the promotion is still valid until the end of April.  Contact your account rep for details.

A Launch Event For Visual Studio

About 10 minutes ago I was told to sit down and open my laptop.  The event is about to start, and the place is extremely crowded.  Barnaby Jeans is on stage doing the usual intro.  Twitter, blog, twit tweet tweet tweet #vs2010.

Sean Graglia hits the stage.  Today VS2010 and .NET 4.0 were released.  It makes you a little nostalgic about the old days of development.  The way things were done, the way teams worked, the way we tested.  It’s changed quite a bit since a quarter century ago.  We used to be proud of a user interface that worked on an 80x24 printout.  Notsomuch anymore.

We now need a new way of doing things.  Visual Studio 2010 can help us with this.  Whether we want to deliver to conventional devices, or things like the phone, or even the cloud, Visual Studio can help us from inception to release.

So here we have Keith Yedlin from Corp, up now to discuss the value of 2010.  First up is multi-CPU.  .NET 4.0 makes it much easier to multi-thread processes for faster processing.  That’s actually pretty cool.  Multi-threading applications is, for lack of a better word, terrible.  It’s hard to do natively, it’s hard to do in managed code, and it’s even harder to figure out what’s going on after the fact.
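
The piece that makes this easier is the new Task Parallel Library in .NET 4.0.  As a rough sketch (the work being parallelized here is entirely made up for illustration):

using System;
using System.Threading.Tasks;   // new in .NET 4.0

class ParallelDemo
{
    static void Main()
    {
        int[] results = new int[100];

        // Parallel.For splits the iterations across the available cores for us,
        // instead of us spinning up and coordinating threads by hand.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = ExpensiveComputation(i);
        });

        Console.WriteLine("Done: " + results[99]);
    }

    // Stand-in for whatever CPU-bound work you actually need to do.
    static int ExpensiveComputation(int i)
    {
        return i * i;
    }
}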

Now, that brings up an interesting thought.  Debugging can be tricky.  Visual Studio 2010 has introduced something pretty awesome: you can now create trace files for the debugger, and while that’s not all that interesting on its own, you can take a trace file from a remote PC, load it back into the debugger, and run through exactly what happened.  That is pretty fricken awesome.

Team Foundation Server

TFS was first released in 2005.  It was tricky to use, and the learning curve was fairly high.  Oh, and it was bloody expensive.  2010 has changed that entirely.  It’s now easier to use, easier to manage, and cost is much more reasonable (read: free version - FTW).  This can (and does) easily translate into faster time to market, more bug/test coverage, and overall better code.  Win-win-win.

More to come.

Visual Studio 2010 RTM!

Earlier this morning, Microsoft launched Visual Studio 2010.  Woohoo!  Here’s the gist:

Watch the Keynote and Channel 9 Live here: http://www.microsoft.com/visualstudio/en-us/watch-it-live

Get the real bits here (if you have an MSDN license): http://msdn.microsoft.com/en-ca/subscriptions/default.aspx

Get the trial bits here:

Get the Express versions here: http://www.microsoft.com/express/

All the important stuff you want/need to know about Visual Studio 2010 development: http://msdn.microsoft.com/en-ca/ff625297.aspx

Enjoy!

ViewStateUserKey, ValidateAntiForgeryToken, and the Security Development Lifecycle

Last week Microsoft published the 5th revision to the SDL.  You can get it here: http://www.microsoft.com/security/sdl/default.aspx.

Of note, there are additions for .NET -- specifically ASP.NET and the MVC Framework.  Two key things I noticed initially were the addition of System.Web.UI.Page.ViewStateUserKey, and the ValidateAntiForgeryToken attribute in MVC.

Both have existed for a while, but they are now added to requirements for final testing.

ViewStateUserKey is a page-specific identifier for a user.  Sort of a viewstate session.  It’s used to prevent forging of form data from other pages, or in fancy terms, it prevents cross-site request forgery (CSRF) attacks.

Imagine a web form that has a couple of fields on it – sensitive fields, say money transfer fields: account to, amount, transaction date, etc.  You need to log in, fill in the details, and click submit.  That submit POSTs the data back to the server, and the server processes it.  The only validation that goes on is a check that the viewstate hasn’t been tampered with.

Okay, so now consider that you are still logged in to that site, and someone sends you a link to a funny picture of a cat.  Yay, kittehs!  Anyway, on that page is a simple set of hidden form tags with malicious data in them: something like their account number, and an obscene amount for a cash transfer.  On page load, JavaScript POSTs that form data to the transfer page, and since you are already logged in, the server accepts it.  Sneaky.

The reason this worked is that the viewstate was never modified.  It could be the same viewstate across multiple sessions.  Therefore, the way you fix this is to add a session identifier to the viewstate through the ViewStateUserKey.  Be forewarned, you need to do this in Page_Init, otherwise it’ll throw an exception.  The easiest way to accomplish this is:

protected void Page_Init(object sender, EventArgs e)
{
    // Tie the viewstate to the current session so a viewstate captured in one
    // session can't be replayed from another.
    ViewStateUserKey = Session.SessionID;
}

Oddly simple.  I wonder why this isn’t the default in newer versions of ASP.NET?

Next up is the ValidateAntiForgeryToken attribute.

In MVC, you add this attribute to all POST action methods.  It requires that every POSTed form include a token with the request.  Each token is session-specific, so if it’s an old token, or one from another session, the POST will fail.  Given that, you need to add the token to the page, which you do with the Html.AntiForgeryToken() helper inside the form.
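
As a rough sketch of the controller side (the controller, action, and parameter names here are hypothetical, picked to match the money-transfer example above):

using System.Web.Mvc;

public class TransferController : Controller
{
    // The attribute rejects the request before this method ever runs if the
    // anti-forgery token is missing, stale, or from a different session.
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Transfer(string toAccount, decimal amount)
    {
        // ... do the transfer ...
        return RedirectToAction("Index");
    }
}

// And in the view, inside the form, emit the matching token:
// <%= Html.AntiForgeryToken() %>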

It prevents the same type of attack as the ViewStateUserKey, albeit in a much simpler fashion.

Database Shrinkage

Don’t do it!

[Image: shrinkDatabase]

Bad User Interfaces are Insecure

The Best of Intentions

So you’ve built this application.  It’s a brilliant application.  Its design is spectacular, the architecture is flawless, the coding is clean and coherent, and you even followed the SDL best practices and created a secure application.

There is one minor problem though.  The interface is terrible.  It’s not intuitive, and settings are poorly described in the options window.  A lot of people wouldn’t necessarily see this as a security issue, but more of an interaction bug -- blame the UX people and get on with your day.

Consider this (highly hyperbolic) options window though:

[Image: BadSecuritySettings]

How intuitive is it?  Notsomuch, eh?  You have to really think about what it’s asking.  Worst of all, there is so much extraneous information there that is supposed to help you decide.

At first glance I’m going to check it.  I see “security” and “enable” in the text, and naturally assume it’s asking me if I want to make it run securely (let’s say for the sake of argument it speaks the truth), because god knows I’m not going to read it all the way through the first time.

By the second round through I’ve already assumed I know what it’s asking, read it fully, get confused, and struggle with what it has to say.

A normal end user will not even get to this point.  They’ll check it, and click save without thinking, because of just that – they don’t want to have to think about it.

Now, consider this:

[Image: GoodSecuritySettings]

Isn’t this more intuitive?  Isn’t it easier to look at?  But wait, does it do the same thing?  Absolutely.  It asks the user if they want to run a secure application.

The Path to Security Hell

When I first considered what I wanted to say on this topic, I asked myself “how can this really be classified as a security bug?”  After all, it’s the user’s fault for checking it right?

Well, no.  It’s our fault.  We developed it securely, we told them it needed to be run securely, and we gave them the option to turn off security (again, hyperbole, but you get the point).  It’s okay to let them choose to run an insecure application, but if we confuse them, if we make it difficult to understand what the heck is going on, they aren’t actually doing what they want, and we have therefore failed at making the application they wanted secure, secure.

It is our problem.

So what?

Most developers I know will at the very least make an attempt to write a secure application.  They check for buffer overflows, SQL injection, cross-site scripting, blah blah blah.  Unfortunately some, myself included, tend to forget that end users don’t necessarily know about security, nor care about it.  We do what most developers do.  We tell them what we know: “There has been a fatal exception at 0x123FF567!!one! The index was outside the bounds of the array.  We need to destroy the application threads and process.”

That sounds fairly similar to most error messages we display to our end users.  Frankly, they don’t care about any of it.  They’re just pissed that the work they were doing is now lost.

The funny thing is, we really don’t notice this.  When I was building the first settings window above, I kept reading the text and thinking to myself, it makes perfect sense.  The reason is that what I wrote is my logic.  I wrote the logic, I wrote the text, so I inherently understand what I wrote.  We do this all the time.  I do this all the time, and then I get a phone call from some user saying “wtf does this mean?”, aaaaaaand then I change it to something a little more friendly.  By the 4th or so iteration of this I usually get it right (or maybe they just get tired of calling?).

So what does this say about us? Well, I’m not sure. I think it’s saying we need to work on our user interface skills, and as an extension of that, we need to work on our soft skills – our interpersonal skills. Maybe. Just a thought.

A Stab at a New Resume

While I am definitely not looking for a new job, I was bored and thought I would take a stab at a stylized resume to see if I could hone some of my (lack of) graphics skills.  It didn’t turn out too badly, but I am certainly no graphic designer.

What do you think?

[Image: mainResume]

The Known Universe

Holy crap this is cool:
