Converting Claims to Windows Tokens and User Impersonation

In a domain environment it is really useful to be able to switch user contexts in a web application.  This could be because you need to log in with credentials that have elevated permissions (or vice versa), or simply because you need to log in as another user.

It’s pretty easy to do this with Windows Identity Foundation and Claims Authentication.  When the WIF framework is installed, a service is installed (off by default) that can translate claims to Windows tokens.  This is called (not surprisingly) the Claims to Windows Token Service, or c2WTS.

Following the deploy-with-the-least-attack-surface methodology, this service does not work out of the box.  You need to turn it on and specify which users are allowed to impersonate via token translation.  Now, this doesn’t mean which users can be switched to; it means which identities running the calling process are allowed to do the switching, e.g. the account running the IIS application pool: Network Service, Local Service, Local System, and so on (preferably a named service account rather than one of the built-in system accounts).

To allow callers to do this, open C:\Program Files\Windows Identity Foundation\v3.5\c2wtshost.exe.config and add the service accounts to <allowedCallers>:

<windowsTokenService>
  <!--
      By default no callers are allowed to use the Windows Identity Foundation Claims To NT Token Service.
      Add the identities you wish to allow below.
    -->
  <allowedCallers>
    <clear/>
    <!-- <add value="NT AUTHORITY\Network Service" /> -->
    <!-- <add value="NT AUTHORITY\Local Service" /> –>
    <!-- <add value="nt authority\system" /> –>
    <!-- <add value="NT AUTHORITY\Authenticated Users" /> -->
  </allowedCallers>
</windowsTokenService>

You should notice that by default no callers are allowed.  Once you’ve added yours, you can start up the service.  It is called Claims to Windows Token Service in the Services MMC snap-in.
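
If you prefer the command line, you can also start it from an elevated prompt; net start accepts the display name shown in the Services snap-in:

REM Start the Claims to Windows Token Service (run from an elevated command prompt)
net start "Claims to Windows Token Service"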

That takes care of the administrative side of things.  Let’s write some code.  But first, some usings:

using System;
using System.Linq;
using System.Security.Principal;
using System.Threading;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.WindowsTokenService;

The next step is to actually generate the token.  From an architectural perspective, we want to use the UPN claim type, as that’s what the service wants to see.  To get the claim, we do some simple LINQ:

IClaimsIdentity identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;

// FirstOrDefault so a missing claim falls through to the check below instead of throwing here.
string upn = identity.Claims
                     .Where(c => c.ClaimType == ClaimTypes.Upn)
                     .Select(c => c.Value)
                     .FirstOrDefault();

if (String.IsNullOrEmpty(upn))
{
    throw new Exception("No UPN claim found");
}

Following that we do the impersonation:

WindowsIdentity windowsIdentity = S4UClient.UpnLogon(upn);

using (WindowsImpersonationContext ctxt = windowsIdentity.Impersonate())
{
    DoSomethingAsNewUser();

    ctxt.Undo(); // redundant with using { } statement
}

To release the token we call the Undo() method, but if you are within a using { } statement the Undo() method is called when the object is disposed.

One thing to keep in mind though: if the calling process does not have permission to impersonate a user, a System.ServiceModel.Security.SecurityAccessDeniedException will be thrown.
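
If you’d rather fail gracefully than let that bubble up as an error page, a minimal sketch looks like this (LogFailure is a hypothetical logging helper; upn and DoSomethingAsNewUser come from the snippets above):

try
{
    WindowsIdentity windowsIdentity = S4UClient.UpnLogon(upn);

    using (WindowsImpersonationContext ctxt = windowsIdentity.Impersonate())
    {
        DoSomethingAsNewUser();
    }
}
catch (System.ServiceModel.Security.SecurityAccessDeniedException ex)
{
    // Thrown when the process identity isn't listed in <allowedCallers>
    // in c2wtshost.exe.config, so surface something friendlier instead.
    LogFailure(ex); // hypothetical logging helper
}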

That’s all there is to it.

Implementation Details

In my opinion, these types of calls really shouldn’t be made all that often.  Realistically you need to look at how impersonation fits into the application and go from there.  Impersonation is a pretty weighty topic for discussion, and frankly, I’m not an expert.

Converting Bootstrap Tokens to SAML Tokens

There comes a point where using an eavesdropping application to catch packets as they fly between Secure Token Services and Relying Parties becomes tiresome.  For me it came when I gave up on creating a man-in-the-middle between the SSL sessions between ADFS and applications.  Mainly because ADFS doesn’t like that.  At all.

Needless to say I wanted to see the tokens.  Luckily, Windows Identity Foundation has the solution by way of the bootstrap token.  To understand what it is, consider how this whole process works.  Once you’ve authenticated, the STS will POST a chunk of XML (the SAML token) back to the RP.  WIF will interpret it as necessary and do its magic, generating a new principal with the payload.  However, in some instances you need to keep this token intact.  This would be the case if you were creating a web service and needed to forward the token.  What WIF does is keep that incoming SAML token around as the bootstrap token, in case you need to forward it off to somewhere.

Before taking a look at it, let's add in some useful using statements:

using System;
using System.IdentityModel.Tokens;
using System.Text;
using System.Threading;
using System.Xml;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Tokens;
using Microsoft.IdentityModel.Tokens.Saml11;

The bootstrap token is attached to the IClaimsPrincipal’s identity:

SecurityToken bootstrapToken = ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities[0].BootstrapToken;

However, if you do this out of the box, BootstrapToken will be null.  By default, WIF does not save the token, so we need to explicitly enable this in the web.config file.  Add this line under <microsoft.identityModel><service><securityTokenHandlers>:

<securityTokenHandlerConfiguration saveBootstrapTokens="true" />

Once you’ve done that, WIF will load the token.
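
For context, this is roughly where that line ends up in web.config (the nesting follows the element path mentioned above; the rest of your <microsoft.identityModel> configuration stays as-is):

<microsoft.identityModel>
  <service>
    <securityTokenHandlers>
      <securityTokenHandlerConfiguration saveBootstrapTokens="true" />
    </securityTokenHandlers>
    <!-- the rest of the generated WIF configuration stays as-is -->
  </service>
</microsoft.identityModel>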

The properties are fairly straightforward, but you can’t just get a blob of XML straight off the token.

Luckily we have some code to convert from the bootstrap token to a chunk of XML:

SecurityToken bootstrapToken = ((IClaimsPrincipal)Thread.CurrentPrincipal).Identities[0].BootstrapToken;

StringBuilder sb = new StringBuilder();

using (var writer = XmlWriter.Create(sb))
{
     new Saml11SecurityTokenHandler(new SamlSecurityTokenRequirement()).WriteToken(writer, bootstrapToken);
}

string theXml = sb.ToString();

The result is a proper XML document containing the original token.

That’s all there is to it.

Making an ASP.NET Website Claims Aware with the Windows Identity Foundation

Straight from Microsoft this is what the Windows Identity Foundation is:

Windows Identity Foundation helps .NET developers build claims-aware applications that externalize user authentication from the application, improving developer productivity, enhancing application security, and enabling interoperability. Developers can enjoy greater productivity, using a single simplified identity model based on claims. They can create more secure applications with a single user access model, reducing custom implementations and enabling end users to securely access applications via on-premises software as well as cloud services. Finally, they can enjoy greater flexibility in application development through built-in interoperability that allows users, applications, systems and other resources to communicate via claims.

In other words it is a method for centralizing user Identity information, very much like how the Windows Live and OpenID systems work.  The system is reasonably simple.  I have a Membership data store that contains user information.  I want (n) number of websites to use that membership store, EXCEPT I don’t want each application to have direct access to membership data such as passwords.  The way around it is through claims.

In order for this to work you need a central web application called a Secure Token Service (STS).  This application will do the authentication and provide a set of available claims.  It will say “hey! I am able to give you the person’s email address, their username and the roles they belong to.”  Each of those pieces of information is a claim.  This list of claims is published in the application’s Federation Metadata.

So far you are probably saying “yeah, so what?”

What I haven’t mentioned is that every application (called a Relying Party) that uses this central application has one thing in common: none of them has to handle authentication – at all.  Each application passes the authentication request off to the central application, and the central application does the hard work.  When you type in your username and password, you are typing it into the central application, not one of the many other applications.  Once the central application authenticates your credentials it POSTs the claims back to the other application.  A diagram might help:

[Diagram borrowed from the Identity Training Kit: http://www.microsoft.com/downloads/details.aspx?familyid=C3E315FA-94E2-4028-99CB-904369F177C0&displaylang=en]

The key takeaway is that only one single application does authentication.  Everything else just redirects to it.  So let’s actually see what it takes to authenticate against an STS (central application).  In future posts I will go into detail about how to create an STS as well as how to use Active Directory Federation Services, which is an STS that authenticates directly against (you guessed it) Active Directory.

First step is to install the Framework and SDK.

WIF RTW: http://www.microsoft.com/downloads/details.aspx?FamilyID=eb9c345f-e830-40b8-a5fe-ae7a864c4d76&displaylang=en

WIF SDK: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=c148b2df-c7af-46bb-9162-2c9422208504

The SDK will install sample projects and add two Visual Studio menu items under the Tools menu.  Both menu items do essentially the same thing, the difference being that “Add STS Reference” pre-populates the wizard with the current web application’s data.

Once the SDK is installed start up Visual Studio as Administrator.  Create a new web application.  Next go to the Properties section and go into the Web section.  Change the Server Settings to use IIS.  You need to use IIS.  To install IIS on Windows 7 check out this post.

[Screenshot: the project’s Web properties with the server setting changed to use local IIS]

So far we haven’t done anything crazy.  We’ve just set a new application to use IIS for development.  Next we have some fun.  Let’s add the STS Reference.

To add the STS Reference go to Tools > Add Sts Reference… and fill out the initial screen:

[Screenshot: the first page of the Add STS Reference wizard]


Click next and it will prompt you about using an HTTPS connection.  For the sake of this we don’t need HTTPS so just continue.  The next screen asks us about where we get the STS Federation Metadata from.  In this case I already have an STS so I just paste in the URI:

[Screenshot: entering the STS’s Federation Metadata URI]

Once it downloads the metadata it will ask if we want the Token that the STS sends back to be encrypted.  My recommendation is that we do, but for the sake of this we won’t.

[Screenshot: the token encryption options page]

As an aside: In order for the STS to encrypt the token it will use a public key to which our application (the Relying Party) will have the private key.  When we select a certificate it will stick that public key in the Relying Party’s own Federation Metadata file.  Anyway… When we click next we are given a list of available Claims the STS can give us:

[Screenshot: the list of claims offered by the STS]

There is nothing to edit here; it’s just informative.  Next we get a summary of what we just did:

[Screenshot: the wizard summary page]

We can optionally schedule a Windows task to download changes.

We’ve now just added a crap-load of information to the *.config file.  Actually, we really didn’t.  We just told ASP.NET to use Microsoft.IdentityModel.Web.WSFederationAuthenticationModule to handle authentication requests and Microsoft.IdentityModel.Web.SessionAuthenticationModule to handle session management.  Everything else is just boilerplate configuration.  So let’s test this thing:

  1. Hit F5 – Compile compile compile compile compile… loads up http://localhost/WebApplication1
  2. Page automatically redirects to https://login.myweg.com/login.aspx?ReturnUrl=%2fusers%2fissue.aspx%3fwa%3dwsignin1.0%26wtrealm%3dhttp%253a%252f%252flocalhost%252fWebApplication1%26wctx%3drm%253d0%2526id%253dpassive%2526ru%253d%25252fWebApplication1%25252f%26wct%3d2010-08-03T23%253a03%253a40Z&wa=wsignin1.0&wtrealm=http%3a%2f%2flocalhost%2fWebApplication1&wctx=rm%3d0%26id%3dpassive%26ru%3d%252fWebApplication1%252f&wct=2010-08-03T23%3a03%3a40Z (notice the variables we’ve passed?)
  3. Type in our username and password…
  4. Redirect to http://localhost/WebApplication1
  5. Yellow Screen of Death

Wait.  What?  If you are running IIS 7.5 and .NET 4.0, ASP.NET will probably blow up.  This is because the data that was POSTed back to us from the STS has funny characters in the values, like angle brackets and stuff.  ASP.NET does not like this.  Rightfully so; cross-site scripting attacks suck.  To resolve this you have two choices:

  1. Add <httpRuntime requestValidationMode="2.0" /> to your web.config
  2. Use a proper RequestValidator that can handle responses from Token Services

For the sake of testing add <httpRuntime requestValidationMode="2.0" /> to the web.config and retry the test.  You should be redirected to http://localhost/WebApplication1 and no errors should occur.
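
If you’d rather go with the second option, the usual pattern (sketched from memory here, not copied from the SDK samples) is a RequestValidator subclass that only relaxes validation for the token response field the STS posts back, and falls back to normal validation for everything else:

using System;
using System.Web;
using System.Web.Util;
using Microsoft.IdentityModel.Protocols.WSFederation;

namespace WebApplication1
{
    public class WSFederationRequestValidator : RequestValidator
    {
        protected override bool IsValidRequestString(HttpContext context, string value,
            RequestValidationSource requestValidationSource, string collectionKey,
            out int validationFailureIndex)
        {
            validationFailureIndex = 0;

            // Only relax validation for the wresult form field posted back by the STS.
            if (requestValidationSource == RequestValidationSource.Form &&
                string.Equals(collectionKey, WSFederationConstants.Parameters.Result, StringComparison.Ordinal))
            {
                // If it parses as a sign-in response, let it through.
                if (WSFederationMessage.CreateFromFormPost(context.Request) is SignInResponseMessage)
                {
                    return true;
                }
            }

            // Everything else goes through the normal ASP.NET 4.0 validation.
            return base.IsValidRequestString(context, value, requestValidationSource,
                collectionKey, out validationFailureIndex);
        }
    }
}

You would then point the requestValidationType attribute on <httpRuntime> at that class (assembly-qualified) instead of dropping back to 2.0 validation.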

Seems like a pointless exercise until you add a chunk of code to the default.aspx page. Add a GridView and then add this code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Threading;
using System.IdentityModel;
using System.IdentityModel.Claims;
using Microsoft.IdentityModel.Claims;

namespace WebApplication1
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            IClaimsIdentity claimsIdentity = ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities[0];

            GridView1.DataSource = claimsIdentity.Claims;
            GridView1.DataBind();
        }
    }
}

Rerun the test and you should get back some values.  I hope some light bulbs just turned on for some people :)

Azure Blob Uploads

Earlier today I was talking with Cory Fowler about an issue he was having with an Azure blob upload.  Actually, he offered to help with one of my problems first before he asked me for my thoughts – he’s a real community guy.  Alas I wasn’t able to help him with his problem, but it got me thinking about how to handle basic Blob uploads. 

On the CommunityFTW project I had worked on a few months back I used Azure as the back end for media storage.  The basis was simple: upload media stuffs to a container of my choice.  The end result was this class:

    // Namespaces assumed for the 1.x Windows Azure SDK storage client.
    using System;
    using System.IO;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public sealed class BlobUploadManager
    {
        private static CloudBlobClient blobStorage;

        private static bool s_createdContainer = false;
        private static object s_blobLock = new Object();
        private string theContainer = "";

        public BlobUploadManager(string containerName)
        {
            if (string.IsNullOrEmpty(containerName))
                throw new ArgumentNullException("containerName");

            CreateOnceContainer(containerName);
        }

        public CloudBlobClient BlobClient { get; set; }

        public string CreateUploadContainer()
        {
            BlobContainerPermissions perm = new BlobContainerPermissions();
            var blobContainer = blobStorage.GetContainerReference(theContainer);
            perm.PublicAccess = BlobContainerPublicAccessType.Container;
            blobContainer.SetPermissions(perm);

            var sas = blobContainer.GetSharedAccessSignature(new SharedAccessPolicy()
            {
                Permissions = SharedAccessPermissions.Write,
                SharedAccessExpiryTime = DateTime.UtcNow + TimeSpan.FromMinutes(60)
            });

            return new UriBuilder(blobContainer.Uri) { Query = sas.TrimStart('?') }.Uri.AbsoluteUri;
        }

        private void CreateOnceContainer(string containerName)
        {
            this.theContainer = containerName;

            if (s_createdContainer)
                return;

            lock (s_blobLock)
            {
                var storageAccount = new CloudStorageAccount(
                                         new StorageCredentialsAccountAndKey(
                                             SettingsController.GetSettingValue("BlobAccountName"),
                                             SettingsController.GetSettingValue("BlobKey")),
                                         false);

                blobStorage = storageAccount.CreateCloudBlobClient();
                CloudBlobContainer container = blobStorage.GetContainerReference(containerName);
                container.CreateIfNotExist();

                container.SetPermissions(
                    new BlobContainerPermissions()
                    {
                        PublicAccess = BlobContainerPublicAccessType.Container
                    });

                s_createdContainer = true;
            }
        }

        public string UploadBlob(Stream blobStream, string blobName)
        {
            if (blobStream == null)
                throw new ArgumentNullException("blobStream");

            if (string.IsNullOrEmpty(blobName))
                throw new ArgumentNullException("blobName");

            blobStorage.GetContainerReference(this.theContainer)
                       .GetBlobReference(blobName.ToLowerInvariant())
                       .UploadFromStream(blobStream);

            return blobName.ToLowerInvariant();
        }
    }
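
If it helps, calling it looks something like this (the container name and file path are made up, and you’ll need using System.IO for the FileStream):

// Hypothetical usage of the class above.
var uploads = new BlobUploadManager("media");

using (FileStream stream = File.OpenRead(@"C:\temp\logo.png"))
{
    string blobName = uploads.UploadBlob(stream, "logo.png");
}

// Or hand a short-lived, writable container URL (shared access signature) to a client:
string sasUrl = uploads.CreateUploadContainer();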

With any luck this might help someone trying to jump into Azure.

Getting the Data to the Phone

A few posts back I started talking about what it would take to create a new application for the new Windows Phone 7.  I’m not a fan of learning from trivial applications that don’t touch on the same technologies that I would be using in the real world, so I thought I would build a real application that someone can use.

Since this application uses a well-known dataset I kind of get lucky, because I already have my database schema and it’s reasonably well designed.  My first step is to get the data to the Phone, so I will use WCF Data Services and an Entity Model.  I created the model and just imported the necessary tables.  I called this model RaceInfoModel.edmx, and the entity container’s name is RaceInfoEntities.  This is ridiculously simple to do.

The next step is to expose the model to the outside world as XML through a Data Service.  I created a WCF Data Service and made a few config changes:

using System;
using System.Data.Services;
using System.Data.Services.Common;

namespace RaceInfoDataService
{
    public class RaceInfo : DataService<RaceInfoEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            if (config == null)
                throw new ArgumentNullException("config");

            config.UseVerboseErrors = true;
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            //config.SetEntitySetPageSize("*", 25);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}

This too is reasonably simple.  Since it’s a web service, I can hit it from a web browser and I get a list of available datasets:

[Screenshot: the service’s list of available entity sets shown in the browser]

This isn’t a complete list of available items, just a subset.

At this point I can package everything up and stick it on a web server.  It could technically be ready for production if you were satisfied with not having any access controls on reading the data.  In this case, let’s say for argument’s sake that I was able to convince the powers that be that everyone should be able to access it.  There isn’t anything confidential in the data, and we provide the data in other services anyway, so all is well.  Actually, that’s kind of how I would prefer it anyway.  Give me Data or Give me Death!

Now we create the Phone project.  You need to install the latest build of the dev tools, and you can get that here http://developer.windowsphone.com/windows-phone-7/.  Install it.  Then create the project.  You should see:

[Screenshot: the newly created Windows Phone 7 project in Visual Studio]

The next step is to make the Phone application actually able to use the data.  Here it gets tricky.  Or really, here it gets stupid.  (It had better be fixed by RTM or else *shakes fist*)

For some reason, the Visual Studio 2010 Phone 7 project type doesn’t allow you to automatically import services.  You have to generate the service class manually.  It’s not that big a deal since my service won’t be changing all that much, but nevertheless it’s still a pain to regenerate it manually every time a change comes down the pipeline.  To generate the necessary class run this at a command prompt:

cd C:\Windows\Microsoft.NET\Framework\v4.0.30319
DataSvcutil.exe
     /uri:http://localhost:60141/RaceInfo.svc/
     /DataServiceCollection
     /Version:2.0
     /out:"PATH.TO.PROJECT\RaceInfoService.cs"

(Formatted to fit my site layout)

Include that file in the project and compile.

UPDATE: My bad, I had already installed the reference, so this won’t compile for most people.  The Windows Phone 7 runtime doesn’t have the System.Data namespaces available that we need.  Therefore we need to install them separately…  They are still in development, so here is the CTP build: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=b251b247-70ca-4887-bab6-dccdec192f8d.

You should now have a compilable project with service references that looks something like:

[Screenshot: the project with the generated RaceInfoService.cs included]

We have just connected our phone application to our database!  All told, it took me 10 minutes to do this.  Next up we start playing with the data.
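
As a teaser for the next post, pulling data into the phone app through the generated context looks roughly like this.  This is only a sketch: the RaceInfoEntities context comes from the DataSvcUtil output above, but the Races entity set, the Race type, and racesListBox are stand-ins for whatever your model and UI actually contain.

// Sketch only: entity set, entity type, and the ListBox are placeholders.
var context = new RaceInfoEntities(new Uri("http://localhost:60141/RaceInfo.svc/"));
var races = new DataServiceCollection<Race>(context);

races.LoadCompleted += (s, e) =>
{
    if (e.Error == null)
    {
        // DataServiceCollection raises change notifications, so binding is painless.
        racesListBox.ItemsSource = races;
    }
};

races.LoadAsync(new Uri("/Races", UriKind.Relative));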

My First CodePlex Project!

A few minutes ago I finalized my first CodePlex project.  While working on the ever-mysterious Infrastructure 2010 project, I needed to integrate the Live Meeting API into an application we are using.  So I decided to stick it into its own assembly for reuse.

I also figured that since it’s a relatively simple project, and because for the life of me I couldn’t find a similar wrapper, I would open source it.  Maybe there is someone out there who can benefit from it.

The code is ugly, but it works.  I suspect I will continue development, and clean it up a little.  With that being said:

  • It needs documentation (obviously).
  • All the StringBuilder stuff should really be converted to XML objects
  • It needs cleaner exception handling
  • It needs API versioning support
  • It needs to implement more API functions

Otherwise it works like a charm.  Check it out!

Six Simple Development Rules (for Writing Secure Code)

I wish I could say that I came up with this list, but alas I did not.  I came across it this morning on the Assessment, Consulting & Engineering Team blog from Microsoft.  They are a core part of Microsoft’s internal IT security group, and are around to provide resources for internal and external software developers.  These six rules are key to developing secure applications, and they should be followed at all times.

Personally, I try to follow the rules closely, and am working hard at creating an SDL for our department.  Aside from Rule 1, you could consider each step a sort of checklist for when you sign off, or preferably design, the application for production.

--

Rule #1: Implement a Secure Development Lifecycle in your organization.

This includes the following activities:

  • Train your developers, and testers in secure development and secure testing respectively
  • Establish a team of security experts to be the ‘go to’ group when people want advice on security
  • Implement Threat Modeling in your development process. If you do nothing else, do this!
  • Implement Automatic and Manual Code Reviews for your in-house written applications
  • Ensure you have ‘Right to Inspect’ clauses in your contracts with vendors and third parties that are producing software for you
  • Have your testers include basic security testing in their standard testing practices
  • Do deployment reviews and hardening exercises for your systems
  • Have an emergency response process in place and keep it updated

If you want some good information on doing this, email me and check out this link:
http://www.microsoft.com/sdl

Rule #2: Implement a centralized input validation system (CIVS) in your organization.

A CIVS is designed to perform common input validation on commonly accepted input values. Let’s face it: as much as we’d all like to believe that we are the only ones doing things like registering users or recording data from visitors, it’s actually all the same thing.

When you receive data it will very likely be an integer, decimal, phone number, date, URI, email address, post code, or string. The values and formats of the first seven of those are very predictable. Strings are a bit harder to deal with, but they can all be validated against known good values. Always remember to check for the three F’s: Form, Fit and Function.

  • Form: Is the data the right type of data that you expect? If you are expecting a quantity, is the data an integer? Always cast data to a strong type as soon as possible to help determine this.
  • Fit: Is the data the right length/size? Will the data fit in the buffer you allocated (including any trailing nulls if applicable)? If you are expecting an Int32 or a Short, make sure you didn’t get an Int64 value. Did you get a positive integer for a quantity rather than a negative integer?
  • Function: Can the data you received be used for the purpose it was intended? If you receive a date, is the date value in the right range? If you received an integer to be used as an index, is it in the right range? If you received an int as a value for an Enum, does it match a legitimate Enum value?

In the vast majority of cases, string data being sent to an application will be 0-9, a-z, A-Z. In some cases, such as names or currencies, you may want to allow -, $, % and '. You will almost never need <>, {} or [] unless you have a special use case such as http://www.regexlib.com, in which case see Rule #3.

You want to build this as a centralized library so that all of the applications in your organization can use it. This means if you have to fix your phone number validator, everyone gets the fix. By the same token, you have to inspect and scrutinize the crap out of the CIVS to ensure it is not prone to errors and vulnerabilities, because everyone will be relying on it. But applying heavy scrutiny to a centralized library is far better than having to apply that same scrutiny to every single input value of every single application.  You can be fairly confident that as long as they are using the CIVS, they are doing the right thing.

Fortunately implementing a CIVS is easy if you start with the Enterprise Library Validation Application Block which is a free download from Microsoft that you can use in all of your applications.
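
Whether you build on the Validation Application Block or roll your own, the shape of a centralized validator is pretty simple. Here’s a bare-bones sketch (the class, methods, and whitelist pattern are mine, not from any particular library) showing Form, Fit and Function checks living in one shared place:

using System;
using System.Text.RegularExpressions;

// A tiny centralized input validation library: every application calls
// these methods instead of writing its own ad-hoc checks.
public static class InputValidation
{
    // Whitelist: letters, digits, spaces, and a few name-friendly characters.
    private static readonly Regex NamePattern =
        new Regex(@"^[a-zA-Z0-9 \-'\.]{1,100}$", RegexOptions.Compiled);

    // Form + Fit: is it an integer, and is it in the expected range?
    public static bool TryParseQuantity(string input, out int quantity)
    {
        return int.TryParse(input, out quantity) && quantity > 0 && quantity <= 10000;
    }

    // Function: does the value match the whitelist for a person's name?
    public static bool IsValidName(string input)
    {
        return !string.IsNullOrEmpty(input) && NamePattern.IsMatch(input);
    }
}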

Rule #3: Implement input/output encoding for all externally supplied values.

Due to the prevalence of cross-site scripting vulnerabilities, you need to encode any values that came from an outside source that you may display back to the browser (even in embedded browsers in thick client applications). The encoding essentially takes potentially dangerous characters like < or > and converts them into their HTML, HTTP, or URL equivalents.

For example, if you were to HTML encode <script>alert(‘XSS Bug’)</script> it would look like &lt;script&gt;alert('XSS Bug')&lt;/script&gt;.  A lot of this functionality is built into the .NET Framework. For example, the code to do the above looks like:

Server.HtmlEncode("<script>alert('XSS Bug')</script>");

However it is important to know that Server.HtmlEncode only encodes about four of the nasty characters you might encounter. It’s better to use a more ‘industrial strength’ library like the Anti-Cross Site Scripting library, another free download from Microsoft. This library does a lot more encoding and will do HTML and URI encoding based on a whitelist. The above encoding would look like this with AntiXSS:

using Microsoft.Security.Application;
AntiXss.HtmlEncode("<script>alert('XSS Bug')</script>");

You can also run a neat test system that a friend of mine developed to test your application for XSS vulnerabilities in its outputs. It is aptly named XSS Attack Tool.

Rule #4: Abandon Dynamic SQL

There is no reason you should be using dynamic SQL in your applications anymore. If your database does not support parameterized stored procedures in one form or another, get a new database.

Dynamic SQL is when developers try to build a SQL query in code and then submit it to the DB to be executed as a string, rather than calling a stored procedure and feeding it the values. It usually looks something like this:

(for you VB fans)

dim sql
sql = "Select ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = "
sql = sql & request.querystring("ArticleID")
set results = objConn.execute(sql)

In fact, this article from 2001 is chock full of what NOT to do. Including dynamic SQL in a stored procedure.

Here is an example of a stored procedure that is vulnerable to SQL Injection:

Create Procedure GenericTableSelect @TableName VarChar(100)
AS
Declare @SQL VarChar(1000)
SELECT @SQL = 'SELECT * FROM '
SELECT @SQL = @SQL + @TableName
Exec ( @SQL)
GO

See this article for a look at using Parameterized Stored Procedures.
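
For contrast, here is what the same lookup looks like from C# with a parameterized command; connectionString and articleId are assumed to exist, and the parameter travels separately from the SQL text so user input can never change the shape of the query:

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = @ArticleID", connection))
{
    // The value is bound to @ArticleID as data, not concatenated into the statement.
    command.Parameters.Add("@ArticleID", SqlDbType.Int).Value = articleId;

    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Use reader["ArticleTitle"] and reader["ArticleBody"] here.
        }
    }
}

(That needs using System.Data and System.Data.SqlClient.)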

Rule #5: Properly architect your applications for scalability and failover

Applications can be brought down by a simple crash. Or a not so simple one. Architecting your applications so that they can scale easily, vertically or horizontally, and so that they are fault tolerant will give you a lot of breathing room.

Keep in mind that fault tolerance is not just a way of saying that applications restart when they crash. It means that you have a proper exception handling hierarchy built into the application.  It also means that the application needs to be able to handle situations that result in server failover. This is usually where session management comes in.

The best fault tolerant session management solution is to store session state in SQL Server.  This also helps avoid the server affinity issues some applications have.
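
Assuming the session state database has already been provisioned (aspnet_regsql.exe can do that), the switch itself is just configuration; the connection string below is a placeholder:

<!-- connection string is a placeholder; provision the state database first -->
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=mySessionStateServer;Integrated Security=SSPI;"
              cookieless="false"
              timeout="20" />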

You will also want a good load balancer up front. This will help distribute load evenly so that, hopefully, you won’t run into the failover scenario very often.

And by all means do NOT do what they did on the site in the beginning of this article. Set up your routers and switches to properly shunt bad traffic or DOS traffic. Then let your applications handle the input filtering.

Rule #6: Always check the configuration of your production servers

Configuration mistakes are all too common. When you consider that proper server hardening and standard out-of-the-box deployments are probably a good secure default, there are a lot of people out there changing stuff that shouldn’t be changed. You may remember when Bing went down for about 45 minutes. That was due to configuration issues.

To help address this, we have released the Web Application Configuration Auditor (WACA). This is a free download that you can use on your servers to see if they are configured according to best practice. You can download it at this link.

You should establish a standard SOE for your web servers that is hardened and properly configured. Any variations to that SOE should be scrutinised and go through a very thorough change control process. Test them first before turning them loose on the production environment…please.

So with all that being said, you will be well on your way to stopping the majority of attacks you are likely to encounter on your web applications. Most of the attacks that occur are SQL injection, XSS, and improper configuration issues. The above rules will knock out most of them. In fact, input validation is your best friend. Regardless of inspecting firewalls and the like, the application is the only link in the chain that can make an intelligent and informed decision on whether the incoming data is actually legit. So put your effort where it will do you the most good.

Windows LiveID Almost OpenID

The Windows Live team announced a few months ago that their Live ID service will be a new provider for the OpenID system.  The Live team was quoted:

Beginning today, Windows Live™ ID is publicly committing to support the OpenID digital identity framework with the announcement of the public availability of a Community Technology Preview (CTP) of the Windows Live ID OpenID Provider.

You will soon be able to use your Windows Live ID account to sign in to any OpenID Web site.

I saw the potential in OpenID a while ago, long before I heard about Microsoft’s intentions.  The only problem was that I didn’t really find a good way to implement such a system on my website.  Not only that, I didn’t really have a purpose for doing such a thing.  The only reason anyone would need to log into the site would be to administer it.  And seeing as I’m the only person who could log in, there was never a need.

Then a brilliant idea hit me: let users create accounts to make comment posting easier.  Originally, a user would leave a comment, and I would log in to verify comments, at which point the comment would actually show up.  Sometimes I wouldn’t log in for a couple of days, which meant no comments.  So now, if a user wants to post a comment, all they have to do is log in with their OpenID, and the comment will appear.

Implementing OpenID

I used the ExtremeSwank OpenID Consumer for ASP.NET 2.0.  The beauty of this framework is that all I have to do is drop a control on a web form and OpenID functionality is there.  The control handles all the communications, and when the authenticating site returns its data, you access the data through the control’s properties.  To handle the authentication on my end, I tied the values returned from the control into my already-in-place Forms Authentication mechanism:

if (!(OpenIDControl1.UserObject == null))
{
    if (Membership.GetUser(OpenIDControl1.UserObject.Identity) == null)
    {
        string email = OpenIDControl1.UserObject.GetValue(SimpleRegistrationFields.Email);
        string username = "";

        if (HttpContext.Current.User.Identity != null)
        {
            username = HttpContext.Current.User.Identity.Name;
        }
        else
        {
            username = OpenIDControl1.UserObject.Identity;
        }

        MembershipCreateStatus membershipStatus;
        MembershipUser user = Membership.CreateUser(
            username,
            RandomString(12, false),
            email,
            "This is an OpenID Account. You should log in with your OpenID",
            RandomString(12, false),
            true,
            out membershipStatus);

        if (membershipStatus != MembershipCreateStatus.Success)
        {
            lblError.Text = "Cannot create account for OpenID Account: " + membershipStatus.ToString();
        }
    }
}

That’s all there is to it.