Making an ASP.NET Website Claims Aware with the Windows Identity Foundation

Straight from Microsoft, this is what the Windows Identity Foundation is:

Windows Identity Foundation helps .NET developers build claims-aware applications that externalize user authentication from the application, improving developer productivity, enhancing application security, and enabling interoperability. Developers can enjoy greater productivity, using a single simplified identity model based on claims. They can create more secure applications with a single user access model, reducing custom implementations and enabling end users to securely access applications via on-premises software as well as cloud services. Finally, they can enjoy greater flexibility in application development through built-in interoperability that allows users, applications, systems and other resources to communicate via claims.

In other words, it’s a method for centralizing user identity information, very much like how the Windows Live and OpenID systems work.  The system is reasonably simple.  I have a Membership data store that contains user information.  I want (n) number of websites to use that membership store, EXCEPT I don’t want each application to have direct access to membership data such as passwords.  The way around that is through claims.

In order for this to work you need a central web application called a Secure Token Service (STS).  This application will do authentication and provide a set of available claims.  It will say “hey! I am able to give you the person’s email address, their username and the roles they belong to.”  Each of those pieces of information is a claim.  This list of claims lives in the application’s Federation Metadata.

So far you are probably saying “yeah, so what?”

What I haven’t mentioned is that every application (called a Relying Party) that uses this central application has one thing in common: each application doesn’t have to handle authentication – at all.  Each application passes off the authentication request to the central application and the central application does the hard work.  When you type in your username and password, you are typing it into the central application, not one of the many other applications.  Once the central application authenticates your credentials it POST’s the claims back to the other application.  A diagram might help:

[Diagram: the authentication flow between the user, the Relying Party, and the STS]

Image borrowed from the Identity Training kit (http://www.microsoft.com/downloads/details.aspx?familyid=C3E315FA-94E2-4028-99CB-904369F177C0&displaylang=en)

The key takeaway is that only one single application does authentication.  Everything else just redirects to it.  So let’s actually see what it takes to authenticate against an STS (central application).  In future posts I will go into detail about how to create an STS, as well as how to use Active Directory Federation Services, which is an STS that authenticates directly against (you guessed it) Active Directory.

The first step is to install the Framework and SDK.

WIF RTW: http://www.microsoft.com/downloads/details.aspx?FamilyID=eb9c345f-e830-40b8-a5fe-ae7a864c4d76&displaylang=en

WIF SDK: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=c148b2df-c7af-46bb-9162-2c9422208504

The SDK will install sample projects and add two Visual Studio menu items under the Tools menu.  Both menu items do essentially the same thing, the difference being that “Add STS Reference” pre-populates the wizard with the current web application’s data.

Once the SDK is installed, start up Visual Studio as Administrator.  Create a new web application, then open the project Properties and switch to the Web section.  Change the Server settings to use IIS.  You need to use IIS.  To install IIS on Windows 7, check out this post.

[Screenshot: the project’s Web properties with the Server settings switched to IIS]

So far we haven’t done anything crazy.  We’ve just set up a new application to use IIS for development.  Next we have some fun.  Let’s add the STS Reference.

To add the STS Reference go to Tools > Add STS Reference… and fill out the initial screen:

[Screenshot: the first page of the Add STS Reference wizard]


Click Next and it will prompt you about using an HTTPS connection.  For the sake of this walkthrough we don’t need HTTPS, so just continue.  The next screen asks where we get the STS Federation Metadata from.  In this case I already have an STS, so I just paste in the URI:

[Screenshot: entering the STS Federation Metadata URI in the wizard]

Once it downloads the metadata, it will ask if we want the token that the STS sends back to be encrypted.  My recommendation is that we do, but for the sake of this walkthrough we won’t.

[Screenshot: the token encryption page of the wizard]

As an aside: in order for the STS to encrypt the token, it will use a public key for which our application (the Relying Party) holds the private key.  When we select a certificate, the wizard sticks that public key in the Relying Party’s own Federation Metadata file.  Anyway… when we click Next we are given a list of the available claims the STS can give us:

[Screenshot: the list of claims offered by the STS]

There is nothing to edit here; it’s just informative.  Next we get a summary of what we just did:

[Screenshot: the wizard’s summary page]

We can optionally schedule a Windows task to download metadata changes.

We’ve now just added a crap-load of information to the *.config file.  Actually, we really didn’t.  We just told ASP.NET to use the Microsoft.IdentityModel.Web.WSFederationAuthenticationModule to handle authentication requests, and the Microsoft.IdentityModel.Web.SessionAuthenticationModule to handle session management.  Everything else is just boiler-plate configuration.
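Trimmed down, the interesting part of that configuration is the module registration, which looks something like this (version and publicKeyToken attributes omitted here; the wizard’s actual output is more verbose):

<system.web>
  <httpModules>
    <add name="WSFederationAuthenticationModule"
         type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, Microsoft.IdentityModel" />
    <add name="SessionAuthenticationModule"
         type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, Microsoft.IdentityModel" />
  </httpModules>
</system.web>

A matching pair of entries goes into <system.webServer><modules> for IIS 7’s integrated pipeline.  So let’s test this thing: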

  1. Hit F5 – Compile compile compile compile compile… loads up http://localhost/WebApplication1
  2. Page automatically redirects to https://login.myweg.com/login.aspx?ReturnUrl=%2fusers%2fissue.aspx%3fwa%3dwsignin1.0%26wtrealm%3dhttp%253a%252f%252flocalhost%252fWebApplication1%26wctx%3drm%253d0%2526id%253dpassive%2526ru%253d%25252fWebApplication1%25252f%26wct%3d2010-08-03T23%253a03%253a40Z&wa=wsignin1.0&wtrealm=http%3a%2f%2flocalhost%2fWebApplication1&wctx=rm%3d0%26id%3dpassive%26ru%3d%252fWebApplication1%252f&wct=2010-08-03T23%3a03%3a40Z (notice the variables we’ve passed?)
  3. Type in our username and password…
  4. Redirect to http://localhost/WebApplication1
  5. Yellow Screen of Death

Wait.  What?  If you are running IIS 7.5 and .NET 4.0, ASP.NET will probably blow up.  This is because the data that was POST’ed back to us from the STS had funny characters in the values, like angle brackets and stuff.  ASP.NET does not like this.  Rightfully so; Cross Site Scripting attacks suck.  To resolve this you have two choices:

  1. Add <httpRuntime requestValidationMode="2.0" /> to your web.config
  2. Use a proper RequestValidator that can handle responses from Token Services (see the sketch below)

For the sake of testing, add <httpRuntime requestValidationMode="2.0" /> to the web.config and retry the test.  You should be redirected to http://localhost/WebApplication1 and no errors should occur.
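If you’d rather go with option 2, the WIF SDK documents a request validator along these lines.  This is a sketch of that pattern, so double-check the details against the SDK docs:

using System;
using System.Web;
using System.Web.Util;
using Microsoft.IdentityModel.Protocols.WSFederation;

public class WSFederationRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value,
        RequestValidationSource requestValidationSource, string collectionKey,
        out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        // Let the wresult form field through, but only if it parses as a
        // genuine WS-Federation sign-in response.
        if (requestValidationSource == RequestValidationSource.Form &&
            collectionKey.Equals(WSFederationConstants.Parameters.Result, StringComparison.Ordinal) &&
            WSFederationMessage.CreateFromFormPost(context.Request) is SignInResponseMessage)
        {
            return true;
        }

        // Everything else gets the default .NET 4.0 validation.
        return base.IsValidRequestString(context, value, requestValidationSource,
            collectionKey, out validationFailureIndex);
    }
}

You then point ASP.NET at it with <httpRuntime requestValidationType="WSFederationRequestValidator" /> (assembly-qualify the type name as needed) instead of dropping all the way back to 2.0 validation.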

This seems like a pointless exercise until you add a chunk of code to the default.aspx page.  Add a GridView to the page, and then add this code to the code-behind:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Threading;
using System.IdentityModel;
using System.IdentityModel.Claims;
using Microsoft.IdentityModel.Claims;

namespace WebApplication1
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
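            // The principal can carry multiple identities; grab the first one
            // and bind its claims (the values the STS sent back) to the grid.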
            IClaimsIdentity claimsIdentity = ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities[0];

            GridView1.DataSource = claimsIdentity.Claims;
            GridView1.DataBind();
        }
    }
}

Rerun the test and you should get back some values.  I hope some light bulbs just turned on for some people :)

Data as a Service and the Applications that consume it

Over the past few months I have seen quite a few really cool technologies released or announced, and I believe they have very real potential in many markets.  A lot of companies that exist outside the realm of software development rarely have the opportunity to use such technologies.

Take for instance the company I work for: Woodbine Entertainment Group.  We have a few different businesses, but as a whole our market is Horse Racing.  Our business is not software development.  We don’t always get the chance to play with or use some of the new technologies released to the market.  I thought this would be a perfect opportunity to see what it will take to develop a new product using only new technologies.

Our core customer pretty much wants Race information.  We have proof of this by the mere fact that on our two websites, HorsePlayer Interactive and our main site, we have dedicated applications for viewing Races.  So let’s build a third race browser.  Since we already have a way of viewing races from your computer, let’s build it on the new Windows Phone 7.

The Phone – The application

This seems fairly straightforward.  We will essentially be building a Silverlight application.  Let’s take a look at what we need to do (in no particular order):

  1. Design the interface – Microsoft has loads of guidance on following the Metro design.  In future posts I will talk about possible designs.
  2. Build the interface – XAML and C#.  Gotta love it.
  3. Build the Business Logic that drives the views – I would prefer to stay away from detail here; suffice it to say I’m not entirely sure how proprietary this information is
  4. Build the Data Layer – Ah, the fun part.  How do you get the data from our internal servers onto the phone?  Easy, OData!

The Data

We have a massive database of all the Races on all the tracks that you can wager on through our systems.  The data updates every few seconds relative to changes from the tracks for things like cancellations or runner odds.  How do we push this data to the outside world for the phone to consume?  We create a WCF Data Service:

  1. Create an Entities Model of the Database
  2. Create Data Service
  3. Add Entity reference to Data Service (See code below)
 
public class RaceBrowserData : DataService<RaceEntities> // RaceEntities: the entity model context from step 1 (name illustrative)
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        if (config == null)
            throw new ArgumentNullException("config");

        config.UseVerboseErrors = true;
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        //config.SetEntitySetPageSize("*", 25);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

That’s actually all there is to it for the data.
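As a teaser for the phone side, consuming the service might look roughly like this (the service URI, the Race type, and the Races entity set name are all illustrative, and the phone’s networking is async-only):

using System;
using System.Data.Services.Client;
using System.Linq;

public class Race
{
    // Illustrative subset of the real entity's properties.
    public int RaceID { get; set; }
    public string TrackName { get; set; }
}

public class RaceRepository
{
    private readonly DataServiceContext context =
        new DataServiceContext(new Uri("http://example.com/RaceBrowserData.svc"));

    public void BeginLoadRaces()
    {
        // Ask for the first 10 races; the OData URI conventions handle filtering and paging.
        context.BeginExecute<Race>(
            new Uri("Races?$top=10", UriKind.Relative), OnRacesLoaded, null);
    }

    private void OnRacesLoaded(IAsyncResult result)
    {
        var races = context.EndExecute<Race>(result).ToList();
        // Bind 'races' to the UI here.
    }
}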

The Authentication

The what?  Chances are the business will want to limit application access to only those who have accounts with us, especially if we did something like add in the ability to place a wager on a race.  There are lots of ways to lock this down, but the simplest approach in this instance is to use a Secure Token Service.  I say this because we already have a user store and an STS, and duplication of effort is wasted effort.  We create an STS Relying Party (the application that connects to the STS):

  1. Go to the STS and get the Federation Metadata.  It’s an XML document that tells relying parties what the STS can do for them.  In this case, we want to authenticate and get the available Roles.  Each role returned is a claim, as defined by the STS.  Somewhat inaccurately, we would do this:
    1. App: Hello! I want these Claims for this user: “User Roles”.  I am now going to redirect to you.
    2. STS: I see you want these claims, very well.  Give me your username and password.
    3. STS: Okay, the user passed.  Here are the claims requested.  I am going to POST them back to you.
    4. App: Okay, back to our own processes.
  2. Once we have the Metadata, we add the STS as a reference to the Application, and call a web service to pass the credentials.
  3. If the credentials are accepted, we get returned the claims we want, which in this case would be available roles.
  4. If the user has the role to view races, we go into the Race view.  (All users would have this role, but adding Roles is a good thing if we needed to distinguish between wagering and non-wagering accounts)

One thing I didn’t mention is how we lock down the Data Service.  That’s a bit more tricky, and more suited for another post on the actual Data Layer itself.

So far we have laid the groundwork for the development of a Race Browser application for the Windows Phone 7 using the Entity Framework and WCF Data Services, as well as discussed the use of the Windows Identity Foundation for authentication against an STS.

With any luck (and permission), more to follow.

ADFS 2.0 Windows Service Not Starting on Server 2008

I’ve been working on getting a testable ADFS environment set up for evaluation and development.  Basically, because of laziness (and timeliness), I’m using Windows Virtual PC to host Server 2008 guests for testing.  I didn’t have the time to set up a fully working x64 environment, so I couldn’t go to R2.

One of the issues I’ve been running into is that the Windows Service won’t start properly.  Or rather, at all.  It’s running into a timing issue when running as Network Service: it’s timing out while waiting for a network connection.  More Googling with Bing returned the fix for me from here.

In the file [C:\Program Files\Active Directory Federation Services 2.0\Microsoft.IdentityServer.Servicehost.exe.config] add this entry (it stops the CLR from generating Authenticode publisher evidence at startup, which is the network check that was timing out):

<runtime>
    <generatePublisherEvidence enabled="false"/> 
</runtime>

Other places have noted that this isn’t a problem on R2.  I haven’t tested this yet, so I don’t know if it’s true.

IIS 7 Certificate Request Completion breaking with “ASN1 bad tag value met 0x8009310b”

It only took a couple of quick searches Googling with Bing: in IIS 7, if you create a request for a certificate, have it issued by a CA, and then complete the request, it can blow up with this message box:

CertEnroll::CX509Enrollment::p_InstallResponse: ASN1 bad tag value met. 0x8009310b (ASN: 267)

All it means is that the CA that issued the certificate isn’t trusted on the server.  I came across this in a test environment I was building.  I had a Domain with CA Services, and a server that existed outside the domain.  I used the domain CA to create the certificate, but because the web server wasn’t part of the domain, it didn’t trust the CA.

My fix was to add the CA as a trusted Root Authority on the web server.
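If you have the offending CA’s certificate exported to a file, something like this from an elevated command prompt does it (the file name is illustrative; the Certificates MMC snap-in against the Computer account works too):

certutil -addstore Root DomainCA.cer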

ViewStateUserKey, ValidateAntiForgeryToken, and the Security Development Lifecycle

Last week Microsoft published the 5th revision to the SDL.  You can get it here: http://www.microsoft.com/security/sdl/default.aspx.

Of note, there are additions for .NET -- specifically ASP.NET and the MVC Framework.  Two key things I noticed initially were the addition of System.Web.UI.Page.ViewStateUserKey, and ValidateAntiForgeryToken Attribute in MVC.

Both have existed for a while, but they are now added to requirements for final testing.

ViewStateUserKey is a page-specific identifier for a user.  Sort of a viewstate session.  It’s used to prevent forging of form data from other pages, or in fancy terms, it prevents Cross-Site Request Forgery attacks.

Imagine a web form that has a couple fields on it – sensitive fields, say money transfer fields: account to, amount, transaction date, etc.  You need to log in, fill in the details, and click submit.  That submit POST’s the data back to the server, and the server processes it.  The only validation that goes on is whether the viewstate hasn’t been tampered with.

Okay, so now consider that you are still logged in to that site, and someone sends you a link to a funny picture of a cat.  Yay, kittehs!  Anyway, on that page is a simple set of hidden form tags with malicious data in it.  Something like their account number, and an obscene number for cash transfer.  On page load, javascript POST’s that form data to the transfer page, and since you are already logged in, the server accepts it.  Sneaky.

The reason this worked is because the viewstate was never modified.  It could be the same viewstate across multiple sessions.  Therefore, the way you fix this is to add a session identifier to the viewstate through the ViewStateUserKey.  Be forewarned: you need to do this in Page_Init, otherwise it’ll throw an exception.  The easiest way to accomplish this is:

void Page_Init (object sender, EventArgs e) 
{ 
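	// Key the viewstate to this session so viewstate lifted from another
	// session fails validation on postback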
	ViewStateUserKey = Session.SessionID; 
}

Oddly simple.  I wonder why this isn’t the default in the newer versions of ASP.NET?

Next up is the ValidateAntiForgeryToken attribute.

In MVC, you add this attribute to all POST action methods.  The attribute requires every POST’ed form to include a token associated with the request.  Each token is session specific, so if it’s an old or other-session token, the POST will fail.  Given that, you need to add the token to the page.  To do that you use the Html.AntiForgeryToken() helper to add the token to the form.

It prevents the same type of attack as the ViewStateUserKey, albeit in a much simpler fashion.
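Putting the two halves together, a rough sketch (controller, action, and parameter names are invented for illustration):

using System.Web.Mvc;

public class AccountController : Controller
{
    // The form emitting this POST must include the token, e.g. in the view:
    //   <% using (Html.BeginForm("Transfer", "Account")) { %>
    //       <%= Html.AntiForgeryToken() %>
    //       ... transfer fields and submit button ...
    //   <% } %>

    // The POST is rejected unless a valid, session-specific token comes with it.
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Transfer(string accountTo, decimal amount)
    {
        // ... process the transfer ...
        return RedirectToAction("Index");
    }
}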

Six Simple Development Rules (for Writing Secure Code)

I wish I could say that I came up with this list, but alas I did not.  I came across it this morning on the Assessment, Consulting & Engineering Team blog from Microsoft.  They are a core part of the Microsoft internal IT Security Group, and are around to provide resources for internal and external software developers.  These 6 rules are key to developing secure applications, and they should be followed at all times.

Personally, I try to follow the rules closely, and am working hard at creating an SDL for our department.  Aside from Rule 1, you could consider each step a sort of checklist for when you sign off, or preferably design, the application for production.

--

Rule #1: Implement a Secure Development Lifecycle in your organization.

This includes the following activities:

  • Train your developers and testers in secure development and secure testing respectively
  • Establish a team of security experts to be the ‘go to’ group when people want advice on security
  • Implement Threat Modeling in your development process. If you do nothing else, do this!
  • Implement Automatic and Manual Code Reviews for your in-house written applications
  • Ensure you have ‘Right to Inspect’ clauses in your contracts with vendors and third parties that are producing software for you
  • Have your testers include basic security testing in their standard testing practices
  • Do deployment reviews and hardening exercises for your systems
  • Have an emergency response process in place and keep it updated

If you want some good information on doing this, email me and check out this link:
http://www.microsoft.com/sdl

Rule #2: Implement a centralized input validation system (CIVS) in your organization.

These CIVS systems are designed to perform common input validation on commonly accepted input values. Let’s face it: as much as we’d all like to believe that we are the only ones doing things like registering users or recording data from visitors, it’s actually all the same thing.

When you receive data it will very likely be an integer, decimal, phone number, date, URI, email address, post code, or string. The values and formats of the first 7 of those are very predictable. Strings are a bit harder to deal with, but they can all be validated against known good values. Always remember to check for the three F’s: Form, Fit and Function.

  • Form: Is the data the right type of data that you expect? If you are expecting a quantity, is the data an integer? Always cast data to a strong type as soon as possible to help determine this.
  • Fit: Is the data the right length/size? Will the data fit in the buffer you allocated (including any trailing nulls if applicable)? If you are expecting an Int32 or a Short, make sure you didn’t get an Int64 value. Did you get a positive integer for a quantity rather than a negative integer?
  • Function: Can the data you received be used for the purpose it was intended? If you receive a date, is the date value in the right range? If you received an integer to be used as an index, is it in the right range? If you received an int as a value for an Enum, does it match a legitimate Enum value?

In the vast majority of cases, string data being sent to an application will be 0-9, a-z, A-Z. In some cases such as names or currencies you may want to allow –, $, % and ’. You will almost never need <>, {} or [] unless you have a special use case such as http://www.regexlib.com, in which case see Rule #3.
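As a rough illustration of the three F’s in code (the ranges and the whitelist pattern here are invented; a real CIVS would define its own):

using System;
using System.Text.RegularExpressions;

public static class InputValidator
{
    // Whitelist of known good characters for a name field.
    private static readonly Regex NamePattern =
        new Regex(@"^[a-zA-Z0-9 '$%\-]{1,100}$");

    public static int ValidateQuantity(string input)
    {
        int quantity;

        // Form: cast to a strong type as soon as possible.
        if (!int.TryParse(input, out quantity))
            throw new ArgumentException("Quantity must be an integer.");

        // Fit/Function: a quantity should be positive and within a sane range.
        if (quantity < 1 || quantity > 9999)
            throw new ArgumentOutOfRangeException("input");

        return quantity;
    }

    public static bool IsValidName(string input)
    {
        // Strings are validated against known good values, never known bad ones.
        return input != null && NamePattern.IsMatch(input);
    }
}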

You want to build this as a centralized library so that all of the applications in your organization can use it. This means if you have to fix your phone number validator, everyone gets the fix. By the same token, you have to inspect and scrutinize the crap out of these CIVS to ensure that they are not prone to errors and vulnerabilities, because everyone will be relying on them. But applying heavy scrutiny to a centralized library is far better than having to apply that same scrutiny to every single input value of every single application.  You can be fairly confident that as long as applications are using the CIVS, they are doing the right thing.

Fortunately, implementing a CIVS is easy if you start with the Enterprise Library Validation Application Block, which is a free download from Microsoft that you can use in all of your applications.

Rule #3: Implement input/output encoding for all externally supplied values.

Due to the prevalence of cross site scripting vulnerabilities, you need to encode any values that came from an outside source that you may display back to the browser (even embedded browsers in thick client applications). The encoding essentially takes potentially dangerous characters like < or > and converts them into their HTML, HTTP, or URL equivalents.

For example, if you were to HTML encode <script>alert(‘XSS Bug’)</script> it would look like: &lt;script&gt;alert('XSS Bug')&lt;/script&gt;.  A lot of this functionality is built into the .NET system. For example, the code to do the above looks like:

Server.HtmlEncode("<script>alert('XSS Bug')</script>");

However, it is important to know that Server.HtmlEncode only encodes about 4 of the nasty characters you might encounter. It’s better to use a more ‘industrial strength’ library like the Anti Cross Site Scripting library, another free download from Microsoft. This library does a lot more encoding and will do HTTP and URI encoding based on a white list. The above encoding would look like this with AntiXSS:

using Microsoft.Security.Application;
AntiXss.HtmlEncode("<script>alert('XSS Bug')</script>");

You can also run a neat test system that a friend of mine developed to test your application for XSS vulnerabilities in its outputs. It is aptly named XSS Attack Tool.

Rule #4: Abandon Dynamic SQL

There is no reason you should be using dynamic SQL in your applications anymore. If your database does not support parameterized stored procedures in one form or another, get a new database.

Dynamic SQL is when developers try to build a SQL query in code, then submit it to the DB to be executed as a string rather than calling a stored procedure and feeding it the values. It usually looks something like this:

(for you VB fans)

dim sql
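' VULNERABLE: untrusted input from the query string is concatenated
' straight into the SQL text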
sql = "Select ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = "
sql = sql & request.querystring("ArticleID")
set results = objConn.execute(sql)

In fact, this article from 2001 is chock full of what NOT to do, including dynamic SQL in a stored procedure.

Here is an example of a stored procedure that is vulnerable to SQL Injection:

Create Procedure GenericTableSelect @TableName VarChar(100)
AS
Declare @SQL VarChar(1000)
SELECT @SQL = 'SELECT * FROM '
SELECT @SQL = @SQL + @TableName
Exec ( @SQL )
GO

See this article for a look at using Parameterized Stored Procedures.
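For contrast, a quick sketch of the parameterized equivalent in ADO.NET (reusing the Articles schema from the example above):

using System.Data;
using System.Data.SqlClient;

public static class Articles
{
    public static string GetArticleTitle(string connectionString, int articleId)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT ArticleTitle FROM Articles WHERE ArticleID = @ArticleID", conn))
        {
            // The value travels as a typed parameter, never as part of the SQL
            // text, so there is nothing for an attacker to inject into.
            cmd.Parameters.Add("@ArticleID", SqlDbType.Int).Value = articleId;

            conn.Open();
            return (string)cmd.ExecuteScalar();
        }
    }
}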

Rule #5: Properly architect your applications for scalability and failover

Applications can be brought down by a simple crash. Or a not so simple one. Architecting your applications so that they can scale easily, vertically or horizontally, and so that they are fault tolerant will give you a lot of breathing room.

Keep in mind that fault tolerance is not just a way of saying that applications restart when they crash. It means that you have a proper exception handling hierarchy built into the application.  It also means that the application needs to be able to handle situations that result in server failover. This is usually where session management comes in.

The best fault tolerant session management solution is to store session state in SQL Server.  This also helps avoid the server affinity issues some applications have.
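In ASP.NET that amounts to a one-line configuration change, plus a run of aspnet_regsql.exe to create the state database (the server name here is illustrative):

<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=SQLCLUSTER01;Integrated Security=SSPI" />

aspnet_regsql.exe -S SQLCLUSTER01 -E -ssadd -sstype p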

You will also want a good load balancer up front. This will help distribute load evenly so that, hopefully, you won’t run into the failover scenario often.

And by all means do NOT do what they did on the site in the beginning of this article. Set up your routers and switches to properly shunt bad traffic or DOS traffic. Then let your applications handle the input filtering.

Rule #6: Always check the configuration of your production servers

Configuration mistakes are all too common. When you consider that proper server hardening and standard out-of-the-box deployments are probably a good secure default, there are a lot of people out there changing stuff that shouldn’t be changed. You may remember when Bing went down for about 45 minutes. That was due to configuration issues.

To help address this, we have released the Web Application Configuration Auditor (WACA). This is a free download that you can use on your servers to see if they are configured according to best practice. You can download it at this link.

You should establish a standard SOE for your web servers that is hardened and properly configured. Any variations to that SOE should be scrutinised and go through a very thorough change control process. Test them first before turning them loose on the production environment…please.

So with all that being said, you will be well on your way to stopping the majority of attacks you are likely to encounter on your web applications. Most of the attacks that occur are SQL Injection, XSS, and improper configuration issues. The above rules will knock out most of them. In fact, Input Validation is your best friend. Regardless of inspecting firewalls and things, the application is the only link in the chain that can make an intelligent and informed decision on whether the incoming data is actually legit or not. So put your effort where it will do you the most good.

Security, Security, Security is about Policy, Policy, Policy

The other day I had the opportunity to take part in an interesting meeting with Microsoft. The discussion was security, and the meeting members were 20 or so IT Pro’s, developers, and managers from various Fortune 500 companies in the GTA. It was not a sales call.

Throughout the day, Microsofties Rob Labbe and Mohammad Akif went into significant detail about the current threat landscape facing all technology vendors and departments. There was one point that was paramount. Security is not all about technology.

Security is about the policies implemented at the human level. Blinky-lighted devices look cool, but in the end, they will not likely add value to protecting your network. Herein lies the problem. Not too many people realize this -- hence the purpose of the meeting.

Towards the end of the meeting, as we were all letting the presentations sink in, I asked a relatively simple question:

What resources are out there for new/young people entering the security field?

The response was pretty much exactly what I was (unfortunately) expecting: notta.

Security, it seems, is mostly a self-taught topic. Yes, there are some programs at schools out there, but they tend to be academic – naturally. By this I mean that there is no fluidity in discussion. It’s as if you are studying a snapshot of the IT landscape that was taken 18 months ago. Most security experts will tell you the landscape changes daily, if not multiple times a day. Therefore we need to keep up on the changes in security, and as any teacher will tell you, that’s impossible in an academic setting.

Keeping up to date with security is a manual process. You follow blogs, you subscribe to newsgroups and mailing lists, your company gets hacked by a new form of attack, etc., and in the end you have a reasonable idea of what was out there yesterday. And you know what? That’s just the attack vectors! You need to follow a whole new set of blogs and mailing lists to understand how to mitigate such attacks. That sucks.

Another issue is the ramp up to being able to follow daily updates. Security is tough when starting out. It involves so many different processes at so many different levels of the application interactions that eyes glaze over at the thought of learning the ins and outs of security.

So here we have two core problems with security:

  1. Security changes daily – it’s hard to keep up
  2. It’s scary when you are new at this

Let’s start by addressing the second issue. Security is a scary topic, but let’s break it down into its core components.

  1. Security is about keeping data away from those who shouldn’t see it
  2. Security is about keeping data available for those who need to see it

At its core, security is simple. It starts getting tricky when you jump into the semantics of how to implement the core. So let’s address this too.

A properly working system will do what you intended it to do at a systematic level: calculate numbers, view customer information, launch a missile, etc. This is a fundamental tenet of application development. Security is about understanding the unintended consequences of what a user can do with that system.

These consequences are things like:

  • SQL Injection
  • Cross Site Scripting attacks
  • Cross-Site Request Forgery attacks
  • Buffer overflow attacks
  • Breaking encryption schemes
  • Session hijacking
  • etc.

Once you understand that these types of attacks can exist, everything is just semantics from this point on. These semantics are along the line of figuring out best practices for system designs, and that’s really just a matter of studying.

Security is about understanding that anything is possible. Once you understand attacks can happen, you learn how they can happen. Then you learn how to prevent them from happening. To use a phrase I really hate using, security is about thinking outside the box.

Most developers do the least amount of work possible to build an application. I am terribly guilty of this. In doing so however, there is a very high likelihood that I didn’t consider what else can be done with the same code. Making this consideration is (again, lame phrase) thinking outside the box.

It is in following this consideration that I can develop a secure system.

So… policies?

At the end of the day however, I am a lazy developer.  I will still do as little work as possible to get the system working, and frankly, this is not conducive to creating a secure system.

The only way to really make this work is to implement security policies that force certain considerations to be made.  Each system is different, and each organization is different.  There is no single policy that will cover the scope of all systems for all organizations, but a policy is simple. 

A policy is a rule that must be followed, and in this case, we are talking about a development rule.  This can include requiring certain types of tests while developing, or following a specific development model like the Security Development Lifecycle.  It is with these policies that we can govern the creation of secure systems.

Policies create an organization-level standard.  Standards are the backbone of security.

These standards fall under the category of semantics, mentioned earlier.  Given that, I propose an idea for learning security.

  • Understand the core ideology of security – mentioned above
  • Understand that policies drive security
  • Jump head first into the semantics starting with security models

The downside is that you will never understand everything there is to know about security.  No one will.

Perhaps it’s not that flawed of an idea.

How UAC Actually Works

This post has had a few false starts.  It’s a tough topic to cover, as it’s a very controversial subject for most people still.  Hopefully we can enlighten some people along the way.

From a high level perspective, the UAC was developed to protect the user without necessarily removing administrative privileges.  Any change to the system required a second validation.  On older versions of Windows, an application running with administrative credentials could change any setting on the box.  Viruses and malware became rampant because of this openness, given that the average user had administrative credentials.  Most average users balked at the idea of having a limited user account, so Microsoft came up with an alternative for the new OS, Vista – a second form of validation.  You told the computer you wanted to make a change, and it asked “are you sure?”

Logically it makes sense.  Consider an instance where a devious application wanted to change some setting; because Windows wanted to verify it was OK to make this change, it asked “are you sure?”  If you responded no, the change didn’t happen.  Simple enough.  However, here we start running into issues.  There are three perspectives to look at.

First, the end user.  Simple changes to basic settings required validation.  This annoyed most of them, if not all of them.  They didn’t care why it was asking, they just wanted to delete shortcuts from their start menu.  Their reaction: turn off UAC.  Bad idea, but security loses when it comes to usability in the case of the end user.

Second, the irate IT Pro/Developer.  Most people working in IT make changes to system settings constantly.  Given that, the UAC would be seen many times in a day and it would, for lack of a better word, piss that person off.  They didn’t care what security it provided, it was a “stupid-useless-design” that shouldn’t have been created.  Their reaction: turn off UAC.  Once again security loses when it comes to usability.

Third, the knowledgeable IT Pro/Developer.  Not a lot of people fell into this category.  However, these tended to be the same type of people who fit into the Lazy Admin category as well.  When managed properly UAC wasn’t all that annoying because it wasn’t seen all that often.  Set-it-and-forget-it and you don’t ever see the prompt.  If you created the system image properly, you don’t have to constantly keep changing settings.  It’s a simple enough idea.

But…

Application compatibility is a pain.  Most applications didn’t understand the UAC, so they weren’t built to request validation and generally broke when they tried to do things they really shouldn’t be doing in the first place.  These are things like manipulating registry keys that don’t belong to them, writing to system folders, and reading data from low-level system APIs.  This was reason #1 for disabling UAC.

And now…

With the general availability of Windows 7 in about 2.5 hours from now, it seems like a good time to discuss certain changes to UAC in the latest version of Windows.  The biggest of course being when Windows decides to check for validation.

Windows 7 introduces two new levels of the UAC.  In Vista there was Validate Everything or Off.  Windows 7 added “Do Not Notify Me When I Make Changes to Windows Settings”.  This comes into effect when the user makes a change to a Windows setting, like display resolution.  Windows is smart enough to realize it’s the user making the change, and allows it.  Its second additional level is the same as the first, except it doesn’t dim the desktop.

Now we get into some fun questions. 

  • How does Windows know not to show the prompt?  It’s fairly straightforward.  All Windows executables that were released as part of the OS are signed with a certificate.  All executables signed with this certificate are allowed to run if the user started them.  This is only true for Windows settings though.  You cannot implement this with 3rd party applications.  There is no auto-allow list.
  • How does Windows know it’s a user starting the application?  Lots of applications can mimic mouse movements or keyboard commands, but they occur at a higher application level than an actual mouse move.  Input devices like mice and keyboards have an extremely low-level driver, and only commands coming from these drivers are interpreted as user input.  You cannot spoof these commands.
  • Can you spoof mouse/keyboard input to accept the UAC request?  No.  The UAC prompt is created on a separate Windows desktop.  Other well-known desktops include the locked screen, the login screen, and the CardSpace admin application.  No application can cross these desktops, so an application running in your personal desktop cannot push commands into the UAC desktop.

Mark Russinovich has an excellent article in TechNet Magazine that goes into more detail about changes to the UAC.  Hopefully this post at least covered all sides of the UAC debate.

Creating a new Forest and Domain on Server Core

Over the weekend my good friend Mitch Garvis decided it was necessary to rebuild his home network.  Now, most home networks don’t have a $25,000 server at the core.  This one did.  Given that, we decided to do it right.  The network architecture called for virtualization, so we decided to use Hyper-V.  The network called for management, so we decided to install SCCM and Ops Manager.  The network called for simplicity, so we used Active Directory.

However, we decided to up the ante and install it all on Server Core.  Now, the tricky part is that we needed to install Active Directory.  The reason this became tricky is that there is no documented procedure out there on how to install a new forest on Core.  There are lots of very smart people on the internet who have described how to install new domains as part of existing forests, but not new forests.  So we got to work.

After running dcpromo a few times we realized we couldn’t create the Forest by throwing commands at it.  It occurred to one of us that we should try creating an unattend.txt install file.  After a few tries, we figured out the proper structure of the file, and after 10 minutes of watching the CLI spit out random sentences, we had a new domain. 

The structure of the file is fairly simple, but you need the correct variable data.  We used the following unattend.txt file to create the new domain:

[DCInstall]
InstallDNS=yes
NewDomain=forest
NewDomainDNSName=swmi.ca
DomainNetBiosName=SWMI
SiteName=Default-First-Site-Name
ReplicaOrNewDomain=domain
ForestLevel=3
DomainLevel=3
DatabasePath="%systemroot%\ntds"
LogPath="%systemroot%\ntds"
RebootOnCompletion=yes
SYSVOLPath="%systemroot%\sysvol"
SafeModeAdminPassword=Pa$$w0rd

Now: once the file was created, we put it in the root of C: on the Server Core machine and typed the following command.  (For reference, ForestLevel and DomainLevel of 3 correspond to the Windows Server 2008 functional level.)

dcpromo /unattend:c:\unattend.txt

Surprisingly it worked.  After checking with Microsoft, this is a supported option, and it’s not a hack in any way.  It’s just undocumented.

Until now.

Reference: Mitch Garvis, SWMI, http://garvis.ca/blogs/mitch/archive/2009/10/12/creating-a-new-domain-forest-on-server-core.aspx

Security, Architecture, and Common Sense

Good enough is sometimes not good enough.  I’ve been doing a lot of thinking lately (well, I’m always thinking), and security has been an issue that has come up a lot.  Frankly, I’m a two-bit software developer.  I know my code isn’t the best, nor the most secure.  I use strong passwords, encrypt my sensitive data, and try to limit access to the applications to those who need to use them.

In theory this works.  Problem is, it’s a lame theory.  There are so many unknown factors that have to be taken into account.  Often times they aren’t.

When I go to build an application I spend time designing it and architecting it.  This is usually the case for most developers.  What I’ve noticed though, is that I don’t spend time securing it.  I can’t.

Imagine building a house.  You put locks on the doors, bars on the windows, and someone breaks in.  Why?  Because someone left the key in the door.  You can’t build against that.  You just can’t.

You can follow the Security Development Lifecycle, which I recommend to each and every developer I meet.  There are tons of resources available.  But it can only go so far.  It’s designed more to be part of the iterative process than the architecture.  Or at least, that’s how most people interpret it.

So?

My last post talked about Single Sign-On (SSO).  It’s a great sellable feature for any product.  What most people don’t realize though is the inherent security benefit of it.  With it, there is one less password to remember, one less password that could get intercepted, one less password to change every month.  This is a fundamental architectural issue.  But at the same time, it’s common sense.

What is sometimes the simplest idea is usually the correct solution

What the hell does that mean?  It means keep it simple.  Security is simple.  Keep data from prying eyes, and keep it from getting lost.  This is common sense.

Security is not difficult to comprehend.  It becomes difficult when academics get involved.  Spouting theories and methodologies scares people into thinking security is extremely difficult to implement.  It’s not!

Follow the Data

Understanding the flow of data is crucial in properly architecting an application.  It’s crucial in properly securing an application as well.  SSO is a perfect example of this.

The SSO feature in Office SharePoint Server 2007 maps user credentials to back-end data systems. Using SSO, you can access data from server computers and services that are external to Office SharePoint Server 2007. From within Office SharePoint Server 2007 Web Parts, you can view, create, and change this data. The SSO feature ensures that:

  • User credentials are managed securely.

  • User permission levels that are configured on the external data source are enforced.

It makes perfect sense.  It’s simple when you think about it, and it affects every subsystem of SharePoint.  Make security a feature.