Security, Security, Security is about Policy, Policy, Policy

The other day I had the opportunity to take part in an interesting meeting with Microsoft. The topic was security, and the attendees were 20 or so IT Pros, developers, and managers from various Fortune 500 companies in the GTA. It was not a sales call.

Throughout the day, Microsofties Rob Labbe and Mohammad Akif went into significant detail about the current threat landscape facing all technology vendors and departments. One point stood out above the rest: security is not all about technology.

Security is about the policies implemented at the human level. Blinky-lighted devices look cool, but in the end they will not likely add much value to protecting your network. Herein lies the problem: not too many people realize this – hence the purpose of the meeting.

Towards the end of the meeting, as we were all letting the presentations sink in, I asked a relatively simple question:

What resources are out there for new/young people entering the security field?

The response was pretty much exactly what I was (unfortunately) expecting: nada.

Security, it seems, is mostly a self-taught topic. Yes, there are programs at some schools, but they tend to be academic – naturally. By this I mean there is no fluidity in the discussion; it's as if you are studying a snapshot of the IT landscape taken 18 months ago. Most security experts will tell you the landscape changes daily, if not multiple times a day. We need to keep up with those changes, and as any teacher will tell you, that's nearly impossible in an academic setting.

Keeping up to date with security is a manual process. You follow blogs, you subscribe to newsgroups and mailing lists, your company gets hacked by a new form of attack, etc., and in the end you have a reasonable idea of what was out there yesterday. And you know what? That's just the attack vectors! You need to follow a whole new set of blogs and mailing lists to understand how to mitigate those attacks. That sucks.

Another issue is the ramp-up required before you can even follow those daily updates. Security is tough when you're starting out. It involves so many different processes at so many different levels of application interaction that eyes glaze over at the thought of learning its ins and outs.

So here we have two core problems with security:

  1. Security changes daily – it’s hard to keep up
  2. It’s scary when you are new at this

Let's start by addressing the second issue. Security is a scary topic, but let's break it down into its core components.

  1. Security is about keeping data away from those who shouldn’t see it
  2. Security is about keeping data available for those who need to see it

At its core, security is simple. It starts getting tricky when you jump into the semantics of how to implement the core. So let’s address this too.

A properly working system will do what you intended it to do at a systematic level: calculate numbers, view customer information, launch a missile, etc. This is a fundamental tenet of application development. Security is about understanding the unintended consequences of what a user can do with that system.

These consequences include things like:

  • SQL Injection
  • Cross-Site Scripting (XSS) attacks
  • Cross-Site Request Forgery (CSRF) attacks
  • Buffer overflow attacks
  • Breaking encryption schemes
  • Session hijacking
  • etc.

Once you understand that these types of attacks can exist, everything else is just semantics. Those semantics are along the lines of figuring out best practices for system design, and that's really just a matter of studying.

Security is about understanding that anything is possible. Once you understand attacks can happen, you learn how they can happen. Then you learn how to prevent them from happening. To use a phrase I really hate using, security is about thinking outside the box.
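To make that concrete with the first item from the list above – SQL injection – here's a minimal sketch, assuming userInput and connectionString are defined elsewhere (table and parameter names are hypothetical):

using System.Data.SqlClient;

// Vulnerable: user input is concatenated straight into the SQL text.
// Input like ' OR '1'='1 changes the meaning of the query.
string query = "SELECT * FROM Users WHERE Name = '" + userInput + "'";

// Safer: a parameterized query treats the input strictly as data.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("SELECT * FROM Users WHERE Name = @name", conn))
{
    cmd.Parameters.AddWithValue("@name", userInput);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // process results...
    }
}

Same intended behaviour, but the second version has no unintended consequence waiting in the input.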

Most developers do the least amount of work possible to build an application. I am terribly guilty of this. In doing so, however, there is a very high likelihood that I didn't consider what else could be done with the same code. Making that consideration is (again, lame phrase) thinking outside the box.

It is in following this consideration that I can develop a secure system.

So… policies?

At the end of the day, however, I am a lazy developer. I will still do as little work as possible to get the system working, and frankly, that is not conducive to creating a secure system.

The only way to really make this work is to implement security policies that force certain considerations to be made. Each system is different, and each organization is different. No single policy will cover the scope of all systems for all organizations, but a policy itself is simple.

A policy is a rule that must be followed, and in this case, we are talking about a development rule.  This can include requiring certain types of tests while developing, or following a specific development model like the Security Development Lifecycle.  It is with these policies that we can govern the creation of secure systems.

Policies create an organization-level standard.  Standards are the backbone of security.

These standards fall under the category of semantics, mentioned earlier.  Given that, I propose an idea for learning security.

  • Understand the core ideology of security – mentioned above
  • Understand that policies drive security
  • Jump head first into the semantics starting with security models

The downside is that you will never understand everything there is to know about security.  No one will.

Perhaps it's not that flawed an idea.

How UAC Actually Works

This post has had a few false starts. It's a tough topic to cover, as it's still a controversial subject for many people. Hopefully we can enlighten some of them along the way.

From a high-level perspective, User Account Control (UAC) was developed to protect the user without necessarily removing administrative privileges. Any change to the system required a second validation. On older versions of Windows, an application running with administrative credentials could change any setting on the box. Viruses and malware became rampant because of this openness, given that the average user had administrative credentials. Most average users balked at the idea of having a limited user account, so Microsoft came up with an alternative for the new OS, Vista – a second form of validation. You told the computer you wanted to make a change, and it asked "are you sure?"

Logically it makes sense. Consider an instance where a devious application wants to change some setting; because Windows wants to verify the change is OK, it asks "are you sure?" If you respond no, the change doesn't happen. Simple enough. However, here we start running into issues. There are three perspectives to look at.

First, the end user. Simple changes to basic settings required validation. This annoyed most users, if not all of them. They didn't care why it was asking; they just wanted to delete shortcuts from their Start menu. Their reaction: turn off UAC. Bad idea, but security loses to usability in the case of the end user.

Second, the irate IT Pro/developer. Most people working in IT change system settings constantly. Given that, the UAC prompt would be seen many times a day and would, for lack of a better word, piss that person off. They didn't care what security it provided; it was a "stupid-useless-design" that should never have been created. Their reaction: turn off UAC. Once again, security loses to usability.

Third, the knowledgeable IT Pro/developer. Not a lot of people fell into this category. However, they tended to be the same people who fit the Lazy Admin category. When managed properly, UAC wasn't all that annoying because it wasn't seen all that often. Set it and forget it, and you rarely see the prompt. If you create the system image properly, you don't have to keep changing settings. It's a simple enough idea.

But…

Application compatibility is a pain. Most applications didn't understand UAC, so they never asked for validation and generally broke when they tried to do things they shouldn't have been doing in the first place: manipulating registry keys that don't belong to them, writing to system folders, reading data from low-level system APIs, etc. This was reason #1 for disabling UAC.

And now…

With the general availability of Windows 7 about 2.5 hours from now, it seems like a good time to discuss the changes to UAC in the latest version of Windows. The biggest change, of course, is when Windows decides to ask for validation.

Windows 7 introduces two new UAC levels. In Vista there was Validate Everything or Off. Windows 7 adds "do not notify me when I make changes to Windows settings". This comes into effect when the user changes a Windows setting, like display resolution: Windows is smart enough to realize it's the user making the change, and allows it. The second additional level is the same as the first, except it doesn't dim the desktop – the prompt appears on your normal desktop instead of the secure one.

Now we get into some fun questions. 

  • How does Windows know not to show the prompt?  It's fairly straightforward.  All Windows executables that shipped as part of the OS are signed with a specific certificate, and executables signed with that certificate are allowed to run without a prompt when started by the user.  This is only true for Windows settings, though.  You cannot implement this with 3rd-party applications – there is no auto-allow list (though an application can declare the privileges it needs up front; see the manifest sketch after this list).
  • How does Windows know it's a user starting the application?  Lots of applications can mimic mouse movements or keyboard commands, but those occur at a higher level than an actual mouse move.  Input devices like mice and keyboards have extremely low-level drivers, and only commands coming from those drivers are interpreted as user input.  You cannot spoof these commands.
  • Can you spoof mouse/keyboard input to accept the UAC request?  No.  The UAC prompt is created on a separate Windows desktop.  Other well-known desktops include the locked screen, the login screen, and the CardSpace admin application.  No application can cross these desktops, so an application running on your personal desktop cannot push commands into the UAC desktop.
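As promised above, a third-party application can still declare up front that it needs elevation, so the prompt appears once at launch rather than the app silently failing later. A minimal sketch of the relevant piece of an application manifest (this is the standard manifest schema, trimmed down; the other valid levels are asInvoker and highestAvailable):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges>
        <!-- "asInvoker" runs without elevation; "requireAdministrator"
             always triggers the UAC prompt at launch. -->
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>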

Mark Russinovich has an excellent article in TechNet Magazine that goes into more detail about changes to the UAC.  Hopefully this post at least covered all sides of the UAC debate.

ASP.NET WebForms are NOT Being Overthrown by MVC

It's always a fun day when the man himself, ScottGu, responds to my email. Basically it all started last week at Techdays in Toronto (pictures to follow, I promise).

Quite a few people asked me about MVC and whether or not it will replace Web Forms. My response was that it wouldn't, but I didn't have any tangible proof. I discussed new features in .NET 4.0, and how development is still going strong for future releases. Some didn't buy it.

So, earlier today I emailed Scott and asked him for proof.  This was his response:

Hi Steve,

Web Forms is definitely not going away – we are making substantial improvements to it with ASP.NET 4.0 (I’m doing a blog series on some of the improvements now).  ASP.NET MVC provides another option people can use for their UI layer – but it is simply an option, not a replacement.

In terms of the dev team size, the number of people on the ASP.NET team working on WebForms and MVC is actually about equal.  All of the core infrastructure investments (security, caching, config, deployment, etc) also apply equally to both.

Now, MVC is new.  MVC is powerful.  MVC is pretty freakin cool in what it can do.  But it won't replace WebForms.  Frankly, I like WebForms.  MVC does have its place, though.  I can see a lot of benefits to using it.  It alleviates a lot of boilerplate code in certain development architectures, and that is never a bad thing.

Long Live WebForms!

Roles and Responsibilities for Managing an Enterprise Web Site

The intent of this post is to create a summary definition of the roles required to adequately manage an enterprise website. It is designed to be used in tandem with a RACI (Responsible, Accountable, Consulted, and Informed) document to provide a unified management model for the web infrastructure.

Roles are neither inclusive nor exclusive: one person can fill more than one role, and more than one person can fill the same role, as long as every role is adequately fulfilled.

In a future post I will discuss the creation of a RACI document.

Roles

  • Database Administrator

Database administrators are charged with controlling website data resources. They use repeatable practices to ensure data availability, integrity, and security; recover corrupted data; eliminate data redundancy; and leverage tools to improve database performance and efficiency.

  • Application Administrator

Application Administrators are charged with installing, supporting, and maintaining applications, and with planning for and responding to service outages and other problems, including, but not limited to, troubleshooting end-user issues at the application level.

  • Server/Operating System Administrator

Server Administrators are charged with installing, supporting, and maintaining servers and other systems, as well as planning for and responding to server outages and other problems, including, but not limited to, troubleshooting Application Administration issues at the Operating System level.

  • User Account/Permissions Administrator

Account Administrators are charged with managing user accounts as well as permissions for users within the system. This includes, but is not limited to, locking and unlocking user accounts, as well as resetting passwords.

  • Hardware Administrator

Hardware Administrators are charged with managing server hardware and resources. This includes, but is not limited to, deployment of servers as well as troubleshooting issues such as faulty hardware.

  • Network Administrator

Network Administrators are charged with managing physical network resources such as routers and switches and logical network resources such as firewall rules and IP settings. This includes, but is not limited to, managing routing rules as well as troubleshooting connectivity issues.

These roles were created in an attempt to define job responsibilities at an executive level.  A RACI document is then suggested as the next step to define what each role entails at the management level.

ASP.NET Application Deployment Best Practices – Part 2

In my previous post I started a list of best practices that should be followed for deploying applications to production systems.  This is a continuation of that post.

  • Create new Virtual Application in IIS

Right-click [website app will live in] > Create Application

Creating a new application provides each ASP.NET application its own sandbox environment. The benefit to this is that site resources do not get shared between applications. It is a requirement for all new web applications written in ASP.NET.
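If you'd rather script this than click through IIS Manager, IIS 7's appcmd tool can do the same thing (the site name and paths below are placeholders):

%windir%\system32\inetsrv\appcmd.exe add app /site.name:"MyWebSite" /path:"/myApp" /physicalPath:"D:\webapps\myApp"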

  • Create a new application pool for Virtual App
    • Right click on Application Pools and select Add Application Pool
    • Define name: “apAppName” - ‘ap’ followed by the Application Name
    • Set Framework version to 2.0
    • Set the Managed Pipeline mode: Most applications should use the default setting

An application pool is a distinct process running on the web server. It segregates processes and system resources in an attempt to prevent errant web applications from allocating all system resources. It also prevents any nasty application crashes from taking the entire website down. It is also necessary for creating distinct security contexts for applications. Setting this up is essential for high availability.

  • Set the memory limit for application pool

There is a finite amount of resources available on the web servers. We do not want any one application to allocate them all. Setting a reasonable maximum per application lets the core website run comfortably and allows many applications to run at any given time. If it is a small, lightweight application, the limit can be set lower.
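As a sketch, the limit can also be set from the command line; the property is specified in kilobytes, and the 500 MB figure below is just an example:

%windir%\system32\inetsrv\appcmd.exe set apppool "apAppName" /recycling.periodicRestart.privateMemory:512000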

  • Create and appropriately use an app_Offline.htm file

Friendlier than an ASP.NET exception screen (aka the Yellow Screen of Death)

If this file exists, it automatically stops all traffic into the web application. Aptly named, it is best used when server updates might take the application down for an extended period of time. It should be styled to match the application. Best practice is to keep the file in the root directory of the application named app_Online.htm, so it can easily be found and renamed if an emergency update occurs.
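A minimal sketch of such a file is below. One quirk worth knowing: Internet Explorer replaces server responses smaller than 512 bytes with its own "friendly" error page, so pad the file (a comment block works) past that size.

<!-- app_Online.htm: rename to app_offline.htm to take the application offline -->
<html>
  <head><title>Down for Maintenance</title></head>
  <body>
    <h1>We'll be right back.</h1>
    <p>This application is down for scheduled maintenance. Please check back shortly.</p>
    <!-- Padding: keep this file larger than 512 bytes so Internet
         Explorer does not substitute its own friendly error page. -->
  </body>
</html>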

  • Don’t use the Default Website instance
    • This should be disabled by default
    • Either create a new website instance or create a Virtual Application under existing website instance

Numerous vulnerabilities in the wild assume the default website instance is in use, which creates reasonably predictable attack vectors given that the default properties exist. Disabling this instance and creating new instances immediately mitigates a number of attacks.

  • Create two Build Profiles
    • One for development/testing
    • One for production

Using two build profiles is very handy for managing configuration settings such as connection strings and application keys. It lessens the manageability issues associated with developing web applications remotely. This is not a necessity, though it does make development easier.
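There are several ways to wire a profile to its settings; one minimal sketch uses conditional compilation symbols (DEBUG is defined by the development profile by default, and the names below are hypothetical):

public static class BuildProfile
{
#if DEBUG
    // Compiled into the development/testing build profile.
    public const string ConnectionStringName = "DevDatabase";
#else
    // Compiled into the production build profile.
    public const string ConnectionStringName = "ProdDatabase";
#endif
}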

  • Don’t use the wwwroot folder to host web apps

Define a root folder for all web applications other than wwwroot

As with the previous item, there are vulnerabilities that use the default wwwroot folder as an attack vector. A simple mitigation is to move the root folders for websites to another location, preferably on a different disk than the Operating System.

These two lists sum up what I believe to be a substantial set of best practices for application deployments.  The intent was not to create a list of development best practices, or to prescribe a development model, but strictly to serve as an aid for deployment.  Defining development models should be left to you or your department.

ASP.NET Application Deployment Best Practices – Part 1

Over the last few months I have been collecting best practices for deploying ASP.NET applications to production.  The intent was to create a document describing the steps needed to deploy consistent, reliable, secure applications that are easily maintainable for administrators.  The result was an 11-page document.  I would like to take a couple of excerpts from it and list what I believe to be the key requirements for production applications.

The key is consistency.

  • Generate new encryption keys

The benefit to doing this is that internal hashing and encryption schemes use different keys between applications. If an application is compromised, any private keys that are recovered will have no effect on other applications. This is most important in applications that use Forms Authentication, such as the members' section. The Key Generator app uses the built-in .NET key generation code in the RNGCryptoServiceProvider class.
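For the curious, a minimal sketch of that kind of key generation – filling a buffer from RNGCryptoServiceProvider and hex-encoding it for use in a machineKey entry (the 64-byte length is just an example):

using System;
using System.Security.Cryptography;
using System.Text;

class KeyGenerator
{
    static void Main()
    {
        byte[] key = new byte[64]; // e.g. a validationKey
        RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
        rng.GetBytes(key); // fill with cryptographically strong random bytes

        StringBuilder hex = new StringBuilder(key.Length * 2);
        foreach (byte b in key)
            hex.AppendFormat("{0:X2}", b);

        Console.WriteLine(hex.ToString());
    }
}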

  • Version and give Assemblies Strong Names

Use AssemblyInfo.cs file:

using System.Reflection;

[assembly: AssemblyTitle("NameSpace.Based.AssemblyTitle")]
[assembly: AssemblyDescription("This is My Awesome Assembly…")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("My Awesome Company")]
[assembly: AssemblyProduct("ApplicationName")]
[assembly: AssemblyCopyright("Copyright © 2009")]
[assembly: AssemblyTrademark("TM Application Name")]
// Leave the culture empty for main assemblies; only satellite (resource) assemblies set one.
[assembly: AssemblyCulture("")]
// The version attributes are what actually version the assembly:
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]

Strong names and versioning are the backbone of .NET assemblies. They help distinguish between different versions of assemblies, and attach copyright attributes to code we have written internally. This is especially helpful if we decide to sell any of our applications.

  • Deploy Shared Assemblies to the GAC
    • Assemblies such as common controls
    • gacutil.exe -i "g:\dev\published\myApp\bin\myAssembly.dll"

If any assemblies are created that get used across multiple applications, they should be deployed to the GAC (Global Assembly Cache). Examples could include Data Access Layers, or common controls such as the Telerik controls. The benefit is that we will not have multiple copies of the same DLL in different applications. A requirement of doing this is that the assembly must be signed and have a strong (multipart) name.

  • Pre-Compile Site: [In Visual Studio] Build > Publish Web Site

Any application in production should run in a compiled state, meaning it should not have any code-behind files or App_Code class files on the servers. This limits the damage if our servers are compromised, as the attacker will not be able to modify the source.
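The same pre-compilation can be scripted with the aspnet_compiler tool that ships with the framework (the paths here are placeholders):

C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler.exe -v "/myApp" -p "g:\dev\myApp" "g:\dev\published\myApp"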

  • Encrypt SQL Connections and Connection Strings

Encrypt SQL Connection Strings

Aspnet_regiis.exe -pe connectionStrings -site myWebSite -app /myWebApp

Encrypt SQL Connections

Add ‘Encrypt=True’ to all connection strings before encrypting

SQL connection strings contain sensitive data, such as username/password combinations for access to database servers. These connection strings are stored in web.config files, which sit in plain text on the server. If malicious users access these files, they will have credentials to access the servers. Encrypting the strings prevents the config section from being read.

However, encrypting the connection string is only half of the issue. SQL transactions are transmitted across the network in plain text, so sensitive data could be acquired if a network sniffer were running on a compromised web server. SQL connections should also be encrypted using SSL certificates.
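The nice part is that decryption is transparent to the application: ASP.NET decrypts the protected section on the fly, so code reads the string exactly as before. A sketch (the connection string name is hypothetical):

using System.Configuration; // requires a reference to System.Configuration.dll

// Reads the decrypted value even though <connectionStrings> is encrypted on disk.
string connStr = ConfigurationManager.ConnectionStrings["myDatabase"].ConnectionString;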

  • Use a key file generated by the Strong Name tool:

C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\sn.exe

sn.exe -k "g:\dev\path\to\app\myAppKey.snk"

Signing an assembly provides validation that the code is ours. It also allows GAC deployment by giving the assembly a signature. The key file should be unique to each application and kept in a secure location.

  • Set retail="true" in machine.config

<configuration>
  <system.web>
    <deployment retail="true"/>
  </system.web>
</configuration>

In a production environment, applications should not show exception errors or trace messages. Setting the retail attribute to true is a simple way to turn off debugging and tracing, and to force the application to use friendly error pages.

In part 2 I continue my post on more best practices for deployment to a production environment.

Move Their Cheese! (and Change the Design)

I tend to complain a lot.  Which, frankly, doesn't do much for whatever I'm complaining about.  In most cases, it comes down to "okay, here is a problem, now someone else go and fix it."  There is a direct correlation to how many people I annoy, too: the number of people I annoy increases as the magnitude of my complaining-ness (hey, a new word) increases:

[Graph: number of people annoyed rising with magnitude of complaining]

If I wanted to change something, obviously I'm going about it the wrong way.  However, there is a direct correlation between how often I do something wrong and the likelihood I will eventually get it right.  See the previous image.  What that means is that if I keep screwing something up, eventually I am bound to get it right.  However, what is not necessarily apparent in the chart is that if I do nothing, I won't improve upon my actions.  Maybe it is apparent, I don't know – I'm still working on it.

The reason I bring this up is that I keep hearing people bash, complain about, and hate the Office Ribbon and the application Ribbons throughout Windows 7:

[Screenshot: the Office 2007 Ribbon]

The major complaint has been that people can't find what they are looking for anymore.  There aren't any menus, so they can't figure out how to set [insert obscure property].  It doesn't make sense to them; they now have to change the way they think about the application.  What is unfortunate about this is that menus are a horrible interface.  You shouldn't have to dig through six layers of menus to change a single property, and that's what Office 2003 became.  The Ribbon has its own problems, but it also greatly increases user productivity once the user knows how to use it effectively.  And therein lies a major problem.

Most end-users don’t like when you move their cheese.

Well, now we have a problem, because people also want improved systems.  Improve the system, but don't change it.  This paradox is why fundamentally different – game-changing – designs aren't seen all that often.  We stick with what we already know because if we deviate, people will complain.  It's a very tough way to create a better interface.

So how do you create a better interface?  You keep changing it.  Guaranteed the first couple of designs are going to annoy people: i.e. the Ribbon.

This is good.

If you keep failing at designs, that means eventually you are bound to figure out what kind of interface works best.  You will never figure it out if you never change.  Without MicroBating MasterSoft’s (hey look, two new words) ego, I must say that Microsoft is doing well in this area.  They keep making lousy design decisions.  See Expression Blend UI, and listen to most non-technical office workers using Office 2007.  I’m sure there are quite a few instances in other applications as well.  However, and I must make this clear, Microsoft is doing the right thing.  They are actively trying to create better interfaces.  Yes, it will piss people off (it’s pissed me off quite a few times), but at least they are making the effort.  And that’s what counts.

EDIT: P.S. I do like the Ribbon.

Stop Complaining About Software Expenses

It’s been a long week, and it’s only Monday.  It all started with an off-the-cuff comment.  It was of the petty nature, and it certainly wasn’t accurate.  It seems that is usually the case with petty comments.

I was berated for suggesting SharePoint Services as a replacement for our ageing intranet, and the commenter responded with a quick "SharePoint?  Microsoft makes that, it'll cost too much.  Our current Java site works just fine, and it's free."  Or something of that nature.

How do you respond to a petty comment?  It’s pretty damn hard:

  1. While Microsoft Office SharePoint Server 2007 does cost money for licensing, Windows SharePoint Services 3.0 (which MOSS is built on) is free.  Not free as in speech, but free as in beer.  Always has been. 
  2. Java is a terrible language for websites.  It’s slow, and none of the developers in the company know Java.  We all program with .NET languages.
  3. The current intranet is running on an AS/400.
  4. The bulk of the stuff we do on our current intranet could very easily be done in SharePoint, without any development.  And, we can also increase productivity with the added features of team workspaces and free templates for other departments.
  5. The only cost will be in man-hours setting the server up, and migrating content.

Those have been my main arguments since I started working here.  We are a Microsoft shop, but very often choose non-Microsoft products.  Hmm…

The main reason we don’t use Microsoft products is cost.  Plain and simple.  Ironically, that is also the same reason WHY we use Microsoft products.

We use SQL Server, Windows Server 2008, Active Directory (finally!), IIS, MOSS (soon), and program in C#.  We don't use Office 2007, only Office 2003; some computers are still on Windows 2000 and XP.  Only one computer is running Vista, and two are running Windows 7.  But then again, we are a not-for-profit company.  Budgets are tight.

This post is NOT a comment on our current state of technology, because like I said in a previous post, we do a pretty good job of staying on the cutting edge in a few cases.

This post IS a comment on the people out there who think cost is the only thing to look at when evaluating a product.  For the love of god, STOP bitching about price.  START bitching about quality.

I can't stand bad software.  People don't pay for good software, but then complain about its quality.  Come on!  There is a formula that calculates the cost of a piece of software over time.  It takes into account the initial cost and the cost of the updates that follow.  It's a simple y = mx + b formula, where b is the initial cost, m is the cost per update, and x is the number of updates.

Now, when software has a higher initial cost, you tend to assume it's of higher quality.  Put that into the equation, and the number of updates, and the cost to implement them, go down.  Over the life of the product, it's cheaper to go with the software that was initially more expensive.  This is basic business.
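To put some entirely invented numbers into that formula:

// y = m*x + b: total cost = (cost per update) * (number of updates) + (initial cost).
// These figures are made up purely for illustration.
int cheapTotal   = 800 * 12 + 0;     // "free" software: $0 up front, 12 costly updates  = $9,600
int qualityTotal = 250 * 6  + 4000;  // pricier software: $4,000 up front, 6 cheap updates = $5,500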

What this basic business formula doesn't show you is the added headache you get with crappy software.  You tend to end up with silos of systems and silos of data.  You don't get integration.  This is where the cost skyrockets.  Or, more accurately, this is where productivity decreases.

Ironically…

SharePoint Services 3.0 is free.  It doesn’t cost anything to use.  It’s easy to use, and integrates with most of our internal systems.  I just ruined my entire argument.  Sorta.  SharePoint is a quality piece of software, and over time, it will cost less to use and maintain than any of the other intranet/middleware applications out there.  Most people don’t realize this.

I'll probably get flak for this one: most people don't complain about software expenses.  They complain about Microsoft expenses.

  • “We give Microsoft too much money, and don’t get enough in return.”
  • “There must be better software vendors out there than Microsoft that are cheaper.”
  • “Why bother upgrading; XP Works fine.”

Have you seen the cost of a friggen Oracle license?  What about IBM's iSeries?  Novell's GroupWise?  My jaw dropped when I saw the cost of these things.  I can't say a single nice thing about GroupWise – it's a terrible product.  IBM's iSeries is pretty good, but you're limited in what you can do with it.  Oracle knows databases, but a license costs more than a good chunk of a department's salary.

Microsoft gets most of our money because it has quality products at a good price.  Look at a few competing vendors' products and compare cost, quality, and the ability to integrate across platforms.  Revelation is a wonderful thing.  You might think twice before settling on cost.

Presenting at Techdays 2009!

Still working out session details, but it looks like I will be presenting in Ottawa and Montreal for Techdays 2009.  I will also be loitering around the Toronto event soaking up all the techie goodness, so come find me at any of the three events.  We can talk shop, shoot the breeze, or just mill about having a good time.

I promise I won’t embarrass anyone.  Except maybe myself.  But that’s a warning for all occasions.

Here are the dates of the events across Canada.  Buy your tickets before the early-bird deal runs out!

City       Date             Venue
Vancouver  September 14-15  Vancouver Convention Centre
Toronto    September 29-30  Metro Toronto Convention Centre
Halifax    November 2-3     World Trade & Convention Centre
Calgary    November 17-18   Calgary Stampede
Montreal   December 2-3     Mont-Royal Centre
Ottawa     December 9-10    Hampton Inn & Convention Centre
Winnipeg   December 15-16   Winnipeg Convention Centre

The early-bird price is $299.  The regular price is $599.

I will post more on the sessions I will be presenting at a later date when I get the full details.

See you there!

Poor Quebec, This is Terrible

[Image: contest entry page, showing the contest is void in Quebec]

Microsoft certainly isn't to blame here – it's a law in Quebec that prevents contests like this from running in the province.  Better chance for me to win it, though!