It only took a couple of quick searches (Googling with Bing), but in IIS 7, if you create a request for a certificate, have it issued by a CA, and then complete the request, you may find it blows up with this message box:
CertEnroll::CX509Enrollment::p_InstallResponse: ASN1 bad tag value met. 0x8009310b (ASN: 267)
All it means is that the CA that issued the certificate isn’t trusted on the server.
I came across this in a test environment I was building. I had a Domain with
CA Services, and a server that existed outside the domain. I used the domain
CA to create the certificate, but because the web server wasn’t part of the domain,
it didn’t trust the CA.
My fix was to add the CA as a trusted Root Authority on the web server.
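If you hit the same thing, the quickest fix I know of is certutil, assuming you’ve exported the CA’s certificate to a .cer file (the path here is just an example):
certutil -addstore Root C:\temp\MyDomainCA.cer
You can do the same thing through the Certificates MMC snap-in by importing the certificate into the Trusted Root Certification Authorities store for the Local Computer.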
Earlier today, Cory Fowler suggested I write up a post discussing the differences between the AntiXss library and the methods found in HttpUtility, and how it helps defend against cross-site scripting (XSS). As I was thinking about what to write, it occurred to me that I really had no idea how it did what it did, or why it differed from HttpUtility. <side-track>I’m kinda wondering how many other people out there run into the same thing? We are told to use some technology because it does xyz better than abc, but when it comes right down to it, we aren’t quite sure of the internals. Just a thought for later, I suppose.</side-track>
A Quick Refresher
To quickly summarize what XSS is: if you have a textbox on your website that someone can enter text into, and you then display that same text on another page, the user could maliciously add in <script> tags to do anything they wanted with JavaScript. This usually results in redirecting to another website that shows advertisements or tries to install malware.
The way to stop this is to not trust any input, and to encode any character that could be part of a tag into its HTML-encoded entity.
HttpUtility does this though, right?
The HttpUtility class definitely does do this. However, it is relatively limited in how it encodes possibly malicious text. It works by encoding specific characters, like the angle brackets < and >, into &lt; and &gt;. This can get tricky, because you could theoretically bypass those specific characters (somehow – speculative).
Enter AntiXss
The AntiXss library works in essentially the opposite manner. It has a white-list of allowed characters and encodes everything else. The allowed characters are the usual a-z, A-Z, 0-9, etc.
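To make the difference concrete, here’s a minimal sketch comparing the two approaches on the same input (the variable names are mine):
using System.Web;
using Microsoft.Security.Application;

string input = "<script>alert('xss')</script>";

// Black-list style: encodes a specific set of known-bad characters (<, >, &, ", ...)
string viaHttpUtility = HttpUtility.HtmlEncode(input);

// White-list style: encodes everything outside the allowed set (a-z, A-Z, 0-9, etc.)
string viaAntiXss = AntiXss.HtmlEncode(input);
Both calls return encoded markup that the browser renders as text instead of executing.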
Further Reading
I’m not really doing you, dear reader, any favors by reiterating what dozens of people have said before me (and probably said it better), so here are a couple of links that contain loads of information on actually using the AntiXss library and protecting your website from cross-site scripting:
Last week Microsoft published the 5th revision to the SDL. You can get it here: http://www.microsoft.com/security/sdl/default.aspx.
Of note, there are additions for .NET -- specifically ASP.NET and the MVC Framework.
Two key things I noticed initially were the addition of System.Web.UI.Page.ViewStateUserKey, and the ValidateAntiForgeryToken attribute in MVC.
Both have existed for a while, but they are now added to the requirements for final testing.
ViewStateUserKey is a page-specific identifier for a user. Sort of a viewstate session. It’s used to prevent forging of form data from other pages, or in fancy terms, it prevents Cross-site Request Forgery attacks.
Imagine a web form that has a couple of fields on it – sensitive fields, say money transfer fields: account to, amount, transaction date, etc. You need to log in, fill in the details, and click submit. That submit POSTs the data back to the server, and the server processes it. The only validation that goes on is checking that the viewstate hasn’t been tampered with.
Okay, so now consider that you are still logged in to that site, and someone sends you a link to a funny picture of a cat. Yay, kittehs! Anyway, on that page is a simple set of hidden form tags with malicious data in them. Something like their account number, and an obscene amount for the cash transfer. On page load, JavaScript POSTs that form data to the transfer page, and since you are already logged in, the server accepts it. Sneaky.
The reason this worked is that the viewstate was never modified. It could be the same viewstate across multiple sessions. Therefore, the way you fix this is to add a session identifier to the viewstate through the ViewStateUserKey property. Be forewarned: you need to set it in Page_Init, otherwise it’ll throw an exception.
The easiest way to accomplish this is:
protected void Page_Init(object sender, EventArgs e)
{
    // Tie the viewstate to this user's session so a forged request
    // from another session fails viewstate validation.
    ViewStateUserKey = Session.SessionID;
}
Oddly simple. I wonder why this isn’t the default in the newer versions of ASP.NET?
Next up is the ValidateAntiForgeryToken attribute.
In MVC, you add this attribute to all POST action methods. This attribute requires that all POSTed forms have a token associated with each request. Each token is session-specific, so if it’s an old or other-session token, the POST will fail. Given that, you need to add the token to the page. To do that, you use the Html.AntiForgeryToken() helper to add the token to the form.
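Wiring it up looks something like this (the Transfer action and TransferModel are made-up names for illustration):
<%-- In the view: emit the hidden token inside the form --%>
<% using (Html.BeginForm()) { %>
    <%= Html.AntiForgeryToken() %>
    <!-- ...form fields... -->
<% } %>

// In the controller: reject any POST whose token is missing or stale
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Transfer(TransferModel model)
{
    // process the transfer...
    return RedirectToAction("Index");
}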
It prevents the same type of attack as the ViewStateUserKey, albeit in a much simpler
fashion.
The Best of Intentions
So you’ve built this application. It’s a brilliant application. Its design is spectacular, the architecture is flawless, the coding is clean and coherent, and you even followed the SDL best practices and created a secure application.
There is one minor problem though. The interface is terrible. It’s not
intuitive, and settings are poorly described in the options window. A lot of
people wouldn’t necessarily see this as a security issue, but more of an interaction
bug -- blame the UX people and get on with your day.
Consider this (highly hyperbolic) options window though:
How intuitive is it? Notsomuch, eh? You have to really think about what
it’s asking. Worst of all, there is so much extraneous information there that
is supposed to help you decide.
At first glance I’m going to check it. I see “security” and “enable” in the text, and naturally assume it’s asking me if I want to make it run securely (let’s say for the sake of argument it speaks the truth), because god knows I’m not going to read it all the way through the first time.
By the second time through, I’ve already assumed I know what it’s asking, read it fully, gotten confused, and struggled with what it has to say.
A normal end user will not even get to this point. They’ll check it, and click
save without thinking, because of just that – they don’t want to have to think about
it.
Now, consider this:
Isn’t this more intuitive? Isn’t it easier to look at? But wait, does
it do the same thing? Absolutely. It asks the user if they want to run
a secure application.
The Path to Security Hell
When I first considered what I wanted to say on this topic, I asked myself, “how can this really be classified as a security bug?” After all, it’s the user’s fault for checking it, right?
Well, no. It’s our fault. We developed it securely, we told them they
needed it to be run securely, and we gave them the option to turn off security (again,
hyperbole, but you get the point). It’s okay to let them choose if they want
to run an insecure application, but if we confuse them, if we make it difficult to
understand what the heck is going on, they aren’t actually doing what they want and
we therefore failed at making the application they wanted secure, secure.
It is our problem.
So what?
Most developers I know will, at the very least, make an attempt to write a secure application. They check for buffer overflows, SQL Injection, Cross Site Scripting, blah blah blah. Unfortunately some, myself included, tend to forget that end users don’t necessarily know about security, nor care about it. We do what most developers do. We tell them what we know: “There has been a fatal exception at 0x123FF567!!one! The index was outside the bounds of the array. We need to destroy the application threads and process.”
That sounds fairly similar to most error messages we display to our end users. Frankly, they don’t care about it. They are just pissed that the work they were doing was just lost.
The funny thing is, we really don’t notice this. When I was building the first settings window above, I kept reading the text and thinking to myself, it makes perfect sense. The reason for this is simple: what I wrote is my logic. I wrote the logic, I wrote the text, I inherently understand what I wrote. We do this all the time. I do this all the time, and then I get a phone call from some user saying “wtf does this mean?”, aaaaaaand then I change it to something a little more friendly. By the 4th or so iteration of this I usually get it right (or maybe they just get tired of calling?).
So what does this say about us? Well, I’m not sure. I think it’s saying we need to
work on our user interface skills, and as an extension of that, we need to work on
our soft skills – our interpersonal skills. Maybe. Just a thought.
SharePoint 2010 allows administrators to pre-configure service accounts to be used when configuring SharePoint components. This way administrators don't have to remember or lookup usernames and passwords for service accounts every time they configure a new web application or SharePoint service. To configure managed service accounts in SharePoint 2010:
- Open SharePoint Central Administration
- Click on Application Management | Manage Web Applications
- Click on the New button on the ribbon
- Fill in the text fields in the IIS Website, Security Configuration, and Public URL sections, until you reach the Application Pool section. This is where the fun begins...
- You can use one of the already-configured managed accounts or create a new one. To use an existing managed account, simply pick one from the drop-down list. To create a new managed account, click on Register New Managed Account
- Fill in the username and password fields (Note: for some reason there is no "confirm password" field, so make sure you type the password correctly the first time.) This screen also allows you to configure Automatic Password Change schedule. Pretty neat.
- Click OK to save the newly configured managed account. Now you're back at the Create New Web Application window, where you can see your managed account in the drop-down list.
- If administrators want to pre-configure managed accounts, they can do so in the SharePoint Central Administration. Click on Security | Configure managed accounts (under General Security section). Click on Register Managed Account to configure new managed account, or click on Edit to make changes to an existing managed account.
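As an aside, if you prefer scripting, the same registration can be done from the SharePoint 2010 Management Shell (a quick sketch; Get-Credential prompts for the service account's username and password):
New-SPManagedAccount -Credential (Get-Credential)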
Just thought it was a good tip to share...
I’ve gotten two emails like this in the last week or so. One was from DHL Shipping,
and this one was from UPS. Attached to the email was a zip, with what I presume
to be a Trojan of some sort.
The content of the email was:
Dear customer!
We were not able to deliver the postal package which was sent on the 28th of December
in time
because the recipient’s address is incorrect.
Please print out the invoice copy attached and collect the package at our department.
United Parcel Service of America.
For a moment I thought the initial email was legit, until I saw it had an attachment.
After reading it, I called phooey on it and deleted it. Then I saw the UPS email.
I’ll have to dig through the application that came in the zip, and see what’s going
on.
I wonder how this type of attack will pan out?
I wish I could say that I came up with this list, but alas, I did not. I came across it this morning on the Assessment, Consulting & Engineering Team blog from Microsoft. They are a core part of the Microsoft internal IT security group, and are around to provide resources for internal and external software developers. These 6 rules are key to developing secure applications, and they should be followed at all times.
Personally, I try to follow the rules closely, and am working hard at creating an SDL for our department. Aside from Rule 1, you could consider each step a sort of checklist for when you sign off on, or preferably design, the application for production.
--
Rule #1: Implement a Secure Development Lifecycle in your organization.
This includes the following activities:
- Train your developers and testers in secure development and secure testing, respectively
- Establish a team of security experts to be the ‘go to’ group when people want advice on security
- Implement Threat Modeling in your development process. If you do nothing else, do this!
- Implement Automatic and Manual Code Reviews for your in-house written applications
- Ensure you have ‘Right to Inspect’ clauses in your contracts with vendors and third parties that are producing software for you
- Have your testers include basic security testing in their standard testing practices
- Do deployment reviews and hardening exercises for your systems
- Have an emergency response process in place and keep it updated
If you want some good information on doing this, email me and check out this link:
http://www.microsoft.com/sdl
Rule #2: Implement a centralized input validation system (CIVS) in your organization.
These CIVS systems are designed to perform common input validation on commonly accepted input values. Let’s face it: as much as we’d all like to believe that we are the only ones doing things like registering users or recording data from visitors, it’s actually all the same thing.
When you receive data, it will very likely be an integer, decimal, phone number, date, URI, email address, post code, or string. The values and formats of the first 7 of those are very predictable. Strings are a bit harder to deal with, but they can all be validated against known good values. Always remember to check for the three F’s: Form, Fit, and Function.
- Form: Is the data the right type of data that you expect? If you are expecting a quantity, is the data an integer? Always cast data to a strong type as soon as possible to help determine this.
- Fit: Is the data the right length/size? Will the data fit in the buffer you allocated (including any trailing nulls, if applicable)? If you are expecting an Int32, or a Short, make sure you didn’t get an Int64 value. Did you get a positive integer for a quantity rather than a negative integer?
- Function: Can the data you received be used for the purpose it was intended? If you receive a date, is the date value in the right range? If you received an integer to be used as an index, is it in the right range? If you received an int as a value for an Enum, does it match a legitimate Enum value?
In the vast majority of cases, string data being sent to an application will be 0-9, a-z, A-Z. In some cases, such as names or currencies, you may want to allow –, $, % and ‘. You will almost never need <> {} or [] unless you have a special use case, such as http://www.regexlib.com, in which case see Rule #3.
You want to build this as a centralized library so that all of the applications in your organization can use it. This means if you have to fix your phone number validator, everyone gets the fix. By the same token, you have to inspect and scrutinize the crap out of the CIVS to ensure that it is not prone to errors and vulnerabilities, because everyone will be relying on it. But applying heavy scrutiny to a centralized library is far better than having to apply that same scrutiny to every single input value of every single application. You can be fairly confident that as long as everyone is using the CIVS, they are doing the right thing.
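As a rough illustration (the class and method names here are hypothetical, not from any particular library), a couple of CIVS entry points might look something like this:
using System.Text.RegularExpressions;

public static class InputValidator
{
    // Form + Function: cast to a strong type as early as possible,
    // and reject values that make no sense for the purpose.
    public static int? AsQuantity(string raw)
    {
        int value;
        if (!int.TryParse(raw, out value)) return null; // wrong form
        if (value < 0) return null;                     // wrong function
        return value;
    }

    // Fit + Function: validate against a known-good pattern
    // (North American phone numbers, e.g. 416-555-0123).
    private static readonly Regex Phone = new Regex(@"^\d{3}-\d{3}-\d{4}$");

    public static bool IsPhoneNumber(string raw)
    {
        return raw != null && Phone.IsMatch(raw);
    }
}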
Fortunately, implementing a CIVS is easy if you start with the Enterprise Library Validation Application Block, which is a free download from Microsoft that you can use in all of your applications.
Rule #3: Implement input/output encoding for all externally supplied values.
Due to the prevalence of cross site scripting vulnerabilities, you need to encode any values that came from an outside source that you may display back to the browser (even in embedded browsers in thick client applications). The encoding essentially takes potentially dangerous characters like < or > and converts them into their HTML, HTTP, or URL equivalents.
For example, if you were to HTML encode <script>alert('XSS Bug')</script>, it would look like: &lt;script&gt;alert('XSS Bug')&lt;/script&gt;
A lot of this functionality is built into the .NET Framework. For example, the code to do the above looks like:
Server.HtmlEncode("<script>alert('XSS Bug')</script>");
However, it is important to know that Server.HtmlEncode only encodes about 4 of the nasty characters you might encounter. It’s better to use a more ‘industrial strength’ library like the Anti-Cross Site Scripting (AntiXSS) library, another free download from Microsoft.
This library does a lot more encoding, and will do HTTP and URI encoding based on a white-list. The above encoding would look like this with AntiXSS:
using Microsoft.Security.Application;
AntiXss.HtmlEncode("<script>alert('XSS Bug')</script>");
You can also run a neat test system that a friend of mine developed to test your application for XSS vulnerabilities in its outputs. It is aptly named the XSS Attack Tool.
Rule #4: Abandon Dynamic SQL
There is no reason you should be using dynamic SQL in your applications anymore. If
your database does not support parameterized stored procedures in one form or another,
get a new database.
Dynamic SQL is when developers build a SQL query in code and then submit it to the database to be executed as a string, rather than calling a stored procedure and feeding it the values. It usually looks something like this:
(for you VB fans)
dim sql
' Vulnerable: user input from the query string is concatenated directly into the SQL text
sql = "Select ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = "
sql = sql & request.querystring("ArticleID")
set results = objConn.execute(sql)
In fact, this article from 2001 is chock full of what NOT to do, including dynamic SQL in a stored procedure.
Here is an example of a stored procedure that is vulnerable to SQL Injection:
Create Procedure GenericTableSelect @TableName VarChar(100)
AS
Declare @SQL VarChar(1000)
SELECT @SQL = 'SELECT * FROM '
SELECT @SQL = @SQL + @TableName
Exec ( @SQL )
GO
See this article for a look at using
Parameterized Stored Procedures.
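For comparison, a parameterized version of the earlier article query might look like this in .NET (connectionString and articleId are assumed to already exist):
using System;
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "SELECT ArticleTitle, ArticleBody FROM Articles WHERE ArticleID = @ArticleID", conn))
{
    // The value travels as a typed parameter, never as part of the SQL text.
    cmd.Parameters.AddWithValue("@ArticleID", articleId);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader["ArticleTitle"]);
        }
    }
}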
Rule #5: Properly architect your applications for scalability and failover
Applications can be brought down by a simple crash. Or a not so simple one. Architecting
your applications so that they can scale easily, vertically or horizontally, and so
that they are fault tolerant will give you a lot of breathing room.
Keep in mind that fault tolerance is not just a way of saying that applications restart when they crash. It means that you have a proper exception handling hierarchy built into the application. It also means that the application needs to be able to handle situations that result in server failover. This is usually where session management comes in.
The best fault tolerant session management solution is to store session state in SQL
Server. This also helps avoid the server affinity issues some applications have.
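In ASP.NET this is mostly a configuration change in web.config (a sketch; the connection string is a placeholder, and the session database has to be prepared first, e.g. with aspnet_regsql.exe -ssadd):
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI;" />
</system.web>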
You will also want a good load balancer up front. This will help distribute load evenly so that, hopefully, you won’t run into the failover scenario often.
And by all means do NOT do what they did on the site in the beginning of this article.
Set up your routers and switches to properly shunt bad traffic or DOS traffic. Then
let your applications handle the input filtering.
Rule #6: Always check the configuration of your production servers
Configuration mistakes are all too common. Standard out-of-the-box deployments with proper server hardening are probably a good secure default, yet there are a lot of people out there changing stuff that shouldn’t be changed. You may remember when Bing went down for about 45 minutes. That was due to configuration issues.
To help address this, we have released the Web Application Configuration Auditor (WACA).
This is a free download that you can use on your servers to see if they are configured
according to best practice. You can download it at this link.
You should establish a standard SOE (Standard Operating Environment) for your web servers that is hardened and properly configured. Any variations to that SOE should be scrutinised and go through a very thorough change control process. Test them first before turning them loose on the production environment…please.
So with all that being said, you will be well on your way to stopping the majority of attacks you are likely to encounter on your web applications. Most of the attacks that occur are SQL Injection, XSS, and improper configuration issues. The above rules will knock out most of them. In fact, input validation is your best friend. Regardless of inspecting firewalls and the like, the application is the only link in the chain that can make an intelligent and informed decision on whether the incoming data is actually legit or not. So put your effort where it will do you the most good.
The other day I had the opportunity to take part in an interesting meeting with Microsoft.
The discussion was security, and the meeting members were 20 or so IT Pros, developers, and managers from various Fortune 500 companies in the GTA. It was not a sales call.
Throughout the day, Microsofties Rob Labbe and Mohammad Akif went into significant
detail about the current threat landscape facing all technology vendors and departments.
There was one point that was paramount: security is not all about technology. Security is about the policies implemented at the human level. Blinky-lighted devices look cool, but in the end, they will not likely add value to protecting your network. Herein lies the problem. Not too many people realize this -- hence the purpose of the meeting.
Towards the end of the meeting, as we were all letting the presentations sink in,
I asked a relatively simple question:
What resources are out there for new/young people entering the security field?
The response was pretty much exactly what I was (unfortunately) expecting: nada.
Security, it seems, is mostly a self-taught topic. Yes, there are some programs at schools out there, but they tend to be academic – naturally. By this I mean that there is no fluidity in discussion. It’s as if you are studying a snapshot of the IT landscape that was taken 18 months ago. Most security experts will tell you the landscape changes daily, if not multiple times a day. Therefore we need to keep up with the changes in security, and as any teacher will tell you, that’s impossible in an academic situation.
Keeping up to date with security is a manual process. You follow blogs, you subscribe to newsgroups and mailing lists, your company gets hacked by a new form of attack, etc., and in the end you have a reasonable idea of what was out there yesterday. And you know what? That’s just the attack vectors! You need to follow a whole new set of blogs and mailing lists to understand how to mitigate such attacks. That sucks.
Another issue is the ramp up to being able to follow daily updates. Security is tough
when starting out. It involves so many different processes at so many different levels
of the application interactions that eyes glaze over at the thought of learning the
ins and outs of security.
So here we have two core problems with security:
- Security changes daily – it’s hard to keep up
- It’s scary when you are new at this
Let’s start by addressing the second issue. Security is a scary topic, but let’s break it down into its core components.
- Security is about keeping data away from those who shouldn’t see it
- Security is about keeping data available for those who need to see it
At its core, security is simple. It starts getting tricky when you jump into the semantics
of how to implement the core. So let’s address this too.
A properly working system will do what you intended it to do at a systematic level: calculate numbers, view customer information, launch a missile, etc. This is a fundamental tenet of application development. Security is about understanding the unintended consequences of what a user can do with that system.
These consequences are things like:
- SQL Injection
- Cross Site Scripting attacks
- Cross Site Request Forgery attacks
- Buffer overflow attacks
- Breaking encryption schemes
- Session hijacking
- etc.
Once you understand that these types of attacks can exist, everything is just semantics
from this point on. These semantics are along the line of figuring out best practices
for system designs, and that’s really just a matter of studying.
Security is about understanding that anything is possible. Once you understand attacks
can happen, you learn how they can happen. Then you learn how to prevent them from
happening. To use a phrase I really hate using, security is about thinking outside
the box.
Most developers do the least amount of work possible to build an application. I am
terribly guilty of this. In doing so however, there is a very high likelihood that
I didn’t consider what else can be done with the same code. Making this consideration
is (again, lame phrase) thinking outside the box.
It is in following this consideration that I can develop a secure system.
So… policies?
At the end of the day however, I am a lazy developer. I will still do as little
work as possible to get the system working, and frankly, this is not conducive to
creating a secure system.
The only way to really make this work is to implement security policies that force
certain considerations to be made. Each system is different, and each organization
is different. There is no single policy that will cover the scope of all systems
for all organizations, but a policy is simple.
A policy is a rule that must be followed, and in this case, we are talking about a
development rule. This can include requiring certain types of tests while developing,
or following a specific development model like the Security Development Lifecycle.
It is with these policies that we can govern the creation of secure systems.
Policies create an organization-level standard. Standards are the backbone of
security.
These standards fall under the category of semantics, mentioned earlier. Given
that, I propose an idea for learning security.
- Understand the core ideology of security – mentioned above
- Understand that policies drive security
- Jump head first into the semantics, starting with security models
The downside is that you will never understand everything there is to know about security.
No one will.
Perhaps it’s not that flawed of an idea.
Over the weekend, good friend Mitch Garvis decided it was necessary to rebuild his home network. Now, most home networks don’t have a $25,000 server at the core. This one did. Given that, we decided to do it right. The network architecture called for virtualization, so we decided to use Hyper-V. The network called for management, so we decided to install SCCM and Ops Manager. The network called for simplicity, so we used Active Directory.
However, we decided to up the ante and install this all on Server Core. Now, the tricky part is that we needed to install Active Directory. The reason this became tricky is that there is no documented procedure out there on how to install a new forest on Core. There are lots of very smart people on the internet who have described how to install new domains as part of existing forests, but not new forests. So we got to work.
After running dcpromo a few times we realized we couldn’t create the Forest by throwing
commands at it. It occurred to one of us that we should try creating an unattend.txt
install file. After a few tries, we figured out the proper structure of the
file, and after 10 minutes of watching the CLI spit out random sentences, we had a
new domain.
The structure of the file is fairly simple, but you need the correct variable data.
We used the following unattend.txt file to create the new domain:
[DCInstall]
InstallDNS=yes
NewDomain=forest
NewDomainDNSName=swmi.ca
DomainNetBiosName=SWMI
SiteName=Default-First-Site-Name
ReplicaOrNewDomain=domain
ForestLevel=3
DomainLevel=3
DatabasePath="%systemroot%\ntds"
LogPath="%systemroot%\ntds"
RebootOnCompletion=yes
SYSVOLPath="%systemroot%\sysvol"
SafeModeAdminPassword=Pa$$w0rd
Once the file was created, we put it in the root of C: on the Server Core machine and typed the following command:
dcpromo /unattend:c:\unattend.txt
Surprisingly, it worked. After checking with Microsoft, we learned this is a supported option, and it’s not a hack in any way. It’s just undocumented.
Until now.
Reference: Mitch Garvis, SWMI, http://garvis.ca/blogs/mitch/archive/2009/10/12/creating-a-new-domain-forest-on-server-core.aspx
The intent of this post is to create a summary definition of the roles required to adequately manage an enterprise website. It is designed to be used in tandem with a RACI (Responsible, Accountable, Consulted, Informed) document to provide a unified management model for the web infrastructure developed.
Each role is neither inclusive nor exclusive in that any one person can qualify for
more than one role, and more than one person can qualify for the same role, as long
as each role has been fulfilled adequately.
In a future post I will discuss the creation of a RACI document.
Roles
- Database Administrator
Database Administrators are charged with controlling website data resources. They use repeatable practices to ensure data availability, integrity, and security, recover corrupted data, eliminate data redundancy, and leverage tools to improve database performance and efficiency.
- Application Administrator
Application Administrators are charged with installing, supporting, and maintaining
applications, and planning for and responding to service outages and other problems
including, but not limited to, troubleshooting end-user issues at the application
level.
- Server/Operating System Administrator
Server Administrators are charged with installing, supporting, and maintaining servers and other systems, as well as planning for and responding to server outages and other problems including, but not limited to, troubleshooting Application Administration issues at the Operating System level.
- User Account/Permissions Administrator
Account Administrators are charged with managing user accounts as well as permissions
for users within the system. This includes, but is not limited to, locking and unlocking
user accounts, as well as resetting passwords.
- Hardware Administrator
Hardware Administrators are charged with managing server hardware and resources. This includes, but is not limited to, deployment of servers as well as troubleshooting issues such as faulty hardware.
- Network Administrator
Network Administrators are charged with managing physical network resources, such as routers and switches, and logical network resources, such as firewall rules and IP settings. This includes, but is not limited to, managing routing rules as well as troubleshooting connectivity issues.
These roles were created in an attempt to define job responsibilities at an executive
level. A RACI document is then suggested as the next step to define what each
role entails at the management level.