From Microsoft Marketing, ADFS 2.0 is:
Active Directory Federation Services 2.0 helps IT enable users to collaborate across
organizational boundaries and easily access applications on-premises and in the cloud,
while maintaining application security. Through a claims-based
infrastructure, IT can enable a single sign-on experience for end-users to
applications without requiring a separate account or password, whether applications
are located in partner organizations or hosted in the cloud.
So, it’s a Token Service plus some. In a previous post I had said:
In other words it is a method for centralizing
user Identity information, very much like how the Windows Live and OpenID systems
work. The system is reasonably simple. I have a Membership data store
that contains user information. I want (n) number of websites to use that membership
store, EXCEPT I don’t want each application to have direct access to membership data
such as passwords. The way around it is through claims.
The membership store in this case being Active Directory.
I thought it would be a good idea to run through how to install ADFS and set up an
application to use it. Since we already discussed how to federate an application
using FedUtil.exe, I will let you go through the steps
in the previous post. I will provide information on where to find the Metadata
later on in this post.
But First: The Prerequisites
- Join the Server to the Domain. (I’ve started the installation of ADFS three times on non-domain joined systems. Doh!)
- Install the latest .NET Framework. I’m kinda partial to using SmallestDotNet.com created by Scott Hanselman. It’s easy.
- Install IIS. If you are running Server 2008 R2 you can follow these steps in another post, or just go through the wizards. FYI: The post installs EVERY feature. Just remember that when you move to production. Surface Area and what not…
- Install PowerShell.
- Install the Windows Identity Foundation: http://www.microsoft.com/downloads/details.aspx?FamilyID=eb9c345f-e830-40b8-a5fe-ae7a864c4d76&displaylang=en
- Install SQL Server. This is NOT required. You only need to install it if you want to use a SQL Database to get custom Claims data. You could also use a SQL Server on another server…
- Download ADFS 2.0 RTW: http://www.microsoft.com/downloads/details.aspx?familyid=118c3588-9070-426a-b655-6cec0a92c10b&displaylang=en
The Installation
Read the terms and accept them. If you notice, you only have to read half of
what you see because the rest is in French. Maybe the lawyers are listening…these
things are getting more readable.
Select Federation Server. A Server Proxy allows you to use ADFS on
a web server not joined to the domain.
We already installed all of these things. When you click next it will check
for latest hotfixes and ask if you want to open the configuration MMC snap-in.
Start it.
We want to start the configuration Wizard and then create a new Federation Service:
Next we want to create a Stand-alone federation server:
We need to select a certificate for ADFS to use. By default it uses the SSL
certificate of the default site in IIS. So let's add one. In the IIS Manager
select the server and then select Server Certificates:
We have a couple options when it comes to adding a certificate. For the sake
of this post I’ll just create a self-signed certificate, but if you have a domain
Certificate Authority you could go that route, or if this is a public facing service
create a request and get a certificate from a 3rd party CA.
Once we’ve created the certificate we assign it to the web site. Go to the website
and select Bindings…
Add a site binding for https:
Now that we’ve done that we can go back to the Configuration Wizard:
Click next and it will install the service. It will stop IIS so be aware of
that.
You may receive this error if you are installing on Server 2008:
The fix for this is here: http://www.syfuhs.net/2010/07/23/ADFS20WindowsServiceNotStartingOnServer2008.aspx
You will need to re-run the configuration wizard if you do this. It may complain
about the virtual applications already existing. You have two options: 1) delete
the applications in IIS as well as the folder C:\inetpub\adfs, or 2) ignore the warning.
Back to the installation, it will create two new Virtual Applications in IIS:
Once the wizard finishes you can go back to the MMC snap-in and fiddle around.
The first thing we need to do is create an entry for a Relying Party. This will
allow us to create a web application to work with it.
When creating an RP we have a couple options to provide configuration data.
Since we are going to create a web application from scratch we will enter in manual
data. If you already have the application built and have Federation Metadata
available for it, by all means just use that.
We need a name:
Very original, eh?
Next we need to decide on what profile we will be using. Since we are building
an application from scratch we can take advantage of the 2.0 profile, but if we needed
backwards compatibility for a legacy application we should select the 1.0/1.1 profile.
Next we specify the certificate to encrypt our claims sent to the application.
We only need the public key of the certificate. When we run FedUtil.exe we can
specify which certificate we want to use to decrypt the incoming tokens. This
will be the private key of the same certificate. For the sake of this, we’ll
skip it.
The next step gets a little confusing. It asks which protocols we want to use
if we are federating with a separate STS. In this case since we aren’t doing
anything that crazy we can ignore them and continue:
We next need to specify the RP’s identifying URI.
Allow anyone and everyone, or deny everyone and add specific users later? Allow
everyone…
When we finish we want to edit the claim rules:
This dialog will allow us to add mappings between claims and the data within Active
Directory:
So let's add a rule. We want to Send LDAP Attributes as Claims.
First we specify what data in Active Directory we want to provide:
Then we specify which claim type to use:
And ADFS is configured! Let's create our Relying Party. You can follow
these steps: Making
an ASP.NET Website Claims Aware with the Windows Identity Foundation. To
get the Federation Metadata for ADFS navigate to the URL that the default website
is mapped to + /FederationMetadata/2007-06/FederationMetadata.xml. In my case
it’s https://web1.nexus.internal.test/FederationMetadata/2007-06/FederationMetadata.xml.
Once you've finished with the utility, it's important to tell ADFS that our new RP has Metadata
available. Double click on the RP to get to the properties. Select Monitoring:
Add the URL for the Metadata and select Monitor relying party. This will periodically
call up the URL and download the metadata in the event that it changes.
At this point we can test. Hit F5 and we will redirect to the ADFS page.
It will ask for domain credentials and redirect back to our page. Since I tested
it with a domain admin account I got this back:
It works!
For more information on ADFS 2.0 check out http://www.microsoft.com/windowsserver2008/en/us/ad-fs-2-overview.aspx or
the WIF Blog at http://blogs.msdn.com/b/card/
Happy coding!
A couple
posts back I had discussed how you would make an ASP.NET webforms application
claims aware. It was reasonably detailed and hopefully it was clear. I say that
because to make an MVC application Claims aware, you follow the exact same procedure.
The only difference is the small little chunk of code to see what claims were returned.
Just drop this little snippet into a view and you can muck about:
<ul>
<%
    var claimsIdentity
        = (System.Threading.Thread.CurrentPrincipal
            as Microsoft.IdentityModel.Claims.IClaimsPrincipal)
          .Identities[0];

    foreach (var claim in claimsIdentity.Claims)
    { %>
    <li>
        <%: claim.ClaimType %> -- <%: claim.Value %>
    </li>
<% } %>
</ul>
Just a quick little collection of useful code snippets when dealing with certificates.
Some of these don’t really need to be in their own methods but it helps for clarification.
Namespaces for Everything
using System.Security.Cryptography.X509Certificates;
using System.Security;
Save Certificate to Store
// Nothing fancy here. Just a helper method to parse strings.
private StoreName parseStoreName(string name)
{
return (StoreName)Enum.Parse(typeof(StoreName), name);
}
// Same here
private StoreLocation parseStoreLocation(string location)
{
return (StoreLocation)Enum.Parse(typeof(StoreLocation), location);
}
private void saveCertToStore(X509Certificate2 x509Certificate2, StoreName storeName, StoreLocation storeLocation)
{
// Open with write access, add the certificate, then release the store handle.
X509Store store = new X509Store(storeName, storeLocation);
store.Open(OpenFlags.ReadWrite);
store.Add(x509Certificate2);
store.Close();
}
Create Certificate from byte[] array
private X509Certificate2 CreateCertificateFromByteArray(byte[] certFile)
{
return new X509Certificate2(certFile);
// will throw exception if certificate has private key
}
The comment says that it will throw an exception if the certificate has a private
key, because the private key has a password associated with it. If you don't pass the
password as a parameter, it will throw a System.Security.Cryptography.CryptographicException.
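If the byte array actually holds a PFX with a private key, one way around the exception is to hand the password to another constructor overload. A quick sketch (this helper is mine, not part of the snippets above):
private X509Certificate2 CreateCertificateWithPrivateKey(byte[] pfxFile, SecureString password)
{
    // The password unlocks the private key inside the PFX data. PersistKeySet
    // keeps the private key around if the cert is later added to a store.
    return new X509Certificate2(pfxFile, password, X509KeyStorageFlags.PersistKeySet);
}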
Get Certificate from Store by Thumbprint
private bool FindCertInStore(
string thumbprint,
StoreName storeName,
StoreLocation storeLocation,
out X509Certificate2 theCert)
{
theCert = null;
X509Store store = new X509Store(storeName, storeLocation);
try
{
store.Open(OpenFlags.ReadOnly); // read access is all a lookup needs
string thumbprintFixed = thumbprint.Replace(" ", "").ToUpperInvariant();
foreach (var cert in store.Certificates)
{
if (cert.Thumbprint.ToUpperInvariant().Equals(thumbprintFixed))
{
theCert = cert;
return true;
}
}
return false;
}
finally
{
store.Close();
}
}
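Putting the pieces together, a hypothetical call sequence from inside the same class might look like this (the file path is purely illustrative):
byte[] raw = System.IO.File.ReadAllBytes(@"C:\temp\someCert.cer"); // illustrative path

X509Certificate2 cert = CreateCertificateFromByteArray(raw);
saveCertToStore(cert, parseStoreName("My"), parseStoreLocation("CurrentUser"));

X509Certificate2 found;
if (FindCertInStore(cert.Thumbprint, StoreName.My, StoreLocation.CurrentUser, out found))
{
    Console.WriteLine(found.Subject);
}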
Have fun!
Straight from Microsoft this is what the Windows Identity Foundation is:
Windows Identity Foundation helps .NET developers build claims-aware applications
that externalize user authentication from the application, improving developer productivity,
enhancing application security, and enabling interoperability. Developers can enjoy
greater productivity, using a single simplified identity model based on claims. They
can create more secure applications with a single user access model, reducing custom
implementations and enabling end users to securely access applications via on-premises
software as well as cloud services. Finally, they can enjoy greater flexibility in
application development through built-in interoperability that allows users, applications,
systems and other resources to communicate via claims.
In other words it is a method for centralizing user Identity information, very much
like how the Windows Live and OpenID systems work. The system is reasonably
simple. I have a Membership data store that contains user information.
I want (n) number of websites to use that membership store, EXCEPT I don’t want each
application to have direct access to membership data such as passwords. The
way around it is through claims.
In order for this to work you need a central web application called a Secure Token
Service (STS). This application will do authentication and provide a set of
available claims. It will say “hey! I am able to give you the person’s email
address, their username and the roles they belong to.” Each of those pieces
of information is a claim. This message exists in the application’s Federation
Metadata.
So far you are probably saying “yeah, so what?”
What I haven’t mentioned is that every application (called a Relying Party) that uses
this central application has one thing in common: each application doesn’t have to
handle authentication – at all. Each application passes off the authentication
request to the central application and the central application does the hard work.
When you type in your username and password, you are typing it into the central application,
not one of the many other applications. Once the central application authenticates
your credentials it POST’s the claims back to the other application. A diagram
might help:
Image borrowed from the Identity Training kit (http://www.microsoft.com/downloads/details.aspx?familyid=C3E315FA-94E2-4028-99CB-904369F177C0&displaylang=en)
The key takeaway is that only one single application does authentication. Everything
else just redirects to it. So let's actually see what it takes to authenticate
against an STS (central application). In future posts I will go into detail
about how to create an STS as well as how to use Active Directory Federation Services,
which is an STS that authenticates directly against (you guessed it) Active Directory.
The first step is to install the Framework and SDK.
WIF RTW: http://www.microsoft.com/downloads/details.aspx?FamilyID=eb9c345f-e830-40b8-a5fe-ae7a864c4d76&displaylang=en
WIF SDK: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=c148b2df-c7af-46bb-9162-2c9422208504
The SDK will install sample projects and add two Visual Studio menu items under the
Tools menu. Both menu items do essentially the same thing, the difference being
that “Add STS Reference” pre-populates the wizard with the current web application’s
data.
Once the SDK is installed start up Visual Studio as Administrator. Create a
new web application. Next go to the Properties section and go into the Web section.
Change the Server Settings to use IIS. You need to use IIS. To
install IIS on Windows 7 check out this post.
So far we haven’t done anything crazy. We’ve just set a new application to use
IIS for development. Next we have some fun. Let’s add the STS Reference.
To add the STS Reference go to Tools > Add STS Reference… and fill out the initial
screen:
Click next and it will prompt you about using an HTTPS connection. For the sake
of this we don’t need HTTPS so just continue. The next screen asks us about
where we get the STS Federation Metadata from. In this case I already have an
STS so I just paste in the URI:
Once it downloads the metadata it will ask if we want the Token that the STS sends
back to be encrypted. My recommendation is that we do, but for the sake of this
we won’t.
As an aside: In order for the STS to encrypt the token it will use a public key to
which our application (the Relying Party) will have the private key. When we
select a certificate it will stick that public key in the Relying Party’s own Federation
Metadata file. Anyway… When we click next we are given a list of available Claims
the STS can give us:
There is nothing to edit here; it’s just informative. Next we get a summary
of what we just did:
We can optionally schedule a Windows task to download changes.
We’ve now just added a crap-load of information to the *.config file. Actually,
we really didn’t. We just told ASP.NET to use the Microsoft.IdentityModel.Web.WSFederationAuthenticationModule
to handle authentication requests and Microsoft.IdentityModel.Web.SessionAuthenticationModule
to handle session management. Everything else is just boiler-plate configuration.
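For reference, the module registration the wizard drops into web.config looks roughly like this. This is a sketch only; the exact type strings and Version attributes depend on your WIF install:
<system.webServer>
  <modules>
    <!-- Redirects unauthenticated requests to the STS and processes the sign-in response -->
    <add name="WSFederationAuthenticationModule"
         type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         preCondition="managedHandler" />
    <!-- Maintains the session cookie once the token has been validated -->
    <add name="SessionAuthenticationModule"
         type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         preCondition="managedHandler" />
  </modules>
</system.webServer>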
So let's test this thing:
- Hit F5 – Compile compile compile compile compile… loads up http://localhost/WebApplication1
- Page automatically redirects to https://login.myweg.com/login.aspx?ReturnUrl=%2fusers%2fissue.aspx%3fwa%3dwsignin1.0%26wtrealm%3dhttp%253a%252f%252flocalhost%252fWebApplication1%26wctx%3drm%253d0%2526id%253dpassive%2526ru%253d%25252fWebApplication1%25252f%26wct%3d2010-08-03T23%253a03%253a40Z&wa=wsignin1.0&wtrealm=http%3a%2f%2flocalhost%2fWebApplication1&wctx=rm%3d0%26id%3dpassive%26ru%3d%252fWebApplication1%252f&wct=2010-08-03T23%3a03%3a40Z (notice the variables we’ve passed?)
- Type in our username and password…
- Redirect to http://localhost/WebApplication1
- Yellow Screen of Death
Wait. What? If you are running IIS 7.5 and .NET 4.0, ASP.NET will probably
blow up. This is because the data that was POST’ed back to us from the STS had
funny characters in the values like angle brackets and stuff. ASP.NET does not
like this. Rightfully so, Cross Site Scripting attacks suck. To resolve
this you have two choices:
- Add <httpRuntime requestValidationMode="2.0" /> to your web.config
- Use a proper RequestValidator that can handle responses from Token Services (see the sketch below)
For the sake of testing add <httpRuntime requestValidationMode="2.0"
/> to the web.config and retry the test. You should be redirected to http://localhost/WebApplication1 and
no errors should occur.
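If you'd rather go the RequestValidator route, the WIF SDK samples include a validator along these lines. A rough sketch (requires .NET 4 and a reference to Microsoft.IdentityModel):
using System;
using System.Web;
using System.Web.Util;
using Microsoft.IdentityModel.Protocols.WSFederation;

public class WSFederationRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value,
        RequestValidationSource requestValidationSource, string collectionKey,
        out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        // Only relax validation for the form field carrying the WS-Federation
        // sign-in response (wresult); everything else gets the default treatment.
        if (requestValidationSource == RequestValidationSource.Form &&
            collectionKey.Equals(WSFederationConstants.Parameters.Result, StringComparison.Ordinal))
        {
            SignInResponseMessage message =
                WSFederationMessage.CreateFromFormPost(context.Request) as SignInResponseMessage;

            if (message != null)
            {
                return true;
            }
        }

        return base.IsValidRequestString(context, value, requestValidationSource,
            collectionKey, out validationFailureIndex);
    }
}
You would then point ASP.NET at it with <httpRuntime requestValidationType="WSFederationRequestValidator" /> instead of dropping the whole site back to 2.0 validation.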
Seems like a pointless exercise until you add a chunk of code to the default.aspx
page. Add a GridView and then add this code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Threading;
using System.IdentityModel;
using System.IdentityModel.Claims;
using Microsoft.IdentityModel.Claims;
namespace WebApplication1
{
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
IClaimsIdentity claimsIdentity = ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities[0];
GridView1.DataSource = claimsIdentity.Claims;
GridView1.DataBind();
}
}
}
Rerun the test and you should get back some values. I hope some light bulbs
just turned on for some people :)
Earlier today I was talking with Cory Fowler about
an issue he was having with an Azure blob upload. Actually, he offered to help
with one of my problems first before he asked me for my thoughts – he’s a real community
guy. Alas I wasn’t able to help him with his problem, but it got me thinking
about how to handle basic Blob uploads.
On the CommunityFTW project I had worked on a few months back I used Azure as the
back end for media storage. The basis was simple: upload media stuffs to a container
of my choice. The end result was this class:
using System;
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public sealed class BlobUploadManager
{
private static CloudBlobClient blobStorage;
private static bool s_createdContainer = false;
private static object s_blobLock = new Object();
private string theContainer = "";
public BlobUploadManager(string containerName)
{
if (string.IsNullOrEmpty(containerName))
throw new ArgumentNullException("containerName");
CreateOnceContainer(containerName);
}
public CloudBlobClient BlobClient { get; set; }
public string CreateUploadContainer()
{
// Make the container publicly readable, then hand back the container URI
// with a short-lived Shared Access Signature that permits writes.
BlobContainerPermissions perm = new BlobContainerPermissions();
var blobContainer = blobStorage.GetContainerReference(theContainer);
perm.PublicAccess = BlobContainerPublicAccessType.Container;
blobContainer.SetPermissions(perm);
var sas = blobContainer.GetSharedAccessSignature(new SharedAccessPolicy()
{
Permissions = SharedAccessPermissions.Write,
SharedAccessExpiryTime = DateTime.UtcNow + TimeSpan.FromMinutes(60)
});
return new UriBuilder(blobContainer.Uri) { Query = sas.TrimStart('?') }.Uri.AbsoluteUri;
}
private void CreateOnceContainer(string containerName)
{
this.theContainer = containerName;
if (s_createdContainer)
return;
lock (s_blobLock)
{
// Re-check inside the lock so two threads racing past the first check
// don't both run the setup.
if (s_createdContainer)
return;
var storageAccount = new CloudStorageAccount(
new StorageCredentialsAccountAndKey(
SettingsController.GetSettingValue("BlobAccountName"),
SettingsController.GetSettingValue("BlobKey")),
false);
blobStorage = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobStorage.GetContainerReference(containerName);
container.CreateIfNotExist();
container.SetPermissions(
new BlobContainerPermissions()
{
PublicAccess = BlobContainerPublicAccessType.Container
});
s_createdContainer = true;
}
}
public string UploadBlob(Stream blobStream, string blobName)
{
if (blobStream == null)
throw new ArgumentNullException("blobStream");
if (string.IsNullOrEmpty(blobName))
throw new ArgumentNullException("blobName");
blobStorage.GetContainerReference(this.theContainer)
.GetBlobReference(blobName.ToLowerInvariant())
.UploadFromStream(blobStream);
return blobName.ToLowerInvariant();
}
}
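A hypothetical caller looks something like this (the container name and file path are made up):
var uploads = new BlobUploadManager("media");

using (var stream = File.OpenRead(@"C:\temp\photo.jpg"))
{
    string blobName = uploads.UploadBlob(stream, "Photo.jpg");
    Console.WriteLine("Uploaded as: " + blobName); // prints "photo.jpg" since names are lower-cased
}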
With any luck this might help someone trying to jump into Azure.
A few posts back I started talking about what it would take to create a new application
for the new Windows Phone 7. I’m not a fan of learning from trivial applications
that don’t touch on the same technologies that I would be using in the real world,
so I thought I would build a real application that someone can use.
Since this application uses a well known dataset I kind of get lucky because I already
have my database schema, which is in a reasonably well designed way. My first
step is to get it to the Phone, so I will use WCF Data Services and an Entity Model.
I created the model and just imported the necessary tables. I called this model
RaceInfoModel.edmx. The entity container name is RaceInfoEntities. This is ridiculously
simple to do.
The following step is to expose the model to the outside world through an XML format
in a Data Service. I created a WCF Data Service and made a few config changes:
using System;
using System.Data.Services;
using System.Data.Services.Common;

namespace RaceInfoDataService
{
    // The generic parameter was eaten by the HTML encoding of the original post;
    // it is the entity container type from the model, RaceInfoEntities.
    public class RaceInfo : DataService<RaceInfoEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            if (config == null)
                throw new ArgumentNullException("config");

            config.UseVerboseErrors = true;
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            //config.SetEntitySetPageSize("*", 25);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}
This too is reasonably simple. Since it’s a web service, I can hit it from a
web browser and I get a list of available datasets:
This isn’t a complete list of available items, just a subset.
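You can also filter what comes back with standard OData query options right in the address bar. For example (the Races entity set name here is hypothetical; it depends on the tables you imported):
http://localhost:60141/RaceInfo.svc/Races
http://localhost:60141/RaceInfo.svc/Races?$top=5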
At this point I can package everything up and stick it on a web server. It could
technically be ready for production if you were satisfied with not having any Access
Controls on reading the data. In this case, let's say for argument's sake that
I was able to convince the powers that be that everyone should be able to access it.
There isn’t anything confidential in the data, and we provide the data in other services
anyway, so all is well. Actually, that’s kind of how I would prefer it anyway. Give
me Data or Give me Death!
Now we create the Phone project. You need to install the latest build of the
dev tools, and you can get that here http://developer.windowsphone.com/windows-phone-7/.
Install it. Then create the project. You should see:
The next step is to make the Phone application actually able to use the data.
Here it gets tricky. Or really, here it gets stupid. (It better be fixed
by RTM or else *shakes fist*)
For some reason, the Visual Studio 2010 Phone 7 project type doesn’t allow you to
automatically import services. You have to generate the service class manually.
It’s not that big a deal since my service won’t be changing all that much, but nevertheless
it’s still a pain to regenerate it manually every time a change comes down the pipeline.
To generate the necessary class run this at a command prompt:
cd C:\Windows\Microsoft.NET\Framework\v4.0.30319
DataSvcUtil.exe
/uri:http://localhost:60141/RaceInfo.svc/
/DataServiceCollection
/Version:2.0
/out:"PATH.TO.PROJECT\RaceInfoService.cs"
(Formatted to fit my site layout)
Include that file in the project and compile.
UPDATE: My bad, I had already installed the reference, so this won’t
compile for most people. The Windows Phone 7 runtime doesn’t have the System.Data
namespace available that we need. Therefore we need to install them… They
are still in development, so here is the CTP build http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=b251b247-70ca-4887-bab6-dccdec192f8d.
You should now have a compile-able project with service references that looks something
like:
We have just connected our phone application to our database! All told, it took
me 10 minutes to do this. Next up we start playing with the data.
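As a teaser for next time: data access from the generated client is asynchronous on the phone. A rough sketch (RaceInfoEntities comes out of the generated file; the Race type, Races entity set, and raceList control are my assumptions):
var context = new RaceInfoEntities(new Uri("http://localhost:60141/RaceInfo.svc/"));
var races = new DataServiceCollection<Race>(context);

races.LoadCompleted += (s, e) =>
{
    if (e.Error == null)
        raceList.ItemsSource = races; // raceList is a hypothetical ListBox on the page
};

// Kicks off the request; the UI thread stays responsive while it runs.
races.LoadAsync(new Uri("/Races", UriKind.Relative));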
For those curious, this is the only way I can figure to do a permanent redirect in
ASP.NET 3.5 and lower.
using System;
namespace newtelligence.DasBlog.Web
{
public partial class rss : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
Response.Status = "301 Moved Permanently";
Response.AddHeader("Location", "/SyndicationService.asmx/GetRss");
}
}
}
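As an aside, if you happen to be on ASP.NET 4.0 or later, this whole page collapses into a single call: Response.RedirectPermanent("/SyndicationService.asmx/GetRss");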
Over the past few months I have seen quite a few really cool technologies released
or announced, and I believe they have a very real potential in many markets.
A lot of companies that exist outside the realm of Software Development rarely have
the opportunity to use such technologies.
Take for instance the company I work for: Woodbine
Entertainment Group. We have a few different businesses, but as a whole
our market is Horse Racing. Our business is not software development.
We don’t always get the chance to play with or use some of the new technologies released
to the market. I thought this would be a perfect opportunity to see what it
will take to develop a new product using only new technologies.
Our core customer pretty much wants Race information. We have proof of this
by the mere fact that on our two websites, HorsePlayer
Interactive and our main site, we have dedicated applications for viewing Races.
So let's build a third race browser. Since we already have a way of viewing races
from your computer, let's build it on the new Windows Phone 7.
The Phone – The application
This seems fairly straightforward. We will essentially be building a Silverlight
application. Let’s take a look at what we need to do (in no particular order):
- Design the interface – Microsoft has loads of guidance on following the Metro design. In future posts I will talk about possible designs.
- Build the interface – XAML and C#. Gotta love it.
- Build the Business Logic that drives the views – I would prefer to stay away from this; suffice to say I'm not entirely sure how proprietary this information is.
- Build the Data Layer – Ah, the fun part. How do you get the data from our internal servers onto the phone? Easy, OData!
The Data
We have a massive database of all the Races on all the tracks that you can wager on
through our systems. The data updates every few seconds relative to changes
from the tracks for things like cancellations or runner odds. How do we push
this data to the outside world for the phone to consume? We create a WCF Data
Service:
- Create an Entities Model of the Database
- Create Data Service
- Add Entity reference to Data Service (see code below)
// The generic parameter was lost to the HTML encoding of the original post; the
// container name used here is a placeholder for whatever your model generates.
public class RaceBrowserData : DataService<RaceBrowserEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        if (config == null)
            throw new ArgumentNullException("config");

        config.UseVerboseErrors = true;
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        //config.SetEntitySetPageSize("*", 25);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
That’s actually all there is to it for the data.
The Authentication
The what? Chances are the business will want to limit application access to
only those who have accounts with us. Especially so if we did something like
add in the ability to place a wager on that race. There are lots of ways to
lock this down, but the simplest approach in this instance is to use a Secure Token
Service. I say this because we already have a user store and STS, and duplication
of effort is wasted effort. We create an STS Relying Party (the application that
connects to the STS):
- Go to STS and get Federation Metadata. It’s an XML document that tells relying parties what you can do with it. In this case, we want to authenticate and get available Roles. This is referred to as a Claim. The role returned is a claim as defined by the STS. Somewhat inaccurately, we would do this:
  - App: Hello! I want these Claims for this user: “User Roles”. I am now going to redirect to you.
  - STS: I see you want these claims, very well. Give me your username and password.
  - STS: Okay, the user passed. Here are the claims requested. I am going to POST them back to you.
  - App: Okay, back to our own processes.
- Once we have the Metadata, we add the STS as a reference to the Application, and call a web service to pass the credentials.
- If the credentials are accepted, we get returned the claims we want, which in this case would be available roles.
- If the user has the role to view races, we go into the Race view. (All users would have this role, but adding Roles is a good thing if we needed to distinguish between wagering and non-wagering accounts.)
One thing I didn’t mention is how we lock down the Data Service. That’s a bit
more tricky, and more suited for another post on the actual Data Layer itself.
So far we have laid the ground work for the development of a Race Browser application
for the Windows Phone 7 using the Entity Framework and WCF Data Services, as well
as discussed the use of the Windows Identity Foundation for authentication against
an STS.
With any luck (and permission), more to follow.
I’ve been looking at ways of parsing types and values from text without having to
do switch/case statements or explicit casting. So far, based on my understanding
of statically typed languages, this seems to be impossible in a statically typed
language.
<Question> Is this really true?</Question>
Given my current knowledge, my way of bypassing this is to use the new dynamic type
in .NET 4. It allows me to implicitly assign an object without having to cast
it. It works by bypassing the type checking at compile time.
Here’s a fairly straightforward example:
static void Main(string[] args)
{
Type boolType = Type.GetType("System.Boolean");
Console.WriteLine(!parse("true", boolType));
Type dateTimeType = Type.GetType("System.DateTime");
DateTime date = parse("7/7/2010", dateTimeType);
Console.WriteLine(date.AddDays(1));
Console.ReadLine();
}
static dynamic parse(string value, Type t)
{
return Convert.ChangeType(value, t);
}
Now, if I were to do something crazy and call
DateTime someDate = parse("1234", Type.GetType("System.Int32"));
a RuntimeBinderException would be thrown because you cannot implicitly convert between
an int and a DateTime.
It certainly makes things a little easier.
Earlier today, Cory Fowler suggested I write up a post discussing the differences
between the AntiXss library and the methods found in HttpUtility and how it helps
defend from cross site scripting (xss). As I was thinking about what to write,
it occurred to me that I really had no idea how it did what it did, and why it differed
from HttpUtility. <side-track>I’m kinda wondering how many other people
out there run in to the same thing? We are told to use some technology because
it does xyz better than abc, but when it comes right down to it, we aren’t quite sure
of the internals. Just a thought for later I suppose. </side-track>
A Quick Refresher
To quickly summarize what xss is: if you have a textbox on your website that someone
can enter text into, and then on another page you display that same text, the user could
maliciously add in <script> tags to do anything they wanted with JavaScript.
This usually results in redirecting to another website that shows advertisements or
tries to install malware.
The way to stop this is to not trust any input, and encode any character that could
be part of a tag to an HtmlEncode’d entity.
HttpUtility does this though, right?
The HttpUtility class definitely does do this. However, it is relatively limited
in how it encodes possibly malicious text. It works by encoding specific characters
like the angle brackets < and > to &lt; and &gt;. This can get tricky
because you could theoretically bypass these characters (somehow – speculative).
Enter AntiXss
The AntiXss library works in essentially the opposite manner. It has a white-list
of allowed characters and encodes everything else. These characters are the
usual a-z, 0-9, etc. characters.
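To make the difference concrete, here is a small comparison sketch. It assumes the AntiXss assembly (Microsoft.Security.Application) is referenced; in the versions current at the time of writing the encoder lives on the AntiXss class:
using System;
using System.Web;
using Microsoft.Security.Application;

class EncoderComparison
{
    static void Main()
    {
        string input = "<script>alert('xss')</script> héllo";

        // HttpUtility: encodes a short, fixed list of known-dangerous characters.
        Console.WriteLine(HttpUtility.HtmlEncode(input));

        // AntiXss: encodes everything outside its white-list of known-safe characters.
        Console.WriteLine(AntiXss.HtmlEncode(input));
    }
}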
Further Reading
I’m not really doing you, dear reader, any help by reiterating what dozens of people
have said before me (and probably did it better), so here are a couple links that
contain loads of information on actually using the AntiXss library and protecting
your website from cross site scripting: