Techdays 2010 Presentation: Build Websites Fast with Visual Studio 2010

Joey Devilla has graciously offered me the opportunity to present at Techdays in Toronto this year!  The Toronto event is October 27th-28th.

Here is the session abstract:

DEV355: Build Web Sites Fast with Microsoft Visual Studio 2010 (click for more information)

Day 1 - 1:00pm - 2:05pm

Learn about the new Web developer innovations in Visual Studio 2010. Visual Studio 2010 makes development of standards-based Web sites better than ever with new support for CSS 2, HTML code snippets, powerful dynamic IntelliSense for JavaScript, and more! Visual Studio 2010 also makes it easy to deploy applications from development to test and production environments with new support for Web Configuration Transforms and integration with the IIS Web Deployment Tool.

For more details:

*Early Bird discount ($349.99 + taxes) expires on September 16, 2010.
Toronto October 27-28, 2010
Metro Toronto Convention Centre
255 Front Street West
Toronto ON M5V 2W6

Register Now!

Build your own Directory Federation Service

This is more of a random collection of thoughts because earlier today I came to the conclusion that I need something very similar to Active Directory Federation Services, except for non-domain users.  This is relatively easy to do; all I need is to create a Secure Token Service with a user store for the back end. 

The simplest approach is to use ASP.NET Membership and Roles with the SqlProviders wrapped up in some WIF special sauce.  Turns out Dominick Baier already did just that with StarterSTS.

The problem I have with this is that it’s a pain to manage once you get more than a hundred or so users.  Extending user properties is hard to do, too.  So my solution is to use something that is designed for user identities… an LDAP directory.  If it’s good enough for Active Directory, it’ll be plenty useful for this situation.

Reasoning
As an aside, the reason I’m not using Active Directory in the first place is that I need to manage a few thousand well-known users without CALs.  That would amount to upwards of a couple hundred thousand dollars in licensing costs that just isn’t in the budget.  Further, most of these users probably wouldn’t use any of our systems that use Active Directory for authentication, but nevertheless, we need accounts for them.

Also, it would be a lot easier to manage creation and modification of user accounts because there are loads of processes that have been designed to pull user data out of HR applications into LDAP directories instead of custom SQL queries.

So let’s think about what makes up Active Directory Federation Services.  It has roles that provide:

  • Token Services
  • A Windows Authentication end-point
  • An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
  • An application management tool (MMC snap-in and PowerShell cmdlets)
  • Proxy Services (allows requests to pass through NAT’ed zones)

That’s a pretty lightweight product when you compare it to the other services in Microsoft’s Identity stack. 

We can simplify it even further by breaking down the roles we need.

Token Services

This is actually pretty easy to accomplish.  Refer back to the WIF magic sauce.
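
For the curious, a bare-bones version of that special sauce looks something like the sketch below: a SecurityTokenService subclass where GetScope and GetOutputClaimsIdentity do all the work.  The class name, the hard-coded role value, and the idea that you’d look the user up in your own store are all placeholders, not a finished STS.

using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Protocols.WSTrust;
using Microsoft.IdentityModel.SecurityTokenService;

// Sketch only: a minimal STS that issues a name claim and a placeholder role claim.
public class SimpleSecurityTokenService : SecurityTokenService
{
    public SimpleSecurityTokenService(SecurityTokenServiceConfiguration configuration)
        : base(configuration)
    {
    }

    protected override Scope GetScope(IClaimsPrincipal principal, RequestSecurityToken request)
    {
        // The relying party identifies itself via AppliesTo.
        var scope = new Scope(request.AppliesTo.Uri.OriginalString,
                              SecurityTokenServiceConfiguration.SigningCredentials);

        scope.TokenEncryptionRequired = false; // you would encrypt in production
        scope.ReplyToAddress = scope.AppliesToAddress;

        return scope;
    }

    protected override IClaimsIdentity GetOutputClaimsIdentity(
        IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
    {
        // This is where the user store (Membership, LDAP, whatever) gets consulted.
        var identity = new ClaimsIdentity();
        identity.Claims.Add(new Claim(ClaimTypes.Name, principal.Identity.Name));
        identity.Claims.Add(new Claim(ClaimTypes.Role, "Employee")); // placeholder role

        return identity;
    }
}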

Authentication end-point

This is just (well, you know what I mean) a web page login control.  We can’t do Windows Authentication without Kerberos (or NTLM), and we can’t do Kerberos without Active Directory (technically it could be done, but you’d be crazy to try).
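
As a rough illustration (not ADFS code, just the idea), validating credentials against a generic LDAP directory can be as simple as attempting a bind.  The server name and the DN format below are assumptions you would replace with your own directory’s details.

using System.DirectoryServices.Protocols;
using System.Net;

public static class LdapAuthenticator
{
    // Returns true if the directory accepts a simple bind with the supplied credentials.
    public static bool ValidateCredentials(string userDn, string password)
    {
        using (var connection = new LdapConnection("ldap.example.com"))
        {
            connection.AuthType = AuthType.Basic;

            try
            {
                connection.Bind(new NetworkCredential(userDn, password));
                return true;
            }
            catch (LdapException)
            {
                // A failed bind means the credentials were rejected.
                return false;
            }
        }
    }
}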

Attribute store-property-to-claim mapper

ADFS can connect to a bunch of different attribute stores, including custom built stores if you provide assemblies.  We only really need to map to a few LDAP properties, and make it easy to map to other properties in the future.
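
Here’s a hedged sketch of what that mapping could look like with System.DirectoryServices: look the user up, pull a couple of attributes, and turn them into claims.  The LDAP path, search filter, and attribute names (mail, displayName) are assumptions; a real mapper would drive all of this from configuration.

using System.Collections.Generic;
using System.DirectoryServices;
using Microsoft.IdentityModel.Claims;

public static class LdapClaimMapper
{
    // Maps a couple of LDAP attributes to claims for the given user.
    public static IEnumerable<Claim> GetClaimsForUser(string username)
    {
        var claims = new List<Claim>();

        using (var root = new DirectoryEntry("LDAP://ldap.example.com/dc=example,dc=com"))
        using (var searcher = new DirectorySearcher(root, "(uid=" + username + ")"))
        {
            // Note: in real code, never concatenate untrusted input into the filter.
            searcher.PropertiesToLoad.Add("mail");
            searcher.PropertiesToLoad.Add("displayName");

            SearchResult result = searcher.FindOne();
            if (result == null)
                return claims;

            if (result.Properties["mail"].Count > 0)
                claims.Add(new Claim(ClaimTypes.Email, (string)result.Properties["mail"][0]));

            if (result.Properties["displayName"].Count > 0)
                claims.Add(new Claim(ClaimTypes.Name, (string)result.Properties["displayName"][0]));
        }

        return claims;
    }
}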

Application management tool

This would be to manage the mapper and a few STS settings like URI names and certificates.  This, I think, would be a relatively simple application if we designed the configuration database properly.

Proxy Services

Proxies are a pain in the butt.  Useful in general, but we don’t really need to think about this at the moment.

Some Warnings

There are some things that are worth mentioning.  We have to be really careful about what we create because we are developing a serious piece of the security infrastructure.  Yes, it is for a group of employees that won’t have much access to anything dangerous (if they need access, they’d be migrated to Active Directory), but nevertheless we are creating the main ingress point for the majority of our employees.  It also needs to be accessible from the internet.

It may sound like I think it’ll be a cinch to develop this system and have it work securely, but in reality a lot will need to go into it to protect the network, the employees, and the data it could possibly interact with.  It is tough to develop applications securely.  It is far harder to develop secure applications whose sole responsibility is security.

Next Steps

The next step is to design the thing.  I know how it will exist in relation to the systems it will provide identity to, but aside from that, the architecture is still unknown.  With any luck I can put together rough designs tomorrow on the train, on my way to visit family for the holiday.

Better yet, maybe while visiting with family. ;)

Making an ASP.NET Website Claims Aware with the Windows Identity Foundation

Straight from Microsoft, this is what the Windows Identity Foundation is:

Windows Identity Foundation helps .NET developers build claims-aware applications that externalize user authentication from the application, improving developer productivity, enhancing application security, and enabling interoperability. Developers can enjoy greater productivity, using a single simplified identity model based on claims. They can create more secure applications with a single user access model, reducing custom implementations and enabling end users to securely access applications via on-premises software as well as cloud services. Finally, they can enjoy greater flexibility in application development through built-in interoperability that allows users, applications, systems and other resources to communicate via claims.

In other words, it is a method for centralizing user identity information, very much like how the Windows Live and OpenID systems work.  The system is reasonably simple.  I have a Membership data store that contains user information.  I want any number (n) of websites to use that membership store, EXCEPT I don’t want each application to have direct access to membership data such as passwords.  The way around that is through claims.

In order for this to work you need a central web application called a Secure Token Service (STS).  This application will do the authentication and provide a set of available claims.  It will say “hey! I am able to give you the person’s email address, their username, and the roles they belong to.”  Each of those pieces of information is a claim.  This message exists in the application’s Federation Metadata.

So far you are probably saying “yeah, so what?”

What I haven’t mentioned is that every application (called a Relying Party) that uses this central application has one thing in common: each application doesn’t have to handle authentication – at all.  Each application passes off the authentication request to the central application and the central application does the hard work.  When you type in your username and password, you are typing it into the central application, not one of the many other applications.  Once the central application authenticates your credentials it POSTs the claims back to the other application.  A diagram might help:

[diagram]

Image borrowed from the Identity Training kit (http://www.microsoft.com/downloads/details.aspx?familyid=C3E315FA-94E2-4028-99CB-904369F177C0&displaylang=en)

The key takeaway is that only a single application does authentication.  Everything else just redirects to it.  So let’s actually see what it takes to authenticate against an STS (central application).  In future posts I will go into detail about how to create an STS as well as how to use Active Directory Federation Services, which is an STS that authenticates directly against (you guessed it) Active Directory.

The first step is to install the Framework and SDK.

WIF RTW: http://www.microsoft.com/downloads/details.aspx?FamilyID=eb9c345f-e830-40b8-a5fe-ae7a864c4d76&displaylang=en

WIF SDK: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=c148b2df-c7af-46bb-9162-2c9422208504

The SDK will install sample projects and add two Visual Studio menu items under the Tools menu.  Both menu items do essentially the same thing, the difference being that “Add STS Reference” pre-populates the wizard with the current web application’s data.

Once the SDK is installed, start up Visual Studio as Administrator and create a new web application.  Next, open the project Properties and go to the Web tab.  Change the server settings to use IIS; you need to use IIS.  To install IIS on Windows 7, check out this post.

[screenshot]

So far we haven’t done anything crazy.  We’ve just set up a new application to use IIS for development.  Next we have some fun.  Let’s add the STS Reference.

To add the STS Reference go to Tools > Add STS Reference… and fill out the initial screen:

[screenshot]


Click Next and it will prompt you about using an HTTPS connection.  For the sake of this walkthrough we don’t need HTTPS, so just continue.  The next screen asks where we get the STS Federation Metadata from.  In this case I already have an STS, so I just paste in the URI:

[screenshot]

Once it downloads the metadata it will ask if we want the token that the STS sends back to be encrypted.  My recommendation is that we do, but for the sake of this walkthrough we won’t.

[screenshot]

As an aside: in order for the STS to encrypt the token it will use a public key for which our application (the Relying Party) holds the private key.  When we select a certificate, the wizard sticks that public key in the Relying Party’s own Federation Metadata file.  Anyway… when we click Next we are given a list of available Claims the STS can give us:

[screenshot]
There is nothing to edit here; it’s just informative.  Next we get a summary of what we just did:

[screenshot]

We can optionally schedule a Windows task to download changes.

We’ve now just added a crap-load of information to the *.config file.  Actually, we really didn’t.  We just told ASP.NET to use the Microsoft.IdentityModel.Web.WSFederationAuthenticationModule to handle authentication requests and Microsoft.IdentityModel.Web.SessionAuthenticationModule to handle session management.  Everything else is just boilerplate configuration.  So let’s test this thing:

  1. Hit F5 – Compile compile compile compile compile… loads up http://localhost/WebApplication1
  2. Page automatically redirects to https://login.myweg.com/login.aspx?ReturnUrl=%2fusers%2fissue.aspx%3fwa%3dwsignin1.0%26wtrealm%3dhttp%253a%252f%252flocalhost%252fWebApplication1%26wctx%3drm%253d0%2526id%253dpassive%2526ru%253d%25252fWebApplication1%25252f%26wct%3d2010-08-03T23%253a03%253a40Z&wa=wsignin1.0&wtrealm=http%3a%2f%2flocalhost%2fWebApplication1&wctx=rm%3d0%26id%3dpassive%26ru%3d%252fWebApplication1%252f&wct=2010-08-03T23%3a03%3a40Z (notice the variables we’ve passed?)
  3. Type in our username and password…
  4. Redirect to http://localhost/WebApplication1
  5. Yellow Screen of Death

Wait.  What?  If you are running IIS 7.5 and .NET 4.0, ASP.NET will probably blow up.  This is because the data that was POSTed back to us from the STS had funny characters in the values, like angle brackets.  ASP.NET does not like this, and rightfully so; cross-site scripting attacks suck.  To resolve this you have two choices:

  1. Add <httpRuntime requestValidationMode="2.0" /> to your web.config
  2. Use a proper RequestValidator that can handle responses from Token Services

For the sake of testing add <httpRuntime requestValidationMode="2.0" /> to the web.config and retry the test.  You should be redirected to http://localhost/WebApplication1 and no errors should occur.
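
If you would rather go with option 2, the WIF SDK ships a sample along these lines: a request validator that lets a genuine WS-Federation sign-in response through and defers to the default validation for everything else.  The class and namespace names below are placeholders; this is a sketch of the pattern, not the SDK sample verbatim.

using System;
using System.Web;
using System.Web.Util;
using Microsoft.IdentityModel.Protocols.WSFederation;

namespace WebApplication1
{
    // Allows the wresult form field POSTed back by the STS through request validation,
    // but only when it really is a WS-Federation sign-in response.
    public class WsFederationRequestValidator : RequestValidator
    {
        protected override bool IsValidRequestString(HttpContext context, string value,
            RequestValidationSource requestValidationSource, string collectionKey,
            out int validationFailureIndex)
        {
            validationFailureIndex = 0;

            if (requestValidationSource == RequestValidationSource.Form &&
                collectionKey.Equals(WSFederationConstants.Parameters.Result, StringComparison.Ordinal))
            {
                if (WSFederationMessage.CreateFromFormPost(context.Request) is SignInResponseMessage)
                    return true;
            }

            // Everything else goes through the normal ASP.NET 4.0 validation.
            return base.IsValidRequestString(context, value, requestValidationSource,
                collectionKey, out validationFailureIndex);
        }
    }
}

You would then point the httpRuntime element’s requestValidationType attribute at that type (e.g. "WebApplication1.WsFederationRequestValidator, WebApplication1") instead of dropping back to 2.0-mode validation.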

Seems like a pointless exercise until you add a chunk of code to the default.aspx page. Add a GridView and then add this code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Threading;
using System.IdentityModel;
using System.IdentityModel.Claims;
using Microsoft.IdentityModel.Claims;

namespace WebApplication1
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            IClaimsIdentity claimsIdentity = ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities[0];

            GridView1.DataSource = claimsIdentity.Claims;
            GridView1.DataBind();
        }
    }
}

Rerun the test and you should get back some values.  I hope some light bulbs just turned on for some people :)
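
And once the light bulb is on, pulling out an individual claim is just a LINQ query over that same collection.  This assumes the STS actually issues an email claim; if it doesn’t, you’ll get null back.

// Grab the email claim value, if the STS issued one (the claim type is fully qualified
// to avoid the ambiguity between System.IdentityModel.Claims and Microsoft.IdentityModel.Claims).
string email = claimsIdentity.Claims
    .Where(c => c.ClaimType == Microsoft.IdentityModel.Claims.ClaimTypes.Email)
    .Select(c => c.Value)
    .FirstOrDefault();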

Azure Blob Uploads

Earlier today I was talking with Cory Fowler about an issue he was having with an Azure blob upload.  Actually, he offered to help with one of my problems first before he asked me for my thoughts – he’s a real community guy.  Alas I wasn’t able to help him with his problem, but it got me thinking about how to handle basic Blob uploads. 

On the CommunityFTW project I worked on a few months back, I used Azure as the back end for media storage.  The premise was simple: upload media files to a container of my choice.  The end result was this class:

    using System;
    using System.IO;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public sealed class BlobUploadManager
    {
        private static CloudBlobClient blobStorage;

        private static bool s_createdContainer = false;
        private static object s_blobLock = new Object();
        private string theContainer = "";

        public BlobUploadManager(string containerName)
        {
            if (string.IsNullOrEmpty(containerName))
                throw new ArgumentNullException("containerName");

            CreateOnceContainer(containerName);
        }

        public CloudBlobClient BlobClient { get; set; }

        public string CreateUploadContainer()
        {
            BlobContainerPermissions perm = new BlobContainerPermissions();
            var blobContainer = blobStorage.GetContainerReference(theContainer);
            perm.PublicAccess = BlobContainerPublicAccessType.Container;
            blobContainer.SetPermissions(perm);

            var sas = blobContainer.GetSharedAccessSignature(new SharedAccessPolicy()
            {
                Permissions = SharedAccessPermissions.Write,
                SharedAccessExpiryTime = DateTime.UtcNow + TimeSpan.FromMinutes(60)
            });

            return new UriBuilder(blobContainer.Uri) { Query = sas.TrimStart('?') }.Uri.AbsoluteUri;
        }

        private void CreateOnceContainer(string containerName)
        {
            this.theContainer = containerName;

            if (s_createdContainer)
                return;

            lock (s_blobLock)
            {
                // Double-check inside the lock in case another thread created the container first.
                if (s_createdContainer)
                    return;

                var storageAccount = new CloudStorageAccount(
                                         new StorageCredentialsAccountAndKey(
                                             SettingsController.GetSettingValue("BlobAccountName"),
                                             SettingsController.GetSettingValue("BlobKey")),
                                         false);

                blobStorage = storageAccount.CreateCloudBlobClient();
                CloudBlobContainer container = blobStorage.GetContainerReference(containerName);
                container.CreateIfNotExist();

                container.SetPermissions(
                    new BlobContainerPermissions()
                    {
                        PublicAccess = BlobContainerPublicAccessType.Container
                    });

                s_createdContainer = true;
            }
        }

        public string UploadBlob(Stream blobStream, string blobName)
        {
            if (blobStream == null)
                throw new ArgumentNullException("blobStream");

            if (string.IsNullOrEmpty(blobName))
                throw new ArgumentNullException("blobName");

            blobStorage.GetContainerReference(this.theContainer)
		       .GetBlobReference(blobName.ToLowerInvariant())
		       .UploadFromStream(blobStream);

            return blobName.ToLowerInvariant();
        }
    }
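
Using the class is only a few lines.  The container name and file path below are made up for illustration, and the storage account name and key come from whatever SettingsController reads them from.

    // Hypothetical usage: push a local file into a "media" container.
    var uploader = new BlobUploadManager("media");

    using (var stream = System.IO.File.OpenRead(@"C:\temp\horse.jpg"))
    {
        // UploadBlob lower-cases the blob name before storing it.
        string blobName = uploader.UploadBlob(stream, "Horse.jpg");
    }

    // Or hand out a write-only shared access signature URL that expires in an hour.
    string sasUrl = uploader.CreateUploadContainer();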

With any luck this might help someone trying to jump into Azure.

Getting the Data to the Phone

A few posts back I started talking about what it would take to create a new application for the new Windows Phone 7.  I’m not a fan of learning from trivial applications that don’t touch on the same technologies that I would be using in the real world, so I thought I would build a real application that someone can use.

Since this application uses a well-known dataset I kind of get lucky, because I already have my database schema and it’s reasonably well designed.  My first step is to get it to the Phone, so I will use WCF Data Services and an Entity Model.  I created the model and just imported the necessary tables.  I called this model RaceInfoModel.edmx, and the entity container name is RaceInfoEntities.  This is ridiculously simple to do.

The next step is to expose the model to the outside world in an XML format through a Data Service.  I created a WCF Data Service and made a few config changes:

using System.Data.Services;
using System.Data.Services.Common;
using System;

namespace RaceInfoDataService
{
    public class RaceInfo : DataService<RaceInfoEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            if (config == null)
                throw new ArgumentNullException("config");

            config.UseVerboseErrors = true;
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            //config.SetEntitySetPageSize("*", 25);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}

This too is reasonably simple.  Since it’s a web service, I can hit it from a web browser and I get a list of available datasets:

[screenshot]

This isn’t a complete list of available items, just a subset.

At this point I can package everything up and stick it on a web server.  It could technically be ready for production if you were satisfied with not having any access controls on reading the data.  In this case, let’s say for argument’s sake that I was able to convince the powers that be that everyone should be able to access it.  There isn’t anything confidential in the data, and we provide the data in other services anyway, so all is well.  Actually, that’s kind of how I would prefer it anyway.  Give me Data or Give me Death!

Now we create the Phone project.  You need to install the latest build of the dev tools, and you can get that here http://developer.windowsphone.com/windows-phone-7/.  Install it.  Then create the project.  You should see:

[screenshot]

The next step is to make the Phone application actually able to use the data.  Here it gets tricky.  Or really, here it gets stupid.  (It had better be fixed by RTM or else *shakes fist*)

For some reason, the Visual Studio 2010 Phone 7 project type doesn’t allow you to automatically import services.  You have to generate the service class manually.  It’s not that big a deal since my service won’t be changing all that much, but nevertheless it’s still a pain to regenerate it manually every time a change comes down the pipeline.  To generate the necessary class run this at a command prompt:

cd C:\Windows\Microsoft.NET\Framework\v4.0.30319
DataSvcUtil.exe
     /uri:http://localhost:60141/RaceInfo.svc/
     /DataServiceCollection
     /Version:2.0
     /out:"PATH.TO.PROJECT\RaceInfoService.cs"

(Formatted to fit my site layout)

Include that file in the project and compile.
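
Once the generated RaceInfoService.cs is in the project, consuming the feed looks roughly like the sketch below.  The Races entity set and Race entity type are assumptions about my model, RaceInfoEntities is the generated context, and the URI matches the local dev server used in the DataSvcUtil command above.

using System;
using System.Data.Services.Client;

public partial class MainPage
{
    // Generated context pointed at the local dev instance of the data service.
    private readonly RaceInfoEntities context =
        new RaceInfoEntities(new Uri("http://localhost:60141/RaceInfo.svc/"));

    private DataServiceCollection<Race> races;

    private void LoadRaces()
    {
        races = new DataServiceCollection<Race>(context);

        races.LoadCompleted += (s, e) =>
        {
            if (e.Error == null)
            {
                // Bind the collection to the UI here, e.g. RaceList.ItemsSource = races;
            }
        };

        // Kicks off the asynchronous load of the Races entity set.
        races.LoadAsync(context.Races);
    }
}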

UPDATE: My bad, I had already installed the reference, so this won’t compile for most people.  The Windows Phone 7 runtime doesn’t have the System.Data namespaces available that we need, so we need to install them ourselves…  They are still in development, so here is the CTP build http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=b251b247-70ca-4887-bab6-dccdec192f8d.

You should now have a compilable project with service references that looks something like:

[screenshot]

We have just connected our phone application to our database!  All told, it took me 10 minutes to do this.  Next up we start playing with the data.

Data as a Service and the Applications that consume it

Over the past few months I have seen quite a few really cool technologies released or announced, and I believe they have very real potential in many markets.  A lot of companies that exist outside the realm of software development rarely have the opportunity to use such technologies.

Take for instance the company I work for: Woodbine Entertainment Group.  We have a few different businesses, but as a whole our market is Horse Racing.  Our business is not software development.  We don’t always get the chance to play with or use some of the new technologies released to the market.  I thought this would be a perfect opportunity to see what it will take to develop a new product using only new technologies.

Our core customer pretty much wants Race information.  We have proof of this by the mere fact that on our two websites, HorsePlayer Interactive and our main site, we have dedicated applications for viewing Races.  So let’s build a third race browser.  Since we already have ways of viewing races from your computer, let’s build this one on the new Windows Phone 7.

The Phone – The application

This seems fairly straightforward.  We will essentially be building a Silverlight application.  Let’s take a look at what we need to do (in no particular order):

  1. Design the interface – Microsoft has loads of guidance on following the Metro design language.  In future posts I will talk about possible designs.
  2. Build the interface – XAML and C#.  Gotta love it.
  3. Build the Business Logic that drives the views – I would prefer to stay away from the details here; suffice it to say I’m not entirely sure how proprietary this information is.
  4. Build the Data Layer – Ah, the fun part.  How do you get the data from our internal servers onto the phone?  Easy, OData!

The Data

We have a massive database of all the Races on all the tracks that you can wager on through our systems.  The data updates every few seconds as changes come in from the tracks for things like cancellations or runner odds.  How do we push this data to the outside world for the phone to consume?  We create a WCF Data Service:

  1. Create an Entities Model of the Database
  2. Create Data Service
  3. Add Entity reference to Data Service (See code below)
 
    // RaceInfoEntities is the entity container generated from the model in step 1.
    public class RaceBrowserData : DataService<RaceInfoEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            if (config == null)
                throw new ArgumentNullException("config");

            config.UseVerboseErrors = true;
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            //config.SetEntitySetPageSize("*", 25);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }

That’s actually all there is to it for the data.

The Authentication

The what?  Chances are the business will want to limit application access to only those who have accounts with us.  Especially so if we did something like add in the ability to place a wager on that race.  There are lots of ways to lock this down, but the simplest approach in this instance is to use a Secure Token Service.  I say this because we already have a user store and STS, and duplication of effort is wasted effort.  We create an STS Relying Party (the application that connects to the STS):

  1. Go to the STS and get its Federation Metadata.  It’s an XML document that tells relying parties what the STS can do for them.  In this case, we want to authenticate and get the available Roles.  Each role returned is a Claim, as defined by the STS.  Somewhat simplified, the exchange goes like this:
    1. App: Hello! I want these Claims for this user: “User Roles”.  I am now going to redirect to you.
    2. STS: I see you want these claims, very well.  Give me your username and password.
    3. STS: Okay, the user passed.  Here are the claims requested.  I am going to POST them back to you.
    4. App: Okay, back to our own processes.
  2. Once we have the Metadata, we add the STS as a reference to the Application, and call a web service to pass the credentials.
  3. If the credentials are accepted, we get returned the claims we want, which in this case would be available roles.
  4. If the user has the role to view races, we go into the Race view; a quick sketch of that check follows this list.  (All users would have this role, but having roles is useful if we ever need to distinguish between wagering and non-wagering accounts.)
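
A minimal sketch of that role check on the relying party side, assuming WIF has already populated the principal and that the role is called something like "RaceViewer" (a made-up name):

using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class RaceAccess
{
    // True if the already-authenticated user carries the race-viewing role claim.
    public static bool CanViewRaces()
    {
        var identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;

        return identity.Claims.Any(c =>
            c.ClaimType == ClaimTypes.Role &&
            c.Value == "RaceViewer");
    }
}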

One thing I didn’t mention is how we lock down the Data Service.  That’s a bit trickier, and better suited to another post on the actual Data Layer itself.

So far we have laid the groundwork for the development of a Race Browser application for Windows Phone 7 using the Entity Framework and WCF Data Services, and we have discussed the use of the Windows Identity Foundation for authentication against an STS.

With any luck (and permission), more to follow.

ADFS 2.0 Windows Service Not Starting on Server 2008

I’ve been working on getting a testable ADFS environment set up for evaluation and development.  Basically, because of laziness (and timeliness), I’m using Windows Virtual PC to host Server 2008 guests for testing.  I didn’t have the time to set up a fully working x64 environment, so I couldn’t go to R2.

One of the issues I’ve been running into is that the Windows service won’t start properly.  Or rather, at all.  It’s running into a timing issue when running as Network Service, as it’s timing out while waiting for a network connection.  More Googling with Bing returned the fix for me from here.

In the file [C:\Program Files\Active Directory Federation Services 2.0\Microsoft.IdentityServer.Servicehost.exe.config] add this entry:

<runtime>
    <generatePublisherEvidence enabled="false"/> 
</runtime>

Other places have noted that this isn’t a problem on R2.  I haven’t tested this yet, so I don’t know if it’s true.

C# Dynamic Type Conversions

I’ve been looking at ways of parsing types and values from text without having to do switch/case statements or explicit casting.  So far, based on my understanding of statically typed languages, my conclusion is that this is impossible in a statically typed language.

<Question> Is this really true?</Question>

Given my current knowledge, my way of bypassing this is to use the new dynamic type in .NET 4.  It allows me to implicitly assign an object without having to cast it.  It works by bypassing the type checking at compile time.

Here’s a fairly straightforward example:

static void Main(string[] args)
{
	Type boolType = Type.GetType("System.Boolean");
	Console.WriteLine(!parse("true", boolType));

	Type dateTimeType = Type.GetType("System.DateTime");

	DateTime date = parse("7/7/2010", dateTimeType);
	Console.WriteLine(date.AddDays(1));

	Console.ReadLine();
}

static dynamic parse(string value, Type t)
{
	return Convert.ChangeType(value, t);
}

Now, if I were to do something crazy and call

DateTime someDate = parse("1234", Type.GetType("System.Int32"));

a RuntimeBinderException would be thrown because you cannot implicitly convert between an int and a DateTime.

It certainly makes things a little easier.

SQL Server 2008 R2 Launch Event – Application Lifecycle Management

Unfortunately I will be unable to attend the ALM presentation later this afternoon, but luckily I was able to catch it in Montreal last week.

When I think of ALM, I think of the development lifecycle of an application – whether it be agile or waterfall or whatever floats your boat – that encompasses all parts of the process.  We’ve had tools over the years that help us manage each section or iteration of the process, but there were some obvious pieces missing.  What about the SQL?  Databases are essential to pretty much all applications that get developed nowadays, yet for a long time we didn’t have much in the way of tooling to help streamline and manage the process of developing the database pieces.

Enter ALM for SQL Server.  DBAs are now given all the tools and resources developers have had for a while.  It’s now easier to manage packaging and deployment of databases, there’s better source control of SQL scripts, and something really cool: database schema versioning.

I have a story: sometime over the last couple of years, a developer wrote a small application that monitored changes to database schemas through triggers and then synced the changes to SVN.  This was pretty cool.  It allowed us to see what had changed when things went south.  The problem was that it wasn’t entirely reliable, it relied on some internal pieces being added to the database manually, and it made finding changes through SVN tricky.

With ALM, versioning of databases happens before deployment.  Changes are stored in TFS, and it’s possible to roll back certain changes fairly easily.  Certain changes. :)

That’s pretty cool.

AntiXss vs HttpUtility – So What?

Earlier today, Cory Fowler suggested I write up a post discussing the differences between the AntiXss library and the methods found in HttpUtility, and how it helps defend against cross-site scripting (XSS).  As I was thinking about what to write, it occurred to me that I really had no idea how it did what it did, or why it differed from HttpUtility.  <side-track>I’m kinda wondering how many other people out there run into the same thing?  We are told to use some technology because it does xyz better than abc, but when it comes right down to it, we aren’t quite sure of the internals.  Just a thought for later I suppose.</side-track>

A Quick Refresher

To quickly summarize what XSS is: if you have a textbox on your website that someone can enter text into, and you then display that same text on another page, the user could maliciously add in <script> tags to do anything they wanted with JavaScript.  This usually results in redirecting to another website that shows advertisements or tries to install malware.

The way to stop this is to not trust any input, and to encode any character that could be part of a tag into an HTML-encoded entity.

HttpUtility does this though, right?

The HttpUtility class definitely does do this.  However, it is relatively limited in how it encodes possibly malicious text.  It works from a black-list, encoding specific characters like the angle brackets < and > to &lt; and &gt;.  This can get tricky because you could theoretically sneak something past that short list of characters (somehow – I’m speculating).

Enter AntiXss

The AntiXss library works in essentially the opposite manner.  It has a white-list of allowed characters and encodes everything else.  The allowed characters are the usual a-z, 0-9, and so on.
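
A quick way to see the difference is to run the same string through both.  This assumes the 3.x AntiXSS entry point; in later versions the same call lives on Microsoft.Security.Application.Encoder rather than the AntiXss class.

using System;
using System.Web;
using Microsoft.Security.Application;

class EncodingComparison
{
    static void Main()
    {
        string input = "<script>alert('xss')</script>";

        // HttpUtility: encodes a short, fixed set of characters (<, >, &, ").
        Console.WriteLine(HttpUtility.HtmlEncode(input));

        // AntiXss: encodes everything that is not on its white-list (a-z, 0-9, ...).
        Console.WriteLine(AntiXss.HtmlEncode(input));
    }
}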

Further Reading

I’m not really doing you, dear reader, any favours by reiterating what dozens of people have said before me (and probably said it better), so here are a couple of links that contain loads of information on actually using the AntiXss library and protecting your website from cross-site scripting: