Migrating Large Databases from On-Premise to SQL Azure

Recently, I was working on a project that required a site migration from a shared hosting server to Windows Azure. The application had been up and running for some time and had accumulated a substantially sized database.

During the course of the project I ran across a few roadblocks I wasn’t expecting, even with the experience gained from my previous blog entries: Migrate a database using the SQL Azure Data Sync Tool and Scripting a database for SQL Azure (the issues explained in the latter were resolved with the launch of SQL Server 2008 R2). Hopefully the following tricks will help you along with your data migration.

Using Import/Export in SSMS to Migrate to SQL Azure

In addition to the SQL Azure Data Sync Tool, it is possible to use the existing Import/Export Wizard in SQL Server Management Studio to migrate data to SQL Azure. There are a number of things to keep in mind while using the Import/Export Tool:

SQL Server Native Client to .NET Data Provider for SqlServer

SQL Azure doesn’t fall under the typical SQL Server Native Client 10.0 product SKU, which means you’ll have to use the .NET Data Provider for SqlServer to migrate your data. The configuration screen for the provider is very intuitive, but there are two key settings that should be changed from their default values: Asynchronous Processing (set to true) and Connection Timeout (increase to 1500).
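For reference, here is roughly what those wizard settings translate to as a .NET connection string (a sketch only; the server, database, and credentials below are placeholders):

using System.Data.SqlClient;

class AzureConnectionExample
{
    static void Main()
    {
        // The wizard settings described above, expressed in code.
        // Server, database, and credential values are placeholders.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "tcp:yourserver.database.windows.net",
            InitialCatalog = "YourDatabase",
            UserID = "user@yourserver",
            Password = "<password>",
            AsynchronousProcessing = true, // Asynchronous Processing = true
            ConnectTimeout = 1500,         // Connection Timeout = 1500 (seconds)
            Encrypt = true
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();
        }
    }
}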


Without changing the timeout value, the data migration would error out after creating the first few sets of rows. Making this an asynchronous process was beneficial when exporting multiple tables at a time.

Work-around for SSIS Type: (Type unknown …) Error

There is a chance when you go to run the migration that you will encounter an error as described in Wayne Berry’s [@WayneBerry] blog post entitled “SSIS Error to SQL Azure with varbinary(max)” on the SQL Azure Blog.

As Wayne explains in his post, there are a number of XML files which contain data mapping information used by the Import/Export Wizard in order to map the data from the source database to the proper data type in the destination database.

Database Seeded Identity Insert Issue

I’m not sure why this happened, but when using the Import/Export Wizard, even with Identity Insert on, the ID [Identity] column was not inserted with the correct values. To get around this I used ROW_NUMBER to generate new identity values and rebuilt the foreign key tables.
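As a sketch of that approach (the table and column names here are hypothetical, and any foreign key tables still need to be rebuilt against the new IDs afterwards):

using System.Data.SqlClient;

class IdentityRenumberer
{
    // Copies the table into a staging table with freshly generated,
    // contiguous IDs via ROW_NUMBER(). Table and column names are
    // hypothetical; rebuild dependent foreign key tables afterwards.
    private const string RenumberSql = @"
        SELECT ROW_NUMBER() OVER (ORDER BY CustomerId) AS NewCustomerId,
               Name, Email
        INTO dbo.Customers_Renumbered
        FROM dbo.Customers;";

    public static void Renumber(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(RenumberSql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}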

There is a lot of chatter on the forums and in other blog posts saying that BCP with the -E switch (keep identity values) is the most effective way to do an exact copy of tables with identity columns.


Cost Effective Approach

A good thing to keep in mind while preparing your database for migration is that transaction and data transfer charges apply to queries to (and from) SQL Azure. With this in mind, it is best to test your data migration ahead of time so that the actual migration can be performed in as few attempts as possible.

Happy Clouding!

This post also appears on SyntaxC4's Blog

The Basics of Building a Security Token Service

Last week at TechDays in Toronto I ran into a fellow I worked with while I was at Woodbine.  He works with a consulting firm Woodbine uses, and he caught my session on Windows Identity Foundation.  His thoughts were, essentially (paraphrased), that the principle of claims authentication is sound and a good idea, but implementing it requires a major investment.  Yes.  Absolutely.  You will essentially be adding a new tier to the application.  Hmm.  I’m not sure if I can get away with that analogy.  It will certainly feel like you are adding a new tier, anyway.

What strikes me as the main investment is the Security Token Service.  When you break it down, there are a lot of moving parts in an STS.  In a previous post I asked what it would take to create something similar to ADFS 2.  I said it would be fairly straightforward, and broke down the parts as well as what would be required of them.  I listed:

  • Token Services
  • A Windows Authentication end-point
  • An Attribute store-property-to-claim mapper (maps any LDAP properties to any claim types)
  • An application management tool (MMC snap-in and PowerShell cmdlets)
  • Proxy Services (Allows requests to pass NAT’ed zones)

These aren’t all that hard to develop.  With the exception of the proxy services and token service itself, there’s a good chance we have created something similar to each one if user authentication is part of an application.  We have the authentication endpoint: a login form to do SQL Authentication, or the Windows Authentication Provider for ASP.NET.  We have the attribute store and something like a claims mapper: Active Directory, SQL databases, etc.  We even have an application management tool: anything you used to manage users in the first place.  This certainly doesn’t get us all the way there, but they are good starting points.

Going back to my first point, the STS is probably the biggest investment.  However, it’s kind of trivial to create an STS using WIF.  I say that with a big warning though: an STS is a security system.  Securing such a system is NOT trivial.  Writing your own STS probably isn’t the best way to approach this.  You would probably be better off to use an STS like ADFS.  With that being said it’s good to know what goes into building an STS, and if you really do have the proper resources to develop one, as well as do proper security testing (you probably wouldn’t be reading this article on how to do it in that case…), go for it.

For the sake of simplicity I’ll be going through the Fabrikam Shipping demo code since they did a great job of creating a simple STS.  The fun bits are in the Fabrikam.IPSts project under the Identity folder.  The files we want to look at are CustomSecurityTokenService.cs, CustomSecurityTokenServiceConfiguration.cs, and the default.aspx code file.  I’m not sure I like the term “configuration”, as the way this is built strikes me as factory-ish.


The process is pretty simple.  A request is made to default.aspx, which passes the request to FederatedPassiveSecurityTokenServiceOperations.ProcessRequest() along with a newly instantiated CustomSecurityTokenService object obtained by calling CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService().
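The code-behind in the demo boils down to something like this (a rough sketch, not the verbatim Fabrikam source):

using System;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.Web;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Hand the WS-Federation sign-in request off to WIF along with
        // a freshly created STS instance from the configuration class.
        FederatedPassiveSecurityTokenServiceOperations.ProcessRequest(
            Request,
            User as IClaimsPrincipal,
            CustomSecurityTokenServiceConfiguration.Current.CreateSecurityTokenService(),
            Response);
    }
}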

The configuration class contains configuration data for the STS (hence the name), like the signing certificate, but it also instantiates an instance of the STS using that configuration.  The code for it is simple:

namespace Microsoft.Samples.DPE.Fabrikam.IPSts
{
    using Microsoft.IdentityModel.Configuration;
    using Microsoft.IdentityModel.SecurityTokenService;

    internal class CustomSecurityTokenServiceConfiguration
        : SecurityTokenServiceConfiguration
    {
        private static CustomSecurityTokenServiceConfiguration current;

        private CustomSecurityTokenServiceConfiguration()
        {
            this.SecurityTokenService = typeof(CustomSecurityTokenService);
            this.SigningCredentials =
                new X509SigningCredentials(this.ServiceCertificate);
            this.TokenIssuerName = "https://ipsts.fabrikam.com/";
        }

        public static CustomSecurityTokenServiceConfiguration Current
        {
            get
            {
                if (current == null)
                {
                    current = new CustomSecurityTokenServiceConfiguration();
                }

                return current;
            }
        }
    }
}

It has a base type of SecurityTokenServiceConfiguration, and all it does is set the custom type for the new STS, the certificate used for signing, and the issuer name.  It then lets the base class handle the rest.  Then there is the STS itself.  It’s dead simple.  The custom class has a base type of SecurityTokenService and overrides a couple of methods.  The important method it overrides is GetOutputClaimsIdentity():

protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    var inputIdentity = (IClaimsIdentity)principal.Identity;

    Claim name = inputIdentity.Claims.Single(
        claim => claim.ClaimType == ClaimTypes.Name);
    Claim email = new Claim(ClaimTypes.Email,
        Membership.Provider.GetUser(name.Value, false).Email);
    string[] roles = Roles.Provider.GetRolesForUser(name.Value);

    var issuedIdentity = new ClaimsIdentity();
    issuedIdentity.Claims.Add(name);
    issuedIdentity.Claims.Add(email);

    foreach (var role in roles)
    {
        var roleClaim = new Claim(ClaimTypes.Role, role);
        issuedIdentity.Claims.Add(roleClaim);
    }

    return issuedIdentity;
}

It gets the authenticated user, grabs all the roles from the Roles provider, generates a bunch of claims, and returns the identity.  Pretty simple.

At this point you’ve just moved the authentication and Roles stuff away from the application.  Nothing has really changed data-wise.  If you only cared about roles, name, and email you are done.  If you needed something more you could easily add in the logic to grab the values you needed. 

By no means is this production ready, but it is a good basis for how the STS creates claims.

Converting Claims to Windows Tokens and User Impersonation

In a domain environment it is really useful to switch user contexts in a web application.  This could be because you need to log in with credentials that have elevated permissions (or vice versa), or you just need to log in as another user.

It’s pretty easy to do this with Windows Identity Foundation and claims authentication.  When the WIF framework is installed, a service is also installed (off by default) that can translate claims to Windows tokens.  This is called, not surprisingly, the Claims to Windows Token Service (c2WTS).

Following the deploy-with-the-least-attack-surface methodology, this service does not work out of the box.  You need to turn it on and specify which users are allowed to impersonate via token translation.  Now, this doesn’t mean which users can be switched to; it means which identities running the calling process are allowed to request the switch.  E.g. the account running the IIS application pool: Local Service, Network Service, Local System, etc. (preferably a named service account rather than one of the built-in system accounts).

To allow users to do this go to C:\Program Files\Windows Identity Foundation\v3.5\c2wtshost.exe.config and add in the service users to <allowedCallers>:

<windowsTokenService>
  <!--
      By default no callers are allowed to use the Windows Identity Foundation Claims To NT Token Service.
      Add the identities you wish to allow below.
    -->
  <allowedCallers>
    <clear/>
    <!-- <add value="NT AUTHORITY\Network Service" /> -->
    <!-- <add value="NT AUTHORITY\Local Service" /> –>
    <!-- <add value="nt authority\system" /> –>
    <!-- <add value="NT AUTHORITY\Authenticated Users" /> -->
  </allowedCallers>
</windowsTokenService>

You should notice that, by default, no callers are allowed.  Once you’ve added yours, you can start up the service.  It is listed as Claims to Windows Token Service in the Services MMC snap-in.

That takes care of the administrative side of things.  Let’s write some code.  But first, some usings:

using System;
using System.Linq;
using System.Security.Principal;
using System.Threading;
using Microsoft.IdentityModel.Claims;
using Microsoft.IdentityModel.WindowsTokenService;

The next step is to actually generate the token.  From an architectural perspective, we want to use the UPN claim type, as that’s what the service wants to see.  To get the claim, we do some simple LINQ:

IClaimsIdentity identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;

// FirstOrDefault instead of First: if no UPN claim is present we want a
// meaningful error rather than an InvalidOperationException from LINQ.
Claim upnClaim = identity.Claims
    .Where(c => c.ClaimType == ClaimTypes.Upn)
    .FirstOrDefault();

if (upnClaim == null || String.IsNullOrEmpty(upnClaim.Value))
{
    throw new InvalidOperationException("No UPN claim found");
}

string upn = upnClaim.Value;

Following that we do the impersonation:

WindowsIdentity windowsIdentity = S4UClient.UpnLogon(upn);

using (WindowsImpersonationContext ctxt = windowsIdentity.Impersonate())
{
    DoSomethingAsNewUser();

    ctxt.Undo(); // redundant with using { } statement
}

To release the token we call the Undo() method, but if you are within a using { } statement the Undo() method is called when the object is disposed.

One thing to keep in mind, though: if you do not have permission to impersonate the user, a System.ServiceModel.Security.SecurityAccessDeniedException will be thrown.

That’s all there is to it.

Implementation Details

In my opinion, these types of calls really shouldn’t be made all that often.  Realistically you need to take a look at how impersonation fits into the application and then go from there.  Impersonation is a pretty weighty topic for discussion, and frankly, I’m not an expert.

Installing ADFS 2 and Federating an Application

From Microsoft Marketing, ADFS 2.0 is:

Active Directory Federation Services 2.0 helps IT enable users to collaborate across organizational boundaries and easily access applications on-premises and in the cloud, while maintaining application security. Through a claims-based infrastructure, IT can enable a single sign-on experience for end-users to applications without requiring a separate account or password, whether applications are located in partner organizations or hosted in the cloud.

So, it’s a Token Service plus some.  In a previous post I had said:

In other words it is a method for centralizing user Identity information, very much like how the Windows Live and OpenID systems work.  The system is reasonably simple.  I have a Membership data store that contains user information.  I want (n) number of websites to use that membership store, EXCEPT I don’t want each application to have direct access to membership data such as passwords.  The way around it is through claims.

The membership store in this case being Active Directory.

I thought it would be a good idea to run through how to install ADFS and set up an application to use it.  Since we already discussed how to federate an application using FedUtil.exe, I will let you go through the steps in the previous post.  I will provide information on where to find the Metadata later on in this post.

But First: The Prerequisites

  1. Join the Server to the Domain. (I’ve started the installation of ADFS three times on non-domain joined systems.  Doh!)
  2. Install the latest .NET Framework.  I’m kinda partial to using SmallestDotNet.com created by Scott Hanselman.  It’s easy.
  3. Install IIS.  If you are running Server 2008 R2 you can follow these steps in another post, or just go through the wizards.  FYI: The post installs EVERY feature.  Just remember that when you move to production.  Surface Area and what not…
  4. Install PowerShell.
  5. Install the Windows Identity Foundation: http://www.microsoft.com/downloads/details.aspx?FamilyID=eb9c345f-e830-40b8-a5fe-ae7a864c4d76&displaylang=en
  6. Install SQL Server.  This is NOT required.  You only need to install it if you want to use a SQL Database to get custom Claims data.  You could also use a SQL Server on another server…
  7. Download ADFS 2.0 RTW: http://www.microsoft.com/downloads/details.aspx?familyid=118c3588-9070-426a-b655-6cec0a92c10b&displaylang=en

The Installation


Read the terms and accept them.  If you notice, you only have to read half of what you see because the rest is in French.  Maybe the lawyers are listening…these things are getting more readable.


Select Federation Server.  A Server Proxy allows you to use ADFS on a web server not joined to the domain.


We already installed all of these things.  When you click next it will check for the latest hotfixes and ask if you want to open the configuration MMC snap-in.  Start it.


We want to start the configuration Wizard and then create a new Federation Service:


Next we want to create a Stand-alone federation server:


We need to select a certificate for ADFS to use.  By default it uses the SSL certificate of the default site in IIS.  So let’s add one.  In the IIS Manager select the server and then select Server Certificates:


We have a couple of options when it comes to adding a certificate.  For the sake of this post I’ll just create a self-signed certificate, but if you have a domain Certificate Authority you could go that route, or if this is a public-facing service, create a request and get a certificate from a 3rd party CA.


Once we’ve created the certificate we assign it to the web site.  Go to the website and select Bindings…


Add a site binding for https:


Now that we’ve done that we can go back to the Configuration Wizard:


Click next and it will install the service.  It will stop IIS so be aware of that.


You may receive this error if you are installing on Server 2008:


The fix for this is here: http://www.syfuhs.net/2010/07/23/ADFS20WindowsServiceNotStartingOnServer2008.aspx

You will need to re-run the configuration wizard if you do this.  It may complain about the virtual applications already existing.  You have two options: 1) delete the applications in IIS as well as the folder C:\inetpub\adfs; or 2) ignore the warning.

Back to the installation, it will create two new Virtual Applications in IIS:


Once the wizard finishes you can go back to the MMC snap-in and fiddle around.  The first thing we need to do is create an entry for a Relying Party.  This will allow us to create a web application to work with it.


When creating an RP we have a couple of options for providing configuration data.


Since we are going to create a web application from scratch we will enter in manual data.  If you already have the application built and have Federation Metadata available for it, by all means just use that.

We need a name:


Very original, eh?

Next we need to decide on what profile we will be using.  Since we are building an application from scratch we can take advantage of the 2.0 profile, but if we needed backwards compatibility for a legacy application we should select the 1.0/1.1 profile.


Next we specify the certificate to encrypt our claims sent to the application.  We only need the public key of the certificate.  When we run FedUtil.exe we can specify which certificate we want to use to decrypt the incoming tokens.  This will be the private key of the same certificate.  For the sake of this, we’ll skip it.


The next step gets a little confusing.  It asks which protocols we want to use if we are federating with a separate STS.  In this case since we aren’t doing anything that crazy we can ignore them and continue:


We next need to specify the RP’s identifying URI.


Allow anyone and everyone, or deny everyone and add specific users later?  Allow everyone…


When we finish we want to edit the claim rules:


This dialog will allow us to add mappings between claims and the data within Active Directory:


So let’s add a rule.  We want to Send LDAP Attributes as Claims:


First we specify what data in Active Directory we want to provide:


Then we specify which claim type to use:


And ADFS is configured!  Let’s create our Relying Party.  You can follow these steps: Making an ASP.NET Website Claims Aware with the Windows Identity Foundation.  To get the Federation Metadata for ADFS, navigate to the URL that the default website is mapped to, plus /FederationMetadata/2007-06/FederationMetadata.xml.  In my case it’s https://web1.nexus.internal.test/FederationMetadata/2007-06/FederationMetadata.xml.

Once you’ve finished with the utility, it’s important to tell ADFS that our new RP has Metadata available.  Double-click on the RP to get to its properties and select Monitoring:


Add the URL for the Metadata and select Monitor relying party.  This will periodically call up the URL and download the metadata in the event that it changes.

At this point we can test.  Hit F5 and we will redirect to the ADFS page.  It will ask for domain credentials and redirect back to our page.  Since I tested it with a domain admin account I got this back:


It works!

For more information on ADFS 2.0 check out http://www.microsoft.com/windowsserver2008/en/us/ad-fs-2-overview.aspx or the WIF Blog at http://blogs.msdn.com/b/card/

Happy coding!

Making an ASP.NET Website Claims Aware with the Windows Identity Foundation

Straight from Microsoft this is what the Windows Identity Foundation is:

Windows Identity Foundation helps .NET developers build claims-aware applications that externalize user authentication from the application, improving developer productivity, enhancing application security, and enabling interoperability. Developers can enjoy greater productivity, using a single simplified identity model based on claims. They can create more secure applications with a single user access model, reducing custom implementations and enabling end users to securely access applications via on-premises software as well as cloud services. Finally, they can enjoy greater flexibility in application development through built-in interoperability that allows users, applications, systems and other resources to communicate via claims.

In other words it is a method for centralizing user Identity information, very much like how the Windows Live and OpenID systems work.  The system is reasonably simple.  I have a Membership data store that contains user information.  I want (n) number of websites to use that membership store, EXCEPT I don’t want each application to have direct access to membership data such as passwords.  The way around it is through claims.

In order for this to work you need a central web application called a Security Token Service (STS).  This application will do authentication and provide a set of available claims.  It will say “hey! I am able to give you the person’s email address, their username and the roles they belong to.”  Each of those pieces of information is a claim.  This message exists in the application’s Federation Metadata.

So far you are probably saying “yeah, so what?”

What I haven’t mentioned is that every application (called a Relying Party) that uses this central application has one thing in common: each application doesn’t have to handle authentication – at all.  Each application passes off the authentication request to the central application and the central application does the hard work.  When you type in your username and password, you are typing it into the central application, not one of the many other applications.  Once the central application authenticates your credentials it POSTs the claims back to the other application.  A diagram might help:


Image borrowed from the Identity Training kit (http://www.microsoft.com/downloads/details.aspx?familyid=C3E315FA-94E2-4028-99CB-904369F177C0&displaylang=en)

The key takeaway is that only one single application does authentication.  Everything else just redirects to it.  So let’s actually see what it takes to authenticate against an STS (central application).  In future posts I will go into detail about how to create an STS as well as how to use Active Directory Federation Services, which is an STS that authenticates directly against (you guessed it) Active Directory.

The first step is to install the framework and SDK.

WIF RTW: http://www.microsoft.com/downloads/details.aspx?FamilyID=eb9c345f-e830-40b8-a5fe-ae7a864c4d76&displaylang=en

WIF SDK: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=c148b2df-c7af-46bb-9162-2c9422208504

The SDK will install sample projects and add two Visual Studio menu items under the Tools menu.  Both menu items do essentially the same thing, the difference being that “Add STS Reference” pre-populates the wizard with the current web application’s data.

Once the SDK is installed, start up Visual Studio as Administrator.  Create a new web application.  Next, go to the project Properties and into the Web section.  Change the Server settings to use IIS.  You need to use IIS.  To install IIS on Windows 7, check out this post.


So far we haven’t done anything crazy.  We’ve just set a new application to use IIS for development.  Next we have some fun.  Let’s add the STS Reference.

To add the STS Reference go to Tools > Add STS Reference… and fill out the initial screen:



Click next and it will prompt you about using an HTTPS connection.  For the sake of this we don’t need HTTPS so just continue.  The next screen asks us about where we get the STS Federation Metadata from.  In this case I already have an STS so I just paste in the URI:


Once it downloads the metadata it will ask if we want the Token that the STS sends back to be encrypted.  My recommendation is that we do, but for the sake of this we won’t.


As an aside: In order for the STS to encrypt the token it will use a public key to which our application (the Relying Party) will have the private key.  When we select a certificate it will stick that public key in the Relying Party’s own Federation Metadata file.  Anyway… When we click next we are given a list of available Claims the STS can give us:

There is nothing to edit here; it’s just informative.  Next we get a summary of what we just did:


We can optionally schedule a Windows task to download changes.

We’ve now just added a crap-load of information to the *.config file.  Actually, we really didn’t.  We just told ASP.NET to use the Microsoft.IdentityModel.Web.WSFederationAuthenticationModule to handle authentication requests and Microsoft.IdentityModel.Web.SessionAuthenticationModule to handle session management.  Everything else is just boilerplate configuration.  So let’s test this thing:

  1. Hit F5 – Compile compile compile compile compile… loads up http://localhost/WebApplication1
  2. Page automatically redirects to https://login.myweg.com/login.aspx?ReturnUrl=%2fusers%2fissue.aspx%3fwa%3dwsignin1.0%26wtrealm%3dhttp%253a%252f%252flocalhost%252fWebApplication1%26wctx%3drm%253d0%2526id%253dpassive%2526ru%253d%25252fWebApplication1%25252f%26wct%3d2010-08-03T23%253a03%253a40Z&wa=wsignin1.0&wtrealm=http%3a%2f%2flocalhost%2fWebApplication1&wctx=rm%3d0%26id%3dpassive%26ru%3d%252fWebApplication1%252f&wct=2010-08-03T23%3a03%3a40Z (notice the variables we’ve passed?)
  3. Type in our username and password…
  4. Redirect to http://localhost/WebApplication1
  5. Yellow Screen of Death

Wait.  What?  If you are running IIS 7.5 and .NET 4.0, ASP.NET will probably blow up.  This is because the data that was POSTed back to us from the STS had funny characters in the values, like angle brackets and stuff.  ASP.NET does not like this.  Rightfully so; Cross-Site Scripting attacks suck.  To resolve this you have two choices:

  1. Add <httpRuntime requestValidationMode="2.0" /> to your web.config
  2. Use a proper RequestValidator that can handle responses from Token Services (a sketch follows below)

For the sake of testing add <httpRuntime requestValidationMode="2.0" /> to the web.config and retry the test.  You should be redirected to http://localhost/WebApplication1 and no errors should occur.
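If you’d rather go with option 2, here is a sketch of what such a validator might look like, modeled on the WIF and ASP.NET 4.0 integration samples (WSFederationRequestValidator is our own name for it).  You would register it via the requestValidationType attribute on <httpRuntime> in the web.config:

using System;
using System.Web;
using System.Web.Util;
using Microsoft.IdentityModel.Protocols.WSFederation;

// A sketch of option 2: let the WS-Federation sign-in response (wresult)
// through request validation, and fall back to the default rules for
// everything else.
public class WSFederationRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(HttpContext context, string value,
        RequestValidationSource requestValidationSource, string collectionKey,
        out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        if (requestValidationSource == RequestValidationSource.Form
            && collectionKey.Equals(WSFederationConstants.Parameters.Result, StringComparison.Ordinal))
        {
            // Only accept the POSTed value if it really parses as a sign-in response.
            if (WSFederationMessage.CreateFromFormPost(context.Request) is SignInResponseMessage)
            {
                return true;
            }
        }

        return base.IsValidRequestString(context, value, requestValidationSource,
            collectionKey, out validationFailureIndex);
    }
}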

Seems like a pointless exercise until you add a chunk of code to the default.aspx page. Add a GridView and then add this code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Threading;
using System.IdentityModel;
using System.IdentityModel.Claims;
using Microsoft.IdentityModel.Claims;

namespace WebApplication1
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            IClaimsIdentity claimsIdentity = ((IClaimsPrincipal)(Thread.CurrentPrincipal)).Identities[0];

            GridView1.DataSource = claimsIdentity.Claims;
            GridView1.DataBind();
        }
    }
}

Rerun the test and you should get back some values.  I hope some light bulbs just turned on for some people :)

Azure Blob Uploads

Earlier today I was talking with Cory Fowler about an issue he was having with an Azure blob upload.  Actually, he offered to help with one of my problems first before he asked me for my thoughts – he’s a real community guy.  Alas I wasn’t able to help him with his problem, but it got me thinking about how to handle basic Blob uploads. 

On the CommunityFTW project I had worked on a few months back I used Azure as the back end for media storage.  The basis was simple: upload media stuffs to a container of my choice.  The end result was this class:

// Assumes the Microsoft.WindowsAzure StorageClient library;
// SettingsController is the project's own configuration helper.
using System;
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public sealed class BlobUploadManager
{
    private static CloudBlobClient blobStorage;

    private static bool s_createdContainer = false;
    private static object s_blobLock = new Object();
    private string theContainer = "";

    public BlobUploadManager(string containerName)
    {
        if (string.IsNullOrEmpty(containerName))
            throw new ArgumentNullException("containerName");

        CreateOnceContainer(containerName);
    }

    public CloudBlobClient BlobClient { get; set; }

    public string CreateUploadContainer()
    {
        BlobContainerPermissions perm = new BlobContainerPermissions();
        var blobContainer = blobStorage.GetContainerReference(theContainer);
        perm.PublicAccess = BlobContainerPublicAccessType.Container;
        blobContainer.SetPermissions(perm);

        // Shared access signature good for one hour of writes.
        var sas = blobContainer.GetSharedAccessSignature(new SharedAccessPolicy()
        {
            Permissions = SharedAccessPermissions.Write,
            SharedAccessExpiryTime = DateTime.UtcNow + TimeSpan.FromMinutes(60)
        });

        return new UriBuilder(blobContainer.Uri) { Query = sas.TrimStart('?') }.Uri.AbsoluteUri;
    }

    private void CreateOnceContainer(string containerName)
    {
        this.theContainer = containerName;

        if (s_createdContainer)
            return;

        lock (s_blobLock)
        {
            var storageAccount = new CloudStorageAccount(
                                     new StorageCredentialsAccountAndKey(
                                         SettingsController.GetSettingValue("BlobAccountName"),
                                         SettingsController.GetSettingValue("BlobKey")),
                                     false);

            blobStorage = storageAccount.CreateCloudBlobClient();
            CloudBlobContainer container = blobStorage.GetContainerReference(containerName);
            container.CreateIfNotExist();

            container.SetPermissions(
                new BlobContainerPermissions()
                {
                    PublicAccess = BlobContainerPublicAccessType.Container
                });

            s_createdContainer = true;
        }
    }

    public string UploadBlob(Stream blobStream, string blobName)
    {
        if (blobStream == null)
            throw new ArgumentNullException("blobStream");

        if (string.IsNullOrEmpty(blobName))
            throw new ArgumentNullException("blobName");

        blobStorage.GetContainerReference(this.theContainer)
                   .GetBlobReference(blobName.ToLowerInvariant())
                   .UploadFromStream(blobStream);

        return blobName.ToLowerInvariant();
    }
}
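In case it helps, usage ends up looking something like this (the container and file names here are made up):

// Hypothetical usage: push a local file up to a "media" container.
using (var stream = File.OpenRead(@"C:\temp\photo.jpg"))
{
    var uploader = new BlobUploadManager("media");
    string blobName = uploader.UploadBlob(stream, "photo.jpg");
}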

With any luck this might help someone trying to jump into Azure.

ViewStateUserKey, ValidateAntiForgeryToken, and the Security Development Lifecycle

Last week Microsoft published the 5th revision to the SDL.  You can get it here: http://www.microsoft.com/security/sdl/default.aspx.

Of note, there are additions for .NET, specifically ASP.NET and the MVC Framework.  Two key things I noticed initially were the addition of System.Web.UI.Page.ViewStateUserKey and the ValidateAntiForgeryToken attribute in MVC.

Both have existed for a while, but they are now part of the requirements for final testing.

ViewStateUserKey is a page-specific identifier for a user.  Sort of a viewstate session.  It’s used to prevent forging of form data from other pages, or in fancy terms, it prevents Cross-Site Request Forgery attacks.

Imagine a web form that has a couple fields on it – sensitive fields, say money transfer fields: account to, amount, transaction date, etc.  You need to log in, fill in the details, and click submit.  That submit POST’s the data back to the server, and the server processes it.  The only validation that goes on is whether the viewstate hasn’t been tampered with.

Okay, so now consider that you are still logged in to that site, and someone sends you a link to a funny picture of a cat.  Yay, kittehs!  Anyway, on that page is a simple set of hidden form tags with malicious data in it.  Something like their account number, and an obscene number for the cash transfer.  On page load, JavaScript POSTs that form data to the transfer page, and since you are already logged in, the server accepts it.  Sneaky.

The reason this worked is that the viewstate was never modified.  It could be the same viewstate across multiple sessions.  Therefore, the way you fix this is to add a session identifier to the viewstate through the ViewStateUserKey.  Be forewarned: you need to do this in Page_Init, otherwise it’ll throw an exception.  The easiest way to accomplish this is:

void Page_Init(object sender, EventArgs e)
{
    ViewStateUserKey = Session.SessionID;
}

Oddly simple.  I wonder why this isn’t the default in newer versions of ASP.NET?

Next up is the ValidateAntiForgeryToken attribute.

In MVC, you add this attribute to all POST action methods.  It requires that all POSTed forms have a token associated with the request.  Each token is session specific, so if it’s an old or other-session token, the POST will fail.  Given that, you need to add the token to the page, which you do with the Html.AntiForgeryToken() helper inside the form.
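A minimal sketch of the pattern (the controller and action names below are made up):

using System.Web.Mvc;

public class AccountController : Controller
{
    // The corresponding view emits the token inside the form with:
    //   <%= Html.AntiForgeryToken() %>

    // Reject any POST that arrives without a matching anti-forgery token.
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Transfer(FormCollection form)
    {
        // If we got here, the token checked out; process the request.
        return RedirectToAction("Index");
    }
}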

It prevents the same type of attack as the ViewStateUserKey, albeit in a much simpler fashion.

Quotable Quotes from Lang.NET Symposium

Lang.NET wrapped up just over an hour ago, and there were many funny and interesting quotes I compiled over the last few days. Here are all the ones I can remember or find on Twitter.

Mads Torgersen on C# dynamic: “We owe it to IronPython and Ruby to make them first class languages.”

Mads Torgersen: “Static typers put the ‘anguish in languish’.”

Keith Robertson: “I’m here to sell you something. You can tell because I'm the one with the tie.”

Erik Meijer: “C# dynamic ‘is like the needle exchange program’.”

Tim Macfarlane: “We're planning on putting [Tycho] on the DLR pretty soon.”

Karl Prosser: “When I see squiggly brackets, it feels like a real language to me.”

Jeffrey Snover on Powershell: “No prayer based parsing.”

Lars Bak: “It’s good to have a slow compiler because that gives you job security.”

Joshua Goodman: “The way PMs fix things is by sending email.”

Erik Meijer: “I love the math - you don't need brain to do math. It’s all symbol pushing.”

Luke Hoban: “CodeDom is able to handle any language that is C#. Including VB.”

Erik Meijer: "LINQ is the solution to everything.”

Philip Wadler: “Monads aren't everything!”

Philip Wadler: “Nothing is so practical as a good theory.”

Erik Meijer: “Because of reflection, every language on the CLR is dynamic.”

I’ve enjoyed myself at Lang.NET and found the people and content brilliant. I’ll definitely be back again next year. Kudos to the guys who helped set it all up.

Generic Implementation of INotifyPropertyChanged on ADO.NET Data Services (Astoria) Proxies with T4 Code Generation

Last week Mike Flasko from the ADO.NET Data Services (Astoria) team blogged about what’s coming in V1.5, which will ship prior to VS 2010. I applaud these out-of-band releases.

One of the new features is support for two-way data binding in the proxy classes generated by the client library. These classes currently do not implement INotifyPropertyChanged, nor do they project into ObservableCollections out of the box.

Last week at the MVP Summit I had the chance to see a demo of this and other great things coming down the road from the broader Data Programmability team. It seems like more and more teams are turning to T4 templates for code generation, which is great for extensibility purposes. At first I was hopeful that the team had implemented these proxy generation changes by switching to T4 templates along with a corresponding “better” template.  Unfortunately, this is not the case, and we won’t see any T4 templates in v1.5. It’s too bad. Would it really have been that much more work to invest in implementing T4 templates than to add new switches to datasvcutil and write new code generation (along with testing that code)?

Anyway, after seeing some other great uses of T4 templates coming from product teams for VS 2010, I thought I would invest some of my own time to see if I couldn’t come up with a way of implementing INotifyPropertyChanged all on my own. The problem with the existing code gen is that while there are partial methods created and called for each property setter (i.e. FoobarChanged()), there is no generic event fired that would allow us to in turn raise an INotifyPropertyChanged.PropertyChanged event. You could manually add this for each and every property on every class, but it’s tedious.

I couldn’t have been the first person to think of doing this, and after a bit of googling, I confirmed that. Alexey Zakharov’s post on generating custom proxies with T4 has been completely ripped off, er, has served as inspiration in this derivative work. What I didn’t like about Alexey’s solution was that it completely overwrote the proxy client. I would have preferred a solution that just implemented the partial methods in a partial class to fire the PropertyChanged event. This way, any changes, improvements, etc. to the core MS code gen can still be expected down the road. Of course, Alexey’s template is the better solution if there are other things you want to customize about the template in its entirety, should you find that what you need to accomplish can’t be done with a partial class.

What I did like about Alexey’s solution is that it uses the service itself to query the service metadata directly. I had planned on using reflection to accomplish the same thing, but in hindsight it would be difficult to generate a partial class of a class I’m currently reflecting on in the same project (of course). Duh.

So what do you need to do to get this solution working?

  1. Add the MetadataHelper.tt file to the project where you have your reference/proxies to the data service. You will want to make sure there is no custom tool associated with this file – it’s just included as a reference in the next one. This file wraps up all the calls to get the metadata. I’ve made a couple of small changes to Alexey’s: added support for Byte, and fixed Boolean (a typo in AZ’s).
  2. Copy the DataServiceProxy.tt file to the same project. If you have more than one data service, you’ll want one of these files for each reference, so for starters you may want to rename it accordingly. You are going to need to edit this bad boy as well.
  3. There are two options you’ll need to specify inside of the proxy template. The MetadataUri should be the uri to your service suffixed with $metadata. I’ve found that if your service is secured with integrated authentication, the metadata helper won’t pass those credentials along, so for the purposes of code generation you’d best leave anonymous access on. Second is the Namespace. You will want to use the same namespace used by your service reference. You might have to do a Show All Files and drill into the Reference.cs file to see exactly what that is:

     var options = new {
         MetadataUri = "http://localhost/ObjectSharpSample.Service/SampleDataService.svc/$metadata",
         Namespace = "ObjectSharp.SampleApplication.ServiceClient.DataServiceReference"
     };

That’s it. When you save the file, should everything work, you’ll have a .cs file generated that implements, through partial classes, the INotifyPropertyChanged interface. Something like this:

public partial class Address : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string property)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(property));
        }
    }

    partial void OnAddressIdChanged()
    {
        OnPropertyChanged("AddressId");
    }
    partial void OnAddressLine1Changed()
    {
        OnPropertyChanged("AddressLine1");
    }
}

What Makes us Want to Program? Part 4

In my previous post, I started talking about using Microsoft technologies over PHP and open source technologies.  There were a couple of reasons why I chose to make the move.  First, from a development perspective, everything was object oriented.  PHP was just getting started with OOP at the time, and it wasn’t all that friendly.  Second, development time was generally cut at least in half because of the built-in controls of ASP.NET.  Third, the end result was a richer application experience for the same reason.  The final reason comes down to the data aspect.

Pulling data from a database in PHP wasn’t easy to do.  The built-in support was for MySQL, with very little, if anything, for SQL Server.  In a lot of cases that isn’t a bad thing.  MySQL is free.  You can’t argue with that.  However, MySQL wasn’t what you would call ACID compliant.  Defined, MySQL did not have the characteristics of being Atomic, Consistent, Isolated, and Durable.  Essentially, when data goes missing, there is nothing you can do about it.  SQL Server, on the other hand, is very much ACID compliant.  This is something you want.  Period.

Once .NET 2.0 was released, a whole new paradigm came into play for data in a web application.  It was easy to access!  Little to no boilerplate coding was necessary for data access anymore.  Talk about a selling point.  Especially when the developer in question is 16 going on 17.

Now that I didn’t need to worry about data access code, I could start working on figuring out SQL.  At the time T-SQL scared the crap out of me.  My brain just couldn’t work around datasets.  The idea of working with multiple pieces of data at once was foreign.  I understood single-valued iterations.  A for loop made sense to me.  SELECTs and JOINs confused me.  Mind you, I didn’t start statistics in math until the following year.  Did SQL help with statistics, or did statistics help me finally figure out SQL?  It’s a chicken-and-egg paradox.

So here I am, 17 years old, understanding multiple languages, building dozens of applications, and attending developer conferences all the while managing my education in High School.  Sweet.  I have 3 years until the next release of Visual Studio comes out.  It was here that I figured I should probably start paying more attention in school.  It’s not so much that I wasn’t paying attention, it’s just that I didn’t care enough.  I put in just enough effort to skate through classes with a passing mark.  It was also at this point in time that I made an interesting supposition.

Experts tend to agree that people who are programming geniuses are also good at math and critical thinking or reasoning.  Not one or the other, but both.  Now I’m not saying I’m a programming genius, but I suck at math.  It was just never in the cards.  But, according to all those High School exams and the psychological profiling they gather from them, my Critical Thinking and Reasoning skills are excellent.  Top 10% in Canada according to the exam results.  My math skills sit around top 20-30% depending on the type.

Neurologists place this type of thinking in the left hemisphere of the brain.  The left brain is associated with verbal, logical, and analytical thinking. It excels in naming and categorizing things, symbolic abstraction, speech, reading, writing, arithmetic.  Those who live in the left brain are very linear.  Perfect for a software developer.

The supposition I made had more to do with the Pre-Frontal Cortex of the brain.  It does a lot of work, some of which is planning complex cognitive behaviors.  Behaviors like making a list, calculating numbers, abstracting thoughts, etc.  It plans out the processes our brains use to get things done.  This is true for both sides of the brain.  So, suppose you are left-brain-oriented.  You are predisposed to be good at development.  Now, suppose your Pre-Frontal Cortex is more developed than the average person’s.  It could be reasoned that part of being a programming genius is having a well-developed Pre-Frontal Cortex.

So why does this make us want to program?  Find out in Part 5.