Single Sign-On from Active Directory to a Windows Azure Application Whitepaper

Just came across this on Alik Levin's blog.  I just started reading it, but so far so good!  Here is the abstract:

This paper contains step-by-step instructions for using Windows® Identity Foundation, Windows Azure, and Active Directory Federation Services (AD FS) 2.0 for achieving SSO across web applications that are deployed both on premises and in the cloud. Previous knowledge of these products is not required for completing the proof of concept (POC) configuration. This document is meant to be an introductory document, and it ties together examples from each component into a single, end-to-end example.

You can download it here in either docx or pdf: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=1296e52c-d869-4f73-a112-8a37314a1632

AzureFest–Final Countdown: 2 Days to go

[The soundtrack for this post can be found on YouTube]

Cory Fowler is the Canadian MVP for Windows Azure, an ObjectSharp consultant, and a good friend of mine.  He will be presenting on Windows Azure at, you guessed it, AzureFest!  We have two half-day events on December 11th, 2010 (two days from now – see what I did there?) at Microsoft’s office in Mississauga, and they’re chock-full of everything you need to know about getting started with Windows Azure.  You can register by clicking here.

What You'll Learn

  • How to set up your Azure account
  • How to take a traditional on-premises ASP.NET application and deploy it to Azure
  • Publishing applications to the Azure Developer Portal
  • Setting up the Azure SDK and the Azure Tools for Visual Studio on your laptop
  • Using the development AppFabric

We Provide

  • The tools you will need on your machine to prepare yourself for Azure
  • Hands on instruction and expert assistance
  • Power and network access
  • Snacks and refreshments
  • For every Azure activation, funding for your user group
  • Post event technical resources so you can take your skills to the next level

You Provide

  • Your own laptop
  • Your own credit card (required for Azure activations, even if you only set up a trial period; the event itself is free!)
  • Your experience in building ASP.NET Applications and Services

Seats are still available.  Register!

P.S. Did I mention this event is free?

Single Sign-On Between the Cloud and On-Premise using ADFS 2

One of the issues I hear about with hosting services in the cloud has to do with managing identity.  Since the service isn’t local, it’s harder to tie it into services like Active Directory.  What do I mean by this?

I’m kind of particular about how certain things work.  I hate having more than one set of credentials across applications.  Since we can’t join our Azure servers to our domain, there’s a good chance we will need separate credentials for our internal domain and our cloud services.  However, it’s possible to make our cloud applications use our Active Directory credentials via a claims service.

With Federation Services it’s surprisingly easy to do.  Yesterday we talked about installing Active Directory Federation Services and federating an application.  Today we will talk about what it takes to get things talking between Azure and ADFS.

As a recap, yesterday we:

  1. Installed prerequisites
  2. Installed ADFS 2.0 on a domain joined server
  3. Created a relying party
  4. Created claims mappings to data in Active Directory
  5. Created a simple Claims-Aware application

So what do we need to do next?  There really isn’t much we need to do:

  1. Build Azure App
  2. Federate it using FedUtil.exe

Building an Azure application isn’t trivial, but we don’t need to know much to federate it.

How do we federate it?  Follow the same steps as yesterday, providing the Azure application’s URI and the federation metadata from ADFS.
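
Just to give a sense of what the app ends up with (this snippet is my own illustration, not part of the walkthrough, and it assumes the standard WIF 1.0 object model in Microsoft.IdentityModel), the federated Azure application can enumerate the claims that ADFS mapped from Active Directory:

    using System.Threading;
    using System.Web.UI;
    using Microsoft.IdentityModel.Claims;

    public partial class ClaimsDump : Page
    {
        protected void Page_Load(object sender, System.EventArgs e)
        {
            // After the WS-Federation sign-in, the thread principal carries
            // an IClaimsIdentity populated from the ADFS token.
            var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            if (identity == null)
                return;

            foreach (Claim claim in identity.Claims)
            {
                // e.g. .../claims/name = jdoe (claim types and values depend on
                // the mappings you created in ADFS yesterday)
                Response.Write(Server.HtmlEncode(claim.ClaimType + " = " + claim.Value) + "<br/>");
            }
        }
    }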

One of the gotchas with deploying to Azure, though, is that the Microsoft.IdentityModel assembly isn’t in the GAC on the Azure instances, so it won’t be there when our application runs.  Therefore we need to copy the assembly to the bin folder for deployment.  We do that by opening the properties of the Microsoft.IdentityModel reference and setting Copy Local to true:

[Screenshot: the Microsoft.IdentityModel reference properties with Copy Local set to True]

That isn’t the only gotcha.  We need to keep in mind how data is transferred between the cloud and the intranet.  In most cases, nothing goes on behind the scenes; everything passes through the client’s browser via POST calls.  If the client’s browser is on the local intranet, when it hits the cloud app it will be redirected to an intranet location.  This works because the client can reach both the cloud app and ADFS.  That isn’t necessarily the case for people who work offsite, or for partners of the company.

We need to have the ADFS server accessible to the public.  This is kind of an ugly situation.  Leaving the politics out of it, we are sticking a domain-joined system out in public whose sole responsibility is authentication and identity mapping.

One way to mitigate certain risks is to use an ADFS Proxy service.  This service runs on a non-domain-joined system in an edge network and relays requests to the ADFS server inside the corporate network.  External applications would use the proxy service.

Installing the Proxy service is relatively simple, but a topic for another post.

Azure Blob Uploads

Earlier today I was talking with Cory Fowler about an issue he was having with an Azure blob upload.  Actually, he offered to help with one of my problems before he asked me for my thoughts – he’s a real community guy.  Alas, I wasn’t able to help him with his problem, but it got me thinking about how to handle basic blob uploads.

On the CommunityFTW project I worked on a few months back, I used Azure as the back end for media storage.  The premise was simple: upload media to a container of my choice.  The end result was this class:

    using System;
    using System.IO;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public sealed class BlobUploadManager
    {
        private static CloudBlobClient blobStorage;

        private static bool s_createdContainer = false;
        private static object s_blobLock = new object();
        private string theContainer = "";

        public BlobUploadManager(string containerName)
        {
            if (string.IsNullOrEmpty(containerName))
                throw new ArgumentNullException("containerName");

            CreateOnceContainer(containerName);
        }

        // Exposes the shared client for callers that need more than simple uploads.
        public CloudBlobClient BlobClient
        {
            get { return blobStorage; }
        }

        // Marks the container as publicly readable and returns its URI with a
        // Shared Access Signature that allows writes for the next hour.
        public string CreateUploadContainer()
        {
            var blobContainer = blobStorage.GetContainerReference(theContainer);

            var perm = new BlobContainerPermissions
            {
                PublicAccess = BlobContainerPublicAccessType.Container
            };
            blobContainer.SetPermissions(perm);

            var sas = blobContainer.GetSharedAccessSignature(new SharedAccessPolicy()
            {
                Permissions = SharedAccessPermissions.Write,
                SharedAccessExpiryTime = DateTime.UtcNow + TimeSpan.FromMinutes(60)
            });

            return new UriBuilder(blobContainer.Uri) { Query = sas.TrimStart('?') }.Uri.AbsoluteUri;
        }

        // Creates the container the first time it is needed.  The flag is
        // re-checked inside the lock so concurrent constructors don't race to
        // create the same container twice.
        private void CreateOnceContainer(string containerName)
        {
            this.theContainer = containerName;

            if (s_createdContainer)
                return;

            lock (s_blobLock)
            {
                if (s_createdContainer)
                    return;

                var storageAccount = new CloudStorageAccount(
                                         new StorageCredentialsAccountAndKey(
                                             SettingsController.GetSettingValue("BlobAccountName"),
                                             SettingsController.GetSettingValue("BlobKey")),
                                         false);

                blobStorage = storageAccount.CreateCloudBlobClient();
                CloudBlobContainer container = blobStorage.GetContainerReference(containerName);
                container.CreateIfNotExist();

                container.SetPermissions(
                    new BlobContainerPermissions()
                    {
                        PublicAccess = BlobContainerPublicAccessType.Container
                    });

                s_createdContainer = true;
            }
        }

        // Uploads the stream under a lower-cased blob name and returns the
        // name that was actually stored.
        public string UploadBlob(Stream blobStream, string blobName)
        {
            if (blobStream == null)
                throw new ArgumentNullException("blobStream");

            if (string.IsNullOrEmpty(blobName))
                throw new ArgumentNullException("blobName");

            blobStorage.GetContainerReference(this.theContainer)
                       .GetBlobReference(blobName.ToLowerInvariant())
                       .UploadFromStream(blobStream);

            return blobName.ToLowerInvariant();
        }
    }

With any luck this might help someone trying to jump into Azure.
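
For what it’s worth, here’s roughly how the class gets used.  This is a quick sketch rather than code from the project; it assumes the BlobAccountName and BlobKey settings are in place, and the container and file names are just examples:

    using System;
    using System.IO;

    class Program
    {
        static void Main()
        {
            // "media" is just an example container name.
            var uploader = new BlobUploadManager("media");

            using (FileStream stream = File.OpenRead(@"C:\temp\photo.jpg"))
            {
                // UploadBlob stores the blob under a lower-cased name.
                string storedName = uploader.UploadBlob(stream, "Photo.jpg");
                Console.WriteLine("Stored blob as: " + storedName);
            }

            // Write-only, time-limited URL for the container (SAS).
            Console.WriteLine(uploader.CreateUploadContainer());
        }
    }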

The Benefits of Windows Azure

The age of cloud computing is fast approaching. Or at least that's what the numerous vendors of cloud computing would have you believe. The challenge that you (and all developers) face is to determine just what cloud computing is and how you should take advantage of it. Not to mention whether you even should take advantage of it.

While there is little agreement on exactly what constitutes 'cloud computing', there is a consensus that the technology is a paradigm shift for developers. And like pretty much every paradigm shift there is going to be some hype involved. People will recommend moving immediately to the technology en masse. People will suggest that cloud computing has the ability to solve all that is wrong with your Web site. Not surprisingly, neither of these statements is true.

And, as with many other paradigm shifts, the reality is less impactful and slower to arrive than the hype would have you believe. So before you start down this supposedly obvious ‘path to the future of computing’, it's important to have a good sense of what the gains will be. Let's consider some of the benefits that cloud computing offers.

Instant Scalability

If you are tasked with building a customer-facing Web site, then one of the main concerns is scalability. Regardless of the type of site being created, there will be considerable intellectual energy spent determining how to configure the Web servers to maximize the up-time. And in many cases the infrastructure design must also consider issues not related solely to reliability. The ability to handle peak times, which can be a large multiple of the normal level of activity, must also be designed into the architecture.

These spikes in usage come in a couple of different varieties. Sometimes the spikes come at predictable times: think of the holiday season for a retail site, or a sale on a travel site. Sometimes the spikes cannot be predicted, such as a breaking news event for a current events site. But regardless of the type of spike, the infrastructure architect must create an infrastructure capable of taking these variations in stride. The result, especially if the peak is 10 times the average load or more, is that extra (and mostly unused) capacity must be built into the design. Capacity that must be paid for, yet remains idle.

Into this picture comes cloud computing. Regardless of the cloud platform for which you develop, the ability to scale up and down with the click of a mouse is readily available. For Windows Azure, there are a number of different scalability points, including the number of virtual machines assigned to the application, the number of CPUs in each of the virtual machines, and so on. Within the application itself, you as the designer would have already partitioned the application into the various roles that are then deployed onto the virtual machines.

As the demand on the Web site increases, additional machines, CPUs or roles can be added to keep responsiveness consistent under all of those loads. More importantly, when demand decreases, the resources can be removed. Since these settings form the basis for the price paid for the cloud computing service, companies end up paying only for the capacity that they require.

The price to be paid for this flexibility is mostly that the application needs to be designed with the necessary roles in mind. As well, there are other constructs (such as the AppFabric and the Service Bus) and technologies (such as WCF) that need to be mastered and integrated into the application. As a result, it is easier to build a Web application that works with Windows Azure right from the start. This is not to say that existing Web applications can’t be refactored to take advantage of the cloud…they certainly can. But starting from scratch allows you to take full advantage of the benefits offered by Azure.
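
To make the "designed with roles in mind" point a little more concrete, here is a bare-bones sketch (my own illustration, not from this article) of what a background worker role looks like when written against the Azure service runtime; the web role handles requests while something like this does the heavy lifting:

    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // A minimal worker role: Azure calls OnStart when the instance is
    // provisioned, then Run for the lifetime of the instance.
    public class WorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Read configuration, wire up diagnostics, etc.
            return base.OnStart();
        }

        public override void Run()
        {
            while (true)
            {
                // Do the background work for this role (poll a queue,
                // resize images, crunch numbers...), then pause briefly.
                Thread.Sleep(10000);
            }
        }
    }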

Expandable Storage

The ability to avoid idle resources is not the only appeal of cloud computing. Another resource that can be virtualized for most applications is the database. Just like the CPU, database usage can rise and fall with the whims and patterns of the user base. And the fact is that the vast majority of business databases do little more than grow in size as time goes on. Again, infrastructure architects need to consider both growth rate and usage patterns as they allocate resources to the database servers. As with the machine-level resources, over-capacity must be designed into the architecture. By using a database hosted in the cloud, the allocation of disk space and processing power can be modified on an as-needed basis. And you, as the consumer, pay only for the space and power that you use.

There are some additional thoughts that need to be given to the use of a cloud database. In order to provide the described flexibility, cloud database providers freely move data from one server to another. As a result, there must be a fairly high level of trust in the provider, particularly if the data is sensitive in nature. For the traditional non-cloud database, the owner of the Web site maintains physical control over the data (by virtue of their physical control over the database servers). Even if the server is hosted at a co-location facility, the Web site owner ‘knows’ where the data is at all times.

When the data is persisted to the cloud, however, this is no longer the case. Now the data is physically in control of the cloud provider. The owner has no idea on which server the data is stored. Or even, when you get right down to it, which city. For some companies, this is a level of trust well beyond what they might have been comfortable with in the past.

As someone who lives outside the United States (I’m from Canada), there is one more consideration: privacy. Data privacy laws vary from country to country. When data is stored ‘in the cloud’, there is little weight given to the physical location of the data. After all, the actual location has been virtualized out through the cloud concept. Information can (and does) move across national boundaries based on the requirements of the application. And when data resides in another country, it may very well be subject to the privacy laws of that country. If those laws are significantly different from your own, you might need to modify your corporate policies or the Web application itself to address whichever requirements are more stringent. This sort of situation gives rise to a common approach to cloud storage: data segregation.

In data segregation, the data required by the Web application is stored in multiple locations. Data that is static and/or not particularly sensitive is stored in the cloud. Data that is sensitive is stored in a traditional (and more subject to owner control) location. Naturally, the Web application needs to be structured to combine the data from the different sources. And the traditionally located data needs to be stored in an infrastructure that is reliable and scalable…with all of the problems that the implementation of those features entail.

The functionality offered by cloud computing will be enticing to some, but definitely not all, Web sites. For those who fit the target audience (Web sites that have a wide fluctuation in usage patterns), or just those who want to outsource their Internet infrastructure, cloud computing is definitely appealing. For developers of these sites, platforms such as Windows Azure represent a significant change in the necessary development techniques. And even with the inherent complexity, the shift to cloud computing is beneficial to developers (the resulting applications tend to be more modular, composable and testable), enough to make further exploration of the details worthwhile.

More Thoughts on the Cloud

One of the more farsighted thoughts on the implications of cloud computing is the concern about vendor lock-in. Tim Bray mentioned it in his Get in the Cloud post:

Big Issue · I mean a really big issue: if cloud computing is going to take off, it absolutely, totally, must be lockin-free. What that means is that if I’m deploying my app on Vendor X’s platform, there have to be other vendors Y and Z such that I can pull my app and its data off X and it’ll all run with minimal tweaks on either Y or Z.

...

I’m simply not interested in any cloud offering at any level unless it offers zero barrier-to-exit.

This idea was also commented on by Dare Obasanjo here. It was Dare who originally pointed me at Tim's post.

My take on the vendor lock-in problem is two-fold. First is the easier one to deal with - the platform on which the application is running. As it sits right now, use of Azure is dependent on you being able to publish an application. The destination for the application is a cloud service, but that is not a big deal. You can just as easily publish the application to your own servers (or server farm). The application which is being pushed out to the cloud is capable of being deployed onto a different infrastructure.

Now, there are aspects of the cloud service which might place some significant requirements on your target infrastructure. A basic look at the model used by Azure indicates that a worker pattern is being used. Requests arrive at the service and are immediately queued. The requests are then processed in the background by a worker. Placing the request in a queue helps to ensure the reliability of the application, as well as the ability to scale up on demand. So if you created an infrastructure that was capable of supporting such a model, then your lock-in at the application level doesn't exist. Yes, the barrier is high, but it is not insurmountable. And there is the possibility that additional vendors will take up the challenge.
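
As an illustration of that queued-worker model (a sketch of my own, not code from Azure itself; the queue name and the use of development storage are just placeholders), the background worker ends up looking something like this:

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public class RequestProcessor
    {
        public void Run()
        {
            // Development storage for the sketch; a real role would read its
            // account details from the service configuration.
            CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
            CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("requests");
            queue.CreateIfNotExist();

            while (true)
            {
                // Requests were queued by the front end; pull them off and
                // process them in the background.
                CloudQueueMessage message = queue.GetMessage();
                if (message == null)
                {
                    Thread.Sleep(1000);   // nothing waiting, back off briefly
                    continue;
                }

                Console.WriteLine("Processing request: " + message.AsString);
                queue.DeleteMessage(message);
            }
        }
    }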

The second potential for lock-in comes from the data. Again, this becomes a matter of how you have defined your application. Many companies will want to maintain their data within their premises. In the Azure world, this can be done through ADO.NET Data Services. In fact, this is currently (I believe) the expected mechanism. The data stores offered by Azure are not intended to be used for large volumes of data. At some point, I expect that Azure will offer the ability to store data (of the larger variety) within the cloud. At that point, the spectre of lock-in becomes solid. And you should consider your escape options before you commit to the service. But until that happens, the reality is that you are still responsible for your data. It is still yours to preserve, backup and use.

The crux of all this is that the cloud provides pretty much the same lock-in that the operating system does now. If you create an ASP.NET application, you are now required to utilize IIS as the web server. If you create a WPF application, you require either Silverlight or .NET Framework on the client. For almost every application choice you make, there is some form of lock-in. It seems to me that, at least at the moment, the lock-in provided by Azure is no worse than any other infrastructure decision that you would make.

Summarizing the Cloud Initiative

So it's the last few hours of PDC for this year. Which means that pretty much all of the information that can be shoved into my brain has been. It also means that it's a pretty decent moment to be summarizing what I've learned.

Obviously (from its presence in the initial keynote and the number of sessions) cloud computing was the big news. It was also one of the more talked-about parts of the conference, and not necessarily for a good reason. Many people I talked to walked out of the keynote wondering exactly what Azure was. Was it web hosting? If so, what's the point? It's not like there aren't other companies doing the same thing. Could it be more than web hosting? If so, that wasn't made very clear from the keynote. In other words, I was not exactly chomping at the bit to try out Azure.

But it's here at the end of the week. And I've had a chance to see some additional sessions and talk to a number of people about what Azure is capable of and represents for Microsoft. That has tempered my original skepticism a little. But not completely.

In the vision of Azure that was presented, the cloud was intended to be a deployment destination in the sky. Within that cloud, there was some unknown infrastructure that you did not need to be aware of. You could configure the number of servers to go up or down depending on the expected traffic to your site. As you change the configured value, your application is provisioned accordingly. This is nice for companies that need to deal with spikes in application usage, or that don't have (or don't want to have) the support personnel for their infrastructure.

However, there are some limitations on the type of applications that fit this model. For example, you need to be able to deploy the application. This implies that you have created the application to the point where you can publish it to the Azure service. The published application might include third-party components (a purchased ASP.NET control, for example), but it can't use a separate third-party application (such as Community Server).

As well, you need to be able to access the application through the web. You could use a Silverlight front end to create the requests. You could use a client-based application to create the requests. But, ultimately, there is a Web request that needs to be sent to the service. I fully expect that Silverlight will be the most common interface.

So if you have applications that fit into that particular model, then Azure is something you should look at. There are some 'best practices' that you need to consider as part of your development, but they are not onerous. In fact, they are really the kind of best practices that you should already be using. As well, you should remember that the version of Azure introduced this past week is really just V1.0. You have to know that Microsoft has other ideas for pushing applications to the cloud. So even if the profile doesn't fit you right now, keep your ears open. I expect that future enhancements will only work to envelop more and more people in the cloud.

A Lap Around Windows Azure

The first session that talked about Azure was (not surprisingly) incredibly popular. It filled a room that looked like it seated 700+ people. Then it filled an overflow room. And the second overflow room was packed with people standing at the back.

The sample app is a thumbnail generator. In other words, a function that could normally be provided by a Windows Service. Interesting that the 'simplest' scenario is that one. Interesting that it seems to indicate that what you're deploying into Azure is a service, with ASP.NET or Silverlight as the user interface.

A couple of times now, Azure has been referred to as an operating system.

Currently, the storage abstractions are: blobs for user data, simple tables for service state and queues for service communications. But these abstractions are not intended to replace a database. Astoria seems to be the expected CRUD channel.

"Azure is an operating system" is getting to the point where it should become a drinking game.

The second demo is a live site, managing teachers in Ethiopia. The speaker actually asked that we not go to the site, because it is live and the typical usage pattern doesn't include being hit by 1000+ rabid developers all at once. :) In what appears to be a common approach, the user interface is a Silverlight app communicating with Azure through web service calls.

As of noon (PDT) today, you can go to http://www.azure.com and download the desktop SDK. And publish applications to the cloud. Currently usage is free, with restrictions. There is no indication yet regarding the pricing model, although in the keynote Ray Ozzie suggested that it would be 'competitive' and that there would be support for 'hobbyists'. So look to Amazon's EC2 for a rough idea, and figure that there will be some low- or no-cost option for you to play with.