The Anatomy of a Security Breach

Without going into too much detail, there is a guy whom the security industry collectively hates.  When you hear a statement like that, the happy parts of our brains think this guy must be an underdog.  He must be awesome at what he does, and the big corporations must hate him for it.  Or maybe he’s a world-renowned hacker that nobody can catch.

Suffice it to say, neither is the case.

Earlier today it appears that Greg Evans of LIGATT was, for lack of a better word, pwned.  His Twitter account was broken into, his email was ransacked, and by the looks of it, his reputation has been ruined.  I think it’s time to look into how and why this happened.

Side Note: I have zero affiliation with whoever broke into his accounts and stole his data.

The Impetus

Before going into the how, it might help to explain why this happened.

[My opinion doesn’t necessarily reflect that of the attackers, nor that of my employer.  This is strictly an interpretation of the messages.]

Good people get hacked.  It’s a fact of life.  Bad people get hacked.  It’s a fact of life. 

The attackers left a note explaining why all of this happened and describing the contents of the data.  The original post was linked from his Twitter account.

As it happens, the people who did this see it as retribution for all the things Evans has done in the name of the InfoSec industry.

"Do not meddle in the affairs of hackers, for they are subtle and quick to anger"

The first argument made is that he tries really hard to position himself as a major representative of the industry.

He's been on TV, he's been on radio, he's trying to draw as much attention to himself as possible.

This, I would argue, isn’t too much of a problem.  The industry needs spokespeople.  However, it goes on:

This man in no way represents this industry. […] He's gone after people at their home to intimidate them and their family. He's gone after them at their work to discredit them with their employer. And as everyone knows, he recklessly sues anyone who speaks negatively of him on the internet.

Nobody likes it when someone says something mean about them, or corrects them in public.  However, sometimes it happens.  We are all sometimes wrong.  Evans doesn’t appear to agree with that, and will try to sue or slander anyone who disagrees with him.

Don’t poke the bear.  It pisses the bear off, and gets you attacked.

Especially when you have a secret or two to hide:

Finally, to Gregory D Evans: it is done. All your lies are out in the open. Your investors will know. Your lawyers will know. Your employees will know. Your mother will know. Your lovers will know. Just step away and move on. Stop the stock scams. Stop the lawsuits. Stop the harassment. Stop robbing your employees. Stop embezzling. Stop deceiving every person in your life.

If you were someone who wanted to take justice into your own hands, I guess this is reason enough.

So how did this breach happen?

The Attack

It looks like an inside job:

To the brave soul who helped make this possible: thank you. You took great personal risk to bring this information forward, and none of it would be possible without you. It's unclear how you tolerate his lies day after day, but you've redeemed yourself by supporting this cause.

I can only speculate, but there are two-and-a-half basic ways this could have gone down.

  • Insider has administrator access to email servers and downloads the contents of Evans’ inbox
  • Insider gets Evans’ password and downloads the contents of the inbox
  • Insider gives administrative access to the attacker, who then does one of the first two things

Once they had the contents of the inbox, they needed access to his Twitter account.  This leads me to believe the insider had administrative access, because they could then reset the Twitter account’s password and catch the reset email before it reached the inbox.  Seems like the simplest approach.

Either that, or Evans used really weak passwords.

Once the attackers had their package, they just needed to distribute it along with the message explaining their actions.  From his Twitter account they posted the anonymous message to pastebin.com:

[…]

Enough is enough. He must be stopped by any means necessary. To that end, at the end of this message is a torrent of the inbox of [Evans’ redacted email address]; the only condition of receipt is that you not talk about the spool or this email release on twitter until after you have the full copy and are seeding it. He may be an idiot but his staff watch twitter for any mention of him, and it's imperative that this file be distributed as much as possible before takedown begins.

[…]

The post ended with a final, succinct note:

Happy Birthday Mr. Evans
[Redacted link to torrent]
archive password will be released shortly

I haven’t downloaded the torrent, so I don’t know what’s in the package.  I suspect the contents will be publicly disclosed shortly on a number of anonymous sites once there are enough seeders.

This could potentially just be a hoax.

The Fallout

Lots of people get hurt when security is breached.  In this case, quite a number of people will have some of their most private information disclosed.

Contained within his inbox is personal information of many, many people. Social security numbers, bank account routing numbers, credit reports, and other reports by private investigators. It was completely impractical to redact all of this information in any effective manner […].

Some people say that justice comes at the price of people’s privacy.  The attackers feel guilty about this:

This release immediately follows with a small regret. Apologies much be given to all the bystanders, innocent or otherwise. […] and for that: sadness. If in your search through this release you find personal information, please contact the person and notify them.

They also don’t have much faith in the likelihood of Evans properly disclosing the breach:

Even when GDE finds out of this breach, it's quite unlikely that he will follow proper breach notification procedures.

Once enough people have downloaded the torrent and started seeding the content, there isn’t any real way to remove the data from public access.  That means every one of those SSNs, bank numbers, credit reports, and whatever else is in the archive will be publicly available for the foreseeable future.

Conclusion

Breaches occur all the time for profit.  This particular breach, on the other hand, was done in the name of justice and retribution.  While the motives may be different, the moving pieces work the same way, and there are still three basic parts to a breach: the motive, the attack itself, and the fallout afterward.

Hopefully everyone learns a little something from this particular breach.

My guess is that Evans will.

The Problem with Claims-Based Authentication

Homer Simpson was once quoted as saying “To alcohol! The cause of, and solution to, all of life's problems”.  I can’t help but borrow from it and say that Claims-Based Authentication is the cause of, and solution to, most problems with identity consumption in applications.

When people first come across Claims-Based Authentication there are two extremes of responses:

  • Total amazement at the architectural simplicity and brilliance
  • Fear and hatred of the idea (don’t you dare take away my control of the passwords)

Each extreme holds some truth, but over time you realize all the problems sit somewhere between the two.  It’s this middle ground where people run into the biggest problems.

Over the last few months quite a few people have been talking about the pains of OpenID/OAuth, which, when you get right down to it, is CBA.  There are some differences in terminology and implementation, but both follow the Trusted Third Party Authentication model, and that’s really what CBA is all about.

Rob Conery wrote what some people now see as an infamous post on why he hates OpenID.  He thinks it’s a nightmare for various reasons.  The basic list is as follows:

  • As a customer, since you can have multiple OpenID providers that the relying party doesn’t necessarily know about, how do you know which one you originally used to setup an account?
  • If a customer logs in with the wrong OpenID, they can’t access whatever they’ve paid for.  This pisses off said customer.
  • If your customer used the wrong OpenID, how do you, as the business owner, fix that problem? 
    • Is it worth fixing? 
    • Is it worth the effort of writing code to make this a simpler process?
  • “I'll save you the grumbling rant, but coding up Open ID stuff is utterly mind-numbing frustration”.  This says it all.
  • Since you don’t want to write the code, you get someone else to do it.  You find a SaaS provider.  The provider WILL go down.  The law of averages, Murphy, and simple irony will cause it to go down.
  • The standard is dying.  Facebook, Google, Microsoft, Twitter, and Joe-Blow all have their own particular ways of implementing the standard.  Do you really want to keep up with that?
  • Dealing with all of this hassle means you aren’t spending your time creating content, which does nothing for the customer.

The end result is that he is looking to drop support and bring back the traditional authentication model, i.e. storing usernames and passwords in a database that you control.

Following the Conery kerfuffle, 37signals made an announcement that they were going to drop OpenID support for their products.  They had a pretty succinct reason for doing so:

Fewer than 1% of all 37signals users are currently using OpenID. After consulting with a fair share of them, it seems that most were doing so only because that used to be the only way to get single sign-on for our applications.

I don’t know how many customers they have, but 1% is nowhere near a high enough number to justify keeping a feature alive.

So we have a problem now, don’t we?  On paper Claims-Based Authentication is awesome, but in practice it’s a pain in the neck.  Well, I suppose that’s the case with most technologies. 

I think one of the problems with implementations of new technologies is the lack of guidance.  Trusted Third Party authentication isn’t really all that new.  Kerberos does it, and Kerberos has been around since the 1980s.  OpenID, OAuth, and WS-Trust/WS-Federation, on the other hand, haven’t been around all that long.  Given that, I have a bit of guidance that I’ve learned from the history of Kerberos.

First: Don’t trust random providers.

The biggest problem with OpenID is what’s known as the NASCAR problem.  This is another way of referring to Rob’s first problem: how do you know which provider to use?  Most people recognize logos, so show them a bunch of logos and hopefully they will pick the one they used.  Hoping your customer chooses the right one every time is like hoping you can hit a bullseye from 1000 yards, blindfolded.  It could happen.  It won’t.  But it could.

The solution to this is simple: do not trust every provider.  Have a select few providers you will accept, and make them sufficiently distinguishable.  My bank as a provider is going to be WAY different than Google as a provider.  At least, I would hope that’s the case.
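
In code, the “select few providers” rule can be as simple as checking a token’s issuer against a whitelist you control before accepting it.  Here is a minimal sketch; the class name and issuer URLs are my own illustration, not part of any framework:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical whitelist of the identity providers this application trusts.
public class TrustedProviderRegistry
{
    private readonly HashSet<string> trustedIssuers;

    public TrustedProviderRegistry(IEnumerable<string> issuers)
    {
        // Normalize casing so comparisons are consistent.
        trustedIssuers = new HashSet<string>(
            issuers.Select(i => i.ToLowerInvariant()));
    }

    // Reject any token whose issuer isn't explicitly on the list.
    public bool IsTrusted(string issuer)
    {
        return issuer != null
            && trustedIssuers.Contains(issuer.ToLowerInvariant());
    }
}
```

A relying party would check the issuer of every incoming token this way and refuse to create a session for anything unrecognized; WIF exposes a similar concept through its issuer name registry.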

Second: Don’t let the user log in with the wrong account.

While you are at it, try moving the ocean with this shot glass.  Seriously though, if you follow the first step, this one is a by-product.  Think about it: would a customer be more likely to log into their ISP billing system with their Google account or their bank’s account?  That may be a bad example in practice, because I would never use my bank as a provider, but it’s a great example of being sufficiently distinguishable.  You will always have customers who choose wrong, but the harder you make it to choose the wrong thing, the closer you are to hitting that bullseye.

Third: Use Frameworks.  Don’t roll your own.

One of the most important axioms in computer security is don’t roll your own [framework/authn/authz/crypto/etc].  Seriously.  Stop it.  You WILL do it wrong.  I will too.  Use a trusted OpenID/OAuth framework, or use WIF.

Fourth: Choose a standard that won’t change on you at the whim of a vendor.

WS-Trust and SAML are great examples of standards that don’t change willy-nilly.  OpenID and OAuth are not.

Fifth: Adopt a provider that already has a large user base, and then keep it simple.

This is an extension of the first rule.  Pick a provider that has a massive number of users already.  Live ID is a great example.  Google Accounts is another.  Stick to Twitter or Facebook.  If you are going to choose which providers to accept, make sure you pick the ones that people actually use.  This may seem obvious, but just remember it when you are presented with Joe’s Fish and Chips and Federated Online ID provider.

Finally: Perhaps the biggest thing I can recommend is to keep it simple.  Start small.  Know your providers, and trust your providers.

Keep in mind that everything I’ve said above does not pertain to any particular technology, but to any technology that uses a Trusted Third Party Authentication model.

It is really easy to get wide-eyed and believe you can develop a working system that accepts every form of identification under the sun, all the while keeping it manageable.  Don’t.  Keep it simple and start small.

The Azure Experience Lab

Every year ObjectSharp puts on a handful of events, and this year we are pushing hard for Azure.  Next week we have an event geared toward ISV developers and business people.  ObjectSharp would like to welcome you to the Azure Experience Lab!

Windows Azure is Microsoft’s cloud operating system. Leveraging the .NET platform, developers can easily take advantage of their existing skills to move their applications to the cloud.  The Azure Experience Lab is all about discovering new business opportunities and learning new technologies to leverage cloud computing for your organization and your customers.

For ISVs looking to augment their traditional software models with subscription and service models, cloud computing represents a huge growth opportunity. Join us for a day of exploration and experience with Windows Azure as we explore both the business value and the technologies available for cloud computing with Microsoft.

There are two tracks available for this event, and ideally, we recommend you include individuals in both tracks from your organization to get the most value from our Experience Lab.

  • The Business Value Track is recommended for product managers, strategic planners, CTOs, architects and other decision making leaders who evaluate strategic directions for their organization and their customers.

  • The Azure Development Track is recommended for Solution and Infrastructure Architects, Lead Developers and other technologists who evaluate technologies as part of their solution offerings.

What's Windows Azure About?
Windows Azure is Microsoft’s cloud operating system. Developing for Azure leverages your existing ASP.NET Development Experience and provides developers with on-demand compute and storage to host, scale, and manage web applications in the cloud through Microsoft® datacenters or even hybrid off/on premise hosting models. In the Experience Lab you'll learn how to develop ASP.NET Applications and Services for Cloud Computing.

We Provide
Light refreshments and a networking lunch. Attendees of the Hands-on-labs in the Azure Development Track are provided with computer equipment, labs and various technical resources.

You Provide
To get the most out of this event, we recommend that those attending the Azure Development Track bring a personal or business credit card, which is required for Azure activation as part of the hands-on labs. This is required even if you only sign up for a trial period and shut down your account after the event.

When:
Monday, February 7th, 2011.
Two Learning Tracks:
Business Value of Azure - 9:00am - 5:00pm
Developing for Azure - 9:00am - 5:00pm

Where:
Azure Experience Lab
11 King Street West, Suite 1400, Toronto, ON M5H 4C7

Admission:
$99 (incl. refreshments)

Registration:
By invitation only. Please quote Invitation code XLAB02
Limited to 20 ISVs (max 2 people, 1 in each track)

8:30-9:00  Registration
9:00-10:15  What is Azure? All-up overview and demonstration of the various services and capabilities of Windows and SQL Azure, including a review of the costs and benefits associated with each.
10:15-10:30  Break
10:30-12:00
  Business Value Track: Fireside chat about business scenarios for cloud computing and Azure, and how to unlock business value for your organization and your customers.
  Azure Development Track: All about Storage in Azure (including hands-on lab)
12:00-1:00  Networking Lunch
1:00-2:30
  Business Value Track: Understanding, evaluating and mitigating risks associated with cloud computing.
  Azure Development Track: Building Services in Azure (including hands-on lab)
2:30-2:45  Break
2:45-4:15
  Business Value Track: Open discussion, individual break-outs, and a Q&A panel discussion with ObjectSharp and Microsoft executives.
  Azure Development Track: All about Security in Azure (including hands-on labs)
4:15  Closing Summary, Next Steps

If you are an ISV and are interested in attending, please register now!

PrairieDevCon Identity and Security Presentations on June 13th and 14th

Sometime last week I got confirmation that my sessions were accepted for PrairieDevCon!  The schedule has not yet been announced, but here are the two sessions I will be presenting:

Changing the Identity Game with the Windows Identity Foundation

Identity is a tricky thing to manage. These days every application requires some knowledge of the user, which inevitably requires users to log in and out of the applications to prove they are who they are as well as requiring the application to keep record of the accounts. There is a fundamental shift in the way we manage these users and their accounts in a Claims Based world. The Windows Identity Foundation builds on top of a Claim based architecture and helps solve some real world problems. This session will be a discussion on Claims as well as how WIF fits into the mix.
Track: Microsoft, Security
Style: Lecture
Speaker: Steve Syfuhs

Building a Security Token Service In the Time It Takes to Brew a Pot of Coffee

One of the fundamental pieces of a Claims Based Authentication model is the Security Token Service. Using the Windows Identity Foundation it is deceptively simple to build one, so in this session we will.
Track: Microsoft, Security
Style: Lecture
Speaker: Steve Syfuhs

What is PrairieDevCon?

The Prairie Developer Conference is the conference event for software professionals in the Canadian prairies!

Featuring more than 30 presenters, over 60 sessions, and including session styles such as hands-on coding, panel discussions, and lectures, Prairie Developer Conference is an exceptional learning opportunity!
Register for our June 2011 event today!

Okay, how much $$$?

Register early and take advantage of Early Bird pricing!
Get 50% off the post-conference price when you bundle it with your conference registration!

                               Conference   Conference + Post-Conf Workshop Bundle
Until February 28              $299.99      $449.99
Until March 31                 $399.99      $549.99
Until April 30                 $499.99      $649.99
May and June                   $599.99      $749.99
Post-Conference Workshop Only  $299.99

For more information check out the registration section.

Vote for my Mix 2011 Session on Identity!

Mix 2011 has opened voting for public session submissions, and I submitted one!  Here is the abstract:

Identity Bests – Managing User Identity in the new Decade

Presenter: Steve Syfuhs

Identity is a tricky thing to manage. These days every website requires some knowledge of the user, which inevitably requires users to log in to identify themselves. Over the next few years we will start seeing a shift toward a centralized identity model removing the need to manage users and their credentials for each website. This session will cover the fundamentals of Claims Based Authentication using the Windows Identity Foundation and how you can easily manage user identities across multiple websites as well across organizational boundaries.

If you think this session should be presented please vote: http://live.visitmix.com/OpenCall/Vote/Session/182.

(Please vote even if you don’t! Winking smile)

Missing Drive Space? Check IntelliTrace Files

My laptop has a relatively old SSD, so it only has about 128 GB of space.  This works out nicely because I like to keep projects and extraneous files on an external drive.  However, when you’ve got Visual Studio 2005 through 2010, two instances of SQL Server, and god knows what else installed, space gets a little tight with 128 GB.  As a result I tend to keep an eye on space.  It came as a surprise to find out I lost 20 GB over the course of a week or two without downloading or installing anything substantial.

To find out where my space went, I turned to a simple little tool called Disk Space Finder by IntelliConcepts.  There are probably a million applications like this, but this is the one I always seem to remember.  It scans through your hard drive checking file sizes and breaks down usage as necessary.

I was able to dig into the ProgramData folder, and then further into the data folder for Visual Studio IntelliTrace:

image

If you leave IntelliTrace enabled for all debugging you could potentially end up with a couple hundred *.itrace files like I did (not actually pictured).  It looks like an itrace file is created every time the debugger is attached to a process, so effectively every time you hit F5 a file is created.  Doubly so if you are debugging multiple launchable projects at once.

You can find the folder containing these files at C:\ProgramData\Microsoft Visual Studio\10.0\TraceDebugging.

The quick fix is to just delete the files and/or stop using IntelliTrace.  I recommend just deleting the files, because I think IntelliTrace is an amazing, if a little undercooked, tool.  It’s a v1 product.  Considering what it’s capable of, this is a minor blemish.

The long term fix is to install Visual Studio 2010 SP1, as there is apparently a fix for this issue.  The downside of course is that SP1 is still in beta.  Hence long term.

Find my Windows Phone 7

For the last month and a half I’ve been playing around with my new Windows Phone 7.  Needless to say, I really like it.  There are a few things that are still a little rough – side-loading applications is a good example – but overall I’m really impressed with this platform.  It may be version 7 technically, but realistically it’s a v1 product.  I say that in a good way though – Microsoft reinvented the product.

Part of this reinvention is a cloud-oriented platform.  Today’s Dilbert cartoon was a perfect tongue-in-cheek explanation of the evolution of computing, and the mobile market is no exception.  Actually, when you think about it, mobile phones and the cloud go together like peanut butter and chocolate.  If you have to ask, they go together really well.  Also, if you have to ask, are you living under a rock?

This whole cloud/phone commingling is central to Windows Phone 7, and you can see the potential immediately.

When you start syncing your phone via the Zune software, you will eventually get to the sync page for the phone.  The first thing I noticed was the link “do more with windows live”.

image

What does that do?

Well, once you have set up your phone with your Live ID, a new application is added to your Windows Live home.  This app covers all devices, and when you click the above link in Zune, it takes you to the section for the particular phone you are syncing.

image

The first thing that caught my attention was the “Find my Phone” feature.  It brings up a list of actions for when you have lost your phone.

image

Each action is progressively bolder than the previous – and each action is very straightforward.

Map it

If the device is on, use the Location services on the phone to find it and display on a Bing Map.

Ring it

If you have a basic idea of where the phone is and the phone is on, ringing it will make the phone ring with a distinct tone even if you have it set to silent or vibrate.  Use this wisely. Smile

Lock it

Now it gets a little more complicated.  When you lock the phone you are given an option to provide a message on the lock screen:

image

If someone comes across your phone, you can set a message telling them what they can do with it.  Word of advice though: if you leave a phone number, don’t leave your mobile number. Winking smile

Erase it

Finally we have the last option.  The nuclear option if you will.  Once you set the phone to be erased, the next time the phone is turned on and tries to connect to the Live Network, the phone will be wiped and set to factory defaults.

A side effect of wiping your phone is that the next time you set it up and sync with the same Live ID, most settings will remain intact.  You will have to add your email and Facebook accounts, and set all the device settings, but once you sync with Zune, all of your apps will be reinstalled.  Now that is a useful little feature.

Finally

Overall I’m really happy with how the phone turned out.  It’s a strong platform and it’s growing quickly.  The Find my Phone feature is a relatively small thing, but it showcases the potential of a phone/cloud mash-up and adds a lot of value for consumers when they lose their phone.

In a previous post I talked about the security of the Windows Phone 7.  This post was all about how consumers can quickly mitigate any risks from losing their phone.  For more information on using this phone in the enterprise, check out the Windows Phone 7 Guides for IT Professionals.

Claims, MEF, and Parallelization, Oh My

One of the projects I’ve been working on for the last couple months has a requirement to aggregate a set of claims from multiple data sources for an identity and return the collection.  It all seems pretty straightforward as long as you know what the data sources are at development time as well as how you want to transform the data to claims. 

In the real world though, chances are you will need to modify how that transformation happens or modify the data sources in some way.  There are lots of ways this can be accomplished, and I’m going to look at how you can do it with the Managed Extensibility Framework (MEF).

Whenever I think of MEF, this is the best way I can describe how it works:

image

MEF being the magical part.  In actual fact, it is pretty straightforward how the underlying pieces work, but here is the sales bit:

Application requirements change frequently and software is constantly evolving. As a result, such applications often become monolithic making it difficult to add new functionality. The Managed Extensibility Framework (MEF) is a new library in .NET Framework 4 and Silverlight 4 that addresses this problem by simplifying the design of extensible applications and components.

The architecture is illustrated on the CodePlex site:

MEF_Diagram.png

The composition container is designed to discover ComposableParts that have Export attributes and assign these Parts to objects with Import attributes.

Think of it this way (this is just one possible way it could work).  Let’s say I have a bunch of classes that are plugins for some system.  I will attach an Export attribute to each of those classes.  Then within the system itself I have a class that manages these plugins.  That class will contain an object that is a collection of the plugin class type, and it will have an attribute of ImportMany.  Within this manager class is some code that will discover the Exported classes, and generate a collection of them instantiated.  You can then iterate through the collection and do something with those plugins.  Some code might help.

First, we need something to tie the Import/Export attributes together.  For a plugin-type situation I prefer to use an interface.

namespace PluginInterfaces
{
    public interface IPlugin
    {
        // Interface members are implicitly public, so no access modifier is needed.
        string PlugInName { get; set; }
    }
}

Then we need to create a plugin.

using PluginInterfaces;

namespace SomePlugin
{
    public class MyAwesomePlugin : IPlugin
    {
        public string PlugInName
        {
            get
            {
                return "Steve is Awesome!";
            }
            set { }
        }
    }
}

Then we need to actually Export the plugin.  Notice the additional namespace, which can be found in the System.ComponentModel.Composition assembly in .NET 4.

using PluginInterfaces;
using System.ComponentModel.Composition;

namespace SomePlugin
{
    [Export(typeof(IPlugin))]
    public class MyAwesomePlugin : IPlugin
    {
        public string PlugInName
        {
            get
            {
                return "Steve is Awesome!";
            }
            set { }
        }
    }
}

The [Export(typeof(IPlugin))] is a way of tying the Export to the Import.

Importing the plugins requires a little more code.  First we need to create a collection to import into:

[ImportMany(typeof(IPlugin))]
List<IPlugin> plugins = new List<IPlugin>();

Notice the typeof(IPlugin).

Next we need to compose the pieces:

using (DirectoryCatalog catalog = new DirectoryCatalog(pathToPluginDlls))
using (CompositionContainer container = new CompositionContainer(catalog))
{
    container.ComposeParts(this);
}

The ComposeParts() method examines the passed object, finds anything with the Import or ImportMany attributes, then looks in the DirectoryCatalog for classes with the Export attribute and ties everything together based on typeof(IPlugin).

At this point we should now have a collection of plugins that we could iterate through and do whatever we want with each plugin.
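
Putting those pieces together, a minimal host might look like the following.  This is a sketch; the PluginManager name and its methods are my own, but the attributes and container types come straight from MEF:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using PluginInterfaces;

public class PluginManager
{
    // MEF fills this collection with every discovered IPlugin export.
    [ImportMany(typeof(IPlugin))]
    private List<IPlugin> plugins = new List<IPlugin>();

    public void LoadPlugins(string pathToPluginDlls)
    {
        using (DirectoryCatalog catalog = new DirectoryCatalog(pathToPluginDlls))
        using (CompositionContainer container = new CompositionContainer(catalog))
        {
            // Matches [ImportMany] members on this instance
            // to [Export]s found in the catalog.
            container.ComposeParts(this);
        }
    }

    public void ListPlugins()
    {
        foreach (IPlugin plugin in plugins)
        {
            Console.WriteLine(plugin.PlugInName);
        }
    }
}
```

Calling LoadPlugins with the plugin directory populates the collection, and ListPlugins simply prints each plugin’s name.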

So what does that have to do with Claims?

If you continue down the Claims Model path, eventually you will get tired of having to modify the STS every time you want to change what data is returned from the RST (Request Security Token).  Imagine if all you had to do was create a new plugin for any new data source, or modify the plugins instead of the STS itself.  You could even build a transformation engine similar to Active Directory Federation Services and create a DSL that is executed at runtime.  It would make for simpler deployment, that’s for sure.
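
As a sketch of what such a plugin contract might look like (the IClaimsPlugin interface, the claim shape, and the HR example are all hypothetical; a real STS would use WIF’s claim types):

```csharp
using System.Collections.Generic;
using System.ComponentModel.Composition;

// Hypothetical contract for a claims data source; not part of WIF or MEF.
public interface IClaimsPlugin
{
    // Returns the claims this data source knows about for the given identity.
    IEnumerable<KeyValuePair<string, string>> GetClaims(string identityName);
}

// Example plugin pulling claims from a fictional HR database.
[Export(typeof(IClaimsPlugin))]
public class HrDatabaseClaimsPlugin : IClaimsPlugin
{
    public IEnumerable<KeyValuePair<string, string>> GetClaims(string identityName)
    {
        // A real plugin would query its data source here.
        yield return new KeyValuePair<string, string>("department", "Engineering");
    }
}
```

The STS would then import every IClaimsPlugin the same way the IPlugin example above imports plugins, and merge the results into the issued token.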

And what about Parallelization?

If you have a large collection of plugins, it may be beneficial to run some things in parallel, such as a GetClaims([identity]) type call.

Using the Parallel libraries within .NET 4, you could very easily do something like:

Parallel.ForEach<IPlugin>(plugins, (plugin) =>
{
    plugin.GetClaims(identity);
});

The basic idea for this method is to take a collection, and do an action on each item in the collection, potentially in parallel.   The ForEach method is described as:

ForEach<TSource>(IEnumerable<TSource> source, Action<TSource> action)

When everything is all said and done, you now have a basic parallelized plugin model for your Security Token Service.  Pretty cool, I think.
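
One caveat: because Parallel.ForEach runs the body on multiple threads, aggregating the returned claims needs a thread-safe collection.  A sketch using ConcurrentBag (the claim shape and method names are my own; only Parallel.ForEach and ConcurrentBag come from .NET 4):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class ClaimAggregator
{
    // Collects claims from every source in parallel into one thread-safe bag.
    public static IEnumerable<KeyValuePair<string, string>> Aggregate(
        IEnumerable<Func<string, IEnumerable<KeyValuePair<string, string>>>> sources,
        string identity)
    {
        var claims = new ConcurrentBag<KeyValuePair<string, string>>();

        Parallel.ForEach(sources, source =>
        {
            foreach (var claim in source(identity))
            {
                // ConcurrentBag.Add is safe to call from multiple threads.
                claims.Add(claim);
            }
        });

        return claims;
    }
}
```

Plugging the GetClaims call from each plugin into the sources collection gives you one merged claim set, regardless of how many threads did the work.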

Security for Windows Phone 7

Over the Christmas Holiday I came across a couple interesting articles on the state of security for the Windows Phone 7.  Actually, I came across them while on the train – I didn’t do anything computer related while I was with family and friends.  Weird, I know.

Anyway.  The first thing I came across was the MSDN article Security for Windows Phone.  It does a pretty good job of detailing the high-level reasons why the phone is so secure, as well as the reasoning behind the decisions Microsoft made to provide a secure phone platform.  There are three core categories that encapsulate the reasoning:

  • Quality of phone experience – The phone belongs to the user.  Therefore, the experience should show it.  The user should feel comfortable knowing exactly what the application is doing, or is trying to do.
  • Access to the user's information – There is a lot of personal data stored on a user's phone, ranging from contacts, to emails, to pictures, to even geographic location.  Going back to the first point, the user should feel comfortable about their phone.  They should know exactly what information the application can access.
  • Billable events – The user should know whenever the application tries to do something that could potentially incur costs, such as using the data plan or making a phone call.

This is all well and good, but the developer still has to build the application, so part of the responsibility falls to the developer to meet the security requirements.  Microsoft has introduced several security measures into the development cycle to safeguard the user experience.  Like all security practices, there is a process:

  • Sign up for an ISV account – This allows Microsoft to verify the person, group, or company that is building an application.
  • Use the recommended development environment – Visual Studio.  Nuff said.
  • Use .NET managed languages as well as standards and practices associated with good Windows development.
  • Submit the application to the Marketplace for testing and validation – Microsoft actually tests the applications to verify if they comply with security and experience requirements.

This process was created to introduce safeguards for the developer so they don’t inadvertently deploy an application fraught with bugs – possibly security related.  While the testing is fairly invasive, it won’t catch everything.  I liken it to a sanity test.  Use it wisely.

This brings us to the phone OS itself.  Some might call the changes between v6 and v7 a face lift, but in reality, it was a major rip-and-replace for most of the platform.  The Kernel is essentially v6.5, but user-mode was rewritten (as I understand it) from scratch.  This rewrite introduces the concept of chambers. In security parlance, a chamber is a security boundary, as described in the Whitepaper for the Windows Phone 7 Security Model.  There are four chambers:

  • The Trusted Computing Base (TCB) – Kernel mode, essentially.  Kernel-mode drivers run here, and as a result have access to pretty much everything.  At the moment, only phone hardware manufacturers can write drivers.  Has the most rights.
  • Elevated Rights Chamber (ERC) – Designed for user-mode drivers and resources shared across the entire device.  Has fewer rights than the TCB.
  • Standard Rights Chamber (SRC) – Designed for pre-installed applications such as Outlook.  Has fewer rights than the ERC.
  • Least Privileged Chamber (LPC) – Designed for all Marketplace and 3rd-party applications.  Your application runs here, and it has the fewest rights of all.

Each Least Privileged Chamber is isolated from the other chambers and cannot communicate with other applications.  Each chamber also has isolated storage accessible only by the owning chamber, effectively creating a sandbox environment for the application.
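From a developer's perspective, that per-application isolated storage is surfaced through the System.IO.IsolatedStorage namespace on the phone.  A minimal sketch of writing and reading a setting (the "LastRun" key is an arbitrary name chosen for this example):

```csharp
using System;
using System.IO.IsolatedStorage;

// Each application gets its own isolated store; no other
// application (chamber) can read from or write to it.
var settings = IsolatedStorageSettings.ApplicationSettings;

// Persist a value under an arbitrary key.
settings["LastRun"] = DateTime.Now;
settings.Save();

// Read it back safely on the next run.
DateTime lastRun;
if (settings.TryGetValue("LastRun", out lastRun))
{
    // Use lastRun...
}
```

Files work the same way via IsolatedStorageFile.GetUserStoreForApplication(); either way, everything lives inside the application's own sandbox.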

It’s impressive how much work has gone into the Windows Phone 7 platform, let alone all the security work done.  This post only touches the surface, but hopefully provides an understanding of some of the high-level changes within the platform.  For more information you can download the Windows Phone 7 Guides for IT Professionals.  The title is a bit misleading, as it’s definitely a good read for developers too.

Single Sign-On from Active Directory to a Windows Azure Application Whitepaper

Just came across this on Alik Levin's blog.  I just started reading it, but so far so good!  Here is the abstract:

This paper contains step-by-step instructions for using Windows® Identity Foundation, Windows Azure, and Active Directory Federation Services (AD FS) 2.0 for achieving SSO across web applications that are deployed both on premises and in the cloud. Previous knowledge of these products is not required for completing the proof of concept (POC) configuration. This document is meant to be an introductory document, and it ties together examples from each component into a single, end-to-end example.

You can download it here in either docx or pdf: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=1296e52c-d869-4f73-a112-8a37314a1632