Visual Studio 2010 At the Movies

Yesterday was the third (or fourth) annual ObjectSharp At The Movies event. For this particular version, we were able to celebrate the launch of Visual Studio 2010 and .NET 4.0 with a plethora of sessions covering everything from VS (naturally) to ASP.NET, Silverlight, SharePoint and TFS. All in all, it was a successful day that seemed to give the attendees what they came for. The slides that I used for my portion of the talk (which was an overview of VS 2010 and .NET 4) can be found here. As always, feel free to ask questions. And, for those that missed it, we plan on doing it again next year. We just don’t know (yet) what the theme will be.

VSTS Load Testing Deal of the Day: Why you need to buy a VS 2008 Load Agent before April 12th, 2010

There are important licensing changes happening imminently with VSTS as part of the transition from 2008 to 2010:

  • The current VSTS 2008 Test Edition can load test up to the limit of your machine. On a good day, that is 1000 users, which satisfies a lot of the cases where people need to do load testing. If it doesn’t, and you have multiple testers, you can all run load from your own machines; however, you don’t get the same unified collection of statistics – that’s what the Load Agent & Controller software is for. For most shops that’s not a huge deal, which is why most people don’t buy Load Agents.
  • If you did need centralized collection of statistics, you’d want to buy a number of load agents, one for each CPU, at least in the 2008 SKU. If you wanted to test 10K users, you’d probably want 10 licenses (at least).  But that is changing.
  • In 2010, load testing licensing is no longer done by the CPU, it’s done by the virtual users!
  • When you upgrade to VS 2010 Ultimate come April, your load testing capacity changes to only 250 users from your workstation copy of Visual Studio 2010 Ultimate. If you want to test more, you’ll want load agents. The 2010 Load Agent SKU will give you 1000 virtual users. If your hardware is not up to snuff, or your web tests are intensive, you can install a single 2010 SKU on any number of boxes, but you’re limited to a total of 1000 users per SKU that you purchase.

This all sounds rather terrible, but as part of the transition, MS is offering this:

If you have purchased a 2008 Load Agent with Software Assurance, as part of the upgrade to 2010, they will give you 5x1000 Virtual Users in the 2010 Load Agent SKU. Wow!

In pricing terms, that means if you buy a 2008 Load Agent with SA today for about $8,000, you will get 5000 users in 2010. That’s a very good deal. If you wait until after April 12th, you will no longer be able to buy the 2008 SKU and you’ll have to buy the 2010 SKU at about $8,000 per 1000 users. So if you want to test 5000 users come April 12th and you didn’t get in on this deal, it will cost you 5 x $8,000 = $40,000! I’d say that an 80% discount is pretty good – snap it up today.

If you need help purchasing a license prior to April 12th, drop me a note at bgervin @ objectsharp.com and I can hook you up.

The Benefits of Windows Azure

The age of cloud computing is fast approaching. Or at least that's what the numerous vendors of cloud computing would have you believe. The challenge that you (and all developers) face is to determine just what cloud computing is and how you should take advantage of it. Not to mention whether you even should take advantage of it.

While there is little agreement on exactly what constitutes 'cloud computing', there is a consensus that the technology is a paradigm shift for developers. And like pretty much every paradigm shift there is going to be some hype involved. People will recommend moving immediately to the technology en masse. People will suggest that cloud computing has the ability to solve all that is wrong with your Web site. Not surprisingly, neither of these statements is true.

And, as with many other paradigm shifts, the reality is less impactful and slower to arrive than the hype would have you believe. So before you start down this supposedly obvious ‘path to the future of computing’, it's important to have a good sense of what the gains will be. Let's consider some of the benefits that cloud computing offers.

Instant Scalability

If you are tasked with building a customer-facing Web site, then one of the main concerns is scalability. Regardless of the type of site being created, there will be considerable intellectual energy spent determining how to configure the Web servers to maximize the up-time. And in many cases the infrastructure design must also consider issues not related solely to reliability. The ability to handle peak times, which can be a large multiple of the normal level of activity, must also be designed into the architecture.

These spikes in usage come in a couple of different varieties. Sometimes, the spikes come at predictable times. Think of the holiday season for a retail site or a price sale for a travel site. Sometimes the spikes cannot be predetermined, such as a breaking news event for a current events site. But regardless of the type of spike, the infrastructure architect must create an infrastructure that is capable of taking these variations in stride. The result, especially if the peak is 10 times the average load or higher, is that extra (and mostly unused) capacity must be built into the design. Capacity that must be paid for, yet remains idle.

Into this picture comes cloud computing. Regardless of the cloud platform for which you develop, the ability to scale up and down with the click of a mouse is readily available. For Windows Azure, there are a number of different scalability points, including the number of virtual machines assigned to the application, the number of CPUs in each of the virtual machines, and so on. Within the application itself, you as the designer would have already partitioned the application into the various roles that are then deployed onto the virtual machines.

As the demand on the Web site increases, additional machines, CPUs or roles can be added to ensure consistent responsiveness across all load levels. More importantly, when demand decreases, the resources can be removed. Since these settings form the basis for the price paid for the cloud computing service, companies end up paying only for the capacity that they require.
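
To make that concrete: in Windows Azure, the number of running instances of a role is declared in the service configuration file, so scaling out is largely a matter of editing a single value. The sketch below is illustrative only; the service and role names are placeholders of my own:

<!-- ServiceConfiguration.cscfg: a minimal sketch, names are placeholders -->
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <!-- Scaling out for a holiday peak is a matter of raising this count -->
    <Instances count="4" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>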

The price to be paid for this flexibility is mostly that the application needs to be designed with the necessary roles in mind. As well, there are other constructs (such as the AppFabric and the Service Bus) and technologies (such as WCF) that need to be mastered and integrated into the application. As a result, it is easier to build a Web application that works with Windows Azure right from the start. This is not to say that existing Web applications can’t be refactored to take advantage of the cloud…they certainly can. But starting from scratch allows you to take full advantage of the benefits offered by Azure.

Expandable Storage

The ability to avoid idle resources is not the only appeal of cloud computing. Another resource that can be virtualized for most applications is the database. Just like the CPU, database usage can rise and fall with the whims and patterns of the user base. And the fact is that the vast majority of business databases do little more than grow in size as time goes on. Again, infrastructure architects need to consider both growth rate and usage patterns as they allocate resources to the database servers. As with the machine-level resources, over-capacity must be designed into the architecture. By using a database hosted in the cloud, the allocation of disk space and processing power can be modified on an as-needed basis. And you, as the consumer, pay only for the space and power that you use.

There are some additional thoughts that need to be given to the use of a cloud database. In order to provide the described flexibility, cloud database providers freely move data from one server to another. As a result, there must be a fairly high level of trust in the provider, particularly if the data is sensitive in nature. For the traditional non-cloud database, the owner of the Web site maintains physical control over the data (by virtue of their physical control over the database servers). Even if the server is hosted at a co-location facility, the Web site owner ‘knows’ where the data is at all times.

When the data is persisted to the cloud, however, this is no longer the case. Now the data is physically in the control of the cloud provider. The owner has no idea on which server the data is stored. Or even, when you get right down to it, in which city. For some companies, this is a level of trust well beyond what they have been comfortable with in the past.

As a person who lives outside the United States (I’m from Canada), there is one more consideration: privacy. Data privacy laws vary from country to country. When data is stored ‘in the cloud’, little weight is given to the physical location of the data. After all, the actual location has been virtualized away by the cloud concept. Information can (and does) move across national boundaries based on the requirements of the application. And when data resides in another country, it may very well be subject to the privacy laws of that country. If those laws are significantly different from your own, you might need to modify your corporate policies or the Web application itself to address whichever requirements are more stringent. This sort of situation gives rise to a common approach to cloud storage – data segregation.

In data segregation, the data required by the Web application is stored in multiple locations. Data that is static and/or not particularly sensitive is stored in the cloud. Data that is sensitive is stored in a traditional (and more subject to owner control) location. Naturally, the Web application needs to be structured to combine the data from the different sources. And the traditionally located data needs to be stored in an infrastructure that is reliable and scalable…with all of the problems that the implementation of those features entails.
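
To make the idea a little more concrete, here is a minimal sketch of what data segregation can look like in code. All of the type and method names below are my own, invented for illustration; the point is simply that the application composes one view from two differently-hosted stores:

// Non-sensitive catalog data lives in the cloud; sensitive customer
// data stays in a store under the owner's physical control.
public interface ICloudCatalogStore { string GetProductName(int productId); }
public interface ILocalCustomerStore { string GetCustomerName(int customerId); }

public class OrderSummaryBuilder
{
    private readonly ICloudCatalogStore catalog;
    private readonly ILocalCustomerStore customers;

    public OrderSummaryBuilder(ICloudCatalogStore catalog, ILocalCustomerStore customers)
    {
        this.catalog = catalog;
        this.customers = customers;
    }

    // The Web application sees a single, combined result
    public string BuildSummary(int customerId, int productId)
    {
        string who = customers.GetCustomerName(customerId);  // sensitive, on-premise
        string what = catalog.GetProductName(productId);     // static/public, cloud
        return who + " ordered " + what;
    }
}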

The functionality offered by cloud computing will be enticing to some, but definitely not all, Web sites. For those who fit the target audience (Web sites that have a wide fluctuation in usage patterns), or just those who want to outsource their Internet infrastructure, cloud computing is definitely appealing. For developers of these sites, platforms such as Windows Azure represent a significant change in the necessary development techniques. And even with the inherent complexity, the shift to cloud computing is beneficial to developers (the resulting applications tend to be more modular, composable and testable), enough to make further exploration of the details worthwhile.

Order Fixation

One of my current tasks for a client has been to facilitate the integration between a Java client and a WCF service. The mechanism for doing this is JSON, and I have been quite grateful that WCF makes it relatively easy to implement this type of communication. However, there is one area that has been causing some grief for me (and my Java colleagues), and I finally created a workaround for it yesterday.

The source of my problem starts with the contract used by the WCF service. A number of the methods include parameters that are abstract data types. Consider the following class declaration.

[KnownType(typeof(ButtonCommand))]
[KnownType(typeof(MouseCommand))]
[DataContract]
public abstract class AbstractCommand

The AbstractCommand class is just a base class for a couple of concrete classes (ButtonCommand and MouseCommand). The WCF method is declared as accepting an AbstractCommand type, but the value passed into the method will be either a ButtonCommand or a MouseCommand.
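
For illustration, one of the concrete types might look like the following. The property names here are hypothetical, chosen to line up with the JSON sample shown below:

[DataContract]
public class ButtonCommand : AbstractCommand
{
    [DataMember(Name = "prop1")]
    public string Prop1 { get; set; }

    [DataMember(Name = "prop2")]
    public string Prop2 { get; set; }
}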

[OperationContract]
[WebInvoke(BodyStyle=WebMessageBodyStyle.Wrapped,
   RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
StatusCode ExecuteCommand(AbstractCommand command);

Naturally, when the client invokes this method, the expectation is that a concrete class instance will be passed along. However, if all you do is pass in the JSON for one of the concrete classes, you get an exception saying that an abstract class cannot be instantiated. The reason for the error can be found in how WCF dispatches messages. When the request arrives, WCF examines the message to determine which method is to be invoked. Once identified, the parameters for the method (found in the JSON) are created. This creation involves instantiating an object using the default parameterless constructor for the type and then assigning property values as found in the JSON. However, the AbstractCommand type cannot be instantiated (it is abstract, after all). And there is no immediately apparent mechanism to determine which concrete type should be used.

To address this final problem, Microsoft introduced the idea of a type hint. The JSON for a concrete type passed into the ExecuteCommand method would look something like the following:

{"command":{"__type":"ButtonCommand:#Command.Library", "prop1":"value", "prop2":"value"}}

The name/value pair of “__type” is used by WCF to determine the type that should be instantiated before performing the property assignment. This is conceptually similar to how types are provided through SOAP. This mechanism does have the hard requirement that the “__type” value be the first pair in the JSON list.

Enter Java into the picture. In order to communicate with my WCF service, the client uses the Java JSONObject class. This class takes a collection of name/value pairs and converts it into a JSON-formatted string, which is then sent to my method, where it goes through the previously described process. However, the JSONObject class seems to take to heart the portion of the JSON specification that says the name/value pairs are ‘unordered’. There is (apparently) no way to force a particular name/value pair to be emitted as the first pair. In other words, the JSONObject cannot reliably produce JSON for a concrete type that is being passed into a parameter of an abstract type.

Crap.

Well, it was actually stronger language than that once I figured this out.

The solution was not nearly as complicated as identifying the problem. I created a WCF endpoint behavior extension. In the AfterReceiveRequest method, I look at the body of the request (which, as it turns out, has already been converted into XML…a surprise, albeit an easier format to work with for the next step). If the “__type” property is the first pair in the JSON, it has already been converted to a “__type” attribute on the XML element. Otherwise, it appears as a regular child node. So the transformation consists of finding the XML elements that have a name of “__type” and moving the value up into an attribute of the parent node. The modified message is then injected back into the WCF pipeline. And now the request is processed as desired.
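
For those who want to try something similar, here is a minimal sketch of the message inspector. The class name and the XPath handling are my own simplifications rather than the exact production code, and wiring the inspector into an endpoint behavior is omitted for brevity:

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.Xml;

public class TypeHintMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        // Buffer the message so it can be read and then recreated
        MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
        Message original = buffer.CreateMessage();

        XmlDocument doc = new XmlDocument();
        using (XmlDictionaryReader reader = original.GetReaderAtBodyContents())
        {
            doc.Load(reader);
        }

        // Promote any __type child element to a __type attribute on its parent,
        // which is where WCF expects to find the type hint
        XmlNodeList hints = doc.SelectNodes("//*[local-name()='__type']");
        for (int i = hints.Count - 1; i >= 0; i--)
        {
            XmlElement parent = (XmlElement)hints[i].ParentNode;
            parent.SetAttribute("__type", hints[i].InnerText);
            parent.RemoveChild(hints[i]);
        }

        // Inject the modified message back into the WCF pipeline
        Message modified = Message.CreateMessage(original.Version,
            original.Headers.Action, new XmlNodeReader(doc.DocumentElement));
        modified.Headers.CopyHeadersFrom(original);
        modified.Properties.CopyProperties(original.Properties);
        request = modified;
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}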

Join ObjectSharp for Silverlight on the Silver Screen – July 9 – Scotiabank Theatre Toronto

Silverlight 3 will soon be released.  And to properly celebrate the excitement of its release, ObjectSharp is teaming up with Microsoft to present an action-packed first look at the UX3 platform, live from the Scotiabank Theatre in Toronto. 

As one of the first companies to be featured on Microsoft’s Silverlight gallery, our consultants will share with you their deep knowledge of the next generation of tools.  Whether you are a designer, developer, or purely a marketing geek, you will not want to miss this blockbuster event.  You will see feature-rich demonstrations of Silverlight, Expression Blend, SketchFlow, and  Windows 7 touch technology.  You will also see how these tools can be used to dazzle your customers and gain attention for your brand.

For Developers and Designers:

  • See in-depth demonstrations of Silverlight 3, Expression Blend, and Windows 7 touch technology.
  • Learn how to quickly design user interactions with Microsoft SketchFlow
  • Take Designer/Developer work flow to the next level with Visual Studio Team System
  • Learn how to cut your boss’s head off and paste it onto other people’s bodies with Expression Studio

For CTOs and Marketing Managers:

  • Understand the benefits of creating line-of-business applications with Silverlight and .NET RIA Services
  • Learn how to integrate Rich Media and Advertising with the Microsoft Platform
  • See Touch technology and natural user interfaces bring kiosk applications to life with Windows 7 and WPF

Technologies You Will See:

  • Silverlight 3 featuring WPF & XAML
  • Expression Blend 3 featuring SketchFlow
  • Windows 7 featuring Touch
  • Microsoft Office SharePoint Server 2007 (MOSS) for external-facing web sites
  • Visual Studio 2010 Team System

Register Online   |   Watch the Movie Trailer

Getting Trained in an Economic Downturn

I’m sure that, for the vast majority of the readers of my blog, becoming more productive with your programming tools is a desirable goal. Not all developers go out of their way to advance their skills, but the very fact that you read blogs means that getting better is of interest to you. And most companies would also like you to make better use of your existing tools. It’s certainly obvious that it’s in your company’s best interest for this to happen, even if they don’t go out of their way to explicitly advance your skills.

And this is the ugly truth for most companies. Even in the best of times, a large number of companies don’t provide any significant budget for formal training. Usually developers are expected to pick up any new skills on their own time. Or, worse, they are expected to apply new technologies without spending any ‘exploratory’ time with them. As most of you are aware, the first time you try a new technology, the result is usually only partially successful. Only after you have worked with it a few times does it become possible to take full advantage of it. And yet, management is reluctant to pay for training classes, tech conferences, or even a programming book that will help you get ‘into the zone’ for the stuff that would make a difference in your day-to-day efforts. Here are a few suggestions that might get your manager to approve educational expenses, even in the economic conditions that exist today.

Working with Books

Over the years, I have worked with a number of different companies. Each of them takes a slightly different view of what appropriate training costs are. For companies that have a large number of developers and a small educational budget, sometimes books are all that fit. For some of these companies, creating a library of programming books is a viable option. Your company could purchase well-reviewed (reviews are easy to find on Amazon) programming books on the relevant topics. Employees could then ‘borrow’ books that are appropriate to their current tasks. The books end up being purchased just once, but can be shared between developers as the need arises.

A more high-tech solution to the problem can be achieved with a subscription to the on-line technology book site Safari. Safari allows books to be searched electronically and even downloaded (in PDF format) or printed on an as-needed basis. This is a decent mix between the need to search for a specific answer and still being able to read a book cover-to-cover when called for.

However, a corporate library is not always the best solution. Finding answers in a book requires, in many cases, that you have some inkling of the solution beforehand. At a minimum, you need to frame your query appropriately, something that is as much art as science. And the pace of technology advances means that books are almost always going to lag new technology and best practices by a period of months, if not years.

Selling Management on Training: Speak Their Language

When you want to convince your boss to let you go to a conference or attend a course, the first thing to do is look at the expenditure from their perspective. After all, the cost of the training is an expense to them. If there is no corresponding benefit, it becomes difficult to justify spending the monies. And, ultimately, you need to convince management that the benefits that they will gain are more than the money that they will spend.

In general, you will find that the attitude that a company has towards the training of developers is dictated by how the company makes money and who is responsible for helping to make that money. Usually, companies whose product is technology-based tend to be better at providing and paying for skill improvements for their employees. When developer productivity is closely aligned with corporate revenues, it is easier to get the boss’s attention. However, if you work on an application that has no direct correlation with how the company makes money, you’re much more likely to face an uphill battle.

But regardless of where you fit in this spectrum, focus your arguments on what they get out of sending you across town or across the country. Make the conversation about their Return on Investment. Show that the training will have concrete and immediate value to the company and you’re a lot closer to getting approval.

One way that you might be able to do this is to offer to share materials and experiences with your team upon your return. At ObjectSharp, we call these sessions ‘Lunch & Learns’. You may know them as ‘Brown Bag Training’. By offering to present such a session after your conference or course, your company gets to spread some of the benefits of sending one person on training across multiple people. And your team benefits from having the new technologies couched in terms that are directly relevant to your environment.

In some cases, it’s also possible to get the trainer to help with these sessions. This is something that ObjectSharp offers to attendees of our courses. We’re more than happy to have one of our instructors speak to your company about the newest technologies. While a course is ongoing, instructors work hard to make the content relevant to you. To accomplish this, we ask about the kinds of projects that are being worked on and where the technology will be applied. So by having an ObjectSharp instructor give the Lunch & Learn, you get a person who is well-versed in the technology, but who also has a basic understanding of how it will fit into your corporate development plans.

You might also consider shouldering some of the burden of training yourself. I don’t necessarily mean paying for it directly. But if you take the time to attend user group meetings and Code Camps (both of which take place in non-work hours), you show a dedication to improving your skills that might make a difference. At a minimum, you will get some insight into the latest technologies, even if it’s not quite the same personalized and intensive hands-on experience that going on a course might be.

Finally, I’d like to leave you with one last, and surprisingly creative, argument. One of the most common questions we get from potential training clients is ‘What if we train our developers and they leave?’ Our answer is invariably ‘What if you don’t train them and they stay?’ This usually produces an ‘Aha’ moment in management, followed by a realization that perhaps investing more in staff development might not be quite the pure expense that they think.

Data Bondage in WPF presentation at Toronto Code Camp

My final presentation in my April World Speaking tour was at the Toronto Code Camp this afternoon. As always, the code camp was a huge success. The efforts of many people went into making it so, and the organization was top notch.

As part of the lead-up to my presentation, Joey de Villa made good on a promise to wear Microsoft branded assless chaps. And he even regaled the crowd with his version of Hit Me With Your Best Shot, a choice completely in character with the theme of the presentation.

As for the presentation, it went very well. Something like 70-80 people were there, and I was pleased by the questions that were asked. I have always preferred an interactive audience because it means that they are probably listening. :)

As I promised at the end of the presentation, here are links to the slides and demos. Any questions are most welcome.

Slides: here

Demos: Download

Update: For those who want a more complete story surrounding the title of the presentation and the assless chaps references, check out Joey's blog post here.

Never Test Alone – Presentation at KWSQA Testing Conference

I finished presentation four of my April World Tour of the GTA earlier today. It was actually a co-presentation with Deb Forsyth; I was basically the code monkey and the ‘developer’ that she could point to with her ‘bad developer’ stories. This was an unusual conference for me, in that I was a lone developer in a room full of testers. Daniel never had it so bad with the lions. :)

Anyway, as I mentioned in the presentation, the slides are now available for download at the following link. As always, questions are welcomed.

Slides – Download

Dropping Cookies in IE7

I was asked an unusual question yesterday about cookies, Silverlight and WCF. The scenario was that a Silverlight application was being used in a consumer-facing situation. The application itself communicates with the server using WCF. The service which is the target of the communication uses ASP.NET Authentication to authenticate the user. It’s an implementation detail (but critical to this post) that the method of storing the authentication token is a cookie called .ASPXAUTH.

In a normal (that is, working) scenario with Silverlight, the credentials are sent to the server and an .ASPXAUTH cookie is returned. The browser strips off the cookie and stores it. On any subsequent requests, Silverlight creates a request and sends it to the server through the browser’s networking API. The browser is responsible for determining which, if any, cookies should be sent with the request and adding them to the outgoing header. In other words, the Silverlight application has no specific knowledge of or interaction with the .ASPXAUTH cookie.

As you would expect, this mechanism works the vast majority of the time. If it didn’t, I think it would have been a significant story long before now. But my questioner was running into a situation where the Silverlight application was unable to communicate with the server even after authentication was performed. What’s worse, this behavior was only happening on IE7. When Silverlight was run through Firefox, it worked exactly as it was supposed to.

The diagnostic step in a situation like this is to use Fiddler (or whatever your favorite TCP trace application is) to view the raw messages. And what we saw is that although the authentication response had the .ASPXAUTH cookie in it, any requests sent back to the server after authentication did not. Given what I’ve already explained about the processing of cookies with Silverlight requests, this eliminates the Silverlight application as the most likely culprit. But it also makes you scratch your head, as we can be pretty certain it’s not a widespread failure of IE7 to process cookies.

The answer lies in a strange bug in IE7. It turns out that if a domain name has an underscore in it, IE7 doesn’t persist the cookies. Let me repeat that, because it’s such a bizarre-sounding problem. In IE7, if the domain name has an underscore (‘_’) in it, then any cookies returned from the domain will not be persisted. Which also means that subsequent requests will be ‘cookie-free’.

I’m guessing that most domain names don’t have an underscore, which is why this bug didn’t get widespread notice. In this particular case, the domain was one used for development, which kept the problem from being a production issue. But I have no reason to believe that the bug is restricted to local environments. Deploy an ‘underscored’ domain name to the public internet and no authentication, shopping carts or other state information can be saved.

Fortunately, the solution was a simple one. If the domain name in the endpoint configuration is replaced with the raw IP address, IE7 is more than happy to save the cookie. I wouldn’t be surprised if an entry in your local hosts file would have the same effect. And the final solution would be to have your domain administrator create a DNS alias entry…one that doesn’t have an underscore, of course.
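
If you want to catch this before it bites, a trivial check over your endpoint addresses will flag the problem. This little console program is entirely my own invention (the URLs are made up), but it shows the idea:

using System;

class EndpointHostCheck
{
    static void Main()
    {
        // Hypothetical endpoint addresses to validate before deployment
        string[] endpoints =
        {
            "http://my_dev_server/AuthService.svc",
            "http://192.168.1.50/AuthService.svc"
        };

        foreach (string address in endpoints)
        {
            string host = new Uri(address).Host;
            Console.WriteLine(host.Contains("_")
                ? host + ": IE7 will drop cookies from this host"
                : host + ": OK");
        }
    }
}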

Design 101 – The Color Wheel

One of the most common comments from Silverlight and WPF developers concerns their lack of design sense. Over the next little while, I’ll be posting (interspersed with other topics) on some of the basics of color theory and how they can be applied to WPF and Silverlight.

To start with, let’s talk about one of the fundamental artifacts of color theory – the color wheel.

Originally conceived by Sir Isaac Newton, the color wheel is a representation of the colors in the visual spectrum. In the representation, the three primary colors are placed equidistant from one another. The gaps between the primary colors are then filled with secondary and tertiary colors.

Now, already I’ve used three terms, only one of which I would expect you to be familiar with. Primary colors (red, blue and yellow) are something that we learned about in elementary school. Secondary colors (orange, green and violet) are created by combining the primary colors. Tertiary colors are those that are formed by combining primary colors with secondary colors.

So now that we have a color wheel, what good is it? Well, it helps identify harmonious colors. When selecting colors to use in a user interface, it is important to select colors that are, in combination, pleasing to the eye. Personally, I understand the challenge of this. As a person born without the color sense gene, I think that pink and lime green go well together. But apparently, I’m in the minority. :)

There are numerous theories about the combinations of colors that promote harmony. We’ll look at some of them in more detail in upcoming blog entries, but to give you a taste, two of the most commonly used ones are complementary and analogous. Complementary colors are found opposite one another on the wheel. For example, red-green, yellow-violet, and blue-orange are all complementary pairs. These colors promote stability and contrast in an image.

Analogous colors are sets of three colors that are adjacent to one another on the color wheel. In images using analogous colors, one of the colors tends to be the dominant one. The result is an image that appears to be saturated in the dominant color, with the other colors offering subtle nuances of difference.
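
If you’d like to play with these relationships in code, the sketch below (my own, using WPF’s Color type) rotates a color’s hue around the wheel: 180 degrees produces the complement, while plus or minus 30 degrees produces the analogous neighbours. One caveat: this rotates around the RGB-based wheel that computers use, so the complement of red comes out as cyan rather than the painter’s green from the red-yellow-blue wheel described above.

using System;
using System.Windows.Media;  // WPF's Color type

static class ColorWheel
{
    // Rotate a color's hue by the given number of degrees.
    // Rotate(c, 180) gives the complement; Rotate(c, 30) and
    // Rotate(c, -30) give the analogous neighbours.
    public static Color Rotate(Color c, double degrees)
    {
        // Convert RGB to hue/saturation/value
        double r = c.R / 255.0, g = c.G / 255.0, b = c.B / 255.0;
        double max = Math.Max(r, Math.Max(g, b));
        double min = Math.Min(r, Math.Min(g, b));
        double delta = max - min;

        double h = 0;
        if (delta > 0)
        {
            if (max == r) h = 60 * ((g - b) / delta);
            else if (max == g) h = 60 * ((b - r) / delta + 2);
            else h = 60 * ((r - g) / delta + 4);
        }
        double s = max == 0 ? 0 : delta / max;
        double v = max;

        // Rotate around the wheel and normalize to the 0-360 range
        h = (h + degrees) % 360;
        if (h < 0) h += 360;

        // Convert hue/saturation/value back to RGB
        double chroma = v * s;
        double x = chroma * (1 - Math.Abs((h / 60) % 2 - 1));
        double m = v - chroma;
        double r1 = 0, g1 = 0, b1 = 0;
        if (h < 60) { r1 = chroma; g1 = x; }
        else if (h < 120) { r1 = x; g1 = chroma; }
        else if (h < 180) { g1 = chroma; b1 = x; }
        else if (h < 240) { g1 = x; b1 = chroma; }
        else if (h < 300) { r1 = x; b1 = chroma; }
        else { r1 = chroma; b1 = x; }

        return Color.FromRgb((byte)Math.Round((r1 + m) * 255),
                             (byte)Math.Round((g1 + m) * 255),
                             (byte)Math.Round((b1 + m) * 255));
    }
}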