Architecting with Services

As I alluded to in a previous post, I have been thinking about the architecture of services for the past couple of weeks.  This is quite different (at least as I see it) from the many descriptions of how to implement web services.  Because I'm relatively well known for Web services in the user group community here in Toronto, I get asked things like “Where do my service boundaries begin and end?” and “What parameters need to be passed into a service?”.  While it is easy (and sometimes even fun) to be flip with my answers, my annoying streak of professionalism means that occasionally I have to consider their ramifications.

Naturally, the basis for the answer should come from some sound theory.  For service orientation, there is little more sound than the Four Basic Tenets of Service Orientation first espoused by Don Box.  In summary, they are:

1.  Service boundaries are explicit and costly to traverse

2.  Services are autonomous

3.  Services expose schema and contract, not class and type

4.  Services negotiate using policy

While there is an ongoing discussion about the completeness of these tenets, they are certainly a good starting point for this post.  In particular, I've been trying to decide if all of these tenets apply equally to the design process.  My conclusion is that they don't. Let's start with the first one: Service Boundaries are Explicit and Costly

This is a double-edged sword from a design perspective.  First, I believe it is important to consider the possibility that the cost to traverse a boundary is high. This is a necessary (and frequently lacking) mindset to adopt when creating and using web services.  I don't believe that it should be any different with the design of the web service. 

On the other hand, if the high cost for boundary traversal is always the assumption, the design decisions may not result in the most efficient application.  In particular, consider the need for a service that provides information about a business entity, such as a customer. The kind of service I'm thinking about is one that implements basic CRUD functionality.  Now if the assumption is made that boundary traversals are costly, the exposed interface will be such that all possible information about the business entity will be passed to the service for every method.  This would reduce the chatty nature of the typical CRUD methods. 

However, having a chunky CRUD service is not necessarily the most efficient use of resources.  Why pass all possible information when only a little might be sufficient?  This is especially true when, as would be the case in most CRUD services, the service is more likely to be a near link than a far link.  All of this leads to my slight variation on the tenet:

Service Granularity is Proportional to Expected Latency

When designing a service-oriented application, it is certainly necessary to consider the cost involved in calling across the various service boundaries.  However, it is also necessary to consider the expected latency.  In this way, the appropriate design decisions regarding the number and content of the exposed messages can be made with an eye to both flexibility and performance.
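To make the trade-off concrete, here is a hypothetical sketch (the customer service and all of its members are my own illustration, not a prescription) contrasting a chatty contract with a chunky one:

```csharp
using System;

// Chatty contract: many fine-grained calls, each one a boundary
// traversal. Reasonable only when expected latency is low (a near link).
public interface ICustomerServiceChatty
{
    string GetName(int customerId);
    string GetAddress(int customerId);
    void SetName(int customerId, string name);
    void SetAddress(int customerId, string address);
}

// Chunky contract: one coarse-grained call carries the whole entity.
// Pays off when expected latency is high (a far link).
public class CustomerData
{
    public int CustomerId;
    public string Name;
    public string Address;
}

public interface ICustomerServiceChunky
{
    CustomerData GetCustomer(int customerId);
    void SaveCustomer(CustomerData customer);
}
```

Updating two fields costs four traversals with the chatty contract but only two with the chunky one; whether that saving justifies the fatter messages depends entirely on the expected latency of the link.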

I've got some thoughts about the other tenets as they apply to design, but they will have to wait for another blog entry.  And if anyone has thoughts about my logic, I'd be happy to hear about it.

MVP Summit

If you read my last post, there was one additional reason for the lack of recent posts.  I was at the MVP Summit in Seattle last week.  This was my first Summit and I was looking forward to being in the presence of the luminaries of the industry.  It was everything I expected and more. Since the contents of the Summit were covered under an NDA, I'm limited to talking about something that I'm sure is not covered.

I was impressed by the constant request for feedback from the participants of the Summit.  Whether it was focus groups or the various chances that we had to interact with personnel from various Microsoft development groups, there was a constant drumbeat asking what we thought, what problems we or our clients encountered and what could be done to make things better.  This even extended to the third day of the Summit, where it was almost one-on-one with the people who are creating the technology we'll be using for the next 10 years. Even more important, it looked like they were listening.  It will be interesting to see what impact, if any, our suggestions will have.

 

Back in the Saddle

First of all, let me apologize for the relative dearth of posts from me over the past couple of months.  My reason/excuse/rationale for my period of absence has to do with the work I have been involved in recently, which is the source for most of my posts in the first place.

Understand that, for the most part, my inspiration for posting is the particular problem that I'm solving on any given day.  Which means that if I'm not solving a challenging problem, there is little fodder for a post.  Unfortunately (for posting, that is), I have been working as an instructor almost continuously since the end of January.  So the most challenging problem I have been dealing with is getting students to understand the ins and outs of the EnterpriseServices namespace.  Not an easy problem, you understand, but not one that generates post material.

My situation is in the process of changing.  I'm still instructing, but not with the same full time grind as the past two months. So hopefully there will be more frequent posting from me.  In fact, I have been cogitating (in my spare time) on the challenges of designing a service-oriented architecture.  Not the technology behind SOA, but the choices that have to be made by real people trying to implement production applications based on SOA.  Look for some posts along these lines in the next week or so.

Database Access Layers and Testing

I've been doing a lot of testing lately. A lot. I'm building a database-agnostic data access layer. It has to be as performant as using typed providers, and even support the optional use of a typed provider. For example, we want to allow developers to build database-agnostic data access components (dacs). Sometimes, however, there is just too much difference between databases, and writing standard SQL requires too much of a sacrifice. So in these cases, we want to allow a developer to write a high-performance dac for each database, giving them ultimate control to tweak the data access for one database or another - and use a factory at runtime to instantiate the correct one. Of course they have to implement the same interface so that the business components can talk to an abstract dac. So normally developers can talk to their database through the agnostic DbHelper, but when they want, they can drop down to SqlHelper or OracleHelper.
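A minimal sketch of that arrangement might look like the following. The names here (ICustomerDac, DacFactory and the concrete classes) are my own illustrations, not the actual block, but they show the factory-plus-interface shape described above:

```csharp
using System;

// The abstract dac the business components talk to.
public interface ICustomerDac
{
    string GetCustomerName(int customerId);
}

// Database-agnostic implementation, built on the generic helper
// and restricted to standard SQL.
public class AgnosticCustomerDac : ICustomerDac
{
    public string GetCustomerName(int customerId)
    {
        // would call DbHelper with standard SQL here
        return "via DbHelper";
    }
}

// Hand-tuned implementation for one specific database; free to use
// SqlHelper and SQL Server specific syntax for performance.
public class SqlCustomerDac : ICustomerDac
{
    public string GetCustomerName(int customerId)
    {
        // would call SqlHelper with tuned SQL here
        return "via SqlHelper";
    }
}

// The factory picks the concrete dac at runtime from configuration.
public class DacFactory
{
    public static ICustomerDac CreateCustomerDac(string configuredProvider)
    {
        switch (configuredProvider)
        {
            case "SqlServer": return new SqlCustomerDac();
            default:          return new AgnosticCustomerDac();
        }
    }
}
```

Because the business components hold only an ICustomerDac reference, swapping in a tuned dac for one database is purely a configuration change.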

We also want to support a rich design-time environment. Creating DataAdapters and SqlCommands in C# isn't fun. Good developers botch these up too easily - not creating parameters correctly, or worse, not using parameters at all and opening themselves up to SQL injection. The typed DbCommand and DbDataAdapter classes allow for rich design-time painting of SQL and generation of parameters when used on a sub-class of System.ComponentModel.Component. Of course, developers aren't stuck with design time - they are free to drop into the code when they want to.

In building this data access layer, I've been doing deep research and testing on a lot of different data access blocks - including ones I've authored in the past. I've taken stuff from examples like PetShop and ShadowFax, and of course looked at the PAG group's Data Access Block, including the latest revision. I'm very happy with where the code stands today.

One of the features missing from all of these is a streaming data access component. In some cases we have a need to start writing to the response stream of a web service before the database access is complete. So I've tackled this problem using an event model, which works nicely. You can put “delegate” or “callback” labels on that technique too and they'll stick.
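The event model might be sketched like this. This is my own simplified illustration, not the block's actual code: the consumer subscribes to a per-row event, so it can start writing each row to its output stream while the read is still in progress (in the real component the loop would be over an IDataReader rather than the stand-in array used here):

```csharp
using System;

// Fired once per row as it is read, so the caller can start writing
// to the response stream before the database access is complete.
public delegate void RowReadHandler(object[] rowValues);

public class StreamingDac
{
    public event RowReadHandler RowRead;

    public void ExecuteStreaming()
    {
        // Stand-in for rows coming back from an IDataReader.
        object[][] results = {
            new object[] { 1, "first row" },
            new object[] { 2, "second row" }
        };
        foreach (object[] row in results)
        {
            if (RowRead != null)
                RowRead(row); // consumer writes this row out immediately
        }
    }
}
```

The consumer wires up a handler that writes to the web service's response stream, then calls ExecuteStreaming; no buffering of the full result set is ever required.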

One of the interesting “tricks” was to use the Sql provider as the agnostic provider during design time. We wanted to allow developers to use design-time support, and so I went down the path of creating my own agnostic implementations of IDataAdapter, IDbConnection, IDbCommand, etc. The idea was that at runtime, we'd marshal these classes into the type-specific provider objects based on the configuration. I was about 4 hours into this exercise when I realized I was pretty much rewriting SqlCommand, SqlDataAdapter, SqlConnection, etc. What would stop me from using the Sql provider objects as my agnostic objects? At runtime, if my provider is configured for Oracle, I use the OracleHelper's “CreateTyped” commands to marshal the various objects into Oracle objects, of course talking to them through the interfaces. As a shortcut, if my provider is configured for Sql at runtime, I just use the objects as they are.
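The marshaling step might look roughly like this. CreateTypedCommand and its copy logic are my own sketch of what such a helper would have to do, talking to the target command only through the System.Data interfaces:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Data.OracleClient;

public class OracleHelper
{
    // Take the SqlCommand that was painted at design time and rebuild
    // it as an OracleCommand, using only the provider interfaces.
    public static IDbCommand CreateTypedCommand(SqlCommand designTime)
    {
        OracleCommand typed = new OracleCommand();
        typed.CommandText = designTime.CommandText;
        typed.CommandType = designTime.CommandType;

        // Copy each parameter across via IDataParameter.
        foreach (IDataParameter p in designTime.Parameters)
        {
            IDbDataParameter copy = typed.CreateParameter();
            copy.ParameterName = p.ParameterName;
            copy.DbType = p.DbType;
            copy.Value = p.Value;
            typed.Parameters.Add(copy);
        }
        return typed;
    }
}
```

When the configured provider is Sql, the shortcut applies: the design-time SqlCommand already is the runtime object, so no copy is made at all.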

The neat fallout feature from this is that you can write an entire application against SqlServer, using SqlHelper if you like, and if you are thrown a curve ball and have to use Oracle, the only managed code change is to switch your reference from SqlHelper to DbHelper and everything else just works. Mileage will of course vary depending on how many times you used Sql-specific things like “TOP 1”. Just as importantly, developers using this data block learn only one type of data access, and the same technique applies to all other databases.

One of the sad things is how this thing evolved. In the beginning there were bits of no less than 3 data access blocks in this class library. I spun my wheels quite a bit going back and forth and doing a lot of prototyping under some heat from developers who needed to start writing dacs. Because I was starting from chunks of existing code, somehow NUnit tests didn't magically happen. So I've spent the past few days working toward a goal of 90% coverage of the code by NUnit tests. It's tough starting from scratch. Not only have I been finding and fixing lots of bugs; you won't be surprised that my testing has inspired the odd design change. I'm really glad I got the time to put the effort into the NUnit tests, because sooner or later those design changes would have been desired by developers using this block - and by that time the code would have been too brittle to change. Certainly some of my efforts would have been greatly reduced had I made these design changes sooner in the process. I'm not new to the fact that catching bugs earlier makes them cheaper to fix - but there's nothing like getting it pounded home. I'll be pretty hesitant to take on another exercise of editing existing code without first slapping on a bunch of NUnit tests that codify my expectations of what the code does.
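The kind of test I mean is nothing fancy. Something like this hypothetical NUnit fixture (the scenario is my own illustration, not a test from the actual block) simply codifies an expectation about parameterized commands before touching the implementation:

```csharp
using NUnit.Framework;
using System.Data;
using System.Data.SqlClient;

[TestFixture]
public class ParameterExpectationTests
{
    [Test]
    public void CommandCarriesTypedParameterWithValue()
    {
        // Codify the expectation: a painted command should end up with
        // exactly one typed parameter holding the supplied value.
        SqlCommand cmd = new SqlCommand(
            "SELECT * FROM Customers WHERE Id = @id");
        cmd.Parameters.Add("@id", SqlDbType.Int).Value = 42;

        Assert.AreEqual(1, cmd.Parameters.Count);
        Assert.AreEqual(42, cmd.Parameters["@id"].Value);
    }
}
```

A handful of fixtures like this, written before refactoring begins, is exactly the safety net that would have made those late design changes cheap instead of scary.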

 

Metro Toronto .NET User Group: Better Web Development - April 1, 2004 - Toronto

In this session, we will focus on some fundamentals in web development, including a special drill-down on security and caching. We will cover an overview of the .NET security, and specifically important aspects in ASP.NET security and best practices. We will also cover, at a high-level, the caching mechanisms used by ASP.NET. More information and registration can be found at the Metro Toronto UG web site.

ASP.NET Whidbey at CTTDNUG Tonight.

I'm presenting an overview on ASP.NET 2.0 tonight at CTTDNUG.

There isn't a great abstract on the site - and in fact, I will be physically unable to do the ObjectSpaces stuff since the new VS.NET CTP doesn't even have it in it anymore. Don't read into that - ObjectSpaces will still be coming out - at some point. I should be able to show some nice ObjectSpaces PPTs if the crowd is interested - but I'm guessing that demos are going to be more enjoyable.

So I am going to do my best Scott Guthrie impersonation and give a good solid demo lap around ASP.NET: IDE improvements, Master Pages, the new data source stuff, Site Navigation, Security, Personalization, SQL caching.

CTTDNUG VS.NET "Whidbey" Preview Presentation - Part 2 - Mar 31, 2004 - Toronto

Barry gives an in-depth look at the next release of .NET: Visual Studio .NET 2.0 (code-named "Whidbey"). This release of Visual Studio and the .NET framework will offer innovations and enhancements to the class libraries, CLR, programming languages and the Integrated Development Environment (IDE). He will share the latest material from the Microsoft PDC in L.A. and from the Bigger Better Basic tour. Attend this free event and learn how you can architect your applications today to ensure a smooth transition to the next version of .NET. More information and registration can be found at the Canadian Technology Triangle web site.

It's a right handed world

Left handed people always say it's a right handed world. It's true. Those of us who are right handed don't think much about this because everything is geared toward righties by default, like scissors and mice.

My son is left handed, as are my sister and my mother, so although I am right handed I have an appreciation for the plight of the left handed.

My family and I spent March break in Panama. I wanted to really get away and not think about anything but relaxing, so I left my laptop at home. Yeah, I know! I took my iPAQ though - I mean, come on, be reasonable. I needed somewhere to transfer my pictures when the flash card was full, right?

So on the plane I was teaching my 9-year-old son how to play solitaire - something I had never done on my handheld. As I was playing it on my iPAQ, I realized the game is not conducive to right handed players: as you tap the pile of cards to turn the next card, your hand is in the way of the screen, while my son could play very easily without ever moving his hand out of the way to see.

So although it is a right handed world, there are some things in this world for lefties and not righties. If you want to feel good about being a southpaw, play solitaire on a handheld computer.

If anyone comes across a right handed version of solitaire for Windows CE, I'm interested.

Downtown Metro Toronto .NET UG Inaugural Meeting!

Finally, a downtown user group.  First week of every month - and the first one is April 1st - no fooling - at 200 Bloor St. East (Manulife) at Jarvis. This is also the first date on the MSDN Canada .NET User Group Tour across Canada. There is also a raffle for an XBox.

The sad news is that this meeting is going to get cut off at the first 200 people - so register soon by sending an email to GrahamMarko@rogers.com.

http://www.metrotorontoug.com/

speaker: Adam Gallant
location: Manulife Financial Building 1st Floor 200 Bloor Street East Toronto

Better Web Development

In this session, we will focus on some fundamentals in web development, including a special drill-down on security and caching. We will cover an overview of the .NET security, and specifically important aspects in ASP.NET security and best practices. We will also cover, at a high-level, the caching mechanisms used by ASP.NET.

Security for Developers

Why is it that you can't plug a fridge into your house until it's been CSA or FCC approved, and that you have to have a licensed electrician install or at least review any modifications to the wiring in your house plugged into the grid - but any yahoo can build a piece of software and install it on their home computer connected to the internet for the world to hack into? Before I make a case that developers should be forced to do some security training or pass some certification... we have to keep in mind that most of the time the software sitting on somebody's home computer that is getting hacked into is Microsoft's. This is largely due to the size of the huge target on their back. What, you think Linux is really more secure? What do you think is easier to hack into? It's easier to hack into something when you have the source code.

So, having said all that, there are changes coming for Microsoft developers in the security space:

  • Microsoft is turfing a few of the existing security training offerings. These include the Microsoft Security Clinic (2800) and Security Seminar for Developers (2805).
  • There is a new security course being developed: Developing Secure Applications (2840) and also a MS Press Training Kit both of which I'll be reviewing during their development.
  • Related to the new course and training kit, there is a new security exam for developer which unfortunately because of timing is only an MCAD/MCSD/MCDBA elective (not a required element - sigh). There are 2 versions - 1 for VB and 1 for C#. I guess they figure C++ and J# developers already write secure code. These are going into beta at the end of next month and I'll be auditing the C# version.
    71-330 Implementing Security for Applications with Visual Basic .NET
    71-340 Implementing Security for Applications with Visual C# .NET

I'm a little torn over this direction. Part of me says that security is so important, it needs to be covered in every MS Training course. To a certain extent that is already true, but I think they could go deeper. When I teach a windows, web or services course, I try to go deep on security. Sometimes you can go too far. Some pieces of security are more relevant to the type of application you are building, while other security issues are common regardless of the application architecture. Obviously we don't want to repeat a lot of content in each course - sometimes that is unavoidable. The other issue is that there is a lot to know about security, and frankly I don't think every developer can master all of it. So teams need a dedicated security architecture role on their project. For these folks - then yes, I think it makes sense to have specific and deep training and certification. I think MS could probably do better than a single exam “elective”. How about an MCSD.NET+Security designation? MCDBA+Security as well - although you could argue that MCDBAs should be forced to have this security training. Perhaps that will happen in the wake of Yukon - although I've heard no rumblings of creating Whidbey or Yukon flavours of exams or certifications at this point.

Our industry and profession needs to take a leadership role and be proactive in accepting responsibility and accountability for the important issue of security. We need to move our discipline to a higher level. I'm not convinced it has to be government that steps up to this place. Governments should only do what we can't do for ourselves. Microsoft seems to be taking an increasingly proactive role on these security issues. It will be interesting to see how this pays off 2-3 years from now.