Service Autonomy?

In a previous post, I discussed the impact of the first of the Four Basic Tenets of Service Orientation on the design of a service-based application. In this post, I consider the second tenet in the same context.

The second tenet says that services must be autonomous.  This is probably one of the more hotly debated tenets, in part because the definition of autonomy as it applies to services does not appear to be part of the common vernacular.

If you go back to the description of this tenet as written by Don Box here, it would appear that the definition of autonomy involves independence from the client. Specifically, Don contrasts the autonomy (that is, the independence) of services with the interdependence of objects in an OO application. In OO, the called object is inextricably linked to the calling object through the call stack. If the calling object goes awry (a euphemism for death by exception), the called object goes away too. Or, at the least, it no longer has any place to send the result back to.

A service, on the other hand, has a life outside of the calling application. If the calling application dies while the service is doing its thing, the service will continue functioning properly. In fact, one of the guarantees of autonomy (in this sense) is the ability for a service to be called asynchronously with no expected return value. In this manner, it can be guaranteed that the service remains independent of the calling client.
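To make that distinction concrete, here is a minimal sketch of a fire-and-forget call in an ASMX-style web service. The service and method names are hypothetical; the point is simply that a one-way operation returns nothing to the caller, so the service's lifetime is not tied to the client's.

using System.Web.Services;
using System.Web.Services.Protocols;

// Hypothetical ASMX service with a one-way (fire-and-forget) operation.
// The client receives no return value, so the service keeps running
// regardless of what happens to the caller after the message is sent.
public class OrderService : WebService
{
    [WebMethod]
    [SoapDocumentMethod(OneWay = true)]
    public void SubmitOrder(string orderXml)
    {
        // Do the work (or queue it); nothing flows back to the client.
    }
}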

Another view, one espoused by Rich Turner here, is that the autonomy of services defines how a service interacts with other services. A method of Service A is invoked. That method in turn invokes a method on Service B. Service A is autonomous if it is not hard-wired to the location or implementation details of Service B. So Service A calls Service B. If Service B isn't where it is expected to be, then Service A goes through a number of steps to discover Service B's whereabouts. At no point does Service A require any information from Service B or anyone else to try to complete the invoked method.

I have a problem with this description. Not with the concept being described (that is, that a service, where possible, should be self-healing), but with the fact that the word 'autonomous' is used to describe it. There is a connotation associated with autonomy of being independent. Indeed, a number of dictionaries list 'independent' as a synonym for the word (see dictionary.com). As Rich aptly points out, no useful service is truly independent. So I don't think that 'autonomous' really describes the state of a service.

Now the question becomes what word or words would better describe the concepts being conveyed here. My own personal choice would be 'sovereign'. While it is listed as synonymous with 'autonomous' and 'independent' in a number of different references, its connotation is more appropriate for both of these points. A sovereign service has an expectation of a certain level of independence. A sovereign service would be expected to survive the failure of any client. A sovereign service would be expected to locate and negotiate communication with any other sovereign service. Maybe I'm being picky here, but I think that considering a service to be sovereign will better prepare an architect for service orientation. The impact that this change in viewpoint has is the topic of my next post.

MSDN Subscriber Download RSS Feeds

Perhaps I'm not the only one just learning about this feed, which is about a month old now, I guess:

http://msdn.microsoft.com/subscriptions/rss.xml

My favourite item in it so far has to be the MS Bob 1.0a release that came out on March 31st.

Upcoming UG Meetings

Kate Gregory is starting a new UG in the east end, in Oshawa. They are meeting Apr 20. The topic is an overview presentation of .NET. Visit gtaeast.torontoug.net to register.

The Canadian Technology Triangle holds its regular meeting next Tuesday, Apr 20th, as well. This special meeting is part of the MSDN User Group tour, and the topic is the .NET Compact Framework. A special note that this event is not in its usual location, but rather at the Peter Benninger Theatre. Visit cttdnug.org to register.

Things I don't like about NUnit

Firstly - I love NUnit - and nobody has done more to increase the quality of .NET Applications than the contributors to this project - nobody.

But I know that next-generation unit testing framework authors are listening, so I might as well state the things I'd like to see:

  • I'd like to be able to run a single test from the command line. Not just a single fixture, but a specific test.
  • I'd like my tests to be parameterizable. I'd like to be able to run a test from the command line and provide specific values for it.
  • Wouldn't it be cool to create a batch file of scenarios? What about doing this in XML? Duh - no brainer.
  • A lot of my tests in this case would simply wrap up and call business objects - so why not be able to have a virtual test in the XML file? That is, just call a class/method directly from XML (see the sketch just after this list). I'm not saying this is the be-all/end-all - realistically, the only kinds of assertions I could do would be to test for certain exceptions or no exceptions. Return values? Possibly - but really, sometimes our methods accept and return things other than value types - but this might be a nice thing to have regardless.
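For what it's worth, here is a purely hypothetical sketch of what such a scenario file might look like. Nothing like this exists in NUnit today, and every element and attribute name here is invented purely for illustration:

<!-- Hypothetical scenario file; no such format exists in NUnit today. -->
<scenarios>
  <scenario name="GoldCustomerDiscount">
    <!-- Run an existing test method with specific parameter values. -->
    <test fixture="Billing.InvoiceTests" method="CalculatesDiscount">
      <param name="customerType" value="Gold" />
      <param name="orderTotal" value="1000.00" />
    </test>
    <!-- A "virtual" test: call a business class/method directly and
         assert only on the exception behaviour. -->
    <virtualTest type="MyApp.Billing.InvoiceCalculator" method="Calculate"
                 expectedException="none">
      <param name="orderTotal" value="1000.00" />
    </virtualTest>
  </scenario>
</scenarios>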

With these things in place, one could conceivably:

  • tie a class modelling tool into test scenarios. Hmm, I need a class, with a method and if I pass this data, I should get these results.
  • If an end user reports a bug in my defect tracking system, I should be able to create an NUnit test that exposes this bug. Then my support people can come back from time to time, have the defect tracking system run the test to see if it's been fixed in a given patch/release/build, etc. and update the status on the defect.


Architecting with Services

As I alluded to in a previous post, I have been thinking about the architecture of services for the past couple of weeks. This is quite different (at least as I see it) from the many descriptions of how to implement web services. Because I'm relatively well known for Web services in the user group community here in Toronto, I get asked things like “Where do my service boundaries begin and end?” and “What parameters need to be passed into a service?”. While it is easy (and sometimes even fun) to be flip with my answers, my annoying streak of professionalism means that occasionally I have to consider the ramifications of what I say.

Naturally, the basis for the answer should come from some sound theory.  For service orientation, there is little more sound than the Four Basic Tenets of Service Orientation first espoused by Don Box.  In summary, they are:

1.  Service boundaries are explicit and costly to traverse

2.  Services are autonomous

3.  Services expose schema and contract, not class and type

4.  Services negotiate using policy

While there is an on-going discussion about the completeness of these tenets, they are certainly a good starting point for this post. In particular, I've been trying to decide if all of these tenets apply equally to the design process. My conclusion is that they don't. Let's start with the first one:

Service Boundaries are Explicit and Costly

This is a double-edged sword from a design perspective. First, I believe it is important to consider the possibility that the cost to traverse a boundary is high. This is a necessary (and frequently lacking) mindset to adopt when creating and using web services. I don't believe that it should be any different with the design of the web service.

On the other hand, if the high cost for boundary traversal is always the assumption, the design decisions may not result in the most efficient application.  In particular, consider the need for a service that provides information about a business entity, such as a customer. The kind of service I'm thinking about is one that implements basic CRUD functionality.  Now if the assumption is made that boundary traversals are costly, the exposed interface will be such that all possible information about the business entity will be passed to the service for every method.  This would reduce the chatty nature of the typical CRUD methods. 
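As a rough illustration of the two styles (the Customer entity and the interface names here are hypothetical, used only to show the contrast):

// Hypothetical customer entity; in a chunky design it travels whole.
public class Customer
{
    public string Id;
    public string Name;
    public string Address;
    public string Phone;
    // ...and every other attribute of the entity
}

// Chatty: many fine-grained calls, each one a boundary traversal.
public interface ICustomerServiceChatty
{
    string GetName(string customerId);
    string GetAddress(string customerId);
    void SetPhone(string customerId, string phone);
}

// Chunky: assume traversal is costly, so move the whole entity each time.
public interface ICustomerServiceChunky
{
    Customer GetCustomer(string customerId);
    void UpdateCustomer(Customer customer);  // caller sends everything back
}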

However, having a chunky CRUD service is not necessarily the most efficient use of resources. Why pass all possible information when only a little might be sufficient? Especially when, as would be the case in most CRUD services, the service is more likely to be a near link than a far link. All of this leads to my slight variation on the Tenet:

Service Granularity is Proportional to Expected Latency

When designing a service-oriented application, it is certainly necessary to consider the cost involved in calling across the various service boundaries. However, it is also necessary to consider the expected latency. In this way, the appropriate design decisions regarding the number and content of the exposed messages can be made with an eye to both flexibility and performance.

I've got some thoughts about the other tenets as they apply to design, but they will have to wait for another blog entry.  And if anyone has thoughts about my logic, I'd be happy to hear about it.

MVP Summit

If you read my last post, there was one additional reason for the lack of recent posts.  I was at the MVP Summit in Seattle last week.  This was my first Summit and I was looking forward to being in the presence of the luminaries of the industry.  It was everything I expected and more. Since the contents of the Summit were covered under an NDA, I'm limited to talking about something that I'm sure is not covered.

I was impressed by the constant requests for feedback from the participants of the Summit. Whether it was focus groups or the various chances we had to interact with personnel from various Microsoft development groups, there was a constant drumbeat asking what we thought, what problems we or our clients encountered, and what could be done to make things better. This even extended to the third day of the Summit, where it was almost one-on-one with the people who are creating the technology we'll be using for the next 10 years. Even more important, it looked like they were listening. It will be interesting to see what impact, if any, our suggestions will have.


Back in the Saddle

First of all, let me apologize for the relative dearth of posts from me over the past couple of months. My reason/excuse/rationale for my period of absence has to do with the work I have been involved in recently, which is also the source of most of my posts in the first place.

Understand that, for the most part, my inspiration for posting is the particular problem that I'm solving on any given day.  Which means that if I'm not solving a challenging problem, there is little fodder for a post.  Unfortunately (for posting, that is), I have been working as an instructor almost continuously since the end of January.  So the most challenging problem I have been dealing with is getting students to understand the ins and outs of the EnterpriseServices namespace.  Not an easy problem, you understand, but not one that generates post material.

My situation is in the process of changing.  I'm still instructing, but not with the same full time grind as the past two months. So hopefully there will be more frequent posting from me.  In fact, I have been cogitating (in my spare time) on the challenges of designing a service-oriented architecture.  Not the technology behind SOA, but the choices that have to be made by real people trying to implement production applications based on SOA.  Look for some posts along these lines in the next week or so.

Database Access Layers and Testing

I've been doing a lot of testing lately. A lot. I'm building a database-agnostic data access layer. It has to be as performant as using typed providers, and even support the optional use of a typed provider. For example, we want to allow developers to build database-agnostic data access components (dacs). Sometimes, however, there is just too much difference between databases, and writing standard SQL requires too much of a sacrifice. So in these cases, we want to allow a developer to write a high-performance dac for each database, giving them ultimate control to tweak the data access for one database or another - and use a factory at runtime to instantiate the correct one. Of course they have to implement the same interface so that the business components can talk to an abstract dac. So normally developers can talk to their database through the agnostic DbHelper, but when they want, they can drop down to SqlHelper or OracleHelper.
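To sketch the arrangement just described (the names here - ICustomerDac, DacFactory and so on - are illustrative, not the actual classes in the block): the business layer codes against an interface, and a factory reads the configured provider and hands back either the agnostic dac or a database-specific one.

using System.Configuration;
using System.Data;

// The abstract dac the business components talk to.
public interface ICustomerDac
{
    DataSet GetCustomer(string customerId);
    void SaveCustomer(DataSet customer);
}

// Database-agnostic implementation, written against DbHelper and standard SQL.
public class AgnosticCustomerDac : ICustomerDac
{
    public DataSet GetCustomer(string customerId) { /* DbHelper call */ return new DataSet(); }
    public void SaveCustomer(DataSet customer) { /* DbHelper call */ }
}

// Hand-tuned Oracle implementation, used only where standard SQL costs too much.
public class OracleCustomerDac : ICustomerDac
{
    public DataSet GetCustomer(string customerId) { /* OracleHelper call */ return new DataSet(); }
    public void SaveCustomer(DataSet customer) { /* OracleHelper call */ }
}

public class DacFactory
{
    // Pick the concrete dac from configuration at runtime; business
    // components only ever see ICustomerDac.
    public static ICustomerDac CreateCustomerDac()
    {
        string provider = ConfigurationSettings.AppSettings["dacProvider"];
        if (provider == "Oracle")
            return new OracleCustomerDac();
        return new AgnosticCustomerDac();
    }
}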

We also want to support a rich design-time environment. Creating DataAdapters and SqlCommands in C# isn't fun. Good developers botch these up too easily - not creating parameters right, or worse, not using parameters at all and opening themselves up to SQL injection. The typed DbCommand and DbDataAdapter classes allow for rich design-time painting of SQL and generation of parameters when used on a sub-class of System.ComponentModel.Component. Of course, developers aren't stuck with design time - they are free to drop into the code when they want to.

In building this data access layer, I've been doing deep research and testing on a lot of different data access blocks - including ones I've authored in the past. I've taken stuff from examples like PetShop and ShadowFax, and of course looked at the PAG group's Data Access Block, including the latest revision. I'm very happy with where the code stands today.

One of the features missing from all of these is a streaming data access component. In some cases we have a need to start writing to the response stream of a web service before the database access is complete. So I've tackled this problem using an event model, which works nicely. You can put “delegate” or “callback” labels on that technique too and they'll stick.
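Here is a bare-bones sketch of that event/callback idea (the names are mine, not the block's): the component raises an event for each row as it is read, so the caller can start writing to the response stream before the query has finished.

using System;
using System.Data;
using System.Data.SqlClient;

// Raised once per row as it comes off the data reader.
public delegate void RowReadHandler(IDataRecord row);

public class StreamingQuery
{
    public event RowReadHandler RowRead;

    public void Execute(string connectionString, string sql)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Hand each row to the subscriber as soon as it arrives,
                    // instead of waiting for the whole result set.
                    if (RowRead != null)
                        RowRead(reader);
                }
            }
        }
    }
}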

One of the interesting “tricks” was to use the Sql provider as an agnostic provider during design time. We wanted to allow developers to use design-time support, and so I went down the path of creating my own agnostic implementations of IDataAdapter, IDbConnection, IDbCommand, etc. The idea was that at runtime we'd marshal these classes into the type-specific provider objects based on the configuration. I was about 4 hours into this exercise when I realized I was pretty much rewriting SqlCommand, SqlDataAdapter, SqlConnection, etc. What would stop me from using the Sql provider objects as my agnostic objects? At runtime, if my provider is configured for Oracle, I use the OracleHelper's “CreateTyped” commands to marshal the various objects into Oracle objects, of course talking to them through the interface. As a shortcut, if my provider is configured for Sql at runtime, I just use the objects as they are.
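A rough sketch of that marshalling step for the command case (CreateTypedCommand is my placeholder name, not the helper's real method): copy the SqlCommand's text and parameters across through the provider-neutral interfaces. In practice things like parameter markers (@name vs :name) would also need translating, which is part of the "mileage will vary" caveat in the next paragraph.

using System.Data;
using System.Data.OracleClient;
using System.Data.SqlClient;

public class OracleHelper
{
    // Convert a design-time SqlCommand into an OracleCommand at runtime.
    public static IDbCommand CreateTypedCommand(SqlCommand source)
    {
        OracleCommand target = new OracleCommand(source.CommandText);
        target.CommandType = source.CommandType;

        // Copy each parameter across via the provider-neutral interface.
        foreach (IDataParameter p in source.Parameters)
        {
            OracleParameter copy = new OracleParameter();
            copy.ParameterName = p.ParameterName;
            copy.DbType = p.DbType;
            copy.Value = p.Value;
            target.Parameters.Add(copy);
        }
        return target;
    }
}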

The neat fallout feature from this is that you can write an entire application against SqlServer, using SqlHelper if you like, and if you are thrown a curve ball to use Oracle, the only managed code changes are to change your reference from SqlHelper to DbHelper, and everything else just works. Mileage will of course vary depending on how many times you used Sql-y things like “Top 1”. Just as importantly, however, developers using this data block learn only one type of data access, and the same technique applies to all other databases.

One of the sad things is how this thing evolved. In the beginning there were bits of no less than three data access blocks in this class library. I spun my wheels quite a bit going back and forth and doing a lot of prototyping under some heat from developers who need to start writing dacs. Because I was starting from chunks of existing code, somehow NUnit tests didn't magically happen. So I've spent the past few days working toward a goal of 90% coverage of the code by NUnit tests. It's tough starting from scratch. Not only have I been finding and fixing lots of bugs, you won't be surprised that my testing has inspired the odd design change. I'm really glad I got the time to put the effort into the NUnit tests, because sooner or later those design changes would have been desired by developers using this block - and by that time the code would have been too brittle to change. Certainly some of my efforts would have been greatly reduced had I made these design changes sooner in the process. I'm not new to the fact that catching bugs earlier makes them cheaper to fix - but there's nothing like getting it pounded home. I'll be pretty hesitant to take on another exercise of editing some existing code without first slapping on a bunch of NUnit tests that codify my expectations of what the code does.


Metro Toronto .NET User Group: Better Web Development - April 1, 2004 - Toronto

In this session, we will focus on some fundamentals of web development, including a special drill-down on security and caching. We will cover an overview of .NET security, and specifically important aspects of ASP.NET security and best practices. We will also cover, at a high level, the caching mechanisms used by ASP.NET. More information and registration can be found at the Metro Toronto UG web site.

ASP.NET Whidbey at CTTDNUG Tonight.

I'm presenting an overview on ASP.NET 2.0 tonight at CTTDNUG.

There isn't a great abstract on the site - and in fact, I will physically be unable to do the ObjectSpaces stuff since the new version of the VS.NET CTP doesn't even have it in it anymore. Don't read into that - ObjectSpaces will still be coming out - at some point. I should be able to show some nice ObjectSpaces PPTs if the crowd is interested - but I'm guessing that demos are going to be more enjoyable.

So I am going to do my best Scott Guthrie impersonation and give a good solid demo lap around ASP.NET: IDE improvements, Master Pages, the new datasource stuff, Site Navigation, Security, Personalization, and SQL caching.