TechEd 2004 Session Slides & Videos Available Online to Public

The site doesn't yet contain everything, but things are trickling in.

Unit Test Case Stub Generator for 100% Code Coverage

I've been a fan of Jonathan de Halleux's blog for a while now. He takes unit testing to a new level with his mbUnit project, and he's done some funky stuff with graphs. I really like his Reflector add-in for generating call graphs and assembly references. His add-ins were the reason I switched from Anakrino to Reflector as my reverse engineering tool of choice (although I should have done that anyway).

I'm totally impressed with how much code this guy turns out in a given day. On Friday I was intrigued by his automatic unit test case generator, which is a Reflector add-in. He uses an IL graph to extract the code paths needed for full code coverage. OK, so you still have to write the guts, but if "the man" is making you write unit tests to demonstrate high percentages of code coverage, then this is what you need. Thanks Jonathan.
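To make the idea concrete, here's roughly the shape of what a path-per-branch stub generator produces (my own sketch, not the add-in's actual output; the Classify method and test names are invented):

    using NUnit.Framework;

    // Invented example: for a method like
    //     public int Classify(int x) { return x < 0 ? -1 : 1; }
    // a path-per-branch generator would emit one empty test per code path.
    [TestFixture]
    public class ClassifyTests
    {
        [Test]
        public void Classify_NegativeBranch()
        {
            // TODO: write the guts -- e.g. Assert.AreEqual(-1, Classify(-5));
        }

        [Test]
        public void Classify_NonNegativeBranch()
        {
            // TODO: write the guts -- e.g. Assert.AreEqual(1, Classify(5));
        }
    }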

No wonder MS hired this guy.

White-box Unit Testing in Whidbey

James Newkirk shows how to write a white-box test in Whidbey: testing the value of a private field and invoking a private method. While you can do that today in NUnit (not so easily), he demonstrates the PrivateObject class, which lets you easily invoke private methods and look at private fields.
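Here's a minimal sketch of the pattern (based on the PrivateObject API in Microsoft.VisualStudio.TestTools.UnitTesting as it eventually shipped; the Widget class and its members are invented):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Invented class under test, with private state.
    public class Widget
    {
        private object cache;
        private void PrimeCache() { cache = new object(); }
    }

    [TestClass]
    public class WidgetTests
    {
        [TestMethod]
        public void PrimeCache_PopulatesPrivateField()
        {
            PrivateObject po = new PrivateObject(new Widget());

            po.Invoke("PrimeCache");              // invoke a private method
            object cache = po.GetField("cache");  // peek at a private field

            Assert.IsNotNull(cache);
        }
    }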

Could this "PrivateObject" class be used for evil? Yes, I imagine it could - but no more evil than plain reflection.

James has some great holy-war-type feedback about "should I or shouldn't I white-box test?" Those who say no probably haven't fought with unit testing a singleton class that has to be tested under multiple configurations which *normally* get set during the initial (private) constructor and/or the private methods called by the constructor. So yes, I can see a use. Is this an excuse? I haven't really thought of another technique... but to quote Michelangelo, "I am still learning..." (via a silly set of fridge magnets I saw at Chapters today). The silly part, of course, is how one can learn from (and quote) a famous historical figure through something as silly as a fridge magnet.
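For the record, the workaround I'd reach for today is plain reflection - null out the singleton's private static field between configurations so the private constructor runs again. A sketch (the ConfigCache class and field name are invented):

    using System.Reflection;

    // Invented singleton whose private constructor reads configuration.
    public sealed class ConfigCache
    {
        private static ConfigCache instance;
        private ConfigCache() { /* reads configuration here */ }
        public static ConfigCache Instance
        {
            get
            {
                if (instance == null) instance = new ConfigCache();
                return instance;
            }
        }
    }

    public class SingletonTestHelper
    {
        // Null out the private static field so the next access to Instance
        // re-runs the private constructor under the new configuration.
        public static void Reset()
        {
            FieldInfo f = typeof(ConfigCache).GetField(
                "instance", BindingFlags.Static | BindingFlags.NonPublic);
            f.SetValue(null, null);
        }
    }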

Smart Client Deep Dive

Adam Gallant and I delivered an MSDN Deep Dive last week about developing Smart Client applications. I covered the overview & secure data access sections. The samples and IssueVision (1.0, C# & VB), along with the slides, are available over here. Thanks to those who came out.

Update: If you want to get this stuff (and more) on the DevDays CD, you can fill in the form here.

Data Driven Development

So we have Test Driven Development and Model Driven Development or Design by Contract (a similar perspective). But in the past, I've been a fan of Data Driven Development. It's a technique I haven't had the pleasure of using recently... because it relies on building new applications with new databases.

What is this technique, you ask? Well, for me it is designing the data model first. In the early days of client/server, PowerBuilder and ERwin were my tools of choice. New applications. New databases. My design process (and that of many of my associates) was not so much to design a database as to document the data that existed in the organization - and to do that in third normal form. ERwin still stands as one of the best modeling tools ever because it actually made the job of coding up a database schema easier and faster than any alternative. I could also use my model throughout the entire lifecycle since it did an excellent job of full round-trip engineering/synchronization.

One of the cool features of PowerBuilder was the ability to annotate your database schema with UI hints. You could say that a given column should by default be shown as a checkbox, and that checked should be saved as "true" and unchecked as "false" - or whatever weird thing your DBA said it had to store. Whenever you designed a screen with that column, bam, you'd have it the way you'd expect - as a checkbox. The downside of PowerBuilder's DataWindows, of course, was that the data store/entity/container was pretty tightly tied to your database, and they made no attempt to hide that fact. But boy, productivity was really high - although I was producing tightly coupled code :( .NET lets me build better code now, but productivity is still lacking.
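You can approximate a slice of that in ADO.NET today by hanging hints off a column's ExtendedProperties. A sketch - the hint keys are made up, and nothing in the framework interprets them; your own form-building code has to read and honor them:

    using System.Data;

    // Sketch: PowerBuilder-style UI hints on an ADO.NET column.
    public class SchemaHints
    {
        public static DataTable BuildCustomers()
        {
            DataTable customers = new DataTable("Customers");
            DataColumn active = customers.Columns.Add("Active", typeof(string));

            // Invented hint keys, read by your own binding layer.
            active.ExtendedProperties["EditStyle"] = "CheckBox";
            active.ExtendedProperties["CheckedValue"] = "Y";    // whatever the DBA insists on
            active.ExtendedProperties["UncheckedValue"] = "N";

            return customers;
        }
    }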

At TechEd a couple of weeks ago, I stopped by the DeKlarit booth for a demo of their product by their lead architect, Andres Aguiar. I was happy to see a tool that builds upon the Data Driven Development process. Of course, you don't have to start with an empty database, but this tool does an excellent job of making your job easy when starting from scratch. Andres promised to send me an eval so I can play with it some more and see how it works with existing databases, so stay tuned. I could easily see this tool paying for itself within a couple of weeks.

As for ERwin, I'm still a fan, although it really hasn't changed much in the past 10 years. I remember that the first copy I had fit on a single floppy. So did the 200-table model I created with it. I had been using LBMS System Designer, which stored my model in some kind of 10 MB black hole and took 10 minutes to generate a schema. When I first installed ERwin, I had it installed, had reverse engineered my LBMS model, and had forward engineered it from Oracle to SQL Server inside of 10 minutes. I couldn't believe the schema generation took 20 seconds compared to LBMS's 10 minutes.

What do you do when your breakpoint won't break?

Go straight to Andy Pennell's Breakpoint Helper. It's not software to download - it's a nice interactive Q&A that will help you pinpoint the problem. It has tips for 2002, 2003, and 2005. Ahhh. Thanks Andy.

P.S. Andy is a dev lead on the C# team and owns the debugger (also used by VB.NET, C++, Script, and SQL).

SOA Challenges: Entity Aggregation

Ramkumar Kothandaraman has a good article just released on MSDN discussing SOA Challenges: Entity Aggregation. "Aggregation" is a much better name than "composable entities," since its definition implies that the property sets of an entity grow as more child entities are merged into it. It also implies that you need a mapping layer and conflict resolution to resolve duplicate property names - or just rename them, for that matter.

This is becoming an important technique for passing XML documents up a stack of web services, each one adding its own value to the entity - or aggregating in a master/slave hierarchy topology. Either way, one of the subtle things about entity aggregation is that you can also think of it as a lightweight form of multiple inheritance for the properties of your domain objects. Is that useful, or am I just bent?
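A toy sketch of the mapping/conflict-resolution point (all names invented, nothing from Ramkumar's article): each source service merges its property set into the aggregate, and duplicate property names get qualified with the source name instead of colliding.

    using System.Collections;

    // Toy aggregator: merges one service's property set into the aggregate.
    public class EntityAggregator
    {
        private Hashtable properties = new Hashtable();

        public void Merge(string source, IDictionary fragment)
        {
            foreach (DictionaryEntry entry in fragment)
            {
                string name = (string)entry.Key;
                if (properties.ContainsKey(name))
                    name = source + "." + name;   // conflict resolution by renaming
                properties[name] = entry.Value;
            }
        }

        public IDictionary Properties { get { return properties; } }
    }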

How do you feel about the VS.NET Query Designer?

The VS Data Team wants your input. Head over here. (BTW, don't you love these surveys? They're the best, and they tell me that MS really cares about what we think.)

Layered Design and DataSets for Dummies

Scott Hanselman does a nice 30-second intro to layered design. If any of this is new to you, run quickly and read it.

Scott takes a quick bash at DataSets (although he doesn't say why), and in my new role as DataSet boy I have to disagree with him and evangelize how much datasets can simplify the code written by the typical programmer: CRUD stuff, for example. He even mentions "Adapter" in describing a data access layer - come on, use the DataAdapter; don't be afraid. In general, if anybody tells you to never do something, you should question that a bit and dig into the reasons why a technology exists. Of course, the answer may indeed turn out to be never - but always try to get at the why.
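For the CRUD case, the dataset/DataAdapter pattern is about this much code (a sketch; the connection string, table, and column names are placeholders):

    using System.Data;
    using System.Data.SqlClient;

    // Sketch of the dataset/DataAdapter CRUD pattern.
    public class CustomerData
    {
        private const string Select = "SELECT CustomerID, Name FROM Customers";

        public DataSet Load(string connectionString)
        {
            SqlDataAdapter adapter = new SqlDataAdapter(Select, connectionString);
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");
            return ds;
        }

        public void Save(DataSet ds, string connectionString)
        {
            SqlDataAdapter adapter = new SqlDataAdapter(Select, connectionString);
            // The CommandBuilder derives INSERT/UPDATE/DELETE from the SELECT,
            // including optimistic-concurrency WHERE clauses.
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);
            adapter.Update(ds, "Customers");
        }
    }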

We've been running a developer contest at the end of some of our training courses (the big three-week immersion ones). The competition has developers build a solution on a service-oriented architecture, which includes a smart client, web services, Enterprise Services/COM+, and of course data access. It's only a one-day event, but the teams are made up of 5-6 people each. Inevitably, if one team decides to use datasets/DataAdapters and the other team doesn't, the team that chose datasets wins. The competition isn't judged or skewed for datasets by any means; datasets are simply the thing that gives that team more time to work on the interesting pieces of the application (like business logic: features and functions).

I overheard Harry Pierson tell a customer last week that they shouldn't use datasets in a web service because they aren't compatible with non-.NET platforms. This isn't true: a dataset is just XML when you return it from a web service. And you probably have more control over the format that is generated, via the XSD, than most people realize. You want a child table nested? No problem. You want attributes instead of elements? No problem. You want some columns as elements and others as attributes? No problem. You want (or don't want) an embedded schema? No problem. You don't want the diffgram? No problem. Somebody in the J2EE world has actually gone to the extent of creating a similar type of base object in Java that can deserialize the dataset in most of its glory. (Link to come - can't find it right now.)
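A sketch of the knobs I mean (the table, column, and relation names are invented):

    using System.Data;
    using System.IO;

    // Sketch: shaping the XML a dataset emits.
    public class DataSetXmlShaping
    {
        public static string Shape(DataSet ds)
        {
            // Render a column as an attribute instead of an element.
            ds.Tables["Order"].Columns["OrderID"].ColumnMapping = MappingType.Attribute;

            // Nest child rows inside their parent element.
            ds.Relations["Order_OrderDetail"].Nested = true;

            // Just the data: no embedded schema, no diffgram.
            StringWriter writer = new StringWriter();
            ds.WriteXml(writer, XmlWriteMode.IgnoreSchema);
            return writer.ToString();
        }
    }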

In February I posted "Benefits of Datasets vs. Custom Entities," which has generated some excellent feedback. It's still in my plans to write the opposite article - when custom entities are better than datasets - but I'm still looking for the best template or example entity. Every one somebody has sent me to date is somewhat lacking. To be fair, I always end up comparing them to a dataset. The things typically missing from a custom entity are the ability to deal with null values and the ability to track original values for the purposes of optimistic concurrency. The answer to "when should I use a custom entity over a dataset?" is, of course, when you don't need all the stuff a dataset provides for you. So is that typically when you don't care about null values or optimistic concurrency? Perhaps. But I know there is more to it than that.
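Those two abilities cost a custom entity real code, while a dataset gives you both for free. A sketch (the column names are invented, and DataRowVersion.Original assumes the row came through a Fill or an AcceptChanges):

    using System.Data;

    // Sketch: the two things custom entities typically lack, as a dataset does them.
    public class ConcurrencyDemo
    {
        public static bool NameChanged(DataRow row)
        {
            // Null handling without magic sentinel values:
            bool phoneIsNull = row.IsNull("Phone");

            // Original vs. current values, for optimistic concurrency checks:
            object original = row["Name", DataRowVersion.Original];
            object current = row["Name", DataRowVersion.Current];
            return !object.Equals(original, current);
        }
    }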

I will say the dataset's binary serialization is crummy (under the covers it's really XML). That's a real problem if you are doing custom serialization or .NET remoting. But you can change the way datasets are serialized (and indeed it's changed in Whidbey). There are some good examples here, here, here, here, here and here.
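If you need to take control yourself today, one approach is a serializable wrapper that decides how the dataset's bytes get written. A sketch only - the DataSetWrapper class is my invention, and the real published workarounds go further (compressing the strings, or writing raw column arrays instead of XML):

    using System;
    using System.Data;
    using System.IO;
    using System.Runtime.Serialization;

    // Invented wrapper: owning GetObjectData is the hook where you could
    // compress or re-encode the dataset instead of shipping bloated XML.
    [Serializable]
    public class DataSetWrapper : ISerializable
    {
        public DataSet Data;

        public DataSetWrapper(DataSet data) { Data = data; }

        protected DataSetWrapper(SerializationInfo info, StreamingContext context)
        {
            Data = new DataSet();
            Data.ReadXmlSchema(new StringReader(info.GetString("schema")));
            Data.ReadXml(new StringReader(info.GetString("data")), XmlReadMode.DiffGram);
        }

        public void GetObjectData(SerializationInfo info, StreamingContext context)
        {
            StringWriter schema = new StringWriter();
            Data.WriteXmlSchema(schema);
            StringWriter data = new StringWriter();
            Data.WriteXml(data, XmlWriteMode.DiffGram);
            info.AddValue("schema", schema.ToString());
            info.AddValue("data", data.ToString());
        }
    }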

I'm working on an article making the case for the custom entity, but in the meantime, datasets represent a good design pattern for entities - one that is easy and quick for the average developer to implement, and scalable too.

TechEd (Day 3): Hands On Lab Manuals downloads available to the public

No need to have a TechEd CommNet password. You can download ALL the PDFs for the plethora of topics. Some good stuff to see how the newly announced stuff (Team System, etc.) works.

Update: These links are broken; give this a try: