Say goodbye to Whidbey's System.Data.SqlServer namespace

Pablo Castro writes in the new Data Access Team Blog that they are unifying System.Data.SqlServer classes into System.Data.SqlClient.

If you are asking "What is System.Data.SqlServer?" the answer is that it's an ADO.NET managed provider for accessing SQL Server 2005 - from within SQL Server 2005 (managed procs, triggers, functions, etc.). Why a different provider? Well, SqlClient uses TDS (tabular data stream), an external access protocol, to connect to SQL Server. Obviously, if you are already inside SQL Server you don't want to go outside and come back into your database. The SqlServer namespace provides similar classes (SqlConnection, SqlCommand, etc.) that work within the current context of your connection to the database.

In the existing betas, MS was nice enough to give the two providers the same class names, so you have a reasonable chance of porting data access code from client code to server code simply by changing the using/Imports <namespace> at the top of your class. There is a little bit of work to tweak your connection (if you really need one), but otherwise this porting process works pretty well. If you want the most efficient code, you also need to break your SqlCommands into static SqlDefinitions and SqlExecutionContexts for the actual executions. You may also need to output results differently using the SqlContext's Pipe object.

So the product team has decided to unify, removing the SqlServer namespace and making the SqlClient components serve dual duty. This magic switch happens simply by using a connection string of “context connection=true”.
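
To make that concrete, here's a minimal sketch of what a managed procedure might look like under the unified model. The table name and method are made up, and I've left off the attribute that registers the proc since its namespace has been moving around between betas:

using System.Data.SqlClient;

public class Procs
{
 public static void RegionCount()
 {
  // "context connection=true" tells SqlClient to piggyback on the in-process
  // connection of the calling batch instead of opening a new TDS connection.
  using (SqlConnection conn = new SqlConnection("context connection=true"))
  {
   conn.Open();
   SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Region", conn);
   int count = (int)cmd.ExecuteScalar();
   // ... do something with count, or send results back via the context Pipe
  }
 }
}

Point the same code at a regular connection string and it runs on the client - which is exactly the double-edged sword I describe below.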

I have mixed emotions about this move. On one hand, it can make the porting process a bit smoother. You don't have to change your using/Imports and you don't have to remove your connection object. It also means you'll instantiate SqlCommands the same way.

On the other hand, it's not obvious to me when reading somebody's code whether they are talking to the database in-context or externally, out of context. In fact, this could very well switch based on a configuration setting for the connection string. The differences are not just about performance; they also affect the transactional semantics. A simple configuration change could turn code that runs in-context as part of a transaction into an external command that doesn't participate in the transaction at all. This is a big deal. I don't think transactional semantics should be a "configurable" thing. My opinion extends to the COM+ component catalog, but that is a whole other blog entry.

This move ends up simplifying an API so much that a programmer's intentions are lost, and that's not a good thing. Pablo says this is coming in SQL Server Beta 3, but he wants your opinions.

End of VB6 Support: it could be worse.

As you may have heard, VB6 is shortly coming to the end of its free support. Apparently a bunch of VB6 developers who haven't made the move to .NET are a bit upset. I've declined to comment on the issue as I don't really have much to say one way or the other. But it is interesting to ask how MS compares to the competition in supporting their developer tools.

MS has supported VB6 for 7 years. That is also the current plan for SQL Server 2000.

IBM, you ask? WebSphere Application Server 3.5 - supported for only slightly more than 3 years. They got a bit better with 4.0, supporting it for about 6 months longer than 3.5 - still not quite 4 years.

I'm not a WebSphere expert, but I suppose another argument could be that there was a more direct migration path between those versions and the current version 6 than there is between VB6 and VB.NET.

Getting Schema information from a Database

Yesterday in my class I was going over the new GetSchema API in Whidbey and I learned from one of my students about another technique that I wasn't aware of - SQL ANSI 92 Information Schema Views. They are also supported in Oracle.

There are almost too many ways of getting database schema information but I'll try to summarize here to see how they compare. Depending on what you want to do, one of these will be more appropriate for you.

  • A common technique is to execute a SQL statement and describe the result set. Execute a "SELECT * FROM known_table_name". Some APIs would have you add a "WHERE 1=2" if you can't just "describe" the result set without incurring the row retrieval payload. You can take advantage of this technique using the DataReader's GetSchemaTable method (see the sketch after this list). In this case you are really getting the schema of a query, not the underlying table structure per se, so it doesn't have to be constrained to a "*" or a single-table query either.
  • Query the SQL Server system tables - this is close to the metal, which can burn you, but it works if you don't mind the built-in fragility. This technique is likely to become obsolete in SQL Server Yukon with the new "sys.*" views.
  • Slightly better than using the system tables would be using system stored procedures such as sp_tables.
  • Use the ANSI SQL-92 standard INFORMATION_SCHEMA views, such as "SELECT * FROM Northwind.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = N'Customers'". Much nicer, and generic across anything that implements this standard (SQL Server 7.0 and up, and Oracle 9i? and up).
  • In .NET 1.0, you could also use the GetOleDbSchemaTable method of an OleDbConnection. This is somewhat hard-coded to an RDBMS view of the world and requires an OleDb driver.
  • In .NET 2.0, you can use the new GetSchema method on the connection object. In theory, the idea is that you get schema information specific to the provider - so things that are specific to the provider can be made available/discoverable. But I'll warn you, it's a pretty abstract API. It's also remotely similar at the root level to the MSXML 4.0 "GetSchema" method in that a) the methods have the same name, and b) they accept an argument naming the schema collection to return. In theory, you'll get more accurate schema information for the thing you are connecting to. Collections can be defined by the provider, which differs from GetOleDbSchemaTable, which normalizes all db objects into a known set of objects. That may not work so well for new providers, object-relational databases, or who knows what comes next.
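
Here's a rough sketch contrasting three of these approaches. The connection string, the local Northwind database, and the "Tables" collection name passed to GetSchema are my assumptions:

using System;
using System.Data;
using System.Data.SqlClient;

public class SchemaDemo
{
 public static void Main()
 {
  string connStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI";
  using (SqlConnection conn = new SqlConnection(connStr))
  {
   conn.Open();

   // 1. Describe a query's result set without paying for any rows.
   SqlCommand cmd = new SqlCommand("SELECT * FROM Customers", conn);
   using (SqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.SchemaOnly))
   {
    DataTable queryShape = rdr.GetSchemaTable();
    foreach (DataRow col in queryShape.Rows)
     Console.WriteLine("{0} ({1})", col["ColumnName"], col["DataType"]);
   }

   // 2. ANSI SQL-92 INFORMATION_SCHEMA views - portable across databases
   // that implement the standard.
   SqlCommand infoCmd = new SqlCommand(
    "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS " +
    "WHERE TABLE_NAME = N'Customers'", conn);
   using (SqlDataReader rdr = infoCmd.ExecuteReader())
   {
    while (rdr.Read())
     Console.WriteLine("{0} ({1})", rdr[0], rdr[1]);
   }

   // 3. The new .NET 2.0 GetSchema method - collections are provider-defined.
   DataTable tables = conn.GetSchema("Tables");
   foreach (DataRow row in tables.Rows)
    Console.WriteLine(row["TABLE_NAME"]);
  }
 }
}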

VSTS Work Item: Percentage Completed

During one of my demos this past week on VSTS, somebody commented that developers typically want to give project managers more information on a task than just "completed" or not. What they really want is a percentage complete. John Lawrence is on the team at MS developing this stuff. The good news is that they have reworked the work items a bit, and there is now support for fields indicating "Completed Work" and "Remaining Work". These numbers will synchronize with "percentage complete" in MS Project.
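
A rough guess at the arithmetic behind that synchronization - the field names come from the work item changes above, but the formula is my assumption, not anything MS has documented:

public class WorkItemMath
{
 // Percentage complete derived from the two work item fields (assumed formula).
 public static double PercentComplete(double completedWork, double remainingWork)
 {
  double total = completedWork + remainingWork;
  return total == 0 ? 0.0 : 100.0 * completedWork / total;
 }
}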

I'm happy to see this as it also ties in with better metric tracking. We don't just want to know that an item was completed, but how much work it required - which may be different than the estimate. All too often, item "estimates" turn into "budgets". This is one small way that projects end up taking longer. Developers often don't report that a task took less than the estimate - only more than or the same as the estimate. This is one of the reasons why I like estimates that aren't time-based but effort-based. In other words, we estimate an item on some arbitrary scale - 1 to 5, let's say, where 1 is easy and 5 is hard. Project managers can figure out what those numbers mean later on and do things like calculate team velocity.

More Class Designer Productivity Potential: Batch Editing.

Daniel Moth says that he's not excited about the properties box in the class designer and would prefer to use the code editor to make those kinds of changes. It may not be obvious, but one of the things you can do with that properties pane that you can't do in the code editor is make multiple changes across several classes or several members at the same time.

Select all of the items that you want to make a mass change to, and any common properties are shown in the properties dialog. I find this useful for decorating properties of my own components with custom attributes. Perhaps I want to change a bunch of methods to Static.

Daniel mentions another limitation. There is no full signature support on the model surface in the class designer. This makes it impossible to see the differences between your overloads. In fact, overloads are all grouped together and a count is shown.

Another mass editing scenario would be changing the XML comments on a bunch of methods - for example, several overloads. You can't see the individual overloaded methods - just one of them with a "+1 overloads" label next to it. Furthermore, when you change the comment for a method that is overloaded (and shown as "+2 overloads"), one would hope the comment would be applied to all of the overloads; however, the comment is only applied to the first one. Hopefully this is a bug and will be fixed. I've logged it with MS in the Product Feedback Center.

Installing Visual Studio Team System Dec CTP

I'm getting a fair number of questions about this topic lately, so it's worth a blog entry.

The best way to install any beta (and even more so CTPs) is to use Virtual PC. This will save you from having to reformat your entire machine a few times. I don't think I've ever known a VS.NET beta release that uninstalled properly.

So if you are going to use Virtual PC, the best way to get started is to find a friend who has already done the install successfully and get them to give you a copy of their virtual machines.

In general with VPCs, you get better performance if the VHD files are located on a drive other than the one your host OS is installed on. If you have a 2nd internal drive, great; otherwise, a good 7200 RPM external USB 2.0 drive will give a good performance boost. You'll also get the best performance if you don't use a differencing disk or an undo disk. To save memory and CPU cycles, turn off any nonessential services and running programs in both the host and guest operating systems.

Yes, you need 2 machines for VSTS - one for the server/data tier and a second for the client. You can't currently install everything on one box - that is not a supported scenario, at least for now. Your server should also be a domain controller. Unless you have 2 GB of RAM, you'll likely want to host each of those VPCs on a separate box. I've had good results hosting the server/data tier in Virtual Server. You also have no real need to log in or have a UI open on the server box once it's all installed and configured.

There is a good document here with more detailed installation instructions: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnvs05/html/vstsinstallguide.asp

Having said all of that, the next beta is due out this month or early next month, so you might want to wait for a much better experience. As always, keep in mind that betas are flaky and CTPs are worse than that.

Temperature User Defined Type for SQL Server Yukon

In SQL Server 2005/Yukon you can create user defined types using managed code. User defined types can be used in declaring variables and table columns. They are generally considered to make sense only for scalar types - things that can be sorted, compared, etc., not unlike the native numeric types and datetime. This is not, however, a fixed requirement, but it's definitely the primary scenario being addressed by UDTs.

Here is an example of a temperature type, which is in fact a scalar. All UDTs have to be inserted as strings (i.e. in quotes). Not surprisingly, as part of the contract to implement a user defined type you should have a ToString and a Parse method. This example allows the user to insert values in Kelvin, Fahrenheit, or Centigrade by suffixing the entry with K, F, or C respectively. The default is Kelvin should you not include a suffix, so 273, 273K, 0C, and 32F are all equivalent entries. All entries are converted and stored as Kelvin. This makes it easy to have the type byte-orderable for easy sorting/comparisons; however, this implementation also keeps track of the input format. So if you enter 0C, you'll get 0C back by default. You can call the x.Centigrade, x.Fahrenheit, or x.Kelvin properties to get whichever measurement scheme you desire. And since the type is byte-orderable, you can sort by columns of this type and you'll get something like...

0K, -272C, 32F, 0C, 273K, 274K, 100C - all ordered by their underlying storage, which in Kelvin is:
0, 1, 273, 273, 273, 274, 373.

Finally, I have also declared a TempAdder user defined aggregate to round out the example. I'm no scientist, and I'm not sure that temperatures are exactly additive, so this is really just for example purposes. The UDA is relatively self explanatory. You are responsible for keeping a variable around to accumulate values into. That leads to the Accumulate method, which gets called for every item in the result set being aggregated. There is an Init method for initializing your variables, and finally a Terminate method, which is called when the result set is finished and you can return the result. The odd one out is the Merge method, which takes another aggregator. Because of parallel query processing, it is possible for two threads to individually deal with different parts of the entire result set. For this reason, when one thread finishes its part of the aggregation, it passes its aggregator to the other so the two can be merged. That's why I expose my interim total as a public property on the aggregator - so I have access to it in the Merge method. Furthermore, if you are calculating averages, for example, you may need the count as a public property as well.

Bottom line - creating user defined types and aggregates is pretty straightforward. Any complexity will be a result of the actual complexity of your types. Consider a length type that must work with multiple measurement schemes (metric and imperial), multiple units of measurement (cm, m, km), and even multiple units of measurement at one time (i.e. 4' 10"). Don't even get me started on 2"x4"s, which aren't actually 2" by 4". The nice thing about UDTs is that they allow you to encapsulate a lot of this code deep down in your storage layer, allowing your business applications to deal with things more abstractly.

using System;
using System.Data.Sql;
using System.Data.SqlTypes;

[Serializable]
[SqlUserDefinedType(Format.SerializedDataWithMetadata, IsByteOrdered = true, MaxByteSize = 512)]
public class Temperature : INullable
{
 
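 // Canonical value is stored in Kelvin so the serialized bytes sort correctly;
 // n flags SQL NULL, and inputFormat remembers the suffix (K, F or C) the
 // value was originally entered with.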
 private double _kelvin;
 private bool n = true;
 private string inputFormat;

 public string InputFormat
 {
  get { return inputFormat; }
  set { inputFormat = value; }
 }

 public double Kelvin
 {
  get { return _kelvin; }
  set
  {
   n = false;
   _kelvin = value;
  }
 }

 public double Centigrade
 {
  get { return (_kelvin -273); }
  set
  {
   n = false;
   _kelvin = value + 273;
  }
 }

 public double Fahrenheit
 {
  get { return (this.Centigrade*1.8 + 32); }
  set
  {
   n = false;
   this.Centigrade = (value - 32)/1.8;
  }
 }
 public override string ToString()
 {
  string s = this.Kelvin + "K";

  if (this.InputFormat == "C")
  {
   s = this.Centigrade + "C";
  }
  else if (this.InputFormat == "F")
  {
   s = this.Fahrenheit + "F";
  }

  return s;
 }

 public bool IsNull
 {
  get
  {
    // n starts out true and is flipped to false the first time a value is assigned.
   return n;
  }
 }

 public static Temperature Null
 {
  get
  {
   Temperature h = new Temperature();
   return h;
  }
 }

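 // Parse is part of the UDT contract: values always arrive as strings, so the
 // optional K/F/C suffix decides how the number is interpreted.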
 public static Temperature Parse(SqlString s)
 {
  if (s.IsNull || s.Value.ToLower().Equals("null"))
   return Null;
  Temperature u = new Temperature();
  string ss = s.ToString();

  if (ss.EndsWith("C"))
  {
   u.Centigrade = double.Parse(ss.Substring(0, ss.Length - 1));
   u.InputFormat = "C";
  }
  else if (ss.EndsWith("F"))
  {
   u.Fahrenheit = double.Parse(ss.Substring(0, ss.Length - 1));
   u.InputFormat = "F";
  }
  else if (ss.EndsWith("K"))
  {
   u.Kelvin = double.Parse(ss.Substring(0, ss.Length - 1));
   u.InputFormat = "K";
  }
  else
  {
   u.Kelvin = double.Parse(ss);
   u.InputFormat = "K";
  }

  return u;
 }

}

using System;
using System.Data.Sql;
using System.Data.SqlTypes;
using System.Data.SqlServer;


[Serializable]
[SqlUserDefinedAggregate(Format.SerializedDataWithMetadata, MaxByteSize = 512)]
public class TempAdder
{
 public void Init()
 {
  total = new Temperature();
  total.Kelvin = 0;
 }

 private Temperature total;

 public Temperature Total
 {
  get { return total; }
  set { total = value; }
 }

 public void Accumulate(Temperature Value)
 {
  this.Total.Kelvin  = this.Total.Kelvin + Value.Kelvin;
 }

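 // Merge is called when the engine splits the aggregation across threads
 // (parallel query plans) and needs to combine two partial totals.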
 public void Merge(TempAdder Group)
 {
  this.Total.Kelvin = this.Total.Kelvin + Group.Total.Kelvin;
 }

 public Temperature Terminate()
 {
  return this.Total;
 }
}
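
For what it's worth, here's a rough sketch of consuming the type from client code once it's registered. The table, column, and connection string are hypothetical, and the dot syntax for invoking UDT members on a column may well change between betas:

using System;
using System.Data.SqlClient;

public class TemperatureClient
{
 public static void Main()
 {
  string connStr = "Data Source=.;Initial Catalog=Lab;Integrated Security=SSPI";
  using (SqlConnection conn = new SqlConnection(connStr))
  {
   conn.Open();

   // UDT values go in as strings; Parse decides how to interpret the suffix.
   SqlCommand insert = new SqlCommand(
    "INSERT INTO Readings (Taken, Temp) VALUES (GETDATE(), '32F')", conn);
   insert.ExecuteNonQuery();

   // Sorting uses the byte-ordered Kelvin value; ToString gives back the
   // original input format.
   SqlCommand query = new SqlCommand(
    "SELECT Temp.ToString(), Temp.Centigrade FROM Readings ORDER BY Temp", conn);
   using (SqlDataReader rdr = query.ExecuteReader())
   {
    while (rdr.Read())
     Console.WriteLine("{0} = {1}C", rdr[0], rdr[1]);
   }
  }
 }
}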


 

Modeling as a Productivity Enhancer in Visual Studio Team System

It's been a theme for me over the past couple of weeks: people keep mentioning that they can't afford the time to do modeling. If you've done a lot of UML modeling, you know what I'm talking about. But it doesn't have to be that way. Now, just to ward off the UML flames: even when UML modeling seems like a waste of time, it may actually pay for itself if you uncover a requirement or a design flaw that you wouldn't have otherwise. Certainly catching that kind of thing earlier rather than later pays for itself. But that's not what I'm talking about.

In my days as an ERwin/ERX user, data modeling was done in that tool not only because it was good to visualize something quickly before committing to code; using ERwin/ERX was just plain faster than cutting DDL code manually - or heck, even using the database diagramming in SQL Server. One simple feature was foreign key migration: you drew a relationship from parent to child and bam - it copied down the primary key from the parent table and set it up as a foreign key on the child table.

For the VSTS class designer to be successful (or any of the designers, for that matter), MS needs to make using them just plain faster than stubbing out the code manually. Visual Studio has a great editor, so they have their work cut out for them. It gets even better with things like code snippets. Why not enable some of the code snippets in the class designer? I can still create a single field wrapped up by a public property faster in the code editor using a snippet (in C#, "prop") than I can using the class designer - but I don't see why they couldn't add support for that in the class designer too.
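
For reference, this is roughly the shape the C# "prop" snippet stubs out - the class and member names here are just placeholders:

public class Customer
{
 private string name;

 public string Name
 {
  get { return name; }
  set { name = value; }
 }
}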

A feature I stumbled upon last week was the ability to override a member. Simply right click on the derived class, select "Override Member", and you'll see a list of members from the ancestor that are overridable. Select the member to override and bam - you have the code stub. This reminds me a bit of the Inherited Class Skeleton Generator. It represents the kind of productivity feature that can make models/designers more usable, even if just for productivity's sake.

There are obviously some types of edits that are better performed in the code editor, such as actually editing code fragments. Other types of edits can be performed more appropriately in the model such as stubbing out the api/members of a class, overriding members, etc. Let's not forget the other types of edits which are much better done in a WYSIWYG designer such as the windows or web forms designer.

One thing I'd like to see come to the Class Designer is a flattened-out view of a class that collapses all inherited members into one layer. I'll call this the consumer or IntelliSense view of a class. It's helpful for viewing the entire interface of a class so I can see how the whole thing makes sense to a user. I would propose a greyed-out notation, or perhaps a lock icon, or something similar to the online help view of a class.

Copy & Paste Support in VSTS Class Designer/Whitehorse

You may have noticed that the refactoring menus that you see in the code editor are also available in the Class Designer. Furthermore, you can also copy & paste things from one class to another. So if you copy a property from one class to another, not only do you get the property added to the class, but also all of the code in your getters and setters. Ok, so this is a nice touch that can help with a refactoring effort that requires moving stuff around - but don't consider this code reuse :)

New Course! Architecting Applications with Visual Studio Team System (Bootcamp)

On March 17th in Toronto, I'll be teaching our first delivery of this new course. It's a one-day, hands-on, instructor-led, whirlwind bootcamp-style course that gives you a lap around Visual Studio Team System - specifically from a Solution Architect's perspective. We touch a little bit on the project management stuff, testing, and some of the developer tools, but we'll spend more than half the day on modeling as we walk through the Whitehorse designers.

Now how can we teach a course on a product that's not even in beta, you ask? Very carefully. A lot of people are evaluating if and how they'll use VSTS in their shops once it is released, so by popular demand we're offering this course a little earlier than we normally would. This course probably needs to be 3 days to properly cover best practices and true "architecture", but for the purpose of evaluating this technology and letting people make up their own minds, we are focusing primarily on the tools provided. With that in mind, this abbreviated 1-day course is being delivered in 6 cities across Canada: Toronto, Montreal, Ottawa, Vancouver, Calgary and Quebec City. There is also a promotional price of $299 CAD. For more details, the full schedule and registration, visit http://www.objectsharp.com/ttdinvitation/vstsbootcamp.aspx