Temperature User Defined Type for SQL Server Yukon

In SQL Server 2005/Yukon you can create user defined types using managed code. User defined types can be used in declaring variables and table columns. They are generally considered to only make sense for scalar types - things that can be sorted, compared, etc., not unlike the native numeric and datetime types. This is not a fixed requirement, but it's definitely the primary scenario being addressed by UDTs.

Here is an example of a temperature type, which is in fact a scalar. All UDTs have to be inserted as strings (i.e. in quotes). Not surprisingly, as part of the contract to implement a user defined type, you have to provide ToString and Parse methods. This example allows the user to insert values in Kelvin, Fahrenheit, or Centigrade by suffixing the entry with K, F, or C respectively. The default is Kelvin should you not include a suffix, so 273, 273K, 0C, and 32F are all equivalent entries. All entries are converted to and stored as Kelvin, which makes it easy to keep the type byte orderable for easy sorting and comparisons. This implementation also keeps track of the input format, so if you enter 0C, you'll get 0C back by default. You can, however, call the x.Centigrade, x.Fahrenheit, or x.Kelvin properties to get the measurement scheme you desire. And since the type is byte ordered, you can sort by columns of this type and you'll get something like this:

0K, -272C, 32F, 0C, 273K, 274K, 100C - all ordered by their underlying storage, which in Kelvin is 0, 1, 273, 273, 273, 274, 373.

Finally, I have also declared a TempAdder user defined aggregate to round out the example. I'm no scientist, and I'm not sure that temperatures are exactly additive, so this is really just for demonstration purposes. The UDA is relatively self-explanatory. You are responsible for keeping a variable around to accumulate values into. That leads into the Accumulate method, which gets called for every item in the result set being aggregated. There is an Init method for initializing your variables, and finally a Terminate method, which is called when the result set is finished and you can return the result. The odd one out is the Merge method, which takes another aggregator. Because of parallel query processing, it is possible for two threads to individually deal with different parts of the entire result set. When one thread finishes its part of the aggregation, its aggregator is passed to the other so the two can be merged. For this reason I expose my interim total as a public property on the aggregator, so I have access to it in the Merge method. If you were calculating averages, for example, you would need the count as a public property as well.

Bottom line: creating user defined types and aggregates is pretty straightforward. Any complexity will be a result of the actual complexity of your types. Consider a length type that must work with multiple measurement schemes (metric and imperial), multiple units of measurement (cm, m, km), and even multiple units at one time (i.e. 4' 10"). Don't even get me started on 2"x4"s, which aren't actually 2" by 4". The nice thing about UDTs is that they let you encapsulate a lot of this code deep down in your storage layer, allowing your business applications to deal with things more abstractly.

using System;
using System.Data.Sql;
using System.Data.SqlTypes;

[Serializable]
[SqlUserDefinedType(Format.SerializedDataWithMetadata, IsByteOrdered = true, MaxByteSize = 512)]
public class Temperature : INullable
{
 
 // The temperature is stored internally in Kelvin so that the
 // serialized representation is byte ordered for sorting.
 private double _kelvin;
 private bool isNull = true;    // true until a value is assigned
 private string inputFormat;    // "K", "F", or "C" - remembers the input scheme

 public string InputFormat
 {
  get { return inputFormat; }
  set { inputFormat = value; }
 }

 public double Kelvin
 {
  get { return _kelvin; }
  set
  {
   isNull = false;
   _kelvin = value;
  }
 }

 public double Centigrade
 {
  get { return (_kelvin - 273); }
  set
  {
   isNull = false;
   _kelvin = value + 273;
  }
 }

 public double Fahrenheit
 {
  get { return (this.Centigrade*1.8 + 32); }
  set
  {
   isNull = false;
   this.Centigrade = (value - 32)/1.8;
  }
 }
 public override string ToString()
 {
  string s = this.Kelvin + "K";

  if (this.InputFormat == "C")
  {
   s = this.Centigrade + "C";
  }
  else if (this.InputFormat == "F")
  {
   s = this.Fahrenheit + "F";
  }

  return s;
 }

 public bool IsNull
 {
  get { return isNull; }
 }

 public static Temperature Null
 {
  // A fresh instance is null until a value is assigned.
  get { return new Temperature(); }
 }

 public static Temperature Parse(SqlString s)
 {
  if (s.IsNull || s.Value.ToLower().Equals("null"))
   return Null;
  Temperature u = new Temperature();
  string ss = s.ToString();

  // The suffix (K, F, or C) selects the input scheme; no suffix means Kelvin.
  if (ss.EndsWith("C"))
  {
   u.Centigrade = double.Parse(ss.Substring(0, ss.Length - 1));
   u.InputFormat = "C";
  }
  else if (ss.EndsWith("F"))
  {
   u.Fahrenheit = double.Parse(ss.Substring(0, ss.Length - 1));
   u.InputFormat = "F";
  }
  else if (ss.EndsWith("K"))
  {
   u.Kelvin = double.Parse(ss.Substring(0, ss.Length - 1));
   u.InputFormat = "K";
  }
  else
  {
   u.Kelvin = double.Parse(ss);
   u.InputFormat = "K";
  }

  return u;
 }

}

using System;
using System.Data.Sql;
using System.Data.SqlTypes;
using System.Data.SqlServer;


[Serializable]
[SqlUserDefinedAggregate(Format.SerializedDataWithMetadata, MaxByteSize = 512)]
public class TempAdder
{
 public void Init()
 {
  // Called once per group to reset the interim total.
  total = new Temperature();
  total.Kelvin = 0;
 }

 private Temperature total;

 public Temperature Total
 {
  get { return total; }
  set { total = value; }
 }

 public void Accumulate(Temperature value)
 {
  // Called once for every row in the set being aggregated.
  this.Total.Kelvin = this.Total.Kelvin + value.Kelvin;
 }

 public void Merge(TempAdder group)
 {
  // Called when a parallel plan splits the aggregation across threads;
  // fold the other aggregator's interim total into this one.
  this.Total.Kelvin = this.Total.Kelvin + group.Total.Kelvin;
 }

 public Temperature Terminate()
 {
  return this.Total;
 }
}
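Since a UDT is just a managed class, you can sanity-check the conversion logic from a plain console harness before deploying it to the server. Here's a minimal sketch (the TemperatureDemo class is just for illustration; the expected values follow from the 273-offset conversions above):

using System;
using System.Data.SqlTypes;

class TemperatureDemo
{
 static void Main()
 {
  Temperature t = Temperature.Parse(new SqlString("0C"));
  Console.WriteLine(t.Kelvin);     // 273 - stored internally as Kelvin
  Console.WriteLine(t.Fahrenheit); // 32
  Console.WriteLine(t.ToString()); // "0C" - the input format is remembered

  TempAdder adder = new TempAdder();
  adder.Init();
  adder.Accumulate(Temperature.Parse(new SqlString("0C")));
  adder.Accumulate(Temperature.Parse(new SqlString("273K")));
  Console.WriteLine(adder.Terminate().Kelvin); // 546
 }
}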


 

Sending Mail through SMTP with Authentication

If you have looked at the process of sending emails from within .NET, odds are pretty good that you have stumbled across the SmtpMail class in System.Web.Mail. To send email, you create a MailMessage object, assign the necessary properties and then use the Send method on SmtpMail. The SmtpMail class can be pointed at any mail server that you would like.

using System.Web.Mail;

MailMessage message = new MailMessage();

message.From = "bjohnson@objectsharp.com";
message.To = "who@ever.com";
message.Subject = "Testing";
message.Body = "This is a test";

SmtpMail.SmtpServer = "mail.server.com";
SmtpMail.Send(message);

So all is well and good, right?  Well, maybe not so much.  What happens if your email server, like all good servers, doesn't allow open relaying?  Instead, it requires that a user id and password be provided.  What I found strange is that the SmtpMail class doesn't include properties like UserId or Password to handle the authentication.  So how is this accomplished?

The answer is to utilize a newly added feature (new to .NET 1.1, that is).  The MailMessage class has a Fields collection, and the necessary authentication information gets added to the fields of the message that is being sent out.  Certainly not where I expected it to be, but smarter people than I designed the class, so I'm sure there was a reason for this approach.  Regardless, it's knowledge that needs to be easily Googlable, hence the post.  An example of the code that adds the three fields follows.

message.Fields.Add("http://schemas.microsoft.com/cdo/configuration/smtpauthenticate",
  
"1"); //basic authentication
message.Fields.Add("http://schemas.microsoft.com/cdo/configuration/sendusername",
   "userid");
//set your username here
message.Fields.Add("http://schemas.microsoft.com/cdo/configuration/sendpassword",
   "password");
//set your password here

 

Getting Something for Nothing - Almost

It's not often that you can get something for nothing. And this case is no exception. But when you realize what you do get, it's still a very, very good deal.

The offer is basically as follows:  buy some Microsoft Office licenses, get a check towards the cost of developing software that utilizes the Office application set.  If this sounds lame to you, then you probably aren't aware of all of the development tools that are available for Office, nor all of the uses to which it can be put.

For example, we have one client that uses InfoPath as one of the interfaces into a workflow application.  Users enter data through an InfoPath form.  The data is transformed into an XML document which is then submitted to the workflow through a web service.  And that is just one application.  The Visual Studio Tools for Office (VSTO) toolkit is surprisingly powerful.  If your user base is familiar with Office, then the ability to extend Word and Excel to interact with your corporate applications can be quite compelling.

If you want to talk to someone about the possibilities, feel free to drop me a line at bjohnson@objectsharp.com.

 

Adopting Bluetooth

As part of my annual hardware reduction project, I got rid of my desktop. For the past number of years, I've used both a laptop and a desktop, and I think this has added a certain amount of disorganization to my life by having files frequently littered between the two machines. The process of cleaning stuff off the desktop was annoying, having to figure out which of the duplicated files was more recent. I'm glad the process is done. One less machine to worry about.

So the other half of this effort is to make my laptop work more like a desktop when I'm at home, and that means a docking station and a full keyboard. I picked up an advanced port replicator for my Inspiron 600m (still waiting on shipment for that). I also picked up a 17" LCD monitor - a Wintergreen??? Hey, it was cheap ($350) and had good specs. Almost too good. While I like the idea of multi-monitor, I tend to only use the external LCD since by comparison it is much easier on the eyes.

For the keyboard, I decided on the Logitech Bluetooth diNovo Media Desktop. I've seen this keyboard/mouse combination priced up to $499 at Future Shop and Best Buy, but picked it up at TigerDirect.ca for $200. The Bluetooth combination includes four devices: a Bluetooth hub/mouse charger, a rechargeable MX900 mouse, the keyboard, and a separate numeric keypad.

The mouse is great, although I have to get in the habit of putting it in the charger when I'm done. It's more accurate and suffers less noise than either of my other optical mice on my wood grain desktop. My old optical mice are a Logitech FeelMan and a plain old Microsoft IntelliMouse Explorer.

The keyboard is more like a laptop keyboard in its thin, flat, sleek shape. It's a very sexy keyboard and comfortable, although I think I still prefer the ergonomic split keyboards. I'm not a big fan of keyboards with a ton of extra keys on them that I never remember to use. This keyboard has a limited set, which I'm more likely to remember.

The separate keypad is quite a different twist on things. It's nice that they took it off the keyboard, which makes the keyboard easier to move around the desk or my lap. The media pad on its own is cool too. It works like a typical numeric keypad, but it also has an LCD display. The pad has three modes: numeric, navigation (like a NumLock mode), and calculator. When in calculator mode, it works like a standalone desktop calculator. The software gives you the option of keeping the last result in the computer's clipboard, which is interesting. The software will also put email and MSN Messenger notices on the LCD and even beep your media pad. It also has a set of media keys for volume, mute, play/pause, stop, back and forward, and will show track names on the LCD.

Now the thing I wanted to do was use this same mouse when I take my laptop on the road. I had purchased a D-Link external USB Bluetooth dongle a while ago, but I opted to buy an internal Dell TrueMobile 300 module on eBay for my laptop. I can't believe how small it is - smaller than a stamp. It was a pain to install - I had to remove the keyboard, bezel, display, and palm rest.

The bigger gotcha was on the software side. I installed the Dell drivers and software, but it didn't detect the device. The Bluetooth light was on no problem and Device Manager showed the device, but the Bluetooth software didn't detect it. I figured this was because I now had two Bluetooth hubs installed, so I uninstalled both the Dell and Logitech software and then reinstalled just the Dell software after disconnecting the Logitech hub. Bingo - that worked. I was able to successfully add the keyboard, mouse, and media pad, along with my iPAQ 5560 and Bluetooth headset.

Without installing any Logitech software, I'm still able to use the extra buttons on the keyboard and mouse, except the mouse's application-switching button. The keypad works fine too, although I'm not receiving any messages on the LCD, nor is the clipboard support working. I tried to install the full Logitech SetPoint software, but it wouldn't continue the install without detecting the media hub, and plugging that in is not something I'm going to try. I can live without those features, and the fact that I'm not running a bunch of extra software doesn't bother me either.

Update: I just noticed on the Logitech site that the SetPoint download for the standalone MX900 says it's been enhanced to support internal Bluetooth modules. I'm downloading that and going to give it a try.

Reading Adobe Illustrator files with Acrobat Reader

Occasionally I need to view an Adobe Illustrator file. I usually end up asking our graphics guy to create a PDF or JPEG, or I search for some freeware tool to convert or display it, usually without much success.

Today I thought I'd try using the new Adobe Reader 7.0 to see if by chance they support that. I was surprised to find out they do! Not only does it work in 7.0, it also works in 6.0; I just never thought to try.

It's not an associated file type, but you can do a File/Open, or select Open With Adobe Reader to view the file.

Caching and Unit Tests

We've been playing with the Caching Application Block here at the client I'm currently working with and have found a couple of important items that will be of interest to anyone doing the same. I should say that we're using NUnit to run our unit tests, so some of the information applies specifically to that. 

Actually, both of the items we ran into had to do with the storage mechanism that we used.  The Caching Application Block includes implementations for three separate modes:  in process, memory-mapped files and SQL Server.  These three modes basically mirror the functionality available with the Cache object in ASP.NET. 

For all of the tests run to this point in the project, we had been using the in-process mode both in our unit tests and in the builds that have been sent to the system verification team.  However we recently made the choice to share the caching across web services, so we switched to the memory-mapped file mode. 

Oops. 

First off (and it should have been obvious), there is a different serialization technique used with memory-mapped files.  That is to say, the in-process cache doesn't use serialization at all, while the memory-mapped version serializes and deserializes the object as part of its default processing.  This change in caching mechanism should have sent us back to our unit tests to update the storage type used there, since a couple of issues might very well have been exposed.  And after just a couple of minutes of seeing unexpected errors occur in unusual places, we did just that.

Second oops.

If you didn't realize it, NUnit depends a great deal on the AppDomain model to work its magic.  This allows things like the config file to be taken from the DLL's home directory.  A completely new AppDomain is created for each run of the tests.  So if, for instance, you had created an in-process cache in one test, that cache wouldn't be there the next time the tests were run.  New AppDomain = new memory = no old variables.

But what if you were to change the storage mechanism to a memory-mapped file, a mechanism that is specifically designed to share data across AppDomains?  Now NUnit does its AppDomain creation thing and pow!  The cached value already exists.  Imagine our surprise.  Imagine how good you feel not having to go through it.

As a pointer, calling the Flush method in the test setup section of NUnit is a good way to avoid this problem in the future.  Just something for you to consider.
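Here is a minimal sketch of the idea.  The names follow the Enterprise Library flavour of the Caching Application Block (CacheFactory, CacheManager, Flush), so treat them as assumptions and adjust for the version of the block you are actually running:

using NUnit.Framework;
// Assumed namespace; the Enterprise Library flavour of the block.
using Microsoft.Practices.EnterpriseLibrary.Caching;

[TestFixture]
public class OrderCacheTests
{
 private CacheManager cache;

 [SetUp]
 public void SetUp()
 {
  cache = CacheFactory.GetCacheManager();

  // With memory-mapped storage, entries survive across AppDomains,
  // so flush anything left behind by a previous test run.
  cache.Flush();
 }

 [Test]
 public void CacheIsEmptyAtTestStart()
 {
  Assert.IsNull(cache.GetData("customerList"));
 }
}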

The Memory Reclamation Phase

The project that I have been working on for the past few months has now entered its performance tuning phase.  This means that code is being gone over in great detail, looking for areas where speed gains can be realized.  Because there is a high concentration of former C++ developers here (the project is done in C#), I have been getting a number of questions regarding the allocation and deallocation of memory.  Questions like when does it happen and how can memory be reclaimed faster.

Whether you are coming from a VB or C++ background, you should be aware of the allocation and garbage collection process in .NET, for no other reason than that the things you might have done for optimization in the past may very well be completely wrong.  Two examples:  setting variables to null (based on the Set var = Nothing best practice in VB) and implementing finalizers (a common memory deallocation technique in C++).  For an excellent description of the reasons behind not doing this (and other practices), check out http://weblogs.asp.net/ricom/archive/2003/12/02/40780.aspx.  And if you feel compelled to use the GC.Collect method, read http://blogs.msdn.com/ricom/archive/2004/11/29/271829.aspx.  Very useful information indeed.
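To make the finalizer point concrete, here is a sketch of the direction those articles push you in: release unmanaged resources deterministically through IDisposable and keep the finalizer as a safety net only, suppressing it once Dispose has run.  The ResourceHolder class and its handle field are invented for illustration.

using System;

// Hypothetical wrapper around an unmanaged resource, for illustration only.
public class ResourceHolder : IDisposable
{
 private IntPtr handle;   // the unmanaged resource being wrapped
 private bool disposed = false;

 public void Dispose()
 {
  if (!disposed)
  {
   // Release the unmanaged resource deterministically here
   // (e.g. a CloseHandle P/Invoke call on the handle field).
   disposed = true;

   // Tell the GC the finalizer no longer needs to run, so the
   // object isn't kept alive for an extra collection pass.
   GC.SuppressFinalize(this);
  }
 }

 // Safety net only.  Objects with finalizers are more expensive to
 // collect, which is why the C++ habit of always finalizing hurts.
 ~ResourceHolder()
 {
  Dispose();
 }
}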

Improving the Performance of Asynchronous Web Service Calls

Notice that the name of this post doesn't say that web service performance can be improved through asynchronous calls.  That is on purpose.  This particular post deals with a limitation that impacts applications that utilize async web service methods - actually, it has the potential to impact any application that interacts with the Internet.  This is one time I wish my blog were more widely read, because I can pretty much guarantee that there are thousands of developers who are unaware that the nice async design they've implemented isn't having the performance-boosting effect they expected.  And thanks to Marc Durand for pointing this out to me.

The limitation I'm talking about is one that is buried in the HTTP/1.1 specification (RFC 2616).  To keep a single client from overrunning a server, the spec says that a client should maintain no more than two simultaneous connections to any given server, and .NET enforces that guideline as its default.  What this means is that if your application makes three async calls to the same server, the third call will be blocked until one of the first two is finished.  I know that this came as a big surprise to me.

Fortunately, there is a configuration setting that can adjust this number without requiring any coding changes.  In the app.config (or machine.config) file, add a connectionManagement tag.  Within connectionManagement, the add tag is used to specify the optional server and the maximum connections allowed.  The following example allows up to 10 simultaneous connections to be made to 216.221.85.164 and 40 to any server.

<configuration>
  <system.net>
    <connectionManagement>
      <add address="216.221.85.164" maxconnection="10" />
      <add address="*" maxconnection="40" />
    </connectionManagement>
  </system.net>
</configuration>

For those who like to program, you can accomplish the same behavior on a temporary basis using the ServicePointManager.DefaultConnectionLimit property.  But I was never big into adding code when I didn't have to.
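For completeness, the programmatic route looks something like this (the limit of 40 is arbitrary, mirroring the config example above):

using System.Net;

// Equivalent to the <add address="*" maxconnection="40" /> entry above;
// affects service points created after this line executes.
ServicePointManager.DefaultConnectionLimit = 40;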

FileNotFoundException in ASP.NET Web Services

As you work with web services in ASP.NET, it is likely that you will eventually come upon a weird file-not-found exception.  The actual text of the exception will look something like "File or assembly name jdiw83ls.dll, or one of its dependencies, was not found".  Now, since it is unlikely that you have a project named jdiw83ls in your solution, the source of this exception and its resolution will be hard to see.

To understand where the message is coming from requires a little detail about how ASP.NET web services move complex objects across the service boundary.  If you pass a complex object to a web service, the client side serializes the class into an XML document.  On the receiving side (the server), the inbound XML document is deserialized back into a class.  This knowledge is part of Web Services 101.
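For context, this is the kind of method that triggers the machinery; the Customer class and the method name are invented for illustration:

using System;
using System.Web.Services;

// A complex type crossing the web service boundary forces the
// XmlSerializer to generate, compile, and load a serialization assembly.
public class Customer
{
 public string Name;
 public DateTime FirstOrder;
}

public class CustomerService : WebService
{
 [WebMethod]
 public void SaveCustomer(Customer customer)
 {
  // ...
 }
}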

However, what happens when the class can't be serialized cleanly?  To perform the serialization, .NET generates a serialization class on the fly, compiles it into an assembly with a random name (hence jdiw83ls.dll) and loads it.  If the generated code fails to compile, the subsequent attempt to load the assembly produces the file-not-found exception described above.  To diagnose the problem, you can tell .NET to leave the generated source files in your temp directory, where the compiler output can be examined, by adding a diagnostics switch to the config file:

<configuration>
    <system.diagnostics>
         <switches>
            <add name="XmlSerialization.Compilation" value="4"/>
         </switches>
    </system.diagnostics>
</configuration>

 

Web Services and Proxy Servers

I'm currently working with a team that is putting the finishing touches on a commercial application that uses web services as a major component of the infrastructure.  As part of putting the client portion through its paces, an environment that included a proxy server was tested.  The initial result was not surprising to anyone who has significant experience with web services:  a "connection could not be made" exception was thrown.  The reason for this problem is that the method for defining a proxy server to the web service proxy class on the client side is a little unexpected.  And I apologize in advance for having to use the same word (proxy) to describe two separate items (the server and the web service class).

If you are running Internet Explorer, then the place where the proxy server settings are configured is the Tools | Internet Options dialog.  Within the dialog, click on the Connections tab and the LAN Settings button.  Here you can specify whether proxy server settings are required and which server and port are used.  You can also identify whether local addresses are exempt from the proxy process.  This is fairly well known information, especially if you normally operate from behind a proxy server.

The disconnect occurs when you make calls to web services from this same machine.  Under the covers, the HttpWebRequest class is used to transmit the call to the desired server.  However, even when the proxy server information has been specified as just described, those details are NOT used by HttpWebRequest.  Instead, by default, no proxy is used to handle the request.  This is what causes the connection exception.  I know that the first time I ran into this issue, it was a surprise to me.

There are two possible solutions to this problem.  The first is to assign a WebProxy object to the Proxy property on the web service proxy class.  You can programmatically specify the proxy server information, but hard-coded values like that are difficult to modify.  Instead of creating the WebProxy object on the fly, use the GetDefaultProxy static method on the System.Net.WebProxy class.  This method reads "the nondynamic proxy settings stored by Internet Explorer 5.5" and returns them as a WebProxy object that can be assigned to the Proxy property.  This technique works fine assuming that your client has IE installed and that none of the scripts run at login time have modified the proxy settings.
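In code, that looks something like the following; OrderService is a placeholder for whatever proxy class was generated for your web service:

using System.Net;

OrderService service = new OrderService();

// Pick up the (static) IE proxy settings rather than hard-coding them.
service.Proxy = WebProxy.GetDefaultProxy();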

There is an alternative that is a little more flexible.  There is an app.config element called defaultProxy that is part of the system.net configuration section.  Within this element, the details of the proxy server can be specified independently of the IE settings.  The basic format of the defaultProxy element can be seen below.

<system.net>
   <defaultProxy>
      <proxy
         proxyaddress = "http://proxyserver:80"
         bypassonlocal = "true"
      />
   </defaultProxy>
</system.net>

The proxy element also includes an attribute called usesystemdefault which, when set to true, causes the IE settings to be used.  The benefit of this approach is that no additional coding is required.  The settings defined in defaultProxy are used by HttpWebRequest (and, therefore, web service calls) with no change to the application required.

As an added benefit, this element can be added to machine.config, which allows it to be applied to every HttpWebRequest made from the client machine, regardless of the application.  Keep in mind, though, that a developer could still assign the Proxy property in code, overriding this nice, easily configurable default.