The Code Behind Model of ASP.NET

Almost from its beginning, HTML has been a mixture of data, display and logic.  While the original static pages might have combined only data and display, once scripting was introduced the three disciplines have had to live together uncomfortably.  ASP only made the situation worse by introducing a different location (the server) where the scripting code could be executed. 

Ugly doesn't adequately describe this problem.  There is little possibility for functional reuse in this scenario.  It is difficult even to modify existing business logic, much less make sure that it's capable of being used by other components. This is not how enterprise-class applications are supposed to be created.

While ASP.NET supports this older processing model (known as in-line), it also introduced a newer model known as code-behind.  While this particular model doesn't eliminate the data/display co-mingling, it does take the business logic away from the web page.  Instead, the events are handled by methods in a separate class: the code-behind class. Let's take a brief look at how these pieces (the ASPX page and the code-behind class) get wired together.

<%@ Page Language="C#" Inherits="ObjectSharp.WebPageClass" %>
<html>
    <body>
        <form runat="server">
            <asp:TextBox id="MyTextBox" runat="server" />
            <asp:Button id="MyButton" runat="server" Text="Submit" OnClick="MyButton_Click" />
            <asp:Label id="MyLabel" runat="server" />
        </form>
    </body>
</html>

Above is a standard, if simplistic, ASPX page.  Were it not for that first line, it would look pretty much like a run-of-the-mill HTML page.  And it's that first line that brings the code-behind class into play.

That first line is known as the Page directive.  When the ASPX file is processed, the directive is interpreted to mean that a) the language in any of the script blocks in the file will be C# and b) the processor should use methods in ObjectSharp.WebPageClass to handle the events raised by the web form.  As an example, this means that the MyButton_Click method (seen above as the OnClick event handler for MyButton) would be implemented in the ObjectSharp.WebPageClass class.

using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace ObjectSharp
{
    public class WebPageClass : Page
    {
        protected System.Web.UI.WebControls.Label MyLabel;
        protected System.Web.UI.WebControls.Button MyButton;
        protected System.Web.UI.WebControls.TextBox MyTextBox;

        // Fires when MyButton is clicked; copies the textbox contents into the label
        public void MyButton_Click(Object sender, EventArgs e)
        {
            MyLabel.Text = MyTextBox.Text;
        }
    }
}

Above is an example of a code-behind file for that example ASPX page.  Again, there are a couple of points worth noting. First off, the class itself inherits from System.Web.UI.Page.  This is the base class for all ASP.NET code-behind classes.  The fully qualified name for this particular class is ObjectSharp.WebPageClass, which matches the Inherits attribute in the Page directive.  There is a public method called MyButton_Click that will be invoked whenever the MyButton control on the web page is clicked. 

The final item of note for this class definition is the three protected variables MyLabel, MyButton and MyTextBox.  You might have noticed that they correspond, by name and type, to the three elements of the example ASPX page that have asp: as the element's namespace qualifier.  That is not a coincidence. By utilizing that asp: qualifier, a corresponding object is created in the code-behind.  Then, when the properties of that object are manipulated within the code-behind class, the values on the web page are modified as well. 

In the example, the MyButton_Click method sets the Text property on MyLabel.  As a result, the MyLabel control on the page that is sent back to the browser will have its text updated.  This mapping is managed automatically by ASP.NET and has the effect of moving the web page development model much closer to the Windows Forms model.  While you can accurately claim that as an abstraction the ASP.NET code-behind model is leaky, as a starting point for ASP.NET developers, it is nice to have a familiar base from which to build.
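The resemblance to Windows Forms extends beyond click handlers. As a purely hypothetical sketch (none of this appears in the sample above), a Page_Load method could be added to WebPageClass to initialize the controls on the first request. This assumes AutoEventWireup is left at its default of true, so the method gets wired up by name:

        // Hypothetical addition to WebPageClass: runs on every request, much like
        // a Windows Forms Load handler. IsPostBack distinguishes the initial request
        // from the postback caused by clicking MyButton.
        protected void Page_Load(Object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                MyLabel.Text = "Enter some text and click the button";
            }
        }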

 

Licensing VSTS

If you're at all interested in using VSTS in your company, the announcement of the pricing for Team Systems was probably quite disturbing.  More so if the developers on your team number in the single digits.  Fortunately, Microsoft listened to the feedback (some might call it backlash, but that's just semantics).  Here is an announcement by Rick LaPlante outlining some of the changes to Team Systems pricing.  Being able to get VSTS 'Lite' (that is, for fewer than 5 users) will greatly increase the market for the application.  And given the issues that VSTS is aiming to solve, the wider the reach of the tool, the better.

New Article - Optimizing the Downloading of Large Files in ASP.NET

Just to let everyone know, I have posted a new article on techniques that can be used to optimize the downloading of large files in ASP.NET.  It discusses some of the architectural issues that impact download speed if you need to push multi-megabyte files to a browser client. If you're interested, you can find it here.  As always, comments and suggestions are appreciated.

Sending Mail through SMTP with Authentication

If you have looked at the process of sending emails from within .NET, odds are pretty good that you have stumbled across the SmtpMail class in System.Web.Mail.  To send email, you create a MailMessage object, assign the necessary properties and then use the static Send method on SmtpMail.  The SmtpMail class can be pointed to any mail server that you would like through its SmtpServer property. 

using System.Web.Mail;

MailMessage message = new MailMessage();

message.From = "bjohnson@objectsharp.com";
message.To = "who@ever.com";
message.Subject = "Testing";
message.Body = "This is a test";

SmtpMail.SmtpServer = "mail.server.com";
SmtpMail.Send(message);

So all is well and good, right?  Well, maybe not so much.  What happens if your email server, like all good servers, doesn't allow open relaying?  Instead, it requires that a user id and password be provided.  What I found strange is that the SmtpMail class doesn't include properties like UserId or Password to handle the authentication.  So how is this accomplished?

The answer is to utilize a newly added feature (new to .NET 1.1, that is).  The MailMessage class has a Fields collection.  The necessary authentication information gets added to the fields in the message that is being sent out. Certainly not where I expected it to be, but smarter people than I designed the class, so I'm sure there was a reason for this approach.  Regardless, it's knowledge that needs to be easily Googlable, hence the post. An example of the code that adds the three fields follows.

message.Fields.Add("http://schemas.microsoft.com/cdo/configuration/smtpauthenticate",
  
"1"); //basic authentication
message.Fields.Add("http://schemas.microsoft.com/cdo/configuration/sendusername",
   "userid");
//set your username here
message.Fields.Add("http://schemas.microsoft.com/cdo/configuration/sendpassword",
   "password");
//set your password here

 

Pure Math III - The SHA Also Rises

While perusing Slashdot, I came across an entry saying that the SHA-1 hashing algorithm has been broken.  If you have any familiarity with cryptography, you'll realize just what this means.  After all, SHA-1 is at the heart of SSL, digital signatures and (wait for it) strongly named .NET assemblies.  So does this signify the end of days?  Total anarchy? Cats and dogs living together? 

Not so much.  From a practical perspective, what has really happened is that brute force is no longer the best way to find two strings that hash to the same value. Consider for a moment what the job of a hashing algorithm is:  to take a given string of arbitrary length and convert it to another string of a specified length (160 bits for SHA-1).  A good hash should come as close as possible to a one-to-one correspondence between the initial string and the hashed string.  It should also not be possible to regenerate the original string given only the hash.
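To make the fixed-length property concrete, here's a minimal C# sketch (the input strings are just arbitrary placeholders): whatever goes in, 160 bits come out.

using System;
using System.Security.Cryptography;
using System.Text;

class Sha1Demo
{
    static void Main()
    {
        SHA1 sha = SHA1.Create();

        // A three-character string and a 100,000-character string both
        // hash down to the same fixed length: 160 bits (20 bytes).
        byte[] shortHash = sha.ComputeHash(Encoding.UTF8.GetBytes("abc"));
        byte[] longHash = sha.ComputeHash(Encoding.UTF8.GetBytes(new string('x', 100000)));

        Console.WriteLine(shortHash.Length * 8);               // 160
        Console.WriteLine(longHash.Length * 8);                // 160
        Console.WriteLine(BitConverter.ToString(shortHash));   // the hash itself
    }
}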

Of course, complete one-to-oneness is impossible.  There are more strings of arbitrary length than there are of a specified length.  When two strings hash to the same value, it is called a collision.  And it means that when either of those strings is used, for example, as part of a digital signature, the results will be the same.  If it were easy to go from a hashed value to any string that would generate the hash, the algorithm would be poor.

For a few years, it was believed that the best way to find a collision was brute force.  That is, randomly generate strings until two of them hashed to the same value.  What it means to say that SHA-1 is broken is that brute force is no longer the fastest way to a collision.  The researchers have described a technique that reduces the effort by a factor of roughly 2000.  That means that instead of 2^80 calculations, only 2^69 are required. Still not a meager number, but quite a breakthrough. 

A more complete description of the implications can be found here.

Changing Gears

First off, I apologise for my relative dearth of recent posts.  At the end of January, I finished off a fairly long contract. And, as yet, I haven't started on anything new.  Actually, it has been a nice break for me, but for my blog?  Not so much.

Whether you realize it or not, I use the current project that I'm working on as fodder for many of my posts.  The goal of most of my posts is to describe a situation that I have run into, one that I hope is a relatively common one.  For that, I need situations.  No current project means far fewer bloggable situations.

What I have been spending my time on is getting back into ASP.NET.  And what I apparently blocked out of my mind is some of the annoyances associated with data binding on web pages.  Especially the two-way binding necessary to allow web pages to show and update data.  I'm very much looking forward to the improvements in this process that will be forthcoming with Whidbey.

Standards at the speed of thought

I've had to deal with comment spam in the ObjectSharp blogs over the past few days.  Through a Google search, I found a simple, trigger-based solution for .Text (the engine that we use) that I suspect will deal with the majority of the spam that was coming through.  But, as explained in this post, Google is modifying their ranking engine to pay attention to a newly created attribute (rel="nofollow") on the anchor tag that will basically negate the benefit of comment spam, that being to artificially raise the Google rank of the offending links.  What impressed me is the speed with which this innovation was implemented, not only by Google but by the list of blog hosts and competitive search engines at MSN and Yahoo.  Whoa. Would that other standards could work that way.

Grabbing the contents of your clipboard

Want to see something that's a little freaky?  Check out this post. Apparently you can grab the current contents of a user's clipboard through JavaScript running in a browser.  While this probably isn't as much a security issue as some might think (after all, there really is no context for the data that is retrieved), it is interesting that it's even possible.  Thanks to Blair for the heads up.

Should DSLs use UML?

I have been quietly following the ongoing conversations regarding Domain-Specific Languages (DSLs) and whether UML should be used as the mechanism to describe them.  My UML and DSL knowledge is not nearly as deep as that of some of my colleagues, but I believe that the noise level from this space will only increase over the next 12-18 months.

As part of my blog reading (via Don Box), I came across a number of postings from Grady Booch and Alan Cameron Wills talking about both sides of the issue. But to me, the most interesting part was actually in one of the comments to Alan's post.  Specifically, Lloyd Fischer says:

Anyway, we spent a lot of time thinking about visual representations of software. It turned out that the times we were successful was when the system in question *already* had a visual representaion. Examples are piping and instrumentation diagrams, electronic schematic diagrams, ladder logic diagrams, etc.

In those cases where we tried to create new visual representations we failed. The "business users" invariably rejected our attempts to turn their knowledge into diagrams because they were unnatural to experts in the field. The fact that they had not yet created such diagrams was a telling sign that no such representation was possible. Those fields where such representations were useful had long ago created them.

That is probably the best argument in favor of using something *other* than UML to describe DSLs that I have heard.  That UML (or diagrams, in their world) doesn't fit with the mental model that the domain experts already have is very telling.  After all, the experts aren't constrained by anything when it comes to describing their world outside the realm of software.  They have whiteboards, diagramming tools, everything that a designer would need.  If that were the way they wanted to go, they'd already be there.  And yet they're not.  I think this is one case where we need to listen to the experts and find a way to represent the mental model that they have spent years developing.  Seems more productive than forcing our way upon them.

Refactoring for VB.NET 2005 Going Away?

Well, not completely.  You have to read a little bit into this post, but it appears that the only support for refactoring in VB.NET Whidbey is the Rename Symbol function. To me, this means that one of the major differences between C# and VB.NET in Whidbey will be refactoring support, as C# Refactoring implements a few more functions. 

By the way, this shouldn't completely surprise anyone.  Check out the following post from a year ago.  It describes the refactoring features that C# Whidbey will support (subject to change, of course).  But in one of the comments from Scott Wiltamuth (a C# Product Unit Manager) it was suggested even then that VB.NET might not get much more than Rename Symbol. 

Very prescient.