The Fine Line Between Insanity and Clarity

The BBSM (Building Broadband Service Manager) is a Windows 2000 box that acts as a gateway to the internet for customer access.  It handles that login page you see when you connect to the open WiFi network.  It is the most convoluted piece of [insert noun here].  The guy who signs my paycheck asked me a few weeks back to redesign said login page in keeping with corporate designs.  He also requested that it be mobile-browser friendly.  The stack: Classic ASP, running JScript (yes, JScript), in IIS 5 on Windows 2000 behind ISA Server 2000.  The new layout was done in about an hour, and it looks pretty good.  It has been three weeks and I still can't get the freakin' mobile code working.

In a moment of insanity (clarity?) I got the bright idea to install .NET on the box and rewrite all the pages from scratch.  Rewriting took a couple of hours, and the mobile support works.  Then I went to set it up on the box (which must be done via USB key, via Ops guy, via physically walking to the box in the DataCentre, which I don't have access to) and found permission errors for the ASPNET account doing COM stuff.  Needless to say, I hate COM Interop with a passion.  I even sank to the level of giving the ASPNET account full admin privileges.  Turns out Windows 2000 doesn't like COM Interop either.

"It looks nice if you use a laptop" was my statement to the boss.  His response was "everyone is using PDA's and their iPhones.  Maybe 10 customers use laptops."

Moral of the story: If the original code was written in the same year you turned 11, run.  Quickly.

Consulting to Salary – Theoretical Head Banging Meets the Real World

A few weeks ago, six or so, I was offered a position as a Software Developer for the Woodbine Entertainment Group.  The position looked appealing so I accepted the job offer.  I am in a probationary period for the next four months and a bit.  Anything I say can be grounds for firing me.  Never liked that part about non-contract jobs.  Ah well.

Woodbine is an interesting company.  I knew very little about it until I got word of the job.  Seems I was the only one in Canada who didn't know the company.  My grandmother, who moved to California 50 years ago, knew about it.  She even used to bet there – well, at the Woodbine Race Track, before it moved.  It has an interesting history.

The company is migrating to being a Microsoft shop from a more Novell-focused infrastructure.  We are working towards standardizing on .NET for our custom applications.

The one thing that caught my eye with Woodbine is that the company is the technology leader for Horse Racing.  Not just in Canada, but throughout the world.  Our services can let you place a live bet on a track in Australia and see results immediately.  Can you imagine the infrastructure required for such a feat?  It's sweet!  The business people behind this are really keen on letting technology do its thing, so we can make money.  Lots of money.  See our Annual Reports on that.  Check back for the latest numbers.

Now, some of you may have noticed that our Corporate Portal is written in what looks to be Classic ASP.  For all intents and purposes, it is.  Archive.org shows the portal went live in 2001 and had a major rebuild in 2003.  Since then, incremental changes have taken place, most of which have been built using ASP.NET.  We are working on the new portal.  All I can say at the moment is: it's going to be awesome.  So awesome that a new word will need to be created to contain all of its awesomeness.  HorsePlayer Interactive is pretty amazing, but I'd like to think this new site will be just that much more awesomer.  Yes, I said awesomer.

As for the nature of this site, it won’t change.  I’ll still post my thoughts and experiences.  I might need to change stories a little to protect the innocent, but it’s all in good fun.  I may be forced to post details of how horse racing actually works, because I’m still not sure I get all the facets of it.  In time.

More to follow.

Windows LiveID Almost OpenID

The Windows Live team announced a few months ago that their Live ID service will become a provider for the OpenID system.  The Live team was quoted:

Beginning today, Windows Live™ ID is publicly committing to support the OpenID digital identity framework with the announcement of the public availability of a Community Technology Preview (CTP) of the Windows Live ID OpenID Provider.

You will soon be able to use your Windows Live ID account to sign in to any OpenID Web site.

I saw the potential in OpenID a while ago, long before I heard about Microsoft’s intentions.  The only problem was that I didn’t really find a good way to implement such a system on my website.  Not only that, I didn’t really have a purpose for doing such a thing.  The only reason anyone would need to log into the site would be to administer it.  And seeing as I’m the only person who could log in, there was never a need.

Then a brilliant idea hit me: let users create accounts to make comment posting easier.  Originally, a user would leave a comment, and I would log in to verify it, at which point the comment would actually show up.  Sometimes I wouldn't log in for a couple of days, which meant no new comments.  Now, if users want to post a comment, all they have to do is log in with their OpenID, and the comment appears immediately.

Implementing OpenID

I used the ExtremeSwank OpenID Consumer for ASP.NET 2.0.  The beauty of this framework is that all I have to do is drop a control on a webform and OpenID functionality is there.  The control handles all the communications, and when the authenticating site returns its data, you access that data through the control's properties.  To handle the authentication on my end, I tied the values returned from the control into the Forms Authentication mechanism I already had in place:

if (OpenIDControl1.UserObject != null)
{
    // First time seeing this OpenID identity? Create a Membership account for it.
    if (Membership.GetUser(OpenIDControl1.UserObject.Identity) == null)
    {
        string email = OpenIDControl1.UserObject.GetValue(SimpleRegistrationFields.Email);

        string username;
        if (HttpContext.Current.User.Identity != null)
        {
            username = HttpContext.Current.User.Identity.Name;
        }
        else
        {
            username = OpenIDControl1.UserObject.Identity;
        }

        MembershipCreateStatus membershipStatus;
        MembershipUser user = Membership.CreateUser(
            username,
            RandomString(12, false),  // throwaway password; login happens via OpenID
            email,
            "This is an OpenID Account. You should log in with your OpenID",
            RandomString(12, false),  // throwaway security answer
            true,
            out membershipStatus
        );

        if (membershipStatus != MembershipCreateStatus.Success)
        {
            lblError.Text = "Cannot create account for OpenID Account: "
                + membershipStatus.ToString();
        }
    }
}
That’s all there is to it.
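
One note: the RandomString calls above refer to a helper method, not anything built into the Membership API.  A minimal sketch of one possible implementation (my actual version may differ) looks like this:

// Requires System.Text for StringBuilder.
private static string RandomString(int length, bool lowerCaseOnly)
{
    // Build a random alphanumeric string to act as a throwaway password
    // and security answer, since these accounts only ever log in via OpenID.
    const string chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    Random random = new Random();
    StringBuilder builder = new StringBuilder(length);
    for (int i = 0; i < length; i++)
    {
        builder.Append(chars[random.Next(chars.Length)]);
    }
    return lowerCaseOnly ? builder.ToString().ToLower() : builder.ToString();
}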

Zune Player

At first I was a little skeptical of the quality of the Zune Player, as it's basically in direct competition with Windows Media Player.  In retrospect, that competition is probably what made it what it is today.  It's designed to sync with the Zune media player, and it works very well as an alternative to Media Player.  There are a couple of problems that I have with it though:

  • It’s a resource hog
  • It requires a good video card to show effects
  • It sorts things in weird ways when metadata is missing

The Zune Player is built on .NET.  It has a very big initial memory footprint, though it gets better as it settles into place.  My assumption is that it's using WPF to make itself look pretty, and that's where all the effects come from.  As a result, some video cards aren't capable of handling the effects renderings.  For instance, my laptop's video card just dies when the effects are on.  Zune will turn them off if need be.  If the Album Artist metadata tag is empty, Zune sticks "Various Artists" in its place.  Zune sorts based on Album Artist in the default view, so when I loaded my library into it, a whole whack of songs were under Various Artists, which isn't all that useful.  With that said, all the (legally) downloaded content had proper metadata tags and was sorted perfectly.

However, with all the negatives come a few gems.  Sorting is a breeze.  Playlists are extremely easy to build.  Filtering works.  And it's a really slick UI.

[Screenshot: the Zune software's library view]

I'm a little miffed that the band image is pixelated, but all the extra info brought with it makes up for it completely.  Talk about slick.

[Screenshot: the Zune software's artist view]

It also makes decent random-playlist decisions.  On the UI and UX side of things, it gets the job done.  It's pretty stable; it hasn't blown up on me yet.  I give it my approval.  Check it out: www.microsoft.com/zune.

What Makes us Want to Program? Part 4

In my previous post, I started talking about using Microsoft technologies over PHP and open source technologies.  There were a couple of reasons why I chose to make the move.  First, from a development perspective, everything was object oriented.  PHP was just getting started with OOP at the time, and it wasn't all that friendly.  Second, development time was generally cut at least in half, because of the built-in controls of ASP.NET.  Third, the end result was a richer application experience for the same reason.  The final reason comes down to the data aspect.

Pulling data from a database in PHP wasn't easy to do.  The built-in support was for MySQL, with very little, if anything, for SQL Server.  That isn't always a bad thing: MySQL is free, and you can't argue with that.  However, MySQL at the time wasn't what you would call ACID compliant; its default storage engine didn't guarantee transactions that are Atomic, Consistent, Isolated, and Durable.  Essentially, when data goes missing, there is nothing you can do about it.  SQL Server, on the other hand, is fully ACID compliant.  This is something you want.  Period.
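
To make the ACID point concrete, here's a minimal sketch of the guarantee I'm talking about (the Accounts table and connection string are made up for illustration): two dependent writes wrapped in a SQL Server transaction either commit together or disappear together.

using System.Data.SqlClient;

string connectionString = "..."; // your SQL Server connection string

using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    {
        try
        {
            new SqlCommand("UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1", conn, tx).ExecuteNonQuery();
            new SqlCommand("UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2", conn, tx).ExecuteNonQuery();
            tx.Commit();   // durable: both updates persist together
        }
        catch
        {
            tx.Rollback(); // atomic: neither update sticks
            throw;
        }
    }
}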

Once .NET 2.0 was released, a whole new paradigm came into play for data in a web application.  It was easy to access!  Little to no boilerplate coding was necessary for data access anymore.  Talk about a selling point.  Especially when the developer in question is 16 going on 17.
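
As a rough sketch of what that looked like (the table, the connection string name, and the GridView are placeholders, not from a real project), binding query results to a grid took only a few lines of code-behind:

using System.Configuration;
using System.Data;
using System.Data.SqlClient;

protected void Page_Load(object sender, EventArgs e)
{
    // Fill a DataTable and hand it to a GridView on the page;
    // no hand-rolled reader loops or mapping code required.
    string connectionString = ConfigurationManager.ConnectionStrings["MyDatabase"].ConnectionString;
    using (SqlDataAdapter adapter = new SqlDataAdapter("SELECT ProductID, ProductName FROM Products", connectionString))
    {
        DataTable products = new DataTable();
        adapter.Fill(products);
        ProductsGrid.DataSource = products;
        ProductsGrid.DataBind();
    }
}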

Now that I didn't need to worry about data access code, I could start working on figuring out SQL.  At the time, T-SQL scared the crap out of me.  My brain just couldn't work around datasets.  The idea of working with multiple pieces of data at once was foreign.  I understood single-valued iterations; a for loop made sense to me.  SELECTs and JOINs confused me.  Mind you, I didn't start Statistics in math until the following year.  Did SQL help with statistics, or did statistics help me finally figure out SQL?  It's a chicken-and-egg paradox.

So here I am, 17 years old, understanding multiple languages, building dozens of applications, and attending developer conferences all the while managing my education in High School.  Sweet.  I have 3 years until the next release of Visual Studio comes out.  It was here that I figured I should probably start paying more attention in school.  It’s not so much that I wasn’t paying attention, it’s just that I didn’t care enough.  I put in just enough effort to skate through classes with a passing mark.  It was also at this point in time that I made an interesting supposition.

Experts tend to agree that people who are programming geniuses are also good at math and critical thinking or reasoning.  Not one or the other, but both.  Now I’m not saying I’m a programming genius, but I suck at math.  It was just never in the cards.  But, according to all those High School exams and the psychological profiling they gather from them, my Critical Thinking and Reasoning skills are excellent.  Top 10% in Canada according to the exam results.  My math skills sit around top 20-30% depending on the type.

Neurologists place this type of thinking in the left hemisphere of the brain.  The left brain is associated with verbal, logical, and analytical thinking.  It excels in naming and categorizing things, symbolic abstraction, speech, reading, writing, and arithmetic.  Those who live in the left brain are very linear.  Perfect for a software developer.

The supposition I made had more to do with the Pre-Frontal Cortex of the brain.  It does a lot of work, some of which is planning complex cognitive behaviors.  Behaviors like making a list, calculating numbers, abstracting thoughts, etc.  It plans out the processes our brains use to get things done.  This is true for both sides of the brain.  So, suppose you are left-brain-oriented.  You are predisposed to be good at development.  Now, suppose your Pre-Frontal Cortex is very well developed, more so than the average person's.  It could be reasoned that part of being a programming genius is having a well-developed Pre-Frontal Cortex.

So why does this make us want to program?  Find out in Part 5.

What Makes us Want to Program? Part 3

In my second post I discussed my run-in with ASP, and how PHP was far better.  I ended the post talking about an invitation to a Microsoft event.  This was an interesting event.  Greg and I were the only people under 30 there.  When that's a 15-year age difference, things get interesting.  Especially when you need your mother to drive you there…  The talk was a comparison between Microsoft-based technologies and Linux-based technologies.  The presenter was a 10-year veteran of IBM who worked on their Linux platform and then moved to Microsoft.  For the life of me I can't remember his name.

His goal was simple: disprove myths around Linux costs versus Windows costs.  It was a very compelling argument.  The event was based around the Windows Compare campaign.  It was around this time that Longhorn (the Longhorn that turned into Vista, not Server 2008) was in pre-beta, soon to go beta, and after discussing it with Greg, we decided to probe the presenter for information about Longhorn.  In a situation like that, the presenter either gets mad or becomes really enthusiastic about the question.  He certainly didn't get mad.

Throughout the rest of the talk, the presenter made some jokes at Greg's and my expense, all in good fun.  Based on that, we decided to go one step further at one of the breaks and ask how we could get the latest Longhorn build.  The conversation went something like this:

Me: So how do people get copies of the latest build for Longhorn?
Presenter: Currently those enrolled in the MSDN Licensing program can get the builds.
Me: Ok, how does one join such a licensing program?
Presenter: Generally you buy them.
Me: How much?
Presenter: A couple thousand…
Me: Ok, let me rephrase the question.  How does a student, such as myself and my friend Greg here, get the latest build of Longhorn when we don't have an MSDN subscription, nor the money to buy said subscription?
Presenter: *Laughs* Oh.  Go talk to Alec over there and tell him I said to give you a student subscription.
Me:  Really?  Cool!

Six months later, Greg and I somehow got MSDN Premium Subscriptions.  We had legal copies of almost every single piece of Microsoft software ever commercially produced.  Visual Studio 2005 was still in beta, so I decided to try it out.  I was less than impressed with Visual Studio 2003, but really liked ASP.NET, so I wanted to see what 2005 had in store.  At the time PHP was still my main language, but after the 2005 beta, I immediately switched to C#.  I had known about C# for a while, and understood the language fairly well.  It was .NET 1.1 that never took for me.  That, and I didn't have a legal copy of Visual Studio 2003 at the time.

Running a Longhorn beta build, with Visual Studio 2005 beta installed, I started playing with ASP.NET 2.0, and built some pretty interesting sites.  The first was a Wiki type site, designed for medical knowledge (hey, it takes a lot to kill a passion of mine).  It never saw the light of day on the interweb, but it certainly was a cool site.  Following that were a bunch of test sites that I used to experiment with the data controls.

It wasn't until the release of SQL Server 2005 that I started getting interested in data, which I will discuss in my next post.

Windows Live Writer

I finally got around to building a MetaWeblog API Handler for this site, so I can use Windows Live Writer.  It certainly was an interesting task.  I wrote code for XML, SQL Server, File IO, and Authentication to get this thing working.  It’s kinda mind-boggling how many different pieces were necessary to get the Handler to function properly.
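
To give a sense of the shape of the thing (this is a simplified sketch, not my production code; the class name and dispatch details are illustrative), a MetaWeblog endpoint boils down to an IHttpHandler that reads the XML-RPC request and dispatches on the method name:

using System.Web;
using System.Xml;

public class MetaWeblogHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // XML-RPC sends everything as POSTed XML; the method name lives
        // at /methodCall/methodName.
        XmlDocument request = new XmlDocument();
        request.Load(context.Request.InputStream);
        string methodName = request.SelectSingleNode("/methodCall/methodName").InnerText;

        switch (methodName)
        {
            case "metaWeblog.newPost":
                // parse the params, authenticate, insert the post, return its ID
                break;
            case "metaWeblog.newMediaObject":
                // decode the base64 bits and write the file (see NewMediaObject below)
                break;
            // ... and so on for the rest of the MetaWeblog/Blogger methods
        }
    }
}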

All-in-all, the development was really fun.  Most people would give up on the process once they realized what's required to debug such an interface.  But it got my chops in shape.  It's not every day you have to use a network listener to debug code.  It's certainly not something I would want to do every day, but every so often it's pretty fun.

While in the preparation process, there were a couple of procedures I thought might be tricky to work out.  One in particular was automatically uploading images placed in the post to my server.  I could have stuck with the manual process I started out with, which involved FTPing the images to the server, figuring out their URLs, and manually inserting the img tags.  Or I could let Live Writer and the Handler do all the work.  Ironically, this procedure took the least amount of code of all of them:

public string NewMediaObject(string blogId, string userName, string password,
    string base64Bits, string name)
{
    string mediaDirectory = HttpContext.Current.Request.PhysicalApplicationPath + "media/blog/";

    if (authUser(userName, password))
    {
        // Decode the image bytes Live Writer sends and write them to disk,
        // then hand back the public URL for the post to reference.
        File.WriteAllBytes(mediaDirectory + name, Convert.FromBase64String(base64Bits));
        return Config.SiteURL + "/media/blog/" + name;
    }
    else
    {
        throw new Exception("Cannot Authenticate User");
    }
}
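
The authUser call above is my own gatekeeper, not part of any API.  A minimal version, assuming the site's credentials live in the ASP.NET Membership store, could be as simple as:

private bool authUser(string userName, string password)
{
    // Validate the credentials Live Writer sends with every request
    // against the Membership provider.
    return Membership.ValidateUser(userName, password);
}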

Now it's a breeze to write posts.  It even adds drop shadows to images:

[Image: a photo inserted through Live Writer, drop shadow applied]

Live Writer also automatically creates a thumbnail of the image, and links to the original.  It might be a pain in some cases, but it’s easily fixable.

All I need now is more topics that involve pictures.  Kittens optional. :)

Open Source Windows

Some days you just have to shake your head and wonder. As it turns out, I'm a little late to hear about this, but nonetheless, I'm still shaking my head.

It turns out that Windows has gone open source. And (!!) it's not being made by Microsoft anymore. Well, Windows™ is still made by Microsoft. This Windows is made by a group going under the guise of ReactOS:

ReactOS® is a free, modern operating system based on the design of Windows® XP/2003. Written completely from scratch, it aims to follow the Windows® architecture designed by Microsoft from the hardware level right through to the application level. This is not a Linux based system, and shares none of the unix architecture.

So essentially, these people are taking the Windows architecture (based on XP/2003) and re-coding it from scratch rather than redesigning it, because redesigning would imply making something different. Sounds vaguely familiar; something called Vista comes to mind. Except uglier.

Now, that nagging feeling we are all getting right about now should be visualized as a pack of rabid lawyers. Isn't this considered copyright infringement? They outright define the product as a copy.

And what about the end users? Are all programs designed to run on Windows supposed to be able to run on this ReactOS? Why bother with testing? The XP architecture is almost 8 years old at this point. That means anything designed to run on Vista, or soon to be designed to run on Windows 7, wouldn't stand a snowball's chance in hell of running on ReactOS.

I would love to see how a .NET application runs on it.

ADO.NET Entity Framework and SQL Server 2008

Do you remember the SubSonic project? The Entity Framework is kind of like that. You can create an extensible and customizable data model from any type of source. It takes the boilerplate coding away from developing Data Access Layers.

Entity is designed to separate how data is stored from how data is used. It's called an Object-Relational Mapping framework. You point the framework at the source, tell it what kind of business objects you want, and poof: you have an object model. Entity is also designed to play nicely with LINQ. You can use it as a data source when querying with LINQ. In my previous post, the query used NorthwindModEntities as a data source. It is an Entity object.
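
As a quick sketch of what that looks like in practice (the entity set and property names are assumed from a Northwind-style model, not taken from a real project), you query the generated model with LINQ and let the framework translate it into SQL:

using (NorthwindModEntities context = new NorthwindModEntities())
{
    // The designer generates NorthwindModEntities and its Customers set;
    // the framework turns this query into SQL against the store.
    var ontarioCustomers = from c in context.Customers
                           where c.Region == "ON"
                           select c;

    foreach (var customer in ontarioCustomers)
    {
        Console.WriteLine(customer.CompanyName);
    }
}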

[Diagram: the ADO.NET Entity Framework architecture, courtesy of Wikipedia]

The architecture, as defined in the diagram:

  • Data source specific providers, which abstract the ADO.NET interfaces to connect to the database when programming against the conceptual schema.
  • Map provider, a database-specific provider that translates the Entity SQL command tree into a query in the native SQL flavor of the database. It includes the Store specific bridge, which is the component that is responsible for translating the generic command tree into a store-specific command tree.
  • EDM parser and view mapping, which takes the SDL specification of the data model and how it maps onto the underlying relational model, and enables programming against the conceptual model. From the relational schema, it creates views of the data corresponding to the conceptual model. It combines information from multiple tables into a single entity, and splits an update to an entity into multiple updates to whichever tables contributed to that entity.
  • Query and update pipeline, which processes queries, filters, and update requests, converting them into canonical command trees that are then turned into store-specific queries by the map provider.
  • Metadata services, which handle all metadata related to entities, relationships and mappings.
  • Transactions, to integrate with transactional capabilities of the underlying store. If the underlying store does not support transactions, support for it needs to be implemented at this layer.
  • Conceptual layer API, the runtime that exposes the programming model for coding against the conceptual schema. It follows the ADO.NET pattern of using Connection objects to refer to the map provider, using Command objects to send the query, and returning EntityResultSets or EntitySets containing the result.
  • Disconnected components, which locally caches datasets and entity sets for using the ADO.NET Entity Framework in an occasionally connected environment.
    • Embedded database: ADO.NET Entity Framework includes a lightweight embedded database for client-side caching and querying of relational data.
  • Design tools, such as the Mapping Designer, are also included with the ADO.NET Entity Framework to simplify the job of mapping a conceptual schema to the relational schema and specifying which properties of an entity type correspond to which table in the database.
  • Programming layers, which exposes the EDM as programming constructs which can be consumed by programming languages.
  • Object services, which automatically generate code for CLR classes that expose the same properties as an entity, thus enabling instantiation of entities as .NET objects.
  • Web services, which expose entities as web services.
  • High level services, such as reporting services which work on entities rather than relational data.

LINQ and SQL Server 2008

No, Zelda is not back.  LINQ stands for Language Integrated Query. It's a set of query operators that can be called from any .NET language to query, project, and filter data from any type of data source. Types include arrays, databases, IEnumerables, Lists, etc., including third-party data sources. It's pretty neat.

Essentially, LINQ pulls the data into data objects, which can then be used as you would use a business object. The data object is predefined by a LINQ provider. Out of the box you have LINQ to SQL, LINQ to XML, and LINQ to Objects providers. Once you define the data object based on the provider, you can start querying data:


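A sketch of that kind of query, assuming a designer-generated NorthwindDataContext with a Customers data class, looks like this:

NorthwindDataContext db = new NorthwindDataContext();

// Query the Customers table through the generated data classes.
var canadianCustomers = from c in db.Customers
                        where c.Country == "Canada"
                        select c;

foreach (Customers customer in canadianCustomers)
{
    Console.WriteLine(customer.CompanyName);
}
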
Within the foreach loop, the 'Customers' class is a data class that was defined by the LINQ to SQL provider. In this case, the database was Northwind.

Syntactically, LINQ is very much like the SQL language, mainly because they both work on the same principle: query (possibly) large amounts of data and act on it appropriately. SQL is designed to work with large datasets. Most other languages work iteratively. So SQL was a good language choice to mimic.

However, there is a small problem that I see with LINQ. If I'm doing all the querying in the DAL instead of using things like stored procedures within the database, and I need to modify a query for performance reasons, the DAL has to be recompiled and redistributed to each application out in the field. That could be 10,000 different instances. Wouldn't it make more sense to keep the query within a stored procedure? Just a thought...
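
For what it's worth, LINQ to SQL does offer a middle ground: a stored procedure can be mapped to a method on the data context (the procedure and context names below are made up for illustration), so the query text lives in the database and can be tuned without recompiling anything:

using (NorthwindDataContext db = new NorthwindDataContext())
{
    // GetCustomersByRegion is a stored procedure dragged onto the designer
    // surface, exposed as a strongly typed method on the context.
    foreach (var customer in db.GetCustomersByRegion("ON"))
    {
        Console.WriteLine(customer.CompanyName);
    }
}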