Things C# allows you to do, but shouldn't

So the offending line of code is as follows:

string a = String.Empty, b;

If you can explain what this is doing, you're a better person than I am.  And to give you another hint, the following is also legal syntax.

string a = b, c = d, e = f;

So what do these statements do? Give up?

In the first case, you end up declaring two string variables:  a and b.  And a is initialized to String.Empty.  In the second case, you end up with string variables a, c and e initialized to b, d and f respectively (assuming b, d and f are already in scope).  Sure, having multiple declarations on a single line is legal.  But should you inflict this upon unsuspecting maintenance developers?  I don't think so.
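
If the goal really is one initialized variable and one bare declaration, the same code is much kinder to the next reader when written one declaration per line.  A minimal sketch of the equivalent:

string a = String.Empty;  // declared and initialized
string b;                 // declared only; must be assigned before use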


The Case for Unit Testing

A posting on Roy Osherove's blog, taking issue with a post by Joel Spolsky on the failure of methodologies like MSF, got me thinking.  In particular, I (like Roy) took issue with the following section (which was actually written by Tamir Nitzan, but endorsed by Joel).

Lastly there's MSF. The author's complaint about methodologies is that they essentially transform people into compliance monkeys. "our system isn't working" -- "but we signed all the phase exits!". Intuitively, there is SOME truth in that. Any methodology that aims to promote consistency essentially has to cater to a lowest common denominator. The concept of a "repeatable process" implies that while all people are not the same, they can all produce the same way, and should all be monitored similarly. For instance, in software development, we like to have people unit-test their code. However, a good, experienced developer is about 100 times less likely to write bugs that will be uncovered during unit tests than a beginner. It is therefore practically useless for the former to write these... but most methodologies would enforce that he has to, or else you don't pass some phase. At that point, he's spending say 30% of his time on something essentially useless, which demotivates him. Since he isn't motivated to develop aggressively, he'll start giving large estimates, then not doing much, and perform his 9-5 duties to the letter. Project in crisis? Well, I did my unit tests. The rough translation of his sentence is: "methodologies encourage rock stars to become compliance monkeys, and I need everyone on my team to be a rock star".

The problem I have is that the rock stars being discussed are not at all like that.  Having had the opportunity to work with a number of very, very good developers, I found that they embrace unit tests with enthusiasm.  And their rationale has little to do with the fact that they might create a bug that needs to be exposed.  While that might be part of the equation, it is not the real reason for creating a unit test.

The good developers that I've encountered are lovers of good code.  They believe that well-crafted code has a beauty all of its own.  They strive to write elegant, performant classes because, well, that's what craftsmen do.  But under the time constraints of business, it is not always possible to create the 'best' solution every single time.  An imminent deadline might require that 'good enough' classes be checked into production.  Such is the life of a paid developer.

But you and I both know these good developers are many times more productive than their 'average' counterparts.  As a result, they frequently have time within a project schedule to refactor previously completed classes.  In fact, they enjoy going back to those 'good enough' classes and improving on them.  This attitude is one of the things I've found separates the good developers from the pack.  They are actually embarrassed by some of the 'good enough' code and feel the need to make it right.

This is where the unit tests come in.  If the completed classes are supported by a solid set of unit tests, then this refactoring can be done with low risk to the project.  The developers know that when a modified class passes the unit tests, it is much less likely to have introduced new bugs.  So, rather than thinking of them as a waste of time, the good developers I know relish the idea of creating unit tests.  Perhaps this is one of the characteristics that the rest of us would do well to emulate.
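
To make this concrete, here's a minimal sketch of the kind of test that forms that safety net, written against NUnit (the InvoiceCalculator class and its tax rule are hypothetical, invented purely for illustration):

using NUnit.Framework;

[TestFixture]
public class InvoiceCalculatorTests
{
    // If a refactored InvoiceCalculator still passes this test, the
    // rework is much less likely to have introduced a new bug.
    [Test]
    public void TotalIncludesSevenPercentTax()
    {
        InvoiceCalculator calc = new InvoiceCalculator(0.07m);
        Assert.AreEqual(107.00m, calc.Total(100.00m));
    }
}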

A look inside the process

For the past few months, I have been working as part of a large team of developers on the new release of a product.  By large, I mean that there are probably over 100 developers working on various pieces of the code.  Going through the process has given me a great deal of respect for the issues that any large team of developers must face.  For some reason, the efforts that must be going on at Microsoft as they prep for Whidbey, Longhorn, et al. came to mind.  Check out the following post by Scott Guthrie about the process that the ASP.NET team is going through as they get ready for Whidbey Beta 2.  Quite enlightening.

Pure Math and Real Life

I'm sure few of you know that my original background is mathematics.  I actually have a Bachelor of Math and am one of those geeky people who think that proofs are beautiful.  One of the minor frustrations I remember having back then (it has been quite a while) was how difficult it was to find areas in which math directly impacted real life.  When you study rings, fields and groups, it's difficult to pin anything physical to these abstract concepts.  But today is one of those rare instances where pure math makes headlines.

Alex Barnett pointed me to an article in the Guardian that mentions how a solution to the Riemann Hypothesis is in the process of being peer reviewed.  While the intricacies of Riemann are too much for most mortals (myself included), the thrust of the article is the impact that a solution will have on the Internet.  You see, security on the Internet is premised on the fact that determining the prime factors of a number is a computationally expensive thing to do.  Ultimately, it boils down to trying to divide the large number by every prime less than the square root of that number.  If you have a really, really large number (such as one represented by 512 bits, for example), determining the prime factors can take longer than the number of seconds that the universe has been around.  Seems like it would be secure.
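
For the curious, the brute-force attack the article alludes to amounts to something like the following sketch.  It tests every candidate divisor rather than only primes (the simplest form of trial division), and it uses long where a real attack would need 512-bit arithmetic:

// Trial division: find the smallest factor of n by testing every
// candidate up to the square root of n.  For a 512-bit number this
// loop runs for an astronomically long time, which is the point.
static long SmallestFactor(long n)
{
    for (long candidate = 2; candidate * candidate <= n; candidate++)
    {
        if (n % candidate == 0)
        {
            return candidate;
        }
    }
    return n;  // no divisor found, so n is prime
}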

As much as the Guardian article likes to imply otherwise, it's not as though this development will immediately threaten Internet commerce.  Certainly not because of the proof itself.  The Riemann Hypothesis has already been validated empirically for the first 1.5 billion cases.  If it were capable of helping to crack today's encryption methods, it could already be used that way.  Hackers certainly don't need a rigorous mathematical proof before using something as a tool.

What the Riemann proof provides, however, is an understanding of how prime numbers are distributed throughout the range of all numbers.  With this understanding, it might be possible to determine the prime factors of a number in much less time.  I emphasize the word 'might' here.  Nothing is certain in this regard.  The worst case scenario is that one of the basic assumptions on which security is built becomes a little less stable.  The security mechanisms that we use today might become obsolete with further breakthroughs.  However, when you consider that it took almost 150 years to prove the Riemann Hypothesis in the first place, I'm not overly concerned.


Legal Analysis from an Easy Chair

So I was cruising around the Internet, as I frequently do with my weekend spare time (as sad as that is), and I came across this post by John Dvorak.  He refers to a BBC article that describes how Microsoft is clamping down on sites using peer-to-peer software to distribute XP's SP2.  Specifically, he is of the opinion that this is "wrong on so many levels".  In fact, his sole opinion is pretty much the above quote.  No details about the levels on which he believes the choice to be wrong.  I guess he thinks they are 'obvious' to any 'reasonable' person.  Not to me.  In fact, to me, the decision seems quite reasonable.  And, while he was sniping at Microsoft, perhaps Mr. Dvorak might mention the levels on which the decision is correct?  Does he really believe that there might not be a legitimate rationale for the choice?

This is actually an ongoing pet peeve of mine.  There is a group of people, many of whom are journalists, who believe that any legal action taken by a large corporation (say, for example, Microsoft) is an example of all that is wrong with capitalism.  In this instance, all the Downhill Battle people were doing was trying to assist Microsoft in distributing the service pack.  In another, a Canadian teenager, Mike Rowe, was served with a request to transfer ownership of the domain "mikerowesoft.com" to Microsoft.  In both cases, the story goes, the extensive legal team at Microsoft mobilized to squash the dreams and ambitions of young and impressionable technologists.

This perspective annoys me.  It gives no weight to the possibility that there might be a rationale beyond world dominance for Microsoft's “heavy-handed” approach.  For example, it is a legal requirement that the holder of a trademark actively defend that trademark against any unlicensed use that it is aware of.  In other words, Microsoft has no choice but to defend the term "Microsoft", wherever it might be used.  Say a group of baby seals starts up a company called My Crow Soft.  Regardless of the clubbing analogies that would appear in the press, Microsoft still has to go after them.  They have no choice.  Let me repeat this to make sure that there is no misunderstanding: they have no choice.  Is that ever reported in the mainstream press?  Or the majority of the computer trades?  No.  It's too easy to portray Microsoft as a juggernaut bent on crushing everything in its sight.

The SP2 peer-to-peer download situation is an offshoot of this same problem.  Let's say that someone were to somehow add a virus to an SP2 update that resides on a peer-to-peer platform.  That virus-laden update would then be installed, without a second thought, by anyone downloading from that peer.  At some point in the future, the bad things related to that virus would start to happen and be covered by the press.  Because the virus came with the update, SP2 would be blamed, which, in turn, would slow the acceptance of an important upgrade.

So what is the solution?  In my ideal world, commentators would be much less biased and provide a more thorough analysis of the issues they cover.  Something a little more substantial than 'wrong on so many levels'.  But unfortunately, it is easier, and much more popular, to play the 'bash Microsoft' role.

Guess there's no chance this rant will be Slashdotted.

Once more with NVARCHAR2

You might have guessed from my prior post on NVARCHAR2 that I'm doing a little work with Oracle.  As with many other aspects of technology, it's always a little dangerous to work with a product when you're intimately familiar with a competing one, as you tend to bring along assumptions from that product.  But really, this little part of my Oracle interaction goes beyond that.

So I need to expand the size of an NVARCHAR2 field in an existing table.  Specifically, I need the column definition to be NVARCHAR2(264) as this will give me 132 Unicode characters to work with.  So naturally I execute the following DDL command.

ALTER TABLE LIST_USER MODIFY USER_ID NVARCHAR2(264)

This certainly seemed like the appropriate approach to take.  However, once the command completed successfully, I looked at the structure of the LIST_USER table and what did I see?  USER_ID defined as NVARCHAR2(528).  Oracle took the size that I asked for and doubled it.
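
If you want to check what Oracle actually recorded, the column definition can be pulled straight from the data dictionary.  A sketch, assuming the table lives in your own schema (note that DATA_LENGTH is reported in bytes, while CHAR_LENGTH is in characters):

SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH, CHAR_LENGTH
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = 'LIST_USER' AND COLUMN_NAME = 'USER_ID'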

What the heck were the people who designed this particular piece of logic thinking?  That when I asked for NVARCHAR2(264), I really didn't have a clue what I wanted?  That I was completely oblivious to the environment in which I was working and the size of the field I was looking to create?  While that might be true on occasion, it usually takes people a little while to realize it.  And software never does.  Unless it's Oracle, apparently.  But since an oracle is capable of seeing future truths, I guess it all makes sense now.

VS2005 Beta Now Available

In case you haven't heard the news, Beta 1 of Visual Studio 2005 (Whidbey) is now available.  You can download it from here.

Two things of note.  First of all, there are two versions of the beta available.  The full-fledged version is available to MSDN Subscribers.  As well, there is a new addition to the family of Visual Studio products.  Or, more accurately, a flock of new additions.  There are a number of Express versions available as a free, public beta.  These Express versions are lightweight editors and compilers focused on the different languages (VB.NET, C#, C++, J#), development environments (Web Dev) and databases (SQL Server Express).  Very nice if you're looking to give .NET a try and don't have an MSDN subscription.

The only downside to the beta is that there is no provision in the license to deploy beta applications in a production environment.  Word on the street is that such a license won't be available until Beta 2.  Bummer.  I wanted to redevelop the ObjectSharp web site in VS 2005 :)

You Know Blogging Has Arrived When...

A quote from Bill Gates' keynote at the Microsoft-hosted CEO Summit.

“Another new phenomenon that connects into this [collaboration] is one that started outside of the business space, more in the corporate or technical enthusiast space, a thing called blogging. And a standard around that that notifies you that something has changed called RSS.”

I guess we can all sleep easier now that our life's work has been validated. ;)

Actually, the most interesting thing about Bill's comments is that he pitches blogging as being, in some ways, superior to email as a communications medium.  Less intrusive and less prone to CC-spamming (that's when someone CC's everyone and their mother on an email to ensure that no one feels left out).  And when you're talking about blogging to Warren Buffett, Barry Diller, et al., odds are pretty good that the level of corporate interest in RSS will increase over the next six months.


MVP Summit

If you read my last post, there was one additional reason for the lack of recent posts.  I was at the MVP Summit in Seattle last week.  This was my first Summit and I was looking forward to being in the presence of the luminaries of the industry.  It was everything I expected and more. Since the contents of the Summit were covered under an NDA, I'm limited to talking about something that I'm sure is not covered.

I was impressed by the constant requests for feedback from the participants of the Summit.  Whether it was focus groups or the chances we had to interact with personnel from various Microsoft development groups, there was a constant drumbeat asking what we thought, what problems we or our clients encountered and what could be done to make things better.  This even extended to the third day of the Summit, where it was almost one-on-one with the people who are creating the technology we'll be using for the next 10 years.  Even more important, it looked like they were listening.  It will be interesting to see what impact, if any, our suggestions will have.


Back in the Saddle

First of all, let me apologize for the relative dearth of posts from me over the past couple of months.  My reason/excuse/rationale for the absence has to do with the work I have been involved in recently, which is the source for most of my posts in the first place.

Understand that, for the most part, my inspiration for posting is the particular problem that I'm solving on any given day.  Which means that if I'm not solving a challenging problem, there is little fodder for a post.  Unfortunately (for posting, that is), I have been working as an instructor almost continuously since the end of January.  So the most challenging problem I have been dealing with is getting students to understand the ins and outs of the EnterpriseServices namespace.  Not an easy problem, you understand, but not one that generates post material.

My situation is in the process of changing.  I'm still instructing, but not with the same full time grind as the past two months. So hopefully there will be more frequent posting from me.  In fact, I have been cogitating (in my spare time) on the challenges of designing a service-oriented architecture.  Not the technology behind SOA, but the choices that have to be made by real people trying to implement production applications based on SOA.  Look for some posts along these lines in the next week or so.