Building Cathedrals, Bazaars, and Mystery Houses

Almost as long as software development has been a profession, the practice of software development has been compared to the construction industry.

In this sphere of analogy, I often contrast the process of architecting a cathedral against the evolution of a bazaar in my lectures on software engineering, comparing waterfall-style, can't-have-too-much-UML monolithic architecture with a more organically evolving service-oriented architecture.

I've heard others use the cathedral and the bazaar analogy for a few other software engineering comparisons, but I believe the first use can be attributed to Eric S. Raymond's book "The Cathedral and the Bazaar", in which he observes how Linux was built with an open-source model.

One must be careful not to use Bazaar-style engineering as an excuse for not maintaining a strong engineering discipline and a thoughtful planning process. Otherwise you risk having your Bazaar end up like the Winchester Mystery House, another common software-construction analogy, and one you don't want your software compared to.

PDC 07 Dates Announced

PDC 2007

October 2-5, 2007
Los Angeles, California


Calling a Custom External Assembly from a Report

One of the most powerful features of Reporting Services is the ability to call code from within a report. Before we get to that, you may or may not know that you can call the .NET Framework from within the expression of an object's property on a report. However, there is one rule you must follow: the code must be written in VB.NET.

For example:

  • =iif(Me.Value < 0, "Red", "Black") placed in the Color property of a textbox changes the text colour based on the value returned.
  • =Format(Fields!BirthDate.Value, "MMM dd,yyyy") placed in the Value property formats a date value (e.g. Dec 10, 2006).
  • =System.Web.HttpContext.GetGlobalResourceObject("AppResources", "ID14831").ToString() placed in the Value property uses your existing resource file to translate a label.

Although this is all great and very useful, if you need to do something a bit more complicated you can write your own code and call that from an expression in a report also. This can be done in two ways.

1. You can write code directly in the report using the report properties Code tab. The Code tab contains a multi-line textbox where you can write code that can be called from the expressions of the report. From the menu, select Report > Report Properties..., and on the Report Properties dialog select the Code tab. Here you can write code like the example below. Once again, it must be VB.NET.
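A minimal sketch of such a function (the pay-frequency codes here are invented, not from the original example):

Public Function GetPayFrequencyString(ByVal payFrequency As Integer) As String
    ' Translate the PayFrequency code into a display string.
    Select Case payFrequency
        Case 1
            Return "Weekly"
        Case 2
            Return "BiWeekly"
        Case 3
            Return "Monthly"
        Case Else
            Return "Unknown"
    End Select
End Function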

To use this code in your report, place the following into the Value property of the textbox where you want to display the pay frequency.

=code.GetPayFrequencyString(Fields!PayFrequency.Value)

2. You may want to reuse the code you write across many reports. If that is the case, first write your assembly in the .NET language of your choice. Then make sure the assembly is in the correct folder so it can be found by the Report Server and the Report Designer. I have a blog entry here to help you with that.

Now you need to create a reference to the assembly from the report. There is a References tab on the Report Properties dialog. Make sure you put the namespace in front of the class name or it won't work.

Now you can call your assembly from any expression by referencing the class and method in the expression, like this:

=code.empReport.GetSickLeave(Fields!SickLeaveHours.Value)
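For reference, the assembly side might look something like this minimal C# sketch (the namespace, class, and leave calculation are invented; the class would be registered on the References tab with instance name empReport):

namespace EmployeeReporting
{
    public class EmpReport
    {
        // Convert accumulated sick-leave hours into a display string.
        public string GetSickLeave(int sickLeaveHours)
        {
            int days = sickLeaveHours / 8;   // assuming an 8-hour work day
            int hours = sickLeaveHours % 8;
            return days + " days, " + hours + " hours";
        }
    }
}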


SRS Filters

I have found a lot of students and clients are not getting the most out of filters in their reports. So I thought I would explain a bit about how useful they can be. There are two places you can put a filter in a report.

  • Data Region (Table, Matrix, List, Graph, Grouping Level)

Applying a filter to a data region means I can show different views of the same data. In other words, I can create one dataset and let it feed multiple regions in my report. Say, for example, I need a report for management that shows the breakdown of salaried employees vs. hourly. They want to see two pie charts: one showing the number of salaried employees by department, the other hourly. Instead of creating two queries, one that selects salaried employees and one that selects hourly, I can select all the data into one dataset and filter the separate graphs.

 

When you set the filter, don't forget you are applying an expression, so the value must begin with an equal sign. This is a common source of frustration the first time people use filters.
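For example, the filter on the salaried pie chart might look like this (the field name and values are assumptions about the dataset):

Expression: =Fields!EmployeeType.Value
Operator: =
Value: ="Salaried"

The hourly chart would use the same expression with ="Hourly" as the value.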

I can reuse this data in different ways throughout the report: in a matrix to show the number of employees by region, in a table sorted by hire date, all from one dataset.

  • DataSet

You can apply a filter to a dataset, which at first seems odd. Doesn't the dataset already have its parameters? Why would I filter the report again? When you create a cached instance or a snapshot, the report is stored with its data and the parameters cannot be changed. However, filters can still be applied to the report. Filters will use the current report parameter values to filter the report instead of creating a new cached instance or snapshot. This can be useful to avoid creating multiple versions of the same report in the cache.
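For example, a dataset filter that honours the current parameter value against cached data might look like this (the field and parameter names are assumptions):

Expression: =Fields!Department.Value
Operator: =
Value: =Parameters!Department.Value

Because the value side is evaluated at render time, the same snapshot can serve different parameter values.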

Dynamically Configuring Pipeline Component Properties in a BizTalk Orchestration

This entry discusses the use of the following properties to programmatically alter the properties of receive or send pipeline components:

BTS.SendPipelineConfig
BTS.SendPipelineResponseConfig
BTS.ReceivePipelineConfig
BTS.ReceivePipelineResponseConfig

There is a link at the end of this entry to download a couple of examples.

Before discussing the above properties, it is worth reviewing two BizTalk 2006 pipeline enhancements:

1) The execution of send and receive pipelines in orchestrations. This feature allows a receive or send pipeline to be executed without the use of a physical receive or send port.
2) Per-instance pipeline configuration using the BizTalk 2006 Administration Console. This allows changing the properties of receive and send pipeline components that are set in the various stages of receive and send pipelines. The receive and send pipelines are configured to be invoked in physical receive and send ports. For example, the MIME/SMIME decoder pipeline component configuration of a receive pipeline can be altered on a per-instance basis using the BizTalk 2006 Administration tool.

Both 1) and 2) are extremely useful features new to BizTalk 2006.

An additional feature that would be useful (in some cases) is the ability to alter a pipeline component's property values in an orchestration.

The following properties may be set for a message in an orchestration:

BTS.SendPipelineConfig: send pipeline configuration for a two-way or one-way send port
BTS.SendPipelineResponseConfig: receive pipeline configuration for a two-way send port
BTS.ReceivePipelineConfig: receive pipeline configuration for a two-way or one-way receive port
BTS.ReceivePipelineResponseConfig: send pipeline configuration for a two-way receive port

Take, for example, a receive pipeline configured with a MIME/SMIME decoder. Internally, BizTalk stores the component configuration for the pipeline as XML. That XML configuration can then be modified using a statement in a Message Assignment shape in an orchestration, as sketched below:
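A sketch of what the Message Assignment shape might contain (the XML follows the general Root/Stages/Stage/Components/Component/Properties shape of per-instance pipeline configuration; the stage GUID, component name, and property name are placeholders rather than confirmed values):

// Message Assignment shape (XLANG/s expression):
OutputMessage = InputMessage;
OutputMessage(BTS.ReceivePipelineConfig) =
    "<Root><Stages><Stage CategoryId=\"[decode-stage category GUID]\">" +
    "<Components><Component Name=\"[MIME/SMIME decoder component]\">" +
    "<Properties><AllowNonMIMEMessage vt=\"11\">-1</AllowNonMIMEMessage>" +
    "</Properties></Component></Components></Stage></Stages></Root>";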

When the above altered message is sent through a physical port that is set to use a receive pipeline configured with the MIME/SMIME component (in the decode stage), the new configuration for the MIME/SMIME component (programmatically set in the orchestration) will temporarily override the hardcoded or per-instance pipeline configuration.

This is useful to cut down on the number of pipelines or physical send or receive ports that have to be created.

In order to use this method you can do the following:

1) Create receive or send pipelines set with the appropriate pipeline components. Deploy the pipelines.
Note: You do not have to configure the pipeline components' properties.
Note: You can also use the out-of-the-box XML receive or send pipelines.

2) Create physical ports that are set to use the pipelines from 1).

3) Create and deploy orchestrations that temporarily override the hardcoded or per-instance pipeline configuration of the send or receive ports, as illustrated above.
Logical ports in the orchestration are needed to send or receive the messages from an orchestration. The logical ports are then bound to physical ports. The configuration set in the orchestration will then temporarily override the configuration of the physical ports.

Note: To get a sample of the configuration for a pipeline, do the following:
Using a tool such as Microsoft SQL Server Management Studio, query/open and copy the values for a port created in 2) from a row in one of the following table.column combinations in the BizTalkMgmtDb database (a sample query follows the list):

bts_sendport.nvcSendPipelineData
bts_sendport.nvcReceivePipelineData
adm_receivelocation.ReceivePipelineData
adm_receivelocation.SendPipelineData
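For example (only the tables and columns listed above are assumed; add a WHERE clause to pick out the port created in 2)):

-- Run read-only against the BizTalkMgmtDb database:
SELECT nvcSendPipelineData, nvcReceivePipelineData FROM bts_sendport
SELECT ReceivePipelineData, SendPipelineData FROM adm_receivelocation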

Note: If the column/row for a port is null, use the BizTalk Administration Console to edit the receive or send port's pipeline configuration so that it overrides the default configuration. The configuration for the pipeline component(s) should then be set in one of the above table.column combinations. You can then revert to the default configuration by choosing the passthrough pipeline and then again choosing the original pipeline.
 
Note: Do not modify the contents of the rows/columns directly in the BizTalkMgmtDb database.
Note: The BizTalk object model could also be used to retrieve the configurations.
Note: The configuration for the pipeline can be stored in and then retrieved from a configuration store (e.g. a config file, SSO, a database, etc.) and then set in the orchestration.

4) Bind the logical ports in the deployed orchestrations from 3) to physical ports from 2).

Good Things
1) This is a technique to cut down on the number of pipeline components and/or physical ports. For example, if the same received XML message was to be delivered to the same file directory (e.g. C:\flatfiles) as both a delimited and a positional flat file, the following send ports, pipelines and orchestrations would be required under each approach:

Executing Send Pipelines in an Orchestration
i) Send Pipeline with FlatFile Assembler component for delimited flat file.
ii) Send Pipeline with FlatFile Assembler component for positional flat file.
iii) Send Port configured with Passthrough pipeline.
iv) Orchestration

Per Instance Pipeline Configuration
i) Send Pipeline with FlatFile Assembler.
ii) Send Port with per instance configuration of above pipeline to create delimited flat file.
iii) Send Port with per instance configuration of above pipeline to create positional flat file.
iv) Optionally an orchestration.
 
Dynamically Configuring Pipeline Component Properties in a BizTalk Orchestration
i) Send Pipeline with FlatFile Assembler.
ii) Send Port configured to use above pipeline.
iii) Orchestration


Not So Good Things
1) To dynamically configure and execute a receive pipeline, a two-way send port has to be used. This might introduce the need for a loopback adapter, and a loopback adapter introduces MessageBox hops that will degrade performance.
2) You must maintain a store of the pipeline configuration XML. The configuration store replaces individual pipeline components or per-instance configuration of pipelines in a physical send or receive port.

There are two examples included in the download:
1) An orchestration that sets the receive pipeline configuration and uses a loopback adapter to execute the altered configuration.
2) An orchestration that sets the send pipeline configuration and uses the send file adapter to invoke the altered configuration.
Note: A loopback adapter is also included in the download. Please read the ReadMe before installing.


Unfortunately
I wish you could do something like the below:

OutputMessage(BTS.ReceivePipelineConfig) = "Pipeline Config XML would go here";
OutputMessage = Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteReceivePipeline(typeof(PipelinesAndSchemas.ReceivePipeline), InputMessage);

When I tried this, it did not produce a runtime error, but the configuration for the receive pipeline was not overridden.

Last of All
I have yet to use this technique in a production environment. I have not tried this technique with pipelines configured to use multiple or custom components, but it should work with these scenarios. Please consider all the pros and cons of using this technique. Again, this could be an alternative to replace an unmanageable number of receive/send ports and pipelines.


Extending BizTalk Mapping/Transformation operations into Sql Server

A staple in BizTalk 2004/2006 is to design and implement mapping/transformation operations using the BizTalk mapper. Mapping in BizTalk can be extended and improved by:

a) Using the rich set of functoids that ship with BizTalk.
b) Using custom XSLT in a map.
c) Writing custom functoids that can be used in a map (link).
d) Writing JScript, C#, or VB.NET code in a Scripting functoid.

As mapping operations become more complex, custom XSLT must often be used to perform the complete transformation or a portion of it.

Unfortunately, many developers do not know XSLT and in some cases do not have time to learn it. But most developers do have SQL skills and are comfortable writing SQL statements that join tables to produce a final result set.

This post will discuss a transformation of a source XML message to a destination XML message using a Sql Server 2005 stored procedure. The code for this example can be downloaded at the end of this entry.

A portion of the source XML message for the transformation is shown below. One or many <Applicant> nodes are submitted with the message.
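Reconstructed from the XPath expressions in the stored procedure below, the source message has roughly this shape (the values are invented):

<ns0:AllApplicants xmlns:ns0="http://Applicants">
  <Applicant>
    <Information ApplicantId="1">
      <FirstName>John</FirstName>
      <LastName>Smith</LastName>
      <MiddleName>Q</MiddleName>
      <Sex>M</Sex>
      <BirthCountryCode>1</BirthCountryCode>
      <BirthProvinceCode>6</BirthProvinceCode>
      <NickName><Name>Johnny</Name></NickName>
    </Information>
    <TestScores TestScoreId="T1">
      <Test>Math</Test>
      <Score>90</Score>
    </TestScores>
    <RelateInfoToScores ApplicantId="1" TestScoreId="T1" />
  </Applicant>
</ns0:AllApplicants>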

A brief description of the <Applicant> node for the above message:
1) <Applicant> nodes contain information such as the ApplicantId, <FirstName>, <LastName>, <Sex>, <BirthCountryCode>, etc.
2) <Applicant> nodes contain 0 to many <NickName> nodes.
3) <Applicant> nodes contain <TestScores> nodes that include the results of a test taken.
4) <Applicant> nodes contain <RelateInfoToScores> nodes that contain IDREFs relating the ApplicantId to the correct TestScores in the message.

The Destination XML message for the completed transformation is as below. A separate <ConcatenatedApplicantInfo> node is created for each <Applicant> node in the source message. Information for each Applicant is concatenated with a padding of 25 spaces between items.
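Based on the FOR XML EXPLICIT clause at the end of the stored procedure, each output node has roughly this shape (the values are invented):

<ConcatenatedApplicantInfo ApplicantID="1"><![CDATA[John                     Smith                    ...]]></ConcatenatedApplicantInfo>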

The below Sql Server 2005 stored procedure will complete the transformation of the source message to the destination message.

USE [TransformationHelper]
GO

/****** Object:  StoredProcedure [dbo].[Map_ApplicantHierarchical_To_ApplicantFlat]    Script Date: 07/13/2006 17:25:26 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE Proc [dbo].[Map_ApplicantHierarchical_To_ApplicantFlat]
-- Parameter to accept the AllApplicantInfo XML message from BizTalk.
@ApplicantXML xml
As

/*
 @Applicant temp table for applicant header info
 from the AllApplicantInfo XML message.
*/
DECLARE @Applicant Table(
    ApplicantId int null,
    FirstName varchar(20) null,
    LastName varchar(20) null,
    MiddleName varchar(20) null,
    Sex char(1) null,
    BirthCountryCode int null,
    BirthProvinceCode int null,
    BirthCountry varchar(20) null,
    BirthProvince varchar(20) null);

/*
 Statement to shred applicant header info from the XML message
 into the @Applicant temp table.
*/
WITH XMLNAMESPACES ('http://Applicants' as ns0)
Insert Into @Applicant
(ApplicantId, FirstName, LastName, MiddleName, Sex, BirthCountryCode, BirthProvinceCode)
Select ShreddedApplicantInfoXML.nCol.value('@ApplicantId','int'),
    ShreddedApplicantInfoXML.nCol.value('(FirstName)[1]','varchar(20)'),
    ShreddedApplicantInfoXML.nCol.value('(LastName)[1]','varchar(20)'),
    ShreddedApplicantInfoXML.nCol.value('(MiddleName)[1]','varchar(20)'),
    ShreddedApplicantInfoXML.nCol.value('(Sex)[1]','char(1)'),
    ShreddedApplicantInfoXML.nCol.value('(BirthCountryCode)[1]','int'),
    ShreddedApplicantInfoXML.nCol.value('(BirthProvinceCode)[1]','int')
From @ApplicantXML.nodes('/ns0:AllApplicants/Applicant/Information') as ShreddedApplicantInfoXML(nCol)
Where ShreddedApplicantInfoXML.nCol.value('(FirstName)[1]','varchar(20)') is not null

/* In the below update statement, a join to another table gets the Country name:

       Join CountryCodes
            On Applicant.BirthCountryCode = CountryCodes.Id

   This replaces Database functoids and increases performance, as it cuts out
   the Database functoids that would make multiple round trips to the database.
*/

/* Also in the below update statement, managed code is called to get the Province name:

   BirthProvince = dbo.GetProvince(Applicant.BirthCountryCode,Applicant.BirthProvinceCode)

   This is like a scripting functoid or custom functoid in a BizTalk map.
*/

Update Applicant
Set BirthCountry = CountryCodes.Name,
    BirthProvince = dbo.GetProvince(Applicant.BirthCountryCode,Applicant.BirthProvinceCode)
From @Applicant as Applicant
    Join CountryCodes
    On Applicant.BirthCountryCode = CountryCodes.Id

--Below is good for debugging purposes.
--Select * from @Applicant

/*
 @NickNames temp table for nickname info
 from the AllApplicantInfo XML message.
*/
DECLARE @NickNames Table
(ApplicantId int null,
 [Name] varchar(20) null);

/*
 Statement to shred nickname info from the XML message
 into the @NickNames temp table.
*/
WITH XMLNAMESPACES ('http://Applicants' as ns0)
Insert Into @NickNames
Select NickNameXML.nCol.value('(../@ApplicantId)[1]','int'),
    NickNameXML.nCol.value('(Name)[1]','varchar(20)')
FROM @ApplicantXML.nodes('/ns0:AllApplicants/Applicant/Information/NickName') as NickNameXML(nCol)

--Below is good for debugging purposes.
--Select * from @NickNames

/*
 @TestScores temp table for test score info
 from the AllApplicantInfo XML message.
*/
Declare @TestScores Table
(TestScoreId char(2) null,
Test varchar(20) null,
Score int null);

/*
 Statement to shred test score info from the XML message
 into the @TestScores temp table.
*/
WITH XMLNAMESPACES ('http://Applicants' as ns0)
Insert Into @TestScores
Select TestScoresXML.nCol.value('@TestScoreId','char(2)'),
    TestScoresXML.nCol.value('(Test)[1]','varchar(20)'),
    TestScoresXML.nCol.value('(Score)[1]','int')
FROM @ApplicantXML.nodes('/ns0:AllApplicants/Applicant/TestScores') as TestScoresXML(nCol)

--Below is good for debugging purposes.
--Select * from @TestScores

/*
 @Relation temp table for the ApplicantId-to-TestScoreId relation
 from the AllApplicantInfo XML message.
*/
Declare @Relation Table
(ApplicantId int null,
TestScoreId char(2) null);

/*
 Statement to shred relation info from the XML message
 into the @Relation temp table.
*/
WITH XMLNAMESPACES ('http://Applicants' as ns0)
Insert Into @Relation
Select RelationXML.nCol.value('@ApplicantId','int'),
    RelationXML.nCol.value('@TestScoreId','char(2)')
FROM @ApplicantXML.nodes('/ns0:AllApplicants/Applicant/RelateInfoToScores') as RelationXML(nCol)

--Below is good for debugging purposes.
--Select * from @Relation

Declare @FinalResults Table
(ApplicantID int not null,
ConcatenatedInfo varchar(8000))

Declare @ApplicantId int
Declare @ApplicantInfoText varchar(8000)
Declare @NickNamesText varchar(8000)
Declare @ScoresText varchar(8000)

-- Below cursor will do the work of concatenating each applicant's
-- info together into one node.
Declare cur_ApplicantRows Cursor Local Fast_Forward For
SELECT ApplicantId
FROM @Applicant

OPEN cur_ApplicantRows
FETCH NEXT FROM cur_ApplicantRows INTO @ApplicantId

While @@Fetch_status = 0
Begin
    Set @ApplicantInfoText = ''
    Set @NickNamesText = ''
    Set @ScoresText = ''

    Select @ApplicantInfoText = FirstName + space(25 - len(FirstName)) +
                                LastName + space(25 - len(LastName)) +
                                MiddleName + space(25 - len(MiddleName)) +
                                Sex + space(5 - len(Sex)) +
                                BirthCountry + space(25 - len(BirthCountry)) +
                                BirthProvince + space(25 - len(BirthProvince))
    From @Applicant
    Where ApplicantId = @ApplicantId

    Select @NickNamesText = @NickNamesText + [Name] + space(25 - len([Name]))
    From @NickNames
    Where ApplicantId = @ApplicantId

    Select @ScoresText = @ScoresText +
                         TestScores.Test + space(25 - len(TestScores.Test)) +
                         rtrim(str(TestScores.Score)) + space(25 - len(rtrim(str(TestScores.Score))))
    From @TestScores as TestScores
        Join @Relation as Relation
        On TestScores.TestScoreId = Relation.TestScoreId
        and Relation.ApplicantId = @ApplicantId

    --Select @NickNamesText
    --Select @ApplicantInfoText
    --Select @ScoresText

    Insert Into @FinalResults
    values (@ApplicantId, @ApplicantInfoText + ' ' + @NickNamesText + ' ' + @ScoresText)

    FETCH NEXT FROM cur_ApplicantRows INTO @ApplicantId
End

Close cur_ApplicantRows
Deallocate cur_ApplicantRows

-- Below statement will generate the final flat Applicant XML result
-- and return it as the final result set.
Select 1 as Tag,
    0 as Parent,
    ApplicantID as [ConcatenatedApplicantInfo!1!ApplicantID],
    ConcatenatedInfo as [ConcatenatedApplicantInfo!1!!cdata]
From @FinalResults as ConcatenatedApplicantInfo
FOR XML EXPLICIT

 

Discussion of the stored procedure:

1) The AllApplicantInfo source XML message is passed to the stored procedure via a parameter of type xml.

2) The AllApplicantInfo source XML message is then shredded into a number of temporary relational tables:

@Applicant
@NickNames
@TestScores
@Relation

3) SQL statements in the stored procedure join the temp tables to produce the final <ConcatenatedApplicantInfo> node for each <Applicant> node in the source message.

4) The final destination message is returned using a Select statement with a FOR XML EXPLICIT clause.

5) The country code table is directly available and can be joined to in a set operation in the stored procedure. This eliminates Database functoids in a BizTalk map that would invoke multiple round trips to SQL Server.

6) A new feature of SQL Server 2005 is calling managed code. Managed code in SQL Server 2005 allows complex data manipulation that would be difficult to write in straight Transact-SQL. In this example, calling managed code from the stored procedure replaces the use of functoids in a BizTalk map. The below managed code was called from the stored procedure to determine the province from a province code.

 

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    [Microsoft.SqlServer.Server.SqlFunction]
    public static SqlString GetProvince(SqlInt32 country, SqlInt32 province)
    {
        string result = "";
        switch (country.Value)
        {
            case 1:
                switch (province.Value)
                {
                    case 1:
                        result = "Newfoundland";
                        break;
                    case 2:
                        result = "PEI";
                        break;
                    case 3:
                        result = "New Brunswick";
                        break;
                    case 4:
                        result = "Nova Scotia";
                        break;
                    case 5:
                        result = "Quebec";
                        break;
                    case 6:
                        result = "Ontario";
                        break;
                    case 7:
                        result = "Manitoba";
                        break;
                    case 8:
                        result = "Saskatchewan";
                        break;
                    case 9:
                        result = "Alberta";
                        break;
                    case 10:
                        result = "British Columbia";
                        break;
                    case 11:
                        result = "Yukon";
                        break;
                    case 12:
                        result = "NWT";
                        break;
                    default:
                        result = "Unknown";
                        break;
                }
                break;
            default:
                result = "Unknown";
                break;
        }
        return result;
    }
}

 

BizTalk can invoke the stored procedure via the SQL adapter, passing the source message as a parameter and then receiving the destination message as the response.

SQL Server 2005 has added many enhancements to aid in XML processing:

a) XQuery expressions. An example is below:

for $i in /InvoiceList/Invoice/Items/Item
return <Item>
         {$i/@Quantity}
         {string($i/@Product)}
       </Item>

b) Nesting of FOR XML AUTO clauses
c) The xml data type
d) Indexing of XML data
e) A subset of the FLWOR syntax
f) Validating XML against a schema
g) more...


If you are interested in some free online virtual labs that explore the new SQL Server 2005 XML capabilities, follow the links below:

 

SQL Server™ 2005: XML Capabilities (TechNet) Virtual Lab
And:
Using Xquery with SQL Server 2005 XML Data (TechNet) Virtual Lab

Finally:

The not-so-good things about this method:

1) You have to call SQL Server from BizTalk; when using BizTalk maps, the entire transformation is localized to the BizTalk host process.

The good things about this method:

1) Simplified debugging of the transformation. For example, in the T-SQL you can add print statements and select statements that return intermediate result sets, debug the stored procedure in Visual Studio, use SQL Profiler, etc.
2) In this example only a stored procedure needs to be deployed to a Sql Server database.
3) In this example, no data is persisted to permanent Sql Server tables. Only temp tables are utilized in the stored procedure.
 
Use this method:

1) As an alternative to writing custom XSLT.
2) If you are comfortable writing Sql Server Stored Procedures.
3) To replace database functoids in a map that incur multiple round trips to a database.
4) To perform transformations on larger messages.

Some notes:

1) More than one XML message can be passed to a stored procedure.
2) Transformation operations can be split between the BizTalk maps and stored procedures. For example BizTalk maps can carry out a portion of the transformation and then pass the remainder of the transformation to the stored procedures.
3) For SQL Server 2000, OPENXML statements can be used to shred XML nodes into relational tables.

Conclusion:

Use the new XML enhancements along with managed code in SQL Server 2005 to simplify complex BizTalk mapping operations and improve their performance.

Download the sample discussed above HERE. Read the ReadMe.Txt before installing and running.

 

Upgrading our Developer Training Machines

Unless you have already taken a course from ObjectSharp in the past on "our machines", you may not be aware that "our machines" means modern Dell notebooks. Our classroom at 1 Yonge Street is well equipped, but sometimes companies want the course on their site, and our entire class hardware is portable in 2 rolling travel trunks. Students who want to take their machines home in the evenings can use them to catch up on labs, play around with stuff, etc.

We pride ourselves on having good hardware. We typically train on the latest and greatest software, so the same standard applies to our hardware. Our current machines are less than 2 years old on average and were top specification when they were acquired. As we've started to use more and more VPCs for our classes, 1 GB of RAM has started to be a bit of a strain, particularly on BizTalk 2006 and VSTS courses that use Team Foundation Server.

By comparison, Microsoft sets the standards that Certified Partners for Learning Solutions (CPLS, formerly CTECs) are required to provide. Microsoft provides standards for levels 1-4; levels 3 and 4 are the minimum for a course that uses Virtual PC. The top level 4 spec, typically used for server training, is as follows:

-Pentium II, 700 MHz
-16 GB disk
-1 GB RAM
-4 MB video card
-Super VGA resolution (800x600)

Today I signed the death sentence for our current machines, which have the following specification:

-Pentium 4 Centrino, 3 GHz
-40-60 GB disk
-1 GB RAM
-32 MB video card
-1280x1024 resolution

By comparison, the machines we are getting rid of actually dwarf the highest CPLS standard. I'm not sure who thinks you can run a BizTalk or VSTS course on that specification, regardless of what the courseware is. I really pity the student who has to sit through a BizTalk course at 800x600 resolution - on a P2, 700 MHz :(

Our next set of classroom machines will be here in 2 weeks: all new "Vista Capable" hardware.

Dell Inspiron 6400
-Centrino Core Duo, 1.83 GHz
-15.4" WSXGA+ 1680x1050 Ultrabright displays
-2 GB RAM
-80 GB SATA hard drives
-128 MB Intel 950 integrated graphics, Aero Glass capable

I've been using a dual core notebook for a few weeks now and it puts a noticeable snappiness into everything. These new classroom machines are going to really rock. Not only will the performance rock, but drawing BizTalk orchestrations or Visual Studio class diagrams at 1680x1050 will be so comfortable. I'm also going to be pushing for secondary external monitors so students can have the full dual-monitor experience.

I'm curious to see how the new wide-screen form-factor machines fit our trunks. I'm also going to be doing some perf testing using Parallels, VMware, and Virtual PC/Virtual Server to see how well they fare with the new dual core CPU.

The bottom line though is that I'm quite happy that we are able to make sure we have the latest and greatest hardware in our classroom on a 2yr refresh cycle. This is just one thing that goes into making a great learning environment for students.

Are we still talking about Stored Procedures vs. Dynamic SQL?

Rob Howard and Frans Bouma still are. And I guess I am now too. Let's summarize a few of the facts from these counterpoints:

  • Any arguments about pre-compilation or cached query plans are moot between dynamic SQL and procs. Rob has some outdated information and Frans corrects it in his post.
  • Stored procedures can offer the perf benefits Rob claims, if they are designed properly, by avoiding round trips and unnecessary data transfer when trying to get computed or aggregated data out of the database.
  • Both are susceptible to SQL injection attacks if the SQL is concatenated with parameter values (see the sketch below).
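As a reference point, parameterization avoids the concatenation problem in both camps; a minimal ADO.NET sketch (the table and column names are invented):

using System.Data.SqlClient;

public static class EmployeeQueries
{
    // The user-supplied value travels as a typed parameter,
    // never as part of the SQL text itself.
    public static SqlCommand FindByLastName(SqlConnection conn, string lastName)
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT EmployeeId, LastName FROM Employees WHERE LastName = @lastName",
            conn);
        cmd.Parameters.AddWithValue("@lastName", lastName);
        return cmd;
    }
}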

Let's talk about security. Frans thinks that role-based security is the way to get fine-grained security in your database while using embedded or dynamic SQL. Frans's solution of adding users and roles in the database is a dated technique going back to two-tier client/server systems. Web-based or otherwise distributed applications typically have a connection pool - and unless you are going to have a connection pool for each role, you can't rely on SQL Server role-based security to be your cop. Frans goes on to talk about how views can be used to encapsulate security rules just like a stored procedure.

Both Frans and Rob talk about the brittleness of SQL with regards to schema changes. Rob thinks your SQL centralization/encapsulation should occur inside of stored procedures. Frans thinks you should do this in a data access component that is part of your application. Frans hasn't really explained what his application's component does specifically, but it sounds like he prefers to dynamically create the SQL on the fly by reflecting on the schema of entities in his application.

What both of them have avoided is any realization that talking to a SQL Server database is the same problem as talking to any external service. Whose responsibility is it to provide the encapsulation and deep understanding of the underlying database schema? That question can't be answered universally. Back in May 2005, I blogged about the notion of DatabaseAsService.

Is your database a shared service between several applications? Some folks might even go as far as to say that their database is an enterprise service. Especially in this case, it makes perfect sense to encapsulate complex internal schematics inside of the single shared resource, the database. This can be done with stored procedures or views, but do you really want each application to have intimate knowledge of deep schema details? That's brittle way beyond the scope of a single application.

In other cases, your database is more like a file that your application persists its data to, and it is not a shared resource. In these cases, the database is not really a service in terms of service-oriented architecture principles. In fact, I'd go as far as to argue in these cases that the db is such an intimate part of your application's design that there should be no "mapping" of schema inside/outside of the database, and that they could/should be the same. Go ahead and make the full set of tables/schema public to your application logic.

 

Class design considerations for extension methods and anonymous types

One of my readers was watching the DNRTV episode I did on LINQ recently and had this question: 

At some point, when you're explaining object initializers and anonymous types, you say something regarding extension methods, like how they could be used with the anonymous types. I'm not sure how that'd work: if the anonymous type gets named dynamically as something like "<Projection>F__4", and if the extension is declared during compile time as something like "method (this type)", how can we extend the dynamic class?

He is correct in pointing out the difficulty. These projection or anonymous types have dynamically declared names and when you declare an extension method, you must specify the type after "this" in the argument clause.

One thing that is not obvious is that the type specified after "this" in an extension method doesn't have to be the exact type; it can be an ancestor or an interface implemented by the type you ultimately wish to extend.

public static float GetArea(this IPolygon shape){...}

As an example, the above extension method could be used over anything that implements IPolygon.

The downside is that anonymous types (and LINQ projections) inherit directly from object, and I don't expect that they will implement any special interfaces (the goal is to keep them simple for C# 3.0). What are you left to do? Create extension methods on "object"? That is certainly a theoretical option I suppose, but that seems a little bit extreme.

To be honest though, try to think of real cases when you'd want to extend an anonymous type. If the type is truly anonymous, you know absolutely nothing about it, and what assumptions can you really make about it in an extension method? Truly, in some cases, you are going to choose to implement a named type instead of an anonymous type, and it appears that we'll see refactoring support in the tools to promote an anonymous type to a real type. This is a very likely scenario.

The reasonable case I can think of, where an anonymous type is the preference (i.e. I have no burning need for a named type) but I still need (and can) extend it, is when it is used in the context of a collection or another generic type.

For a good example of that, let's take a look at some of the extension methods provided by Linq itself.

  public static class Sequence {
      public static IEnumerable<T> Where<T>(
               this IEnumerable<T> source,
               Func<T, bool> predicate) {
          foreach (T item in source)
              if (predicate(item))
                  yield return item;
      }
  }

Consider that this is an extension method for anything IEnumerable<T>. In this case, we're using this extension method against an IEnumerable collection of type <T> - a generic. That generic type could be an anonymous type. The important information is that we know something more about the anonymous type: it's used inside of an enumerable collection, and hence we can provide the value of iterating through it in the foreach, evaluating some criteria, and yielding the items that pass the criteria into another collection.
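To make that concrete, here is a sketch (the names and values are invented): the compiler infers T as the anonymous type, so Where applies without the type ever being named.

  var people = new[] {
      new { Name = "Ann", Age = 34 },
      new { Name = "Bob", Age = 19 }
  };

  // T is inferred as the anonymous type of the array elements.
  var adults = people.Where(p => p.Age >= 21);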

The Code Room: Episode 3 - Breaking into Las Vegas

It's the good guys vs. the bad guys, fighting over millions of dollars. Could this happen to you? Maybe it already has. 

http://www.thecoderoom.com/vegas/