SQL Server 2005 "Yukon" Virtual "Hands On" Online Labs

SQL Server 2005 “Yukon” is in beta, but if you'd rather not go through the hassle of installing it (and trying to uninstall it), you can experience it through the magic of online virtual labs.

http://msdn.demoservers.com/login.aspx?group=sql2005

Not only is it a hosted environment, but guided hands-on labs are provided for the following topics:

  • SQL Server 2005 Integration Services
  • SQL Server 2005 Introduction to SQL Server Management Studio
  • SQL Server 2005 Reporting Services
  • SQL Server 2005 Server Management Objects
  • SQL Server 2005 SQL CLR Integration
  • SQL Server 2005 SQL Query Tuning
  • SQL Server 2005 SQL Server and ADO.NET 
  • SQL Server 2005 T-SQL Enhancements
  • SQL Server 2005 Web Services
  • SQL Server 2005 XML Capabilities

    Whidbey, ObjectSpaces and Mexico, Oh My!

    I'm just heading back home from Vancouver tonight on a red eye. When I get home, I have time for a load of laundry or two and then I'm flying to Mexico. Not just for some R&R - I've also arranged to speak at the Mayan Riviera .NET User Group (MR. Nug). I'm not sure which is better: going on vacation, or being able to write it off as a business expense. Unfortunately the only Spanish I know is “Dos cervezas por favor“, so this talk will be in English. My apologies.

    I'm going to be speaking about the new DataSet that is coming in Whidbey. By the time I get there, beta 2 should be available for download and it should be public knowledge that ObjectSpaces will finally be shipping as part of the Whidbey release. I just got a preliminary build last night, so I'm going to try to do a demo of using a dataset with O/R mapping via ObjectSpaces.

    As you may have also heard, bundled into 2.0 Typed Datasets are Typed DataAdapters, also known as TableAdapters. In addition to TableAdapters, beta 2 will introduce typed forms, known as TypedForms. These will be pre-canned forms for both Web and Windows access that couple an easily painted form right on top of a dataset...with zero code. The cool thing about this is that you can simply add a column to your database table, and this will automatically change your data access layer, your middle-tier entity and your data entry forms. Likewise, you'll be able to simply drag a text box onto your TypedForms and this will automatically modify your database schema to add a new column to the table.

    There is also some discussion going on about adding a TypedReport into the dataset that couples SQL Reporting Services reports directly into your typed dataset class. This stuff is going to be so easy to use that it is likely going to make its way into InfoPath as part of the Office product, so that end users can create their own data entry forms and reports in InfoPath. This new edition of InfoPath is going to be named “InfoMaker“. More information on this is available here. This new collection of classes in the dataset is going to be collectively known as a "DataWindow". Again, more details here. No, it's not a snowy day in hell. Hard to believe, isn't it? This is surely a day to mark on your calendar.

    VSTS Architect's Boot Camp Next Week (Mar 29/31) in Ottawa & Vancouver

    Next week I'm travelling to Ottawa (Tuesday) and then Vancouver (Thursday) to do some boot camp training on Visual Studio Team System. This one-day hands-on session gives folks a chance to play with the new modelling and testing features. I'll also be demoing the project management, process guidance, and integrated source control management features. If you are interested, there are still seats left. Click here for details and here for registration.

    Say goodbye to Whidbey's System.Data.SqlServer namespace

    Pablo Castro writes in the new Data Access Team Blog that they are unifying System.Data.SqlServer classes into System.Data.SqlClient.

    If you are asking “What is System.Data.SqlServer?”, the answer is that it's an ADO.NET managed provider for accessing SQL Server 2005 - from within SQL Server 2005 (managed procs, triggers, functions, etc.). Why a different provider? Well, SqlClient uses TDS (tabular data stream), an external access protocol, to connect to SQL Server. Obviously, if you are already in SQL Server you don't want to go outside and back into your database. The SqlServer namespace provides similar classes (SqlConnection, SqlCommand, etc.) that work within the current context of your connection to the database.

    In the existing betas, MS was nice enough to give them the same class names so that you have an easy chance of porting data access code from client to server just by changing the using/Imports <namespace> at the top of your class. You have a little bit of work to do to tweak your connection (if you really need one), but otherwise this porting process works pretty well. If you want the most efficient code, you also need to break your SqlCommands into static SqlDefinitions and SqlExecutionContexts for the actual executions. You may also need to output results differently, using the SqlContext's Pipe object.

    So the product team has decided to unify, removing the SqlServer namespace and making the SqlClient components serve dual duty. This magic switch happens simply by using a connection string of “context connection=true”.
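    To make this concrete, here's a minimal sketch of what a CLR stored procedure might look like against the unified SqlClient provider, using the context connection. The class, procedure name and Customers table are just illustrative placeholders, not anything from Pablo's post:

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public class StoredProcedures
        {
            [SqlProcedure]
            public static void GetCustomerNames()
            {
                // "context connection=true" asks SqlClient to run in-process,
                // against the session that invoked the procedure - no TDS hop.
                using (SqlConnection conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    SqlCommand cmd = new SqlCommand(
                        "SELECT ContactName FROM dbo.Customers", conn);

                    // SqlContext.Pipe streams the results back to the caller,
                    // just as a SELECT inside a T-SQL proc would.
                    SqlContext.Pipe.ExecuteAndSend(cmd);
                }
            }
        }

    Point the same code at a regular connection string and, at least in principle, it becomes ordinary client-side data access - which is exactly the double-edged sword I get into below.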

    I have mixed emotions about this move. On one hand, it can make the porting process a bit smoother. You don't have to change your using/Imports and you don't have to remove your connection object. It also means you'll instantiate SqlCommands the same way.

    On the other hand, it's not obvious to me from reading somebody's code whether they are talking to the database in-context or externally, out of context. In fact, this could very well switch based on a configuration setting for the connection string. The differences are not just related to performance; they also affect the transactional semantics. A simple configuration change could easily turn code that is acting in-context as part of a transaction into an external command that is not participating in a transaction at all. This is a big deal. I don't think transactional semantics should be a “configurable” thing. My opinion extends to the COM+ component catalog, but that is a whole other blog entry.

    This move ends up simplifying an API so much that a programmer's intentions are lost, and that's not a good thing. Pablo says this is coming in SQL Server Beta 3, but he wants your opinions.

    The floodgates are...well...beginning to open

    From a couple of different sources, I'm seeing that the first set of documentation for Indigo is now available at http://winfx.msdn.microsoft.com.  And, as Andy Milligan suggests, the implication is that the bits for Indigo can't be far behind. Personally, my mouse is poised over the MSDN download page.  The rest of you probably have a life. ;)

    Back Home exploring SQL Genetics

    Over the past month I've been doing some Whidbey/Yukon training. First in Ottawa, then in Toronto (home) and most recently last week in Calgary - a city I hadn't visited in 10 years, and one that has grown up a lot since then. I really enjoyed Calgary.

    I'm spending the day getting re-acquainted with my wife and our daughters. My 6-year-old used a word this morning I hadn't heard her use before. She was showing me a fancy painting she did at school and, while comparing her technique to her peers', said that she “used the minimum amount of colors that the teacher suggested”. You know where I'm going with this, don't you?

    I asked her what “minimum” meant and she said “the least amount of work”. Ok, she's my daughter, no DNA testing required. But to further test her understanding I asked her what the minimum of 1, 4 and 7 was, and she correctly chose 1. She's no math genius, so I was happy she got it right. She even knew that the maximum was 7. I figured I'd blow her mind (and her mother's - she was listening) by asking her what the sum was. I nearly fell off my chair when she said 12.

    So I'm happy to report that SQL skills are indeed part of the human genome. Next week we'll try her on correlated subqueries.

    Getting Schema information from a Database

    Yesterday in my class I was going over the new GetSchema API in Whidbey, and I learned from one of my students about another technique I wasn't aware of - the ANSI SQL-92 Information Schema views. They are also supported in Oracle.

    There are almost too many ways of getting database schema information, but I'll try to summarize them here to see how they compare. Depending on what you want to do, one of these will be more appropriate for you.

    • A common technique is to execute a SQL statement and describe the result set. Execute a “SELECT * FROM known_table_name”. Some APIs would have you add a “WHERE 1=2“ if you can't just “describe“ the result set without incurring the row retrieval payload. You can take advantage of this technique using the DataReader's GetSchemaTable method (see the code sketch after this list). In this case you are really getting the schema of a query, not the underlying table structure per se, so it doesn't have to be constrained to a “*“ or single-table query either.
    • Query the SQL Server system tables. This is close to the metal - which can burn you - and the fragility is built in. This technique is likely to become obsolete in SQL Server Yukon with the new “sys.*“ views.
    • Slightly better than using the system tables would be using system stored procedures such as sp_tables.
    • Use the ANSI SQL-92 standard Information Schema views, such as “SELECT * FROM Northwind.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = N'Customers'”. Much nicer, and generic across anything that implements this standard (SQL Server 7.0 and up, and Oracle 9i? and up).
    • In .NET 1.0, you could also use the GetOleDbSchemaTable method of an OleDbConnection. This is somewhat hard-coded to an RDBMS and requires an OleDb driver.
    • In .NET 2.0, you can use the new GetSchema method on the connection object. In theory, the idea is that you get schema information specific to the provider - so things that are specific to the provider can be made available/discoverable. But I'll warn you, it's a pretty abstract API. It's also remotely similar at the root level to the MSXML 4.0 “GetSchema“ method, in that a) the methods have the same name, and b) they accept an argument naming the “schema“ to return. In theory, you'll get more accurate schema information for the thing you are connecting to. Collections can be defined by the provider, which differs from GetOleDbSchemaTable, which normalizes all db objects into a known set of objects. That may not work so well for new providers, object-relational databases, or who knows what next.
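    For comparison, here's a rough sketch of three of these techniques side by side in Whidbey-era C#. The connection string and the Northwind Customers table are just placeholders; point them at your own environment:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class SchemaDemo
        {
            static void Main()
            {
                // Illustrative connection string - substitute your own server/database.
                string connStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI";

                using (SqlConnection conn = new SqlConnection(connStr))
                {
                    conn.Open();

                    // 1. Describe a query's result set without pulling any rows
                    //    (the DataReader.GetSchemaTable technique).
                    SqlCommand cmd = new SqlCommand("SELECT * FROM dbo.Customers", conn);
                    using (SqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.SchemaOnly))
                    {
                        DataTable querySchema = rdr.GetSchemaTable();
                        foreach (DataRow col in querySchema.Rows)
                            Console.WriteLine("{0} ({1})", col["ColumnName"], col["DataType"]);
                    }

                    // 2. The ANSI SQL-92 INFORMATION_SCHEMA views.
                    SqlCommand info = new SqlCommand(
                        "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS " +
                        "WHERE TABLE_NAME = N'Customers'", conn);
                    using (SqlDataReader rdr = info.ExecuteReader())
                    {
                        while (rdr.Read())
                            Console.WriteLine("{0} ({1})", rdr[0], rdr[1]);
                    }

                    // 3. The new .NET 2.0 GetSchema API. Restrictions for the
                    //    "Columns" collection are catalog, owner, table, column.
                    DataTable cols = conn.GetSchema("Columns",
                        new string[] { null, null, "Customers", null });
                    foreach (DataRow col in cols.Rows)
                        Console.WriteLine("{0}.{1}", col["TABLE_NAME"], col["COLUMN_NAME"]);
                }
            }
        }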

    Not your father's NGEN

    Mr. Reid “NGEN“ Wilkes has a good article in MSDN Magazine about NGEN 2.0 coming in Whidbey. NGEN is the command-line tool that pre-compiles MSIL into native PE code, and it has been around since .NET 1.0. For large applications, NGENing your assemblies as part of your deployment can be a good performance kick.

    So what are the highlights?

    • When a dependent shared assembly is updated, your executable's NGEN cached image is invalidated. The new NGEN service can queue up requests to do “across the board“ re-NGENs to update your cached images. To accomplish this, NGEN keeps track of all of these dependencies, so when an update is deployed - bam - things are queued up and recompiled when your machine has idle time (see the command-line sketch after this list). Way cool.
    • NGEN 2.0 blows out your generic parameterized classes into pre-compiled code (almost 100% of the time), so any performance drag from expanding generics at runtime is gone if you NGEN.
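    If I have the article right, the deployment-time story looks roughly like this from the command line. The assembly name is just a placeholder, and the exact switches are worth double-checking against Reid's article:

        rem Generate a native image now, or defer it to the NGEN service's idle-time queue
        ngen install MyApp.exe
        ngen install MyApp.exe /queue:1

        rem After a dependent shared assembly is updated, queue re-compiles of stale images
        ngen update /queue

        rem Force the queued work to run immediately instead of waiting for idle time
        ngen executeQueuedItems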

    Reid makes some good arguments for taking the time to NGEN your code by describing the downside of the JITer and the overhead it imposes on memory compared to loading native images. What scares me even more is all the dynamic compilation coming in ASP.NET applications. Isn't that going to preclude using NGEN on ASP.NET applications in Whidbey?

    But alas, these performance improvements are not always a silver bullet, and in some cases JITing is more performant. The bottom line: test to make sure your assumption that NGEN improves your performance is actually correct.

    VSTS Work Item: Percentage Completed

    During one of my demos on VSTS this past week, somebody commented that developers typically want to give project managers more information on a task than just “completed” or not. What they really want is a percentage complete. John Lawrence is on the team at MS developing this stuff. The good news is that they have reworked the work items a bit, and now there is support for fields indicating “Completed Work” and “Remaining Work”. These numbers will synchronize with “percentage complete“ in MS Project.

    I'm happy to see this, as it also ties in with better metric tracking. We don't just want to know that an item was completed, but how much work it required - which may be different from the estimate. All too often, item “estimates” turn into “budgets”. This is one small reason why projects often take longer than planned: developers rarely report that a task took less than the estimate - only more, or the same. This is one of the reasons I like estimates that aren't time-based but effort-based. In other words, we estimate an item on some arbitrary scale - 1 to 5, let's say, where 1 is easy and 5 is hard. Project managers can figure out what those numbers mean later on and do things like calculate team velocity.

    More Class Designer Productivity Potential: Batch Editing.

    Daniel Moth says that he's not excited about the properties box in the class designer and would prefer to use the code editor to make those kinds of changes. It may not be obvious, but one of the things you can do with that properties pane that you can't do in the code editor is make multiple changes across several classes or several members at the same time.

    Select all of the items that you want to make a mass change to, and any common properties are shown in the properties dialog. I find this useful for decorating properties of my own components with custom attributes. Perhaps I want to change a bunch of methods to static.

    Daniel mentions another limitation. There is no full signature support on the model surface in the class designer. This makes it impossible to see the differences between your overloads. In fact, overloads are all grouped together and a count is shown.

    Another mass editing scenario would be changing the XML comments on a bunch of methods - for example, several overloads. You can't see the individual overloaded methods - just one of them, with a “+1 overloads“ next to it. Furthermore, when you change the comment for a method that is overloaded (and shown as “+2 overloads“), one would hope the comment would be applied to all of the overloads; however, the comment is only applied to the first one. Hopefully this is a bug and will be fixed. I've logged it with MS in the Product Feedback Center.