This is a trick I got from Paul Murphy at the Toronto Security Briefing, which was very good by the way - I thought Paul did a great job. If you missed it and would like to see the slides and demos, visit Paul's blog; he posted them last night: http://www.paul.bz/blog.aspx.
Back to the tip. You would think that after a security briefing my tip would be about security, but to be honest I need time to digest the tons of information presented.
This is a really simple little thing I knew how to do but never thought to do.
As a .NET developer it is inevitable that you will be opening the Visual Studio command prompt to run some command line utility, whether it's Sn.exe, GacUtil.exe, or one of the many other utilities you can run from the command line. It is suggested (with good reason) that you develop logged in as a regular user, not as an administrator. If you develop as an administrator, there is a good chance that when the application is run by a non-administrator, some code will not run due to the difference in permissions between the two accounts.
Wouldn't it be nice if the command prompt looked different when you opened it as administrator, so you would know visually, right away, that you are running that session as an administrator?
Here is how to do it:
- Open the Visual Studio Command Prompt
- Click the Command Prompt icon (control box) and select Properties
- Click the Advanced button
- Select "Run with different credentials" (when opening the command prompt you will now be prompted to log in as another user)
- Click OK and then switch to the Colors tab
- Select another background colour for the command prompt (you can change whatever you want about the command prompt as your visual cue)
- When you click OK on the properties dialog you will be asked if you want to Apply changes to just this window or Modify the shortcut that started this window. Select the second option.
From now on when you open the Visual Studio Command Prompt as Administrator it will look different than the normal Command Prompt.
Thanks for the tip Paul.
Many people think that DataSets are stored internally as XML. What people need to know is that DataSets are serialized as XML (even when done binary), but that doesn't mean they are stored as XML internally. Although we have no easy way of knowing for sure, it's easy to take a look at the memory footprint of a DataSet compared to an XmlDocument.
My thinking was that if DataSets really were stored internally as XML, then in theory they should be larger than an XmlDocument, since the existence of BeginLoadData/EndLoadData implies there are internal indexes maintained along with the data.
It's not easy to get the size of an object in memory, but here is my attempt.
long bytecount = System.GC.GetTotalMemory(true);
DataSet1 ds = new DataSet1();
ds.EnforceConstraints = false;
// Turn off index maintenance while loading - see the BeginLoadData discussion below.
foreach (DataTable table in ds.Tables)
    table.BeginLoadData();
ds.ReadXml("orders.xml"); // file holding the Orders/Order Details extract (name assumed)
bytecount = System.GC.GetTotalMemory(true) - bytecount;
MessageBox.Show("Loaded - Waiting. Total K = " + (bytecount/1024).ToString());
long bytecount2 = System.GC.GetTotalMemory(true);
System.Xml.XmlDocument xmlDoc = new System.Xml.XmlDocument();
xmlDoc.Load("orders.xml"); // same file as above
bytecount2 = System.GC.GetTotalMemory(true) - bytecount2;
MessageBox.Show("Loaded - Waiting. Total K = " + (bytecount2/1024).ToString());
I tried these examples with two different XML files, both storing the Orders and Order Details tables out of the Northwind database. The first example used the entire result set of both tables. The DataSet memory size was approximately 607K; the XmlDocument was 1894K, over 3 times larger. In a second test, I used only 1 record in both the Orders and Order Details tables. The DataSet in this case took 24K and the XmlDocument took 26K - a small difference. You will notice that in my DataSet example I have turned off index maintenance on the DataSet by using BeginLoadData. Taking this code out resulted in a DataSet of 669K, an increase of approximately 10%. An interesting note: if you put in both a BeginLoadData and an EndLoadData, the net size of the DataSet is only 661K. This would imply that leaving index maintenance on during loads is inefficient in memory usage.
The speed of loading from XML is a different story. Because the XmlDocument delays (I'm assuming) the parsing of its contents, the time to load the full XmlDocument from an XML file is 1/3rd of the time it takes to load the DataSet from the same XML. I would be careful about being too concerned with this, though: loading a DataSet from a relational source like a DataAdapter involves no XML parsing at all and is much faster.
If you load up Anakrino and take a look at how the DataSet stores its data, each DataTable has a collection of columns, and each column is in fact a strongly typed storage array. Each type of storage array has an appropriate private member array of the underlying value type (integer, string, etc.). The storage array also maintains a bit array that is used to keep track of which rows for that column are null. The bit array is always checked first before going to the typed storage array, returning either null or the default value. That's pretty tight.
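Here's a minimal sketch of that scheme (hypothetical names - this is just the shape of it, not the actual System.Data internals):

using System;
using System.Collections;

// Hypothetical sketch of a strongly typed storage array with null tracking.
class Int32Storage
{
    private int[] values;        // strongly typed value storage
    private BitArray nullRows;   // one bit per row: true means the row is null

    public Int32Storage(int rowCount)
    {
        values = new int[rowCount];
        nullRows = new BitArray(rowCount, true); // every row starts out null
    }

    public void Set(int row, int value)
    {
        values[row] = value;
        nullRows[row] = false;
    }

    public object Get(int row)
    {
        // Check the bit array first, then fall through to the typed array.
        return nullRows[row] ? (object)DBNull.Value : values[row];
    }
}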
So you want to see what's in the GAC. Of course, if you go to c:\windows\assembly in Explorer, you see a customized shell extension of the global assembly cache. If you want to see the actual files underneath, in the past I've always gone to the command prompt and dir'd myself into long file name oblivion.
To get rid of that shell extension, just add a new DisableCacheViewer registry entry (type DWORD) underneath the key HKLM\Software\Microsoft\Fusion and set the value to 1, and presto - it's gone. C:\windows\assembly has never looked so good. Of course, don't do this on your end users' machines, as this is really just a developer requirement to help figure out what's going on and what's really in the GAC.
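If you'd rather script it than poke around in regedit, a quick one-off using the standard Microsoft.Win32 registry classes (run it as an administrator) would be:

using Microsoft.Win32;

// Create (or open) HKLM\Software\Microsoft\Fusion and set DisableCacheViewer = 1.
RegistryKey fusion = Registry.LocalMachine.CreateSubKey(@"SOFTWARE\Microsoft\Fusion");
fusion.SetValue("DisableCacheViewer", 1); // an int value is written as a DWORD
fusion.Close();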
If that doesn't help you debug your assembly binding problems, don't forget about FUSLOGVW.exe. But that's another blog entry for another day.
For those of you in the greater Toronto area, I will be giving a presentation on Designing Service Oriented Architecture Based Applications at the Toronto Visual Basic Users Group meeting on March 16. For more information, check out their web site at http://www.tvbug.com
By now, you have seen some very exciting previews of the new .NET technologies! This session will drill down behind the scenes to give you a more in-depth look. If you have not yet witnessed Whidbey and Longhorn in action, you must absolutely make the time to attend this event. This session will introduce some of the new features that will be available to developers in the next release of the Windows .NET Framework SDK and Visual Studio .NET, code-named Whidbey. Additionally, time will be spent examining the new subsystems that are planned for the next major release of the Windows operating system, code-named Longhorn, including the Avalon presentation subsystem, WinFS, and Indigo.
So it would seem I'm upstaged by Steve Ballmer, who is coming to town the same day as the CTTDNUG presentation I was to give about Whidbey. So in the interest of the greater good, my talk has been postponed until Mar 31.
The “Ballmer Developer Briefing” is mostly about Security...if that interests you?
What do you mean “IF” - of course that should matter to you. It should matter to everybody. Writing secure code isn't just about logging in, you know. It's about keeping your code safe and, more importantly, your end users' machines and data safe - not allowing your software to act as a gaping hole into their systems or data, be it through spoofing or SQL injection attacks. Writing code these days is more of a liability than it has ever been, and we all have to be responsible - so do yourself and the rest of the world a favour and brush up on your knowledge of security. Either that, or hire a good lawyer.
I'm so convinced this is an important event (and sorry that I had to cancel my presentation) that ObjectSharp is co-sponsoring a bus for members of the CTTDNUG that will travel to Toronto from Kitchener and back. And for those of you nowhere near Kitchener? Did I mention that parking is free?
See you there.
So you want to build your own entity objects? Maybe you are even purchasing or authoring a code-gen tool to do it for you. I like to use DataSets when possible, and people ask why I like them so much. To be fair, I'll write a list of reasons to not use DataSets and create your own entities - but for now, this post is all about the pros of DataSets. I've been on a two-week sales pitch for DataSets with a client, so let me summarize.
- They are very bindable.
This is less of an issue for Web forms, which don't support 2-way databinding, but for Win forms, DataSets are a no-brainer. Before you go and say that custom classes are just as bindable (and they could be), go try implementing IListSource, IList, IBindingList and IEditableObject. Yes, you can make your own custom class just as bindable - if you want to work at it.
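For example, getting an editable, two-way bound grid is a one-liner (assuming ds is your DataSet, dataGrid1 is a Windows Forms DataGrid, and "Orders" is just a placeholder table name):

// Two-way binding: edits in the grid flow straight back into the DataSet.
dataGrid1.SetDataBinding(ds, "Orders");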
- Easy persistence.
This is a huge one. Firstly, the DataAdapter is almost as important as the DataSet itself. You have full control over the Select, Insert, Update and Delete SQL and can use procs if you like. There are flavours for each database. There is a mappings collection that can isolate you from changes in names in your database. But that's not all that is required for persistence. What about optimistic concurrency? The DataSet takes care of remembering the original values of columns, so you can use that information in your where clause to look for the record in the same state as when you retrieved it. But wait, there's more: it keeps track of the RowState so you know whether you have to issue deletes, inserts, or updates against that data. These are all things that you'd likely have to do in your own custom class.
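A rough sketch of the pattern (the connection string, table and column names here are made up for illustration):

using System.Data;
using System.Data.SqlClient;

SqlConnection conn = new SqlConnection("server=(local);database=Northwind;integrated security=SSPI");
SqlDataAdapter adapter = new SqlDataAdapter("SELECT OrderID, CustomerID FROM Orders", conn);

// Optimistic concurrency: only update the row if it still looks like it did when we read it.
adapter.UpdateCommand = new SqlCommand(
    "UPDATE Orders SET CustomerID = @CustomerID " +
    "WHERE OrderID = @OrderID AND CustomerID = @OriginalCustomerID", conn);
adapter.UpdateCommand.Parameters.Add("@CustomerID", SqlDbType.NChar, 5, "CustomerID");
adapter.UpdateCommand.Parameters.Add("@OrderID", SqlDbType.Int, 4, "OrderID");
SqlParameter original = adapter.UpdateCommand.Parameters.Add("@OriginalCustomerID", SqlDbType.NChar, 5, "CustomerID");
original.SourceVersion = DataRowVersion.Original; // bind to the value as it was when retrieved

DataSet ds = new DataSet();
adapter.Fill(ds, "Orders");
// ... edit some rows ...
adapter.Update(ds, "Orders"); // issues inserts/updates/deletes based on each row's RowState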
- They are sortable.
The DataView makes sorting DataTables very easy.
- They are filterable.
DataView to the rescue here as well. In addition to filtering on column value conditions, you can also filter on row states.
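A quick illustration of both, on the same ds as above (table and column names assumed):

// Sort and filter through a DataView without touching the underlying DataTable.
DataView view = new DataView(ds.Tables["Orders"]);
view.Sort = "OrderDate DESC";
view.RowFilter = "CustomerID = 'ALFKI'";
view.RowStateFilter = DataViewRowState.ModifiedCurrent; // only rows changed since loading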
- Strongly Typed Datasets defined by XSD's.
Your own custom classes would probably be strongly typed too... but would they be code-generated out of an XSD file? I've seen some strongly typed collection generators that use an XML file, but that's not really the right type of document to define a schema with.
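(For the record, the typed DataSet class comes straight out of the schema via xsd.exe - e.g. xsd.exe Orders.xsd /dataset /language:CS, where Orders.xsd is whatever your schema file is - or by just dropping an XSD into your Visual Studio project.)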
- Excellent XML integration.
DataSets provide built-in XML serialization with the ReadXml and WriteXml methods. Not surprisingly, the XML conforms to the schema defined by the XSD file (if we are talking about a strongly typed DataSet). You can also stipulate whether columns should be attributes or elements and whether related tables should be nested or not. This all becomes really nice when you start integrating with 3rd party (or 1st party) tools such as BizTalk or InfoPath. And finally, you can of course return a DataSet from a Web Service and the data is serialized as XML automatically.
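The round trip is just two calls (file name assumed):

// Serialize with the schema inline, then rehydrate a second DataSet from the file.
ds.WriteXml("orders.xml", XmlWriteMode.WriteSchema);
DataSet ds2 = new DataSet();
ds2.ReadXml("orders.xml");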
- Computed Columns
You can add your own columns to a DataTable that are computed based on other values. This can even be a lookup on another DataTable or an aggregate of a child table.
Speaking of child tables, yes, you can have complex DataSets with multiple tables in a master-detail hierarchy. This is pretty helpful in a number of ways. Both programmatically and visually through binding, you can navigate the relationship from a single record in the master table to the collection of child rows related to that parent. You can also enforce the referential integrity between the two without having to run to the database. You can also insert rows into the child based on the context of the parent record, so that the primary key is migrated down into the foreign key columns of the child automatically.
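Here's a sketch of both ideas together (the table, column, and relation names are assumptions):

// Relate the parent and child tables - no trip to the database required.
ds.Relations.Add("OrderToDetails",
    ds.Tables["Orders"].Columns["OrderID"],
    ds.Tables["OrderDetails"].Columns["OrderID"]);

// A computed column on the parent that aggregates its child rows.
DataColumn count = new DataColumn("DetailCount", typeof(int));
count.Expression = "Count(Child(OrderToDetails).OrderID)";
ds.Tables["Orders"].Columns.Add(count);

// Navigate from a single parent row to its collection of child rows.
DataRow[] details = ds.Tables["Orders"].Rows[0].GetChildRows("OrderToDetails");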
- Data Validation
DataSets help with this, although it's not typically thought of as an important feature. It is, though. Simple validations can be done by the DataSet itself. Some simple checks include: data type, not null, max length, referential integrity, uniqueness. The DataSet also provides an event model for column changing and row changing (adding & deleting), so you can trap these events and prevent data from getting into the DataSet programmatically. Finally, with SetRowError and SetColumnError you can mark elements in the DataSet with an error condition that can be queried or shown through binding with the ErrorProvider. You can do this with your own custom entities by implementing the IDataErrorInfo interface.
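A couple of those hooks in action (the table and column names are assumed):

// Somewhere in your initialization: trap changes before they land in the DataSet.
ds.Tables["OrderDetails"].ColumnChanging += new DataColumnChangeEventHandler(OnColumnChanging);

private void OnColumnChanging(object sender, DataColumnChangeEventArgs e)
{
    // Flag bad data; an ErrorProvider bound to the same table will surface it.
    if (e.Column.ColumnName == "Quantity" && Convert.ToInt32(e.ProposedValue) < 0)
        e.Row.SetColumnError(e.Column, "Quantity cannot be negative");
}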
- AutoIncrementing values
Useful for columns mapped to identity columns or otherwise sequential values.
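A common trick is to hand out negative temporary keys on the client so they can never collide with the identity values the database will eventually assign (column name assumed):

DataColumn id = ds.Tables["Orders"].Columns["OrderID"];
id.AutoIncrement = true;
id.AutoIncrementSeed = -1;
id.AutoIncrementStep = -1; // -1, -2, -3, ... until real keys come back from the database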
This is not an exhaustive list but I'm already exhausted. In a future post, I'll make a case for custom entities and not DataSets, but I can tell you right now that it will be a smaller list.
One of the reasons that I blog is to help me keep track of the nuggets of information that I come across in my travels to various clients. I long ago gave up the idea that I could remember everything that I learned and as I get older, the volume that I remember seems to be decreasing, especially as a percentage of the knowledge that would be useful to have. As a result, my reasons for posting are not always altruistic. This is one such post.
The initial problem was encountered at a client last year. The situation involved trying to create a configuration for a server-style COM+ application. For a normal application, configuration settings are stored in a file named executable.exe.config. But server-style COM+ applications are all run by dllhost.exe. This means that they would use (by default) the dllhost.exe.config file, normally located in the %windir%\System32 directory. There are situations, however, where this is not acceptable. So it becomes necessary to create individual config files for different COM+ applications.
The solution involves the application.manifest file. First, in the configuration screen for the COM+ application, set the Application Root Directory to a particular directory. Any directory will do, but it needs to be different for each application that requires its own config file. In the specified directory, two files need to be added. The first is a file called application.manifest. This file is intended to describe the dependencies that are required by a particular application. However, the contents of the file for this particular purpose can be quite simple. For example, the following is sufficient.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" />
The second file, again placed into the Application Root Directory, is called application.config. The contents of this file are those of a normal config file. With these two files in place, the COM+ application now has the freedom to access its own configuration information through the regular channels. Flexibility lives!!
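In case it helps, here's what such an application.config might contain - nothing more than ordinary config sections (the key and value are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appSettings>
    <add key="connectionString" value="server=(local);database=Northwind;integrated security=SSPI" />
  </appSettings>
</configuration>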
When writing services, I often find myself having to attach to processes manually from within VS.NET. When you can't simply run the code directly from VS.NET and step your way through it, this is a common choice. Every time I have to do this, though, I cringe because I'm walking on thin ice. Sometimes it doesn't work, or it hangs, or weird things seem to happen.
I was having a particularly difficult time with a client yesterday who was debugging through some HTTP handlers when I remembered a question from one of the MCSD.NET exams that clued me into the fact that you can programmatically cause a breakpoint - that's right - I said “programmatically“.
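The class and call in question live in System.Diagnostics:

// Triggers a breakpoint; if no debugger is attached, the attach dialog appears.
System.Diagnostics.Debugger.Break();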
That nifty little class & function call will bring up a dialog box offering to let you attach to a new instance of VS.NET or to an existing one you might have open - similar to when you get an unhandled exception. For this to work, the user running the process requires UIPermission. Not surprisingly, the default aspnet user that ASP.NET normally runs under when the machine.config processModel section's user is set to “machine” does not have this permission by default. If you are a developer running IIS locally, consider temporarily changing it to “system” or some other user - but be careful, because doing so causes ASP.NET to run under the local system account, which is a security risk.
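For reference, that's the userName attribute on the processModel element in machine.config (a fragment only - the real element carries many more attributes; dev machines only!):

<processModel enable="true" userName="SYSTEM" password="AutoGenerate" />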
Too bad there is no T-SQL version of this function - maybe in Yukon.
I haven't had much chance to use many of the cool things in Windows 2003 to date, but one of the new things (that incidentally also runs on XP Pro) is a new mode of Active Directory called Application Mode - ADAM for short. I'm finally getting to do some real playing around with this for a large application I've just started working on for a client.
It's basically a standalone Active Directory that is ideal for storing your own users and roles, etc., to be used by your application in an Active Directory style - even if your company isn't using Active Directory. If you do go to AD down the road, it's a simple migration for your app. ADAM also acts as an LDAP server, which makes it a bit more open. You can really put whatever you want into ADAM, as its schema is extensible (not unlike Active Directory). The idea, though, is that you can have multiple instances of ADAM installed on your server - each containing data specific to a unique application - while AD would store more globally required data throughout the enterprise.
It's pretty typical to store this type of application-specific data in a SQL database. While that's possible, ADAM - and more specifically the underlying AD - is more geared to this type of data. A relational DB remains an ideal choice for transactionally updated data, but ADAM is a great place to store any kind of administrative data that is, for the most part, written once and then read frequently by your application.
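Since ADAM is just LDAP underneath, System.DirectoryServices talks to it the same way it talks to full AD. A minimal sketch, assuming an instance listening on the default LDAP port with a made-up application partition:

using System;
using System.DirectoryServices;

// Bind to an ADAM application partition (the port and DN here are assumptions).
DirectoryEntry root = new DirectoryEntry("LDAP://localhost:389/O=MyApp,C=US");
foreach (DirectoryEntry child in root.Children)
{
    Console.WriteLine(child.Name); // e.g. the role/user containers for this app
}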
I'm going to be playing more with this, and specifically doing some performance testing and seeing what kind of improvements can be made by using it in the middle tier, caching some of the data in a wrapper object that is hosted in COM+ and pooled.
As an aside, I find it kind of strange that Whidbey - and specifically the new ASP.NET membership/roles stuff that is built in - doesn't use ADAM, but instead opts for the classic database solution. Fortunately, the membership/role model in ASP.NET Whidbey is an extensible provider model, so I may just take a crack at creating my own provider that uses ADAM.
I should probably google that now as someone has probably already been there and done that.