Deleting Temporary Internet Files from the Command Line

A quickie but a goodie.  Sometimes you just need a quick way to delete temp files from IE.  In most cases for me it’s when I’m writing a webapp, so I’ve stuck this in the build properties:

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 8
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 2
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 1
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 16
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 32
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 255
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 4351

It doesn’t require elevated permissions, and it has been tested on Vista and Windows 7.  Each command deletes a different type of data: temporary files, stored form data, cookies, etc.  Enjoy.
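The numeric arguments are bit flags that can be combined with a bitwise OR. The flag names below are the commonly cited mapping, not official documentation, so treat them as an assumption; the arithmetic, at least, checks out. A quick Python sketch:

```python
# ClearMyTracksByProcess arguments as bit flags (names are an assumption
# based on commonly cited references, not official documentation).
FLAGS = {
    "history": 1,
    "cookies": 2,
    "temp_files": 8,
    "form_data": 16,
    "passwords": 32,
    "all": 255,        # everything above
    "addon_data": 4096,  # data stored by add-ons
}

def combine(*names):
    """OR together the named flags to build a single argument value."""
    value = 0
    for name in names:
        value |= FLAGS[name]
    return value

# 255 covers the basic categories; adding 4096 gives the 4351 used above.
print(combine("all", "addon_data"))  # 4351
```

This explains why the last command in the list uses 4351: it is 255 with the add-on bit (4096) set.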

Installing SharePoint 2010 on Windows 7

I installed the public beta of SharePoint 2010 on Windows 7 last night. There were several resources on the web to use as a guide; I found this one to be the best:

Setting Up the Development Environment for SharePoint Server

There are a couple of main points you need to be aware of:

  • The setup contains a config file that must be edited to allow SharePoint to be installed on Windows 7 or Vista
  • There are several prerequisites required before you install
  • There are a few hotfixes required after the install but before running the SharePoint Configuration Wizard
  • Install Visual Studio after you install SharePoint (this isn’t in the guide)

How UAC Actually Works

This post has had a few false starts.  It’s a tough topic to cover, as it’s still a very controversial subject for many people.  Hopefully we can enlighten some people along the way.

From a high-level perspective, UAC was developed to protect the user without necessarily removing administrative privileges.  Any change to the system required a second validation.  On older versions of Windows, an application running with administrative credentials could change any setting on the box.  Viruses and malware became rampant because of this openness, given that the average user had administrative credentials.  Most average users balked at the idea of having a limited user account, so Microsoft came up with an alternative for the new OS, Vista: a second form of validation.  You told the computer you wanted to make a change, and it asked, “Are you sure?”

Logically it makes sense.  Consider an instance where a devious application wanted to change some setting; because Windows wanted to verify it was OK to make this change, it asked, “Are you sure?”  If you responded no, the change didn’t happen.  Simple enough.  However, here we start running into issues.  There are three perspectives to look at.

First, the end user.  Simple changes to basic settings required validation.  This annoyed most of them, if not all of them.  They didn’t care why it was asking; they just wanted to delete shortcuts from their Start menu.  Their reaction: turn off UAC.  Bad idea, but security loses out to usability in the case of the end user.

Second, the irate IT Pro/Developer.  Most people working in IT make changes to system settings constantly.  Given that, the UAC prompt would be seen many times a day, and it would, for lack of a better word, piss that person off.  They didn’t care what security it provided; it was a “stupid-useless-design” that shouldn’t have been created.  Their reaction: turn off UAC.  Once again, security loses out to usability.

Third, the knowledgeable IT Pro/Developer.  Not a lot of people fell into this category.  However, these tended to be the same type of people who fit into the Lazy Admin category as well.  When managed properly, UAC wasn’t all that annoying because it wasn’t seen all that often.  Set it and forget it, and you don’t ever see the prompt.  If you create the system image properly, you don’t have to constantly keep changing settings.  It’s a simple enough idea.

But…

Application compatibility is a pain.  Most applications didn’t understand UAC, so they weren’t running with validation and generally broke when they tried to do things they really shouldn’t have been doing in the first place.  These are things like manipulating registry keys that don’t belong to them, writing to system folders, reading data from low-level system APIs, etc.  This was reason #1 for disabling UAC.

And now…

With the general availability of Windows 7 about 2.5 hours from now, it seems like a good time to discuss certain changes to UAC in the latest version of Windows.  The biggest, of course, is when Windows decides to check for validation.

Windows 7 introduces two new levels of UAC.  In Vista there was Validate Everything or Off.  Windows 7 added “Do Not Notify Me When I Make Changes to Windows Settings”.  This comes into effect when the user makes a change to a Windows setting, like display resolution.  Windows is smart enough to realize it’s the user making the change, and allows it.  The second additional level is the same as the first, except it doesn’t dim the desktop.

Now we get into some fun questions. 

  • How does Windows know not to show the prompt?  It’s fairly straightforward.  All Windows executables that were released as part of the OS are signed with a certificate.  All executables signed with this certificate are allowed to run if started by the user.  This is only true for Windows settings, though.  You cannot implement this with 3rd-party applications.  There is no auto-allow list.
  • How does Windows know it’s a user starting the application?  Lots of applications can mimic mouse movements or keyboard commands, but those occur at a higher application level than an actual mouse move.  Input devices like mice and keyboards have extremely low-level drivers, and only commands coming from these drivers are interpreted as user input.  You cannot spoof these commands.
  • Can you spoof mouse/keyboard input to accept the UAC request?  No.  The UAC prompt is created on a separate Windows desktop.  Other well-known desktops include the locked screen, the login screen, and the CardSpace admin application.  No application can cross these desktops, so an application running on your personal desktop cannot push commands into the UAC desktop.

Mark Russinovich has an excellent article in TechNet Magazine that goes into more detail about changes to the UAC.  Hopefully this post at least covered all sides of the UAC debate.

ASP.NET WebForms are NOT Being Overthrown by MVC

It’s always a fun day when the man himself, ScottGu, responds to my email.  Basically it all started last week at TechDays in Toronto (pictures to follow, I promise).

Quite a few people asked me about MVC, and whether or not it will replace Web Forms.  My response was that it wouldn’t, but I didn’t have any tangible proof.  I discussed new features in .NET 4.0, and how the development is still going strong for future releases.  Some didn’t buy it.

So, earlier today I emailed Scott and asked him for proof.  This was his response:

Hi Steve,

Web Forms is definitely not going away – we are making substantial improvements to it with ASP.NET 4.0 (I’m doing a blog series on some of the improvements now).  ASP.NET MVC provides another option people can use for their UI layer – but it is simply an option, not a replacement.

In terms of the dev team size, the number of people on the ASP.NET team working on WebForms and MVC is actually about equal.  All of the core infrastructure investments (security, caching, config, deployment, etc) also apply equally to both.

Now, MVC is new.  MVC is powerful.  MVC is pretty freakin’ cool in what it can do.  But it won’t replace WebForms.  Frankly, I like WebForms.  MVC does have its place though.  I can see a lot of benefits to using it.  It alleviates a lot of boilerplate code in certain development architectures, and that is never a bad thing.

Long Live WebForms!

Roles and Responsibilities for Managing an Enterprise Web Site

The intent of this post is to create a summary definition of the roles required to adequately manage an enterprise website. It is designed to be used in tandem with a RACI (Responsible, Accountable, Consulted, Informed) document to provide a unified management model for the web infrastructure developed.

Each role is neither inclusive nor exclusive in that any one person can qualify for more than one role, and more than one person can qualify for the same role, as long as each role has been fulfilled adequately.

In a future post I will discuss the creation of a RACI document.

Roles

  • Database Administrator

Database Administrators are charged with controlling website data resources. They use repeatable practices to ensure data availability, integrity, and security; recover corrupted data; eliminate data redundancy; and leverage tools to improve database performance and efficiency.

  • Application Administrator

Application Administrators are charged with installing, supporting, and maintaining applications, and planning for and responding to service outages and other problems including, but not limited to, troubleshooting end-user issues at the application level.

  • Server/Operating System Administrator

Server Administrators are charged with installing, supporting, and maintaining servers and other systems, as well as planning for and responding to server outages and other problems including, but not limited to, troubleshooting Application Administration issues at the Operating System level.

  • User Account/Permissions Administrator

Account Administrators are charged with managing user accounts as well as permissions for users within the system. This includes, but is not limited to, locking and unlocking user accounts, as well as resetting passwords.

  • Hardware Administrator

Hardware Administrators are charged with managing server hardware and resources. This includes, but is not limited to, deployment of servers as well as troubleshooting issues such as faulty hardware.

  • Network Administrator

Network Administrators are charged with managing physical network resources such as routers and switches and logical network resources such as firewall rules and IP settings. This includes, but is not limited to, managing routing rules as well as troubleshooting connectivity issues.

These roles were created in an attempt to define job responsibilities at an executive level.  A RACI document is then suggested as the next step to define what each role entails at the management level.

ASP.NET Application Deployment Best Practices – Part 2

In my previous post I started a list of best practices that should be followed for deploying applications to production systems.  This is a continuation of that post.

  • Create new Virtual Application in IIS

Right-click [website app will live in] > Create Application

Creating a new application provides each ASP.NET application its own sandbox environment. The benefit to this is that site resources do not get shared between applications. It is a requirement for all new web applications written in ASP.NET.

  • Create a new application pool for Virtual App
    • Right click on Application Pools and select Add Application Pool
    • Define name: “apAppName” - ‘ap’ followed by the Application Name
    • Set Framework version to 2.0
    • Set the Managed Pipeline mode: Most applications should use the default setting

An application pool is a distinct process running on the web server. It segregates processes and system resources in an attempt to prevent errant web applications from allocating all system resources. It also prevents any nasty application crashes from taking the entire website down. It is also necessary for creating distinct security contexts for applications. Setting this up is essential for high availability.

  • Set the memory limit for application pool

There is a finite amount of available resources on the web servers. We do not want any one application to allocate them all. Setting a reasonable max per application lets the core website run comfortably and allows for many applications to run at any given time. If it is a small lightweight application, the max limit could be set lower.

  • Create and appropriately use an app_Offline.htm file

Friendlier than an ASP.NET exception screen (aka the Yellow Screen of Death)

If this file exists, it will automatically stop all traffic into a web application. Aptly named, it is best used when server updates occur that might take the application down for an extended period of time. It should be styled to conform to the application’s look. Best practice is to keep the file in the root directory of the application renamed to app_Online.htm; that way it can easily be found if an emergency update were to occur.
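As a rough illustration, the file can be as simple as a static page like the following (the wording and styling are placeholders to adapt to your application):

```html
<!-- app_offline.htm: while this file exists in the application root,
     ASP.NET serves it for every request instead of the application. -->
<!DOCTYPE html>
<html>
  <head>
    <title>Down for Maintenance</title>
  </head>
  <body>
    <h1>We&rsquo;ll be right back</h1>
    <p>The application is temporarily offline for scheduled maintenance.</p>
  </body>
</html>
```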

  • Don’t use the Default Website instance
    • This should be disabled by default
    • Either create a new website instance or create a Virtual Application under existing website instance

Numerous vulnerabilities in the wild make certain assumptions that the default website instance is used, which creates reasonably predictable attack vectors given that default properties exist. If we disable this instance and create new instances it will mitigate a number of attacks immediately.

  • Create two Build Profiles
    • One for development/testing
    • One for production

Using two build profiles is very handy for managing configuration settings such as connection strings and application keys. It lessens the manageability issues associated with developing web applications remotely. This is not a necessity, though it does make development easier.

  • Don’t use the wwwroot folder to host web apps

Define a root folder for all web applications other than wwwroot

As with the previous comment, there are vulnerabilities that use the default wwwroot folder as an attack vector. A simple mitigation to this is to move the root folders for websites to another location, preferably on a different disk than the Operating System.

These two lists sum up what I believe to be a substantial set of best practices for application deployments.  The intent was not to create a list of development best practices, or to prescribe which development model to follow, but strictly to serve as a deployment aid.  It should be left to you or your department to define development models.

ASP.NET Application Deployment Best Practices – Part 1

Over the last few months I have been collecting best practices for deploying ASP.NET applications to production.  The intent was to create a document that described the necessary steps needed to deploy consistent, reliable, secure applications that are easily maintainable for administrators.  The result was an 11-page document.  I would like to take a couple of excerpts from it and essentially list what I believe to be key requirements for production applications.

The key is consistency.

  • Generate new encryption keys

The benefit of doing this is that internal hashing and encryption schemes use different keys between applications. If an application is compromised, any private keys that are recovered will have no effect on other applications. This is most important in applications that use Forms Authentication, such as the members’ section. This Key Generator app uses the built-in .NET key generation code in RNGCryptoServiceProvider.
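The linked Key Generator app is built on RNGCryptoServiceProvider; as a rough equivalent sketch of the same idea, here is a Python version that draws from the OS cryptographic RNG. The key sizes are assumptions (64 bytes for a validationKey, 32 bytes for an AES-256 decryptionKey), not something the post specifies:

```python
import secrets

def generate_machine_key_pair():
    """Generate random hex keys sized like ASP.NET machineKey values.
    Sizes are assumptions: 64 bytes for validationKey, 32 bytes for an
    AES-256 decryptionKey. .NET's RNGCryptoServiceProvider and Python's
    `secrets` module both draw from the OS cryptographic RNG."""
    validation_key = secrets.token_hex(64).upper()   # 128 hex chars
    decryption_key = secrets.token_hex(32).upper()   # 64 hex chars
    return validation_key, decryption_key

vk, dk = generate_machine_key_pair()
print(len(vk), len(dk))  # 128 64
```

The generated values would be pasted into each application's machineKey element so that no two applications share keys.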

  • Version and give Assemblies Strong Names

Use AssemblyInfo.cs file:

[assembly: AssemblyTitle("NameSpace.Based.AssemblyTitle")]
[assembly: AssemblyDescription("This is My Awesome Assembly…")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("My Awesome Company")]
[assembly: AssemblyProduct("ApplicationName")]
[assembly: AssemblyCopyright("Copyright © 2009")]
[assembly: AssemblyTrademark("TM Application Name")]
// Leave the culture empty for main assemblies; only satellite resource assemblies set one
[assembly: AssemblyCulture("")]
// Version attributes drive the assembly version that strong naming binds to
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]

Strong names and versioning are the backbone of .NET assemblies. They help distinguish between different versions of assemblies, and provide copyright attributes for code we have written internally. This is especially helpful if we decide to sell any of our applications.

  • Deploy Shared Assemblies to the GAC
    • Assemblies such as common controls
    • gacutil.exe -i "g:\dev\published\myApp\bin\myAssembly.dll"

If any assemblies are created that get used across multiple applications they should be deployed to the GAC (Global Assembly Cache). Examples of this could be Data Access Layers, or common controls such as the Telerik controls. The benefit to doing this is that we will not have multiple copies of the same DLL in different applications. A requirement of doing this is that the assembly must be signed and use a multipart name.

  • Pre-Compile Site: [In Visual Studio] Build > Publish Web Site

Any application that is in production should be running in a compiled state. What this means is that applications should not have any code-behind files or App_Code class files on the servers. This will limit the damage if our servers are compromised, as an attacker will not be able to modify the source.

  • Encrypt SQL Connections and Connection Strings

Encrypt SQL Connection Strings

Aspnet_regiis.exe -pe connectionStrings -site myWebSite -app /myWebApp

Encrypt SQL Connections

Add ‘Encrypt=True’ to all connection strings before encrypting

SQL connection strings contain sensitive data such as username/password combinations for access to database servers. These connection strings are stored in web.config files, which are stored in plain text on the server. If malicious users access these files, they will have credentials to access the servers. Encrypting the strings prevents the config section from being read.

However, encrypting the connection string is only half of the issue. SQL transactions are transmitted across the network in plain-text. Sensitive data could be acquired if a network sniffer was running on a compromised web server. SQL Connections should also be encrypted using SSL Certificates.
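For illustration, a connectionStrings section with Encrypt=True added, before running aspnet_regiis against it (the server, database, and connection names here are hypothetical):

```xml
<connectionStrings>
  <!-- Encrypt=True makes the SQL client negotiate SSL for the connection -->
  <add name="MyAppDb"
       connectionString="Data Source=dbServer;Initial Catalog=myAppDb;Integrated Security=True;Encrypt=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```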

  • Use key file generated by Strong Name Tool:

C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin\sn.exe

“sn.exe -k g:\dev\path\to\app\myAppKey.snk”

Signing an assembly provides validation that the code is ours. It will also allow for GAC deployment by giving the assembly a signature. The key file should be unique to each application, and should be kept in a secure location.

  • Set retail="true" in machine.config

<configuration>
  <system.web>
    <deployment retail="true" />
  </system.web>
</configuration>

In a production environment, applications should not show exception errors or trace messages. Setting the retail property to true is a simple way to turn off debugging and tracing, and to force the application to use friendly error pages.

In part 2 I continue my post on more best practices for deployment to a production environment.

Naming Conventions can be Your Enemy

Or your ally in the fight against technology management.  Earlier this week I was given the task of naming some new servers, which is pretty much SOP.  The problem is, we don’t have a naming standard.  As such, I may choose a name that annoys someone, or they may choose a name that annoys me.  This becomes very political.  We don’t want to name things in a way that annoys people.  It’s a bad idea.  And, much to my dismay, I said something this morning that was pretty much just insulting to one of my team members.

I could have given loads of excuses, but it wouldn’t have mattered.  I was being petty.  Man, that’s a bad idea in an office.  It divides teams, and man, that’s *really* bad in an office.  The reason it came about was that a few people were talking about moving to “fun” server names, as opposed to functional server names.  Examples of this would be Cygnus or Badger, as opposed to GR-SQLCluster1.

The reasons behind it being:

  • It’s more secure if the attacker doesn’t know what the server does based on its name
  • Server roles change over time, so GR-SQLCluster1 might become relegated to an apps server
  • Sections of functional names become redundant
  • Names can be organized by type, e.g. birds, galaxies, different words for snow, etc.

At first glance, they make great sense.  However, after a little time to digest the reasons, a few things become clear.

  • If an attacker is able to get to the server, to the point that they know its name, you are already screwed
  • A good practice is to rebuild the server if it changes roles, and with that change the name
  • People don’t want to connect to the Badger Server
  • You need a reference list to figure out what the Cygnus server does/where the Cygnus server physically is
  • If you want to create DNS entries to provide functional names to it, that’s another level of complexity to manage
  • What happens when you run out of server names?

Given this list, it now becomes an interesting debate.  But I have one question for you:

As a developer, would you name a variable ‘badger’ if it was holding a shopping cart?  Not a chance.  You would only do that if it were badger-related, and even then you are better off with ‘meanLittleWoodlandCreature’ in case you change something.

In my response I called the security reason laughable.  Again – petty and a really, really, really bad idea when in a team discussion.  Obviously I was in a pissy mood for some reason, or maybe in a holier-than-thou mood, thinking I knew more about the topic.  I tend to do that.

I think what really made me do it was that we are developers, not administrators.  It’s not our job to name servers.  So why were we?  I didn’t want to piss anyone off, I just wanted to name the server so we could move on to the next stage of the deployment.  This situation could have easily been averted.

If we had a naming convention for our servers, regardless of fun vs. functional, I could have followed the convention and washed my hands of the problem.  So I guess the question is, why don’t we have one?  Lots of companies don’t have them.  And I think it’s because of stagnant server growth.

If you are only setting up a couple of servers every so often, you aren’t bogged down with these types of questions.  You have time to discuss.  The problem we are having, I think, is that we have increased our server growth dramatically in the last little while, which hasn’t given us enough time to discuss names as a group.  I was rushing to get the server into production because the administrators were busy working on other tasks that were filed under the category “Do Now Or ELSE!”

So I think we need a naming convention.  A functional naming convention.  It will prevent a world of hurt down the road.  Now to get buy in, and ask for forgiveness.  I still have lots to learn.

Move Their Cheese! (and Change the Design)

I tend to complain a lot.  Which, frankly, doesn't do much for whatever I'm complaining about.  In most cases, it comes down to "okay, here is a problem, now someone else go and fix it."  There is a direct correlation to how many people I annoy, too.  The number of people I annoy increases as the magnitude of my complaining-ness (hey, a new word) increases:

[Graph: number of people annoyed rising with the magnitude of complaining]

If I wanted to change something, obviously I’m going about it the wrong way.  However, there is a direct correlation between how often I do something wrong and the likelihood I will get it right.  See previous image.  What that means is if I keep screwing something up, eventually I am bound to get it right.  However, what is not necessarily apparent in the chart is that if I do nothing, I won’t improve upon my actions.  Maybe it is apparent, I don’t know – I’m still working on it.

The reason I bring this up is because I keep hearing people bash/complain/hate the Office Ribbon and application Ribbons through Windows 7:

The major complaint has been that people can’t find what they are looking for anymore.  There aren’t any menus, so they can’t figure out how to set [insert obscure property].  It doesn’t make sense to them.  They now have to change the way they think about the application.  What is unfortunate about this is that menus are a horrible interface.  You shouldn’t have to dig through six layers of menus to change a single property, and that’s what Office 2003 became.  The Ribbon has its own problems, but it also increases user productivity greatly when the user knows how to use the Ribbon effectively.  Therein lies a major problem.

Most end-users don’t like when you move their cheese.

Well, now we have a problem, because people also want improved systems.  Improve the system, but don’t change it.  This paradox is why fundamentally different, game-changing designs aren’t seen all that often.  We stick with what we already know, because if we deviate, people will complain.  It’s a very tough way to create a better interface.

So how do you create a better interface?  You keep changing it.  Guaranteed, the first couple of designs are going to annoy people: e.g. the Ribbon.

This is good.

If you keep failing at designs, that means eventually you are bound to figure out what kind of interface works best.  You will never figure it out if you never change.  Without MicroBating MasterSoft’s (hey look, two new words) ego, I must say that Microsoft is doing well in this area.  They keep making lousy design decisions.  See the Expression Blend UI, and listen to most non-technical office workers using Office 2007.  I’m sure there are quite a few instances in other applications as well.  However, and I must make this clear, Microsoft is doing the right thing.  They are actively trying to create better interfaces.  Yes, it will piss people off (it’s pissed me off quite a few times), but at least they are making the effort.  And that’s what counts.

EDIT: P.S. I do like the Ribbon.

Stop Complaining About Software Expenses

It’s been a long week, and it’s only Monday.  It all started with an off-the-cuff comment.  It was of a petty nature, and it certainly wasn’t accurate.  It seems that is usually the case with petty comments.

I was berated for suggesting SharePoint Services as a replacement for our ageing intranet, and the commenter responded with a quick “SharePoint?  Microsoft makes that; it’ll cost too much.  Our current Java site works just fine, and it’s free.”  Or something of that nature.

How do you respond to a petty comment?  It’s pretty damn hard:

  1. While Microsoft Office SharePoint Server 2007 does cost money for licensing, Windows SharePoint Services 3.0 (which MOSS is built on) is free.  Not free as in speech, but free as in beer.  Always has been. 
  2. Java is a terrible language for websites.  It’s slow, and none of the developers in the company know Java.  We all program with .NET languages.
  3. The current intranet is running on an AS/400.
  4. The bulk of the stuff we do on our current intranet could very easily be done in SharePoint, without any development.  And, we can also increase productivity with the added features of team workspaces and free templates for other departments.
  5. The only cost will be in man-hours setting the server up, and migrating content.

Those have been my main arguments since I started working here.  We are a Microsoft shop, but very often choose non-Microsoft products.  Hmm…

The main reason we don’t use Microsoft products is cost.  Plain and simple.  Ironically, that is also the same reason WHY we use Microsoft products.

We use SQL Server, Windows Server 2008, Active Directory (finally!), IIS, MOSS (soon), and we program in C#.  We don’t use Office 2007, only Office 2003; some computers are still on Windows 2000 and XP.  Only one computer is running Vista, and two are running Windows 7.  But then again, we are a not-for-profit company.  Budgets are tight.

This post is NOT a comment on our current state of technology, because like I said in a previous post, we do a pretty good job of staying on the cutting edge in a few cases.

This post IS a comment on the people out there who think cost is the only thing to look at when evaluating a product.  For the love of god, STOP bitching about price.  START bitching about quality.

I can’t stand bad software.  People don’t pay for good software, but then complain about its quality.  Come on!  There is a formula out there that calculates the cost of a piece of software over time.  It takes into account the initial cost and the cost of the updates that follow.  It’s a simple y = mx + b formula.

Now, when software has a higher initial cost, you tend to assume it’s of higher quality.  Put this into the equation, and the number of updates, and the cost to implement those updates, goes down.  Over the life of the product, it’s cheaper to go with the software that is initially more expensive.  This is basic business.
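The linear model alluded to above can be sketched in a few lines of Python. The dollar figures are made up, purely to illustrate how a higher up-front price can still win over the software's lifetime:

```python
def total_cost(initial_cost, cost_per_update, num_updates):
    """y = m*x + b: b is the up-front price, m the cost to apply each update."""
    return initial_cost + cost_per_update * num_updates

# Hypothetical numbers: the cheaper-up-front product needs more expensive
# maintenance, so the pricier product costs less over twenty updates.
cheap_upfront = total_cost(initial_cost=0, cost_per_update=500, num_updates=20)
premium = total_cost(initial_cost=5000, cost_per_update=100, num_updates=20)
print(cheap_upfront, premium)  # 10000 7000
```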

What this basic business formula doesn’t show you is the added headaches you get with crappy software.  You tend to end up with silos of systems, and silos of data.  You don’t get integration.  This is where the cost sky rockets.  Or more accurately, this is where productivity decreases.

Ironically…

SharePoint Services 3.0 is free.  It doesn’t cost anything to use.  It’s easy to use, and integrates with most of our internal systems.  I just ruined my entire argument.  Sorta.  SharePoint is a quality piece of software, and over time, it will cost less to use and maintain than any of the other intranet/middleware applications out there.  Most people don’t realize this.

I’ll probably get flak for this one:  Most people don’t complain about software expenses.  They complain about Microsoft expenses.

  • “We give Microsoft too much money, and don’t get enough in return.”
  • “There must be better software vendors out there than Microsoft that are cheaper.”
  • “Why bother upgrading; XP Works fine.”

Have you seen the cost of a friggen Oracle license?  What about IBM’s iSeries?  Novell’s GroupWise?  My jaw dropped when I saw the cost of these things.  I can’t say a single nice thing about GroupWise.  It’s a terrible product.  IBM’s iSeries is pretty good, but you’re limited in what you can do with it.  Oracle knows databases, but has a license cost higher than a good chunk of a department’s salary.

Microsoft gets most of our money because it has quality products, at a good price.  Look at a few competing vendors products and compare cost and quality as well as the ability to integrate across platforms.  Revelation is a wonderful thing.  You might think twice before settling on cost.