Working with Default Rows

Occasionally, a situation arises where the contents of a particular data table within a DataSet *might* originate from more than one source.  For example, a client of ours has a reporting system in which each report type has a set of default columns.  When a user creates a new report (stored in the Report table in the DataSet), the default columns (the ReportColumns table) are loaded from a DEFAULT_COLUMNS table.  If any change is made to the information in ReportColumns (changing a caption, for example), all of the column information is copied and saved to a REPORT_COLUMNS table.  However, if no change is made, no information needs to be saved.  This allows changes to the default columns to be automatically propagated to any report that hasn't been 'customized'.
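
To make the fall-back behaviour concrete, here is a minimal sketch of how the columns might be fetched.  It assumes System.Data.SqlClient along with hypothetical table, column, and key names (REPORT_COLUMNS, DEFAULT_COLUMNS, ReportId, ReportTypeId); the client's actual schema and code will differ.

// Sketch only: use the customized columns if any were saved, otherwise fall back
// to the defaults for the report type.  Names are assumptions for illustration.
private void LoadReportColumns(ReportsData reportsData, int reportId, int reportTypeId, SqlConnection connection)
{
    SqlDataAdapter customized = new SqlDataAdapter(
        "SELECT * FROM REPORT_COLUMNS WHERE ReportId = @ReportId", connection);
    customized.SelectCommand.Parameters.Add("@ReportId", SqlDbType.Int).Value = reportId;
    customized.Fill(reportsData.ReportColumn);

    // No customized rows were ever saved, so load the defaults for this report type instead
    if (reportsData.ReportColumn.Rows.Count == 0)
    {
        SqlDataAdapter defaults = new SqlDataAdapter(
            "SELECT * FROM DEFAULT_COLUMNS WHERE ReportTypeId = @ReportTypeId", connection);
        defaults.SelectCommand.Parameters.Add("@ReportTypeId", SqlDbType.Int).Value = reportTypeId;
        defaults.Fill(reportsData.ReportColumn);
    }
}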

All of this functionality is easy to add to the Load method of an entity based on this DataSet.  But what happens when a Report that uses the default columns is deleted?  Normal processing would have the related ReportColumn rows deleted, followed by the corresponding Report row.  However, when the delete is performed on each of the ReportColumn rows, a DBConcurrencyException is thrown.  The message for this exception is as follows:

Concurrency violation: The DeleteCommand affected 0 records.

With a little bit of thought (actually, slightly more than a little the first time this error was encountered :), the reason for the error became apparent.  The records in the ReportColumns table didn't exist in the database.  Nor should they, since they were originally populated from DEFAULT_COLUMNS, not REPORT_COLUMNS.  However, the DeleteCommand was trying to remove them from REPORT_COLUMNS.  And when the DataAdapter executes a DeleteCommand and sees that no rows were affected, it assumes that a concurrency violation has taken place.
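
To see why zero rows are affected, consider what a typical optimistic-concurrency DeleteCommand looks like.  This is only an illustration; the column names and the exact statement generated for this DataSet are assumptions.

// A DeleteCommand built for optimistic concurrency matches every original column value.
// Column names here are assumptions for illustration only.
deleteCommand.CommandText =
    "DELETE FROM REPORT_COLUMNS " +
    "WHERE ReportColumnId = @Original_ReportColumnId " +
    "AND ReportId = @Original_ReportId " +
    "AND Caption = @Original_Caption";

// The default rows were never saved to REPORT_COLUMNS, so this statement matches
// zero rows, and the DataAdapter reports that as a concurrency violation.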

The solution was to take a little more control of the deletion process.

dataAdapter.ContinueUpdateOnError = true;
dataAdapter.Update(reportsData);

if (reportsData.ReportColumn.HasErrors)
{
    // Remove any row whose error is a concurrency violation.  These are the default
    // rows that were never saved to REPORT_COLUMNS in the first place.
    DataRow[] drs = reportsData.ReportColumn.GetErrors();
    foreach (DataRow dr in drs)
    {
        if (dr.RowError.StartsWith("Concurrency violation"))
            reportsData.ReportColumn.RemoveReportColumnRow((ReportsData.ReportColumnRow)dr);
    }

    // AcceptChanges updates the HasErrors flag now that the offending rows are gone
    reportsData.ReportColumn.AcceptChanges();

    // If the data table still has errors, then an exception needs to be thrown
    if (reportsData.ReportColumn.HasErrors)
        throw new DataException("An exception was raised while updating the ReportColumn data table: " +
            reportsData.ReportColumn.GetErrors()[0].RowError);
}

The ContinueUpdateOnError property prevents the DataAdapter from aborting the update when an exception occurs; instead, the error is recorded on the offending row.  Once the Update call completes, we check to see if the data table has any errors.  If so, we make sure that every row whose error is a concurrency violation is removed from the DataSet.  The AcceptChanges method call is required to update the HasErrors flag once all of the errors have been eliminated in this manner.

Now is this a perfect solution?  No.  It is possible that a legitimate concurrency violation will be ignored.  From the point of view of this application, that is considered an acceptable risk given the type of data involved.  If you wanted to trap the 'real' concurrency problems, you could query the REPORT_COLUMNS table using just the primary key (instead of all of the original values) whenever a concurrency violation is detected.  If a record is found, then the concurrency problem is real.  Otherwise the exception can be ignored.
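
For what it's worth, here is a rough sketch of that verification, assuming a primary key column named ReportColumnId and an already-open SqlConnection; both are illustrative assumptions rather than the client's actual code.

// Sketch only: a concurrency violation on a ReportColumn row is 'real' only if the
// row actually exists in REPORT_COLUMNS.  The key column name is an assumption.
private bool IsRealConcurrencyViolation(ReportsData.ReportColumnRow row, SqlConnection connection)
{
    SqlCommand existsCheck = new SqlCommand(
        "SELECT COUNT(*) FROM REPORT_COLUMNS WHERE ReportColumnId = @ReportColumnId", connection);

    // The row has been deleted in the DataSet, so the key must be read from the Original version
    existsCheck.Parameters.Add("@ReportColumnId", SqlDbType.Int).Value =
        row["ReportColumnId", DataRowVersion.Original];

    // Present in REPORT_COLUMNS: the delete genuinely failed, so report the violation.
    // Absent: the row only ever existed as a default, so the violation can be ignored.
    return (int)existsCheck.ExecuteScalar() > 0;
}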