I also have to do data change auditing on our database. Our current implementation has been in place for several years, but I am looking at changing it. The new approach is based on an article Itzik Ben-Gan wrote a couple of years ago. Below is a summary of the current approach and the proposed approach. Any comments would be greatly appreciated.
Currently auditing is done with 3 "INSTEAD OF" triggers (trg<tableName>Insert, trg<tableName>Update, trg<tableName>Delete) on each table and 2 auditing tables (AuditLog and AuditDetail).
The insert trigger copies the data from Inserted to a temp table and calls a procedure that populates the AuditLog table with the table name, table PK, operation, who made the change, the time of the change, and what computer the change was made from. The procedure then inserts a row for each column of the table into the AuditDetail table; each row contains the column name, old value (always NULL in this case), and new value. Once the procedure completes, the temp table is dropped and the data from INSERTED is inserted into the base table.
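For an illustrative dbo.Customer table (PK CustomerID, plus an Email column), the insert path might look roughly like this sketch. All table, column, and audit-schema names here are hypothetical, and the real implementation drives the per-column AuditDetail inserts from a shared procedure rather than hard-coding them:

```sql
-- Rough sketch of the current INSTEAD OF INSERT trigger; hypothetical names.
CREATE TRIGGER trgCustomerInsert ON dbo.Customer
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT * INTO #Ins FROM inserted;   -- copy for the audit procedure

    -- One AuditLog row per inserted row: table, PK, operation, who/when/where
    INSERT INTO dbo.AuditLog (TableName, TablePK, Operation, ChangedBy, ChangedAt, HostName)
    SELECT 'Customer', i.CustomerID, 'I', SUSER_SNAME(), GETDATE(), HOST_NAME()
    FROM #Ins AS i;

    -- One AuditDetail row per column; old value is always NULL for an insert
    -- (FK back to AuditLog omitted for brevity; the real procedure generates
    -- one such statement per column from the catalog views)
    INSERT INTO dbo.AuditDetail (ColumnName, OldValue, NewValue)
    SELECT 'Email', NULL, i.Email FROM #Ins AS i;

    DROP TABLE #Ins;

    -- Finally apply the change the INSTEAD OF trigger intercepted
    INSERT INTO dbo.Customer SELECT * FROM inserted;
END;
```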
The update trigger copies data from Inserted and Deleted into temp tables and calls a procedure that populates the AuditLog table just like the insert trigger, then compares the inserted and deleted temp tables column by column and inserts a row into the AuditDetail table for each column that has changed. The temp tables are then dropped and the base table is updated.
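The changed-column detection amounts to a NULL-safe comparison per column, something like the sketch below. The key and column names are hypothetical, and a real implementation repeats (or dynamically generates) this block for every column:

```sql
-- Insert an AuditDetail row only when the column value actually changed,
-- treating NULL -> value and value -> NULL as changes too.
INSERT INTO dbo.AuditDetail (AuditLogID, ColumnName, OldValue, NewValue)
SELECT @AuditLogID, 'Email', d.Email, i.Email
FROM #Ins AS i
JOIN #Del AS d ON d.CustomerID = i.CustomerID
WHERE i.Email <> d.Email
   OR (i.Email IS NULL AND d.Email IS NOT NULL)
   OR (i.Email IS NOT NULL AND d.Email IS NULL);
```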
The delete trigger copies data from Deleted into a temp table, calls a procedure that populates the AuditLog table and inserts a row for each column of the table into the AuditDetail table, and then deletes the rows from the base table.
Pros: Only 2 additional tables required.
Cons: Performance is terrible on a table with millions of records. Bulk changes to a table take hours.
What I am considering for the new approach is this: one trigger on each table and 3 additional tables per audited table. The trigger identifies the type of statement that fired it and the number of rows affected. It uses table variables instead of temp tables, which makes it possible to roll back a change and still log the attempted change (table variables are not affected by ROLLBACK). It also has logic to log attempted changes and block them (e.g. changing the PK column or integrity violations).
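The statement-type detection can be done from the inserted and deleted pseudo-tables alone. A sketch of the opening of such a trigger, with hypothetical names, might be:

```sql
CREATE TRIGGER trgCustomerAudit ON dbo.Customer
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Both pseudo-tables populated => UPDATE; only inserted => INSERT;
    -- only deleted => DELETE.
    DECLARE @DmlType char(1);
    SELECT @DmlType = CASE
        WHEN EXISTS (SELECT * FROM inserted)
         AND EXISTS (SELECT * FROM deleted) THEN 'U'
        WHEN EXISTS (SELECT * FROM inserted) THEN 'I'
        ELSE 'D'
    END;

    -- Number of rows affected by the triggering statement
    DECLARE @Rows int;
    SELECT @Rows = CASE WHEN @DmlType = 'D'
                        THEN (SELECT COUNT(*) FROM deleted)
                        ELSE (SELECT COUNT(*) FROM inserted) END;

    -- Table variables retain their contents across ROLLBACK TRANSACTION,
    -- so a blocked change can be rolled back and still written to the audit
    -- tables afterward.
    DECLARE @Ins TABLE (CustomerID int, Email varchar(100));
    INSERT INTO @Ins (CustomerID, Email)
    SELECT CustomerID, Email FROM inserted;
END;
```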
Each audited table will have a <TableName>AuditHeader table that contains the DML type, date of change, who made the change, the application name, what computer the change was made from, a failed flag, and a comment if the attempted change failed or was blocked.
Each audited table will also have a <TableName>InsAuditDelDetail table that contains a FK to the header table and a column for each of the columns in the base table. This table stores the full row of data for inserts and deletes.
The final table is <TableName>AuditUpdDetail. It contains a FK to the header table plus the column name, old value, and new value for each column that changed in the update.
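Put together, the three per-table audit tables could be sketched like this for a hypothetical dbo.Customer base table (column names and types are illustrative only):

```sql
-- One header row per audited statement: type, who/when/where, failure info
CREATE TABLE dbo.CustomerAuditHeader
(
    AuditID   int IDENTITY PRIMARY KEY,
    DmlType   char(1)      NOT NULL,            -- 'I', 'U' or 'D'
    ChangedAt datetime     NOT NULL DEFAULT GETDATE(),
    ChangedBy sysname      NOT NULL DEFAULT SUSER_SNAME(),
    AppName   sysname      NOT NULL DEFAULT APP_NAME(),
    HostName  sysname      NOT NULL DEFAULT HOST_NAME(),
    Failed    bit          NOT NULL DEFAULT 0,
    Comment   varchar(500) NULL                 -- why the change failed/was blocked
);

-- Full-row image for inserts and deletes: one column per base-table column
CREATE TABLE dbo.CustomerInsAuditDelDetail
(
    AuditID    int NOT NULL REFERENCES dbo.CustomerAuditHeader (AuditID),
    CustomerID int,
    Name       varchar(100),
    Email      varchar(100)
);

-- One row per changed column for updates
CREATE TABLE dbo.CustomerAuditUpdDetail
(
    AuditID    int NOT NULL REFERENCES dbo.CustomerAuditHeader (AuditID),
    ColumnName sysname       NOT NULL,
    OldValue   varchar(8000) NULL,
    NewValue   varchar(8000) NULL
);
```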
Pros: Performance is very good. I ran a change of several hundred thousand records and it took about 4 to 6 minutes. A commercial package using two audit tables ran for 20 minutes and then crashed. The current version ran for several hours before I killed it.
It also avoids the hot spots created when all user changes hit just 2 tables (the detail table in particular).
Cons: A lot of extra tables. It may also be more difficult to generate a view of all changes, depending on requirements.
Admittedly, bulk changes do not happen often, but when they do it is a killer, and we are seeing performance problems under normal use with the current approach when the log grows too large. Some of this can be mitigated by table and index changes.
The <TableName>AuditUpdDetail table could be eliminated and the <TableName>InsAuditDelDetail table used for updates as well. The before and after values would then have to be determined by comparing the two records. I have seen this approach too. Another downside is that you are storing an entire record even if only one column changed (a big hit for tables with image data, file attachments, etc.)
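Deriving the changed columns from two full-row images could look something like the sketch below. It assumes, hypothetically, that the combined detail table carries a RowImage flag marking the before/after row of each update, and it uses CROSS APPLY (VALUES ...), which needs SQL Server 2008 or later:

```sql
-- Unpivot the before ('B') and after ('A') images of each update into
-- one (ColumnName, OldValue, NewValue) row per changed column.
-- Note: the ISNULL comparison conflates NULL with '' for simplicity.
SELECT b.AuditID, v.ColumnName, v.OldValue, v.NewValue
FROM dbo.CustomerInsAuditDelDetail AS b
JOIN dbo.CustomerInsAuditDelDetail AS a
  ON a.AuditID = b.AuditID
 AND a.CustomerID = b.CustomerID
CROSS APPLY (VALUES
    ('Name',  CAST(b.Name  AS varchar(200)), CAST(a.Name  AS varchar(200))),
    ('Email', CAST(b.Email AS varchar(200)), CAST(a.Email AS varchar(200)))
) AS v (ColumnName, OldValue, NewValue)
WHERE b.RowImage = 'B'
  AND a.RowImage = 'A'
  AND ISNULL(v.OldValue, '') <> ISNULL(v.NewValue, '');
```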