5 ways to track Database Schema changes – Part 2 – Transaction Log

Last week I published the first post of my short 5-part series, in which I demonstrate 5 different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when. Today’s post describes how to use the Transaction Log for this purpose.

You can find my previous post here:

Transaction Log

Another solution that can be used to track changes executed against your database is to read the Transaction Log file or Transaction Log backups. The Transaction Log file (and its backups) contains every transaction executed against your database. The only requirement is to have the database in the Full recovery model. In the Simple recovery model, the log records of a committed transaction can be overwritten very quickly by another one.

Also, this is something that you get for free, without the need to enable or configure any additional SQL Server functionality. Of course, besides the database backups, but you already take database backups, right?

To test this approach, you have to make some preparations. First, check the database properties to ensure that the recovery model is set to Full. Second, create a full database backup. Only from this point on is the database truly in the Full recovery model, with every transaction fully logged. Thanks to this, you are able to read logged transactions from the Transaction Log file. The same applies to reading from a Transaction Log backup; to do that, you need to create such a backup after you execute the database schema changes.
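The recovery model can be checked and, if needed, switched with a couple of statements. This is a minimal sketch; the database name comes from the demo script below:

```sql
-- Check the current recovery model of the database
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'TrackMyChanges';

-- Switch to the Full recovery model if necessary
ALTER DATABASE TrackMyChanges SET RECOVERY FULL;
```

Keep in mind that until the first full backup is taken, a database in the Full recovery model still behaves much like one in the Simple model, so the full backup step really matters.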

USE master;
GO

CREATE DATABASE TrackMyChanges;
GO

BACKUP DATABASE TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.bak';
GO

USE TrackMyChanges;
GO

-- Here execute all statements that create, modify, and drop objects.

USE master;
GO

-- Now you can check Transaction log file

BACKUP LOG TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.trn';
GO

-- Now you can check Transaction Log backup file

DROP DATABASE TrackMyChanges;

Reading Transaction Log file

To read the transaction log file, we can use the undocumented fn_dblog function. It accepts two parameters:

  • The first is a starting log sequence number (LSN) we want to read from. If you specify NULL, it returns everything from the start of the log.
  • The second is an ending log sequence number (LSN) we want to read to. If you specify NULL, it returns everything to the end of the log file.

SELECT * FROM fn_dblog(null, null);

This query returns a lot of data. Fortunately, we don’t need all the columns to check what happened to our objects, and we can easily reduce the output to the relevant rows and columns. Every object modification has to be part of a transaction, so as a first step we can list only the rows with the LOP_BEGIN_XACT operation.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    [Transaction SID],
    [Xact ID],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log - Transaction Name

Based on the [Transaction Name] we can identify transactions that changed the schema of our objects. At this point, we don’t know yet which object it was, but we can already check who modified it. The [Transaction SID] column contains the SID of the login that executed the operation, and we can use the SUSER_SNAME() function to resolve it to a name. The [Transaction Name] column simply describes what kind of change it was.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log - Transaction SID

To continue, we need to decide which particular change we want to investigate further. Let’s take the second CREATE/ALTER FUNCTION transaction and note down its Transaction ID. For me, it is 0000:0000039e.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null) 
WHERE [Transaction ID] = '0000:0000039e';

Transaction Log - One Transaction

Now, to discover what object was changed, we have to dig into the [Lock Information] column. The first LOP_LOCK_XACT operation describes a Schema Modification (Sch-M) lock on object ID 965578478:

HoBt 0:ACQUIRE_LOCK_SCH_M OBJECT: 5:965578478:0

The lock resource has the format database_id:object_id:index_id, so here it means object 965578478 in database 5.

This is our function (note that sys.objects is scoped per database, so query it in the context of our database):

SELECT * FROM TrackMyChanges.sys.objects WHERE object_id = 965578478;
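Alternatively, both IDs from the lock resource can be resolved directly with built-in metadata functions. A quick sketch, using the IDs from my run (yours will differ):

```sql
-- The lock resource "OBJECT: 5:965578478:0" encodes database_id : object_id : index_id
SELECT DB_NAME(5) AS [database_name],               -- resolves the database ID to a name
       OBJECT_NAME(965578478, 5) AS [object_name];  -- the second argument is the database ID
```

This avoids switching database context, because OBJECT_NAME() accepts the database ID as its second argument.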

Transaction Log - Function

OK. At this point, we know what object was changed, when, and by whom. However, this is the Transaction Log, and it should also contain detailed information about what exactly was changed. Can we get it? Oh yes, we can. Object definitions are stored in the internal sys.sysobjvalues table, and the definition text appears to start at byte 33 of the logged row image, hence the SUBSTRING offset in the following query.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description], 
    CAST(SUBSTRING([RowLog Contents 0],33,LEN([RowLog Contents 0])) AS varchar(8000)) AS [Definition]
FROM fn_dblog(null, null)
WHERE [Transaction ID] = '0000:0000039e'
AND [AllocUnitName] = 'sys.sysobjvalues.clst' 
ORDER BY [Current LSN] ASC;

Transaction Log - Change Details

As you can see, there are two rows. One contains the deleted old object definition, and the second one contains the inserted new value. That’s really cool! Thanks to this, we know exactly what was changed.

Reading Transaction Log backup

Reading changes directly from the Transaction Log is one approach, but you can also get the same information from Transaction Log backups. The only difference is that you must use fn_dump_dblog() instead of fn_dblog(). This function accepts 68 parameters (sic!). Fortunately, we have to provide only a few of them.

  • The first is a starting log sequence number (LSN) we want to read from. If you specify NULL, it returns everything from the start of the backup file.
  • The second is an ending log sequence number (LSN) we want to read to. If you specify NULL, it returns everything to the end of the backup file.
  • The third is the type of backup device (DISK or TAPE).
  • The fourth is the backup number within the backup file.
  • The fifth is the path to the backup file.

What about the remaining 63 parameters? They need to be specified only if you use media sets striped across multiple disk files (64 at most). In such a case, you have to provide the paths to the rest of the files. If you don’t use this feature, you simply pass DEFAULT for each of them.

BACKUP LOG TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.trn';
GO

SELECT * FROM fn_dump_dblog(
    NULL, NULL, N'DISK', 1, N'C:\Temp\TrackMyChanges.trn',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT);

Once you know what values to provide as parameters, you can easily use this function to get the same data as in the previous examples.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description]
FROM fn_dump_dblog(
        NULL, NULL, N'DISK', 1, N'C:\Temp\TrackMyChanges.trn',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log Backup - Transaction Name

What was captured by Transaction Log?

By its nature, the Transaction Log contains detailed information about every change. You can extract from it what was changed (in detail), when it was modified, and by whom.

Data retention

Here, the story is straightforward. You have access to this information for as long as you store the Transaction Log backups.
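The available retention window can be checked against the backup history that SQL Server keeps in msdb. A sketch, with the database name taken from the demo above:

```sql
-- List the transaction log backups recorded for the database
SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.type = 'L'                       -- 'L' = transaction log backup
  AND bs.database_name = 'TrackMyChanges'
ORDER BY bs.backup_start_date;
```

Any change older than the oldest log backup still on disk (or tape) is effectively lost to this technique.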

Transaction Log summary
Advantages:

  • Contains every detail
  • Info about WHAT exactly was changed
  • Access to the old and the new object definition
  • Info about WHEN the object was changed
  • Info about WHO changed the object
Disadvantages:

  • Requires the database to be in the FULL recovery model
  • Complicated and lengthy data retrieval process
  • In busy systems, it may be difficult to find the log entries we are looking for
  • Retention based on the Transaction Log backup retention

In the next part, I will demonstrate how to configure an Extended Events session to capture database schema changes.

-Marek


5 ways to track Database Schema changes – Part 1 – Default Trace

In a perfect world, only Database Administrators have access to SQL Server databases. All database schema changes go through a strict change management process, where they need to be well described and approved by a Change Advisory Board. The database schema is stored in a source code repository, and the deployed version doesn’t drift unexpectedly from its original model.

Unfortunately, we don’t live in the perfect world…

Despite the version control system, the change management process, and limited access to the server, sometimes the database schema is changed without our knowledge and supervision. It may happen in a development environment, where a bunch of people have access and the deployment process is not very strict (the common case). However, it may also happen in higher-level environments where only a limited number of people have access (the rare case, but not impossible).

Sooner or later, such unexpected schema changes start to be very problematic. They may break some functionality or create other issues (e.g. performance degradation). They may block the deployment of subsequent changes. They may simply be implemented in an inefficient way, or even just be stupid.

In such a case, various questions arise, and you as a DBA will have to answer some of them.

  • When was the XYZ object changed?
  • What modifications were applied to the XYZ object?
  • Who changed the XYZ object?

In this short 5-part series, I demonstrate 5 different ways to track such database schema changes. Thanks to this, you will always be able to easily answer such questions. Today’s post describes how to use the Default Trace.

Continue reading “5 ways to track Database Schema changes – Part 1 – Default Trace”

