5 ways to track Database Schema changes – Part 3 – Extended Events Session

Last week I published the second post in my short 5-part series in which I show you 5 different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when. Today’s post describes how to use an Extended Events Session for this purpose.

You can find my previous post here:

Extended Events Session

The third option is the Extended Events Session. This functionality was introduced in SQL Server 2008. It provides a very lightweight monitoring and event-tracking system that can be very helpful in performance troubleshooting and (in our case) change monitoring.

Microsoft announced it as the successor of SQL Trace (which was marked as deprecated in SQL Server 2012). That means SQL Trace can be removed in one of the future versions. However, SQL Trace and SQL Server Profiler are still very popular and widely used, so I really doubt that will happen anytime soon.

In the beginning, Extended Events didn’t gain much popularity because they were tough to use. Now, with the help of SSMS, it’s much more convenient.

To test this solution, we have to create an Extended Events Session that captures object modification events. In our case, these are:

  • object_created
  • object_altered
  • object_deleted
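A minimal test batch that fires all three events might look like the sketch below (the database and object names are placeholders; use whatever test objects you like):

-- Hypothetical test case: each statement fires one of the captured events.
USE TrackMyChanges;
GO
CREATE TABLE dbo.TestTable (Id int NOT NULL);      -- object_created
GO
ALTER TABLE dbo.TestTable ADD Name varchar(50);    -- object_altered
GO
DROP TABLE dbo.TestTable;                          -- object_deleted
GO
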
Creating an Extended Events Session

You can create such a session in two ways: using T-SQL or using a wizard in SQL Server Management Studio. I find the latter much more convenient.

Create a Session using Wizard

To create a new session, in Object Explorer expand Instance -> Management -> Extended Events. Right-click on Sessions and choose New Session Wizard.

Extended Events Wizard - Start Wizard

On the “Set Session Properties” page, provide a name for your session and decide whether this session should start at server startup. Then click the Next button.

Extended Events Wizard - Session Properties

On the next page, you can decide whether you want to use a predefined template. In our case, there is no template that we can use to track object schema changes, so choose the “Do not use template” option and click “Next”.

Extended Events Wizard - Template

On the “Select Events To Capture” page, select object_altered, object_created, and object_deleted events.

Extended Events Wizard - Events To Capture

It should look like this:

Extended Events Wizard - Events To Capture 2

On the “Capture Global Fields” page, you can decide what data you want to collect. My recommendation is to select the following ones:

  • client_app_name
  • client_hostname
  • database_id
  • database_name
  • server_principal_name
  • session_id
  • sql_text

This gives you an overview of what really happened. You know who performed a change, from which machine, and from which application. Most importantly, you also know what SQL statement was executed. When you have set this, click “Next”.

Extended Events Wizard - Capture Global Fields

On the “Set Session Event Filters” page, you can add additional filters. That enables you, for example, to capture events just for one database instead of all databases on your instance (see the T-SQL sketch below). Then click “Next”.

Extended Events Wizard - Session Event Filters
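For reference, the same kind of filter can be expressed in T-SQL as a predicate on the event definition. A minimal sketch, assuming we only want events from a database called TrackMyChanges (the session and database names are placeholders):

CREATE EVENT SESSION [CaptureObjectModifications_SingleDb] ON SERVER
ADD EVENT sqlserver.object_altered(
    ACTION(sqlserver.server_principal_name, sqlserver.sql_text)
    WHERE (sqlserver.database_name = N'TrackMyChanges'))
ADD TARGET package0.ring_buffer;
GO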

On the “Specify Session Data Storage” page, you need to decide whether you want to log data to file(s) (the event_file target) or keep it in memory (the ring_buffer target). For real-life usage, you should choose files. Here, for demo purposes, I use the ring_buffer.

Extended Events Wizard - Data Storage

When the session is created, you can start it immediately and also watch live data on the screen as events are captured.

Extended Events Wizard - Session Created

Create a Session using T-SQL

The same session can be created using this script.

CREATE EVENT SESSION [CaptureObjectModifications] ON SERVER 
ADD EVENT sqlserver.object_altered(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text)),
ADD EVENT sqlserver.object_created(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text)),
ADD EVENT sqlserver.object_deleted(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text))
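-- ring_buffer target options: max_events_limit = 0 means no limit on the number of kept events; max_memory is specified in KB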
ADD TARGET package0.ring_buffer(SET max_events_limit=0,max_memory=102400)
GO
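If you create the session with T-SQL, it is not started automatically. A short sketch for starting it (and stopping it when you no longer need it):

-- Start capturing events
ALTER EVENT SESSION [CaptureObjectModifications] ON SERVER STATE = START;
GO

-- Stop capturing events
ALTER EVENT SESSION [CaptureObjectModifications] ON SERVER STATE = STOP;
GO
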
Viewing events captured by the Extended Events Session

Once the session is created and started, it captures all object modifications. To see changes live, start the Watch Live Data view in SSMS.

Extended Events - Start Watch Live Data

You can right-click on the column names to add additional columns to this view.

Extended Events - Watch Live Data - Add Columns

Now, let’s test it by executing the whole test case from the beginning of the article. Captured events are automatically displayed in the window.

Extended Events - Watch Live Data

To review historical data from the ring_buffer target, you need to use a T-SQL query. When you log to files, you can also review their content in SSMS with the View Target Data option. Use the following query to select the events captured by our session.

;WITH raw_data(t) AS
(
    SELECT CONVERT(XML, target_data)
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
    ON s.[address] = st.event_session_address
    WHERE s.name = 'CaptureObjectModifications'
    AND st.target_name = 'ring_buffer'
),
xml_data (ed) AS
(
    SELECT e.query('.') 
    FROM raw_data 
    CROSS APPLY t.nodes('RingBufferTarget/event') AS x(e)
)
SELECT * --FROM xml_data;
FROM
(
  SELECT
    [timestamp]       = ed.value('(event/@timestamp)[1]', 'datetime'),
    [database_id]     = ed.value('(event/data[@name="database_id"]/value)[1]', 'int'),
    [database_name]   = ed.value('(event/action[@name="database_name"]/value)[1]', 'nvarchar(128)'),
    [object_type]     = ed.value('(event/data[@name="object_type"]/text)[1]', 'nvarchar(128)'),
    [object_id]       = ed.value('(event/data[@name="object_id"]/value)[1]', 'int'),
    [object_name]     = ed.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(128)'),
    [session_id]      = ed.value('(event/action[@name="session_id"]/value)[1]', 'int'),
    [login]           = ed.value('(event/action[@name="server_principal_name"]/value)[1]', 'nvarchar(128)'),
    [client_hostname] = ed.value('(event/action[@name="client_hostname"]/value)[1]', 'nvarchar(128)'),
    [client_app_name] = ed.value('(event/action[@name="client_app_name"]/value)[1]', 'nvarchar(128)'),
    [sql_text]        = ed.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)'),
    [phase]           = ed.value('(event/data[@name="ddl_phase"]/text)[1]',    'nvarchar(128)')
  FROM xml_data
) AS x
WHERE phase = 'Commit'
ORDER BY [timestamp];

Extended Events - TSQL
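If you chose the event_file target instead of the ring_buffer, you can query the captured events in a very similar way using sys.fn_xe_file_target_read_file. A minimal sketch, assuming the files live in the default log directory and their names start with the session name:

SELECT
    [timestamp]   = ed.value('(event/@timestamp)[1]', 'datetime'),
    [object_name] = ed.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(128)'),
    [login]       = ed.value('(event/action[@name="server_principal_name"]/value)[1]', 'nvarchar(128)'),
    [sql_text]    = ed.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)')
FROM
(
    -- each row returned by the function holds one event as XML text
    SELECT CONVERT(XML, event_data) AS ed
    FROM sys.fn_xe_file_target_read_file('CaptureObjectModifications*.xel', NULL, NULL, NULL)
) AS f
WHERE ed.value('(event/data[@name="ddl_phase"]/text)[1]', 'nvarchar(128)') = 'Commit'
ORDER BY [timestamp];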

What was captured by Extended Events Session?

In terms of what was captured, the Extended Events Session looks very good. It has a variety of configuration options that allow you to customize the logged details. Viewing the data collected by the session, we know what was changed, when it was changed, and by whom. We also have the SQL statement that was executed to perform the change.

Data retention

The Extended Events Session has many retention options for both targets. For files, we can specify the maximum file size and the number of rollover files. For the ring buffer, we can specify the maximum event count and memory size. That gives users a lot of flexibility.
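For example, an event_file target with retention limits could be defined like the sketch below (the file path, size, and file count are example values):

CREATE EVENT SESSION [CaptureObjectModifications_File] ON SERVER
ADD EVENT sqlserver.object_altered(
    ACTION(sqlserver.server_principal_name, sqlserver.sql_text))
ADD TARGET package0.event_file(
    SET filename = N'C:\Temp\CaptureObjectModifications.xel',
        max_file_size = (100),      -- maximum size of a single file in MB
        max_rollover_files = (5));  -- number of files kept before the oldest one is overwritten
GO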

Extended Events summary
Advantages:

  • Contains a lot of details
  • Info WHAT exactly was changed
  • Access to the executed SQL statement
  • Info WHEN the object was changed
  • Info WHO changed the object
  • Plenty of retention options
  • Possibility to save data to memory or files
  • Easy to set up
  • Possibility to set additional filters
Disadvantages:

  • Difficult processing of captured events (T-SQL and XQuery)
  • No access to the old object definition

In the next part, I will show you how to use DDL Triggers to capture database schema changes.

-Marek


5 ways to track Database Schema changes – Part 2 – Transaction Log

Last week I published the first post in my short 5-part series in which I show you 5 different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when. Today’s post describes how to use the Transaction Log for this purpose.

You can find my previous post here:

Transaction Log

Another solution that can be used to track changes executed against your database is to read the Transaction Log file or Transaction Log backups. The transaction log file (and its backups) contains a record of every transaction executed against your database. The only requirement is to have the database in the Full recovery model. In the Simple recovery model, the log records of committed transactions can be overwritten again very quickly.

Also, this is something that you get for free without the need to enable and configure any additional SQL Server functionality. Of course, besides the database backups, but you already do database backups, right?

To test this approach, you have to make some preparations. You need to set the database’s recovery model to Full. As a first step, check the database properties to ensure that it is appropriately configured. As a second step, create a full database backup. From this point, the database is in the Full recovery model, and every transaction is fully logged. Thanks to this, you are able to read logged transactions from the Transaction Log file. The same applies to reading from a Transaction Log backup; to do this, you need to create such a backup after you execute the database schema changes.
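For an existing database, a quick sketch for checking and (if needed) switching the recovery model could look like this (the database name is just an example):

-- Check the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'TrackMyChanges';

-- Switch to the Full recovery model if necessary
ALTER DATABASE TrackMyChanges SET RECOVERY FULL;
GO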

USE master;
GO

CREATE DATABASE TrackMyChanges;
GO

BACKUP DATABASE TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.bak';
GO

USE TrackMyChanges;
GO

-- Here execute all statements that create, modify, and drop objects.

USE master;
GO

-- Now you can check Transaction log file

BACKUP LOG TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.trn';
GO

-- Now you can check Transaction Log backup file

DROP DATABASE TrackMyChanges;
Reading Transaction Log file

To read the transaction log file, we can use the undocumented fn_dblog function. It accepts two parameters:

  • The first is a starting log sequence number (LSN) we want to read from. If you specify NULL, it returns everything from the start of the log.
  • The second is an ending log sequence number (LSN) we want to read to. If you specify NULL, it returns everything to the end of the log file.
SELECT * FROM fn_dblog(null, null);

This query returns a lot of data. Fortunately, we don’t need all the columns to check what happened to our objects. We can easily reduce the number of columns and rows to the relevant ones. Every object modification has to be part of a transaction, so as a first step, we can list only the rows with the LOP_BEGIN_XACT operation.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    [Transaction SID],
    [Xact ID],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log - Transaction Name

Based on the [Transaction Name], we can identify the transactions that changed the schema of our objects. At this point, we don’t know yet which object it was, but we can check who modified it. The [Transaction SID] column contains the SID of the login that was used to execute this operation. We can use the SUSER_SNAME() function to get its name. The [Transaction Name] column simply describes what kind of change was made.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log - Transaction SID

To continue, we need to decide which particular change we want to investigate further. Let’s take the second CREATE/ALTER FUNCTION transaction. We need to note down its Transaction ID. For me, it is 0000:0000039e.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null) 
WHERE [Transaction ID] = '0000:0000039e';

Transaction Log - One Transaction

Now, to discover what object was changed, we have to dig into the [Lock Information] column. The first LOP_LOCK_XACT operation describes a Schema Modification Lock on object ID = 965578478. In the lock resource below, 5 is the database ID and 965578478 is the object ID.

HoBt 0:ACQUIRE_LOCK_SCH_M OBJECT: 5:965578478:0

This is our function:

SELECT * FROM sys.objects WHERE object_id = 965578478;

Transaction Log - Function

OK. At this point, we know what object was changed, when, and by whom. However, this is the Transaction Log, and it should also contain detailed information about what exactly was changed. Can we get it? Oh yes, we can. To do so, run the following query.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description], 
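    -- The first 32 bytes of [RowLog Contents 0] are internal row data;
    -- the remainder is cast to text to reveal the object definition.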
    CAST(SUBSTRING([RowLog Contents 0],33,LEN([RowLog Contents 0])) AS varchar(8000)) AS [Definition]
FROM fn_dblog(null, null)
WHERE [Transaction ID] = '0000:0000039e'
AND [AllocUnitName] = 'sys.sysobjvalues.clst' 
ORDER BY [Current LSN] ASC;

Transaction Log - Change Details

As you can see, there are two rows. One contains the deleted old object definition, and the second one contains the inserted new value. That’s really cool! Thanks to this, we know exactly what was changed.

Reading Transaction Log backup

Reading changes directly from the Transaction Log is one approach, but you can also get the same information from Transaction Log backups. The only difference is that you must use fn_dump_dblog() instead of fn_dblog(). This function accepts 68 parameters (sic!). Fortunately, we have to provide only a few of them:

  • The first is a starting log sequence number (LSN) we want to read from. If you specify NULL, it returns everything from the start of the backup file.
  • The second is an ending log sequence number (LSN) we want to read to. If you specify NULL, it returns everything to the end of the backup file.
  • The third is a type of file (can be DISK or TAPE).
  • The fourth one is a backup number in the backup file.
  • The fifth is a path to the backup file.

What about the remaining 63 parameters? They need to be specified only if you use striped media sets with multiple disk files (64 at most). In such a case, you have to provide paths to the rest of the files. If you don’t use this feature, then you must provide DEFAULT values.

BACKUP LOG TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.trn';
GO

SELECT * FROM fn_dump_dblog(
    NULL, NULL, N'DISK', 1, N'C:\Temp\TrackMyChanges.trn',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT);

When you know what values to provide as parameters, you can easily use this function to get the same data as in the previous examples.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description]
FROM fn_dump_dblog(
        NULL, NULL, N'DISK', 1, N'C:\Temp\TrackMyChanges.trn',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log Backup - Transaction Name

What was captured by Transaction Log?

By its nature, the Transaction Log contains detailed information about every change. You can extract from it such information as what was changed (in detail), when it was modified, and by whom.

Data retention

Here, the story is straightforward. You have access to this information for as long as you keep the Transaction Log backups.

Transaction Log summary
Advantages:

  • Contains every detail
  • Info WHAT exactly was changed
  • Access to the old and new object definition
  • Info WHEN the object was changed
  • Info WHO changed the object
Disadvantages:

  • Requires the database to be in the FULL recovery model
  • Complicated and long process of data retrieval
  • In busy systems, it may be difficult to find the log entry we are looking for
  • Retention based on Transaction Log backup retention

In the next part, I will show you how to configure an Extended Events Session to capture database schema changes.

-Marek


5 ways to track Database Schema changes – Part 1 – Default Trace

In a perfect world, only Database Administrators have access to SQL Server databases. All database schema changes go through a strict Change Management Process where they need to be well described and approved by a Change Advisory Board. The database schema is stored in a source code repository, and the deployed version doesn’t drift unexpectedly from its original model.

Unfortunately, we don’t live in the perfect world…

Despite the version control system, the change management process, and limited access to the server, sometimes the database schema is changed without our knowledge and supervision. It may happen in a development environment where a bunch of people have access and the deployment process is not very strict (the common case). However, it may also happen in higher-level environments where only a limited number of people have access (the rare case, but not impossible).

Sooner or later, such unexpected schema changes start to be very problematic. They may break some functionality or create other issues (e.g., performance degradation). They may block the deployment of subsequent changes. They may simply be implemented in an inefficient way, or even just be plain stupid.

In such a case, various questions arise, and you as a DBA will have to answer some of them.

  • When was the XYZ object changed?
  • What modifications were applied to the XYZ object?
  • Who changed the XYZ object?

In this short 5-part series, I will show you 5 different ways to track such database schema changes. Thanks to this, you will always be able to easily answer such questions. Today’s post describes how to use the Default Trace.

Continue reading “5 ways to track Database Schema changes – Part 1 – Default Trace”
