5 ways to track Database Schema changes – Part 4 – DDL Trigger

This is the fourth post in my short 5-part series in which I show you 5 different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when. Today, I describe how to use a DDL Trigger for this purpose.

Here you can find my previous posts from this series:

DDL Triggers

The fourth option is a DDL Trigger. These triggers fire in response to Data Definition Language (DDL) events. They can be created at the database or server scope. Database-scoped DDL triggers are stored as objects in the database in which they are created, while server-scoped DDL triggers are stored in the master database. A trigger can be created to fire in response to a single specific event or to any event from a predefined event group.
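
For illustration, a server-scoped trigger follows exactly the same pattern. A minimal sketch (the trigger name and message are just placeholders):

CREATE TRIGGER [tr_ServerScopeExample]
ON ALL SERVER
FOR CREATE_DATABASE, DROP_DATABASE
AS
BEGIN
    -- EVENTDATA() works here exactly as in a database-scoped trigger
    PRINT 'A database was just created or dropped on this server.';
END
GO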

In our case, we are interested in capturing all events from the DDL_DATABASE_LEVEL_EVENTS group.

To get information about the event that fired our DDL trigger, we use the EVENTDATA() function. It returns an XML value that contains details about the event that triggered the execution of the trigger.

Create DDL Trigger

To log object changes, we first have to create a table in which we will save the captured events. The code snippet below shows the simplest possible table structure that can be used for this purpose.

CREATE TABLE [dbo].[DatabaseLogs](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [DateTime] [datetime] NOT NULL 
    CONSTRAINT DF_DatabaseLogs_DateTime DEFAULT (GETDATE()),
    [EventData] [xml] NULL,
    CONSTRAINT [PK_DatabaseLogs] PRIMARY KEY CLUSTERED ( [Id] ASC ) 
)
GO

CREATE NONCLUSTERED INDEX nix_DatabaseLogs ON [dbo].[DatabaseLogs] ([DateTime] ASC) INCLUDE ([Id]);
GO

The trigger can be created using this query.

CREATE TRIGGER [tr_DatabaseLogs]
ON DATABASE 
FOR DDL_DATABASE_LEVEL_EVENTS 
AS
BEGIN
    SET NOCOUNT ON;

    IF OBJECT_ID('dbo.DatabaseLogs') IS NOT NULL
    BEGIN

        BEGIN TRY
            DECLARE @Eventdata XML;
            SET @Eventdata = EVENTDATA();

            INSERT dbo.DatabaseLogs (
              [DateTime]
            , [EventData]
            )
            VALUES (
              GETUTCDATE()
            , @Eventdata
            );
        END TRY

        BEGIN CATCH
            SET @Eventdata = NULL;
        END CATCH
    END
END
GO

Pay attention to the additional checks that handle the cases where the table is dropped or the insert statement fails. It is critical to keep such a DDL trigger transparent to users and applications, so that their work is not impacted if something goes wrong. That is why the CATCH block swallows any insert error instead of letting it block the original DDL statement.

DDL Triggers - SSMS

Viewing logged events

Execute this simple SELECT statement to view the log of captured object modifications.

SELECT * FROM dbo.DatabaseLogs ORDER BY [DateTime];

DDL Triggers - Result

Here you have an example of one of the inserted XML documents.

<EVENT_INSTANCE>
    <EventType>CREATE_PROCEDURE</EventType>
    <PostTime>2018-11-17T17:52:51.700</PostTime>
    <SPID>54</SPID>
    <ServerName>MAREK-PC\SS2017</ServerName>
    <LoginName>Marek-PC\Marek</LoginName>
    <UserName>dbo</UserName>
    <DatabaseName>TrackMyChanges</DatabaseName>
    <SchemaName>dbo</SchemaName>
    <ObjectName>usp_NewProc</ObjectName>
    <ObjectType>PROCEDURE</ObjectType>
    <TSQLCommand>
        <SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE" />
        <CommandText>CREATE PROCEDURE dbo.usp_NewProc
AS
BEGIN
SELECT 'Version 1';
END
        </CommandText>
    </TSQLCommand>
</EVENT_INSTANCE>
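
Before extending the trigger, note that you can already query such details ad hoc with the XQuery value() method against the stored XML. A minimal sketch using the simple logging table above:

SELECT [Id],
       [DateTime],
       [EventData].value('(/EVENT_INSTANCE/EventType)[1]',  'nvarchar(128)') AS [EventType],
       [EventData].value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)') AS [ObjectName],
       [EventData].value('(/EVENT_INSTANCE/LoginName)[1]',  'nvarchar(128)') AS [LoginName]
FROM dbo.DatabaseLogs
ORDER BY [DateTime];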

Of course, it is possible to extract this detailed information in the trigger itself and insert it into additional columns in the logging table.

Extended DDL Trigger

As a first step, we have to extend our logging table.

CREATE TABLE [dbo].[DatabaseLogs](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [DateTime] [datetime] NOT NULL CONSTRAINT DF_DatabaseLogs_DateTime DEFAULT (GETDATE()),
    [ServerName] [nvarchar](128) NULL,
    [ServiceName] [nvarchar](128) NULL,
    [SPID] [int] NULL,
    [SourceHostName] [nvarchar](128) NULL,
    [LoginName] [nvarchar](128) NULL,
    [UserName] [nvarchar](128) NULL,
    [SchemaName] [nvarchar](128) NULL,
    [ObjectName] [nvarchar](128) NULL,
    [TargetObjectName] [nvarchar](128) NULL,
    [EventType] [nvarchar](128) NULL,
    [ObjectType] [nvarchar](128) NULL,
    [TargetObjectType] [nvarchar](128) NULL,
    [EventData] [xml] NULL,
    CONSTRAINT [PK_DatabaseLogs] PRIMARY KEY CLUSTERED ( [Id] ASC ) 
)
GO

CREATE NONCLUSTERED INDEX nix_DatabaseLogs ON [dbo].[DatabaseLogs] ([DateTime] ASC)  INCLUDE ([Id]);
GO

Such a table structure should be sufficient for most needs. As the next step, we must implement the XML parsing logic in our DDL trigger. To do this, we use the SQL Server XQuery methods.

CREATE TRIGGER [tr_DatabaseLogs]
ON DATABASE 
FOR DDL_DATABASE_LEVEL_EVENTS 
AS
BEGIN
    SET NOCOUNT ON;

    IF OBJECT_ID('dbo.DatabaseLogs') IS NOT NULL
    BEGIN
        BEGIN TRY
            DECLARE @Eventdata XML;
            SET @Eventdata = EVENTDATA();

            INSERT dbo.DatabaseLogs (
              [DateTime]
            , [ServerName]
            , [ServiceName]
            , [SPID]
            , [SourceHostName]
            , [LoginName]
            , [UserName]
            , [SchemaName]
            , [ObjectName]
            , [TargetObjectName]
            , [EventType]
            , [ObjectType]
            , [TargetObjectType]
            , [EventData]
            )
            VALUES (
              GETUTCDATE()
            , @@SERVERNAME
            , @@SERVICENAME
            , @Eventdata.value('(/EVENT_INSTANCE/SPID)[1]', 'int')
            , HOST_NAME()
            , @Eventdata.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/UserName)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/TargetObjectName)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/ObjectType)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/TargetObjectType)[1]', 'nvarchar(128)')
            , @Eventdata
            );
        END TRY

        BEGIN CATCH
            SET @Eventdata = NULL;
        END CATCH
    END
END
GO

Thanks to this solution, we can easily filter the logged events without having to write complicated ad-hoc XQuery queries.
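
For example, to list all procedures created during the last 7 days (a sketch; adjust the predicates to your needs):

SELECT [DateTime], [LoginName], [SchemaName], [ObjectName]
FROM dbo.DatabaseLogs
WHERE [EventType] = N'CREATE_PROCEDURE'
  AND [DateTime] >= DATEADD(DAY, -7, GETUTCDATE())
ORDER BY [DateTime];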

What was captured by DDL Trigger?

DDL Triggers capture information very similar to what an Extended Events session collects. We know what object was changed, when it was changed, and by whom. We also have access to the SQL query that was executed.

Data retention

In this case, there is no built-in mechanism that cleans up old data. If you need one, you have to implement it yourself. Otherwise, the logging table will contain all events captured since the DDL trigger was created. That, of course, has its advantages and disadvantages.
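
A minimal cleanup sketch, assuming a 90-day retention period (you could schedule it, for example, as a SQL Server Agent job):

DELETE FROM dbo.DatabaseLogs
WHERE [DateTime] < DATEADD(DAY, -90, GETUTCDATE());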

DDL Triggers summary
Advantages:

  • Contains a lot of details
  • Info on WHAT exactly was changed
  • Access to the executed SQL statement
  • Info on WHEN the object was changed
  • Info on WHO changed the object
  • Data logged to a user table
  • Easy to set up
  • Possibility to set additional filters
  • Easy viewing
Disadvantages:

  • No default data retention options
  • No access to the old object definition

In the next part, I will show you how to use SQL Server Audit to capture database schema changes.

-Marek


5 ways to track Database Schema changes – Part 3 – Extended Events Session

Last week I published the second post in my short 5-part series in which I show you 5 different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when. Today’s post describes how to use an Extended Events session for this purpose.

You can find my previous posts here:

Extended Event Session

The third option is an Extended Events session. This functionality was introduced in SQL Server 2008. It provides a very lightweight monitoring and event-tracking system that can be very helpful in performance troubleshooting and (in our case) change monitoring.

Microsoft announced it as the successor to SQL Trace (which was marked as deprecated in SQL Server 2012). That means SQL Trace may be removed in one of the future versions. However, SQL Trace and SQL Server Profiler are still very popular and widely used, so I really doubt that will happen anytime soon.

In the beginning, Extended Events didn’t gain much popularity because they were tough to use. Now, with the help of SSMS, working with them is much more comfortable.

To test this solution, we have to create an Extended Events session that captures the object modification events. In our case, these are:

  • object_created
  • object_altered
  • object_deleted
Creating an Extended Events Session

You can create such a session in two ways: using T-SQL or using the wizard in SQL Server Management Studio. I find the latter much more comfortable.

Create a Session using Wizard

To create a new session, in Object Explorer expand Instance -> Management -> Extended Events. Right-click Sessions and choose New Session Wizard.

Extended Events Wizard - Start Wizard

On the “Set Session Properties” page, provide a name for your session and decide whether the session should start at server startup. Then click the Next button.

Extended Events Wizard - Session Properties

On the next page, you can decide whether you want to use a predefined template. In our case, there is no template that tracks object schema changes, so choose the “Do not use template” option and click “Next”.

Extended Events Wizard - Template

On the “Select Events To Capture” page, select object_altered, object_created, and object_deleted events.

Extended Events Wizard - Events To Capture

It should look like this:

Extended Events Wizard - Events To Capture 2

On the “Capture Global Fields” page, you can decide what data you want to collect. My recommendation is to select the following fields:

  • client_app_name
  • client_hostname
  • database_id
  • database_name
  • server_principal_name
  • session_id
  • sql_text

This gives you an overview of what really happened. You know who performed a change, from which machine, and from which application. Most importantly, you also know what SQL statement was executed. When you have set this, click “Next”.

Extended Events Wizard - Capture Global Fields

On the “Set Session Event Filters” page, you can add additional filters. This enables you, for example, to capture events just for one database instead of all databases on your instance (see the sketch below). Then click “Next”.

Extended Events Wizard - Session Event Filters
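
If you script the session instead, such a filter becomes a predicate on the individual events. A minimal one-event sketch, assuming we only care about the TrackMyChanges demo database:

CREATE EVENT SESSION [CaptureObjectModifications_Filtered] ON SERVER
ADD EVENT sqlserver.object_created(
    ACTION(sqlserver.server_principal_name, sqlserver.sql_text)
    -- capture events only for one database
    WHERE (sqlserver.database_name = N'TrackMyChanges'))
ADD TARGET package0.ring_buffer;
GO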

On the “Specify Session Data Storage” page, you need to decide whether you want to log data to files (the event_file target) or keep it in memory (the ring_buffer target). For real-life usage, you should choose files. Here, for demo purposes, I use the ring_buffer.

Extended Events Wizard - Data Storage

Once the session is created, you can start it immediately and watch live data on the screen as events are captured.

Extended Events Wizard - Session Created

Create a Session using T-SQL

The same session can be created using this script.

CREATE EVENT SESSION [CaptureObjectModifications] ON SERVER 
ADD EVENT sqlserver.object_altered(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text)),
ADD EVENT sqlserver.object_created(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text)),
ADD EVENT sqlserver.object_deleted(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.ring_buffer(SET max_events_limit=0,max_memory=102400)
GO
Viewing events captured by the Extended Events Session

Once the session is created and started, it captures all object modifications. To see the changes live, open the Watch Live Data view in SSMS.

Extended Events - Start Watch Live Data

You can right-click the column names to add additional columns to this view.

Extended Events - Watch Live Data - Add Columns

Now, let’s test it by executing the whole test case from the beginning of the article. Captured events are automatically displayed in the window.

Extended Events - Watch Live Data

To review historical data from the ring_buffer target, you need to use a T-SQL query. When you log to files, you can also review their content in SSMS with the View Target Data option. Use this query to select the captured events for our session.

;WITH raw_data(t) AS
(
    SELECT CONVERT(XML, target_data)
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
    ON s.[address] = st.event_session_address
    WHERE s.name = 'CaptureObjectModifications'
    AND st.target_name = 'ring_buffer'
),
xml_data (ed) AS
(
    SELECT e.query('.') 
    FROM raw_data 
    CROSS APPLY t.nodes('RingBufferTarget/event') AS x(e)
)
SELECT * --FROM xml_data;
FROM
(
  SELECT
    [timestamp]       = ed.value('(event/@timestamp)[1]', 'datetime'),
    [database_id]     = ed.value('(event/data[@name="database_id"]/value)[1]', 'int'),
    [database_name]   = ed.value('(event/action[@name="database_name"]/value)[1]', 'nvarchar(128)'),
    [object_type]     = ed.value('(event/data[@name="object_type"]/text)[1]', 'nvarchar(128)'),
    [object_id]       = ed.value('(event/data[@name="object_id"]/value)[1]', 'int'),
    [object_name]     = ed.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(128)'),
    [session_id]      = ed.value('(event/action[@name="session_id"]/value)[1]', 'int'),
    [login]           = ed.value('(event/action[@name="server_principal_name"]/value)[1]', 'nvarchar(128)'),
    [client_hostname] = ed.value('(event/action[@name="client_hostname"]/value)[1]', 'nvarchar(128)'),
    [client_app_name] = ed.value('(event/action[@name="client_app_name"]/value)[1]', 'nvarchar(128)'),
    [sql_text]        = ed.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)'),
    [phase]           = ed.value('(event/data[@name="ddl_phase"]/text)[1]',    'nvarchar(128)')
  FROM xml_data
) AS x
WHERE phase = 'Commit'
ORDER BY [timestamp];

Extended Events - TSQL

What was captured by Extended Events Session?

In terms of what is captured, the Extended Events session does very well. It has a variety of configuration options that allow you to customize the logged details. Viewing the data collected by the session, we know what was changed, when it was changed, and by whom. We also have the SQL statement that was executed to perform the change.

Data retention

The Extended Events session has many retention options for both targets. For files, we can specify the maximum file size and the number of rollover files. For the ring buffer, we can specify the maximum event count and memory size. That gives users a lot of flexibility.
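
For example, a file-based variant of our session could cap the log at four 50 MB rollover files. A sketch (the path and limits are example values only):

CREATE EVENT SESSION [CaptureObjectModifications_File] ON SERVER
ADD EVENT sqlserver.object_altered(
    ACTION(sqlserver.server_principal_name, sqlserver.sql_text))
ADD TARGET package0.event_file(
    SET filename = N'C:\Temp\CaptureObjectModifications.xel',
        max_file_size = (50),        -- maximum size of a single file, in MB
        max_rollover_files = (4));   -- the oldest file is deleted when a new one is created
GO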

Extended Events summary
Advantages:

  • Contains a lot of details
  • Info on WHAT exactly was changed
  • Access to the executed SQL statement
  • Info on WHEN the object was changed
  • Info on WHO changed the object
  • Plenty of retention options
  • Possibility to save data to memory or files
  • Easy to set up
  • Possibility to set additional filters
Disadvantages:

  • Difficult processing of captured events (T-SQL and XQuery)
  • No access to the old object definition

In the next part, I will show you how to use DDL Triggers to capture database schema changes.

-Marek


5 ways to track Database Schema changes – Part 2 – Transaction Log

Last week I published the first post in my short 5-part series in which I show you 5 different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when. Today’s post describes how to use the Transaction Log for this purpose.

You can find my previous post here:

Transaction Log

Another solution that can be used to track changes executed against your database is to read the Transaction Log file or Transaction Log backups. The transaction log file (and its backups) contains every transaction executed against your database. The only requirement is to have the database in the Full recovery model. In the Simple recovery model, every committed transaction can very quickly be overwritten by another one.

Also, this is something that you get for free, without the need to enable and configure any additional SQL Server functionality. Well, besides the database backups, but you already take database backups, right?

To test this approach, you have to make some preparations. You need to set the database’s recovery model to Full. As a first step, check the database properties to ensure that it is appropriately configured. As a second step, create a full database backup. From this point on, the database is in the Full recovery model, and every transaction is fully logged. Thanks to this, you are able to read the logged transactions from the Transaction Log file. The same applies to reading from a Transaction Log backup; to do that, you need to take such a backup after you execute the database schema changes.
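
For an existing database, you can verify and, if needed, change the recovery model like this (a sketch using the demo database name):

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'TrackMyChanges';

ALTER DATABASE TrackMyChanges SET RECOVERY FULL;
GO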

USE master;
GO

CREATE DATABASE TrackMyChanges;
GO

BACKUP DATABASE TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.bak';
GO

USE TrackMyChanges;
GO

-- Here execute all statements that create, modify, and drop objects.

USE master;
GO

-- Now you can check Transaction log file

BACKUP LOG TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.trn';
GO

-- Now you can check Transaction Log backup file

DROP DATABASE TrackMyChanges;
Reading Transaction Log file

To read the transaction log file, we can use the undocumented fn_dblog function. It accepts two parameters:

  • The first is a starting log sequence number (LSN) we want to read from. If you specify NULL, it returns everything from the start of the log.
  • The second is an ending log sequence number (LSN) we want to read to. If you specify NULL, it returns everything to the end of the log file.
SELECT * FROM fn_dblog(null, null);

This query returns a lot of data. Fortunately, we don’t need all the columns to check what happened to our objects, and we can easily reduce the number of columns and rows to the relevant ones. Each object modification has to be part of a transaction. As a first step, we can list only the rows with the LOP_BEGIN_XACT operation.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    [Transaction SID],
    [Xact ID],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log - Transaction Name

Based on the [Transaction Name], we can identify transactions that changed the schema of our objects. At this point, we don’t know yet which object it was, but we can check who modified it. The [Transaction SID] column contains the SID of the login that was used to execute the operation; we can use the SUSER_SNAME() function to get its name. The [Transaction Name] column simply describes what kind of change it was.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log - Transaction SID

To continue, we need to decide which particular change we want to investigate further. Let’s take the second CREATE/ALTER FUNCTION transaction. We need to note down its Transaction ID. For me, it is 0000:0000039e.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null) 
WHERE [Transaction ID] = '0000:0000039e';

Transaction Log - One Transaction

Now, to discover which object was changed, we have to dig into the [Lock Information] column. The first LOP_LOCK_XACT operation describes a Schema Modification lock on object ID = 965578478 (in the entry below, 5 is the database ID and 965578478 is the object ID).

HoBt 0:ACQUIRE_LOCK_SCH_M OBJECT: 5:965578478:0

This is our function:

SELECT * FROM sys.objects WHERE object_id = 965578478;

Transaction Log - Function

OK. At this point, we know what object was changed, when, and by whom. However, this is the Transaction Log, and it should also contain detailed information about what exactly was changed. Can we get it? Oh yes, we can. To do so, run the following query.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description], 
    CAST(SUBSTRING([RowLog Contents 0],33,LEN([RowLog Contents 0])) AS varchar(8000)) AS [Definition]
FROM fn_dblog(null, null)
WHERE [Transaction ID] = '0000:0000039e'
AND [AllocUnitName] = 'sys.sysobjvalues.clst' 
ORDER BY [Current LSN] ASC;

Transaction Log - Change Details

As you can see, there are two rows. One describes the deleted old object definition, and the second represents the inserted new value. That’s really cool! Thanks to this, we know exactly what was changed.

Reading Transaction Log backup

Reading changes directly from the Transaction Log file is one approach, but you can also get the same information from Transaction Log backups. The only difference is that you must use fn_dump_dblog() instead of fn_dblog(). This function accepts 68 parameters (sic!). Fortunately, we have to provide only a few of them.

  • The first is a starting log sequence number (LSN) we want to read from. If you specify NULL, it returns everything from the start of the backup file.
  • The second is an ending log sequence number (LSN) we want to read to. If you specify NULL, it returns everything to the end of the backup file.
  • The third is a type of file (can be DISK or TAPE).
  • The fourth one is a backup number in the backup file.
  • The fifth is a path to the backup file.

What about the remaining 63 parameters? They need to be specified only if you use striped media sets with multiple disk files (64 at most). In such a case, you have to provide the paths to the rest of the files. If you don’t use this feature, you simply provide DEFAULT values.

BACKUP LOG TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.trn';
GO

SELECT * FROM fn_dump_dblog(
    NULL, NULL, N'DISK', 1, N'C:\Temp\TrackMyChanges.trn',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT);

When you know what values to provide as parameters, you can easily use this function to get the same data as in the previous examples.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description]
FROM fn_dump_dblog(
        NULL, NULL, N'DISK', 1, N'C:\Temp\TrackMyChanges.trn',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log Backup - Transaction Name

What was captured by Transaction Log?

By its nature, the Transaction Log contains detailed information about every change. You can extract from it information such as what was changed (in detail), when it was modified, and by whom.

Data retention

Here, the story is straightforward. You have access to this information for as long as you store the Transaction Log backups.

Transaction Log summary
Advantages:

  • Contains every detail
  • Info on WHAT exactly was changed
  • Access to the old and new object definition
  • Info on WHEN the object was changed
  • Info on WHO made the change
Disadvantages:

  • Requires the database to be in the FULL recovery model
  • A complicated and long process of data retrieval
  • In busy systems, it may be difficult to find the log entry we are looking for
  • Retention based on Transaction Log backup retention

In the next part, I will show you how to configure an Extended Events session to capture database schema changes.

-Marek


5 ways to track Database Schema changes – Part 1 – Default Trace

In a perfect world, only Database Administrators have access to SQL Server databases. All database schema changes go through a strict Change Management process, where they need to be well described and approved by a Change Advisory Board. The database schema is stored in a source code repository, and the deployed version doesn’t drift unexpectedly from its original model.

Unfortunately, we don’t live in a perfect world…

Despite the version control system, the change management process, and limited access to the server, sometimes the database schema is changed without our knowledge and supervision. It may happen in a development environment where a bunch of people have access and the deployment process is not very strict (the common case). However, it may also happen in higher-level environments where only a limited number of people have access (the rare case, but not impossible).

Sooner or later, such unexpected schema changes become very problematic. They may break some functionality or create other issues (e.g., performance degradation). They may block the deployment of subsequent changes. They may simply be implemented in an inefficient way, or even just be stupid.

In such a case, various questions arise, and you as a DBA will have to answer some of them.

  • When was the XYZ object changed?
  • What modifications were applied to the XYZ object?
  • Who changed the XYZ object?

In this short 5-part series, I will show you 5 different ways to track such database schema changes. Thanks to this, you will always be able to easily answer such questions. Today’s post describes how to use the Default Trace.

Continue reading “5 ways to track Database Schema changes – Part 1 – Default Trace”


Step by step installation of SQL Server 2019 (vNext) on Windows

On Monday, September 24, during the 2018 Ignite conference, Microsoft announced the public preview of SQL Server 2019 (vNext). The CTP 2.0 (Community Technology Preview) version is now accessible to everyone. In this blog post, I provide a step-by-step guide (with screenshots) to installing SQL Server 2019 on Windows. If you are a novice SQL Server user or have not had the opportunity to install a SQL Server instance before, you might find this post very helpful. Continue reading “Step by step installation of SQL Server 2019 (vNext) on Windows”


The second birthday of my blog!

That is unbelievable! Exactly two years ago, I published my first post on this blog (OK, to be 100% correct, it was the second one, but we can all agree that the “Hello World” post doesn’t count). At that time, I had no idea how it would develop, but I knew one thing: I wanted to share my knowledge with others. I wanted to learn new things and then pass them on to other Database Specialists curious and hungry for knowledge! Continue reading “The second birthday of my blog!”


QuickQuestion: How to uninstall a SQL Server feature?

QuickQuestion is a series of short posts in which I answer database-related questions asked by my colleagues, friends, and co-workers, mainly application developers.

Today’s question:

How to uninstall a SQL Server feature?

Continue reading “QuickQuestion: How to uninstall a SQL Server feature?”


SQL Server Agent Job waiting for a worker thread

Recently, one of my teammates experienced quite an interesting issue. He was deploying new PowerShell maintenance SQL Server Agent jobs on a new SQL Server instance. During the final test run, he noticed that some of the jobs were executing fine while one of them was waiting for a worker thread. In this blog post, I will describe what to do if you encounter a similar issue.

Continue reading “SQL Server Agent Job waiting for a worker thread”


Restore of database failed! What now?

Restoring a database from a backup file can sometimes be very tricky, especially when you don’t know on what server (what environment or what SQL Server version) the backup was taken. Sometimes you, as a DBA, are just asked to restore a database from a given backup on a particular server. You have a backup file, you do everything as always, but for some reason the restore operation fails.

Restore of database failed

In this blog post, I describe the reason behind the error below.

SSMS GUI error

Error message:

TITLE: Microsoft SQL Server Management Studio
------------------------------

Restore of database 'AdventureWorks2017' failed. (Microsoft.SqlServer.Management.RelationalEngineTasks)

------------------------------
ADDITIONAL INFORMATION:

System.Data.SqlClient.SqlError: The database was backed up on a server running version 14.00.1000. That version is incompatible with this server, which is running version 13.00.5026. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server. (Microsoft.SqlServer.SmoExtended)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=14.0.17254.0+((SSMS_Rel_17_4).180502-0908)&LinkId=20476

------------------------------
BUTTONS:

OK
------------------------------

So… what can we do in such a case?

Let’s ask for help 🙂

First of all, as you may have noticed, in the bottom-left corner there is a help button. Did you try to use it? I can bet you didn’t. Let’s see how Microsoft tries to help us in our case.

SSMS GUI error - get help 01

As you can see, the help for the first error is not available; this option in the menu is grayed out. However, help for the second, more detailed error seems to be available. That’s quite promising, isn’t it? When we click it, we get a new dialog with a notification that some data will be sent to Microsoft, and we need to agree to that if we want to see the help for our problem.

SSMS GUI error - get help 02

Product Name, Product Version, and LinkId… I think I’m not afraid to share this data if it is supposed to give me a solution to my problem. So what do I get after clicking the [Yes] button? I get nothing… A new webpage opens in my browser, and the only thing I get is an advertisement to buy a new Surface Pro… I’m not kidding…

SSMS GUI error - surface

An additional funny thing is that Microsoft collects data about the SSMS version I use: 14.0.17254.0+((SSMS_Rel_17_4).180502-0908), but why do they describe it as release 17.4 while I use 17.7?

SSMS version

Ok. Now we know that MSFT will not help us in this case.

Let’s try using T-SQL

We’re not able to restore the database using the SSMS GUI, so maybe it will work using T-SQL? Let’s give it a try:

USE [master]
RESTORE DATABASE [AdventureWorks2017] 
FROM DISK = N'C:\iso\DB - AdventureWorks\AdventureWorks2017.bak' WITH FILE = 1, 
MOVE N'AdventureWorks2017' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.SS2016\MSSQL\DATA\AdventureWorks2017.mdf', 
MOVE N'AdventureWorks2017_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.SS2016\MSSQL\DATA\AdventureWorks2017_log.ldf', 
NOUNLOAD, STATS = 5
GO

No, it doesn’t work either.

Msg 3169, Level 16, State 1, Line 2
The database was backed up on a server running version 14.00.1000. That version is incompatible with this server, which is running version 13.00.5026. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.
Msg 3013, Level 16, State 1, Line 2
RESTORE DATABASE is terminating abnormally.
That version is incompatible with this server

So what does it mean? It simply means that the database backup was taken on a newer SQL Server version than the version of the SQL Server on which you’re trying to restore it. Unfortunately, such an operation is not supported. SQL Server is backward compatible: you are always able to restore a database from a backup taken on an older version to a newer one, but not vice versa.

If you want to decode the build numbers from the error message, you can use this simple cheat sheet:

Build number    SQL Server version
14.0            SQL Server 2017
13.0            SQL Server 2016
12.0            SQL Server 2014
11.0            SQL Server 2012
10.50           SQL Server 2008 R2
10.0            SQL Server 2008
9.0             SQL Server 2005
8.0             SQL Server 2000
7.0             SQL Server 7.0

You can find many more details about SQL Server builds on this page: https://sqlserverbuilds.blogspot.com/. I recommend adding it to the bookmarks in your favorite browser. It’s invaluable when you need to quickly check a SQL Server version or find the latest Service Pack or Cumulative Update.
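
You can also check the version of a running instance directly with the SERVERPROPERTY() function, for example:

SELECT SERVERPROPERTY('ProductVersion') AS [ProductVersion],  -- e.g. 13.0.5026.0
       SERVERPROPERTY('ProductLevel')   AS [ProductLevel],    -- e.g. RTM, SP2
       SERVERPROPERTY('Edition')        AS [Edition];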

Now, armed with this knowledge, you know that this database backup file was created on SQL Server 2017. That is the reason why it cannot be restored on SQL Server 2016.

On what SQL Server version was this backup created?

You don’t have to attempt a database restore from a backup file just to check on what version it was created. You can safely verify it using the simple RESTORE HEADERONLY command.

RESTORE HEADERONLY FROM DISK = 'C:\iso\DB - AdventureWorks\AdventureWorks2017.bak';
GO

In the result set, you will find such information as:

  • Backup Name and Description
  • Who created it and on what Server (Login Name, Server Name, and version)
  • Database Name
  • Creation Date (Start and Finish)
  • and much more…

SQL Server RESTORE HEADERONLY

What to do when we cannot restore a database from a backup?

You already know that you will not be able to restore your database on the SQL Server instance you need. What can you do in such a situation? The solution is simple: you need to use a different database migration method. Here is a short list of a few possibilities:

Option 1

In the case of very small databases, you can use SSMS to generate a SQL script that includes the schema and data (INSERT statements). In the next step, you can use this script to create a new database on the target server.

Option 2

For bigger databases, you can generate a SQL script with the schema only and then use it to create an empty database on the target server. In the second step, you can use the Import and Export Wizard or the BCP command to migrate the data from one database to the other.

Option 3

You can also use the Export Data-Tier Application functionality to generate a BACPAC file consisting of the database schema and data. On the target server, you can use the Import Data-Tier Application functionality to create the new database from this file.

Do not confuse DACPAC with BACPAC. The former includes only the database schema, while the latter includes both the database schema and the data.

Option 4

Another possibility is to use the Copy Database Wizard with the SMO transfer method.

Option 5

The last solution is to use one of the available third-party tools that offer Data Compare functionality.

Thanks for reading!

-Marek
