Foreign Keys and Index Usage Stats

Last week I was speaking at the SQLDay 2019 conference in Wroclaw, Poland. My session was about things you should focus on when working with Legacy Databases. One of the topics I discussed concerned collecting and aggregating database usage statistics (mainly for indexes and stored procedures).

After the session, one of the attendees came to me and asked whether I knew that actions which trigger Foreign Key checks, and use indexes underneath, don't update the index usage stats DMV. I was very surprised because, in my opinion, such behavior would be a huge SQL Server defect. So I decided to check it out…

Index usage stats

SQL Server provides a very useful dynamic management view that describes database index usage statistics: sys.dm_db_index_usage_stats. It returns counts of the different types of index operations (seek, scan, lookup, and update) that were performed since the last SQL Server restart. We will use it to perform our test.

Test case

To perform our test, we will create a new database with two tables. One of them will have a Foreign Key constraint referencing the other. It could also be done with a single table, but in my opinion, an example with two tables is simpler.

CREATE DATABASE [CheckIndexUsage];
GO

USE [CheckIndexUsage];
GO

CREATE TABLE dbo.Parent(
    [ParentId] int IDENTITY NOT NULL,
    [Name] varchar(50) NOT NULL,
    CONSTRAINT pk_Parent PRIMARY KEY CLUSTERED (ParentId)
);

CREATE TABLE dbo.Child(
    [ChildId] int IDENTITY NOT NULL,
    [ParentId] int NOT NULL,
    [Name] varchar(50) NOT NULL,
    CONSTRAINT pk_Child PRIMARY KEY CLUSTERED (ChildId),
    CONSTRAINT fk_ParentId FOREIGN KEY (ParentId) REFERENCES dbo.Parent (ParentId)
);

As a second step, let's add the first row to the dbo.Parent table and check the current index usage statistics for this database.

INSERT INTO dbo.Parent ([Name])
VALUES ('Ned Stark');

SELECT OBJECT_NAME(ius.object_id) AS [TableName], i.[Name] AS [IndexName], ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates 
FROM sys.dm_db_index_usage_stats ius
JOIN sys.indexes i
ON ius.object_id = i.object_id
AND ius.index_id = i.index_id
WHERE ius.database_id = DB_ID();

We can see that only one Update operation was executed on the pk_Parent index. Good, this is what I expected. Now let's add a new row to the dbo.Child table. This will trigger a Foreign Key check, and we will see whether the underlying index usage is reflected in the index usage stats DMV.

INSERT INTO dbo.Child ([ParentId],[Name])
VALUES (1, 'Arya Stark');

In the Query Execution Plan, we can see a Clustered Index Seek on the pk_Parent index.

Execution Plan

Let’s check if it is reflected in our DMV.

SELECT OBJECT_NAME(ius.object_id) AS [TableName], i.[Name] AS [IndexName], ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates 
FROM sys.dm_db_index_usage_stats ius
JOIN sys.indexes i
ON ius.object_id = i.object_id
AND ius.index_id = i.index_id
WHERE ius.database_id = DB_ID();

Index usage stats after the successful insert

Hurray! It's here. To be 100% sure that it always works, let's make another insert. This one will fail.

INSERT INTO dbo.Child ([ParentId],[Name])
VALUES (2, 'Jon Snow');
Msg 547, Level 16, State 0, Line 44
The INSERT statement conflicted with the FOREIGN KEY constraint "fk_ParentId". The conflict occurred in database "CheckIndexUsage", table "dbo.Parent", column 'ParentId'.
The statement has been terminated.

How do our statistics look now?

SELECT OBJECT_NAME(ius.object_id) AS [TableName], i.[Name] AS [IndexName], ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates 
FROM sys.dm_db_index_usage_stats ius
JOIN sys.indexes i
ON ius.object_id = i.object_id
AND ius.index_id = i.index_id
WHERE ius.database_id = DB_ID();

Index usage stats after the failed insert

Everything is good – the Foreign Key check was again recorded as a seek on pk_Parent, even though the insert failed.

To clean up after our test, execute this:

USE [master];
GO

DROP DATABASE [CheckIndexUsage];
GO
Summary

As you can see, it turned out that the case described by the attendee wasn't true. The sys.dm_db_index_usage_stats DMV is updated even when indexes are used to check Foreign Key constraint integrity. I've tested it on SQL Server 2014, 2016, and 2017.

-Marek


SQLDay 2019 conference – Working with Legacy database – my slides and demos

Today, I had the great pleasure of speaking at the SQLDay 2019 conference in Wroclaw, Poland. It's the biggest conference in Europe focused on the Microsoft Data Platform.

This year, all attendees had the opportunity to meet exceptional speakers and attend full-day workshops delivered by real SQL Server gurus like Rob Farley (b|t) and Grant Fritchey (b|t). (Unfortunately, Aaron Bertrand (b|t) didn't arrive on time :))

I had an amazing opportunity to present my session titled Working with the Legacy database – how to tame the beast (in Polish: “Praca z bazą daną – czyli jak okiełznać starego potwora”).

Here you can find my slide deck and materials: SQLDay2019 – Marek Masko – Working with the Legacy Database.

-Marek


7 ways to track Database Schema changes – Part 6 – Event Notifications

This is the sixth post in my short series in which I demonstrate 7 (sic!) different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when.

Originally, I thought I would publish only 5 posts in this series. However, it seems that I underestimated the multitude of possibilities provided by SQL Server. Today, in this blog post, I describe how to use Event Notifications for this purpose.

Here you can find my previous posts from this series:

  • Part 1 – Default Trace
  • Part 2 – Transaction Log
  • Part 3 – Extended Events Session
  • Part 4 – DDL Trigger
  • Part 5 – SQL Server Audit

Event Notifications

The sixth option is Event Notifications. This functionality was introduced in SQL Server 2005. It offers a programming alternative to DDL Triggers and SQL Trace. Event Notifications can be used to log changes or activity occurring in the database. They execute asynchronous actions in response to a variety of DDL and SQL Trace events by sending information about these events (XML-formatted) to a Service Broker service.

Thanks to their asynchronous nature, they run outside the scope of the originating transaction and don't consume its resources. Unlike SQL Trace, Event Notifications can be used to perform an action inside an instance of SQL Server in response to a SQL Trace event.

Event Notifications Architecture

To implement an event notification, you must first create a target Service to receive event notifications and then create the Event Notification itself. To create a Service, you must first create a Queue and have a Contract that specifies which Message Types can be used for conversations with this Service.

Fortunately, Service Broker already ships with a built-in Message Type and Contract specifically for event notifications:

http://schemas.microsoft.com/SQL/Notifications/PostEventNotification
Creating an Event Notification solution

As a first step, we need to ensure that Service Broker is enabled for the database we want to monitor. Run this query to check it.

SELECT 
    [Name], 
    [is_broker_enabled] 
FROM sys.databases 
WHERE [name] = 'TrackMyChanges';

If Service Broker is disabled, you can enable it using this query.

ALTER DATABASE [TrackMyChanges] SET ENABLE_BROKER; 
GO
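
Note that SET ENABLE_BROKER waits for exclusive access to the database, so it may hang if other sessions are connected. A minimal workaround sketch, assuming you can afford to roll back active connections, is to use the ROLLBACK IMMEDIATE option:

ALTER DATABASE [TrackMyChanges] SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
GO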

The second step is to create a Queue that will store all the messages generated by our Event Notification solution. Conveniently, we only need to provide the Queue name; we don't have to set any additional options.

USE [TrackMyChanges] 
GO

CREATE QUEUE dbo.EventNotificationQueue 
GO

Use the query below to confirm that the Queue was created.

SELECT * FROM sys.service_queues 
WHERE name = 'EventNotificationQueue' 
GO

Event Notifications - Queue

The third step is to create a Service that will receive our notification messages and place them in our Queue. To do this, we have to provide three things: the name of our new Service, the name of the Queue that will be used by this Service, and a Contract name. As mentioned earlier, we don't have to create a new Contract because we can use the existing built-in one.

CREATE SERVICE [//TrackMyChanges/EventNotificationService] 
ON QUEUE dbo.EventNotificationQueue 
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]) 
GO

Run the query below to check if the Service was created.

SELECT * FROM sys.services 
WHERE name = '//TrackMyChanges/EventNotificationService' 
GO

Event Notifications - Service

The fourth and final step is to create an Event Notification that, in response to DDL_DATABASE_LEVEL_EVENTS, sends event information to our Service. It's the same event group that we used in Part 4 for the DDL Trigger.

CREATE EVENT NOTIFICATION NotifyDDLEvents 
ON DATABASE 
FOR DDL_DATABASE_LEVEL_EVENTS 
TO SERVICE '//TrackMyChanges/EventNotificationService' , 'current database' 
GO

Run this query to check if the Event Notification was created.

SELECT * FROM sys.event_notifications 
WHERE [name] = 'NotifyDDLEvents'; 
GO

Event Notifications - Event Notification

Now, every DDL event from the DDL_DATABASE_LEVEL_EVENTS group is captured and stored in our Queue. You can test it by running the test case from the first blog post of this series.

Viewing captured events

Captured events are stored in a Queue, which is actually a table. However, it is a special table: you cannot use INSERT, UPDATE, or DELETE statements to modify its data.

You can use the SELECT statement to review its current content, or the RECEIVE statement to receive and, by default, remove a message from the queue.

To preview the messages in the Queue, run the following query.

SELECT CAST(message_body AS XML) AS message_in_xml 
FROM dbo.EventNotificationQueue 
GO

Event Notifications - Messages in the queue

Here you have an example of one of the messages.

<EVENT_INSTANCE>
    <EventType>CREATE_TABLE</EventType>
    <PostTime>2018-12-02T14:44:09.030</PostTime>
    <SPID>54</SPID>
    <ServerName>MAREK-PC\SS2017</ServerName>
    <LoginName>Marek-PC\Marek</LoginName>
    <UserName>dbo</UserName>
    <DatabaseName>TrackMyChanges</DatabaseName>
    <SchemaName>dbo</SchemaName>
    <ObjectName>NewTable</ObjectName>
    <ObjectType>TABLE</ObjectType>
    <TSQLCommand>
        <SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE" />
        <CommandText>CREATE TABLE dbo.NewTable (
Id int IDENTITY NOT NULL,
Col1 varchar(20) NULL,
Col2 varchar(20) NULL
)</CommandText>
    </TSQLCommand>
</EVENT_INSTANCE>

Use this query to receive a message from the Queue.

DECLARE @MessageContent XML; 
WAITFOR 
( RECEIVE TOP(1) 
@MessageContent = CONVERT(XML, message_body)
FROM dbo.EventNotificationQueue 
), TIMEOUT 1000; 

SELECT 
@MessageContent.value('(/EVENT_INSTANCE/EventType)[1]', 'varchar(128)' ) as EventType,
@MessageContent.value('(/EVENT_INSTANCE/PostTime)[1]', 'varchar(128)' ) as PostTime,
@MessageContent.value('(/EVENT_INSTANCE/SPID)[1]', 'int' ) as SPID, 
@MessageContent.value('(/EVENT_INSTANCE/ServerName)[1]', 'varchar(128)' ) as ServerName, 
@MessageContent.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'varchar(128)' ) as DatabaseName, 
@MessageContent.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(128)' ) as LoginName, 
@MessageContent.value('(/EVENT_INSTANCE/SchemaName)[1]', 'varchar(128)' ) as SchemaName, 
@MessageContent.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(128)' ) as ObjectName,
@MessageContent.value('(/EVENT_INSTANCE/ObjectType)[1]', 'varchar(128)' ) as ObjectType,
@MessageContent.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)') AS TSQLCommand

Event Notifications - Received message

Note that every execution of this query returns the subsequent event from the queue. You can use the SELECT statement to see that the number of messages waiting in the queue has decreased.
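
For instance, a quick way to confirm this, using the dbo.EventNotificationQueue created above, is to simply count the remaining messages:

SELECT COUNT(*) AS [MessagesInQueue]
FROM dbo.EventNotificationQueue;
GO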

Persisting changes in a user table

If you don't feel comfortable with storing information about schema changes in a Service Broker Queue, you can very easily create an automatic solution that saves this data to a table.

To do it, we need to create a table into which we will insert our data. We can reuse the DatabaseLogs table that was introduced in Part 4.

CREATE TABLE [dbo].[DatabaseLogs](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [DateTime] [datetime] NOT NULL CONSTRAINT DF_DatabaseLogs_DateTime DEFAULT (GETDATE()),
    [ServerName] [nvarchar](128) NULL,
    [ServiceName] [nvarchar](128) NULL,
    [SPID] [int] NULL,
    [SourceHostName] [nvarchar](128) NULL,
    [LoginName] [nvarchar](128) NULL,
    [UserName] [nvarchar](128) NULL,
    [SchemaName] [nvarchar](128) NULL,
    [ObjectName] [nvarchar](128) NULL,
    [TargetObjectName] [nvarchar](128) NULL,
    [EventType] [nvarchar](128) NULL,
    [ObjectType] [nvarchar](128) NULL,
    [TargetObjectType] [nvarchar](128) NULL,
    [EventData] [xml] NULL,
    CONSTRAINT [PK_DatabaseLogs] PRIMARY KEY CLUSTERED ( [Id] ASC ) 
)
GO

CREATE NONCLUSTERED INDEX nix_DatabaseLogs ON [dbo].[DatabaseLogs] ([DateTime] ASC)  INCLUDE ([Id]);
GO

To move events from the Queue to the table, we need to create an Activation Stored Procedure for our Queue. The Queue automatically starts the Activation Stored Procedure every time a message enters the Queue. Here you can find an example of such a stored procedure.

CREATE PROCEDURE dbo.usp_EventNotificationActivation
AS
BEGIN
    SET NOCOUNT ON;
 
    DECLARE @messageContent XML;
    DECLARE @messageTypeName sysname;
 
    WHILE (1=1)
    BEGIN
        BEGIN TRANSACTION;
 
        WAITFOR
        (
            RECEIVE TOP (1)
            @messageContent = CAST(message_body AS XML),
            @messageTypeName = message_type_name
            FROM dbo.EventNotificationQueue 
        ), TIMEOUT 5000;
 
        IF (@@ROWCOUNT = 0)
        BEGIN
            ROLLBACK TRANSACTION;
            BREAK;
        END
 
        IF @messageTypeName = N'http://schemas.microsoft.com/SQL/Notifications/EventNotification'
        BEGIN
            BEGIN TRY

                INSERT dbo.DatabaseLogs (
                  [DateTime]
                , [ServerName]
                , [ServiceName]
                , [SPID]
                , [SourceHostName]
                , [LoginName]
                , [UserName]
                , [SchemaName]
                , [ObjectName]
                , [TargetObjectName]
                , [EventType]
                , [ObjectType]
                , [TargetObjectType]
                , [EventData]
                )
                VALUES (
                  @messageContent.value('(/EVENT_INSTANCE/PostTime)[1]', 'varchar(128)' )
                , @@SERVERNAME
                , @@SERVICENAME
                , @messageContent.value('(/EVENT_INSTANCE/SPID)[1]', 'int' )
                , HOST_NAME()
                , @messageContent.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(128)' )
                , @messageContent.value('(/EVENT_INSTANCE/UserName)[1]', 'varchar(128)' )
                , @messageContent.value('(/EVENT_INSTANCE/SchemaName)[1]', 'varchar(128)' )
                , @messageContent.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)')
                , @messageContent.value('(/EVENT_INSTANCE/TargetObjectName)[1]', 'nvarchar(128)')
                , @messageContent.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)')
                , @messageContent.value('(/EVENT_INSTANCE/ObjectType)[1]', 'nvarchar(128)')
                , @messageContent.value('(/EVENT_INSTANCE/TargetObjectType)[1]', 'nvarchar(128)')
                , @messageContent
                );
            END TRY

            BEGIN CATCH
                SET @messageContent= NULL;
            END CATCH

        END;
        COMMIT TRANSACTION;
    END
END;

Now that the stored procedure is created, we need to attach it to our Queue.

ALTER QUEUE dbo.EventNotificationQueue  
WITH ACTIVATION ( 
    STATUS = ON,
    MAX_QUEUE_READERS = 10,
    PROCEDURE_NAME = dbo.usp_EventNotificationActivation,  
    EXECUTE AS SELF) ; 
GO

From now on, every new event inserted into the Queue triggers an execution of the procedure, which processes all messages waiting in the Queue. You can easily verify this by executing any schema change in your test database.

SELECT * FROM dbo.DatabaseLogs;

Event Notifications - table

What was captured by Event Notifications?

Event Notifications capture exactly the same information as DDL Triggers. This is because they are triggered in response to the same DDL events and have access to the same set of data. We know what object was changed, when it was done, and by whom. We also have access to the SQL query that was executed.

Data retention

Event Notifications provide almost no built-in retention options. Captured events are stored in the queue as messages. These messages stay there until they are received (or permanently, if you set the queue to retain messages). If you save messages to a table, then you have exactly the same situation as with DDL Triggers: if you want to purge old data, you need to implement such functionality yourself.
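
As a minimal sketch, assuming the dbo.DatabaseLogs table defined above and an arbitrary 90-day retention period, such a purge could be as simple as the statement below, scheduled as a SQL Server Agent job:

-- Purge log entries older than 90 days (the retention period is just an example)
DELETE FROM dbo.DatabaseLogs
WHERE [DateTime] < DATEADD(DAY, -90, GETUTCDATE());
GO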

Event Notifications summary
Advantages:

  • Contains a lot of details
  • Info WHAT exactly was changed
  • Access to the executed SQL statement
  • Info WHEN object was changed
  • Info by WHO object was changed
  • Data can be logged to the user table
  • Easy to set up
  • Possibility to set additional filters
  • Easy viewing
  • Asynchronous event processing
Disadvantages:

  • No default data retention options
  • No access to the old object definition

-Marek


5 ways to track Database Schema changes – Part 5 – SQL Server Audit

This is the fifth post in my short 5-part series in which I demonstrate 5 different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when. Today, I describe how to use SQL Server Audit for this purpose.

Here you can find my previous posts from this series:

  • Part 1 – Default Trace
  • Part 2 – Transaction Log
  • Part 3 – Extended Events Session
  • Part 4 – DDL Trigger

SQL Server Audit

The fifth option is SQL Server Audit. This functionality was introduced in SQL Server 2008. It is based on Extended Events and allows you to audit a SQL Server instance or an individual database. The results of an audit can be saved to a file, the Windows Security event log, or the Windows Application event log. An audit can be created to log a single event or events from a predefined event group. Sounds familiar? This mechanism is very similar to the one used with DDL Triggers.

The SQL Server Audit consists of several elements:

  • The SQL Server Audit – created at the SQL Server instance level. It specifies the destination where captured events are stored.
  • The Server Audit Specification – belongs to a SQL Server Audit. It is created at the SQL Server instance level and collects server-level action groups raised by Extended Events.
  • The Database Audit Specification – belongs to a SQL Server Audit. It is created at the database level and collects database-level action groups raised by Extended Events.

To test this solution, we must create an SQL Server Audit that captures the group of object modification events. In our case it is:

  • SCHEMA_OBJECT_CHANGE_GROUP

This group tracks create, alter, and drop events on schemas and schema objects such as tables, views, stored procedures, and functions.

Creating an SQL Server Audit

As a first step, we have to create the SQL Server Audit object. It can be done using the Wizard or using T-SQL. To use the GUI, in Object Explorer expand the Instance and then Security. Right-click on “Audits” and choose “New Audit…”.

Audit - Start Wizard

In the “Create Audit” window, provide the audit name, destination, and data retention settings. Then click “OK”.

Audit - Create Audit

By default, the audit is created in a disabled state.

Audit - Audit is created

As a next step, we must create an Audit Specification. If you want to log object schema change events in all databases, it will be a Server Audit Specification. If you're interested in logging events from only one particular database, it will be a Database Audit Specification.

For demo purposes, I chose the Database Audit Specification. To create it, in Object Explorer expand Instance -> Databases -> DB -> Security. Right-click on “Database Audit Specifications” and choose “New Database Audit Specification…”.

In the “Create Database Audit Specification” window, we need to do a few things: provide the name for the new Database Audit Specification, select the audit created in the previous step, and finally choose the actions (or groups of actions) that should be traced. In our case, it is SCHEMA_OBJECT_CHANGE_GROUP. Then click “OK”.

Audit - Create DB Audit Specification

By default, the Database Audit Specification is created in a disabled state. To start auditing, we must enable it and then enable SQL Server Audit.

The same steps can be performed using T-SQL. You can use this script to create such a SQL Server Audit together with a Database Audit Specification.

USE [master]
GO

CREATE SERVER AUDIT [ObjectsModification]
TO FILE 
(	FILEPATH = N'C:\Program Files\Microsoft SQL Server\MSSQL14.SS2017\MSSQL\Log\'
    ,MAXSIZE = 10 MB
    ,MAX_ROLLOVER_FILES = 10
    ,RESERVE_DISK_SPACE = OFF
)
WITH
(	QUEUE_DELAY = 1000
    ,ON_FAILURE = CONTINUE
)
GO

USE [TrackMyChanges]
GO

CREATE DATABASE AUDIT SPECIFICATION [DatabaseAuditSpecification]
FOR SERVER AUDIT [ObjectsModification]
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
WITH (STATE = ON)
GO

ALTER SERVER AUDIT [ObjectsModification] WITH (STATE = ON)
GO
Viewing logged events

To view logged events in SSMS, right-click on the SQL Server Audit you want to check and choose “View Audit Logs”.

Audit - View Audit Logs
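
Alternatively, when the audit writes to files, you can read them with T-SQL using the built-in sys.fn_get_audit_file function. A minimal sketch, assuming the file path used in the script above:

SELECT event_time, session_server_principal_name, [database_name], [object_name], [statement]
FROM sys.fn_get_audit_file(N'C:\Program Files\Microsoft SQL Server\MSSQL14.SS2017\MSSQL\Log\*.sqlaudit', DEFAULT, DEFAULT);
GO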

What was captured by SQL Server Audit?

SQL Server Audit works similarly to Extended Events Sessions and DDL Triggers, and it logs very comparable data. We know what object was changed, when it was done, and by whom. We also have access to the SQL query that was executed.

Data retention

If you log to the Windows Application Log or the Windows Security Log, you have no options for configuring data retention. When logging to files, you can specify the number of files and their maximum sizes.

SQL Server Audit summary
Advantages:

  • Contains a lot of details
  • Info WHAT exactly was changed
  • Access to the executed SQL statement
  • Info WHEN object was changed
  • Info by WHO object was changed
  • Easy to set up
  • Logging to files or Windows Logs
  • Possibility to set additional filters
  • Easy viewing
Disadvantages:

  • No access to the old object definition

In the next post (yes – there will be another one), I will provide you with a general summary of all 5 ways to track database schema changes.

-Marek


5 ways to track Database Schema changes – Part 4 – DDL Trigger

This is the fourth post in my short 5-part series in which I demonstrate 5 different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when. Today, I describe how to use a DDL Trigger for this purpose.

Here you can find my previous posts from this series:

  • Part 1 – Default Trace
  • Part 2 – Transaction Log
  • Part 3 – Extended Events Session

DDL Triggers

The fourth option is a DDL Trigger. These kinds of triggers execute in response to Data Definition Language (DDL) events. They can be created at the database or server scope. Database-scoped DDL Triggers are stored as objects in the database in which they are created, while server-scoped DDL Triggers are stored in the master database. They can be created to fire in response to a single particular event or to events from a predefined event group.

In our case, we are interested in capturing all events from the DDL_DATABASE_LEVEL_EVENTS group.

To get information about the event that fired our DDL Trigger, we use the EVENTDATA() function. It returns an XML value that contains details about the event that triggered the execution of the trigger.

Create DDL Trigger

To log object changes, we first have to create a table in which we will save the captured events. The code snippet below shows the simplest possible table structure that can be used for this purpose.

CREATE TABLE [dbo].[DatabaseLogs](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [DateTime] [datetime] NOT NULL 
    CONSTRAINT DF_DatabaseLogs_DateTime DEFAULT (GETDATE()),
    [EventData] [xml] NULL,
    CONSTRAINT [PK_DatabaseLogs] PRIMARY KEY CLUSTERED ( [Id] ASC ) 
)
GO

CREATE NONCLUSTERED INDEX nix_DatabaseLogs ON [dbo].[DatabaseLogs] ([DateTime] ASC) INCLUDE ([Id]);
GO

The trigger can be created using this query.

CREATE TRIGGER [tr_DatabaseLogs]
ON DATABASE 
FOR DDL_DATABASE_LEVEL_EVENTS 
AS
BEGIN
    SET NOCOUNT ON;

    IF OBJECT_ID('dbo.DatabaseLogs') IS NOT NULL
    BEGIN

        BEGIN TRY
            DECLARE @Eventdata XML;
            SET @Eventdata = EVENTDATA();

            INSERT dbo.DatabaseLogs (
              [DateTime]
            , [EventData]
            )
            VALUES (
              GETUTCDATE()
            , @Eventdata
            );
        END TRY

        BEGIN CATCH
            SET @Eventdata= NULL;
        END CATCH
    END
END
GO

Pay attention to the additional checks implemented in case the table is dropped or the insert statement fails. It is critical to keep such a DDL Trigger transparent to users and applications, so that it does not impact their work if something goes wrong.

DDL Triggers - SSMS

Viewing logged events

Execute this simple SELECT statement to view the log of captured object modifications.

SELECT * FROM dbo.DatabaseLogs ORDER BY [DateTime];

DDL Triggers - Result

Here you have an example of one of the inserted XMLs.

<EVENT_INSTANCE>
    <EventType>CREATE_PROCEDURE</EventType>
    <PostTime>2018-11-17T17:52:51.700</PostTime>
    <SPID>54</SPID>
    <ServerName>MAREK-PC\SS2017</ServerName>
    <LoginName>Marek-PC\Marek</LoginName>
    <UserName>dbo</UserName>
    <DatabaseName>TrackMyChanges</DatabaseName>
    <SchemaName>dbo</SchemaName>
    <ObjectName>usp_NewProc</ObjectName>
    <ObjectType>PROCEDURE</ObjectType>
    <TSQLCommand>
        <SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON" QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE" />
        <CommandText>CREATE PROCEDURE dbo.usp_NewProc
AS
BEGIN
SELECT 'Version 1';
END
        </CommandText>
    </TSQLCommand>
</EVENT_INSTANCE>

Of course, it is possible to extract this detailed information in the trigger itself and insert it into additional columns in the logging table.

Extended DDL Trigger

As a first step, we have to extend our logging table.

CREATE TABLE [dbo].[DatabaseLogs](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [DateTime] [datetime] NOT NULL CONSTRAINT DF_DatabaseLogs_DateTime DEFAULT (GETDATE()),
    [ServerName] [nvarchar](128) NULL,
    [ServiceName] [nvarchar](128) NULL,
    [SPID] [int] NULL,
    [SourceHostName] [nvarchar](128) NULL,
    [LoginName] [nvarchar](128) NULL,
    [UserName] [nvarchar](128) NULL,
    [SchemaName] [nvarchar](128) NULL,
    [ObjectName] [nvarchar](128) NULL,
    [TargetObjectName] [nvarchar](128) NULL,
    [EventType] [nvarchar](128) NULL,
    [ObjectType] [nvarchar](128) NULL,
    [TargetObjectType] [nvarchar](128) NULL,
    [EventData] [xml] NULL,
    CONSTRAINT [PK_DatabaseLogs] PRIMARY KEY CLUSTERED ( [Id] ASC ) 
)
GO

CREATE NONCLUSTERED INDEX nix_DatabaseLogs ON [dbo].[DatabaseLogs] ([DateTime] ASC)  INCLUDE ([Id]);
GO

Such a table structure should be sufficient to meet most needs. As the next step, we must implement XML parsing logic in our DDL Trigger. To do this, we use SQL Server XQuery methods.

CREATE TRIGGER [tr_DatabaseLogs]
ON DATABASE 
FOR DDL_DATABASE_LEVEL_EVENTS 
AS
BEGIN
    SET NOCOUNT ON;

    IF OBJECT_ID('dbo.DatabaseLogs') IS NOT NULL
    BEGIN
        BEGIN TRY
            DECLARE @Eventdata XML;
            SET @Eventdata = EVENTDATA();

            INSERT dbo.DatabaseLogs (
              [DateTime]
            , [ServerName]
            , [ServiceName]
            , [SPID]
            , [SourceHostName]
            , [LoginName]
            , [UserName]
            , [SchemaName]
            , [ObjectName]
            , [TargetObjectName]
            , [EventType]
            , [ObjectType]
            , [TargetObjectType]
            , [EventData]
            )
            VALUES (
              GETUTCDATE()
            , @@SERVERNAME
            , @@SERVICENAME
            , @Eventdata.value('(/EVENT_INSTANCE/SPID)[1]', 'int')
            , HOST_NAME()
            , @Eventdata.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/UserName)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/TargetObjectName)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/ObjectType)[1]', 'nvarchar(128)')
            , @Eventdata.value('(/EVENT_INSTANCE/TargetObjectType)[1]', 'nvarchar(128)')
            , @Eventdata
            );
        END TRY

        BEGIN CATCH
            SET @Eventdata= NULL;
        END CATCH
    END
END
GO

Thanks to this solution, we can easily filter logged events without having to write complicated ad-hoc XQuery queries.
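
For example, a simple filter on the extracted columns, using the dbo.DatabaseLogs table defined above (the ALTER_TABLE value is just an illustration), could look like this:

SELECT [DateTime], [LoginName], [SchemaName], [ObjectName], [EventType]
FROM dbo.DatabaseLogs
WHERE [EventType] = N'ALTER_TABLE'
ORDER BY [DateTime] DESC;
GO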

What was captured by DDL Trigger?

DDL Triggers capture very similar pieces of information to the Extended Events Sessions. We know what object was changed, when it was done, and by whom. We also have access to the SQL query that was executed.

Data retention

In this case, there is no default mechanism that cleans up old data. If you need one, you have to implement it yourself. Otherwise, the logging table will contain all events captured since the DDL Trigger was created. That, of course, has its advantages and disadvantages.

DDL Triggers summary
Advantages:

  • Contains a lot of details
  • Info WHAT exactly was changed
  • Access to the executed SQL statement
  • Info WHEN object was changed
  • Info by WHO object was changed
  • Data logged to the user table
  • Easy to set up
  • Possibility to set additional filters
  • Easy viewing
Disadvantages:

  • No default data retention options
  • No access to the old object definition

In the next part, I will demonstrate how to use SQL Server Audit to capture database schema changes.

-Marek


5 ways to track Database Schema changes – Part 3 – Extended Events Session

Last week I published the second post in my short 5-part series in which I demonstrate 5 different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when. Today's post describes how to use an Extended Events Session for this purpose.

You can find my previous posts here:

  • Part 1 – Default Trace
  • Part 2 – Transaction Log

Extended Events Session

The third option is an Extended Events Session. This functionality was introduced in SQL Server 2008. It provides a very lightweight monitoring and event tracking system that can be very helpful in performance troubleshooting and (in our case) change monitoring.

Microsoft announced it as the successor of SQL Trace (which has been marked as deprecated since SQL Server 2012). That means SQL Trace may be removed in a future version. However, SQL Trace and SQL Server Profiler are still very popular and widely used, so I really doubt that will happen anytime soon.

In the beginning, Extended Events didn't gain much popularity because they were tough to use. Now, with the help of SSMS, working with them is much more comfortable.

To test this solution, we have to create an Extended Events Session that captures object modification events. In our case, these are:

  • object_created
  • object_altered
  • object_deleted
Creating an Extended Events Session

You can create such a session in two ways: using T-SQL or using the Wizard in SQL Server Management Studio. I find the latter much more comfortable.

Create a Session using Wizard

To create a new Session, in Object Explorer expand Instance -> Management -> Extended Events. Right-click on Sessions and choose New Session Wizard.

Extended Events Wizard - Start Wizard

On the “Set Session Properties” page, provide a name for your session and decide whether it should start at server startup. Then click the Next button.

Extended Events Wizard - Session Properties

On the next page, you can decide whether you want to use a predefined template. In our case, there is no template that we can use to track object schema changes. Choose the “Do not use template” option and click “Next”.

Extended Events Wizard - Template

On the “Select Events To Capture” page, select the object_altered, object_created, and object_deleted events.

Extended Events Wizard - Events To Capture

It should look like this:

Extended Events Wizard - Events To Capture 2

On the “Capture Global Fields” page, you can decide what data you want to collect. My recommendation is to select the following ones:

  • client_app_name
  • client_hostname
  • database_id
  • database_name
  • server_principal_name
  • session_id
  • sql_text

This gives you an overview of what really happened. You know who performed a change, from which machine, and from which application. Most importantly, you also know what SQL statement was executed. Once you have set this, click “Next”.

Extended Events Wizard - Capture Global Fields

On the “Set Session Event Filters” page, you can add additional filters. This enables you, for example, to capture events for just one database instead of all databases on your instance. Then click “Next”.

Extended Events Wizard - Session Event Filters

On the “Specify Session Data Storage” page, you need to decide whether you want to log data to file(s) (the event_file target) or keep it in memory (the ring_buffer target). For real-life usage, you should choose files. Here, for demo purposes, I use the ring_buffer.

Extended Events Wizard - Data Storage

Once the session is created, you can start it immediately and watch live data on the screen as events are captured.

Extended Events Wizard - Session Created

Create a Session using T-SQL

The same session can be created using this script.

CREATE EVENT SESSION [CaptureObjectModifications] ON SERVER 
ADD EVENT sqlserver.object_altered(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text)),
ADD EVENT sqlserver.object_created(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text)),
ADD EVENT sqlserver.object_deleted(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.ring_buffer(SET max_events_limit=0,max_memory=102400)
GO
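
If you prefer the event_file target for real-life usage (as recommended above), you can swap the target with ALTER EVENT SESSION. A minimal sketch, assuming the session created above and an example file path:

ALTER EVENT SESSION [CaptureObjectModifications] ON SERVER
DROP TARGET package0.ring_buffer;
GO

ALTER EVENT SESSION [CaptureObjectModifications] ON SERVER
ADD TARGET package0.event_file(SET filename = N'C:\Temp\CaptureObjectModifications.xel', max_file_size = (10), max_rollover_files = (10));
GO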
Viewing events captured by the Extended Events Session

Once the session is created and started, it captures all object modifications. To see changes live, start the Watch Live Data view in SSMS.

Extended Events - Start Watch Live Data

You can right-click on the column names to add additional columns to this view.

Extended Events - Watch Live Data - Add Columns

Now, let's test it by executing the whole test case from the first post of this series. Captured events are automatically displayed in the window.

Extended Events - Watch Live Data

To review historical data from the ring_buffer target, you need to use a T-SQL query. When you log to files, you can review their content in SSMS with the View Target Data option. Use this query to select the events captured by our session.

;WITH raw_data(t) AS
(
    SELECT CONVERT(XML, target_data)
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS st
    ON s.[address] = st.event_session_address
    WHERE s.name = 'CaptureObjectModifications'
    AND st.target_name = 'ring_buffer'
),
xml_data (ed) AS
(
    SELECT e.query('.') 
    FROM raw_data 
    CROSS APPLY t.nodes('RingBufferTarget/event') AS x(e)
)
SELECT * --FROM xml_data;
FROM
(
  SELECT
    [timestamp]       = ed.value('(event/@timestamp)[1]', 'datetime'),
    [database_id]     = ed.value('(event/data[@name="database_id"]/value)[1]', 'int'),
    [database_name]   = ed.value('(event/action[@name="database_name"]/value)[1]', 'nvarchar(128)'),
    [object_type]     = ed.value('(event/data[@name="object_type"]/text)[1]', 'nvarchar(128)'),
    [object_id]       = ed.value('(event/data[@name="object_id"]/value)[1]', 'int'),
    [object_name]     = ed.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(128)'),
    [session_id]      = ed.value('(event/action[@name="session_id"]/value)[1]', 'int'),
    [login]           = ed.value('(event/action[@name="server_principal_name"]/value)[1]', 'nvarchar(128)'),
    [client_hostname] = ed.value('(event/action[@name="client_hostname"]/value)[1]', 'nvarchar(128)'),
    [client_app_name] = ed.value('(event/action[@name="client_app_name"]/value)[1]', 'nvarchar(128)'),
    [sql_text]        = ed.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)'),
    [phase]           = ed.value('(event/data[@name="ddl_phase"]/text)[1]',    'nvarchar(128)')
  FROM xml_data
) AS x
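-- each DDL event is raised twice (ddl_phase Begin and Commit); keep only committed changes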
WHERE phase = 'Commit'
ORDER BY [timestamp];

Extended Events - TSQL

What was captured by Extended Events Session?

In terms of what was captured, the Extended Events Session looks very good. It has a variety of configuration options that allow you to customize the logged details. Viewing the data collected by the session, we know what was changed, when it was changed, and by whom. We also have the SQL statement that was executed to perform the change.

Data retention

The Extended Events Session has many retention options for both targets. For files, we can specify the maximum file size and the number of rollover files. For the ring buffer, we can specify the maximum event count and memory size. That gives users a lot of flexibility.

Extended Events summary
Advantages:

  • Contains a lot of details
  • Info WHAT exactly was changed
  • Access to the executed SQL statement
  • Info WHEN object was changed
  • Info by WHO object was changed
  • Plenty of retention options
  • Possibility to save data to memory or files
  • Easy to set up
  • Possibility to set additional filters
Disadvantages:

  • Difficult processing of captured events (T-SQL and XQuery)
  • No access to the old object definition

In the next part, I will demonstrate how to use DDL Triggers to capture database schema changes.

-Marek


5 ways to track Database Schema changes – Part 2 – Transaction Log

Last week I published the first post in my short 5-part series in which I demonstrate 5 different ways to track database schema changes. Thanks to this, you can always easily answer questions about what has changed, who changed it, and when. Today's post describes how to use the Transaction Log for this purpose.

You can find my previous post here:

  • Part 1 – Default Trace

Transaction Log

Another solution that can be used to track changes executed against your database is to read the Transaction Log file or Transaction Log Backups. The Transaction Log file (and its backups) contains every transaction executed against your database. The only requirement is to have the database in the Full recovery model. In the Simple recovery model, every committed transaction can be very quickly overwritten by another one.

Also, this is something you get for free, without the need to enable and configure any additional SQL Server functionality. Well, besides the database backups, but you already do database backups, right?

To test this approach, you have to make some preparations. You need to set the database's recovery model to Full. As a first step, check the database properties to ensure that it is appropriately configured. As a second step, create a full database backup. From this point, the database is in the Full recovery model, and every transaction is fully logged. Thanks to this, you are able to read logged transactions from the Transaction Log file. The same applies to reading from a Transaction Log backup; to do this, you need to take such a backup after you execute the database schema changes.
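
A quick way to verify the recovery model with T-SQL, using the TrackMyChanges database from this series, is:

SELECT [name], recovery_model_desc
FROM sys.databases
WHERE [name] = 'TrackMyChanges';
GO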

USE master;
GO

CREATE DATABASE TrackMyChanges;
GO

BACKUP DATABASE TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.bak';
GO

USE TrackMyChanges;
GO

-- Here execute all statements that create, modify, and drop objects.

USE master;
GO

-- Now you can check Transaction log file

BACKUP LOG TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.trn';
GO

-- Now you can check Transaction Log backup file

DROP DATABASE TrackMyChanges;
Reading Transaction Log file

To read the transaction log file, we can use the undocumented fn_dblog function. It accepts two parameters:

  • The first is a starting log sequence number (LSN) we want to read from. If you specify NULL, it returns everything from the start of the log.
  • The second is an ending log sequence number (LSN) we want to read to. If you specify NULL, it returns everything to the end of the log file.

SELECT * FROM fn_dblog(null, null);

This query returns a lot of data. Fortunately, we don't need all the columns to check what happened to our objects. We can easily reduce the number of columns and rows to the relevant ones. Each object modification has to be part of a transaction. As a first step, we can list only the rows with the LOP_BEGIN_XACT operation.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    [Transaction SID],
    [Xact ID],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log - Transaction Name

Based on the [Transaction Name], we can identify transactions that changed the schema of our objects. At this point, we don't know yet which object it is, but we can check who modified it. The [Transaction SID] column contains the SID of the Login that was used to execute this operation. We can use the SUSER_SNAME() function to get its name. The [Transaction Name] column simply describes what was changed.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log - Transaction SID

To continue, we need to decide which particular change we want to investigate further. Let's take the second CREATE/ALTER FUNCTION transaction. We need to note down its Transaction ID. For me, it is 0000:0000039e.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description]
FROM fn_dblog(null, null) 
WHERE [Transaction ID] = '0000:0000039e';

Transaction Log - One Transaction

Now, to discover what object was changed, we have to dig into the [Lock Information] column. The first LOP_LOCK_XACT operation describes a Schema Modification Lock on object ID = 965578478.

HoBt 0:ACQUIRE_LOCK_SCH_M OBJECT: 5:965578478:0

This is our function:

SELECT * FROM sys.objects WHERE object_id = 965578478;

Transaction Log - Function

OK. At this point, we know what object was changed, when, and by whom. However, this is the Transaction Log, and it should also contain detailed information about what exactly was changed. Can we get it? Oh yes, we can. To do it, run the following query.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description], 
    CAST(SUBSTRING([RowLog Contents 0],33,LEN([RowLog Contents 0])) AS varchar(8000)) AS [Definition]
FROM fn_dblog(null, null)
WHERE [Transaction ID] = '0000:0000039e'
AND [AllocUnitName] = 'sys.sysobjvalues.clst' 
ORDER BY [Current LSN] ASC;

Transaction Log - Change Details

As you can see, there are two rows. One describes the deleted old object definition, and the second one represents the inserted new value. That's really cool! Thanks to this, we know exactly what was changed.

Reading Transaction Log backup

Reading changes directly from the Transaction Log file is one approach, but you can also get the same information from Transaction Log backups. The only difference is that you must use fn_dump_dblog() instead of fn_dblog(). This function accepts 68 parameters (sic!). Fortunately, we have to provide only a few of them.

  • The first is a starting log sequence number (LSN) we want to read from. If you specify NULL, it returns everything from the start of the backup file.
  • The second is an ending log sequence number (LSN) we want to read to. If you specify NULL, it returns everything to the end of the backup file.
  • The third is a type of file (can be DISK or TAPE).
  • The fourth one is a backup number in the backup file.
  • The fifth is the path to the backup file.

What about the remaining 63 parameters? They need to be specified only if you use striped media sets with multiple disk files (max 64). In such a case, you have to provide paths to the rest of the files. If you don't use this feature, then you must provide DEFAULT values.

BACKUP LOG TrackMyChanges TO DISK='C:\Temp\TrackMyChanges.trn';
GO

SELECT * FROM fn_dump_dblog(
    NULL, NULL, N'DISK', 1, N'C:\Temp\TrackMyChanges.trn',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT);

When you know what values to provide as parameters, you can easily use this function to get the same data as in the previous examples.

SELECT [Current LSN],
    [Operation],
    [Transaction ID],
    [SPID],
    [Begin Time],
    [Transaction Name],
    SUSER_SNAME([Transaction SID]) AS [Login],
    [Xact ID],
    [End Time],
    [Lock Information],
    [Description]
FROM fn_dump_dblog(
        NULL, NULL, N'DISK', 1, N'C:\Temp\TrackMyChanges.trn',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, 
        DEFAULT, DEFAULT, DEFAULT)
WHERE [Operation] = 'LOP_BEGIN_XACT'
AND [Transaction Name] NOT IN ('AutoCreateQPStats', 'SplitPage')
ORDER BY [Current LSN] ASC;

Transaction Log Backup - Transaction Name

What was captured by Transaction Log?

By its nature, the Transaction Log contains detailed information about every change. You can extract from it such information as what was changed (in detail), when it was modified, and by whom.

Data retention

Here, the story is straightforward: you have access to this information for as long as you store the Transaction Log backups.

Transaction Log summary
Advantages:

  • Contains every detail
  • Info WHAT exactly was changed
  • Access to the old and new object definition
  • Info WHEN object was changed
  • Info WHO changed the object
Disadvantages:

  • Requires the database to be in the FULL recovery model
  • Complicated and lengthy data retrieval process
  • In busy systems, it may be difficult to find the log entry you are looking for
  • Retention based on Transaction Log backup retention

In the next part, I will demonstrate how to configure an Extended Events Session to capture database schema changes.

-Marek


5 ways to track Database Schema changes – Part 1 – Default Trace

In a perfect world, only Database Administrators have access to SQL Server databases. All database schema changes go through a strict Change Management Process, where they need to be well described and approved by a Change Advisory Board. The database schema is stored in a source code repository, and the deployed version doesn't drift unexpectedly from its original model.

Unfortunately, we don’t live in the perfect world…

Despite the version control system, change management process, and limited access to the server, sometimes the database schema is changed without our knowledge and supervision. It may happen in a development environment, where a bunch of people have access and the deployment process is not very strict (the common case). However, it may also happen in higher-level environments, where only a limited number of people have access (the rare case – but not impossible).

Sooner or later, such unexpected schema changes start to be very problematic. They may break some functionality or create other issues (e.g., performance degradation). They may block the deployment of subsequent changes. They may simply be implemented in an inefficient way, or even just be stupid.

In such a case, various questions arise, and you as a DBA will have to answer some of them.

  • When was the XYZ object changed?
  • What modifications were applied to the XYZ object?
  • Who changed the XYZ object?

In this short 5-part series, I will demonstrate 5 different ways to track such database schema changes. Thanks to this, you will always be able to easily answer such questions. Today's post describes how to use the Default Trace.



Step by step installation of SQL Server 2019 (vNext) on Windows

On Monday, September 24, during the 2018 Ignite conference, Microsoft announced the public preview of SQL Server 2019 (vNext). The CTP 2.0 (Community Technology Preview) version is now accessible to everyone. In this blog post, I provide a step-by-step guide (with screenshots) to installing SQL Server 2019 on Windows. If you are a novice SQL Server user or have not had the opportunity to install a SQL Server instance before, you might find this post very helpful.

