A list of the software I use

Two weeks ago I blogged about The Essential SQL Server Tools in my stack. That reminded me of an idea I had in January to post a list of the software I usually use. Back then I had to create such a list for myself because I was switching to a new laptop. I decided to post it here, so the next time I have to reinstall my laptop I will have everything in one place. Additionally, maybe you will find something interesting here as well.

Basic
  • Microsoft Office – Word, Excel, Outlook, PowerPoint, OneNote. (link)
  • Chrome – Web browser. (link)
SQL Server
  • Microsoft SQL Server – I think this doesn’t require any explanation. (link)
  • Microsoft SQL Server Management Studio – Data Management Tool. (link)
  • Microsoft SQL Operations Studio – Data Management Tool. (link)
  • Microsoft SQL Server Data Tools – SQL Server Tools for developers. (link)
  • RML Utilities – Tools to help database administrators manage the performance of Microsoft SQL Server. (link)
  • DiagManager – Graphical PSSdiag configuration manager. (link)
  • SQLNexus – Tool for PSSDiag data analysis. (link)
  • SQL Sentry Plan Explorer – Query analysis and tuning tool. (link)
  • Microsoft Data Migration Assistant – Tool for database compatibility issues detection. (link)
  • Microsoft Database Experimentation Assistant – Tool for evaluating a targeted version of SQL for a given workload. (link)
Other data platforms
  • DB Browser for SQLite – Data Management Tool for SQLite. (link)
  • pgAdmin – Data Management Tool for PostgreSQL. (link)
  • Neo4J – Graph Platform. (link)
Programming
  • git – Version control system. (link)
  • Git Extensions – Graphical User interface for git. (link)
  • TortoiseGit – Windows Shell Interface to Git and based on TortoiseSVN. (link)
  • Microsoft Visual Studio – Integrated Development Environment. (link)
  • Microsoft Visual Studio Code – Lightweight Code Editor. (link)
  • Notepad++ – Code editor. (link)
Frameworks
  • Java – Java Runtime Environment. (link)
  • .NET Framework – (link)
  • Microsoft SQL Server Data-tier Application Framework (DACFx) – (link)
Communication
  • Cisco Webex Meetings – Video conferencing and screen sharing. (link)
  • Skype – Video chat and voice calls. (link)
  • Slack – Collaboration tool. (link)
Miscellaneous – Work
  • 7-zip – Great file archiver. (link)
  • Adobe Acrobat Reader – PDF Reader. (link)
  • KeePass – Passwords manager. (link)
  • PuTTY – SSH and Telnet client for Windows. (link)
  • Total Commander – File manager for Windows. (link)
  • WinMerge – File differencing and merging tool. (link)
  • WinSCP – SFTP, SCP, and FTP client for Windows. (link)
  • Microsoft Power BI Desktop – For creating live, interactive reports. (link)
  • Remote Server Administration Tools – For remote management of roles and features in Windows Server. (link)
  • OneDrive – File hosting service. (link)
  • VirtualBox – General purpose Virtual Machines host. (link)
  • Microsoft Visio – Diagram and charts creation. (link)
Miscellaneous – Entertainment
  • K-Lite Codec Pack – Pack of video and audio codecs. (link)
  • BESTPlayer – Video player with subtitles support. (link)
Miscellaneous – Blogging and presenting
  • GIMP – Photo/Graphic editing program. (link)
  • Paint.NET – Photo/Graphic editing program. (link)
  • ScreenToGif – Screen, webcam and sketchboard recorder with an integrated editor. (link)
  • Sizer – Freeware utility that allows you to resize any window to an exact, predefined size. (link)
  • ZoomIt – Screen zoom and annotation tool for technical presentations. (link)

Wow… I have many more items on my list than I expected. However, I know this list may not be complete and some very valuable applications may be missing.

Drop a comment if you found this list useful, or if there is some other software you have been using for a long time that you think I should try out.

-Marek


I am speaking at 24 Hours of PASS

I’m happy to announce that I have been selected to present a session during the next 24 Hours of PASS. This time the main topic is Cross-Platform SQL Server Management.

24 HoP is an exceptional online event that provides free training for database professionals. It’s a series of back-to-back webinars running for 24 hours, with a new 60-minute webinar starting each hour. The full schedule was announced last Friday.

I will have the great pleasure of presenting my session about SQL Operations Studio – a New Multi-Platform tool for SQL Server Database Development, Administration, and Monitoring at 06:00 UTC on April 26.

Be sure to mark your calendar for April 25-26 for this edition of 24 HoP. Registration is already open.

I’d love to see you there!

-Marek


T-SQL Tuesday #101 – The Essential SQL Server Tools in my stack

This month’s TSQL2sday is hosted by Jens Vestergaard (b|t) and the topic is the SQL Server tools that we, Database Administrators, Architects, and Developers, use every day in our work.

It’s a great topic, isn’t it?

Personally, I think that the tools we use are of great importance for our daily work. Many of you may disagree, saying that skills are much more important than tools. Of course, I completely agree with that statement. Knowledge, experience, and skills enable us to do our work even without proper tools. However, that doesn’t mean the software we use doesn’t matter. Many applications, utilities, and frameworks can automate and speed up our daily activities, so we can achieve more in a shorter time. That’s the reason why I blog about various SQL Server tools quite often (see posts in the Tools category).

The SQL Server tools I use

I don’t have a long list of fancy, little-known applications. I think most of the tools I use are well known in the SQL Server community. Nevertheless, I decided to describe them for you. Maybe you will find something interesting here…

Server and Database Management

The first, and probably most heavily used, tool is SQL Server Management Studio (SSMS) (download). I don’t think I have to introduce it to anyone. It’s the Swiss Army knife for SQL Server specialists. Before SQL Server 2017 it was an inseparable element of every release. Since version 16, it has been a separate application with a short release cycle. Every few weeks Microsoft releases a new version, and each of them delivers amazing new features like XE Profiler, Data Discovery & Classification, Vulnerability Assessment, the Import Flat File Wizard, and many, many more. I cannot imagine working with SQL Server without SSMS.

sql-server-management-studio-screenshot

Despite the fact that SSMS is so popular, Microsoft decided to create another tool for database professionals. SQL Operations Studio (SQLOps) (download) is a new application from Microsoft designed to make our lives easier. It’s a free and open-source data management tool that enables you to work with SQL Server, Azure SQL DB, and SQL DW from Windows, macOS, and Linux. This lightweight, multi-platform solution can be used for SQL Server database development, administration, and monitoring. I described it in more detail in this post.

sql-operations-studio-screenshot

T-SQL Source Code Management

For database source code management, I use SQL Server Data Tools (SSDT) (download) and Git for Windows (download). Thanks to SSDT we can create SQL database projects in the same way we create .NET projects in Visual Studio. These projects can be built and validated, and then the compiled artifact, in the form of a DACPAC file, can be easily deployed against the target database.

sql-server-data-tools-screenshot

To automate the DACPAC deployment process, my team uses sqlpackage.exe from the Data-Tier Application Framework (DACFx) (download) wrapped in our own PowerShell module. We decided to create our own wrapper because it gives us the required flexibility and makes our database deployment process more streamlined.
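
For reference, a bare-bones publish call (outside of any wrapper) looks roughly like this – the DACPAC file, server, and database names below are just placeholders, not values from our module:

sqlpackage.exe /Action:Publish /SourceFile:"MyDatabase.dacpac" /TargetServerName:"MYSERVER" /TargetDatabaseName:"MyDatabase"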

For text/SQL files

SSMS and SSDT are great and very powerful SQL Server tools for working with *.sql files. However, sometimes I find that I don’t need such a big and complex tool for quick and easy tasks on text/SQL files, like opening a file just for a quick look or a quick search. I used to use Notepad++ (download) for that kind of activity, but recently I’ve been trying to get more familiar with Visual Studio Code (download).

notepad-plus-plus-screenshot

Visual Studio Code is a lightweight and extensible multi-platform source code editor that works on Windows, macOS, and Linux. Microsoft puts a lot of effort into the development of this solution. It comes with built-in support for JavaScript, TypeScript, and Node.js, and has a lot of extensions for other languages (such as T-SQL, C#, C++, Java, Python, …).

visual-studio-code-screenshot

For text file comparison I use WinMerge (download). It is a free and open-source differencing and merging tool for Windows. It can compare both folders and files, presenting differences in a visual text format that is easy to understand and handle.

winmerge-screenshot

Database Unit Tests

To test database code I use tSQLt (download). tSQLt is a free and open-source database unit testing framework for Microsoft SQL Server, written entirely in T-SQL and .NET CLR. Besides a long list of available assertions, it also has the great and rare ability to mock various database objects. In my opinion, that makes it better than the tests in SSDT.
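
To give an idea of how such a test looks, here is a minimal sketch of a tSQLt test. The dbo.Customers table and the dbo.GetActiveCustomerCount function are made-up names used only for illustration:

EXEC tSQLt.NewTestClass 'testCustomers';
GO
CREATE PROCEDURE testCustomers.[test GetActiveCustomerCount counts only active customers]
AS
BEGIN
    -- Mock the real table: tSQLt swaps it for an empty copy without constraints
    EXEC tSQLt.FakeTable 'dbo.Customers';

    INSERT INTO dbo.Customers (Id, IsActive) VALUES (1, 1), (2, 0);

    DECLARE @actual INT = dbo.GetActiveCustomerCount();

    EXEC tSQLt.AssertEquals @Expected = 1, @Actual = @actual;
END;
GO
-- Run all tests in the class
EXEC tSQLt.Run 'testCustomers';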

In addition to the above, I recently started to use SQLCover (download) created by Ed Elliott (b|t). It’s one more free and open-source solution, this time for generating code coverage reports for unit tests. It’s amazing how well it works.

SQLCover-screenshot

Community scripts/tools

Besides all of the above-mentioned SQL Server tools, I have a favorite set of well-known T-SQL scripts and procedures I commonly use for troubleshooting. The first and probably best-known is the sp_whoisactive (download) procedure from Adam Machanic (b|t). It’s like a powerful combination of sp_who2, Activity Monitor, and many other diagnostic queries based on DMVs. If you’ve never used it before, you should try it now!
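
Using it is as simple as it gets. A couple of typical calls (the optional parameters shown are documented switches of the procedure):

EXEC sp_WhoIsActive;

-- include estimated execution plans and lock details for every active request
EXEC sp_WhoIsActive @get_plans = 1, @get_locks = 1;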

Other scripts I use quite often are the stored procedures from the Brent Ozar Unlimited First Responder Kit (download). This bundle includes tools such as sp_Blitz, sp_BlitzFirst, sp_BlitzCache, and a few other very useful ones.
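
A few example calls, assuming the procedures are already installed on the instance you are connected to:

EXEC sp_Blitz;                         -- overall health check with prioritized findings
EXEC sp_BlitzFirst @Seconds = 30;      -- what is hurting the server right now (30-second sample)
EXEC sp_BlitzCache @SortOrder = 'cpu'; -- most expensive plans in the cache, sorted by CPU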

Troubleshooting Tools

In very difficult situations, when the above scripts are not enough to troubleshoot an issue, I use PSSdiag (download) and SQLNexus (download). Both were developed at Microsoft as side projects, used mainly by the SQL Server Customer Support Service and Product Support Service teams. The public versions are now free and open source and can be used by everyone.

PSSdiag is a data collector utility that can capture various interesting data sets, counters, and metrics. For example:

  • Event Logs
  • PerfMon counters
  • Server configuration
  • Error Logs
  • Profiler Trace / XE Session
  • Blocking information
  • Query statistics
  • and many others…

If that is not enough, it can also be extended with custom collector scripts. This gives the user the ability to collect all the necessary information.

SQLNexus analyzes the data collected by PSSdiag and generates very detailed reports that describe the monitored SQL Server’s state and health. They are invaluable in troubleshooting SQL Server issues.

In case you’re interested in the possibilities these tools give you, I encourage you to read my other blog post describing a PSSdiag and SQLNexus use case.

SQL Server tools that are on my check-it-out list

In addition to all of these tools I already use, there are others I still want to try out.

dbatools (download) is a PowerShell module with more than 300 SQL Server administration, best practice, and migration commands. Thanks to the huge number of contributors from the SQL Server community, it keeps growing and getting better and better.

The same amazing group of people created the dbachecks (download) PowerShell module. It’s a solution for automated environment validation. It’s based on Pester and already has more than 80 checks implemented.

dbachecks

Last but not least is dbareports (download). It uses PowerShell, T-SQL, and SQL Agent to gather information about your SQL Server estate. It also comes with beautiful SSRS reports and Power BI and Cortana integration.

All of these solutions wouldn’t exist without Chrissy LeMaire (b|t) and Rob Sewell, aka SQL DBA with A Beard (b|t), who started all of this! Thanks to their effort and that of the SQL Server community, we have plenty of great SQL Server tools at our disposal.

-Marek


How to use Diskspd to check IO subsystem performance

One of the many responsibilities of a Database Administrator is the installation of new SQL Server instances. However, before you do it, it is best practice to test the IO subsystem’s performance. It does not make any sense to install a new instance in an environment that will have performance issues in a moment. In this blog post, I describe how to use the Diskspd tool to perform such disk performance tests.

What is Diskspd?

Diskspd is a storage testing tool created by the Microsoft Windows, Windows Server, and Cloud Server Infrastructure Engineering teams. It combines robust and granular IO workload definition with flexible runtime and output options. That makes it a perfect tool for storage performance testing, validation, and benchmarking.

Where to find Diskspd?

Diskspd is a free and open-source utility. Its source code can be found on GitHub. The repository also hosts other frameworks which use Diskspd; you can find them under the ‘Frameworks’ directory. A binary release is hosted by Microsoft at the following location: http://aka.ms/diskspd.

How to use Diskspd?

Diskspd doesn’t require any additional tools or frameworks. No .NET or Java is required. Everything that is needed is included in a ZIP file. After downloading and extracting it, you will have this:

Diskspd - zip archive

Each of the three folders contains a Diskspd.exe executable:

  • amd64fre – for 64bit systems
  • armfre – for ARM systems
  • x86fre – for 32bit systems

In addition to that, you get comprehensive Diskspd documentation with many usage examples and a PowerShell script to convert XML results into CSV.
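
To produce the XML results that this script consumes, add the -Rxml switch to a test run and redirect the output to a file, roughly like this (sizes and paths below are placeholders):

DiskSpd.exe -c10G -d60 -r -w40 -t4 -o16 -b64K -Sh -L -Rxml D:\SpeedTest\testfile.dat > D:\SpeedTest\result.xml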

Diskspd is a command-line utility and as such has a long list of available parameters. However, to use it efficiently you just need to remember a few of them. Let’s analyze the parameters from the example below:

DiskSpd.exe -c150G -d300 -r -w40 -t8 -o32 -b64K -Sh -L D:\SpeedTest\testfile.dat

Parameters:

  • -c150G – Create a file of the specified size. Size can be stated in bytes or KiBs, MiBs, GiBs. Here – 150GB.
  • -d300 – Duration of measurement period in seconds, not including cool-down or warm-up time (default = 10 seconds). Here – 5 minutes.
  • -r – Random I/O access (overrides -s).
  • -s – Sequential I/O access.
  • -w40 – Percentage of write requests to issue (default = 0, 100% read). Here 40% of IO operations are writes and the remaining 60% are reads. This is a typical load for my SQL Server OLTP databases.
  • -t8 – The number of threads per file. Here – 8. One thread per available core.
  • -o32 – The number of outstanding I/O requests per target per thread. In other words, the queue depth. Here – 32.
  • -b64K – Block size in bytes or KiBs, MiBs, or GiBs. Here – 64KB.
  • -Sh – Disable both software caching and hardware write caching.
  • -L – Measure latency statistics.
  • D:\SpeedTest\testfile.dat – My target file used for testing (created with -c).

How to read the results?

I executed this command on a system I recently got for testing. My task was to examine whether the new SAN disk array was configured properly and had sufficient performance for the load generated by SQL Server. Here is the result:

Command Line: DiskSpd.exe -c150G -d300 -r -w40 -t8 -o32 -b64K -Sh -L D:\SpeedTest\testfile.dat

Input parameters:

    timespan:   1
    -------------
    duration: 300s
    warm up time: 5s
    cool down time: 0s
    measuring latency
    random seed: 0
    path: 'D:\SpeedTest\testfile.dat'
        think time: 0ms
        burst size: 0
        software cache disabled
        hardware write cache disabled, writethrough on
        performing mix test (read/write ratio: 60/40)
        block size: 65536
        using random I/O (alignment: 65536)
        number of outstanding I/O operations: 32
        thread stride size: 0
        threads per file: 8
        using I/O Completion Ports
        IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time:	300.01s
thread count:		8
proc count:		8

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|   1.01%|   0.13%|    0.87%|  99.00%
   1|   3.04%|   0.14%|    2.90%|  96.96%
   2|   0.71%|   0.14%|    0.57%|  99.29%
   3|   0.74%|   0.13%|    0.61%|  99.26%
   4|   0.62%|   0.10%|    0.53%|  99.38%
   5|   0.90%|   0.09%|    0.81%|  99.10%
   6|   0.66%|   0.06%|    0.60%|  99.34%
   7|   0.62%|   0.05%|    0.57%|  99.38%
-------------------------------------------
avg.|   1.04%|   0.11%|    0.93%|  98.96%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      4326424576 |        66016 |      13.75 |     220.04 |  145.375 |    52.576 | testfile.dat (150GB)
     1 |      4338679808 |        66203 |      13.79 |     220.67 |  144.962 |    51.947 | testfile.dat (150GB)
     2 |      4328062976 |        66041 |      13.76 |     220.13 |  145.323 |    52.482 | testfile.dat (150GB)
     3 |      4328128512 |        66042 |      13.76 |     220.13 |  145.308 |    52.563 | testfile.dat (150GB)
     4 |      4336517120 |        66170 |      13.78 |     220.56 |  145.029 |    52.215 | testfile.dat (150GB)
     5 |      4334288896 |        66136 |      13.78 |     220.44 |  145.107 |    52.244 | testfile.dat (150GB)
     6 |      4328718336 |        66051 |      13.76 |     220.16 |  145.289 |    53.204 | testfile.dat (150GB)
     7 |      4339859456 |        66221 |      13.80 |     220.73 |  144.928 |    52.339 | testfile.dat (150GB)
-----------------------------------------------------------------------------------------------------
total:       34660679680 |       528880 |     110.18 |    1762.85 |  145.165 |    52.447

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      2600796160 |        39685 |       8.27 |     132.28 |  144.647 |    58.771 | testfile.dat (150GB)
     1 |      2616066048 |        39918 |       8.32 |     133.05 |  144.026 |    57.697 | testfile.dat (150GB)
     2 |      2605776896 |        39761 |       8.28 |     132.53 |  144.203 |    58.723 | testfile.dat (150GB)
     3 |      2582183936 |        39401 |       8.21 |     131.33 |  144.362 |    59.017 | testfile.dat (150GB)
     4 |      2592538624 |        39559 |       8.24 |     131.86 |  144.037 |    58.435 | testfile.dat (150GB)
     5 |      2597126144 |        39629 |       8.26 |     132.09 |  144.168 |    58.460 | testfile.dat (150GB)
     6 |      2599616512 |        39667 |       8.26 |     132.22 |  144.713 |    59.531 | testfile.dat (150GB)
     7 |      2601058304 |        39689 |       8.27 |     132.29 |  143.783 |    58.635 | testfile.dat (150GB)
-----------------------------------------------------------------------------------------------------
total:       20795162624 |       317309 |      66.10 |    1057.65 |  144.242 |    58.661

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      1725628416 |        26331 |       5.49 |      87.77 |  146.474 |    41.504 | testfile.dat (150GB)
     1 |      1722613760 |        26285 |       5.48 |      87.61 |  146.384 |    41.685 | testfile.dat (150GB)
     2 |      1722286080 |        26280 |       5.47 |      87.60 |  147.019 |    41.225 | testfile.dat (150GB)
     3 |      1745944576 |        26641 |       5.55 |      88.80 |  146.707 |    41.165 | testfile.dat (150GB)
     4 |      1743978496 |        26611 |       5.54 |      88.70 |  146.505 |    41.226 | testfile.dat (150GB)
     5 |      1737162752 |        26507 |       5.52 |      88.35 |  146.511 |    41.199 | testfile.dat (150GB)
     6 |      1729101824 |        26384 |       5.50 |      87.94 |  146.156 |    41.917 | testfile.dat (150GB)
     7 |      1738801152 |        26532 |       5.53 |      88.44 |  146.641 |    41.101 | testfile.dat (150GB)
-----------------------------------------------------------------------------------------------------
total:       13865517056 |       211571 |      44.08 |     705.20 |  146.550 |    41.379


  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |     22.132 |     22.095 |     22.095
   25th |    111.940 |    129.764 |    116.858
   50th |    132.568 |    151.406 |    141.026
   75th |    163.575 |    172.090 |    168.590
   90th |    210.856 |    191.478 |    199.258
   95th |    251.512 |    203.980 |    226.995
   99th |    359.454 |    228.761 |    322.688
3-nines |    541.528 |    270.872 |    497.081
4-nines |    727.123 |    317.519 |    695.128
5-nines |    811.054 |    475.358 |    807.394
6-nines |    828.036 |    648.498 |    828.036
7-nines |    828.036 |    648.498 |    828.036
8-nines |    828.036 |    648.498 |    828.036
9-nines |    828.036 |    648.498 |    828.036
    max |    828.036 |    648.498 |    828.036

Let’s analyze it.

In the first line, we have the command that was executed to perform the test. The parameters used and all other settings are listed and explained below it.

Command Line: DiskSpd.exe -c150G -d300 -r -w40 -t8 -o32 -b64K -Sh -L D:\SpeedTest\testfile.dat

Input parameters:

    timespan:   1
    -------------
    duration: 300s
    warm up time: 5s
    cool down time: 0s
    measuring latency
    random seed: 0
    path: 'D:\SpeedTest\testfile.dat'
        think time: 0ms
        burst size: 0
        software cache disabled
        hardware write cache disabled, writethrough on
        performing mix test (read/write ratio: 60/40)
        block size: 65536
        using random I/O (alignment: 65536)
        number of outstanding I/O operations: 32
        thread stride size: 0
        threads per file: 8
        using I/O Completion Ports
        IO priority: normal

In the next section, we have the actual test time, the thread count, and the number of available processors. Additionally, we have information about the average utilization of every processor during the test run.

actual test time:	300.01s
thread count:		8
proc count:		8

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|   1.01%|   0.13%|    0.87%|  99.00%
   1|   3.04%|   0.14%|    2.90%|  96.96%
   2|   0.71%|   0.14%|    0.57%|  99.29%
   3|   0.74%|   0.13%|    0.61%|  99.26%
   4|   0.62%|   0.10%|    0.53%|  99.38%
   5|   0.90%|   0.09%|    0.81%|  99.10%
   6|   0.66%|   0.06%|    0.60%|  99.34%
   7|   0.62%|   0.05%|    0.57%|  99.38%
-------------------------------------------
avg.|   1.04%|   0.11%|    0.93%|  98.96%

As you can see, there was almost no CPU load.

The Total IO section provides statistics (Read+Write) per thread. The last row provides Total values for the whole test run.

Total IO
thread |  bytes     |  I/Os  |  MB/s  | I/O per s | AvgLat  | LatStdDev |  file
------------------------------------------------------------------------------------
     0 | 4326424576 |  66016 |  13.75 |    220.04 | 145.375 |    52.576 | testfile.dat
     1 | 4338679808 |  66203 |  13.79 |    220.67 | 144.962 |    51.947 | testfile.dat
     2 | 4328062976 |  66041 |  13.76 |    220.13 | 145.323 |    52.482 | testfile.dat
     3 | 4328128512 |  66042 |  13.76 |    220.13 | 145.308 |    52.563 | testfile.dat
     4 | 4336517120 |  66170 |  13.78 |    220.56 | 145.029 |    52.215 | testfile.dat
     5 | 4334288896 |  66136 |  13.78 |    220.44 | 145.107 |    52.244 | testfile.dat
     6 | 4328718336 |  66051 |  13.76 |    220.16 | 145.289 |    53.204 | testfile.dat
     7 | 4339859456 |  66221 |  13.80 |    220.73 | 144.928 |    52.339 | testfile.dat
------------------------------------------------------------------------------------
total:  34660679680 | 528880 | 110.18 |   1762.85 | 145.165 |    52.447

In this case, the test generated 528,880 IO operations, the average throughput was 110.18 MB/s (1,762.85 IOPS), and the average latency was 145 ms. In short: Terrible!

The next two sections, Read IO and Write IO, provide the same statistics, but separately for the read and the write operations.

The last section provides detailed information about latency. It is a summary table of latencies per percentile and per operation type.

  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |     22.132 |     22.095 |     22.095
   25th |    111.940 |    129.764 |    116.858
   50th |    132.568 |    151.406 |    141.026
   75th |    163.575 |    172.090 |    168.590
   90th |    210.856 |    191.478 |    199.258
   95th |    251.512 |    203.980 |    226.995
   99th |    359.454 |    228.761 |    322.688
3-nines |    541.528 |    270.872 |    497.081
4-nines |    727.123 |    317.519 |    695.128
5-nines |    811.054 |    475.358 |    807.394
6-nines |    828.036 |    648.498 |    828.036
7-nines |    828.036 |    648.498 |    828.036
8-nines |    828.036 |    648.498 |    828.036
9-nines |    828.036 |    648.498 |    828.036
    max |    828.036 |    648.498 |    828.036

The ‘nines’ rows refer to the number of nines in the percentile: 3-nines is the 99.9th percentile, 4-nines the 99.99th, and so forth.

This is how the PerfMon counters looked during this test run.

Diskspd - PerfMon

Testing scenarios

Of course, a single test run is not enough to thoroughly test the IO subsystem, especially when you have different drives to handle different workload types (or if you work with OLAP databases). One of the most common best practices is to store database data files and database log files separately. Such separation requires a different kind of test for each drive.

Fortunately, the Diskspd documentation provides a lot of helpful examples:

Data file patterns

100% 8KiB/64KiB Random reads – Large area of random concurrent reads of 8KB/64KB blocks. Disable both software caching and hardware write caching.

DiskSpd.exe -c1000G -d300 -r -w0 -t8 -o32 -b8K -h -L F:\testfile.dat

DiskSpd.exe -c1000G -d300 -r -w0 -t8 -o32 -b64K -h -L F:\testfile.dat

100% 8KiB/64KiB Random writes – Large area of random concurrent writes of 8KB/64KB blocks. Disable both software caching and hardware write caching.

DiskSpd.exe -c1000G -d300 -r -w100 -t8 -o32 -b8K -h -L F:\testfile.dat

DiskSpd.exe -c1000G -d300 -r -w100 -t8 -o32 -b64K -h -L F:\testfile.dat

60% 8KiB/64KiB Random reads, 40% 8KiB/64KiB Random writes – Large area of random concurrent 60% reads and 40% writes of 8KB/64KB blocks. Disable both software caching and hardware write caching.

DiskSpd.exe -c1000G -d300 -r -w40 -t8 -o32 -b8K -h -L F:\testfile.dat

DiskSpd.exe -c1000G -d300 -r -w40 -t8 -o32 -b64K -h -L F:\testfile.dat

Log file patterns

100% 64KiB reads – Large area of sequential concurrent reads of 64KB blocks. 8 outstanding IOs. Disable both software caching and hardware write caching.

DiskSpd.exe -c1000G -w0 -b64K -F4 -T1b -s8b -o8 -d300 -h F:\testfile.dat

100% 64KiB writes – Large area of sequential concurrent writes of 64KB blocks. 116 Outstanding IOs. Disable both software caching and hardware write caching.

DiskSpd.exe -c1000G -w100 -b64K -F4 -T1b -s8b -o116 -d300 -h F:\testfile.dat

40% 64KiB reads, 60% 64KiB writes – Large area of sequential concurrent 40% Reads, 60% writes of 64KB blocks. 8 Outstanding IOs. Disable both software caching and hardware write caching.

DiskSpd.exe -c1000G -w60 -b64K -F4 -T1b -s8b -o8 -d300 -h F:\testfile.dat

The results of my testing

If you are curious, I will tell you. I executed six test runs. I tested 8KB and 64KB block operations against the disk formatted with 4KB, 8KB, and 64KB per cluster, each with a 60% read / 40% write ratio. Here you can find the average latency I got from my measurements.

Average latency in ms:

Test  | Read (ms) | Write (ms) | Bytes per Cluster
------+-----------+------------+------------------
8k_1  |   248.303 |    397.233 | 4k
8k_2  |   137.661 |    167.228 | 8k
8k_3  |   172.608 |    214.828 | 64k
64k_1 |   164.921 |    215.299 | 4k
64k_2 |   144.242 |    146.550 | 8k
64k_3 |   161.367 |    155.142 | 64k

If you don’t know yet how to interpret these results, I highly recommend reading this post from Paul Randal (b|t). Here I will just paste a short cheat sheet:

  • Excellent: < 1ms
  • Very good: < 5ms
  • Good: 5 – 10ms
  • Poor: 10 – 20ms
  • Bad: 20 – 100ms
  • Shockingly bad: 100 – 500ms
  • WOW!: > 500ms
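
If a SQL Server instance already runs on the storage you are evaluating, you can also compare these numbers with the latencies SQL Server itself observes. A minimal sketch based on sys.dm_io_virtual_file_stats, the DMV that Paul’s post discusses in detail:

-- Average read/write latency per database file since the instance started (ms)
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;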

Yeah… Shockingly bad… This disk is trash. I need to write one more email to the storage team.

-Marek


SQL Operations Studio – Comprehensive guide to the new database DevOps tool!

SQL Operations Studio is a new tool created by Microsoft to make life easier for Database Administrators and Developers. It was first announced at the beginning of November this year, during the PASS Summit conference in Seattle. Two weeks later, on November 15th, the preview version was made available to the whole SQL Server community. The tool is available on three platforms (Windows, Linux, macOS) and can be downloaded for free here: Download and install Microsoft SQL Operations Studio (preview).

In this blog post, I’m going to describe SQL Operations Studio features. Continue reading “SQL Operations Studio – Comprehensive guide to the new database DevOps tool!”


Import Flat File wizard – the new feature available in SSMS v17.3

Import Flat File - header

Recently I wrote a post about XE Profiler – the new feature in SQL Server Management Studio 17.3. You can find it here. However, it is not the only feature added in this version of SSMS. Microsoft has also added the new Import Flat File wizard. It is a response to the many complaints about the inconvenient flat file import process received from SSMS users.

The new Import Flat File wizard is supposed to make flat file import easier and more comfortable.

What is a Flat File?

A flat file is a plain text file with no internal hierarchy. It’s often used to store tabular data because it is simple to create. The two most popular types of flat files are:

  • Delimited (such as comma- and tab-separated)
    Id, Car, Price
    1, Ford, 1500
    2, Chevrolet, 2000
  • Fixed width (when values in one column have the same length in each row)
    Id   Car         Price
    1    Ford        1500
    2    Chevrolet   2000

One of the most well-known and often-used types of delimited flat files is the comma-separated values (CSV) file.

Basic CSV rules (from Wikipedia):

  • Adjacent fields must be separated by a single comma. However, CSV formats vary greatly in this choice of separator character. In particular, in locales where the comma is used as a decimal separator, semicolon, TAB, or other characters are used instead.
  • Any field may be quoted (that is, enclosed within double-quote characters). Some fields must be quoted, as specified in following rules.
  • Fields with embedded commas or double-quote characters must be quoted.
  • Each of the embedded double-quote characters must be represented by a pair of double-quote characters.
  • Fields with embedded line breaks must be quoted (however, many CSV implementations do not support embedded line breaks).
  • The first record may be a “header”, which contains column names in each of the fields

Example:

Year,Make,Model,Description,Price
 1997,Ford,E350,"ac, abs, moon",3000.00
 1999,Chevy,"Venture ""Extended Edition""","",4900.00
 1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00
 1996,Jeep,Grand Cherokee,"MUST SELL!
 air, moon roof, loaded",4799.00

Import and Export Data wizard

In older versions of SQL Server Management Studio we have the Import and Export Data wizard available. It works pretty well when importing data between tables or databases, but it’s not so comfortable to use when importing data from flat files.

SQL Server Import and Export wizard

SQL Server Import and Export wizard - details

This wizard requires detailed information from the user about the source file format. We have to specify a delimiter, a separator, a text qualifier, a code page, and a few other things. As one of the next steps, we have to review whether the wizard correctly recognized the columns’ data types and their lengths. It’s a very cumbersome process, which often ends with switching back and forth between the wizard’s tabs to correct errors and finally import the data successfully.

This is how the above CSV example looks after being imported by this wizard:

Import and Export Data wizard - result

Import Flat File wizard

In SQL Server Management Studio 17.3, Microsoft gave users a new functionality: the Import Flat File wizard.

Import Flat File - menu

This wizard is much simpler to use.

  1. Introduction: The welcome page shortly describes how to use the Import Flat File wizard. If you don’t want to see this page every time you use the wizard, you can check the “Do not show this page again” option. Import Flat File - Introduction
  2. Specify input file: Use this page to choose the source file and the destination table name. Click the browse button to search for *.txt and *.csv files. The new table name must be unique; otherwise the wizard will not let you continue. Import Flat File - Specify Input File
  3. Preview Data: Here you can take a look at a preview of your data. It shows up to the first 50 rows. Import Flat File - Preview Data
  4. Modify Columns: On this page you can correct the column names, data types, and additional attributes recognized and set by the wizard. Import Flat File - Modify Columns
  5. Summary: This page presents a summary of the import process. Import Flat File - Summary
  6. Results: On this last page, you receive information on whether the import was successful. Import Flat File - Results

That’s it! Only a few steps and the file is imported. As you probably already noticed, there is one huge difference between this import process and the previous one: this time we didn’t have to describe the file format. That is because of a completely new import framework called PROSE. The Microsoft Program Synthesis using Examples SDK can analyze text data, search for patterns, and “learn” the structure of the file. This is how it discovers column names, data types, delimiters, and other properties.

Let’s take a look at the import results.

Import Flat File - import result

As you can see, it didn’t work well for quoted fields with embedded double-quote characters. On the other hand, it recognized the NULL value correctly.

Other Examples

OK, we already know the framework recognizes NULL values and can parse comma-separated values. What about other possible formats?

Semicolon separated values

Year;Make;Model;Description;Price
1997;Ford;E350;"ac, abs, moon";3000.00
1999;Chevy;"Venture ""Extended Edition""";"";4900.00
1999;Chevy;"Venture ""Extended Edition, Very Large""";;5000.00
1996;Jeep;Grand Cherokee;"MUST SELL!
air, moon roof, loaded";4799.00

Import Flat File - import result with semicolon

No surprises – it works in exactly the same way.

Fixed width file

Year Make  Model                         Description   Price  
1997 Ford  E350                          ac, abs, moon 3000.00
1999 Chevy Venture "Extended Ed."                      4900.00
1999 Chevy Venture "Extended Ed., Large"               5000.00

Import Flat File - fixed width import result

Nope. It didn’t work.

TRUE/FALSE values

Year,Make,Model,Description,Price,Available
1997,Ford,E350,"ac, abs, moon",3000.00,TRUE
1999,Chevy,"Venture ""Extended Edition""","",4900.00,FALSE
1999,Chevy,"Venture ""Extended Edition, Very Large""",,5000.00,TRUE
1996,Jeep,Grand Cherokee,"MUST SELL!
air, moon roof, loaded",4799.00,FALSE

Import Flat File - boolean import result

TRUE and FALSE values are recognized and correctly mapped to bit values.

Report File (*.rpt)

What about files with query results? Maybe this is the perfect solution to load such data back into the database?

Import Flat File - rpt

Unfortunately, such a file format is not supported.

Unsupported file format

What happens if we provide an unsupported file format, like JSON or a SQL Server Error Log file (after writing last week’s blog post I had to try this)? Such attempts end with the error below.

Import Flat File - error

Summary

The new Import Flat File functionality can decrease the time required to import simple flat files. Unfortunately, it doesn’t work well yet for more complex ones. It recognizes TRUE, FALSE, and NULL values correctly, but it cannot parse fixed-width files. It can handle quoted string values, but has issues parsing quoted fields that contain embedded double-quote characters. If you find this annoying, I encourage you to vote up my Connect item: https://connect.microsoft.com/SQLServer/feedback/details/3144015

-Marek


XE Profiler – the new feature available in SSMS v17.3

At the end of September, during the Microsoft Ignite event, Microsoft announced the new SQL Server 2017. At that conference, Bob Ward (b|t), a Principal Architect for the Microsoft Database Systems Group, delivered a great session titled Experience Microsoft SQL Server 2017: The fast and the furious. In 70 minutes he led us into the amazing world of SQL Server 2017 features. During this presentation he used a preview version of SQL Server Management Studio v17.3 and did a quick demo of XE Profiler – one of the new features (youtube).

A few days ago I had a great opportunity to play with SSMS v17.3 (build 14.0.17199.0) and I want to describe to you how XE Profiler works. Continue reading “XE Profiler – the new feature available in SSMS v17.3”

