DallasDBAs.com

SQL Server Database Consulting


SQL Server Wait Stats

April 16, 2025 by Kevin3NF

Everything in SQL Server is waiting for something…this is by design. And most people never think about it…

Until your application is the one waiting – to complete its request, sell the product, or deliver “the report” to the CEO!

Most SQL Server performance issues come down to one thing: waiting. Every time a query slows down, SQL Server records what it was waiting on. These wait stats are one of the best diagnostic tools for DBAs, sysadmins, and developers – but only if you know how to use them.

What Are Wait Stats?

SQL Server tracks the time spent waiting on resources like CPU, memory, disk, or locks. Think of them as traffic reports – some delays are normal, while others indicate serious bottlenecks. If your database is slow, wait stats tell you where to start looking.

How to Read Wait Stats

A simple query can reveal where your server is spending the most time:

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms 
FROM sys.dm_os_wait_stats 
ORDER BY wait_time_ms DESC;

Key columns:

  • wait_type – The specific type of wait that might be affecting performance
  • waiting_tasks_count – Number of times this wait has occurred
  • wait_time_ms – Total time spent waiting (cumulative since the stats were last cleared)
  • signal_wait_time_ms – The portion of wait_time_ms spent waiting for a CPU after the resource became available
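
Since signal wait time is the portion spent waiting on a CPU after the resource was already available, comparing it to total wait time gives a rough CPU-pressure check. A quick sketch (the often-quoted worry threshold is around 25%, but treat that as a starting point, not a rule):

SELECT CAST(100.0 * SUM(signal_wait_time_ms) / NULLIF(SUM(wait_time_ms), 0)
            AS decimal(5,2)) AS signal_wait_pct -- share of all wait time that is CPU scheduler wait
FROM sys.dm_os_wait_stats;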

 

Common Wait Types and What They Mean:

[Image: a table of common SQL Server wait types and possible fixes]

 

Many other waits exist, but these are some of the most common. Microsoft provides a full reference in the sys.dm_os_wait_stats documentation.

SQL Skills has the best list of wait types and things to know I’ve ever seen. Go there and read before changing anything. MANY recorded waits are benign and not worth looking into.
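
As one example of the kind of filtering they recommend, here is the same query with a handful of well-known benign waits excluded. This exclusion list is a tiny, incomplete sample – the SQLskills reference maintains a far longer one:

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    N'BROKER_TASK_STOP', N'BROKER_TO_FLUSH', N'CHECKPOINT_QUEUE',
    N'DIRTY_PAGE_POLL', N'LAZYWRITER_SLEEP', N'SLEEP_TASK',
    N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR', N'XE_TIMER_EVENT'
) -- benign background waits; extend this list from the SQLskills reference
ORDER BY wait_time_ms DESC;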

Finding and Fixing Performance Issues: A guide for you to try in your development environment

  1. Run the wait stats query to capture the current state.
  2. Identify the top wait type and correlate it with system metrics. Research it.
  3. Apply a tuning change (indexing, parallelism adjustments, memory tuning).
  4. Re-run the query to compare results.

This process provides a clear before-and-after snapshot of how performance improves.
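
For steps 1 and 4, a minimal before-and-after sketch: capture a baseline, let the workload run through a sample window (five minutes here, purely arbitrary), then diff to see what accumulated:

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO #baseline
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:05:00'; -- the sample window; adjust to taste

SELECT ws.wait_type,
       ws.waiting_tasks_count - b.waiting_tasks_count AS new_waits,
       ws.wait_time_ms        - b.wait_time_ms        AS new_wait_ms,
       ws.signal_wait_time_ms - b.signal_wait_time_ms AS new_signal_ms
FROM sys.dm_os_wait_stats AS ws
JOIN #baseline AS b ON b.wait_type = ws.wait_type
WHERE ws.wait_time_ms > b.wait_time_ms -- only waits that accumulated during the window
ORDER BY new_wait_ms DESC;

DROP TABLE #baseline;

In a development environment you can also simply clear the counters with DBCC SQLPERF(N'sys.dm_os_wait_stats', CLEAR); and re-run the original query after your tuning change.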

The Bottom Line

Wait stats don’t tell you exactly what’s broken, but they show where to investigate. The key is to correlate them with query execution plans, server configuration, and system resource usage. Tracking these waits over time can also help spot performance trends before they become critical issues.
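
If you want that over-time view, one low-effort approach is an Agent job that appends a timestamped snapshot every few minutes. A sketch, assuming a utility database of your choosing (the table name here is made up):

-- Run on a schedule to build a wait stats history you can trend against
IF OBJECT_ID(N'dbo.WaitStatsHistory') IS NULL
    CREATE TABLE dbo.WaitStatsHistory (
        capture_time        datetime2(0) NOT NULL DEFAULT SYSDATETIME(),
        wait_type           nvarchar(60) NOT NULL,
        waiting_tasks_count bigint       NOT NULL,
        wait_time_ms        bigint       NOT NULL,
        signal_wait_time_ms bigint       NOT NULL
    );

INSERT dbo.WaitStatsHistory (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats;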

SQL Server is always waiting for something. The question is – what are you going to do about it? What are you waiting for?!

Thanks for reading!

–Kevin

Get the first month of Pocket DBA® FREE from Dallas DBAs!

 

Filed Under: Performance, SQL Tagged With: performance, SQL, syndicated

How old are those stats?

September 30, 2021 by Kevin3NF

Image by angel1238812 from Pixabay

SQL Server maintains a variety of stats about all sorts of performance items.

  • Index usage (or missing indexes)
  • Query performance
  • Corrupt pages
  • Disk IO performance
  • Way more than I care to list here

I was trying to troubleshoot some TerriBad* tempdb write performance…almost 3000ms per write, on a server that we recently migrated to.

Our data center vendor of course said the storage was perfect and that we should try troubleshooting SQL Server.  (Duh?)

These stats are stored in the sys.dm_io_virtual_file_stats DMF, and I found this gem under the ‘sample_ms’ column description in the result set:

“The data type is int for SQL Server 2008 through SQL Server 2014 (12.x). In these versions, the value will reset to 0 after approximately 25 days of continuous database engine uptime.”

Now, this is a SQL 2017 box, but this tells me that these stats are going to reset at some point.

When I re-ran my IO stalls query…the writes for all 8 tempdb data files are at 7ms.  The sample_ms is currently 3,097,351,985 which is about 36 days if I’ve done my math right. Server uptime is approaching 90 days.
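
A minimal sketch of such an IO stalls query (yours may differ), including sample_ms so you can see how long the counters have been accumulating:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.sample_ms, -- how long these counters have been accumulating
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_write_stall_ms DESC;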

The takeaway for you, dear reader, is to know where to find the stats but also to know what timeframe they represent.

* Thanks to Brent Ozar (b|t) for this highly amusing term 🙂

Thanks for reading!

Kevin3NF


Filed Under: Performance Tagged With: performance, syndicated

Virtual Log Files

November 24, 2020 by Kevin3NF

Today’s post is a guest article from a friend of Dallas DBAs, writer, and fantastic DBA Jules Behrens (B|L)

One common performance issue that is not well known, but should still be on your radar as a DBA, is a high number of VLFs.

Virtual Log Files are the files SQL Server uses to do the actual work in a SQL log file (MyDatabase_log.LDF). It allocates new VLFs every time the log file grows. Perhaps you’ve already spotted the problem – if the log file is set to grow by a tiny increment, and the file ever grows very large, you may end up with thousands of tiny little VLFs, and this can slow down your performance at the database level. Think of it like a room (the log file) filled with boxes (the VLFs). If you just have a few boxes, it is more efficient to figure out where something (a piece of data in the log file) is than if you have thousands of tiny boxes. (Analogy courtesy of @SQLDork)

It is especially evident there is an issue with VLFs when SQL Server takes a long time to recover from a restart. Other symptoms may be slowness with autogrowth, log shipping, replication, and general transactional slowness. Anything that touches the log file, in other words.

The best solution is prevention – set your log file to be big enough to handle its transaction load to begin with, and set it to have a sensible growth rate in proportion to its size, and you’ll never see this come up. But sometimes we inherit issues where best practices were not followed, and a high number of VLFs is certainly something to check when doing a health assessment on an unfamiliar environment.
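
A quick way to check whether a log file is set up for trouble – current size and growth settings from sys.master_files (a sketch; size and growth are stored in 8 KB pages):

SELECT DB_NAME(database_id) AS database_name,
       name AS logical_name,
       size * 8 / 1024 AS size_mb,
       CASE is_percent_growth
            WHEN 1 THEN CAST(growth AS varchar(12)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(12)) + ' MB'
       END AS growth_setting
FROM sys.master_files
WHERE type_desc = 'LOG';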

Find your VLF counts

The built-in DMF sys.dm_db_log_info is specifically for finding information about the log file, and the command DBCC LOGINFO (deprecated) will return a lot of useful information about VLFs as well. There is an excellent script for pulling the count of VLFs that uses DBCC LOGINFO from Kev Riley, on Microsoft TechNet:

https://gallery.technet.microsoft.com/scriptcenter/SQL-Script-to-list-VLF-e6315249

There is also a great script by Steve Rezhener on SQLSolutionsGroup.com that utilizes the DMF:

https://sqlsolutionsgroup.com/capture-sql-server-vlf-information-using-a-dmv/

Either one of these will tell you what you ultimately need to know – if your VLFs are an issue. How many VLFs are too many? There isn’t an industry standard, but for the sake of a starting point, let’s say a tiny log file has 500 VLFs. That is high. A 5GB log file with 200 VLFs, on the other hand, is perfectly acceptable. You’ll likely know a VLF problem when you find it; you’ll run a count on the VLFs and it will return something atrocious like 20,000. (ed – someone at Microsoft support told me about one with 1,000,000 VLFs)
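
If you just want a quick VLF count per database, a minimal sketch using the DMF (available from SQL Server 2016 SP2 onward):

SELECT d.name AS database_name,
       COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li -- one row per VLF
WHERE d.state = 0 -- online databases only
GROUP BY d.name
ORDER BY vlf_count DESC;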

Resolution

If the database is in Simple recovery model and doesn’t see much traffic, this is easy enough to fix. Manually shrink the log file as small as it will go, verify the autogrow is appropriate, and grow it back to its normal size. If the database is in Full recovery model and is in high use, it’s a little more complex. Follow these steps (you may have to do it more than once):

  • Take a transaction log backup.
  • Issue a CHECKPOINT manually.
  • Check the empty space in the transaction log to make sure you have room to shrink it.
  • Shrink the log file as small as it will go.
  • Grow the file back to its normal size.
  • Lather, Rinse, Repeat as needed (a sketch of the cycle follows below).
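
A sketch of that cycle in T-SQL – the database name MyDB, the logical log file name MyDB_log, the backup path, and the 5GB target size are all hypothetical, so substitute your own:

BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn'; -- hypothetical path; Full recovery model
GO
USE MyDB;
GO
CHECKPOINT;
DBCC SHRINKFILE (MyDB_log, 0); -- shrink the log as small as it will go
GO
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 5GB);         -- grow back in one step
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB); -- sensible fixed growth
GO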

Now check your VLF counts again, and make sure you are down to a nice low number. Done!

If you need help, contact us

Thanks for reading!

Kevin3NF

Filed Under: HealthCheck, Performance, Troubleshooting Tagged With: performance, syndicated

TIL: Starting a recurring SQL trace

December 8, 2016 by Kevin3NF

Today I learned…

How to set up a recurring Server-side SQL trace that runs every hour for 10 minutes.

Issues:

  • 6 people in the room are staring at me, waiting for the last-second request to be done at the end of an 11-hour day (3 of them from the VBV – Very Big Vendor)
  • Trace file names must be different, or you get errors
  • Trace files cannot end with a number
  • I can’t tell time when I am hungry and tired

I know I need to script out the trace to T-SQL (which I had already done), and put this in a ‘start trace’ job on an hourly schedule. I have set these up before to run ad hoc, but not recurring.
I also need code to stop the trace 10 minutes later that my boss can understand…so we are going for fast and functional here…not pretty.

In the SQL script Profiler generates, the @tracefile parameter supplied to sp_trace_create is specified in the call:

exec @rc = sp_trace_create @TraceID output, 2, 'D:\MyTraceFiles\Trace', @maxfilesize, NULL

If you need different names for each execution, you have to get creative. I created a variable to feed in that included a getdate():

declare @tracefile nvarchar(245)
set @tracefile = 'D:\MyTraceFiles\Trace_'+Cast (getdate() as varchar(20))+'_Trace'

This worked and gave this result in testing:

D:\MyTraceFiles\Trace_Dec 8 2016 7:08AM_Trace

But…running the full trace script generated files in the directory with this name:

Trace_Dec 8 2016 7

It took me some time playing with variable sizes and such before I asked one of the VBV guys to look at it. We figured out the ‘:’ in the time stamp was an issue, so I included a REPLACE:

set @tracefile = 'D:\MyTraceFiles\Trace_'+replace(Cast (getdate() as varchar(20)),':','')+'_Trace'

Which gave this result:

D:\MyTraceFiles\Trace_Dec 8 2016 711AM_Trace

More importantly, I had a functioning trace script I could schedule:

-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @tracefile nvarchar(245)

set @maxfilesize = 4096
set @tracefile = 'D:\MyTraceFiles\Trace_'+replace(Cast (getdate() as varchar(20)),':','')+'_Trace'

--select @tracefile

exec @rc = sp_trace_create @TraceID output, 2, @tracefile, @maxfilesize, NULL
if (@rc != 0) goto error

-- Client side File and Table cannot be scripted

-- Set the events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 10, 1, @on
exec sp_trace_setevent @TraceID, 10, 9, @on
exec sp_trace_setevent @TraceID, 10, 2, @on
-- . . . (many more sp_trace_setevent calls omitted) . . .
exec sp_trace_setevent @TraceID, 13, 64, @on
exec sp_trace_setevent @TraceID, 13, 66, @on

-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint

exec sp_trace_setfilter @TraceID, 35, 0, 6, N'Database1'
exec sp_trace_setfilter @TraceID, 35, 1, 6, N'Database2'
--above, the '1' in parameter 3 changes this to an OR instead of an AND,
--so where DatabaseName is Database1 OR Database2. Also new info for me today.

-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1

-- display trace id for future references
select TraceID=@TraceID
goto finish

error:
select ErrorCode=@rc

finish:
go

Scheduling was simple…drop the above into a ‘Start trace’ job and enable it to run every hour on the hour.

Create a second job to run every hour, at ten minutes past, with the following:

exec sp_trace_setstatus 2, 0 -- stop the trace
exec sp_trace_setstatus 2, 2 -- close the trace and delete its definition

On this system, my trace will always be ID 2. ID 1 is the default trace, and nobody else here runs traces.
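
If you can’t count on the trace ID staying the same, a more defensive sketch looks it up in sys.traces by path (the path here is the one from the script above):

declare @TraceID int
select @TraceID = id
from sys.traces
where is_default = 0
and path like N'D:\MyTraceFiles\Trace%'
if @TraceID is not null
begin
    exec sp_trace_setstatus @TraceID, 0 -- stop
    exec sp_trace_setstatus @TraceID, 2 -- close and delete the definition
end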

Summary:

I already knew I don’t do well at the end of a long day or in a hurry, so I took my time while the others waited, in order to not put in a bad solution to run overnight. There are much more elegant ways to do this, but this worked for me and I learned it on the fly.

Thanks for reading!

Kevin3NF

Filed Under: Performance, TIL Tagged With: job, performance, SQL Trace, TIL
