
DallasDBAs.com

SQL Server Database Consulting


Performance Tuning

Getting Started with Index and Execution Plan Basics

August 24, 2018 by Kevin3NF

If you are a developer or a new SQL Server DBA and have never really worked with query tuning, this post is for you.  If you are an experienced DBA/tuner and want to nit-pick my saying "field" instead of "column," feel free to leave a comment that I will delete or never publish, lol.  This is basics, yo…

I concentrate on how to measure the performance of a query, as well as the basics of “seeing” how SQL Server is executing the query to return the results.

I could type all of this up, but watching the video is going to be a WHOLE lot better teacher, so here you go:
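If you want to follow along in a query window, here is a minimal sketch of the measurement steps the video covers (the table and database names are illustrative, not from the video):

--show logical reads and CPU/elapsed time for each statement
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

--in SSMS, also turn on "Include Actual Execution Plan" (Ctrl+M) before running
SELECT FirstName, LastName
FROM dbo.Customers
WHERE LastName = N'Smith';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

Compare the logical reads and the plan before and after an index change…that is the core measurement loop.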

As always, feel free to reach out via the comments here or on the video.  Or, follow me on Twitter and ask away!

Thanks for reading!

Kevin3NF


Filed Under: Accidental DBA, Beginner, Indexing Strategies, Performance Tuning

SQL 101: Extended Events – set up a basic session

August 6, 2018 by Kevin3NF

Setting up your first Extended Event session is much simpler than I thought it would be!

Extended Events (XE) first showed up in SQL Server 2008, but they got a GUI in SQL Server 2012, making them much easier to work with on the fly.  I'm one of those old DBAs who was reluctant to move away from SQL Trace via Profiler for simple needs.  Of course, for more complex issues and long-term tracking, I used server-side tracing for many years.

With the advent of XE, Microsoft stopped enhancing SQL Trace.  Many of the newer features of SQL Server are not in SQL Trace at all (about 180 events in Trace compared to 1,000+ in XE now).  In addition to the video in this post, I strongly recommend you go through the 4-step Stairway to Extended Events by Erin Stellato (b|t).  I'm just now starting my journey with XE, so I will be leaving out everything but the most basic steps I used…this is a 101 post, after all 🙂
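For reference, here is a minimal sketch of the kind of basic session the video builds (the session name, database filter, and file path are my illustrative choices, not from the video):

--create a session that captures completed T-SQL batches for one database
CREATE EVENT SESSION XE_101 ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.client_app_name, sqlserver.username)
    WHERE (sqlserver.database_name = N'MyDatabase'))
ADD TARGET package0.event_file (SET filename = N'C:\Temp\XE_101.xel');
GO

--start it
ALTER EVENT SESSION XE_101 ON SERVER STATE = START;

Once it is running, you can right-click the session under Management > Extended Events > Sessions in SSMS and choose "Watch Live Data", or open the .xel file later.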

Please forgive the watermark…I was testing Camtasia.  I'll be buying a copy soon!

I also want to thank Dave Mason (b|t) for his post Hide and Seek with Extended Events, which helped me understand why I was not seeing my test data in “Watch Live Data” mode, and come up with the workaround you see in the video.  Dave has his own workaround that I did not test.  Lastly, thanks to my Apprentice DBA (t) for teaching me XE after I had him go learn from the Stairway and some testing on his own.   I used what he taught me the next day to capture a sample command on a production system so I could tune some code in the customer’s test environment.

Thanks for reading!

Kevin3NF



Filed Under: Beginner, EntryLevel, Performance Tuning, video

Rant: Dead DBAs

November 27, 2017 by Kevin3NF

Where I go on a rant about the “death of the DBA”…which by comparison to others barely even qualifies as a rant 🙂

This is the post I reference in the video:

My prediction of the DBA role

And these are the names I mentioned:

  • Grant Fritchey
  • Brent Ozar
  • Pinal Dave
  • Paul Randal

I’ll let you YaBingGoogleHoo them yourself…

One thing I forgot to mention:  If you are not improving your skills and learning all the time…you are not a dead DBA, but you are becoming a “Zombie DBA” (baaaackuuuupssss….)

Thanks for watching!

Kevin3NF

Filed Under: Accidental DBA, Apprentice, Beginner, Career, Performance Tuning, Personal, PowerShell

DBCC Opentran, simplified!

April 3, 2017 by Kevin3NF

In my Top 10 SQL Server Functions post a while back, I listed DBCC OPENTRAN as one of the top 3, and for good reason.

An open transaction may simply be something that has not finished yet, or someone may have issued a BEGIN TRAN without a corresponding COMMIT or ROLLBACK.  Or, as we will see at the end, replication may be having issues.

You can use this against any database with minimal syntax and get back solid information very quickly.

 
--connect to sample db
use MyDatabase
go

--as generic as this command gets and still runs:
DBCC OPENTRAN
 

Result if nothing is open:

No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
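For the demos below, an open transaction can be created with something like this (the table and column names are illustrative):

--start a transaction and deliberately leave it open
BEGIN TRAN;
UPDATE dbo.TestTable
SET SomeColumn = SomeColumn
WHERE ID = 1;
--no COMMIT or ROLLBACK...yet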

If I start and execute a DML (insert, update or delete) transaction with BEGIN TRAN and leave out the corresponding COMMIT, I get:

Transaction information for database ‘SmallData_BigLog’.
Oldest active transaction:
SPID (server process ID): 64 <———–
UID (user ID) : -1
Name : user_transaction
LSN : (637:4620:1)
Start time : Apr 1 2017 4:59:02:307PM
SID : 0x0105000000000005150000004893a0845595b6fef515cd5de9030000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Now, if I open a second transaction (in a new query window) and execute any DML statement without the COMMIT, and then run DBCC OPENTRAN again, I get:

Transaction information for database ‘SmallData_BigLog’.
Oldest active transaction:
SPID (server process ID): 64 <———–
UID (user ID) : -1
Name : user_transaction
LSN : (637:4620:1)
Start time : Apr 1 2017 4:59:02:307PM
SID : 0x0105000000000005150000004893a0845595b6fef515cd5de9030000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Yes…the same output, as this is just showing the ONE oldest transaction.

I can run a query to show that there are two SPIDs with open transactions:

--list every session with an open transaction
--(sys.sysprocesses is a deprecated compatibility view, but it still works)
SELECT spid, blocked, [dbid], last_batch, open_tran
FROM master.sys.sysprocesses
WHERE open_tran <> 0
 

[Screenshot: the query results, showing two SPIDs with open transactions]
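If you prefer the documented DMVs over sysprocesses, a hedged sketch of the same check (my query, not from the original post):

--sessions with an open transaction, via the transaction DMVs
SELECT s.session_id, s.program_name, s.last_request_end_time, t.transaction_id
FROM sys.dm_tran_session_transactions AS t
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = t.session_id;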

If I COMMIT spid 64 and re-run DBCC OPENTRAN, the SPID changes to the second transaction I started:

Transaction information for database ‘SmallData_BigLog’.
Oldest active transaction:
SPID (server process ID): 52  <———–
UID (user ID) : -1
Name : user_transaction
LSN : (637:9603:1)
Start time : Apr 1 2017 5:11:20:830PM
SID : 0x0105000000000005150000004893a0845595b6fef515cd5de9030000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

If I COMMIT spid 52 and re-run DBCC OPENTRAN, along with checking sysprocesses for open_tran <> 0, I get:

No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
spid blocked dbid last_batch open_tran
—— ——- —— ———————– ———
(0 row(s) affected)

 

Now, all of that was just running DBCC OPENTRAN by itself.  There are additional options:

--specify dbname, dbid or 0 for the current database
DBCC OPENTRAN (SmallData_BigLog)

You will get results in the same format as the previous examples.

You can suppress all messages, regardless of whether a transaction is open (but I have no idea why this would help you…):

DBCC OPENTRAN (0) with no_infomsgs

Result:

Command(s) completed successfully.

 

If you need to periodically capture the oldest transaction for later review, use WITH TABLERESULTS:

-- TABLERESULTS only shows the oldest open tran;
-- useful when run in a loop to log the oldest
-- tran over time.

--create a temp table
CREATE TABLE #OpenTranStatus (
ActiveTransaction varchar(25),
Details sql_variant
);

-- Execute the command, putting the results in the table.
INSERT INTO #OpenTranStatus
EXEC ('DBCC OPENTRAN (SmallData_BigLog) with tableresults')
SELECT * FROM #OpenTranStatus
DROP TABLE #OpenTranStatus
   

In the above, you could create a user table instead of a temp table of course…it depends on your needs.

One more particularly useful item you may see when running DBCC OPENTRAN by itself:

Transaction information for database ‘Music’.
Replicated Transaction Information:
Oldest distributed LSN : (37:143:3)
Oldest non-distributed LSN : (37:144:1)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

 

If your database is participating in Replication as a Publisher, this may show up when running OPENTRAN, but it doesn't necessarily mean that a transaction is actually open.  I set this up and stopped the Replication Log Reader and Distribution Agent jobs.  I then added some data to a published table (article) and ran DBCC OPENTRAN to get the above result.  Note that there are two lines with LSN information in them (no SPIDs).

I then ran the Log Reader Agent job and got back:

Transaction information for database ‘Music’.
Replicated Transaction Information:
Oldest distributed LSN : (37:157:3)
Oldest non-distributed LSN : (0:0:0)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I have verified that new records I inserted have been read by the log reader, AND distributed to the subscriber(s).  This means that while you are seeing

Oldest distributed LSN : (37:157:3)

there is no error…it's just info.

If you have non-distributed LSNs, there is something to troubleshoot in the replication process, which is way outside the scope of this post.  A non-distributed replicated transaction/LSN CAN cause some huge log file growth and needs to be investigated.  If this happens frequently, use the TABLERESULTS option to log to a regular table and alert on it.
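A sketch of that kind of logging, using a permanent table instead of the temp table from earlier (the table name is mine; schedule the INSERT from an Agent job):

--permanent log table (illustrative name)
CREATE TABLE dbo.OpenTranLog (
    CapturedAt datetime NOT NULL DEFAULT GETDATE(),
    ActiveTransaction varchar(25),
    Details sql_variant
);
GO

--job step: append the current oldest open transaction
INSERT INTO dbo.OpenTranLog (ActiveTransaction, Details)
EXEC ('DBCC OPENTRAN (SmallData_BigLog) WITH TABLERESULTS');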

Hopefully this gives you some insight into the various ways to use DBCC OPENTRAN, as well as use cases for the various options.  90% of the time I run this, it is due to application transactions timing out or log file growth issues.

I love comments…please feel free to leave questions for me on this topic.

Thanks for reading!

Kevin3NF

My Pluralsight course for new SQL Server DBAs



Filed Under: Accidental DBA, Beginner, EntryLevel, Performance Tuning, SQL

TIL: Starting a recurring SQL trace

December 8, 2016 by Kevin3NF

Today I learned…

How to set up a recurring Server-side SQL trace that runs every hour for 10 minutes.

Issues:

  • 6 people in the room are staring at me, waiting for the last-second request to be done at the end of an 11-hour day (3 of them from the VBV – Very Big Vendor)
  • Trace file names must be different, or you get errors
  • Trace files cannot end with a number
  • I can’t tell time when I am hungry and tired

I know I need to script out the trace to T-SQL (which I had already done) and put this in a 'start trace' job on an hourly schedule. I have set these up before to run ad hoc, but not recurring.
I also need code to stop the trace 10 minutes later that my boss understands…so we are going for fast and functional here…not pretty.

In the SQL script Profiler generates, the @tracefile parameter supplied to sp_trace_create is specified in the call:

exec @rc = sp_trace_create @TraceID output, 2, 'D:\MyTraceFiles\Trace', @maxfilesize, NULL

If you need different names for each execution, you have to get creative. I created a variable to feed in that included a GETDATE():

declare @tracefile nvarchar(245)
set @tracefile = 'D:\MyTraceFiles\Trace_'+Cast (getdate() as varchar(20))+'_Trace'

This worked and gave this result in testing:

D:\MyTraceFiles\Trace_Dec 8 2016 7:08AM_Trace

But…running the full trace script generated files in the directory with this name:

Trace_Dec 8 2016 7

It took me some time playing with variable sizes and such before I asked one of the VBV guys to look at it. We figured out the ‘:’ in the time stamp was an issue, so I included a REPLACE:

set @tracefile = 'D:\MyTraceFiles\Trace_'+replace(Cast (getdate() as varchar(20)),':','')+'_Trace'

Which gave this result:

D:\MyTraceFiles\Trace_Dec 8 2016 711AM_Trace

More importantly, I had a functioning trace script I could schedule:

-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @tracefile nvarchar(245)

set @maxfilesize = 4096
set @tracefile = 'D:\MyTraceFiles\Trace_'+replace(Cast (getdate() as varchar(20)),':','')+'_Trace'

--select @tracefile

exec @rc = sp_trace_create @TraceID output, 2, @tracefile, @maxfilesize, NULL
if (@rc != 0) goto error

-- Client side File and Table cannot be scripted

-- Set the events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 10, 1, @on
exec sp_trace_setevent @TraceID, 10, 9, @on
exec sp_trace_setevent @TraceID, 10, 2, @on
-- (many more sp_trace_setevent calls omitted here)
exec sp_trace_setevent @TraceID, 13, 64, @on
exec sp_trace_setevent @TraceID, 13, 66, @on

-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint

exec sp_trace_setfilter @TraceID, 35, 0, 6, N'Database1'
exec sp_trace_setfilter @TraceID, 35, 1, 6, N'Database2'
--above, the '1' in parameter 3 changes this to an OR instead of an AND,
--so: where DatabaseName is Database1 OR Database2. Also new info for me today.

-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1

-- display trace id for future references
select TraceID=@TraceID
goto finish

error:
select ErrorCode=@rc

finish:
go

Scheduling was simple…drop the above into a 'Start trace' job and enable it to run every hour, on the hour.

Create a second job to run every hour, at 10 minutes past the hour, with the following:

exec sp_trace_setstatus 2, 0 --stop the trace
exec sp_trace_setstatus 2, 2 --close it and delete its definition

On this system, my trace will always be 2. 1 is the default trace and nobody else here runs them.
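If you can't assume the trace id, here is a hedged sketch that finds the trace by its file path instead (sys.traces exposes the path of every running trace; the LIKE pattern matches my file naming above):

--find our trace by file path rather than assuming id = 2
declare @TraceID int

select @TraceID = id
from sys.traces
where is_default = 0
and [path] like N'D:\MyTraceFiles\Trace%'

if @TraceID is not null
begin
    exec sp_trace_setstatus @TraceID, 0 --stop the trace
    exec sp_trace_setstatus @TraceID, 2 --close it and delete its definition
end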

Summary:

I already knew I don’t do well at the end of a long day or in a hurry, so I took my time while the others waited, in order to not put in a bad solution to run overnight. There are much more elegant ways to do this, but this worked for me and I learned it on the fly.

Thanks for reading!

Kevin3NF

Filed Under: Performance Tuning, TIL Tagged With: job, performance, SQL Trace, TIL

Min and Max Server Memory in English

August 15, 2016 by Kevin3NF

This one is for the new DBAs…

There is a lot of confusion about memory settings in SQL Server, specifically the Min and Max settings found in the Properties of an instance.

There are dozens, if not hundreds, of blog posts that will go into beautiful detail about all of the intricacies of SQL Server memory…after all, SQL lives there!  Thus, it is very important to have a basic understanding early in your SQL DBA career…especially if you are an accidental DBA.

On my laptop, I have set the Min and Max to 2GB and 14GB.  It's a 16GB laptop, so I left 2GB for the O/S…pretty basic stuff.
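If you prefer T-SQL to the GUI, the same settings look like this (values are in MB, matching the 2GB/14GB above):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 2048;
EXEC sp_configure 'max server memory (MB)', 14336;
RECONFIGURE;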

Max Server Memory is fairly obvious…when SQL Server reaches this point, it stops allocating and starts paging to disk as necessary.

This is pretty much the same as setting a throttle control on a car.  Or a restrictor plate in NASCAR.  Without it you can run the car engine all the way to the max if you want, but eventually the other systems are going to fail (Cooling, transmission, brakes, etc.).  Same thing if you don’t leave memory for the O/S.

Min Server Memory seems to get the most bad information spread around.  SQL Server does NOT automatically grab memory up to the Min setting when it starts.  However, once it gets there, it doesn't give it back.

Back to the car analogy…if you start up and head off down the road at 20 mph, you are above the default (0), but not at the max (100+, depending on the car and the tires…).  If you set the cruise control, you can accelerate up and down above 20, but you won’t go below that unless you hit the brakes.

So that's it…by default, SQL installs at idle and full speed ahead.  It's your job to turn on the cruise control and not redline the engine until it blows.

There are holes in the analogy if you dig deep enough, but this should give you a real-world base to work from as you grow in your skills.

Kevin3NF

Filed Under: Beginner, Configuration, Install, Performance Tuning

