
DallasDBAs.com

SQL Server Database Consulting


Beginner

DBCC Opentran, simplified!

April 3, 2017 by Kevin3NF 2 Comments

In my Top 10 SQL Server Functions post a while back, I listed DBCC OPENTRAN as one of the top 3, and for good reason.

An open transaction may simply be one that has not finished yet, or one where someone issued a BEGIN TRAN without a corresponding COMMIT or ROLLBACK.  Or, as we will see at the end, it may mean replication is having issues.

You can use this against any database with minimal syntax and get back solid information very quickly.

 
--connect to sample db
use MyDatabase
go

--as generic as this command gets and still runs:
DBCC OPENTRAN
 

Result if nothing is open:

No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

If I start and execute a DML (insert, update or delete) transaction with BEGIN TRAN and leave out the corresponding COMMIT, I get:

Transaction information for database ‘SmallData_BigLog’.
Oldest active transaction:
SPID (server process ID): 64 <———–
UID (user ID) : -1
Name : user_transaction
LSN : (637:4620:1)
Start time : Apr 1 2017 4:59:02:307PM
SID : 0x0105000000000005150000004893a0845595b6fef515cd5de9030000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
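To reproduce this yourself, the setup is just a transaction deliberately left hanging — a minimal sketch (table and column names are made up):

```sql
--start a transaction and deliberately leave it open
BEGIN TRAN;

UPDATE dbo.MyTable        --hypothetical table
   SET SomeColumn = 'x'
 WHERE Id = 1;

--no COMMIT or ROLLBACK: the transaction stays open until
--this session commits, rolls back, or disconnects
```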

Now, if I open a second transaction (in a new query window) and execute any DML statement without the COMMIT, and then run DBCC OPENTRAN again, I get:

Transaction information for database ‘SmallData_BigLog’.
Oldest active transaction:
SPID (server process ID): 64 <———–
UID (user ID) : -1
Name : user_transaction
LSN : (637:4620:1)
Start time : Apr 1 2017 4:59:02:307PM
SID : 0x0105000000000005150000004893a0845595b6fef515cd5de9030000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Yes…the same output, as this is just showing the ONE oldest transaction.

I can run a query to show that there are two SPIDs with open transactions:

--show all sessions with open transactions
SELECT spid, blocked, [dbid], last_batch, open_tran
FROM master.sys.sysprocesses
WHERE open_tran <> 0
 

[Screenshot: sysprocesses results showing two SPIDs with open transactions]
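On SQL Server 2005 and later, the DMVs will give you the same information as sysprocesses; a sketch of the equivalent query:

```sql
--sessions with an open transaction, via the DMVs
SELECT s.session_id,
       s.host_name,
       s.program_name,
       t.transaction_id
FROM sys.dm_tran_session_transactions AS t
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = t.session_id;
```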

If I COMMIT spid 64 and re-run DBCC OPENTRAN, the SPID changes to the second transaction I started:

Transaction information for database ‘SmallData_BigLog’.
Oldest active transaction:
SPID (server process ID): 52  <———–
UID (user ID) : -1
Name : user_transaction
LSN : (637:9603:1)
Start time : Apr 1 2017 5:11:20:830PM
SID : 0x0105000000000005150000004893a0845595b6fef515cd5de9030000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

If I COMMIT spid 52 and re-run DBCC OPENTRAN along with checking sysprocesses for open_tran <> 0 I get:

No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
spid   blocked   dbid   last_batch   open_tran
----   -------   ----   ----------   ---------

(0 row(s) affected)

 

Now, all of that was just running DBCC OPENTRAN by itself.  There are additional options:

--specify dbname, dbid or 0 for the current database
DBCC OPENTRAN (SmallData_BigLog)

You will get results in the same format as the previous examples.

You can suppress all messages, regardless of whether a transaction is open or not (though I have no idea why this would help you…)

DBCC OPENTRAN (0) with no_infomsgs

Result:

Command(s) completed successfully.

 

If you need to periodically capture the oldest transaction for later review, use WITH TABLERESULTS:

-- TableResults only shows the oldest open tran
-- useful running in a loop to load the oldest
-- tran over time.

--create a temp table
CREATE TABLE #OpenTranStatus (
ActiveTransaction varchar(25),
Details sql_variant
);

-- Execute the command, putting the results in the table.
INSERT INTO #OpenTranStatus
EXEC ('DBCC OPENTRAN (SmallData_BigLog) with tableresults')
SELECT * FROM #OpenTranStatus
DROP TABLE #OpenTranStatus
   

In the above, you could create a user table instead of a temp table of course…it depends on your needs.
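If you go the user table route, the capture job might look something like this (table name and schedule are just suggestions):

```sql
--one-time setup: a permanent history table
CREATE TABLE dbo.OpenTranHistory (
    CaptureTime       datetime    NOT NULL DEFAULT GETDATE(),
    ActiveTransaction varchar(25) NULL,
    Details           sql_variant NULL
);
GO

--run this on a schedule (an Agent job, for example)
INSERT INTO dbo.OpenTranHistory (ActiveTransaction, Details)
EXEC ('DBCC OPENTRAN (SmallData_BigLog) WITH TABLERESULTS');
```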

One more particularly useful item you may see when running DBCC OPENTRAN by itself:

Transaction information for database ‘Music’.
Replicated Transaction Information:
Oldest distributed LSN : (37:143:3)
Oldest non-distributed LSN : (37:144:1)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

 

If your database is participating in Replication as a Publisher, this may show up when running OPENTRAN, but it doesn't necessarily mean that a transaction is actually open.  I set this up and stopped the Replication Log Reader and Distribution Agent jobs.  I then added some data to a published table (article) and ran DBCC OPENTRAN to get the above result.  Note that there are two lines with LSN information in them (no SPIDs).

I then ran the Log Reader Agent job and got back:

Transaction information for database ‘Music’.
Replicated Transaction Information:
Oldest distributed LSN : (37:157:3)
Oldest non-distributed LSN : (0:0:0)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I verified that the new records I inserted had been read by the Log Reader AND distributed to the subscriber(s).  This means that even though you are still seeing

Oldest distributed LSN : (37:157:3)

there is no error…it's just information.

If you have non-distributed LSNs, there is something to troubleshoot in the replication process, which is way outside the scope of this post.  A non-distributed replicated transaction/LSN CAN cause some huge log file growth and needs to be investigated.  If this happens frequently, use the TABLERESULTS option to log to a regular table and alert on it.
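A quick way to check whether replication is what is holding your log hostage — sys.databases will tell you what the log is waiting on:

```sql
--databases whose log cannot clear because of replication
SELECT [name], log_reuse_wait_desc
FROM sys.databases
WHERE log_reuse_wait_desc = N'REPLICATION';
```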

Hopefully this gives you some insight into various ways to use DBCC OPENTRAN as well as use cases for the various options.  90% of the time I run this, it is due to application transactions timing out, or log file growth issues.

I love comments….please feel free to leave questions for me in them on this topic.

Thanks for reading!

Kevin3NF

My Pluralsight course for new SQL Server DBAs

Follow @Dallas_DBAs


Filed Under: Accidental DBA, Beginner, EntryLevel, Performance Tuning, SQL

The Apprentice: Locks and Blocks and Deadlocks….oh my!

March 30, 2017 by Kevin3NF Leave a Comment

I re-posted SQL 101 Blocking vs. Deadlocking – in English for the Non-DBA the other day for two reasons:

  1.  It's good info for new DBAs struggling to understand the interaction and differences in these terms
  2.  It was next on the list to walk the Apprentice through and he reads my Tweets 🙂

We had about an hour to spend working through this so we briefly covered the article, using Starbucks and Chick-Fil-A interchangeably as something you almost always have to wait in line for.

Things we managed to cover, test, or define in that one hour:

  • Lock: When a customer “Sally” (query1) walks up to the cashier “Ken” (Resource1), she has locked him into taking her order.
    • Ken is a CPU here, or a Row/Page/Table…don’t overthink my analogies 😀
  • Block: The dude “Broseph” (query2) behind Sally (query1) has to wait…he’s blocked.
  • If the manager (Query Optimizer) sees that Sally is ordering 20 drinks for the office, he may open a second register and have Joe (Resource2) start ringing up some of Sally's order.  Broseph is still in line, waiting.  Sally has gone parallel.
  • If the manager decides Ken needs to work all of Sally's order himself (MAXDOP 1), he may open a second register (Joe/Resource2) and move Broseph to that line.
  • If Max (query3) walks in, sees what is going on and decides he doesn’t really want coffee…he just rolls on back out the door and leaves a 1-star review on Yelp (failed transaction, retry?)

Other stuff we covered, in no particular order:

  • Deadlocking and the Mom process (see above link)
  • INSERT statements to create sample blocking
  • IMPLICIT and EXPLICIT transactions, so the tests actually work
  • 4 parts of an object name [instance].[database].[schema].[object]
  • Why a sysadmin cannot directly query Instance2 from Instance1, regardless of his level of sysadmin-ness unless…
  • …Linked Server
  • sp_lock
  • master..sysprocesses (old skool)
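The blocking test we ran was along these lines — a sketch with a made-up table, run from separate query windows:

```sql
--Session 1: take a lock and hold it
BEGIN TRAN;
UPDATE dbo.TestTable SET Col1 = 'x' WHERE Id = 1;
--deliberately no COMMIT yet

--Session 2 (new window): this sits blocked until Session 1 finishes
UPDATE dbo.TestTable SET Col1 = 'y' WHERE Id = 1;

--Session 3: who is blocking whom?
SELECT spid, blocked
FROM master..sysprocesses
WHERE blocked <> 0;
```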

Good times were had, jokes were made, stuff was learned.  Oh…and every time we meet I ask him random stuff from previous meetings to gauge retention.   So far so good.

Thanks for reading!

Kevin3NF

Filed Under: Accidental DBA, Apprentice, Beginner

Why is my SQL Log File Huge?

March 8, 2017 by Kevin3NF 11 Comments

Pluralsight courses for new SQL Server DBAs
Do you need our help?     Or our DBA retainer service for emergencies?

 

HUGE Log files and how to troubleshoot:

The single most common question I have encountered in 18+ years of working with SQL Server:

Why is the .LDF file filling up my 500GB drive?  I only have 100MB of data!!?!?!?  Why am I getting error 9002?

For new or non-DBAs, this is a very frustrating situation without a logical reason (or so it seems).  It is also very common for it to be accompanied by applications that won’t work, alerts firing for drive space issues, etc.

If you like video, I recorded my response to this question and discuss the two most common remedies.  If you don’t like video, scroll down for text:

 

There are a number of reasons a log file can fill to extreme sizes.  The most common one by far is that the database is in full recovery model, and Transaction Log backups are not happening fast enough, or not happening at all.  Next to that, it could be that you had a massive transaction happen such as a huge data import, rebuild all indexes, etc.  These are logged and stay there until the .ldf file is backed up (or checkpointed if you are in Simple Recovery).

Step 1: Verify recovery model

Right-click the database, go to properties, click the Options tab.   You will see Full, Bulk-Logged or Simple.   If you are in Full, you have the option of backing up the log…which is the best possible situation.

[Screenshot: Database Properties – Options page]
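If you prefer T-SQL to the GUI, the same check for every database on the instance:

```sql
SELECT [name], recovery_model_desc
FROM sys.databases;
```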

Step 2: Verify if the log is full or “empty”

Verify whether the log file is actually full or not.  If you are backing up and the file still grew to a ridiculous size…it may have just been a one-time thing and you can deal with that easily.  Right-click the database, go to Reports, Standard Reports, Disk Usage.  This will give you 2 pie charts.  Left is the data file, right is the log.  If the log shows almost or completely full AND the huge size, you need to back up.  If the log file is huge and mostly empty, you simply need to shrink it to an acceptable size.

[Screenshot: Disk Usage report pie charts]
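The same full-or-empty check without the GUI — DBCC SQLPERF shows size and percent used for every log on the instance:

```sql
--returns database name, log size (MB), log space used (%)
DBCC SQLPERF(LOGSPACE);
```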

Step 3: Shrink the file (if empty)

Right-click the database>>Tasks>>Shrink>>Files

Choose 'Log' from the File Type drop-down.  Hopefully there is only one log file.  If not, pick the big one.  Under Shrink Action, choose an appropriate size and the 'Reorganize pages before releasing space' option, even though log file shrinks don't actually do that.  Pick a size in MB and click OK.  0 is not a good choice here.

[Screenshot: Shrink File dialog]
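The T-SQL equivalent of the shrink dialog (the logical file name and target size here are examples — check yours in sys.database_files):

```sql
USE MyDatabase;
GO
--shrink the log file; the target size is in MB (here, roughly 1 GB)
DBCC SHRINKFILE (N'MyDatabase_log', 1024);
```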

Step 4: Backup

I'm not going to go into a ton of detail here… Right-click the database>>Tasks>>Backup, change the backup type to Transaction Log, and work through the rest of the steps.
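The scripted version, if you want it (database name and path are examples; COMPRESSION depends on your edition):

```sql
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_log.trn'
WITH COMPRESSION;
```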

If the Log Backup works, but the space is not freed (refresh the usage report), you have a different issue that these steps will not help with. Check out the “Wrapping Up” section at the bottom of this post.

If you don’t have enough room on any local, attached or network drive to create a log backup, even with compression, keep reading:

Step 5: Flip the Recovery Model (if log backup is not possible)

Warning:  Doing this WILL cause you to lose point-in-time recoverability, but if you cannot backup the log, you are pretty much already there anyway.

Right-click the database>>Properties>>Options

Change the recovery model to Simple and click OK

[Screenshot: Recovery model option in Database Properties]
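Or in T-SQL (database name is an example):

```sql
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
--flip back to FULL in Step 7, then take a full or differential
--backup afterward to restart the log backup chain
```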

Wait a few seconds and then go refresh the Disk Usage report.  The log file will be the same size, but should be almost empty:

[Screenshot: Disk Usage report after the recovery model change]

Step 6: Shrink the Log file

See step 3 above…

Step 7: Flip the recovery back to Full

See step 1…

Step 8: Set up recurring log backups

If you don’t know how to do this, go to Management, Maintenance Plans, right-click Maintenance Plan>>Maintenance Plan Wizard and go from there.   This is well documented elsewhere.

Wrapping Up:

Hopefully, this resolved your issue but there are definitely other reasons for this issue to happen aside from a simple failure to back up.   Most notably, a very large transaction in a database that is participating in SQL Replication as a publisher.

If the above methods do not work, run these two statements and go post the results in the MSDN SQL Server forums, along with a description of the issue and what you have already tried (hopefully all of the above):

SELECT [name], recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE [name] = 'MyDatabase' --change this

DBCC OPENTRAN --results will be in the messages section

I love comments on my post, but if you need quick help go to the forums first, or maybe even a call to Microsoft Support if the “quick hits” don’t get you the resolution you need.  If this helped, please comment and share the link…

Thanks for reading!

Kevin3NF


Filed Under: Accidental DBA, backup, Beginner, EntryLevel

The Apprentice: Detective work

March 8, 2017 by Kevin3NF Leave a Comment


I decided to see how much knowledge and familiarity the Apprentice has retained in the area of Database Properties.

The Setup:

I wanted to simulate a customer engaging him to “Look at the database” because it doesn’t “seem right.”  I created a Sales database on his machine and mis-configured some items, taking it far away from best practices.  It came online, but had issues 🙂

The first thing he noticed and asked about was the lack of tables or other objects.  My response was that the “customer” was installing a 3rd party application which has two steps:  Create the Database, and then Create the Objects.   They thought the results of step one were odd and called us.

What he found:

  • Auto-Shrink enabled (very common for 3rd party apps)
  • Insane file names (Logical: Bill and Log_Ted, Physical: Taco.mdf – data and Burrito.Mdf – log file)
  • Auto-grow for data file of 1 MB, capped at 5 MB
  • Auto-grow for log of 100%, unlimited

No backups had been taken, but I don’t recall if he found that or if we even discussed it.  He is well aware of backups and recovery models.

This took us down a conversation of best practices, and how to rename files in a database, both Logical and Physical.  What’s really fun is when you want to look up the ALTER DATABASE command to rename the physical files and the internet connection is down…so no MSDN or Google!

We used the GUI to create a script for changing the logical names, then modified that for the physical files instead.  And he already knew that the actual files on disk had to be changed as well.
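For reference now that the internet is back up — the commands we reconstructed look roughly like this, using the silly names from the exercise:

```sql
--rename the logical file name
ALTER DATABASE Sales MODIFY FILE (NAME = N'Bill', NEWNAME = N'Sales_Data');

--point SQL Server at a new physical file name; takes effect
--after the database is taken offline and the file on disk
--is actually renamed to match
ALTER DATABASE Sales MODIFY FILE
    (NAME = N'Sales_Data', FILENAME = N'D:\Data\Sales.mdf');
```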

He did really well on this, with very little prompting.   Well done!

Thanks for reading!

Kevin3NF

 

Filed Under: Accidental DBA, Apprentice, Beginner, EntryLevel

The Apprentice: Non-SQL stuff that SQL Server depends on

February 27, 2017 by Kevin3NF Leave a Comment

The apprentice and I gathered at my house Sunday evening for a bit of training.   I gave him some homework ahead of time to go look up RAID and the most common levels.

Yep…we spent an hour standing/sitting in my kitchen discussing RAID 0/1/10/5/5+1, etc.

And Spinning disks vs. SSD

And Memory (including addresses)

And CPUs

And SAN vs. DAS vs. internal

And how SQL Server uses all of these items.

And how the costs associated with these choices vary from client to client.

And how it is perfectly acceptable to blame the storage team for anything up to and including your lunch being stolen from the office fridge. 😉

We never even started the SQL Services…went old school with pen and paper to map things out.

Short one today, thanks for reading!

Kevin3NF

Filed Under: Apprentice, Beginner, Career, EntryLevel

The Apprentice: Top 10 list

February 19, 2017 by Kevin3NF Leave a Comment

I got to work with the Apprentice today for just over an hour, and it seemed appropriate for us to go over the Top 10 SQL functions post I put up a few days ago, since he will be using them throughout his career.

We ran several against his registered servers list and went through why you would use them and when.  @@Version to verify servers are up to appropriate SP/CU level, @@Servername to verify all of them are online and responding, etc.

We spent quite a bit of time talking about implicit vs. explicit transactions and looking for issues with DBCC Opentran, including writing our own INSERT and leaving it hanging/uncommitted.  This took us down a fairly interesting rabbit hole.

I think the most fun he had was when he went off and used Cast, GetDate() and DateDiff to mess around with the sample I gave in the original post to figure out how many days old he is, plus how far back he could go with GetDate() – x. (Jan 1, 1753 as it turns out).   When he starts doing things “off-topic” I just sit back and watch 🙂
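His experiments looked something like this (the birth date is a placeholder):

```sql
--how many days old am I?
SELECT DATEDIFF(DAY, '19900101', GETDATE()) AS DaysOld;

--how far back can datetime go? Jan 1, 1753, as it turns out
SELECT CAST('17530101' AS datetime) AS EarliestDate;
```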

The second half of the list wasn’t as relevant to him as the first, but then again he’s been doing DBA stuff for a total of about a week now.

For each of these we were able to go through at least the basics, which he understood.  And he finally bookmarked my blog 😉

This is fun for both of us.

Thanks for reading!

Kevin3NF

Filed Under: Apprentice, Beginner, EntryLevel, SQL
