
DallasDBAs.com

SQL Server Database Consulting


Accidental DBA

SQL Saturday Houston, 2017

April 27, 2017 by Kevin3NF

I will be presenting my session “Backups for non-DBAs…the Why, not the How” at SQL Saturday Houston on June 17.

Abstract:

Join me and get some new perspective on what your DBA is doing, and why!

Targeted at non-DBAs such as BI/DW folks, application developers, managers, and system admins, this session goes over the reasons we back up our databases, our systems, etc. We will discuss considerations such as single points of failure, high availability/disaster recovery, business continuity, and more in an interactive conversation, with supporting slides to download and one database backup demo at the end if time permits.

Everyone involved in a technical role needs to at least know that the things they have created will be recoverable in the event of a disaster, or even just an “oops” moment. The CIO/CTO should know how long critical systems will be down when bad things happen.

Backups are everyone’s responsibility…whether asking the right questions or implementing the process.

If you come on Friday, there is a pre-conference full day of training on Performance Tuning by Brent Ozar…$199 as of this post for a full day of awesomesauce. Or, my friend Tim Mitchell (b|t) will teach you how to build better SSIS packages for $129.

Hope to see you there!

Kevin3NF

Filed Under: Accidental DBA, backup, Beginner, Speaking, SQLSaturday

How I became a:________________

April 3, 2017 by Kevin3NF

Thanks to Andy Warren (b|t) for this idea….

I am starting a collection of “How I Became A _____________” stories and will be aggregating them right here. Your contributions are welcomed and encouraged. You can email me some text, send me a link to your blog where you write it up, or even a YouTube video if you have one. Include which section below it should go in. Longer submissions work better as links to your blog; shorter ones work as plain text if you prefer. Add your link as a comment, and I’ll delete the comment after incorporating it into the list.

For now, I’d like to stick with items in and around the database world…DBA, Developer, Architect, BI pro, Database Manager, SQL Server, Oracle, etc.

The goal here is to give people considering database work as a profession a place to go for information straight from the mouths of those of us already in the industry. Let’s keep the contributions family-friendly and encouraging!

 

Database Administration:

  • Jeff Miller (t) reboots his career and joins the Dallas DBAs staff
  • Andy Levy (b|t) finds out he has DBA tendencies…
  • Arun Sirpal (b|t) never stops learning!
  • Michael D’Spain, the landlocked surfing DBA musician shares his story
  • David Alcock (b|t) shares his path to DBA, along with tips for those considering the idea from across the pond.
  • Jr. DBA Julie (t) shares her story
    The story of how I became a SQL database administrator is … less than glamorous. Without going into specifics, I found myself at 35 years old in the position of needing to change careers – and needing to do so at a breakneck speed, because quite unexpectedly I became the sole provider for my family. I had a bachelor’s degree in art, few marketable skills, no professional job history for the past 10 years, and the weight of the consequences of failure breathing hard down my neck.
    So I reached out. I talked to people. I spammed my Facebook and Twitter feeds, asking people what it was like in their industry – what the job prospects were, what the working conditions were like, what the requirements were, what the long term growth potential might be. I made sure everyone I knew was aware I was looking for a job.
    I did not get a single response to the resumes I submitted to job postings. I DID get a response from my friend Jen McCown, who asked me one day if I’d like to do grunt DBA work for her home consulting business. YES, I said, I would love to do that.
    “Is this a pity job?” I asked her.
    “No no no,” she lied.
    Jen and her husband Sean spent a good part of the next couple of months training me to be a DBA. It was very difficult, as the learning curve was so steep as to be nearly vertical. I lived, breathed, ate and slept databases. I was keenly aware of what failure would mean, for me and my children, if I didn’t nail this. I remember one instance where I was sitting with my youngest child while he fell asleep, and he was annoyed by the light of my laptop and the click of the keys as I worked on some issue or other.
    “I hate databases,” he grumbled and pulled the covers over his head.
    At the moment, I kinda hated databases, too.
    Several months later, Sean introduced me to the folks at a local company, who had a position open for a junior DBA. They needed someone who could do backups and restores, document procedures, run queries, and look into basic problems like blocking and security access. I got the job.
    I’m almost three years into a DBA career now, and I am occasionally reminded with great humility and gratitude that I would not be here without Sean and Jen. I was lucky. But I am coming to realize that all of us have a Sean-and-Jen – people who opened up doors with opportunities for work. So how would I apply my experience becoming a DBA to others? To a certain extent, you have to make your own luck. Networking is the single greatest avenue to luck, but a lot of work comes before and after the opportunity presents itself. Have your resume together, read articles and books, take training courses and certifications, go to user group meetings and such. Most importantly, be prepared to work your butt off. Database work, especially in the beginning of a career, may mean a lot of late nights, even overnights. It’s not for everybody, but it certainly is a great path.
  • Chris Yates (b|t) – The SQL Professor
  • Anders Pederson (b|t) in 2 parts:
    • About Anders
    • Anders gives Access the boot
  • @SQLAndy: How to Become a SQL Server DBA (whitepaper, site registration required)
  • DBA Kevin Hill gives an email interview

Database Consultant:

  • David Alcock (b|t) turns to Consulting!
  • Matt Gordon (b|t)- Support Analyst to Consultant

Database Developer:

Business Intelligence/Analytics:

Database Management:

ETL Developer:

Database Architect:

Teacher/Presenter/Speaker:

  • Erin Stellato’s (b|t) 20 year presenting journey
  • John Deardurff (b|t), Trainer – Database on a Dare

 

Filed Under: Accidental DBA, Beginner, Career, Oracle, SQL

DBCC Opentran, simplified!

April 3, 2017 by Kevin3NF

In my Top 10 SQL Server Functions post a while back, I listed DBCC OPENTRAN as one of the top 3, and for good reason.

An open transaction may simply be something that has not finished yet, or someone issued a BEGIN TRAN without a corresponding COMMIT or ROLLBACK.  Or, as we will see at the end, replication may be having issues.

You can use this against any database with minimal syntax and get back solid information very quickly.

 
--connect to sample db
use MyDatabase
go

--as generic as this command gets and still runs:
DBCC OPENTRAN
 

Result if nothing is open:

No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

If I start and execute a DML (insert, update or delete) transaction with BEGIN TRAN and leave out the corresponding COMMIT, I get:

Transaction information for database ‘SmallData_BigLog’.
Oldest active transaction:
SPID (server process ID): 64 <———–
UID (user ID) : -1
Name : user_transaction
LSN : (637:4620:1)
Start time : Apr 1 2017 4:59:02:307PM
SID : 0x0105000000000005150000004893a0845595b6fef515cd5de9030000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
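For reference, the statement pattern that produced the output above is just an explicit transaction with no COMMIT — a sketch, with placeholder table and column names:

```sql
USE SmallData_BigLog;
GO

BEGIN TRAN;
    -- any INSERT, UPDATE, or DELETE works; dbo.SomeTable is a placeholder
    UPDATE dbo.SomeTable
    SET SomeColumn = SomeColumn
    WHERE SomeID = 1;
-- no COMMIT or ROLLBACK, so the transaction stays open
```

Run DBCC OPENTRAN from any other query window and you will see this session’s SPID as the oldest active transaction.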

Now, if I open a second transaction (in a new query window) and execute any DML statement without the COMMIT, and then run DBCC OPENTRAN again, I get:

Transaction information for database ‘SmallData_BigLog’.
Oldest active transaction:
SPID (server process ID): 64 <———–
UID (user ID) : -1
Name : user_transaction
LSN : (637:4620:1)
Start time : Apr 1 2017 4:59:02:307PM
SID : 0x0105000000000005150000004893a0845595b6fef515cd5de9030000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Yes…the same output, as this is just showing the ONE oldest transaction.

I can run a query to show that there are two SPIDs with open transactions:

--
SELECT spid, blocked,[dbid],last_batch,open_tran
FROM master.sys.sysprocesses
WHERE open_tran <> 0
 

[Screenshot: sysprocesses query results showing two SPIDs with open transactions]
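If you prefer the newer DMVs to the deprecated sysprocesses view, something like this (SQL 2005 and up) returns one row per session with an open transaction:

```sql
-- sessions with open transactions, via the transaction DMVs
SELECT st.session_id,
       at.name,
       at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS at
    ON at.transaction_id = st.transaction_id;
```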

If I COMMIT spid 64 and re-run DBCC OPENTRAN, the SPID changes to the second transaction I started:

Transaction information for database ‘SmallData_BigLog’.
Oldest active transaction:
SPID (server process ID): 52  <———–
UID (user ID) : -1
Name : user_transaction
LSN : (637:9603:1)
Start time : Apr 1 2017 5:11:20:830PM
SID : 0x0105000000000005150000004893a0845595b6fef515cd5de9030000
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

If I COMMIT spid 52 and re-run DBCC OPENTRAN along with checking sysprocesses for open_tran <> 0 I get:

No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
spid blocked dbid last_batch open_tran
—— ——- —— ———————– ———
(0 row(s) affected)

 

Now, all of that was just running DBCC OPENTRAN by itself.  There are additional options:

--specify dbname, dbid or 0 for the current database
DBCC OPENTRAN (SmallData_BigLog)

You will get results in the same format as the previous examples.

You can suppress all messages, regardless of whether a transaction is open or not (but I have no idea why this would help you…)

DBCC OPENTRAN (0) with no_infomsgs

Result:

Command(s) completed successfully.

 

If you needed to periodically capture the oldest transaction, in order to review later, use WITH TABLERESULTS:

-- TABLERESULTS only shows the oldest open tran;
-- useful when run in a loop to log the oldest
-- tran over time.

-- create a temp table to hold the output
CREATE TABLE #OpenTranStatus (
    ActiveTransaction varchar(25),
    Details sql_variant
);

-- execute the command, putting the results in the table
INSERT INTO #OpenTranStatus
EXEC ('DBCC OPENTRAN (SmallData_BigLog) WITH TABLERESULTS');

SELECT * FROM #OpenTranStatus;

DROP TABLE #OpenTranStatus;
   

In the above, you could create a user table instead of a temp table of course…it depends on your needs.

One more particularly useful item you may see when running DBCC OPENTRAN by itself:

Transaction information for database ‘Music’.
Replicated Transaction Information:
Oldest distributed LSN : (37:143:3)
Oldest non-distributed LSN : (37:144:1)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

 

If your database is participating in replication as a publisher, this may show up when running OPENTRAN, but it doesn’t necessarily mean that a transaction is actually open.  I set this up and stopped the Replication Log Reader and Distribution Agent jobs.  I then added some data to a published table (article) and ran DBCC OPENTRAN to get the above result.  Note that there are two lines with LSN information in them (and no SPIDs).

I then ran the Log Reader Agent job and got back:

Transaction information for database ‘Music’.
Replicated Transaction Information:
Oldest distributed LSN : (37:157:3)
Oldest non-distributed LSN : (0:0:0)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I have verified that the new records I inserted have been read by the Log Reader AND distributed to the subscriber(s).  This means that even while you are seeing

Oldest distributed LSN : (37:157:3)

there is no error…it’s just info.

If you have non-distributed LSNs, there is something to troubleshoot in the replication process, which is way outside the scope of this post.  A non-distributed replicated transaction/LSN CAN cause some huge log file growth and needs to be investigated.  If this happens frequently, use the TABLERESULTS option to log to a regular table and alert on it.

Hopefully this gives you some insight into various ways to use DBCC OPENTRAN as well as use cases for the various options.  90% of the time I run this, it is due to application transactions timing out, or log file growth issues.

I love comments….please feel free to leave questions for me in them on this topic.

Thanks for reading!

Kevin3NF

My Pluralsight course for new SQL Server DBAs

Follow @Dallas_DBAs


Filed Under: Accidental DBA, Beginner, EntryLevel, Performance Tuning, SQL

The Apprentice: Locks and Blocks and Deadlocks….oh my!

March 30, 2017 by Kevin3NF

I re-posted SQL 101 Blocking vs. Deadlocking – in English for the Non-DBA the other day for two reasons:

  1.  It’s good info for new DBAs struggling to understand the interaction and differences between these terms
  2.  It was next on the list to walk the Apprentice through, and he reads my Tweets 🙂

We had about an hour to spend working through this so we briefly covered the article, using Starbucks and Chick-Fil-A interchangeably as something you almost always have to wait in line for.

Things we managed to cover, test, or define in that one hour:

  • Lock: When a customer “Sally” (query1) walks up to the cashier “Ken” (Resource1), she has locked him into taking her order.
    • Ken is a CPU here, or a Row/Page/Table…don’t overthink my analogies 😀
  • Block: The dude “Broseph” (query2) behind Sally (query1) has to wait…he’s blocked.
  • If the manager (Query Optimizer) sees that Sally is ordering 20 drinks for the office, he may open a second register and have Joe (Resource2) start ringing up some of Sally’s orders.  Broseph is still in line, waiting.  Sally has gone parallel.
  • If the manager decides Ken needs to work all of Sally’s order himself (MAXDOP 1), he may open a second register (Joe/Resource2) and move Broseph to that line.
  • If Max (query3) walks in, sees what is going on and decides he doesn’t really want coffee…he just rolls on back out the door and leaves a 1-star review on Yelp (failed transaction, retry?)

Other stuff we covered, in no particular order:

  • Deadlocking and the Mom process (see above link)
  • INSERT statements to create sample blocking
  • IMPLICIT and EXPLICIT transactions, so the tests actually work
  • 4 parts of an object name [instance].[database].[schema].[object]
  • Why a sysadmin cannot directly query Instance2 from Instance1, regardless of his level of sysadmin-ness unless…
  • …Linked Server
  • sp_lock
  • master..sysprocesses (old skool)
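The sample-blocking bullet above can be sketched in two query windows. I’ve swapped the INSERTs for UPDATEs against the same row so the block is guaranteed (the table name is a placeholder):

```sql
-- Session 1: update a row inside an explicit transaction
BEGIN TRAN;
UPDATE dbo.TestTable            -- placeholder table
SET Val = 'locked'
WHERE Id = 1;
-- no COMMIT yet: the row lock is held

-- Session 2 (new query window): this statement waits -- it is blocked
UPDATE dbo.TestTable
SET Val = 'waiting'
WHERE Id = 1;

-- Session 1: COMMIT (or ROLLBACK) releases the lock,
-- and Session 2 immediately completes
COMMIT;
```

While the block is active, sp_lock or a query against master..sysprocesses will show Session 2 waiting on Session 1’s SPID in the blocked column.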

Good times were had, jokes were made, stuff was learned.  Oh…and every time we meet I ask him random stuff from previous meetings to gauge retention.   So far so good.

Thanks for reading!

Kevin3NF

Filed Under: Accidental DBA, Apprentice, Beginner

Why is my SQL Log File Huge?

March 8, 2017 by Kevin3NF

Pluralsight courses for new SQL Server DBAs
Do you need our help?     Or our DBA retainer service for emergencies?

 

HUGE Log files and how to troubleshoot:

The single most common question I have encountered in 18+ years of working with SQL Server:

Why is the .LDF file filling up my 500GB drive?  I only have 100MB of data!!?!?!?  Why am I getting error 9002?

For new or non-DBAs, this is a very frustrating situation without a logical reason (or so it seems).  It is also very common for it to be accompanied by applications that won’t work, alerts firing for drive space issues, etc.

If you like video, I recorded my response to this question and discuss the two most common remedies.  If you don’t like video, scroll down for text:

 

There are a number of reasons a log file can grow to extreme sizes.  The most common one by far is that the database is in the Full recovery model and transaction log backups are not happening fast enough, or not happening at all.  Next to that, you may have had a massive transaction such as a huge data import or rebuilding all indexes.  These are logged and stay in the .ldf file until it is backed up (or until a checkpoint occurs, if you are in the Simple recovery model).

Step 1: Verify recovery model

Right-click the database, go to properties, click the Options tab.   You will see Full, Bulk-Logged or Simple.   If you are in Full, you have the option of backing up the log…which is the best possible situation.

[Screenshot: SQL Server database Options page showing the recovery model]
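If you’d rather check the recovery model with a query than the GUI (and see every database at once):

```sql
-- recovery model for every database on the instance
SELECT [name], recovery_model_desc
FROM sys.databases
ORDER BY [name];
```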

Step 2: Verify if the log is full or “empty”

Verify whether the log file is actually full or not.  If you are backing up and the file still grew to a ridiculous size, it may have just been a one-time thing, and you can deal with that easily.  Right-click the database, go to Reports, Standard Reports, Disk Usage.  This will give you two pie charts: the left is the data file, the right is the log.  If the log shows almost or completely full AND the huge size, you need to back up.  If the log file is huge and mostly empty, you simply need to shrink it to an acceptable size.

[Screenshot: Disk Usage report pie charts for the data and log files]
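The same check without the report — DBCC SQLPERF shows log size and percent used for every database on the instance:

```sql
-- one row per database: log size (MB) and log space used (%)
DBCC SQLPERF (LOGSPACE);
```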

Step 3: Shrink the file (if empty)

Right-click the database>>Tasks>>Shrink>>Files

Choose ‘Log’ from the File Type drop-down.  Hopefully there is only one log file; if not, pick the big one.  Under Shrink Action, select the ‘Reorganize pages before releasing unused space’ option (even though log file shrinks don’t actually reorganize pages), pick an appropriate size in MB, and click OK.  0 is not a good choice here.

[Screenshot: Shrink File dialog with File Type and Shrink Action options]
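The T-SQL equivalent of the Shrink dialog is DBCC SHRINKFILE; the logical file name and target size below are examples only — use your own:

```sql
USE MyDatabase;
GO

-- target size is in MB; as above, 0 is not a good choice
DBCC SHRINKFILE (N'MyDatabase_log', 1024);
```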

Step 4: Backup

I’m not going to go into a ton of detail here….Right-click the database>>Tasks>>Backup   Change the backup type to Transaction Log and work through the rest of the steps.
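If you prefer a script to the GUI for the log backup (database name and path here are examples):

```sql
-- COMPRESSION helps when drive space is tight;
-- STATS = 10 prints progress every 10 percent
BACKUP LOG MyDatabase
TO DISK = N'D:\SQLBackups\MyDatabase_log.trn'
WITH COMPRESSION, STATS = 10;
```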

If the Log Backup works, but the space is not freed (refresh the usage report), you have a different issue that these steps will not help with. Check out the “Wrapping Up” section at the bottom of this post.

If you don’t have enough room on any local, attached or network drive to create a log backup, even with compression, keep reading:

Step 5: Flip the Recovery Model (if log backup is not possible)

Warning:  Doing this WILL cause you to lose point-in-time recoverability, but if you cannot backup the log, you are pretty much already there anyway.

Right-click the database>>Properties>>Options

Change the recovery model to Simple and click OK

[Screenshot: database Options page with the recovery model set to Simple]
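In T-SQL, the flip (and the flip back in Step 7) look like this — the same warning about losing point-in-time recovery applies:

```sql
-- WARNING: this breaks the log backup chain
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- Step 7: flip back to Full, then take a full or differential
-- backup to restart the log chain
ALTER DATABASE MyDatabase SET RECOVERY FULL;
```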

Wait a few seconds and then go refresh the Disk Usage report.  The log file will be the same size, but should be almost empty:

[Screenshot: Disk Usage report showing a mostly empty log file]

Step 6: Shrink the Log file

See step 3 above…

Step 7: Flip the recovery back to Full

See step 1…

Step 8: Set up recurring log backups

If you don’t know how to do this, go to Management, Maintenance Plans, right-click Maintenance Plan>>Maintenance Plan Wizard and go from there.   This is well documented elsewhere.

Wrapping Up:

Hopefully, this resolved your issue, but there are definitely other causes besides a simple failure to back up — most notably, a very large transaction in a database that is participating in SQL Replication as a publisher.

If the above methods do not work, run these two statements and go post the results in the MSDN SQL Server forums, along with a description of the issue and what you have already tried (hopefully all of the above):

Select [name],recovery_model_desc, log_reuse_wait_desc 
from sys.databases
Where [name] = 'MyDatabase' --change this

DBCC OPENTRAN --results will be in the messages section

I love comments on my post, but if you need quick help go to the forums first, or maybe even a call to Microsoft Support if the “quick hits” don’t get you the resolution you need.  If this helped, please comment and share the link…

Thanks for reading!

Kevin3NF


Filed Under: Accidental DBA, backup, Beginner, EntryLevel

The Apprentice: Detective work

March 8, 2017 by Kevin3NF


I decided to see how much knowledge and familiarity the Apprentice has retained in the area of Database Properties.

The Setup:

I wanted to simulate a customer engaging him to “look at the database” because it doesn’t “seem right.”  I created a Sales database on his machine and mis-configured some items, taking it far away from best practices.  It came online, but had issues 🙂

The first thing he noticed and asked about was the lack of tables or other objects.  My response was that the “customer” was installing a 3rd party application which has two steps:  Create the Database, and then Create the Objects.   They thought the results of step one were odd and called us.

What he found:

  • Auto-Shrink enabled (very common for 3rd party apps)
  • Insane file names (Logical: Bill and Log_Ted, Physical: Taco.mdf – data and Burrito.Mdf – log file)
  • Auto-grow for data file of 1 MB, capped at 5 MB
  • Auto-grow for log of 100%, unlimited

No backups had been taken, but I don’t recall if he found that or if we even discussed it.  He is well aware of backups and recovery models.

This took us down a conversation of best practices, and how to rename files in a database, both Logical and Physical.  What’s really fun is when you want to look up the ALTER DATABASE command to rename the physical files and the internet connection is down…so no MSDN or Google!

We used the GUI to create a script for changing the logical names, then modified that for the physical files instead.  And he already knew that the actual files on disk had to be changed as well.
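The commands we ended up scripting look roughly like this, using the names from the list above (the new names and path are made up, and the physical rename also requires taking the database offline and renaming Taco.mdf and Burrito.Mdf on disk to match):

```sql
-- fix the logical names
ALTER DATABASE Sales MODIFY FILE (NAME = N'Bill', NEWNAME = N'Sales_Data');
ALTER DATABASE Sales MODIFY FILE (NAME = N'Log_Ted', NEWNAME = N'Sales_Log');

-- point the metadata at sane physical names
-- (takes effect the next time the database is started)
ALTER DATABASE Sales MODIFY FILE
    (NAME = N'Sales_Data', FILENAME = N'D:\SQLData\Sales.mdf');
ALTER DATABASE Sales MODIFY FILE
    (NAME = N'Sales_Log', FILENAME = N'D:\SQLData\Sales_log.ldf');
```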

He did really well on this, with very little prompting.   Well done!

Thanks for reading!

Kevin3NF

 

Filed Under: Accidental DBA, Apprentice, Beginner, EntryLevel
