The Ironic DBA—Rock Around the Clock

Welcome back to The Ironic DBA Files, a series where the newbie DBA on staff at Dallas DBAs chronicles the ups and downs of fast-tracking a new career as a DBA.

Last week I shared why you shouldn’t completely hate [square brackets], and this week I’m going to build on that theme a little more by showing you some minor tweaks to scripts we use here at Dallas DBAs on a daily basis.

I Love it When Something Unplanned Comes Together

If you’ve been following my weekly Ironic DBA posts, you know that I’m new to this gig and have been learning things as rapidly as I can. It’s a little like being thrown into a pool to learn how to swim—though not the deep end. I’ve chronicled what I’ve learned each week and attempted to share it with the world. A funny thing about this week’s post requires a little backstory.

Kevin and I have known each other a long time, and our families have gotten together weekly to play games for the last several years. Last time we were sitting around the table together, we were talking about my posts, and I jokingly said something like, “If I don’t have a topic for a blog post in any given week, you need to ask me what I’m doing with my time.”

Then I proceeded to struggle to come up with a topic for this week. Oh, the irony.

Archimedes takes a bath and learns a thing.

What’s really cool is that I obviously did come up with something…unless the rest of this post is simply a ramble. As a former school teacher, I’ve long believed that two of the best methods for learning are immersion and repetition. That’s been my approach to all of my self-guided SQL Server studies, and it’s paid off so far. Earlier this week I had an Archimedes-style Eureka! moment when several threads I’d been pulling on all came together.

One of the stepping stones I’ve been using in my studies is Kevin’s post “Top 10 SQL Server functions every DBA should know by heart.” I’ve revisited that post several times, but I don’t always have relevant opportunities to put those functions into practice on my VM. Either way, I’ve been using the repetition method to remind myself that these functions exist. The relevant function for today is GETDATE().
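
Since GETDATE() will show up again below, here’s a quick example you can run in any query window. The column aliases are just my own labels:

--GETDATE() returns the current date and time as a datetime;
--subtracting a number from a datetime steps back that many whole days.
Select
	GETDATE() as [Right Now],
	GETDATE() - 3 as [Three Days Ago]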

Also relevant this week is this excellent post from Ken Fisher (b|t) about the built-in agent_datetime() function in SQL Server. I first read about it because Kevin found it and tweeted about it. We ended up using it in the script edits you’ll see below.
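
In short, agent_datetime() stitches msdb’s integer run_date and run_time columns into a real datetime. Here’s a quick illustration with made-up literal values:

--run_date 20190701 plus run_time 83000 becomes a proper datetime
Select msdb.dbo.agent_datetime(20190701, 83000) as [Job Run Time]
--returns 2019-07-01 08:30:00.000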

Generic SQL Server Output Sucks

As an “artsy” type of person, some of the ways SQL Server displays information pain me. I get that we’re working with data and it doesn’t always have to be beautiful, but can we at least get something a bit more reader-friendly? The answer is usually, “Yes, but you’ll have to work for it.”

Now that I’ve been doing daily server reviews in Production for a while, I’ve gotten pretty familiar with three scripts that I run against servers every day. Those three scripts are Job History, Read Errorlogs, and Last Backups (generic titles). Let’s look at Job History first since it has some of the most interesting edits applied. Here’s the original script:

Select j.name, jh.step_name, run_status, run_date, run_time, run_duration, [server], [message]
From 
	[msdb].[dbo].[sysjobhistory] jh
	join [msdb].[dbo].sysjobs j 
		on jh.job_id = j.job_id
Where 1=1
and run_status not in (1,2,4) --1 = Succeeded, 2 = Retry, 4 = In Progress; keeps 0 = Failed and 3 = Canceled
and run_date > 20190701 
and [step_name] <> '(Job outcome)'
Order by run_date desc, run_time desc

--Select MIN(run_date) from msdb..sysjobhistory

Which renders the following output:

Meh, the run_date and run_time output is underwhelming and hard to read quickly. Imagine checking 100 servers or more and needing to quickly read the time/date stamps.

As I was working on my own edits, Kevin sent me a snippet of code changes—based on Ken Fisher’s blog post mentioned above—in order to get a much nicer output:

Select 
	j.name as [Job Name], 
	jh.step_name as [Job Step Name], 
	run_status as [Run Status], 
	msdb.dbo.agent_datetime(run_date,run_time) as [Job Run Time], 
	[message] as [Message]
From [msdb].[dbo].[sysjobhistory] jh
	join [msdb].[dbo].sysjobs j 
		on jh.job_id = j.job_id
Where 1=1 
	and run_status not in (1,2,4)
	and run_date >= 20190701
	and jh.step_name <> '(Job outcome)'
Order by 
	j.name,
	msdb.dbo.agent_datetime(run_date,run_time) Desc

Which gives us the following output:

You’ll also notice that I added some more of my own square bracket magic to make the column headers more readable. It’s the little details that sometimes make a big difference.

I did the same sort of thing to our Read Errorlog script, but added my own line to change the datetime stamp here as well. The original script had this line of code:

And Logdate > getdate() -3

Which rendered this result:

With Kevin’s guidance, I changed that line to use the Convert() function, which produces a much more readable timestamp:

convert(nvarchar(30), Logdate, 20) as [Error Date & Time], --convert the Logdate datetime to a readable format
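
For context, here’s a minimal sketch of how a Read Errorlog query like ours might hang together. The temp table pattern and the xp_readerrorlog parameters are my assumptions for illustration, not our exact production script:

--Hypothetical sketch: dump the current error log into a temp table,
--then filter to the last three days and format the timestamp.
Create Table #ErrorLog (
	Logdate datetime,
	ProcessInfo nvarchar(50),
	[Text] nvarchar(max)
)

Insert Into #ErrorLog
Exec sys.xp_readerrorlog 0, 1 --0 = current log file, 1 = SQL Server error log

Select
	convert(nvarchar(30), Logdate, 20) as [Error Date & Time], --readable yyyy-mm-dd hh:mi:ss
	ProcessInfo as [Process],
	[Text] as [Message]
From #ErrorLog
Where Logdate > getdate() - 3
Order by Logdate desc

Drop Table #ErrorLog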

Finally, we check for the latest backups each morning by using a script that…you guessed it…checks for the latest backups. The procedure goes something like this:

1. Run the Last Backups script and get the results (see screenshot).
2. Copy the results with headers and paste them into an Excel spreadsheet.
3. Sort and filter the results in the spreadsheet so the latest backup timestamps are easier on the eyes and fall in sequential order.
4. Report the findings to the client.

That’s all well and good, but since I recently studied the GROUP BY and ORDER BY clauses in T-SQL, I thought, “Why are we going through the extra copy/paste spreadsheet sort/filter steps? Why not simply rewrite the script to sort the results for us?”

So that’s what I did. I simply edited the last line of the script (and worked in a bit more square bracket magic along the way):

Order by RecoveryMode, [Status], [LastFullDate], [LastLogDate], db.[Database]

And here’s what the output looks like now:

Nice! Sorted and filtered, reader-friendly last-backup results.
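
If you want to follow along at home without our script, here’s a minimal sketch of a last-backups query built the same way. The column names come from the Order by line above; everything else is my own stand-in, not the actual Dallas DBAs script:

--Hypothetical last-backups sketch: latest full and log backup per database
Select
	d.name as [Database],
	d.recovery_model_desc as RecoveryMode,
	d.state_desc as [Status],
	MAX(Case When b.type = 'D' Then b.backup_finish_date End) as [LastFullDate],
	MAX(Case When b.type = 'L' Then b.backup_finish_date End) as [LastLogDate]
From sys.databases d
	left join msdb.dbo.backupset b
		on b.database_name = d.name
Group by d.name, d.recovery_model_desc, d.state_desc
Order by RecoveryMode, [Status], [LastFullDate], [LastLogDate], [Database]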

Now, in the end, does any of this improve our clients’ server efficiency? No, but it does let ME be more efficient for our clients. There’s no reason to spend time unnecessarily.

Have I written super-complex code? Nope. Have I contributed something to the SQL community that a Senior DBA couldn’t have written in 2 minutes? Nope, but that’s not the point. The point is I learned from the experience of editing existing scripts and now have some slightly sharper tools in my toolbox. 🙂

That’s all for this week. Join me next time for the next episode in The Ironic DBA Files.

Follow me on Twitter at @SQLandMTB, and if you’re into mountain bikes come over and check out my site NTX Trails.
