
DallasDBAs.com

SQL Server Database Consulting



Duplicate Indexes Explained

November 13, 2019 by Kevin3NF

What are duplicate indexes, and why do I care?

This is an entry level explanation, with an analogy for new DBAs.

Duplicate indexes are those that exactly match the Key and Included columns.  That’s easy.

Possible duplicate indexes are those that very closely match Key/Included columns.
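If you want to spot candidates on your own server, the catalog views can surface them. Here’s a minimal sketch that lists each index’s key columns so exact and near matches stand out (sp_BlitzIndex, mentioned below, does this far more thoroughly):

-- List each index's key columns; duplicates show identical KeyColumns values
SELECT  t.name AS TableName,
        i.name AS IndexName,
        STUFF((SELECT ', ' + c.name
               FROM sys.index_columns ic
               JOIN sys.columns c
                 ON  c.object_id = ic.object_id
                 AND c.column_id = ic.column_id
               WHERE ic.object_id = i.object_id
                 AND ic.index_id  = i.index_id
                 AND ic.is_included_column = 0
               ORDER BY ic.key_ordinal
               FOR XML PATH('')), 1, 2, '') AS KeyColumns
FROM sys.indexes i
JOIN sys.tables  t ON t.object_id = i.object_id
WHERE i.type > 0  -- skip heaps
ORDER BY t.name, KeyColumns;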

Why do you care?

Indexes have to be maintained. When I say that, most people immediately think of reorganizing, rebuilding, and updating statistics, and they are not wrong.

But…don’t overlook the updates that happen to indexes when the data changes in the columns they are based on. If you have a duplicate index and you add, change, or delete a row…BOTH indexes are changed. This takes CPU, memory, and log space. Multiply that across multiple indexes in databases with tables that have millions or billions of rows and you start feeling the effort.
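You can even watch that write overhead accumulate. A minimal sketch, assuming the StackOverflow database used in the examples below, compares reads to writes for every index on the Users table since the last restart:

-- Reads vs. writes per index since the last restart;
-- a duplicate index pays the Writes cost for no extra read benefit
SELECT  i.name AS IndexName,
        us.user_seeks + us.user_scans + us.user_lookups AS Reads,
        us.user_updates AS Writes
FROM sys.dm_db_index_usage_stats us
JOIN sys.indexes i
  ON  i.object_id = us.object_id
  AND i.index_id  = us.index_id
WHERE us.database_id = DB_ID()
  AND us.object_id   = OBJECT_ID('dbo.Users');  -- assumes StackOverflow's Users table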

Duplicates:

Consider the following two indexes:

-- DisplayName only
CREATE INDEX [NC_DisplayName] ON [dbo].[Users]
	([DisplayName] ASC)
GO

--DisplayName plus additional info
CREATE INDEX [NC_DisplayName_Includes] ON [dbo].[Users]
	([DisplayName] ASC)
INCLUDE ([Reputation],[CreationDate],[UpVotes],[Views]) 
GO

If you query the StackOverflow database for me (Kevin3NF) using:

Select DisplayName
From Users
Where DisplayName = 'Kevin3NF'

It will use the first index.*

If you add ‘UpVotes’ to the SELECT, it will use the second index.

So…how is this a duplicate index?

The Key column (DisplayName) is the same.

Drop the NC_DisplayName index, and your first query will use the second index: even though it carries four additional columns, the Key column is still there, and that index is still far better than a table scan.
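The drop itself is a one-liner (a hypothetical cleanup; verify nothing hints at or depends on the index first):

-- Hypothetical cleanup, after confirming the index is safe to remove
DROP INDEX [NC_DisplayName] ON [dbo].[Users];
GO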

You also get the benefit of not having to update NC_DisplayName any time data is changed.

An Analogy of Two Doors:

You have two doors:

The First door is simple.  It has one aspect to it…a handle.

The Second door has a handle, as well as all sorts of Included extras.

Both doors lead to the same place.  Assume some people like the simplicity of Door1 and some people really want to go through the Steampunk style Door2.  Both get what they want (destination), by picking the door that suits them.

If you take away Door1 (Drop Index), the folks that prefer it can simply go through the fancy Door2 and get where they needed to be. Door2 has included extras (columns) that the “Simple door” folks just ignore.

It really is that simple.

But…don’t go make an index with all possible columns!  That’s a whole different kind of bad indexing.

Now…this does not mean you need to go blindly dropping indexes. Research, test, and verify which indexes are being used or unused, etc. Definitely don’t go and drop a Primary Key index just because it’s a possible duplicate.

I use sp_BlitzIndex for my initial info gathering when doing Health Checks, Index Tuning, etc. It’s free and solid.
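If you have the First Responder Kit installed, a typical call looks something like this (assuming the StackOverflow database name from the examples above):

-- Inventory the indexes in a database, flagging duplicates and near-duplicates
-- (assumes sp_BlitzIndex is installed where you're connected)
EXEC dbo.sp_BlitzIndex @DatabaseName = N'StackOverflow';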


* I have run into a weird issue I am trying to sort out. In my testing, it appears that SQL Server might be basing its index decisions on the Index Creation date, all other things being equal. I will update as I find out more. This was done on SQL 2016, SP2 with the StackOverflow public data dump.

Per my good friend Pinal Dave (b|t), this is a known behavior: “if two index has same key columns the one created later is used.”

Go forth and research!

Thanks for reading!

Kevin3NF


Filed Under: Accidental DBA, Beginner, Index

The Ironic DBA—Back to Basics

September 24, 2019 by SQLandMTB

Welcome back to The Ironic DBA Files, a series where the newbie DBA on staff at Dallas DBAs chronicles the ups and downs of fast-tracking a new career as a DBA.

It’s been a few weeks since I added anything to this series—though I did contribute my first-ever T-SQL Tuesday post a couple of weeks ago. The reasons for my silence are actually pretty simple. I’ve been busy.

Is Your Isolation Concurrent?

My main daily task since coming on board here at Dallas DBAs has been immersive self-study. I spend the vast majority of my time reading blog posts and books, and watching videos about all things SQL Server. I recently enrolled in Brent Ozar’s training classes and have been learning a great deal. I typically watch one or two videos a day there, and spend a lot of time afterwards doing follow-up reading in an attempt to reinforce what I’ve just consumed.

There’s so much to learn!

Before enrolling in those classes, I spent a few weeks sort of ambling all over the place without any specific step-by-step process as to what I should be studying. I had sort of gotten to the point where I had learned enough that it was getting hard to determine exactly what I should learn next, so my focus was rather fuzzy.

Along the way, I spent a few days going down the rabbit hole of concurrency and isolation levels, which is really useful stuff to know if you’re serious about being a top-notch DBA. It’s good stuff, and I’m glad I read up on it, but 90% of what I had read was cart-before-the-horse type stuff. I needed to keep it simple and go back to basics.

Build that Muscle Memory

I wrote in my last Ironic DBA post about the basics of finding and reading error logs. Nested within that simple write-up was a truth I needed to remind myself about and keep coming back to: keep learning how to navigate and use SQL Server Management Studio (SSMS).

Case in point: I currently have three clients whose servers I review daily, and one client who receives a weekly review. I’ll be picking up one or two more clients in the near future. I’m presented almost daily with an “I’ve never seen this before” moment, which is a learning opportunity. It’s not uncommon for me to find a new-to-me error and spend a bunch of time Googling and checking reliable sources in an attempt to figure out what’s going on.

More often than not, my difficulties in figuring out what’s going wrong are equal parts not knowing where to look in SSMS and not knowing about the error itself. I can learn from mentoring or reading what is causing an error, but knowing how to troubleshoot it is largely knowing how to navigate SSMS effectively.

Seriously, I think the best piece of advice I can give my fellow newbie DBAs is do everything you can to learn about using SSMS. Learning how SQL Server works under the hood, how relational databases work, how to write and troubleshoot queries, and things like indexing, statistics, and monitoring are all critical to your career as a DBA. But none of that matters if you don’t get familiar with the tool you will use most often.

SSMS is the tool that will make everything else you learn make more sense because it is where you can see all the magic happen—or not happen in the case of job failures, deadlocks, and other nasty stuff. Let’s be honest, the tool is not intuitive, and in 2019 it feels very long in the tooth—like using legacy software because there’s nothing else better. As a graphic designer and lifelong Mac user I find the software clunky and confusing, and constantly think about ways the GUI could be vastly improved.

The problem with that is it would blow the mind of every long-term DBA out there. Can you imagine how lost the majority of career DBAs would be if Microsoft suddenly released a whole new interface to SQL Server? Even if they created a GUI that was objectively better, many DBAs would feel like they’re starting over and it’s a frustration they just don’t need to deal with to get their jobs done. So, I agree that the best course of action is to identify simple ways to tweak the current GUI to improve the tool without blowing it up and starting from scratch.

So embrace SSMS for what it is and what it does. Despite its weaknesses, it’s the most powerful tool in your DBA toolkit.

That’s all for this week. Join me next time for the next episode in The Ironic DBA Files.

Follow me on Twitter at @SQLandMTB, and if you’re into mountain bikes come over and check out my site NTX Trails.

The Ironic DBA Files

    • Prequel: The Ironic DBA—Starting a New and Unexpected Career
    • Episode 1: You Back That Up?
    • Episode 2: Attack of the Corruption
    • Episode 3: Revenge of the Index
    • Episode 4: A New Primary Key
    • Episode 5: The Maintenance Plan Strikes Back
    • Episode 6: Return of the TSQL
    • Episode 7: The Backup Awakens
    • Episode 8: The Last Rebuild
    • Episode 9: Rise of the Clients
    • Review One: A SQL Story
    • It’s Hip to Be Square
    • Rock Around the Clock
    • Failure is Always an Option


Filed Under: Accidental DBA, Beginner, Career, SSMS

The Ironic DBA—Failure Is Always an Option

September 5, 2019 by SQLandMTB

Welcome back to The Ironic DBA Files, a series where the newbie DBA on staff at Dallas DBAs chronicles the ups and downs of fast-tracking a new career as a DBA.

In the last episode I showed how I tweaked some in-house scripts to provide more user-friendly output. This time around I want to revisit those scripts and give my fellow beginner DBAs some insight on some very basic troubleshooting.

Failure is Always An Option

I’m a big Mythbusters fan, and was saddened when the show eventually went off the air. There’s so much I learned about how the world around me works by watching the antics of Adam Savage, Jamie Hyneman, and the rest of their crew. I still follow both of them on Twitter (links above), and also regularly watch Adam Savage’s Tested on YouTube.

Several pithy phrases were said over the many seasons of Mythbusters episodes, including gems like, “I reject your reality and substitute my own,” and “Jamie likes big boom.” My favorite line from the show is “Failure is always an option.”

My family has been rewatching some of the show’s episodes, and in one of their final shows Adam mentions how they wouldn’t have been able to accomplish that particular episode’s goals without their previous years of experience. If you were to go and watch every episode, you’d quickly realize that the Mythbusters failed more often than they succeeded. It’s through repeated trial and error that they learned the most beneficial lessons.

Learning about SQL Server’s capabilities has been a similar sort of journey. It’s still early days for me, but I’m sure that most Senior DBAs out there will tell you that the knowledge they’ve gained over the years has been full of “that didn’t work” moments. For fun, check out this video from Bert Wagner (b|t) about SQL Fails.

No matter how much I try to remember all of this, I’m still the sort of person who gets that flip-flopping stomach feeling when I mess up or can’t figure something out right away. It’s in moments like these that I have to take a breath and remind myself that I’m still learning. SQL Server is a very complex piece of software—so complex that I doubt there is any one person who knows EVERYTHING about it, not even the people who’ve worked to develop it over the years.

Failure is always an option. As a SQL Server DBA, you’ll soon learn that your client’s servers will fail—no matter how good you are at your job. How will you identify those failures? Here’s one way.

There’s Your Problem

SQL Server has a bunch of built-in tools and resources that help identify failures and errors. Having said that, there’s a learning curve involved that’s sort of like baking a loaf of bread. The ingredients are all there at your fingertips, but you need someone to show you how to use them properly.

One of the scripts I run daily, the Read Errorlog script, has a bit of code that looks like this:

--Assumed table definition; the columns match what xp_readerrorlog returns
create table #Errorlog
	(LogDate datetime, ProcessInfo nvarchar(100), LogText nvarchar(max))

--Dump all the things into the table
insert into #Errorlog
EXEC sys.xp_readerrorlog
	0 -- Current ERRORLOG
	,1 -- SQL ERRORLOG (not Agent)

What’s relevant here is understanding WHAT is being read when this script is run. I’m not all that concerned today with showing how we massage the output, just where the information is coming from.

The line EXEC sys.xp_readerrorlog is executing a widely-known but undocumented Extended Stored Procedure. This is why you see “xp” in the scriptlet. If a regular Stored Procedure were being executed you’d see “sp” instead.

NOTE: You’ll see the following message at the top of the MS Docs related to Extended Stored Procedures: “This feature will be removed in a future version of Microsoft SQL Server. Do not use this feature in new development work, and modify applications that currently use this feature as soon as possible. Use CLR Integration instead.” We will probably need to rewrite our in-house scripts some time in the future to stay current.

Error logs are not stored in the database, but rather in text files on the host server. So, this Extended Stored Procedure looks outside of SQL Server to where the error log text files live on the host’s file system.

What is the procedure reading? xp_readerrorlog is pulling information from the files you can find in the Object Explorer under Management > SQL Server Logs. The Extended Stored Procedure helps make our lives as DBAs just a little bit more efficient by pulling the relevant information from the text files for us, rather than forcing us to open each individual log file and scroll through hundreds of lines of results.

What’s also important to note here is what our version of the Read Errorlog script is NOT reading. There is a separate Error Logs folder in the Object Explorer under SQL Server Agent > Error Logs. We don’t care about those error logs for this particular task.

xp_readerrorlog accepts several parameters. The two we use most often are the Log Number and Log Type parameters.

The Log Number parameter we pass is “0”, which tells SQL Server to read the current log. The Log Type parameter we pass is “1”, which tells it to read from the SQL Server Logs and NOT from the SQL Server Agent Error Logs.

Beyond this, we are then able to use our script to tell SQL Server what data we’d like returned from the logs, rather than having it output every single line. For instance, if we’re specifically looking for deadlocks, our SELECT statement can be written to only look for LogText like ‘%deadlock encountered%’.
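A minimal sketch of that filter, assuming the #Errorlog definition shown earlier:

--Only show error log lines that mention deadlocks, newest first
SELECT LogDate, LogText
FROM #Errorlog
WHERE LogText LIKE '%deadlock encountered%'
ORDER BY LogDate DESC;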

That’s all for this week. Join me next time for the next episode in The Ironic DBA Files.

Follow me on Twitter at @SQLandMTB, and if you’re into mountain bikes come over and check out my site NTX Trails.

The Ironic DBA Files

        • Prequel: The Ironic DBA—Starting a New and Unexpected Career
        • Episode 1: You Back That Up?
        • Episode 2: Attack of the Corruption
        • Episode 3: Revenge of the Index
        • Episode 4: A New Primary Key
        • Episode 5: The Maintenance Plan Strikes Back
        • Episode 6: Return of the TSQL
        • Episode 7: The Backup Awakens
        • Episode 8: The Last Rebuild
        • Episode 9: Rise of the Clients
        • Review One: A SQL Story
        • It’s Hip to Be Square
        • Rock Around the Clock


Filed Under: Apprentice, Beginner, Career, EntryLevel

The Ironic DBA—Rock Around the Clock

August 27, 2019 by SQLandMTB

Welcome back to The Ironic DBA Files, a series where the newbie DBA on staff at Dallas DBAs chronicles the ups and downs of fast-tracking a new career as a DBA.

Last week I shared why you shouldn’t completely hate [square brackets], and this week I’m going to build on that theme a little bit more by showing you some minor tweaks to some scripts we use here at Dallas DBAs on a daily basis.

I Love it When Something Unplanned Comes Together

If you’ve been following my weekly Ironic DBA posts, you know that I’m new to this gig and have been learning things as rapidly as I can. It’s a little like being thrown into a pool to learn how to swim—though not the deep end. I’ve chronicled what I’ve learned each week and attempted to share it with the world. A funny thing about this week’s post requires a little backstory.

Kevin and I have known each other for years, and our families have gotten together weekly to play games for the last several years. Last time we were sitting around the table together, we were talking about my posts and I jokingly said something like, “If I don’t have a topic for a blog post in any given week, you need to ask me what I’m doing with my time.”

Then I proceeded to struggle to come up with a topic for this week. Oh, the irony.

Archimedes takes a bath and learns a thing.

What’s really cool is that I obviously did come up with something…unless the rest of this post is simply a ramble. I’ve long believed (as a former school teacher) that two of the best methods for learning are immersion and repetition. That’s been my approach to my self-guided SQL Server studies, and it’s paid off so far. Earlier this week I had an Archimedes-type Eureka! moment when various threads I’ve been pulling all came together.

One of the stepping stones I’ve been using in my studies is Kevin’s post “Top 10 SQL Server functions every DBA should know by heart.” I’ve revisited that post several times but don’t always have many relevant opportunities to put those functions into practice on my VM. Either way, I’ve been using the repetition method to remind myself that these functions exist. The relevant function for today is Getdate().

Also relevant this week is this excellent post from Ken Fisher (b|t) about the built-in agent_datetime() function in SQL Server. I first read about it because Kevin found it and tweeted about it. We ended up using it in the script edits you’ll see below.

Generic SQL Server Output Sucks

As an “artsy” type person, some of the ways SQL Server displays information pains me. I get that we’re working with data and it doesn’t always have to be beautiful, but can we at least get something a bit more reader-friendly? The answer is usually, “Yes, but you’ll have to work for it.”

Now that I’ve been doing daily server reviews in Production for a while, I’ve gotten pretty familiar with three scripts that I run against servers every day. Those three scripts are Job History, Read Errorlogs, and Last Backups (generic titles). Let’s look at Job History first since it has some of the most interesting edits applied. Here’s the original script:

Select j.name, jh.step_name, run_status, run_date, run_time, run_duration, [server], [message]
From 
	[msdb].[dbo].[sysjobhistory] jh
	join [msdb].[dbo].sysjobs j 
		on jh.job_id = j.job_id
Where 1=1
and run_status not in (1,2,4)
and run_date > 20190701 
and [step_name] <> '(Job outcome)'
Order by run_date  desc, run_time desc

--Select MIN(run_date) from msdb..sysjobhistory

Which renders the following output:

Meh, the run_date and run_time output is underwhelming and hard to read quickly. Imagine checking 100 servers or more and needing to quickly read the time/date stamps.

As I was working on my own edits, Kevin sent me a snippet of code changes—based on Ken Fisher’s blog post mentioned above—in order to get a much nicer output:

Select 
	j.name as [Job Name], 
	jh.step_name as [Job Step Name], 
	run_status as [Run Status], 
	msdb.dbo.agent_datetime(run_date,run_time) as [Job Run Time], 
	[message] as [Message]
From [msdb].[dbo].[sysjobhistory] jh
	join [msdb].[dbo].sysjobs j 
		on jh.job_id = j.job_id
Where 1=1 
	and run_status not in (1,2,4)
	and run_date >= 20190701
	and jh.step_name <> '(Job outcome)'
Order by 
	j.name,
	msdb.dbo.agent_datetime(run_date,run_time) Desc

Which gives us the following output:

You’ll also notice that I added some more of my own square bracket magic to make the column headers more readable. It’s the little details that sometimes make a big difference.

I did the same sort of thing to our Read Errorlog script, but added my own line to change the datetime stamp here as well. The original script had this line of code:

And Logdate > getdate() -3

Which rendered this result:

With Kevin’s guidance, I changed the line using the Convert() function and received the subsequent output:

convert(nvarchar(30), LogDate, 20) as [Error Date & Time], --convert the log's datetime to a readable format

Finally, we check for the latest backups each morning by using a script that…you guessed it…checks for the latest backups. The procedure goes something like this:

1. Run the Last Backups script and get results (see screenshot).
2. Copy results with headers and paste into an Excel spreadsheet.
3. Sort and filter results in the spreadsheet so the latest backup timestamps are easier on the eyes and fall in sequential order.
4. Report findings to the client.

That’s all well and good, but since I recently studied the GROUP BY and ORDER BY clauses in T-SQL, I thought, “Why are we going through the extra copy/paste spreadsheet sort/filter steps? Why not simply rewrite the script to sort the results for us?”

So that’s what I did. I simply edited the last line of the script (and applied some more square bracket magic):

Order by RecoveryMode, [Status], [LastFullDate], [LastLogDate], db.[Database]

And here’s what the output looks like now:

Nice! Sorted and filtered, reader-friendly last backups results.

Now, in the end, does any of this improve our clients’ server efficiency? No, but what it does do is let ME be more efficient for our clients. There’s no reason to spend extra time unnecessarily.

Have I written super-complex code? Nope. Have I contributed something to the SQL community that a Senior DBA couldn’t have written in 2 minutes? Nope, but that’s not the point. The point is I learned from the experience of editing existing scripts and now have some slightly sharper tools in my toolbox. 🙂

That’s all for this week. Join me next time for the next episode in The Ironic DBA Files.

Follow me on Twitter at @SQLandMTB, and if you’re into mountain bikes come over and check out my site NTX Trails.

The Ironic DBA Files

      • Prequel: The Ironic DBA—Starting a New and Unexpected Career
      • Episode 1: You Back That Up?
      • Episode 2: Attack of the Corruption
      • Episode 3: Revenge of the Index
      • Episode 4: A New Primary Key
      • Episode 5: The Maintenance Plan Strikes Back
      • Episode 6: Return of the TSQL
      • Episode 7: The Backup Awakens
      • Episode 8: The Last Rebuild
      • Episode 9: Rise of the Clients
      • Review One: A SQL Story
      • It’s Hip to Be Square


Filed Under: Apprentice, Beginner, Career, EntryLevel

The Ironic DBA—It’s Hip to Be Square

August 20, 2019 by SQLandMTB

Welcome back to The Ironic DBA Files, a series where the newbie DBA on staff at Dallas DBAs chronicles the ups and downs of fast-tracking a new career as a DBA.

Last week I reviewed how far I’d come and some of what I’d learned so far on this journey. You [hopefully] noticed the titles of the previous episodes were all Star Wars-esque, and the episodes were numbered. All of those posts were a generalized recap of what had happened two or three weeks previously. Going forward, my contributions will be more timely—as in what I’ve learned or experienced within the last week or so—and hopefully continue to add value to my fellow apprentices in the #sqlfamily.

 

Brackets Not Feeling the Love?

There seems to be some generalized hating on the use of square brackets in TSQL code among both new and seasoned DBAs. What are square brackets? I’m sure if you’ve spent almost any time in the SSMS query editor you’ve seen something like this:

[StackOverflow2010].[dbo].[Users]

This commonly happens when you use SSMS to create your scripts for you, or you drag and drop a database or table into the query editor window. The example above is a demonstration of the three-part naming convention for the Users table in the StackOverflow2010 database. What this shows is [NameOfDatabase].[Schema].[TableName]. The periods, or dots, separate the three parts of the name.

I get why some folks don’t like the brackets as they tend to muddle your script’s readability. To see a great example of this in action, check out Michael J. Swart’s post Remove SQL Junk (Brackets and Other Clutter). In most cases, you don’t need square brackets in your scripts, but there are instances where you will need them. That’s mostly beyond the scope of this post.

Cool side note: One of the tidbits I learned during this investigation into square brackets is that a database can have tables with identical names as long as each table’s schema is different. That’s something for you apprentice and accidental DBAs to be aware of for the future.
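A quick hypothetical illustration of that side note: two tables, both named Customers, living happily in different schemas.

-- CREATE SCHEMA must be the only statement in its batch, hence the GOs
CREATE SCHEMA Sales;
GO
CREATE SCHEMA Archive;
GO
-- Same table name twice; the schema keeps them distinct
CREATE TABLE Sales.Customers   (Id int PRIMARY KEY, Name nvarchar(100));
CREATE TABLE Archive.Customers (Id int PRIMARY KEY, Name nvarchar(100));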

An Example of Nifty Bracket Usage

I mentioned last week how I have been working through the Stairway to T-SQL course at SQL Server Central. While playing with the GROUP BY clause scripts, I became annoyed at the inelegance of the table header output. Here’s an example:

USE tempdb;
GO
SELECT StoreName 
     ,SUM(TotalSalesAmount) AS StoreSalesAmount
FROM dbo.SalesTransaction  
GROUP BY StoreName;

The script above renders this result:

I decided to test the waters and “fix” this by using square brackets for the column header names:

USE tempdb;
GO
SELECT StoreName AS [Store Name]
     ,SUM(TotalSalesAmount) AS [Store Sales Amount]
FROM dbo.SalesTransaction  
GROUP BY StoreName;

Which rendered this:

Ahh, satisfying.

It’s a super simple addition, and one that does not detract all that much from the script’s readability. What it does do, in my opinion, is make the output more reader-friendly. Essentially, consider using square brackets when you want spaces or special characters in your column headers. If you want to go crazy, you can take using square brackets to rename columns a whole lot deeper.

As a side note, you may also have to use square brackets if a table or column name uses a reserved word in T-SQL. For instance, if someone was wicked enough to name a table or column “Index” then you would have to use square brackets around the name in order to write scripts that function properly.
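A hypothetical example, assuming someone really did name a table “Index”:

-- INDEX is a reserved word, so the brackets are required here
CREATE TABLE dbo.[Index] (Id int PRIMARY KEY, Name nvarchar(50));

SELECT Id, Name FROM dbo.[Index];   -- works
-- SELECT Id, Name FROM dbo.Index;  -- syntax error without the brackets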

But no one would ever be that evil, would they?

That’s all for this week. Join me next time for the next episode in The Ironic DBA Files.

Follow me on Twitter at @SQLandMTB, and if you’re into mountain bikes come over and check out my site NTX Trails.

The Ironic DBA Files

      • Prequel: The Ironic DBA—Starting a New and Unexpected Career
      • Episode 1: You Back That Up?
      • Episode 2: Attack of the Corruption
      • Episode 3: Revenge of the Index
      • Episode 4: A New Primary Key
      • Episode 5: The Maintenance Plan Strikes Back
      • Episode 6: Return of the TSQL
      • Episode 7: The Backup Awakens
      • Episode 8: The Last Rebuild
      • Episode 9: Rise of the Clients
      • Review One: A SQL Story


Filed Under: Accidental DBA, Beginner, EntryLevel

The Ironic DBA—Review One: A SQL Story

August 13, 2019 by SQLandMTB

Welcome back to The Ironic DBA Files, a series where the newbie DBA on staff at Dallas DBAs chronicles the ups and downs of fast-tracking a new career as a DBA.

Last time around I related the exciting development of beginning to touch production servers. Now, with ten episodes in the can, I think it’s time for a review of what I’ve learned and done so far. Stick with me as I hope to relate a few things I haven’t shared before, as well as some of my favorite tips, links, and resources that have helped me progress as rapidly as reasonably possible.

What my brain looks like after a few months of DBA training.

A New Trail

I originally related how training to become a DBA was a new and unexpected opportunity, thus my reason for choosing the moniker “The Ironic DBA.” I’m not quite sure at this point whether I’m truly ironic, or using the word incorrectly like Alanis Morissette, but I digress…

Having a Senior DBA who can mentor you is absolutely the best approach to this career. I’m sure there are many intelligent people who could self-study and figure out all of this on their own via trial and error, but having an experienced DBA looking over your shoulder is one of the ways to progress much more quickly.

Not everyone can be as fortunate as I am to have a close friend as a personal coach, but that doesn’t mean you shouldn’t look for someone who can help you regularly, whether at your workplace or a local PASS meeting.

Mapping the Loops

Stacked Loop Trail System

In mountain biking, we most often have trails that are full loops—loops that, when followed from beginning to end, bring you back to the trailhead where you originally started. Some trail systems are stacked loop systems, where successive loops are “stacked” upon each other, connecting in such a way as to give users options for extended distance or varied routes. Trail systems are often constructed this way so that one or two loops can be opened to the public while successive loops are built and added over time. It’s also common that the successive loops increase in difficulty as you progress through them.

That’s exactly the approach we’ve taken with my DBA training. To start, you’ve got to get a handle on the basics like nomenclature, systems, and base-level architecture. What is a database? What does relational database mean? What’s an instance? Before I ever seriously considered becoming a DBA, I had already attended Kevin’s Free SQL Server DBA Training twice because I’m his friend and designer. He asked me to help him evaluate his teaching and presentation. This means I was pretty familiar with basic concepts before we started any sort of formal training.

From there, you can begin mapping out your loops in order of priority and difficulty. There are many different ways to map out these paths—another reason why a good mentor will be a great trail guide. If, for some reason, you’re going it alone, I hope this post (and my series in general) will help you. I would also highly recommend these posts here on Dallas DBAs: Top 10 SQL Server functions every DBA should know by heart, and Dear Junior DBA…

For the record, and in an effort to not belabor my previous posts, here’s a breakdown of the “loops” I’ve traveled so far:

  • Learn about backups and restores
  • Learn about DBCC CheckDB, normalization, and security basics
  • Learn about instances, b-tree structure, and indexes
  • Learn about primary keys and clustered index keys
  • Learn about Maintenance Plans
  • Learn the basics of T-SQL syntax
  • Learn from your mistakes
  • Learn about the Reorganize and Rebuild Index commands
  • Learn how to review client servers

But Wait…There’s More

As you might imagine, my weekly posts have only hit the high points and most critical information I’ve been gleaning. I’ve learned quite a few things along the way I haven’t written about previously, as well as collected several resources. Below is a sort of stream-of-consciousness recollection of other things I’ve learned that I think will help other Apprentice and Accidental DBAs kick-start their career. Excuse me while I scroll back in time in my Dallas DBAs Slack channels…

Since you’re going to be working with the files and file systems behind SQL Server, get familiar with the standard .mdf, .ndf, and .ldf file extensions. One of my first questions about these file types—because you can actually use whatever suffix you’d like (but shouldn’t)—was whether all three were identical file types. Kevin’s quick answer was, “No. There are (minimally) two file types… ‘Row Data’ (.MDF, .NDF) and ‘Log’…different internal structure that SQL writes to differently.”

By the way, this and other initial questions specifically came out of watching Kevin’s Pluralsight course, Getting Started with Your First SQL Server Instance. Use this as a foundation to get familiar with installing, updating, and uninstalling instances properly. And then get familiar with how to find and install test databases such as AdventureWorks, WideWorldImporters, and StackOverflow—using a test environment Azure VM if possible.

Oh, and get ready for a huge chunk of almost-identical acronyms that you’ll need to learn and tell apart: SSMS, SSIS, SSAS, SSRS…

I think I’ve mentioned it before, but staying on top of applying the latest Service Pack and Cumulative Updates to SQL Server instances is very important. A fantastic resource for quickly and easily finding the latest releases from Microsoft is the Microsoft SQL Server Versions List blog. You can drill down and find the correct version and download the appropriate SPs and CUs.

Being a visual person, I gravitated toward viewing Execution Plans early on. Of course, I didn’t understand a whole lot about what I was looking at, but it was good to begin getting familiar with the icons and arrows and such. Don’t sweat it, you’ll learn more and more about Execution Plans and how to read them as you move forward. Hint: Make sure you learn the difference between Estimated and Actual Execution Plans, and how to get those specific plan results.
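In case it helps, here’s a minimal sketch of requesting each plan type with T-SQL rather than the SSMS toolbar buttons (dbo.Users is just an assumed example table):

SET SHOWPLAN_XML ON;    -- Estimated plan: the query is compiled but NOT executed
GO
SELECT TOP (10) * FROM dbo.Users;  -- assumed example table
GO
SET SHOWPLAN_XML OFF;
GO

SET STATISTICS XML ON;  -- Actual plan: returned along with the real query results
GO
SELECT TOP (10) * FROM dbo.Users;
GO
SET STATISTICS XML OFF;
GO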

A sneak peek at my Google Docs SQL Server training library.

I highly suggest building a Google Docs or Office 365 online library of notes and documentation on your journey.

I’m running out of space here, and haven’t covered anywhere near as much as I’d like, so I’m going to leave you with a couple of different lists I hope you’ll find helpful.

DBA Scripts and Tools

  • Ola Hallengren’s Ola Scripts
  • Brent Ozar’s First Responder Kit
  • Adam Machanic’s sp_whoisactive
  • SentryOne Plan Explorer
  • Stack Exchange’s Monitoring System: Opserver

Great Links for a New DBA

    • How to Think Like the SQL Server Engine
    • Free Downloads for Powerful SQL Server Management
    • PASS DBA Fundamentals Virtual Group
    • Redgate SQL Simple Talk Series
    • Introduction to SQL Server Security
    • SQL Server Central’s Stairways Archive

(I’m currently working my way through Stairway to T-SQL DML.)

  • SQL Skill’s Accidental DBA Series
  • How to Download the Stack Overflow Database
  • Kevin’s YouTube video on installing Ola Scripts
  • Adam Machanic’s sp_whoisactive Documentation

That’s all for this week. Join me next time for the next episode in The Ironic DBA Files.

Follow me on Twitter at @SQLandMTB, and if you’re into mountain bikes come over and check out my site NTX Trails.

The Ironic DBA Files

      • Prequel: The Ironic DBA—Starting a New and Unexpected Career
      • Episode 1: You Back That Up?
      • Episode 2: Attack of the Corruption
      • Episode 3: Revenge of the Index
      • Episode 4: A New Primary Key
      • Episode 5: The Maintenance Plan Strikes Back
      • Episode 6: Return of the TSQL
      • Episode 7: The Backup Awakens
      • Episode 8: The Last Rebuild
      • Episode 9: Rise of the Clients


Filed Under: Apprentice, Beginner, Career, EntryLevel

