
DallasDBAs.com

Explaining SQL Server in plain English


Blog

How old are those stats?

September 30, 2021 by Kevin3NF


SQL Server maintains a variety of stats about all sorts of performance items.

  • Index usage (or missing indexes)
  • Query performance
  • Corrupt pages
  • Disk IO performance
  • Way more than I care to list here

I was trying to troubleshoot some TerriBad* tempdb write performance… almost 3000ms per write, on a server we recently migrated to.

Our data center vendor of course said the storage was perfect and that we should try troubleshooting SQL Server.  (Duh?)

These stats are stored in the sys.dm_io_virtual_file_stats DMF, and I found this gem under the ‘sample_ms’ column description in the result set:

“The data type is int for SQL Server 2008 through SQL Server 2014 (12.x). In these versions, the value will reset to 0 after approximately 25 days of continuous database engine uptime.”

Now, this is a SQL 2017 box, but this tells me that these stats are going to reset at some point.

When I re-ran my IO stalls query…the writes for all 8 tempdb data files are at 7ms.  The sample_ms is currently 3,097,351,985 which is about 36 days if I’ve done my math right. Server uptime is approaching 90 days.
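If you want to run this kind of check yourself, here is a minimal sketch of an IO stall query for tempdb (not my exact script, just the basic math against sys.dm_io_virtual_file_stats):

SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.name AS file_name,
        vfs.num_of_writes,
        vfs.io_stall_write_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms,
        vfs.sample_ms / 86400000.0 AS sample_days  -- 86,400,000 ms per day
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
    ON  mf.database_id = vfs.database_id
    AND mf.file_id = vfs.file_id;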

The takeaway for you, dear reader, is to know where to find the stats but also to know what timeframe they represent.

* Thanks to Brent Ozar (b|t) for this highly amusing term 🙂

Thanks for reading!

Kevin3NF


Filed Under: Performance Tagged With: performance, syndicated

Ola log backup job failure

September 28, 2021 by Kevin3NF

I had a log backup job that was not failing, but was also not backing up logs:

EXECUTE [DBA].[dbo].[DatabaseBackup]
@Databases = 'USER_DATABASES. model',
@Directory = 'D:\Backups\KBH-Precision_SQL2016',
@BackupType = 'LOG',
@Verify = 'Y',
@CleanupTime = NULL,
@CheckSum = 'Y',
@LogToTable = 'Y'

This was an older version of Ola's scripts. Newer versions give you this:

Date and time: 2021-09-28 14:56:12
Server: KBH-PRECISION\SQL2016
Version: 13.0.5026.0
Edition: Developer Edition (64-bit)
Platform: Windows
Procedure: [DBA].[dbo].[DatabaseBackup]
Parameters: Removed for cleanliness
Version: 2020-12-31 18:58:56
Source: https://ola.hallengren.com
The following databases in the @Databases parameter do not exist: [USER_DATABASES. model].
Date and time: 2021-09-28 14:56:12
Completion time: 2021-09-28T14:56:12.6474426-05:00
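For the record, the intended parameter was presumably a comma-separated list rather than the stray period, so Ola's script would see two selections (USER_DATABASES and model):

EXECUTE [DBA].[dbo].[DatabaseBackup]
@Databases = 'USER_DATABASES, model',
@Directory = 'D:\Backups\KBH-Precision_SQL2016',
@BackupType = 'LOG',
@Verify = 'Y',
@CleanupTime = NULL,
@CheckSum = 'Y',
@LogToTable = 'Y'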

 

Watch those typos, kids!

Thanks for reading!

Kevin3NF


Filed Under: backup Tagged With: ola

Server Review Essentials for Accidental and Junior DBAs—Daily Server Checks (Part 1)

July 6, 2021 by SQLandMTB

Welcome back to the series "Server Review Essentials for Accidental and Junior DBAs." So far in this series we've taken a look at how to set up your work environment, best practices for onboarding new servers and clients, and how to prep your servers for daily reviews.

What You’ve Accomplished Thus Far

After all of our work and prep during the last several posts in this series, it’s finally time to get down to the “real” work of a DBA.

Wait, that’s not quite right. While it may seem as if everything covered so far in this series has been pretty basic and production servers have barely been touched, don’t get the idea that the work done so far isn’t of high value. Think about some of the things you’ve already learned about your servers during this preparation process:

  • You've gathered all of the credentials needed to log into VPNs and individual servers, and verified that those credentials are accurate.
  • You’ve set up a stable work environment.
  • You’ve built a starting personal script library that will grow and evolve over time.
  • You’ve collected “biographical” information about every server you’ll be reviewing, so you have a baseline to see how everything stands before you begin reviews.
  • You’ve learned the expected RPO and RTO for all of your databases.
  • You've built a Server Inventory List that shows the status and use of every server you will be responsible for administering.
  • You’ve discovered the current patch level of all the servers within your purview, which will give you vital knowledge as you move forward.
  • You’ve learned what maintenance jobs are already active on your servers.
  • You’ve learned what the current backups look like and whether or not you’re currently meeting RPO and RTO.
  • And you’ve created a DBA Tools database on all of your servers so that you’re prepared for regular server reviews.

Why is all of the above so vital? If I've learned one thing in my two years of database administration, it's that a large number of folks have no clear understanding of what's going on in their server environments. Their approach to preserving their data is seat-of-the-pants and reactionary rather than proactive.

These are the types of folks who typically end up calling us, as outside consultants, to say, "my server is on fire!" Many of the issues we see on an "emergency" basis could have been prevented with regular reviews.

A Proactive Approach

At Dallas DBAs, we much prefer a proactive approach to database maintenance and performance. When we take on a new group of servers, the first thing we do is run a health check on the boxes and search for any issues that may need immediate remediation or any server settings that can be adjusted for immediate benefit.

While these topics are not covered in this series, we’re often looking for common issues related to things such as Cost Threshold for Parallelism, Instant File Initialization, MAXDOP, max server memory, the current state of backups, TempDB configuration, and more.
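As a rough illustration (this isn't our full health check), several of those instance-level settings can be pulled straight from sys.configurations:

SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ( 'cost threshold for parallelism',
                'max degree of parallelism',
                'max server memory (MB)' );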

This gives us a launchpad to implement all of the prep work mentioned in this series so far, and better guide the people who are depending upon the data to run their business or otherwise get work done.

By taking a proactive approach, we often know about problems days or hours before our clients do. We can see if backups have failed, ensure index maintenance is occurring regularly, and keep an eye on job failures and errors.

We often prefer to do a quick daily check of all servers, usually 15 minutes or less, running a handful of scripts that I'll begin sharing in the next few posts. These scripts quickly show us the state of the backups, reveal any job failures or errors we need to investigate, and surface alerts we may need to be aware of.
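The real scripts are coming in later posts, but as a bare-bones sketch, the backup portion of that daily check boils down to something like this (most recent FULL and LOG backup per database, from msdb):

SELECT  d.name AS database_name,
        MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full_backup,
        MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
GROUP BY d.name
ORDER BY d.name;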

This proactive approach also helps us get to know what is expected from the servers. As we report job failures and errors, we may learn that certain issues are common knowledge and are not necessarily causing the end-users any pain.

We learn how our data users feel about deadlocks. They may not care about them if they don't "feel" them. A job that runs every 10 minutes and is the victim of a deadlock once or twice a day likely isn't going to be an issue. Sometimes job failures are simply random "network blips" that happen periodically, so all you can do is make a note, shrug your shoulders, and move on.

That's all for now. Next time we'll take a deeper dive into investigating your server's backup history.

If you’d like to know more about SQL Server Maintenance, check out Kevin’s Getting Started with SQL Server Maintenance course at Pluralsight.

Follow me on Twitter at @SQLandMTB, and if you’re into mountain bikes come over and check out my site NTX Trails.


Filed Under: Accidental DBA, Beginner, Career, EntryLevel Tagged With: syndicated

Server Review Essentials for Accidental and Junior DBAs—Prepping for Server Reviews

June 16, 2021 by SQLandMTB

Welcome back to our series, Server Review Essentials for Accidental and Junior DBAs, where we've been taking an in-depth look at how to get ready to perform regular SQL Server reviews for your company or clients. So far, we've covered how to get your working environment set up (Part 1 | Part 2), setting up biographical information about your environments, collecting server information, and building a maintenance job list and a backup frequency list, as well as providing sample biographies and server reports.

Now it’s time to move forward and put all your preparation to use and perform regular server checks…almost. We’ve got one more round of prep work to do before your environment is completely ready to go.

Verify RPO

Before you go any deeper into this process, now is the time to ensure all of the servers you’ve been entrusted with are meeting the stated Recovery Point Objectives. My advice at this point is to not worry over the Recovery Time Objective until you’ve ensured you’re meeting RPO. Your company or clients won’t care how fast their data can be recovered if they can’t recover the data they actually need.

Use the backup frequency list you created in an earlier step and take the time to look through each server to see if the backup schedule meets the stated RPO in terms of FULL and LOG backup frequency (and DIFFs if relevant). We're not confirming the existence of the backups yet; we'll do that in a later step. For now, just make sure the backup jobs are scheduled properly to meet your RPO requirements.

If you find that your backup jobs are not scheduled properly, get with your senior DBA or the relevant person in authority and correct the backup job schedules accordingly.
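One quick way to eyeball those schedules without clicking through every job in SSMS is to query msdb. Here's a minimal sketch, assuming Ola-style job names that contain 'LOG' (adjust the filter to your naming convention):

SELECT  j.name AS job_name,
        s.name AS schedule_name,
        s.freq_type,            -- 4 = daily, 8 = weekly
        s.freq_subday_type,     -- 4 = minutes, 8 = hours
        s.freq_subday_interval, -- runs every N minutes/hours
        s.active_start_time
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN msdb.dbo.sysschedules AS s ON s.schedule_id = js.schedule_id
WHERE j.name LIKE '%LOG%';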

Create Your Own DBA Tools Database

Though not necessary, I’d suggest creating your own DBA Tools database on each of the servers for which you are responsible. Most companies and clients won’t have any problem adding such a database since it is lightweight, provides tools beneficial to your server health, and creates a sandbox in which those tools can reside separately from the data.

This is a common method for us here at Dallas DBAs. Whenever we're given the responsibility for administering a new server or environment, one of the very first things we do is evaluate the current state of the server, implement some best practices, and create a new database like this to install our preferred scripts.

We're not going to cover the steps for creating a new database for your tools. If you don't know how to do that, check out the Microsoft Docs to learn how to make a new database. Having said that, we feel it's always best to put the database in the SIMPLE recovery model and make sure the database owner is "sa". Otherwise, you can pretty much use the defaults when creating your new DBA Tools database.
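If it helps, here's a bare-bones version of that setup, using a hypothetical database name of DBATools (defaults are fine for everything else):

CREATE DATABASE DBATools;
GO
ALTER DATABASE DBATools SET RECOVERY SIMPLE;
GO
ALTER AUTHORIZATION ON DATABASE::DBATools TO sa;  -- set the owner to sa
GO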

What scripts do we install and use regularly? In the following posts in this series we’ll showcase some of the scripts we use, but below you’ll find a list of scripts and stored procedures we like to implement on the servers we touch.

Keep in mind the purpose of this post is not to explain how to install or use any of the scripts and tools listed below, only to show you some of the tools we like to use.

  • Ola Hallengren’s SQL Server Maintenance Solution is our preferred tool for all database backups, integrity checks, and index optimization. His scripts are far superior to the built-in Maintenance Plans in SQL Server.
  • Adam Machanic’s sp_WhoIsActive.
  • Brent Ozar's First Responder Kit, which includes fantastic tools such as sp_Blitz, sp_BlitzCache, sp_BlitzIndex, and more.

Get Ready for Proactivity

Now your servers are primed for regular reviews, which means you will often know about potential issues long before they become catastrophic failures. Having done all of this work doesn’t guarantee your servers will be problem-free, but it does mean your servers will be less likely to suffer common issues and you’ll be better equipped to handle issues as they arise.

If you’d like to know more about SQL Server Maintenance, check out Kevin’s Getting Started with SQL Server Maintenance course at Pluralsight.

Follow me on Twitter at @SQLandMTB, and if you’re into mountain bikes come over and check out my site NTX Trails.


Filed Under: Accidental DBA, Beginner, Career, EntryLevel Tagged With: syndicated

GroupBy Conference May 2021

May 17, 2021 by Kevin3NF

Dallas DBAs is proud to be sponsoring the May 2021 GroupBy Conference, Americas DBA track.

GroupBy is a different sort of conference, as it has been online since it began in 2019 (thanks, Brent! (b|t)). Sessions are voted on by the community, with the top vote-getters being accepted.

As part of our Virtual Group sponsorship we get to present a session, so we asked the first runner-up in the vote tally to "guest present" for us.

Jeff Moden (L), Mr. "NO RBAR" himself, will be presenting "'Black Arts' Index Maintenance – GUIDs vs. Fragmentation – They're not the problem… WE ARE!" at 22:00 UTC on May 25.

This will be a roughly 30-second "About Dallas DBAs" bit, and the rest is all Jeff.

Session Abstract:

This is NOT your typical presentation on the fragmentation problems of Random GUIDs. No… Instead we’re going to DESTROY THE MYTH OF RANDOM GUID FRAGMENTATION.

In one of the most ironic/heterodoxical turns of knowledge you’re ever likely to experience, we’ll see how THE USE OF RANDOM GUIDS CAN ACTUALLY PREVENT FRAGMENTATION! In the end, you’ll witness the results of some simple testing that clearly demonstrate that you can easily insert literally MILLIONs of rows into a Random GUID clustered index with almost no page splits (not even supposed “good” ones) and LESS THAN 1% Logical fragmentation!

We'll identify the real problem and the seriously effective yet incredibly simple two-part fix for it. In the process, we'll prove that Random GUIDs actually behave the way most people expect a good index to behave, especially (but not only) in high-performance OLTP environments, and we'll cover the benefits of doing so.

We’ll also learn how to use a new tool that I created (included in the ZIP file) to ACTUALLY SEE what an index looks like at the page level for all pages in a single graph.

Then, we’ll use that tool to lay waste to what people are currently calling “Best Practice” index maintenance. We’ll literally see what REORGANIZE does to an index at the page level and why it’s one of the very worst things you can do to your Random GUID and many other types of indexes even if you’re limited to the Standard Edition of SQL Server. We also prove that REORGANIZE CAUSES ORDERS-OF-MAGNITUDE WORSE TRANSACTION LOG FILE USAGE than REBUILD especially for Random GUIDs.

As interesting and totally necessary sidebars, we'll also see how the use of ever-increasing index keys could be (and frequently is) a major source of many of your fragmentation problems, and that NEWSEQUENTIALID may NOT be the answer you're looking for.

See you there (virtually)!

Thanks for reading!

Kevin3NF


Filed Under: Uncategorized

We need a Senior MySQL DBA

April 20, 2021 by Kevin3NF

From time to time, clients of Dallas DBAs ask for help with MySQL and we don’t currently have anyone with that skill set on our team.

I'm looking for someone who can take ad hoc, part-time projects on an hourly basis when they arise.

Ideally, an independent contractor/freelancer at a senior level (10+ years).

Specifics:

  • 1099
  • US Citizen or Green Card (legally able to work in the US)
  • Excellent communication skills
    • English does not have to be your first language, but fluency is a must
  • Senior level skills that we can verify
  • Willing to teach, document, etc., as well as fix issues
  • This will almost always be remote work
  • $100-150+/hr, depending on your skills and the clients’ budgets

NO recruiters or 3rd parties at this time.

Email your PDF resume, LinkedIn Profile, etc. to Kevin@DallasDBAs.com

Thanks for reading!

Kevin3NF

Filed Under: Career, MySQL
