
DallasDBAs.com

SQL Server Database Consulting


Purging a SQL Server Express Database

December 30, 2021 by Kevin3NF

This is not another IRL post, but rather more of a “Help, my SQL Express database is full!” post.

[Image: a dog that barely fits in her bed, captioned “Just barely fits”]

I was contacted by a lawyer who was using a 3rd-party application to store emails, keep track of time, etc.

The backend of the application is SQL Server Express edition, which has a hard limit of 10GB for the data file.

The biggest table by rowcount and size in the DB is (of course) the email table, which stores the email body in a 'text' datatype. I don’t know what version of SQL Express he is using, but I was working with a restored copy on SQL 2016 Developer. His vendor was no help.
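If you need to confirm which tables are eating the space in a database like this, a generic sketch along these lines works (my own query, not from the post; reserved_mb includes indexes and LOB pages):

-- Top 10 tables by reserved space in the current database
SELECT TOP (10)
       s.name + N'.' + t.name                 AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM   sys.dm_db_partition_stats AS ps
JOIN   sys.tables  AS t ON t.object_id = ps.object_id
JOIN   sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;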

Steps we tried after a lot of digging and testing:

0. Initial datafile size: 9969MB

1. Purge oldest 20%

Delete [email]
Where sysid in 
  (Select top 20 percent sysid from [email] order by date);
go

File size 9600MB. Not much improvement.

2. Purge by a specific date

Delete [email]
Where [date] < '2012-01-01'

File size 9500MB. Slightly better.

3. Of the remaining emails, delete by largest memo field, older than 1/1/2017:

Select top 1000 sysid, len(Cast(memo as nvarchar(max))) as size
Into #BigMemos
from [email]
Where [Date] < '2017-01-01'
Order by len(Cast(memo as nvarchar(max))) desc

Delete [email]
Where sysid in (Select sysid from #BigMemos)

Drop table #BigMemos

Still not much difference, but at this point I know the indexes and half-full pages must be in full chaos mode.

4. Rebuild, then Reorganize all of the indexes on this table

Nope...can't show the statements...too much application-specific naming :)
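A generic version, with a placeholder for the real table and index names, looks something like this (REORGANIZE with LOB_COMPACTION matters here because the big column is LOB data):

-- Placeholder table name; the real statements can't be shown
ALTER INDEX ALL ON [email] REBUILD;
GO
ALTER INDEX ALL ON [email] REORGANIZE WITH (LOB_COMPACTION = ON);
GO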

Down to 9200MB

5. Time to Shrink!

DBCC SHRINKFILE (N'MyDB', notruncate);
GO

DBCC SHRINKFILE (N'MyDB', truncateonly);
GO

-- Yes, both were required, and in this order. Your mileage may vary

This got us down to 8696 MB, which will last my new lawyer friend a solid 3-5 years.

And yes, I billed him lawyer rates for this 😉

If you need part-time DBA help, or just want to have a Senior DBA around for emergencies, Contact us!

Thanks for reading!

Kevin3NF


Filed Under: SQL Tagged With: syndicated

Group By Starting Characters

October 22, 2021 by Kevin3NF

[Image: a litter of puppies, by Tom Staziker from Pixabay]

I was chatting with Jeff (b|t) on my team yesterday, and while the context escapes me, I had this thought:

“Can you Group By the beginning characters, or a subset, of a field?”

I’m not a developer, so this question never comes my way. Except yesterday.

TL;DR – Yes.

Basic Group By code, using StackOverflow public data:

Select [Location], Count(Location)
From StackOverflow..Users
Where [Location] Like 'D%'
Group By [Location]
Having Count(Location) > 1
Order By [Location]

Same code, Grouping By the first 6 characters:

Select Left([Location],6), Count(Left(Location,6))
From StackOverflow..Users
Where [Location] Like 'D%'
Group By Left([Location],6)
Having Count(Left(Location,6)) > 1
Order by Left([Location],6)

Results. Let’s pretend I was looking for people in Dallas.

There is more than one Dallas, BTW:

[Image: results from the two queries]

I’m sure there are better use cases…but I only spent 7 minutes on this nonsense, just to see what would happen.

When you find yourself saying “I wonder if SQL Server can…”, go try it. Learn first, Google later 🙂

Thanks for reading!

Kevin3NF


Filed Under: TSQL Tagged With: syndicated

Do Full Backups Break Log Shipping?

October 15, 2021 by Kevin3NF

TLDR: Nope.

[Image: a tugboat pushing logs down a river, by jakeforlove from Pixabay]

Keep on doing your full backups.

Make sure that any databases you Log Ship are NOT also doing log backups in your SQL Maintenance Plans, Ola Jobs, etc.

The backup chain will not be broken by running a full backup, and you do not need to use copy-only backups. A full backup never resets the log chain; only events like switching the database to the SIMPLE recovery model do that.

Proof (video demo):
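If you’d like to verify this on your own instance, a quick sanity check (a sketch; the database name is a placeholder) is to list recent backups from msdb and confirm that each log backup’s first_lsn picks up at the previous log backup’s last_lsn, even with a full backup in between:

-- List recent backups with their LSNs (D = full, I = differential, L = log)
SELECT  database_name,
        type,
        backup_start_date,
        first_lsn,
        last_lsn
FROM    msdb.dbo.backupset
WHERE   database_name = N'MyLogShippedDB'   -- placeholder name
ORDER BY backup_start_date;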

Thanks for reading!

Kevin3NF


Filed Under: backup, SQL Tagged With: syndicated, video

How old are those stats?

September 30, 2021 by Kevin3NF

[Image by angel1238812 from Pixabay]

SQL Server maintains a variety of stats about all sorts of performance items.

  • Index usage (or missing indexes)
  • Query performance
  • Corrupt pages
  • Disk IO performance
  • Way more than I care to list here

I was trying to troubleshoot some TerriBad * tempdb write performance…almost 3000ms per write, on a server that we recently migrated to.

Our data center vendor of course said the storage was perfect and that we should try troubleshooting SQL Server.  (Duh?)

These stats are stored in the sys.dm_io_virtual_file_stats DMF, and I found this gem under the ‘sample_ms’ column description in the result set:

“The data type is int for SQL Server 2008 through SQL Server 2014 (12.x). In these versions, the value will reset to 0 after approximately 25 days of continuous database engine uptime.”

Now, this is a SQL 2017 box, but this tells me that these stats are going to reset at some point.

When I re-ran my IO stalls query…the writes for all 8 tempdb data files are at 7ms. The sample_ms is currently 3,097,351,985, which is about 36 days if I’ve done my math right. Server uptime is approaching 90 days.
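The exact query isn’t in the post, but a minimal sketch of that kind of check against sys.dm_io_virtual_file_stats looks roughly like this (the day conversion assumes sample_ms is milliseconds):

-- Average write stall per write for tempdb files, plus sample_ms in days
SELECT  DB_NAME(vfs.database_id)                              AS database_name,
        vfs.file_id,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_stall_ms,
        vfs.sample_ms / 86400000.0                            AS sample_days
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
WHERE   vfs.database_id = 2;   -- tempdb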

The takeaway for you, dear reader, is to know where to find the stats but also to know what timeframe they represent.

* Thanks to Brent Ozar (b|t) for this highly amusing term 🙂

Thanks for reading!

Kevin3NF


Filed Under: Performance Tagged With: performance, syndicated

Server Review Essentials for Accidental and Junior DBAs—Daily Server Checks (Part 1)

July 6, 2021 by SQLandMTB

Welcome back to the series “Server Review Essentials for Accidental and Junior DBAs.” So far in this series we’ve taken a look at how to set up your work environment, best practices for onboarding new servers and clients, and prepping your servers for daily reviews.

What You’ve Accomplished Thus Far

After all of our work and prep during the last several posts in this series, it’s finally time to get down to the “real” work of a DBA.

Wait, that’s not quite right. While it may seem as if everything covered so far in this series has been pretty basic and production servers have barely been touched, don’t get the idea that the work done so far isn’t of high value. Think about some of the things you’ve already learned about your servers during this preparation process:

  • You’ve gathered all of the necessary credentials needed to log into VPNs and individual servers, verifying the accuracy of those credentials.
  • You’ve set up a stable work environment.
  • You’ve built a starting personal script library that will grow and evolve over time.
  • You’ve collected “biographical” information about every server you’ll be reviewing, so you have a baseline to see how everything stands before you begin reviews.
  • You’ve learned the expected RPO and RTO for all of your databases.
  • You’ve built a Server Inventory List that shows the status and use of every server you will be responsible for administering.
  • You’ve discovered the current patch level of all the servers within your purview, which will give you vital knowledge as you move forward.
  • You’ve learned what maintenance jobs are already active on your servers.
  • You’ve learned what the current backups look like and whether or not you’re currently meeting RPO and RTO.
  • And you’ve created a DBA Tools database on all of your servers so that you’re prepared for regular server reviews.

Why is all of the above so vital? If I’ve learned one thing in my two years of database administration, it’s that a surprising number of folks have no clear understanding of what’s going on in their server environments. Their approach to preserving their data is more seat-of-the-pants and reactive than proactive.

As outside consultants, we’re the ones these folks typically end up calling to say, “my server is on fire!” Many of the issues we see on an “emergency” basis could have been prevented with regular reviews.

A Proactive Approach

At Dallas DBAs, we much prefer a proactive approach to database maintenance and performance. When we take on a new group of servers, the first thing we do is run a health check on the boxes and search for any issues that may need immediate remediation or any server settings that can be adjusted for immediate benefit.

While these topics are not covered in this series, we’re often looking for common issues related to things such as Cost Threshold for Parallelism, Instant File Initialization, MAXDOP, max server memory, the current state of backups, TempDB configuration, and more.
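As a rough illustration (just a sketch, not our full health check), a few of those instance-level settings can be read straight from sys.configurations:

-- Current values of a few commonly tuned instance settings
SELECT  name, value_in_use
FROM    sys.configurations
WHERE   name IN (N'cost threshold for parallelism',
                 N'max degree of parallelism',
                 N'max server memory (MB)');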

This gives us a launchpad to implement all of the prep work mentioned in this series so far, and better guide the people who are depending upon the data to run their business or otherwise get work done.

By taking a proactive approach, we often know about problems days or hours before our clients do. We can see if backups have failed, ensure index maintenance is occurring regularly, and keep an eye on job failures and errors.

We prefer to do a quick check of all servers daily, 15 minutes or less, running a handful of scripts that I’ll begin sharing in the next few posts. These scripts quickly show us the state of the backups, reveal any job failures or errors we need to investigate, and surface any alerts we need to be aware of.

This proactive approach also helps us get to know what is expected from the servers. As we report job failures and errors, we may learn that certain issues are common knowledge and are not necessarily causing the end-users any pain.

We learn how our data users feel about deadlocks. They may not care about them if they don’t “feel” them. A job that runs every 10 minutes and is the victim of a deadlock once or twice a day likely isn’t going to be an issue. Sometimes job failures are simply random “network blips” that happen periodically, so all you can do is make a note, shrug your shoulders, and move on.

That’s all for now. Next time we’ll take a deeper dive into investigating your server’s backup history.

If you’d like to know more about SQL Server Maintenance, check out Kevin’s Getting Started with SQL Server Maintenance course at Pluralsight.

Follow me on Twitter at @SQLandMTB, and if you’re into mountain bikes come over and check out my site NTX Trails.


Filed Under: Accidental DBA, Beginner, Career, EntryLevel Tagged With: syndicated

Server Review Essentials for Accidental and Junior DBAs—Prepping for Server Reviews

June 16, 2021 by SQLandMTB

Welcome back to our series, Server Review Essentials for Accidental and Junior DBAs, where we’ve been taking an in-depth look at how to get ready to perform regular SQL Server reviews for your company or clients. So far, we’ve covered getting your working environment set up (Part 1 | Part 2), setting up biographical information about your environments, collecting server information, and building a maintenance job list and a backup frequency list, as well as providing sample biographies and server reports.

Now it’s time to move forward and put all your preparation to use and perform regular server checks…almost. We’ve got one more round of prep work to do before your environment is completely ready to go.

Verify RPO

Before you go any deeper into this process, now is the time to ensure all of the servers you’ve been entrusted with are meeting the stated Recovery Point Objectives. My advice at this point is to not worry over the Recovery Time Objective until you’ve ensured you’re meeting RPO. Your company or clients won’t care how fast their data can be recovered if they can’t recover the data they actually need.

Use the backup frequency list you created in an earlier step and take the time to look through each server to see if the backup schedule meets the stated RPO in terms of FULL and LOG backup frequency (and DIFFs if relevant). We’re not confirming the existence of the backups yet; we’ll do that in a later step. For now, just make sure the backup jobs are scheduled properly to meet your RPO requirements.
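If your backups run through SQL Agent jobs, a quick way to eyeball the schedules is something like this sketch (the LIKE filter is a guess at your job naming; adjust it for Ola or Maintenance Plan job names):

-- Agent jobs with 'backup' in the name and their schedule settings
SELECT  j.name                 AS job_name,
        s.name                 AS schedule_name,
        s.freq_type,           -- 4 = daily, 8 = weekly
        s.freq_subday_type,    -- 4 = minutes, 8 = hours
        s.freq_subday_interval
FROM    msdb.dbo.sysjobs         AS j
JOIN    msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN    msdb.dbo.sysschedules    AS s  ON s.schedule_id = js.schedule_id
WHERE   j.name LIKE N'%backup%';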

If you find that your backup jobs are not scheduled properly, get with your senior DBA or the relevant person in authority and correct the backup job schedules accordingly.

Create Your Own DBA Tools Database

Though not necessary, I’d suggest creating your own DBA Tools database on each of the servers for which you are responsible. Most companies and clients won’t have any problem adding such a database since it is lightweight, provides tools beneficial to your server health, and creates a sandbox in which those tools can reside separately from the data.

This is a common method for us here at Dallas DBAs. Whenever we’re given the responsibility for administrating a new server or environment, one of the very first things we do is evaluate the current state of the server, implement some best practices, and create a new database like this to install our preferred scripts.

We’re not going to cover the steps to creating a new database for your tools. If you don’t know how to do that, check out the Microsoft Docs to learn how to make a new database. Having said that, we feel it’s always best to put the database in SIMPLE recovery model and make sure the database owner is “sa”. Otherwise, you can pretty much use the defaults in creating your new DBA Tools database.
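A minimal sketch, assuming you name the database DBATools (pick whatever name fits your environment):

-- Create the utility database, set SIMPLE recovery, and make sa the owner
CREATE DATABASE DBATools;
GO
ALTER DATABASE DBATools SET RECOVERY SIMPLE;
GO
ALTER AUTHORIZATION ON DATABASE::DBATools TO sa;
GO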

What scripts do we install and use regularly? In the following posts in this series we’ll showcase some of the scripts we use, but below you’ll find a list of scripts and stored procedures we like to implement on the servers we touch.

Keep in mind the purpose of this post is not to explain how to install or use any of the scripts and tools listed below, only to show you some of the tools we like to use.

  • Ola Hallengren’s SQL Server Maintenance Solution is our preferred tool for all database backups, integrity checks, and index optimization. His scripts are far superior to the built-in Maintenance Plans in SQL Server.
  • Adam Machanic’s sp_WhoIsActive.
  • Brent Ozar’s First Responder Kit, which includes fantastic tools such as sp_Blitz, sp_BlitzCache, sp_BlitzIndex, and more.

Get Ready for Proactivity

Now your servers are primed for regular reviews, which means you will often know about potential issues long before they become catastrophic failures. Having done all of this work doesn’t guarantee your servers will be problem-free, but it does mean your servers will be less likely to suffer common issues and you’ll be better equipped to handle issues as they arise.

If you’d like to know more about SQL Server Maintenance, check out Kevin’s Getting Started with SQL Server Maintenance course at Pluralsight.

Follow me on Twitter at @SQLandMTB, and if you’re into mountain bikes come over and check out my site NTX Trails.


Filed Under: Accidental DBA, Beginner, Career, EntryLevel Tagged With: syndicated
