
DallasDBAs.com

SQL Server Database Consulting


SQL Server Backups: The Basics

August 20, 2025 by Kevin3NF

If you’re responsible for a SQL Server instance, you need working, consistent backups. Not just a .bak file here and there, but a plan that runs automatically and covers the full recovery cycle.

Here’s how to get that in place, even if you’re not a DBA.

Understand What Each Backup Type Does

You don’t need them all every time, but you do need to know what they’re for:

  • Full Backup
    A complete copy of the entire database at that moment in time. It’s your foundation.
  • Differential Backup
Captures only what changed since the last full backup. These can help speed up recovery time and reduce storage needs. Not really necessary if your databases are small.
  • Transaction Log Backup
    Captures everything written to the transaction log since the last log backup. Needed for point-in-time recovery.

    • If your database is in Full or Bulk-Logged recovery model and you’re not doing log backups, your log file will grow endlessly, potentially filling the drive it is on.

 

Set a Backup Schedule That Works

For production databases, this is my minimum recommended setup:

  • Full backups once per day
  • Log backups every 5 to 15 minutes
  • Optional differentials every few hours for large databases

For dev/test databases:

  • Full backups daily or weekly are usually fine
  • You can skip log backups unless you’re testing recovery processes
    • If you are going to skip, set the databases to SIMPLE Recovery
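The schedule above boils down to a handful of T-SQL commands. A minimal sketch (database names and paths here are hypothetical; the WITH options are common choices, not requirements):

```sql
-- Daily full backup of a production database
BACKUP DATABASE SalesDB
TO DISK = N'D:\SQLBackups\SalesDB_Full.bak'
WITH COMPRESSION, CHECKSUM, INIT;

-- Log backup, scheduled every 5-15 minutes via SQL Server Agent
BACKUP LOG SalesDB
TO DISK = N'D:\SQLBackups\SalesDB_Log.trn'
WITH COMPRESSION, CHECKSUM;

-- Dev/test database where you're skipping log backups
ALTER DATABASE DevDB SET RECOVERY SIMPLE;
```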

 

Automate the Backups

Use SQL Server Agent to schedule the jobs. Here are two options:

  • Maintenance Plans (basic, GUI-driven)
    • Good for smaller environments or shops without scripting experience
    • Be careful: default plans may not have the best options for your situation
    • Included in SQL Server, supported by Microsoft.
  • Ola Hallengren’s Maintenance Solution (highly recommended)
    • Free, open-source, script-based
    • Handles full/diff/log backup rotation, cleanup, logging, and more
      • Optionally does corruption checking and index/stats maintenance
    • Use SQL Agent to schedule the process via the jobs the script created
    • Free, FAQ/email/community support, but not Microsoft
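Once MaintenanceSolution.sql is installed, a full-backup job step looks roughly like this (the directory and retention values are examples, not recommendations):

```sql
-- Full backup of all user databases, verified, compressed,
-- with checksums, keeping 168 hours (7 days) of files
EXECUTE dbo.DatabaseBackup
    @Databases   = 'USER_DATABASES',
    @Directory   = N'D:\SQLBackups',
    @BackupType  = 'FULL',
    @Verify      = 'Y',
    @Compress    = 'Y',
    @CheckSum    = 'Y',
    @CleanupTime = 168;
```

The installer script creates SQL Agent jobs for full/diff/log backups automatically; you only need to attach schedules to them.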

 

Store Backups Somewhere Safe

Don’t store them on the same drive as the database files. If the drive dies, the data and backups may both be lost.

Better options:

  • Separate disk or volume
  • Network share
  • Azure Blob Storage or S3, via Backup to URL option
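A Backup to URL sketch for Azure Blob Storage, assuming a SAS-token credential (the storage account and container names are made up; with a SAS credential, the credential name must match the container URL):

```sql
-- One-time setup: credential named after the container URL
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS token here>';

-- Backup goes straight to blob storage
BACKUP DATABASE SalesDB
TO URL = N'https://mystorageacct.blob.core.windows.net/sqlbackups/SalesDB_Full.bak'
WITH COMPRESSION, CHECKSUM;
```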

 

Monitor It

Make sure the backup jobs are:

  • Running successfully
  • Completing on time
  • Not overwriting too soon or growing endlessly

Use SQL Agent alerts, third-party tools, or scripts to monitor backup age and job success.
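A quick way to check backup age across the whole instance is msdb's backup history. This sketch shows the most recent full backup per database ('D' is the full-backup type; a NULL means no full backup has ever been recorded):

```sql
SELECT d.name,
       MAX(b.backup_finish_date) AS LastFullBackup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'              -- 'D' = full backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY LastFullBackup;            -- oldest (or never) first
```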

The Bottom Line:

Understanding the basics of what backups are and how they work is KEY to protecting your company’s most valuable asset. If backups are your responsibility and you don’t know how they work, a database failure without a backup could be a career-limiting move.


New Pocket DBA® clients get the first month FREE!

https://pocket-dba.com/

Book a call, and mention “Newsletter”


Thanks for reading!

— Kevin

Filed Under: backup, SQL Tagged With: syndicated

DBCC CHECKDB: Just Because It’s Quiet Doesn’t Mean It’s Safe

August 13, 2025 by Kevin3NF

Corruption isn’t a “maybe someday” problem. Here’s what you need to do now.

Stop. Don’t panic.

You just ran DBCC CHECKDB for the first time in a while (or maybe ever) and saw something you didn’t expect: the word corruption.

Take a breath.

Don’t detach the database.
Don’t run REPAIR_ALLOW_DATA_LOSS.
Don’t reboot the server or start restoring things just yet.

There’s a lot of bad advice floating around from old blogs, well-meaning forum posts, and even some popular current LinkedIn threads. Some of it might’ve been okay 15 years ago. Some of it is dangerous.

Let’s dig in.

What Corruption Really Means

When SQL Server says there’s corruption, it’s not talking about “bad data” like wrong numbers or missing values. It means it found internal structures that are damaged. The kind that can cause queries to fail or even make your database unusable.

This could be:

  • Broken data or index pages
  • Allocation inconsistencies (GAM, SGAM, PFS pages)
  • Corrupt system metadata
  • Problems in the transaction log

This isn’t a performance problem.
It’s a data integrity problem. If left untreated, it can get worse.

How Does Corruption Happen?

Even if your server is well-configured, corruption can still creep in. Common causes include:

  • Failing disks or controllers (especially SANs and older SSDs)
  • Disk subsystems lying about successful writes
  • Power outages or hard shutdowns
  • Bugs in SQL Server itself that cause corruption, especially in RTM versions
  • Snapshot or backup software interfering at the file level
  • Antivirus software scanning .mdf, .ldf, or .ndf files directly

Some of these things leave no obvious signs. This is why running CHECKDB regularly is so important.

What DBCC CHECKDB Actually Does

When you run DBCC CHECKDB, SQL Server performs a deep consistency check of your database:

  • Every table, every index, every system structure
  • Logical and physical page consistency
  • Allocation integrity

If possible, SQL Server uses an internal snapshot to avoid locking the database.

What it doesn’t do:

  • Fix anything (unless you tell it to)
  • Prevent corruption
  • Run automatically (unless you set it up)

How Often Should You Run It?

Ideally: once per week, at minimum.

  • Schedule it in a SQL Agent job, off-hours.
  • Save the job output to file or table so you don’t miss warnings.
  • Set up an email alert for failures of this job (as well as corruption alerts for errors 823–825)
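The scheduled check itself can be as simple as a single job step per database (the options shown are common choices, not mandatory):

```sql
-- Suppress informational chatter, but report every error found
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```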

If CHECKDB takes too long or hits your performance too hard, you can offload the work.

Offload CHECKDB with Test-DbaLastBackup

If you’re taking backups regularly (you are, right?), you can use Test-DbaLastBackup from the dbatools.io PowerShell module to verify database consistency (and restorability) without touching production.

This command:

  • Restores your most recent backup to another SQL instance
  • Runs DBCC CHECKDB against the restored copy
  • Confirms both restorable state and internal consistency

 

Test-DbaLastBackup -SqlInstance "TestRestoreSQL" -Destination "TestRestoreSQL" -Database "YourDatabase"

It’s a great way to validate backups and run CHECKDB in a lower-impact environment.
Not a replacement for CHECKDB in production, but a powerful supplement when time or resources are tight.

  • Consider running CHECKDB on a secondary replica if you’re using Availability Groups.
  • If CHECKDB fails due to size or takes too long, it’s even more important to find time and a strategy that works.

What to Do If You Find Corruption

  1. Read the output carefully.
    It tells you which object is affected and how.
  2. Run CHECKDB again to confirm.
    Temporary issues can happen, especially on shared storage.
  3. Do not detach the database.
    Doing so loses the ability to investigate further.
  4. Check your backups.
    Can you restore from before the corruption appeared? This is the first thing Microsoft will tell you when you call support.
  5. If you are really lucky, the corruption might be in a non-clustered index, and dropping/recreating that index may solve it for now.
  6. Still stuck?

Read this from Brent Ozar: DBCC CHECKDB Reports Corruption? Here’s What to Do

About REPAIR_ALLOW_DATA_LOSS

That command does exactly what it says: it removes damaged pages and objects to make the database consistent again—even if that means losing real data.

Use it only when:

  • You have no usable backup
  • You’ve consulted with your team and accepted the risk (get that in writing from your manager/CTO)
  • You’ve tried every other recovery option

If you’re not 100% sure what it’s going to delete (if anything), you’re not ready to run it. This is the sort of thing that can get you fired. So is not having backups.

How to Check When DBCC CHECKDB Was Last Run

This script gives you the last successful run for each database:

SELECT 
    name AS DatabaseName,
    DATABASEPROPERTYEX(name, 'LastGoodCheckDbTime') AS LastCheckDBSuccess
FROM 
    sys.databases
WHERE 
    state_desc = 'ONLINE'
ORDER BY 
    LastCheckDBSuccess DESC;

If the date shows 1900-01-01 (or is blank), CHECKDB has never completed successfully on that database.

The Bottom Line

Corruption doesn’t announce itself with a trumpet. You only know it’s there if you go looking.

CHECKDB gives you an early warning. It’s not glamorous, but it’s essential, especially in environments without a dedicated DBA watching for signs of trouble.

If you’re not running it, you’re flying blind.

If you don’t know what to do when it finds something, now’s the time to prepare.

Don’t panic. But don’t ignore it either.

 

Thanks for reading!

— Kevin

Filed Under: SQL, Troubleshooting Tagged With: syndicated

SQL Server I/O Bottlenecks: It’s Not Always the Disk’s Fault

August 6, 2025 by Kevin3NF

“SQL Server is slow.”

We’ve all heard it. But that doesn’t always mean SQL Server is the problem. And “slow” means nothing without context and the ability to verify it.

More often than you’d think, poor performance is rooted in the one thing most sysadmins don’t touch until it’s on fire: the disk subsystem.

Why I/O Bottlenecks Fly Under the Radar

Many IT teams blame queries, blocking, or missing indexes when performance tanks, and sometimes they’re right. But if you’re seeing symptoms like long wait times, timeouts, or sluggish backups, there’s a good chance the underlying storage is at fault. I’ve rarely seen a storage admin agree with this at the onset of the problem, so you need to do the work up front.

Unless you look for I/O issues, you might never find them.

Common Causes of SQL Server I/O Bottlenecks

  • Slow or oversubscribed storage
    Spinning disks, congested SANs, or underpowered SSDs can’t keep up with demand.
  • Outdated or faulty drivers
    We’ve seen HBA or RAID controller driver issues that looked like database bugs.
  • Auto-growths triggered during business hours
    Small filegrowth settings lead to frequent stalls. Instant File Initialization helps with this (data files only; log files cannot use IFI). If you cannot use IFI, manually grow your data files off-hours.
  • Bad indexing or bloated tables
    Too much data read, written, and maintained.
  • Unused indexes
    Every insert, update, or delete has to update them, whether they’re used or not. This one is a killer. My script is based on one my friend Pinal Dave wrote many years ago.
  • Data, log, and tempDB all sharing a volume
    A recipe for write contention and checkpoint stalls. The more separation you can do, the better. If everything is going through one controller, this might not help, especially in a VMware virtual controller configuration.
  • VM storage contention or thin provisioning
    Your VM’s dedicated storage might not be as dedicated as you think. Check with your admin to see if VMs have moved around and you are now in a “noisy neighbor” situation.
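The unused-index situation above shows up in sys.dm_db_index_usage_stats. A simplified sketch (not Kevin's script; note the DMV resets on every restart, so don't drop anything based on a recently rebooted server):

```sql
-- Non-clustered indexes with zero reads since last restart,
-- ordered by how many writes they're costing you
SELECT OBJECT_NAME(i.[object_id])   AS TableName,
       i.name                       AS IndexName,
       ISNULL(s.user_updates, 0)    AS writes,
       ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0) AS reads
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.[object_id]  = i.[object_id]
      AND s.index_id     = i.index_id
      AND s.database_id  = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.[object_id], 'IsUserTable') = 1
  AND ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0) = 0
ORDER BY writes DESC;
```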

 

What Do “Good” Disk Numbers Look Like?

If you’re not sure what “normal” looks like for your disks, a rough rule of thumb: average data file reads under 20ms and log writes under 5ms are generally healthy; sustained latencies above that deserve a closer look.

You can get these numbers using:

  • sys.dm_io_virtual_file_stats
  • Performance Monitor (Avg. Disk sec/Read, Disk Queue Length)
  • Disk benchmarking tools like CrystalDiskMark (local test environments)
  • The Resource Monitor Disk tab is a quick and easy way to see visually where the disks are spending their time, if you are on the server.
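For the DMV route, a starting-point query for per-file latency (the counters are cumulative since startup, so the averages smooth out short spikes):

```sql
-- Average read/write latency per database file, worst readers first
SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_ms DESC;
```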

 

Fixes and Workarounds

  • Identify and reduce high physical reads
    These indicate SQL Server is constantly pulling data from disk, which could be caused by poor indexing, insufficient memory, or queries reading too much data. sp_BlitzCache from Ozar can help with this. Use @SortOrder = ‘reads’ or ‘avg reads’. Sp_whoisactive can help if the issue is ongoing.
  • Tune queries with high reads reads
    Even if a query runs from memory, it can churn the buffer pool and evict useful pages, leading to other queries hitting disk more often.
  • Set reasonable autogrowth sizes
    Growing in 1MB chunks? That’s going to hurt. Aim for larger, consistent growth settings, especially for TempDB and transaction logs.
  • Move files to better storage
    Separate data, logs, TempDB, and backups if possible. SSDs or NVMe where it counts.
  • Clean up unused indexes
    If they’re not used for reads, they’re just extra write overhead. Especially your audit and logging tables that rarely get queried.
  • Keep your drivers and firmware current
    Storage vendors quietly fix performance bugs all the time.
  • Monitor your VM host’s disk utilization
    Especially in shared environments. Noisy neighbors can take you down.

 

The Bottom Line:

SQL Server does a lot of things right, but it can’t make slow storage go faster. Verify that storage is the likely culprit before you go yell at the storage admin.

Before you throw more CPU or memory at a problem, take a closer look at your I/O path. You might just find the real bottleneck isn’t SQL Server at all.

Thanks for reading!

— Kevin

 

Filed Under: Configuration, Performance Tuning, SQL Tagged With: syndicated

SQL Server Maintenance Plans

July 30, 2025 by Kevin3NF

If you’re a DBA, sysadmin, IT manager, or Accidental DBA, you’ve probably seen SQL Server’s built-in Maintenance Plans. They live right there in SSMS under the “Management” node, quietly offering to take care of your backups, index maintenance, integrity checks, random T-SQL tasks and more.

They look simple. They are simple. But that doesn’t mean they’re always the best solution.

 

What Maintenance Plans Can Do

Microsoft added Maintenance Plans to make basic tasks like backups accessible, especially in environments without a dedicated DBA.
The wizard-driven interface lets you:

  • Schedule Full, Differential, and Transaction Log backups
  • Perform index maintenance
  • Run DBCC CHECKDB
  • Execute basic cleanup tasks
  • Run T-SQL commands as part of the “flow”

And it all runs under SQL Server Agent so you can automate with just a few clicks.

 

What Maintenance Plans Can’t Do Well

Ease of use comes at the cost of flexibility.

Here’s where they fall short:

  • Limited control: You can’t fine-tune logic or dynamically skip steps based on conditions, at least not without a lot of fiddling around in the SSIS canvas
  • LOTS of clicking, dragging, dropping, and Googling if you are new to MPs. The Wizard will make some basic decisions for you.
  • Logging is basic: Failures often go unnoticed unless you check manually. If an MP job fails, the reason is in the MP history, not the job history. Makes perfect sense.
  • Weird defaults: An index rebuild plan defaults to 30% or more fragmentation and 1,000 pages, so you can spend a LOT of time rebuilding teeny tiny 8MB indexes (1,000 pages × 8KB per page).

If you’re working in a mission-critical or highly regulated environment, these gaps can cause trouble.

 

They’re Not Useless

Don’t get me wrong. Maintenance Plans have their place.

Especially if you’re:

  • Running one SQL Server instance with a couple of databases
  • Trying to get any backups in place after years of neglect. Any backup is better than no backup…but that’s a different post
  • Buying time until a better strategy is in place

 

Step-by-Step: How to Create a Full Backup Maintenance Plan

Let’s walk through the simplest case: backing up all user databases once a day.

  1. Launch the Wizard
  • In SSMS, expand Management
  • Right-click Maintenance Plans
  • Choose Maintenance Plan Wizard
  2. Name & Schedule the Plan
  • Click Next on the welcome screen
  • Name your plan (e.g., Nightly Full Backup)
  • Choose Single schedule for the entire plan
  • Click Change to set the schedule:
    • Frequency: Daily
    • Time: 2:00 AM (or another low-traffic time)
    • Recurs every: 1 day
  • Click OK, then Next
  3. Choose Task Type
  • Check only Back Up Database (Full) → Next
  4. Configure Backup Task
  • Databases: Select All user databases (or hand-pick)
  • Backup to: Disk → Choose or create a folder (e.g., D:\SQLBackups\)
    • URL is an option, for cloud storage
  • Optional:
    • Create a sub-directory per database
    • Set backup expiration
    • Enable checksum
  • Click Next
  5. Reporting (Optional)
  • Save the report to a text file or enable email notifications
    • The default location is the same directory your SQL Server error logs live in
  6. Finish
  • Review the summary
  • Click Finish to create and schedule the plan

Done. Backups will now run on schedule, and you’ve taken a first step.

But now you need to repeat that process for all the other maintenance tasks (Log backups, stats maintenance, CheckDB, etc.)

 

There’s a Better Way

Once you’re past the basics, most SQL Server professionals recommend moving on from Maintenance Plans. Here’s what they use:

Ola Hallengren’s Maintenance Solution

Free, flexible, and widely used in the SQL community.

  • Modular design
  • Intelligent scheduling
  • Excellent logging
  • Works with the SQL Agent
  • VERY simple setup. Please run this against a ‘DBA’ database, not master or msdb.

SQL Server Agent Jobs with Custom T-SQL

More setup time, but gives you full control over backup paths, logging, and error handling.

Third-Party Tools

If budget allows, options like Redgate SQL Backup or Idera SQL Safe Backup can offer robust UIs, centralized management, and alerts.

 

The Bottom Line

Maintenance Plans are training wheels.

They’ll get you moving, but they’re not built for high-speed, high-traffic, or high-stakes environments.

If you’re serious about protecting your data, build a better backup strategy. But if you’re just getting started and need a win? A Maintenance Plan beats nothing.

Filed Under: backup, Configuration, SQL, SSMS Tagged With: syndicated

SQL Server Post-Install Configurations

July 23, 2025 by Kevin3NF

The SQL Server installer has gotten better: tempdb configuration, MAXDOP, and even max memory can now be configured during setup.

But don’t be fooled: there’s still a post-install checklist that can make or break your environment over time. If you’ve ever inherited a server that “just ran” for years and started getting slower over time you’ve likely seen what happens when this list gets ignored.

These are not in any particular order, but some do require a restart of the server or the SQL Server Engine service to take effect:

  1. Enable and Configure Database Mail, Alerts, and Operators
    • Required for notifications from SQL Server Agent jobs and alerts.
    • Set up a mail profile and default operator.
    • Enables proactive failure detection and response.

 

  2. Set Up Alerts for High Severity errors 19–25, and 823–825 (corruption)
    • These represent serious errors such as disk issues, memory exhaustion, and corruption.
    • Configure SQL Agent alerts to trigger on these severity levels.
    • Route alerts to the Operator for immediate action.
    • Don’t forget to check the “Enable mail profile” box in the SQL Agent Properties>>Alert System page.
    • Some vulnerability/security tools intentionally send bad usernames or packets that trigger Severity 20 alerts. You may wind up disabling that one.
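A sketch of one such alert, for error 823 (the operator name is hypothetical; repeat the pattern for 824, 825, and severities 19–25):

```sql
-- Alert on error 823; @severity must be 0 when @message_id is used
EXEC msdb.dbo.sp_add_alert
    @name = N'Error 823 - I/O error',
    @message_id = 823,
    @severity = 0,
    @include_event_description_in = 1;

-- Route it to an operator by email (method 1 = email)
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Error 823 - I/O error',
    @operator_name = N'DBA Team',
    @notification_method = 1;
```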

 

  3. Enable Backup Compression by Default
    • Saves space and often speeds up backup jobs.
    • sp_configure 'backup compression default', 1 (followed by RECONFIGURE)
    • Reduces I/O load and backup windows on most systems.
    • No risk to this option.
    • SQL Server 2025 may enhance this; backup compression has been a simple on/off switch since it was introduced.

 

  4. Create and Schedule Maintenance Jobs
    • Avoid relying on default Maintenance Plans if you can.
    • Key tasks:
      • Full, differential, and log backups (user and system databases)
      • Integrity checks (DBCC CHECKDB)
      • Index and stats maintenance
        • What parameters and how often? Let’s argue in the comments!
    • Use Ola Hallengren’s Maintenance Solution for greater flexibility, better logging and more frequent updates. Free tool. Not supported by Microsoft.

 

  5. Configure Error Log Recycling
    • Prevent bloated error log files that slow down viewing or parsing.
    • Set SQL Server to recycle logs weekly (my preference)
    • Increase log retention to 12–30 files for historical troubleshooting. I like 25 so I have 6 months of data.
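The recycle itself is one line, easily scheduled as a weekly Agent job step (the retention count is a separate instance setting, configured under SQL Server Logs in SSMS):

```sql
-- Close the current error log and start a new one
EXEC sp_cycle_errorlog;
```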

 

  6. Apply Cumulative Updates and Security Fixes
    • SQL Server isn’t patched after install.
    • Download and apply the latest CU and any critical security updates.
      • Make sure your applications are compatible with the updates you are installing.
    • Document patch level and baseline configuration.
    • Restart after each install. Don’t leave a reboot pending for the next person.
    • Full list of CUs, Security patches and Service Packs

 

  7. Back Up System Databases Immediately
    • Even a fresh install has valuable information (logins, jobs, etc.).
    • Take manual backups of master, model, and msdb.
    • Set the model database parameters to what new databases will typically use (Auto settings, RCSI, etc.)
    • Repeat after significant changes (e.g., login creation, job setup, new databases), in addition to scheduled backups

 

  8. Verify Instant File Initialization (IFI)
    • IFI drastically reduces file growth and restore time.
    • Requires “Perform Volume Maintenance Tasks” right for the SQL Server service account.
      • This is an installation option, but it is often overlooked
    • Check via sys.dm_server_services.
    • Requires a SQL Server service restart to take effect.
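The DMV check mentioned above looks like this (the column is available in SQL Server 2016 SP1 and later):

```sql
-- 'Y' means IFI is enabled for the Database Engine service account
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```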

 

  9. Set Windows to High Performance Power Mode
    • Prevent CPU throttling that slows SQL Server under load.
    • Switch to High Performance mode via Windows Power Options.

 

  10. Reduce Surface Area
    • Disable unused features: Full-Text Search, SQLCLR, etc.
    • Disable SQL Browser if not using named instances.
    • Use sp_configure and SQL Server Configuration Manager to audit and lock down services.

 

  11. Review Default Permissions and Roles
    • Remove unused logins and review built-in accounts.
    • Disable or rename the ‘sa’ login if not in use.
    • Avoid assigning sysadmin unless absolutely necessary. Check it regularly.

 

  12. Instance-level configurations
    • Cost Threshold for Parallelism
      • Defaults to 5, through at least SQL 2022. I prefer 50 for an OLTP system that does some reporting/complex querying
    • Optimize for Ad Hoc Workloads (if you are going to have a lot of Ad Hoc and you are tight on memory)
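Both settings are plain sp_configure changes (50 and the ad hoc flag reflect the preferences discussed above, not universal values):

```sql
-- Both options require 'show advanced options'
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism', 50;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```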

 

The Bottom Line:

Finishing the install is just the beginning. These post-install configurations set the foundation for a stable, secure, and high-performing SQL Server. Skip them, and you’ll be firefighting later. Not every single setting applies to every server. Click the links, read the docs, do the research.

Filed Under: Configuration, SQL Tagged With: syndicated

SQL Server Database Compatibility Levels

July 16, 2025 by Kevin3NF

Why You Shouldn’t Overlook This Quiet but Critical SQL Server Setting

 

If you’ve ever upgraded a SQL Server instance and something just broke in your application, chances are high you’ve run into… Compatibility Level issues.

This quiet little setting determines how the SQL engine behaves—and it doesn’t always match the version of SQL Server you’re running.

Let’s unpack why this matters and how to keep it from biting you in production.

 

What Is a Compatibility Level, Anyway?

Every SQL Server database has a compatibility level that controls how certain features and behaviors operate—especially around T-SQL syntax, optimizer decisions, and deprecated functionality.

It’s Microsoft’s way of helping your database survive version upgrades… without immediately breaking your app.

Common levels:

  • 100 = SQL Server 2008
  • 110 = 2012
  • 120 = 2014
  • 130 = 2016
  • 140 = 2017
  • 150 = 2019
  • 160 = 2022
  • 170 = 2025 (presumably, still in CTP as of this writing)

Running SQL Server 2022 with a database at level 110? That means it’s still behaving like SQL 2012 in many ways.

 

Why This Can Cause Real Problems

Let’s say you upgrade your SQL Server from 2014 to 2019 and expect performance improvements, but instead things slow down, or worse, some queries fail entirely.

Why?

Because your database might still be running in compatibility level 120, and:

  • You’re missing optimizer enhancements introduced in later versions
  • Some new T-SQL features won’t work
  • You might even see unexpected errors or deprecated behaviors still being supported

On the flip side:

If you change the compatibility level too soon, you can break app functionality that relied on older behaviors.

 

Best Practices for Compatibility Levels

Check the current level before and after any upgrade:

SELECT name, compatibility_level FROM sys.databases;

Test thoroughly before changing it—ideally in a lower environment with production-like workloads.

Upgrade the compatibility level manually (it doesn’t change automatically with SQL Server version upgrades):

ALTER DATABASE YourDBName SET COMPATIBILITY_LEVEL = 150;

Monitor performance after changing it—you may need to update stats or review execution plans.

The Bottom Line:

Database compatibility level is easy to forget until it causes downtime or mysterious issues. Even then, it’s rarely the first thing investigated (query code and indexes usually are). Make it part of your upgrade checklist, not an afterthought.




Thanks for reading!

— Kevin

Filed Under: Configuration, Performance Tuning, SQL Tagged With: syndicated


 
