
DallasDBAs.com

SQL Server Database Consulting



SQL Server I/O Bottlenecks: It’s Not Always the Disk’s Fault

August 6, 2025 by Kevin3NF

“SQL Server is slow.”

We’ve all heard it. But that doesn’t always mean SQL Server is the problem. And “slow” means nothing without context and the ability to verify it.

More often than you’d think, poor performance is rooted in the one thing most sysadmins don’t touch until it’s on fire: the disk subsystem.

Why I/O Bottlenecks Fly Under the Radar

Many IT teams blame queries, blocking, or missing indexes when performance tanks, and sometimes they’re right. But if you’re seeing symptoms like long wait times, timeouts, or sluggish backups, there’s a good chance the underlying storage is at fault. I’ve rarely seen a storage admin agree with this at the outset of the problem, so you need to do the work up front.

Unless you look for I/O issues, you might never find them.

Common Causes of SQL Server I/O Bottlenecks

  • Slow or oversubscribed storage
    Spinning disks, congested SANs, or underpowered SSDs can’t keep up with demand.
  • Outdated or faulty drivers
    We’ve seen HBA or RAID controller driver issues that looked like database bugs.
  • Auto-growths triggered during business hours
    Small filegrowth settings lead to frequent stalls. Instant File Initialization helps here, but only for data files. If you cannot use IFI, manually grow your data files off-hours.
  • Bad indexing or bloated tables
    Too much data read, written, and maintained.
  • Unused indexes
    Every insert, update, or delete has to maintain them, whether they’re used or not. This one is a killer. My script is based on one my friend Pinal Dave wrote many years ago; a simplified sketch of the idea follows this list.
  • Data, log, and tempDB all sharing a volume
    A recipe for write contention and checkpoint stalls. The more separation you can do, the better. If everything is going through one controller, this might not help, especially in a VMware virtual controller configuration.
  • VM storage contention or thin provisioning
    Your VM’s dedicated storage might not be as dedicated as you think. Check with your admin to see if VMs have moved around and you are now in a “noisy neighbor” situation.
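
As a minimal sketch of the unused-index idea (not the full script), sys.dm_db_index_usage_stats shows indexes that are written to but never read. The DMV resets on instance restart, so don’t trust it right after a reboot:

-- Nonclustered indexes in the current database that are
-- maintained on every write but never used for reads
SELECT
    OBJECT_NAME(i.object_id) AS TableName,
    i.name AS IndexName,
    us.user_updates AS WritesPaid,
    us.user_seeks + us.user_scans + us.user_lookups AS TotalReads
FROM sys.indexes AS i
JOIN sys.dm_db_index_usage_stats AS us
    ON us.object_id = i.object_id
    AND us.index_id = i.index_id
WHERE us.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'
  AND us.user_seeks + us.user_scans + us.user_lookups = 0
ORDER BY us.user_updates DESC;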

 

What Do “Good” Disk Numbers Look Like?

If you’re not sure what “normal” looks like for your disks, here are some rough benchmarks: commonly cited guidance is under 10 ms per read for data files and under 5 ms per write for transaction logs, with sustained latencies above 20–30 ms worth investigating.

You can get these numbers using the tools below (a sample latency query follows the list):

  • sys.dm_io_virtual_file_stats
  • Performance Monitor (Avg. Disk sec/Read, Disk Queue Length)
  • Disk benchmarking tools like CrystalDiskMark (local test environments)
  • The Disk tab in Resource Monitor is a quick and easy way to see visually where the disks are spending their time, if you are on the server.
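
For the DMV route, here is a minimal latency sketch. The numbers are cumulative since the last restart, so look at trends rather than absolutes:

-- Average read/write latency per database file
SELECT
    DB_NAME(vfs.database_id) AS DatabaseName,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS AvgReadMs,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS AvgWriteMs
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
    AND mf.file_id = vfs.file_id
ORDER BY AvgReadMs DESC;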

 

Fixes and Workarounds

  • Identify and reduce high physical reads
    These indicate SQL Server is constantly pulling data from disk, which could be caused by poor indexing, insufficient memory, or queries reading too much data. sp_BlitzCache from Brent Ozar can help with this; use @SortOrder = 'reads' or 'avg reads'. sp_WhoIsActive can help if the issue is ongoing.
  • Tune queries with high reads
    Even if a query runs from memory, it can churn the buffer pool and evict useful pages, leading to other queries hitting disk more often.
  • Set reasonable autogrowth sizes
    Growing in 1MB chunks? That’s going to hurt. Aim for larger, consistent growth settings, especially for TempDB and transaction logs; see the example after this list.
  • Move files to better storage
    Separate data, logs, TempDB, and backups if possible. SSDs or NVMe where it counts.
  • Clean up unused indexes
    If they’re not used for reads, they’re just extra write overhead. Especially your audit and logging tables that rarely get queried.
  • Keep your drivers and firmware current
    Storage vendors quietly fix performance bugs all the time.
  • Monitor your VM host’s disk utilization
    Especially in shared environments. Noisy neighbors can take you down.
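
For the autogrowth item above, the change is one ALTER DATABASE per file. Database and logical file names here are hypothetical; pick an increment that fits your workload:

-- Hypothetical names; size the growth increment to your workload
ALTER DATABASE [YourDB]
    MODIFY FILE (NAME = N'YourDB_data', FILEGROWTH = 256MB);
ALTER DATABASE [YourDB]
    MODIFY FILE (NAME = N'YourDB_log', FILEGROWTH = 256MB);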

 

The Bottom Line:

SQL Server does a lot of things right, but it can’t make slow storage go faster. Verify the storage is the likely culprit before you go yell at the storage admin.

Before you throw more CPU or memory at a problem, take a closer look at your I/O path. You might just find the real bottleneck isn’t SQL Server at all.

Thanks for reading!

— Kevin

 

Filed Under: Configuration, Performance Tuning, SQL Tagged With: syndicated

SQL Server Maintenance Plans

July 30, 2025 by Kevin3NF

If you’re a DBA, sysadmin, IT manager, or Accidental DBA, you’ve probably seen SQL Server’s built-in Maintenance Plans. They live right there in SSMS under the “Management” node, quietly offering to take care of your backups, index maintenance, integrity checks, random T-SQL tasks and more.

They look simple. They are simple. But that doesn’t mean they’re always the best solution.

 

What Maintenance Plans Can Do

Microsoft added Maintenance Plans to make basic tasks like backups accessible, especially in environments without a dedicated DBA.
The wizard-driven interface lets you:

  • Schedule Full, Differential, and Transaction Log backups
  • Perform index maintenance
  • Run DBCC CHECKDB
  • Execute basic cleanup tasks
  • Run T-SQL commands as part of the “flow”

And it all runs under SQL Server Agent so you can automate with just a few clicks.

 

What Maintenance Plans Can’t Do Well

Ease of use comes at the cost of flexibility.

Here’s where they fall short:

  • Limited control: You can’t fine-tune logic or dynamically skip steps based on conditions. Not without a lot of fiddling around in the SSIS canvas, at least.
  • LOTS of clicking, dragging, dropping, Googling, etc., if you are new to MPs. The Wizard will make some basic decisions for you.
  • Logging is basic: Failures often go unnoticed unless you’re checking manually. If an MP job fails, the reason is in the MP history, not the job history. Makes perfect sense.
  • Weird defaults: If you choose to create an index rebuild plan, it defaults to 30% or more fragmentation and 1,000 pages. That’s a LOT of time spent on teeny tiny 8 MB indexes. Unless a page isn’t 8 KB anymore.

If you’re working in a mission-critical or highly regulated environment, these gaps can cause trouble.

 

They’re Not Useless

Don’t get me wrong. Maintenance Plans have their place.

Especially if you’re:

  • Running one SQL Server instance with a couple of databases
  • Trying to get any backups in place after years of neglect. Any backup is better than no backup…but that’s a different post
  • Buying time until a better strategy is in place

 

Step-by-Step: How to Create a Full Backup Maintenance Plan

Let’s walk through the simplest case: backing up all user databases once a day.

  1. Launch the Wizard
  • In SSMS, expand Management
  • Right-click Maintenance Plans
  • Choose Maintenance Plan Wizard
  2. Name & Schedule the Plan
  • Click Next on the welcome screen
  • Name your plan (e.g., Nightly Full Backup)
  • Choose Single schedule for the entire plan
  • Click Change to set the schedule:
    • Frequency: Daily
    • Time: 2:00 AM (or another low-traffic time)
    • Recurs every: 1 day
  • Click OK, then Next
  3. Choose Task Type
  • Check only Back Up Database (Full) → Next
  4. Configure Backup Task
  • Databases: Select All user databases (or hand-pick)
  • Backup to: Disk → Choose or create a folder (e.g., D:\SQLBackups\)
    • URL is an option, for cloud storage.
  • Optional:
    • Create a sub-directory per database
    • Set backup expiration
    • Enable checksum
  • Click Next
  5. Reporting (Optional)
  • Save report to a text file or enable email notifications
    • The default is the same directory your SQL ERRORLOGs live in.
  6. Finish
  • Review the summary
  • Click Finish to create and schedule the plan

Done. Backups will now run on schedule, and you’ve taken a first step.
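
Under the hood, the plan’s backup task amounts to a plain BACKUP DATABASE per database, roughly like this (folder and names hypothetical):

BACKUP DATABASE [YourDB]
TO DISK = N'D:\SQLBackups\YourDB\YourDB_Full.bak'
WITH CHECKSUM, STATS = 10;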

But now you need to repeat that process for all the other maintenance tasks (Log backups, stats maintenance, CheckDB, etc.)

 

There’s a Better Way

Once you’re past the basics, most SQL Server professionals recommend moving on from Maintenance Plans. Here’s what they use:

Ola Hallengren’s Maintenance Solution

Free, flexible, and widely used in the SQL community.

  • Modular design
  • Intelligent scheduling
  • Excellent logging
  • Works with the SQL Agent
  • VERY simple setup. Please run this against a ‘DBA’ database, not master or msdb.
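
As a sketch of how simple it is: once Ola’s scripts are installed, nightly full backups of all user databases are a single call to his DatabaseBackup procedure (typical parameters shown, not all of them):

EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = N'D:\SQLBackups',
    @BackupType = 'FULL',
    @Verify = 'Y',
    @Compress = 'Y',
    @CheckSum = 'Y',
    @CleanupTime = 168; -- delete backups older than 168 hours (7 days)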

SQL Server Agent Jobs with Custom T-SQL

More setup time, but gives you full control over backup paths, logging, and error handling.

Third-Party Tools

If budget allows, options like Redgate SQL Backup or Idera SQL Safe Backup can offer robust UIs, centralized management, and alerts.

 

The Bottom Line

Maintenance Plans are training wheels.

They’ll get you moving, but they’re not built for high-speed, high-traffic, or high-stakes environments.

If you’re serious about protecting your data, build a better backup strategy. But if you’re just getting started and need a win? A Maintenance Plan will do for now.

Filed Under: backup, Configuration, SQL, SSMS Tagged With: syndicated

SQL Server Post-Install Configurations

July 23, 2025 by Kevin3NF

The SQL Server installer has gotten better: tempdb configuration, MAXDOP, and even max memory can now be configured during setup.

But don’t be fooled: there’s still a post-install checklist that can make or break your environment over time. If you’ve ever inherited a server that “just ran” for years and then started getting slower, you’ve likely seen what happens when this list gets ignored.

These are not in any particular order, but some do require a restart of the server or the SQL Server Engine service to take effect:

  1. Enable and Configure Database Mail, Alerts, and Operators
    • Required for notifications from SQL Server Agent jobs and alerts.
    • Set up a mail profile and default operator.
    • Enables proactive failure detection and response.
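
Database Mail itself is easiest to configure through the SSMS wizard, but the operator piece is one call. Name and address here are hypothetical:

EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @email_address = N'dba-team@example.com';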

 

  2. Set Up Alerts for High-Severity Errors (19–25) and Errors 823–825 (Corruption)
    • These represent serious errors such as disk issues, memory exhaustion, and corruption.
    • Configure SQL Agent alerts to trigger on these severity levels.
    • Route alerts to the Operator for immediate action.
    • Don’t forget to check the “Enable mail profile” box in the SQL Agent Properties>>Alert System page.
    • Some vulnerability / security tools intentionally send in bad usernames or packets that trigger Severity 20 alerts. You may wind up disabling this one.
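
Here’s a sketch of one such alert, wired to the hypothetical ‘DBA Team’ operator from step 1; repeat for each severity and error number you want covered:

-- Alert on severity 21, then notify the operator by email
EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 021 - Fatal Error',
    @severity = 21,
    @include_event_description_in = 1;

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 021 - Fatal Error',
    @operator_name = N'DBA Team',
    @notification_method = 1; -- 1 = email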

 

  3. Enable Backup Compression by Default
    • Saves space and often speeds up backup jobs.
    • sp_configure 'backup compression default', 1
    • Reduces I/O load and backup windows on most systems.
    • No risk to this option.
    • SQL Server 2025 might be enhancing this. Backup compression has been an on/off switch since it was introduced.
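
The full, runnable form:

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;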

 

  4. Create and Schedule Maintenance Jobs
    • Avoid relying on default Maintenance Plans if you can.
    • Key tasks:
      • Full, differential, and log backups (user and system databases)
      • Integrity checks (DBCC CHECKDB)
      • Index and stats maintenance
        • What parameters and how often? Let’s argue in the comments!
    • Use Ola Hallengren’s Maintenance Solution for greater flexibility, better logging and more frequent updates. Free tool. Not supported by Microsoft.

 

  5. Configure Error Log Recycling
    • Prevent bloated error log files that slow down viewing or parsing.
    • Set SQL Server to recycle logs weekly (my preference)
    • Increase log retention to 12–30 files for historical troubleshooting. I like 25 so I have 6 months of data.
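
The recycle itself is one call, easy to wrap in a weekly Agent job. The retention count is configured in SSMS (right-click SQL Server Logs, then Configure):

EXEC sp_cycle_errorlog;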

 

  6. Apply Cumulative Updates and Security Fixes
    • SQL Server isn’t patched as part of the install.
    • Download and apply the latest CU and any critical security updates.
      • Make sure your applications are compatible with the updates you are installing.
    • Document patch level and baseline configuration.
    • Restart after each install. Don’t leave a reboot pending for the next person.
    • Full list of CUs, Security patches and Service Packs

 

  7. Back Up System Databases Immediately
    • Even a fresh install has valuable information (logins, jobs, etc.).
    • Take manual backups of master, model, and msdb.
    • Set the model database parameters to what new databases will typically use (Auto settings, RCSI, etc.)
    • Repeat after significant changes (e.g., login creation, job setup, new databases), in addition to scheduled backups

 

  8. Verify Instant File Initialization (IFI)
    • IFI drastically reduces data file growth and restore times.
    • Requires “Perform Volume Maintenance Tasks” right for the SQL Server service account.
      • This is an installation option, but it is often overlooked
    • Check via sys.dm_server_services.
    • Requires a SQL Server service restart to take effect.
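
The check is a single query; the column is available in SQL Server 2016 SP1 and later:

SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;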

 

  9. Set Windows to High Performance Power Mode
    • Prevent CPU throttling that slows SQL Server under load.
    • Switch to High Performance mode via Windows Power Options.

 

  10. Reduce Surface Area
    • Disable unused features: Full-Text Search, SQLCLR, etc.
    • Disable SQL Browser if not using named instances.
    • Use sp_configure and SQL Server Configuration Manager to audit and lock down services.

 

  11. Review Default Permissions and Roles
    • Remove unused logins and review built-in accounts.
    • Disable or rename the ‘sa’ login if not in use.
    • Avoid assigning sysadmin unless absolutely necessary. Check it regularly.

 

  12. Instance-level configurations
    • Cost Threshold for Parallelism
      • Defaults to 5 through at least SQL Server 2022. I prefer 50 for an OLTP system that does some reporting or complex querying
    • Optimize for Ad Hoc Workloads (if you are going to have a lot of Ad Hoc and you are tight on memory)
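
Both are sp_configure settings. The 50 below mirrors my preference above, not a universal answer:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;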

 

The Bottom Line:

Finishing the install is just the beginning. These post-install configurations set the foundation for a stable, secure, and high-performing SQL Server. Skip them, and you’ll be firefighting later. Not every single setting applies to every server. Click the links, read the docs, do the research.

Filed Under: Configuration, SQL Tagged With: syndicated

SQL Server Database Compatibility Levels

July 16, 2025 by Kevin3NF

Why You Shouldn’t Overlook This Quiet but Critical SQL Server Setting

 

If you’ve ever upgraded a SQL Server instance and something just broke in your application, chances are high you’ve run into… Compatibility Level issues.

This quiet little setting determines how the SQL engine behaves—and it doesn’t always match the version of SQL Server you’re running.

Let’s unpack why this matters and how to keep it from biting you in production.

 

What Is a Compatibility Level, Anyway?

Every SQL Server database has a compatibility level that controls how certain features and behaviors operate—especially around T-SQL syntax, optimizer decisions, and deprecated functionality.

It’s Microsoft’s way of helping your database survive version upgrades… without immediately breaking your app.

Common levels:

  • 100 = SQL Server 2008
  • 110 = 2012
  • 120 = 2014
  • 130 = 2016
  • 140 = 2017
  • 150 = 2019
  • 160 = 2022
  • 170 = 2025 (presumably, still in CTP as of this writing)

Running SQL Server 2022 with a database at level 110? That means it’s still behaving like SQL 2012 in many ways.

 

Why This Can Cause Real Problems

Let’s say you upgrade your SQL Server from 2014 to 2019 and expect performance improvements, but instead things slow down, or worse, some queries fail entirely.

Why?

Because your database might still be running in compatibility level 120, and:

  • You’re missing optimizer enhancements introduced in later versions
  • Some new T-SQL features won’t work
  • You might even see unexpected errors or deprecated behaviors still being supported

On the flip side:

If you change the compatibility level too soon, you can break app functionality that relied on older behaviors.

 

Best Practices for Compatibility Levels

Check the current level before and after any upgrade:

SELECT name, compatibility_level FROM sys.databases;

Test thoroughly before changing it—ideally in a lower environment with production-like workloads.

Upgrade the compatibility level manually (it doesn’t change automatically with SQL Server version upgrades):

ALTER DATABASE YourDBName SET COMPATIBILITY_LEVEL = 150;

Monitor performance after changing it—you may need to update stats or review execution plans.
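
A common follow-up, sketched here, is refreshing statistics so the newer optimizer starts from current data:

EXEC sys.sp_updatestats;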

The Bottom Line:

Database compatibility level is easy to forget until it causes downtime or mysterious issues. Even then, it’s rarely the first thing investigated (query code and indexes usually are). Make it part of your upgrade checklist, not an afterthought.


New Pocket DBA® clients get the first month FREE!

https://pocket-dba.com/

Book a call, and mention “Newsletter”


Thanks for reading!

— Kevin

Filed Under: Configuration, Performance Tuning, SQL Tagged With: syndicated

SQL Server Auto-settings: The Good, the Bad, and the Ugly

July 9, 2025 by Kevin3NF

Settings That Could Be Hurting Your Performance

If you’ve ever created a new SQL Server database and just left the default settings alone, you’re not alone.

Microsoft provides a lot of “helpful” options under the hood (especially at the database level) designed for flexibility and ease of use. Some of them are better suited for development and testing environments, not production.

Let’s take a look at the three most misunderstood “auto” settings in SQL Server, and how they might be quietly causing you pain behind the scenes.

Preface: SQL Server is a shared resource system. A database with bad settings/performance can be much like a screaming baby on a plane. One crying person, but everyone is affected.

 

Auto_Close – The Bad

This setting causes the database to shut itself down after the last user disconnects. Sounds resource-friendly, right? Until you realize it has to reopen the database every time someone queries it again, which can take time and may result in application timeouts. Per Microsoft, “The AUTO_CLOSE process is asynchronous; repeatedly opening and closing the database doesn’t reduce performance.”

  • Impact: Adds latency and I/O traffic every time the database “wakes up”
  • Use case: Most useful in local development or single-user environments
  • Recommendation: Set to FALSE for anything but the most trivial uses

 

Auto_Shrink – The Ugly

This setting allows SQL Server to automatically shrink your database files to reclaim space. Unfortunately, it often creates more problems than it solves.

  • Impact: Causes frequent file growth/shrink cycles, which may lead to fragmentation and performance issues. Shrinking data files can cause locking and blocking. Do these manually, off-hours, when required.
  • Use case: Almost never appropriate for production
  • Recommendation: Set to FALSE and manage growth/shrink operations manually and intentionally
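
Turning both of these off is quick (database name hypothetical):

ALTER DATABASE [YourDB] SET AUTO_CLOSE OFF;
ALTER DATABASE [YourDB] SET AUTO_SHRINK OFF;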

 

Auto_Update_Statistics – The Good (Usually)

This one usually helps. SQL Server uses statistics to create execution plans. Keeping them current is critical for performance.

  • Impact: Keeps query plans accurate and efficient
  • Watch out: On very large tables, this setting can trigger noticeable delays if updates happen mid-query
  • Recommendation: Leave it TRUE in most cases. Also consider enabling AUTO_UPDATE_STATISTICS_ASYNC to prevent blocking.
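
The async option is a one-liner, again with a hypothetical database name:

ALTER DATABASE [YourDB] SET AUTO_UPDATE_STATISTICS_ASYNC ON;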

 

Other “Auto” Settings – Grouped for Sanity

Here are a few more database-level settings worth knowing about:

  • Auto_Create_Statistics: Generally good. Leave it on unless you’re manually managing stats at an advanced level.
  • Auto_Update_Statistics_Async: Helpful in larger environments to reduce blocking on stats updates.
  • Auto Create Incremental Statistics (2014+): Niche but useful for partitioned tables.
    • ALTER DATABASE [YourDatabase] SET AUTO_CREATE_STATISTICS ON (INCREMENTAL = ON);
  • Automatic_Tuning (starting with SQL Server 2017):
    • Requires Query Store to be enabled and in read-write mode
    • AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = { DEFAULT | ON | OFF } )
    • I have not used this yet…proceed with caution

 

The Bottom Line: Set It and Regret It?

Default “auto” settings are often a compromise between convenience and control. For accidental DBAs and small teams, it’s easy to assume the defaults are “smart.” But smart for Microsoft’s general use case doesn’t always mean smart for your production workload.

It’s worth taking a few minutes per database to review these settings and make intentional decisions, especially Auto-Close and Auto-Shrink. These two alone are the source of more performance headaches than almost any others I see during client assessments.


SQL TidBits:

Want to quickly find which databases have these settings enabled?

SELECT 
    [name] AS DatabaseName,
    is_auto_close_on,
    is_auto_shrink_on,
    is_auto_create_stats_on,
    is_auto_update_stats_on,
    is_auto_update_stats_async_on
FROM 
    sys.databases;

Thanks for reading!

— Kevin

Filed Under: Configuration, Install, Performance Tuning, SQL Tagged With: syndicated

SQL Server Editions: What’s the Difference and Why It Matters

June 25, 2025 by Kevin3NF

Understanding the Limits Before You Hit Them

If you’re responsible for a SQL Server instance but don’t live and breathe licensing, you’re not alone.

Many IT managers and sysadmins inherit a SQL Server setup only to find out later that they’re boxed in by the edition they didn’t choose. And by the time those limits show up, it’s often during a crisis. I’ve seen this many times over the years.

Let’s break down the main SQL Server editions (Express, Web, Standard, and Enterprise) and what makes each one different. Not just in features, but in what they let you do… or stop you from doing.

 

SQL Server Express: Free, But with Hard Limits

This is Microsoft’s free entry-level edition, often bundled with lightweight apps or used by developers.

  • Max database size: 10 GB per database
  • Max memory usage (SQL engine): 1.4 GB
  • Max CPU usage: lesser of 1 physical socket, or 4 cores
  • Agent jobs: Not supported – this is a big deal

 

Use case: OK for dev/test, very small apps, or teaching environments.

Gotchas: You’ll quickly hit resource limits on memory or database size. Backup and maintenance automation via SQL Server Agent isn’t available; you have to script around it and schedule jobs externally (Windows Task Scheduler, for example).

 

SQL Server Web: Affordable for Hosting, Limited for Business Use

Designed for web hosting providers – only available through certain Service Provider License Agreement partners.

  • Max DB size: No enforced limit
  • Max memory usage: 64 GB for DB engine
  • Max CPU usage: lesser of 4 sockets or 16 cores
  • SQL Agent is available for job scheduling
  • Key limitations: No advanced high availability (e.g., no AGs), no TDE (encryption), limited BI and performance features

 

Use case: Public-facing websites and web apps hosted via service providers.

Gotchas: Often confused with Standard Edition, but it’s not intended for in-house business workloads.

 

SQL Server Standard: The Most Common, and Commonly Misunderstood

This edition powers a majority of small and mid-size businesses.

  • Max DB size: No enforced limit (limited only by hardware)
  • Max memory usage: 128 GB for the SQL engine (other components like SSRS have their own limits)
  • Max CPU usage: lesser of 4 sockets or 24 cores (configuring your VM setup is critical here)
  • Includes: Backup compression, basic availability features (Basic Availability Groups with one database each), TDE (SQL 2019+)
  • SQL Server Failover Clustering on 2 nodes is available
  • SQL Agent is available for job scheduling

 

Use case: OLTP workloads, departmental databases, general business apps.

Gotchas: You’ll hit memory and CPU limits faster than expected in busy environments. It lacks advanced features like partitioning, online index rebuilds, and many encryption/performance options.

 

SQL Server Enterprise: All the Bells and Whistles

The top-tier edition. If you’re asking “Can SQL Server do this?” Enterprise almost always says yes.

  • Max DB size: No enforced limit
  • Max memory usage: OS limit (can use terabytes)
  • Max CPU usage: OS limit (can use 64+ cores)
  • Includes: Online index operations, table partitioning, in-memory OLTP, Always On with multiple DBs, TDE, columnstore indexes, and more
  • SQL Agent is available for job scheduling

 

Use case: High-performance, mission-critical systems. Often used with HA/DR needs, heavy workloads, and large-scale BI.

Gotchas: Very expensive. Licensing is per core, with a 4-core minimum per physical server or VM.

 

SQL Server Developer: Enterprise for non-production usage.

SQL Server 2025 may add a Developer edition with Standard Edition compatibility. As of this writing, 2025 is in public preview.

 

So… Which Edition Do You Have?

You can find out by running this query:

SELECT
    SERVERPROPERTY('Edition') AS Edition,
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('EngineEdition') AS EngineEdition;

 

The Bottom Line:

Don’t wait until a restore fails or CPU caps out to figure out what edition you’re on. Knowing your edition helps you plan capacity, budget smarter, and avoid downtime when the server starts groaning.

Filed Under: Configuration, Install, SQL Tagged With: syndicated
