
DallasDBAs.com

SQL Server Database Consulting


SQL Server Post-Install Configurations

July 23, 2025 by Kevin3NF

The SQL Server installer has gotten better: tempdb configuration, MAXDOP, and even max memory can now be configured during setup.

But don’t be fooled: there’s still a post-install checklist that can make or break your environment over time. If you’ve ever inherited a server that “just ran” for years and started getting slower over time you’ve likely seen what happens when this list gets ignored.

These are not in any particular order, but some do require a restart of the server or the SQL Server Engine service to take effect:

  1. Enable and Configure Database Mail, Alerts, and Operators
    • Required for notifications from SQL Server Agent jobs and alerts.
    • Set up a mail profile and default operator.
    • Enables proactive failure detection and response.
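A minimal T-SQL sketch of the setup, assuming placeholder account, profile, and operator names (swap in your own SMTP details):

```sql
-- Enable the Database Mail feature
EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;      RECONFIGURE;

-- Create a mail account and profile (names and SMTP server are placeholders)
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name    = 'DBA_Mail',
    @email_address   = 'sql@yourcompany.com',
    @mailserver_name = 'smtp.yourcompany.com';
EXEC msdb.dbo.sysmail_add_profile_sp @profile_name = 'DBA_Profile';
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'DBA_Profile', @account_name = 'DBA_Mail', @sequence_number = 1;

-- Default operator for SQL Agent notifications
EXEC msdb.dbo.sp_add_operator
    @name = 'DBA_Team', @email_address = 'dba-team@yourcompany.com';
```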

 

  2. Set Up Alerts for High-Severity Errors (19–25) and Errors 823–825 (Corruption)
    • These represent serious errors such as disk issues, memory exhaustion, and corruption.
    • Configure SQL Agent alerts to trigger on these severity levels.
    • Route alerts to the Operator for immediate action.
    • Don’t forget to check the “Enable mail profile” box in the SQL Agent Properties>>Alert System page.
    • Some vulnerability / security tools intentionally send in bad usernames or packets that trigger Severity 20 alerts. You may wind up disabling this one.
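One way to script the alerts, assuming an operator named 'DBA_Team' already exists (errors 823/824 raise severity 24 and are caught by the severity alerts; 825 is only severity 10, so it needs its own alert):

```sql
-- One alert per severity level 19-25, each routed to the operator by email
DECLARE @sev int = 19;
WHILE @sev <= 25
BEGIN
    DECLARE @alert sysname = N'Severity ' + CAST(@sev AS nvarchar(2));
    EXEC msdb.dbo.sp_add_alert @name = @alert, @severity = @sev;
    EXEC msdb.dbo.sp_add_notification @alert_name = @alert,
         @operator_name = N'DBA_Team', @notification_method = 1;  -- 1 = email
    SET @sev += 1;
END;

-- Error 825 (read-retry) is only severity 10, so alert on the error number
EXEC msdb.dbo.sp_add_alert @name = N'Error 825', @message_id = 825, @severity = 0;
EXEC msdb.dbo.sp_add_notification @alert_name = N'Error 825',
     @operator_name = N'DBA_Team', @notification_method = 1;
```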

 

  3. Enable Backup Compression by Default
    • Saves space and often speeds up backup jobs.
    • sp_configure 'backup compression default', 1
    • Reduces I/O load and backup windows on most systems.
    • No real risk to this option.
    • SQL Server 2025 may enhance this; backup compression has been a simple on/off switch since it was introduced.
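The full form of the command from the bullet above, including the RECONFIGURE that makes it take effect:

```sql
-- Turn on backup compression instance-wide; no restart required
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```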

 

  4. Create and Schedule Maintenance Jobs
    • Avoid relying on default Maintenance Plans if you can.
    • Key tasks:
      • Full, differential, and log backups (user and system databases)
      • Integrity checks (DBCC CHECKDB)
      • Index and stats maintenance
        • What parameters and how often? Let's argue in the comments!
    • Use Ola Hallengren's Maintenance Solution for greater flexibility, better logging, and more frequent updates. It's a free tool, not supported by Microsoft.
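If you go the Ola Hallengren route, a typical full-backup job step looks something like this (the path and retention below are examples; the parameters are from his documented DatabaseBackup procedure):

```sql
-- Full backups of all user databases, compressed and verified
EXECUTE dbo.DatabaseBackup
    @Databases   = 'USER_DATABASES',
    @Directory   = 'X:\Backups',   -- example path
    @BackupType  = 'FULL',
    @Compress    = 'Y',
    @Verify      = 'Y',
    @CheckSum    = 'Y',
    @CleanupTime = 72;             -- delete backups older than 72 hours
```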

 

  5. Configure Error Log Recycling
    • Prevent bloated error log files that slow down viewing or parsing.
    • Set SQL Server to recycle logs weekly (my preference).
    • Increase log retention to 12–30 files for historical troubleshooting. I like 25, so I have 6 months of data.

 

  6. Apply Cumulative Updates and Security Fixes
    • SQL Server isn't patched automatically after install.
    • Download and apply the latest CU and any critical security updates.
      • Make sure your applications are compatible with the updates you are installing.
    • Document the patch level and baseline configuration.
    • Restart after each install. Don't leave a reboot pending for the next person.
    • Full list of CUs, Security patches and Service Packs
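A quick way to document your current patch level (ProductUpdateLevel reports the CU on SQL Server 2017 and later):

```sql
-- Current build, servicing level, and CU level
SELECT SERVERPROPERTY('ProductVersion')     AS Build,
       SERVERPROPERTY('ProductLevel')       AS ServicePackLevel,
       SERVERPROPERTY('ProductUpdateLevel') AS CULevel;
```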

 

  7. Back Up System Databases Immediately
    • Even a fresh install has valuable information (logins, jobs, etc.).
    • Take manual backups of master, model, and msdb.
    • Set the model database parameters to what new databases will typically use (Auto settings, RCSI, etc.)
    • Repeat after significant changes (e.g., login creation, job setup, new databases), in addition to scheduled backups
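A quick manual pass might look like this (the backup path is an example; adjust for your environment):

```sql
-- One-off backups of the writable system databases
BACKUP DATABASE master TO DISK = 'X:\Backups\master.bak' WITH COMPRESSION, CHECKSUM;
BACKUP DATABASE model  TO DISK = 'X:\Backups\model.bak'  WITH COMPRESSION, CHECKSUM;
BACKUP DATABASE msdb   TO DISK = 'X:\Backups\msdb.bak'   WITH COMPRESSION, CHECKSUM;
```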

 

  8. Verify Instant File Initialization (IFI)
    • IFI drastically reduces the time needed for data file growth and restores.
    • Requires the "Perform Volume Maintenance Tasks" right for the SQL Server service account.
      • This is an installation option, but it is often overlooked.
    • Check via sys.dm_server_services.
    • Requires a SQL Server service restart to take effect.
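The check itself (the instant_file_initialization_enabled column was added around SQL Server 2016 SP1; older builds won't have it):

```sql
-- 'Y' means IFI is enabled for that service
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```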

 

  9. Set Windows to High Performance Power Mode
    • Prevent CPU throttling that slows SQL Server under load.
    • Switch to High Performance mode via Windows Power Options.
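The same change from an elevated command prompt (SCHEME_MIN is the built-in alias for the High Performance plan, i.e. minimum power savings):

```shell
# Switch the active Windows power plan to High Performance
powercfg /setactive SCHEME_MIN
# Verify which plan is now active
powercfg /getactivescheme
```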

 

  10. Reduce Surface Area
    • Disable unused features: Full-Text Search, SQLCLR, etc.
    • Disable SQL Browser if not using named instances.
    • Use sp_configure and SQL Server Configuration Manager to audit and lock down services.
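A sketch of the sp_configure side of the lockdown (only disable what you've confirmed is unused):

```sql
EXEC sp_configure 'show advanced options', 1;        RECONFIGURE;
EXEC sp_configure 'clr enabled', 0;                  RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;                  RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 0;   RECONFIGURE;
```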

 

  11. Review Default Permissions and Roles
    • Remove unused logins and review built-in accounts.
    • Disable or rename the ‘sa’ login if not in use.
    • Avoid assigning sysadmin unless absolutely necessary. Check it regularly.
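A quick audit query for the regular sysadmin check:

```sql
-- Who is in the sysadmin fixed server role?
SELECT p.name, p.type_desc
FROM sys.server_role_members rm
JOIN sys.server_principals r ON rm.role_principal_id   = r.principal_id
JOIN sys.server_principals p ON rm.member_principal_id = p.principal_id
WHERE r.name = 'sysadmin';
```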

 

  12. Instance-Level Configurations
    • Cost Threshold for Parallelism
      • Defaults to 5 through at least SQL Server 2022. I prefer 50 for an OLTP system that does some reporting or complex querying.
    • Optimize for Ad Hoc Workloads (if you expect a lot of ad hoc queries and are tight on memory)
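Both settings in T-SQL (the value of 50 is my starting point from above, not gospel; test against your workload):

```sql
EXEC sp_configure 'show advanced options', 1;             RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;   RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;     RECONFIGURE;
```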

 

The Bottom Line:

Finishing the install is just the beginning. These post-install configurations set the foundation for a stable, secure, and high-performing SQL Server. Skip them, and you’ll be firefighting later. Not every single setting applies to every server. Click the links, read the docs, do the research.

Filed Under: Configuration, SQL Tagged With: syndicated

SQL Server Database Compatibility Levels

July 16, 2025 by Kevin3NF

Why You Shouldn’t Overlook This Quiet but Critical SQL Server Setting

 

If you’ve ever upgraded a SQL Server instance and something just broke in your application, chances are high you’ve run into… Compatibility Level issues.

This quiet little setting determines how the SQL engine behaves—and it doesn’t always match the version of SQL Server you’re running.

Let’s unpack why this matters and how to keep it from biting you in production.

 

What Is a Compatibility Level, Anyway?

Every SQL Server database has a compatibility level that controls how certain features and behaviors operate—especially around T-SQL syntax, optimizer decisions, and deprecated functionality.

It’s Microsoft’s way of helping your database survive version upgrades… without immediately breaking your app.

Common levels:

  • 100 = SQL Server 2008
  • 110 = 2012
  • 120 = 2014
  • 130 = 2016
  • 140 = 2017
  • 150 = 2019
  • 160 = 2022
  • 170 = 2025 (presumably, still in CTP as of this writing)

Running SQL Server 2022 with a database at level 110? That means it’s still behaving like SQL 2012 in many ways.

 

Why This Can Cause Real Problems

Let’s say you upgrade your SQL Server from 2014 to 2019 and expect performance improvements, but instead things slow down, or worse, some queries fail entirely.

Why?

Because your database might still be running in compatibility level 120, and:

  • You’re missing optimizer enhancements introduced in later versions
  • Some new T-SQL features won’t work
  • You might even see unexpected errors or deprecated behaviors still being supported

On the flip side:

If you change the compatibility level too soon, you can break app functionality that relied on older behaviors.

 

Best Practices for Compatibility Levels

Check the current level before and after any upgrade:

SELECT name, compatibility_level FROM sys.databases;

Test thoroughly before changing it—ideally in a lower environment with production-like workloads.

Upgrade the compatibility level manually (it doesn’t change automatically with SQL Server version upgrades):

ALTER DATABASE YourDBName SET COMPATIBILITY_LEVEL = 150;

Monitor performance after changing it—you may need to update stats or review execution plans.

The Bottom Line:

Database compatibility level is easy to forget until it causes downtime or mysterious issues. Even then, it's rarely the first thing investigated (query code and indexes usually are). Make it part of your upgrade checklist, not an afterthought.


New Pocket DBA® clients get the first month FREE!

https://pocket-dba.com/

Book a call, and mention “Newsletter”


Thanks for reading!

— Kevin

Filed Under: Configuration, Performance Tuning, SQL Tagged With: syndicated

SQL Server Auto-settings: The Good, the Bad, and the Ugly

July 9, 2025 by Kevin3NF

Settings That Could Be Hurting Your Performance

If you’ve ever created a new SQL Server database and just left the default settings alone, you’re not alone.

Microsoft provides a lot of “helpful” options under the hood (especially at the database level) designed for flexibility and ease of use. Some of them are better suited for development and testing environments, not production.

Let’s take a look at the three most misunderstood “auto” settings in SQL Server, and how they might be quietly causing you pain behind the scenes.

Preface: SQL Server is a shared resource system. A database with bad settings/performance can be much like a screaming baby on a plane. One crying person, but everyone is affected.

 

Auto_Close – The Bad

This setting causes the database to shut itself down after the last user disconnects. Sounds resource-friendly, right? Until you realize it has to reopen the database every time someone queries it again, which can take time and may result in application timeouts. Per Microsoft, "The AUTO_CLOSE process is asynchronous; repeatedly opening and closing the database doesn't reduce performance."

  • Impact: Adds latency and I/O traffic every time the database “wakes up”
  • Use case: Most useful in local development or single-user environments
  • Recommendation: Set to FALSE for anything but the most trivial uses
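Turning it off is a one-liner (the database name is a placeholder):

```sql
ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;
```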

 

Auto_Shrink – The Ugly

This setting allows SQL Server to automatically shrink your database files to reclaim space. Unfortunately, it often creates more problems than it solves.

  • Impact: Causes frequent file growth/shrink cycles, which may lead to fragmentation and performance issues. Shrinking of data files can cause locking and blocking issues. Do these manually, and off hours when required.
  • Use case: Almost never appropriate for production
  • Recommendation: Set to FALSE and manage growth/shrink operations manually and intentionally
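Disable it, and shrink manually only when genuinely required (names and target size below are examples):

```sql
ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF;

-- If you must reclaim space, do it deliberately and off-hours:
-- DBCC SHRINKFILE (N'YourDataFile', 10240);  -- target size in MB
```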

 

Auto_Update_Statistics – The Good (Usually)

This one usually helps. SQL Server uses statistics to create execution plans. Keeping them current is critical for performance.

  • Impact: Keeps query plans accurate and efficient
  • Watch out: On very large tables, this setting can trigger noticeable delays if updates happen mid-query
  • Recommendation: Leave it TRUE in most cases. Also consider enabling AUTO_UPDATE_STATISTICS_ASYNC to prevent blocking.
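Both recommendations in T-SQL (database name is a placeholder):

```sql
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```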

 

Other “Auto” Settings – Grouped for Sanity

Here are a few more database-level settings worth knowing about:

  • Auto_Create_Statistics: Generally good. Leave it on unless you’re manually managing stats at an advanced level.
  • Auto_Update_Statistics_Async: Helpful in larger environments to reduce blocking on stats updates.
  • Auto Create Incremental Statistics (2014+): Niche but useful for partitioned tables.
    • ALTER DATABASE [Yourdatabase] SET AUTO_CREATE_STATISTICS ON (INCREMENTAL = ON)
  • Automatic_Tuning (starting with SQL Server 2017):
    • Requires Query Store to be enabled and in read-write mode
    • AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = { DEFAULT | ON | OFF } )
    • I have not used this yet…proceed with caution

 

The Bottom Line: Set It and Regret It?

Default “auto” settings are often a compromise between convenience and control. For accidental DBAs and small teams, it’s easy to assume the defaults are “smart.” But smart for Microsoft’s general use case doesn’t always mean smart for your production workload.

It’s worth taking a few minutes per database to review these settings and make intentional decisions, especially Auto-Close and Auto-Shrink. These two alone are the source of more performance headaches than almost any others I see during client assessments.


SQL TidBits:

Want to quickly find which databases have these settings enabled?

SELECT 
    [name] AS DatabaseName,
    is_auto_close_on,
    is_auto_shrink_on,
    is_auto_create_stats_on,
    is_auto_update_stats_on,
    is_auto_update_stats_async_on
FROM 
    sys.databases;

Thanks for reading!

— Kevin

Filed Under: Configuration, Install, Performance Tuning, SQL Tagged With: syndicated

SQL Server Errorlog: Not Just for Errors!

July 2, 2025 by Kevin3NF

A Goldmine of Clues You Might Be Ignoring

SQL Server’s Error Logs often get overlooked, until something breaks. But if you know how to use them, they can tell you what went wrong before you’re knee-deep in outage mode.

Whether you’re a sysadmin who inherited SQL or an MSP juggling multiple environments, understanding the ErrorLog is low effort, high return.

Let’s decode the basics.

What Is the SQL Server Error Log?

It's not a crash dump. It's not even just errors. The errorlog is a plain-text file that tracks:

  • Startup and shutdown events
  • Login successes and failures (configurable)
  • Backup and restore activity
  • Deadlocks (if enabled)
  • Corruption errors
  • Memory dump info
  • Full data and log file warnings
  • And more!

 

It’s like a flight recorder for SQL Server. If something bad happens, it probably shows up here.

How Many Logs Are There? And Where Are They?

By default, SQL Server keeps:

  • 7 error logs (Errorlog (current) through Errorlog.6)
  • Each gets recycled when the SQL Server service restarts or the logs are manually cycled
  • Located in the same folder as your SQL Server binaries (or wherever SQL was installed)
    • If you can't find them, see this short video: How to find the SQL Server Errorlog
    • Also, the location is the -e startup parameter in SQL Server Configuration Manager

 

You can view them in SQL Server Management Studio under Management > SQL Server Logs or directly from the file system.

Changing the Retention Settings

Seven logs may not be enough if you cycle the service frequently. Here’s how to increase retention:

USE [master];
GO

-- 0 = no size limit on each errorlog file
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'ErrorLogSizeInKb', REG_DWORD, 0
GO

-- Keep 25 archived errorlog files instead of the default 6
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 25
GO

If you don’t like registry edits, use SSMS:

Management >> SQL Server Logs >> Right-click: Configure. Check the "Limit the number of error log…" box, change the "Maximum number of error log files" to your preference, and click OK.

 

You can also cycle the log manually without restarting SQL Server:

EXEC sp_cycle_errorlog;

 

Do this regularly (e.g., weekly) via SQL Agent job so each log stays readable and relevant.

I like weekly cycling with 25 files retained, so I have about 6 months of reasonably sized log files.
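A minimal Agent job for the weekly cycle (job name and schedule below are examples):

```sql
-- Create a weekly job that cycles the errorlog every Sunday at 01:00
EXEC msdb.dbo.sp_add_job @job_name = N'DBA - Cycle Errorlog';
EXEC msdb.dbo.sp_add_jobstep @job_name = N'DBA - Cycle Errorlog',
     @step_name = N'Cycle', @subsystem = N'TSQL',
     @command = N'EXEC sp_cycle_errorlog;';
EXEC msdb.dbo.sp_add_jobschedule @job_name = N'DBA - Cycle Errorlog',
     @name = N'Weekly Sunday',
     @freq_type = 8,                    -- 8 = weekly
     @freq_interval = 1,                -- 1 = Sunday
     @freq_recurrence_factor = 1,       -- every week
     @active_start_time = 10000;        -- 01:00:00
EXEC msdb.dbo.sp_add_jobserver @job_name = N'DBA - Cycle Errorlog';
```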

Finding the Signal in the Noise (Filtering)

The error log can be…wordy. Thankfully, SSMS lets you filter by multiple options:

 

For command-line lovers:

EXEC sp_readerrorlog 0, 1, 'Login failed';

This searches the current log for login failures.

Want to scan older logs? Change the first parameter:

EXEC sp_readerrorlog 3, 1, 'backup';  -- Searches the 4th oldest log

(xp_readerrorlog exists, but is undocumented. Use sp_readerrorlog instead.)

More Filtering:

If you can, log only failed logins, not both failed and successful. Your industry may require other setups.

Use trace flag 3226 to NOT log successful transaction log backups. Those are already logged in the msdb tables and job history.
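Enabling the trace flag looks like this:

```sql
-- Suppress successful-backup messages in the errorlog
DBCC TRACEON (3226, -1);   -- -1 = instance-wide, lasts until restart

-- To make it permanent, add -T3226 as a startup parameter
-- in SQL Server Configuration Manager.
```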

 

The Bottom Line: Set It and Watch It

  • Check retention: 7 logs may not be enough
  • Automate log cycling so logs don’t get so big they are unreadable
  • Use filters to quickly find useful info

Some monitoring tools read the logs for you, but you should still know how

 

 

Filed Under: SQL, SSMS, Tools, Troubleshooting Tagged With: syndicated

SQL Server Editions: What’s the Difference and Why It Matters

June 25, 2025 by Kevin3NF

Understanding the Limits Before You Hit Them

If you’re responsible for a SQL Server instance but don’t live and breathe licensing, you’re not alone.

Many IT managers and sysadmins inherit a SQL Server setup only to find out later that they’re boxed in by the edition they didn’t choose. And by the time those limits show up, it’s often during a crisis. I’ve seen this many times over the years.

Let’s break down the main SQL Server editions: Express, Web, Standard, and Enterprise as well as what makes each one different. Not just in features, but in what they let you do… or stop you from doing.

 

SQL Server Express: Free, But with Hard Limits

This is Microsoft’s free entry-level edition, often bundled with lightweight apps or used by developers.

  • Max database size: 10 GB per database
  • Max memory usage (SQL engine): 1.4 GB
  • Max CPU usage: lesser of 1 physical socket or 4 cores
  • Agent jobs: Not supported – this is a big deal

 

Use case: OK for dev/test, very small apps, or teaching environments.

Gotchas: You’ll quickly hit resource limits on memory or database size. Backup and maintenance automation via SQL Server Agent isn’t available. You have to script around it, and schedule jobs externally.
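One way to script around the missing Agent is Windows Task Scheduler plus sqlcmd (the task name, instance, and script path below are examples; the .sql file would hold your BACKUP DATABASE commands):

```shell
# Nightly backup task for an Express instance, since SQL Agent is unavailable
schtasks /create /tn "SQL Nightly Backup" /sc daily /st 01:00 /tr "sqlcmd -S .\SQLEXPRESS -i C:\Scripts\backup.sql"
```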

 

SQL Server Web: Affordable for Hosting, Limited for Business Use

Designed for web hosting providers – only available through certain Service Provider License Agreement partners.

  • Max DB size: No enforced limit
  • Max memory usage: 64 GB for DB engine
  • Max CPU usage: lesser of 4 sockets or 16 cores
  • SQL Agent is available for job scheduling
  • Key limitations: No advanced high availability (e.g., no AGs), no TDE (encryption), limited BI and performance features

 

Use case: Public-facing websites and web apps hosted via service providers.

Gotchas: Often confused with Standard Edition, but it’s not intended for in-house business workloads.

 

SQL Server Standard: The Most Common, and Commonly Misunderstood

This edition powers a majority of small and mid-size businesses.

  • Max DB size: No enforced limit (limited only by hardware)
  • Max memory usage: 128 GB for the SQL engine (other components like SSRS have their own limits)
  • Max CPU usage: lesser of 4 sockets or 24 cores (configuring your VM setup is critical here)
  • Includes: Backup compression, basic availability features (e.g., basic Always-On AG with 1 DB), TDE (SQL 2019+)
  • SQL Server Failover Clustering on 2 nodes is available
  • SQL Agent is available for job scheduling

 

Use case: OLTP workloads, departmental databases, general business apps.

Gotchas: Hit memory and CPU limits faster than expected in busy environments. Lacks advanced features like partitioning, online index rebuilds, and many encryption/performance options.

 

SQL Server Enterprise: All the Bells and Whistles

The top-tier edition. If you’re asking “Can SQL Server do this?” Enterprise almost always says yes.

  • Max DB size: No enforced limit
  • Max memory usage: OS limit (can use terabytes)
  • Max CPU usage: OS limit (can use 64+ cores)
  • Includes: Online index operations, table partitioning, in-memory OLTP, Always On with multiple DBs, TDE, columnstore indexes, and more
  • SQL Agent is available for job scheduling

 

Use case: High-performance, mission-critical systems. Often used with HA/DR needs, heavy workloads, and large-scale BI.

Gotchas: Very expensive. Licensing is per core, with a 4-core minimum per physical server or VM.

 

SQL Server Developer: Enterprise for Non-Production Use

SQL Server 2025 may ship a Developer edition with Standard Edition compatibility. As of this writing, 2025 is in public preview.

 

So… Which Edition Do You Have?

You can find out by running this query:

SELECT
    SERVERPROPERTY('Edition') AS Edition,
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('EngineEdition') AS EngineEdition;

 

The Bottom Line:

Don’t wait until a restore fails or CPU caps out to figure out what edition you’re on. Knowing your edition helps you plan capacity, budget smarter, and avoid downtime when the server starts groaning.

Filed Under: Configuration, Install, SQL Tagged With: syndicated

SQL Server in the Cloud – Are DBAs Still Needed?

June 18, 2025 by Kevin3NF

Things your cloud vendor may not tell you

 

Here’s a common theme I hear from small IT teams:

“Our SQL Server is in the cloud now. We don’t need a DBA.”

Not quite.

When you’re running SQL Server on cloud virtual machines like Azure SQL VM or AWS EC2, you’re still responsible for a lot of the same challenges you faced on-prem. The difference? Now you’re paying cloud rates for the same headaches. But you don’t have to deal with hardware and some of the other infrastructure hassles.

Let’s bust some of the top myths.

Cloud VM Myths That Can Hurt Your SQL Server

Myth 1: The Cloud Auto-Scales SQL Server for Me

  • Cloud VMs don’t automatically adjust CPU, memory, or storage. If you don’t right-size your SQL Server, you may be paying too much or running into performance bottlenecks.
  • Even with the right mix of vCPUs and Memory, you may be getting IOPS/Throughput throttling to the disks.
  • The VM size needs to be in the right family as well. For database workloads, look for “Memory Optimized” options, especially for reporting workloads.

 

Myth 2: Cloud Patching Means I’m Always Safe

  • Azure and AWS offer patching options for the O/S, but they don’t know your business hours or your workloads. A patch could restart your SQL Server at the worst possible time.
  • Azure (and AWS?) will not auto-patch your SQL Server unless you turn it on via the SQL Server IaaS Agent extension. Is this really a good idea for your system? Are patches automatically supported by your software vendors?
    • The IaaS extension will only apply “Important or Critical” patches.
  • As always, test all patches before applying to production.

 

Myth 3: Backups are Handled Automatically in the Cloud

  • Cloud VMs give you the option to back up SQL Server. They don’t configure backups or validate them for your RTO/RPO needs.
  • A DBA ensures backups run properly, are tested, and can meet your recovery goals.
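A quick sanity check you can run yourself, cloud or not:

```sql
-- When was each database last backed up (any backup type)?
SELECT d.name, MAX(b.backup_finish_date) AS last_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY last_backup;
```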

 

Myth 4: The Cloud Handles SQL Performance

  • The cloud gives you CPU, memory, and storage but it doesn’t tune your indexes, optimize queries, or troubleshoot blocking.
    • Some level of auto-tuning exists in the PaaS offering (Azure SQL DB), but I’ve seen it go hilariously wrong.
  • A DBA does that. Without them, you may see slow performance over time, higher costs, and user/customer frustration.

 

Myth 5: SQL Server Licensing is Simpler in the Cloud

  • Simple? Maybe. Cheaper? Probably not, even if you get it right.
  • Cloud billing for SQL Server licensing can be complex, and mistakes add up fast.
  • Overpaying for cores “just in case” is a silly way to spend. Start smaller and add resources, better disks, etc. as you determine your actual workload.
  • This is one of the key reasons the Memory Optimized VMs exist…VMs are priced based more on vCPU count than anything else. Get that core count down by having a DBA tune your system periodically.

 

The Bottom Line:

SQL Server in the cloud can be a game changer. Wandering into the vendor’s portal or dashboard and blindly picking “something” is not the best approach. Get input from independent cloud experts and have a DBA review things from time to time.


Could Your SQL Server Survive a Failure Without Major Downtime?

Take the Quiz!

Find out how bullet-proof your DR plan really is!


 

SQL TidBits:

Check your SQL Server uptime:

SELECT 
   sqlserver_start_time 
FROM 
   sys.dm_os_sys_info;

Thanks for reading!

–Kevin

Filed Under: AWS, Azure, Cloud, SQL Tagged With: syndicated

