DallasDBAs.com

SQL Server Database Consulting


Data Conferences – Worth Every Dollar

November 12, 2025 by Kevin3NF

Some of the best career enhancers you can buy.

 

Why I Go to Conferences

I go for two big reasons:

  1. Learning from the best. The folks teaching at PASS Summit or SQLBits aren’t reading from slides. They’re the ones writing the scripts, blog posts, and tools we all use, such as the First Responder Kit, Ola Hallengren’s maintenance solution, and countless others. You get to learn how the creators think.
  2. Community. I’ve built friendships at these events that turned into collaboration, mentorship, and yes, a few “help me right now” text messages at midnight. You can’t get that on YouTube.

A few favorite memories:

  • Watching Brent Ozar and Pinal Dave tag-team, with Pinal asking the questions the audience should be asking.
  • That time Kalen Delaney saw ‘Kevin3NF’ on my badge and said “I know you!” (We had never met offline)
  • Late-night lounge war story sessions with strangers who became friends over adult beverages.
  • Game night at PASS Summit

This year, I’ll be at PASS Data Community Summit in Seattle, and next year at SQLBits in Wales. If you’re attending, come say hi! I’ll be the over-caffeinated guy in a mountain-bike shirt talking about index maintenance.

A sample of conferences, most certainly not all-inclusive of every event: [image gallery in the original post]

If You’re on the Fence

Here’s how to make a conference worth it:

  • Plan ahead. Pick sessions that fill your knowledge gaps, not just what sounds cool.
  • Talk to people. Even if you’re introverted, one hallway conversation might change your career.
  • Bring something back. Document 3–5 takeaways to justify the trip (and remind your boss why it’s valuable).

 

If travel isn’t in the cards, start small: attend a local Data Saturday or User Group meeting. The ROI is incredible.


SQL Server Versions: Out With the Old, In With the Supported

October 29, 2025 by Kevin3NF

If your production SQL Servers are still running 2016 (or older), you’re basically banking on inertia. Sure, it’s been stable. But that doesn’t guarantee it’ll stay safe or compliant.

Microsoft shut off mainstream support for 2016 back in July 2021, and extended support ends in July 2026. Beyond that? You’re on your own for bug fixes, security updates, or emergency patches.

 

What You’re Missing

It’s easy to view upgrades as optional enhancements; in truth, staying current is about maintaining resilience. What you gain with 2019/2022 isn’t just bells and whistles. It’s reliability, defensive tools, and measurable performance.

 

Smarter Engines Under the Hood

“Better defaults” aren’t marketing fluff. With improvements to memory grants, parallelism, and hash joins, newer versions of SQL Server are tuned to make your workload more efficient out of the box.

 

Adaptive Behavior Without Rewrites

Here’s where SQL Server 2019 and 2022 quietly earn their keep. Microsoft invested heavily in the Intelligent Query Processing (IQP) stack – features that make your existing code run better without touching a line of T-SQL (most of the time).

Older versions execute queries based on a single snapshot of estimated data volume, join paths, and parameter values. If those estimates are off (and they often are), the engine makes bad choices and never looks back. The newer engines don’t do that anymore.

Adaptive joins can switch between nested loop and hash join strategies while the query runs, based on how much data actually flows through. That means fewer “query plans from hell” when parameter values swing wildly between executions.

Interleaved execution gives the optimizer a second chance – especially for multi-statement table-valued functions. Instead of assuming a generic row count of “1,” SQL Server now runs the first statement, learns the real cardinality, and uses that for the rest of the plan.

Table variable deferred compilation fixes one of the longest-standing developer pain points. Instead of guessing that a table variable has exactly one row (which breaks most real-world queries), the engine waits until the table is populated, measures it, and builds an informed plan.

And if your code uses scalar user-defined functions, SQL Server 2019+ can inline them, turning what used to be a loop into a set-based operation. That alone can turn a 5-minute report into a 5-second one.

The beauty here is that you may not need to rewrite or refactor anything. You just get smarter plans, more consistent performance, and less time spent chasing parameter sniffing ghosts. All of the above have limitations. Do your homework and proper testing.
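If you want to see which of your scalar UDFs are even candidates for inlining, SQL Server 2019+ exposes an is_inlineable flag in sys.sql_modules. A minimal sketch to start that homework:

-- Scalar UDFs and whether the engine considers them inlineable (SQL Server 2019+)
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS function_name,
       m.is_inlineable
FROM sys.sql_modules m
JOIN sys.objects o ON o.object_id = m.object_id
WHERE o.type = 'FN'   -- scalar user-defined functions
ORDER BY schema_name, function_name;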

 

Faster Recovery & Safer Rollbacks

Ever had a long-running rollback or crash recovery hang your system? Accelerated Database Recovery (ADR) changes the game—making rollbacks and crash recoveries significantly faster, which is a safety net when things go sideways.
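ADR is enabled per database (it’s on by default in Azure SQL, but opt-in for the boxed product). A minimal sketch, assuming a database named YourDB:

-- Enable Accelerated Database Recovery (SQL Server 2019+); the database name is illustrative
ALTER DATABASE YourDB SET ACCELERATED_DATABASE_RECOVERY = ON;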

 

Query Store on Steroids

In older versions, you’d turn on Query Store, fiddle with settings, maybe capture plan regressions. In newer versions, it’s more mature, more integrated, and more automatic. You get insights, forced plan control, and regression protection with minimal overhead.
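If it isn’t already on (SQL Server 2022 enables it by default for new databases), turning Query Store on is a one-liner; the options below are a reasonable starting point, not gospel:

-- Enable Query Store with common settings; adjust retention and capture mode for your workload
ALTER DATABASE YourDB SET QUERY_STORE = ON;
ALTER DATABASE YourDB SET QUERY_STORE (OPERATION_MODE = READ_WRITE,
                                       QUERY_CAPTURE_MODE = AUTO,
                                       MAX_STORAGE_SIZE_MB = 1024);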

 

Security That Doesn’t Feel Optional

TDE, Always Encrypted, ledger capabilities (in 2022), granular auditing – these aren’t checkboxes anymore, they’re baseline expectations. Newer versions make it less painful to stay compliant and secure.

 

Hybrid & Cloud-Aware by Design

Backup to URL, cross-environment DR, and more. The newer SQL Server versions are built from the ground up to span on-prem, cloud, or hybrid without the constant “lift and re-architect” panic.
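As one example, backup to URL is just a different backup target once a credential for the storage container exists. A rough sketch (storage account, container, and database names are made up):

-- Assumes a SAS-based credential named for the container URL already exists
BACKUP DATABASE YourDB
TO URL = N'https://yourstorageacct.blob.core.windows.net/sqlbackups/YourDB_FULL.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;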

 

Predictability & Fewer Surprises

2019 and 2022 have been battle-tested at this point. Most of the early-stage regressions, bugs, or inconsistent behaviors have been discovered and addressed in the Cumulative Updates. That predictability is worth its weight in gold when you’re managing risk.

 

The Bottom Line

Stable isn’t the same as safe. SQL Server 2016 had a great run, but it’s time to let it retire gracefully.
Plan your move to 2019 or 2022 this quarter. Skip 2025 for now. You’ll sleep better.

———————————————————–

Need Migration Help?

Reach out to Dallas DBAs with code “Newsletter”

Contact Us

———————————————————–

Thanks for reading!


SQL Server Migration Overview

October 22, 2025 by Kevin3NF

It’s Not Just Backup / Restore

At some point every company faces it: the SQL Server that’s been quietly running for years is due for retirement. Maybe the hardware is old, the lease is ending, or your CIO wants to move workloads to the cloud.

From the outside, a SQL Server “migration” can sound like a simple lift-and-shift. Just copy the databases over, right? The reality is closer to moving offices. You don’t just grab every box and throw it into a new building. You measure the space, update the wiring, decide what gets upgraded, and make sure everyone can find their desk again on Monday.

The Big Picture

 

Predict & Provision

The new environment needs to handle both today’s workload and tomorrow’s growth. Simply matching your old CPU, RAM, and storage can be a mistake if your business has grown since the last server was purchased. In the cloud, it’s even more important to right-size. Too small and you’ll choke performance, too large and you’ll bleed money. Planning capacity up front avoids both. For cloud VMs, provision low during testing and bump up the size as needed.

Install & Configure

SQL Server isn’t plug-and-play. A fresh installation with updated patches and best-practice settings sets the stage for stability. This is where you decide things like where to place (and separate) data and log files, how many tempdb files to allocate, and which default settings to avoid. A solid foundation here can prevent countless problems later.
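For example, tempdb usually gets multiple equally sized data files (one per core, up to eight, is the common starting point). A hedged sketch with made-up sizes and paths:

-- Illustrative only: size the existing tempdb data file and add a second, equally sized one
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 8192MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb ADD FILE    (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf',
                                   SIZE = 8192MB, FILEGROWTH = 1024MB);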

Tune the Source Before the Move

One of the biggest mistakes in any migration is bringing old baggage (technical debt) into a new system. Giant log files, bloated indexes, and unnecessary jobs can cause just as much trouble on shiny new hardware as they did before. Tuning the source first is similar to decluttering your house before moving – you start fresh without dragging the junk along. Or cleaning the bathroom before your housekeeper shows up 😉

Move the Data

Databases aren’t the only things that need to come across. Logins, SQL Agent jobs, linked server definitions, and security settings are just as important. If you miss these, users may not be able to connect, backups may not run, or nightly jobs could fail. Successful migrations treat this as a holistic move, not just a database restore. There are multiple approaches to this, depending on your data size and cutover window.

Test, Test, Test

Once the new server is up, applications need to prove they can connect and perform. Something as small as a changed network name or a forgotten firewall rule can cause chaos. Testing gives you a safe window to discover what doesn’t carry over cleanly. It’s also a chance to capture new performance baselines so you can measure improvement.

Final Cutover

The actual “move day” should be planned, short, and closely monitored. Typically this means scheduling downtime, running one last backup and restore, and redirecting applications or DNS. The next 48 hours are critical: you’re confirming not only that the server is online, but that backups succeed, jobs run, and performance holds steady. With good prep, the cutover feels more like flipping a switch than rolling the dice.

For large databases in the TB+ range, a full backup/restore during the week with only a Differential needed on cutover day can reduce the amount of time dramatically.
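Here’s what that looks like in practice, as a rough sketch with made-up paths:

-- Days before cutover: restore the large full backup, leave the database restoring
RESTORE DATABASE YourDB
FROM DISK = N'\\NewServer\Backups\YourDB_FULL.bak'
WITH NORECOVERY, STATS = 10;   -- add MOVE clauses if drive letters/paths differ

-- Cutover night: take a differential on the old server, copy it, then finish the restore
BACKUP DATABASE YourDB
TO DISK = N'\\OldServer\Backups\YourDB_DIFF.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM;

RESTORE DATABASE YourDB
FROM DISK = N'\\NewServer\Backups\YourDB_DIFF.bak'
WITH RECOVERY, STATS = 10;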

Why Preparation Matters

Here’s the piece many companies miss: migrations are an opportunity to fix what wasn’t working. If you had jobs that failed silently, indexes that were never used, or security shortcuts, they’ll follow you into the new system unless you address them first. Treating the migration as a reset, or a chance to leave bad habits behind, means the business not only gets a new server, but a more reliable platform for the future.

The Bottom Line

A SQL Server migration is less about moving bits and more about moving confidence. With the right planning, you don’t just get a new server – you get a healthier, more reliable foundation for your business applications.

 

Free Disaster Readiness Quiz

I’ll trade you an email address for an honest assessment

DR Quiz – Are you ready?

 

Thanks for reading!


SQL Server Alerts

October 15, 2025 by Kevin3NF

Don’t Let Trouble Sneak Up on You

 

Most SQL Servers run quietly. Until they don’t. By the time someone notices an application outage or a failed backup, you’re already behind. That’s why SQL Server’s built-in alerts exist – they give you an early warning before small problems become major outages.

There are a bunch of great 3rd party tools and community scripts available, but not every firm is going to make that investment or allow open-source code on their servers.

SQL Server Alerts are Microsoft-supported, built into the product, and rely on Database Mail for notifications. Configure them once, and you’ll have a safety net that runs 24/7. But like smoke detectors, too many false alarms and you’ll start ignoring them.

 

Step 1: Create an Operator

An operator is just the person (or distribution list) that gets notified.

In SSMS:

  • SQL Server Agent >> Operators >> (right-click) New Operator
  • Fill in a name and email address (use a group if possible). 100 character limit.

 

T-SQL Example:

USE msdb;
EXEC msdb.dbo.sp_add_operator
    @name = N'DBA On Call',
    @enabled = 1,
    @email_address = N'[email protected]';

 

 

Step 2: Define the Alert

Alerts can fire on:

  • Specific errors (e.g., error 823 = disk I/O issue)
  • Severity levels (e.g., all severity 20+ errors)
  • Performance conditions or WMI Events

In SSMS:
SQL Server Agent >> Alerts  >> (right-click) New Alert  >> choose type and scope.

 

T-SQL Example:

USE msdb;
EXEC msdb.dbo.sp_add_alert
    @name = N'Error 823 Alert',
    @message_id = 823,
    @severity = 0,
    @enabled = 1,
    @delay_between_responses = 300, -- 5 minutes
    @include_event_description_in = 1,
    @notification_message = N'Disk I/O error (823) detected!';

 

Step 3: Tie It Together

Link the alert to the operator so someone actually gets notified.

In SSMS:
Open the alert >> Response >> “Notify Operators”

 

T-SQL Example:

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Error 823 Alert',
    @operator_name = N'DBA On Call',
    @notification_method = 1; -- Email

 

 

Step 4: Enable the Mail Profile

Emails won’t get sent without this often-overlooked step.

In SSMS:

SQL Server Agent >> (right-click) >> Properties >> Alert System >> check “Enable mail profile” and pick a profile from the drop-down. This requires Database Mail to be configured and working.

 

Step 5: Cut the Noise

Not every warning deserves an email at 3 a.m. Start with the essentials:

  • Possible Corruption (823, 824, 825)
  • Critical Job failures (Agent jobs)
  • Severity 19+ errors (fatal errors, serious resource issues)
    • Severity 20 may give false positives if you are using vulnerability testing software
  • HADR role changes for unexpected AG failovers

 

Then, test and adjust. If the alerts are noisy, you won’t trust them when it matters.
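If you’d rather script the severity alerts than click through SSMS, here’s a rough sketch that creates one alert per severity 19–25 and ties each to the operator from Step 1 (the operator name is the one used above):

USE msdb;
DECLARE @sev INT = 19, @name SYSNAME;
WHILE @sev <= 25
BEGIN
    SET @name = N'Severity ' + CAST(@sev AS NVARCHAR(2)) + N' Alert';
    EXEC msdb.dbo.sp_add_alert
         @name = @name,
         @severity = @sev,
         @enabled = 1,
         @delay_between_responses = 300,
         @include_event_description_in = 1;
    EXEC msdb.dbo.sp_add_notification
         @alert_name = @name,
         @operator_name = N'DBA On Call',
         @notification_method = 1;   -- email
    SET @sev += 1;
END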

 

The Bottom Line

Setting up alerts in SQL Server is one of the easiest wins for DBAs. They’re built in, supported by Microsoft, and once tied to Database Mail and operators, they can catch serious issues before your phone rings. Just be selective: too much noise and the real signals get lost, or the emails get “ruled” into a folder rather than acted on.

 


Free Disaster Readiness Quiz

I’ll trade you an email address for an honest assessment

DR Quiz – Are you ready?


 

Thanks for reading!

 

— Kevin


“SQL Server Is Slow” Part 4 of 4

October 7, 2025 by Kevin3NF

Parts 1, 2 and 3 got you to the (SQL) engine room. Now we use community-trusted tools to find what’s going on, fix it safely, and hopefully keep it from coming back.

This post will lean heavily on the First Responder Kit from Brent Ozar, sp_WhoIsActive from Adam Machanic, and others. They took what Microsoft provides and made it better – hundreds or thousands of hours of work to make FREE things for you to use.

This is the most complex blog I’ve ever written. Your experiences may differ, and my code samples might have some bugs. Test first.

The Fork in the Road

After ruling out all the previous items from Parts 1, 2, and 3, you’ll probably land in one of two branches:

  • Branch A: Obvious & Fixable Now – A misconfiguration, a runaway query, or an ugly blocking chain.
  • Branch B: Systemic & Chronic – Indexing issues, bad query plan, stats/CE/database compat changes, or a new workload exposing weaknesses.

 

First Pulse (rank evidence before touching anything)

Run these back-to-back to see “right now” and “what changed”:

-- 60s wait/CPU/IO snapshot with expert details
EXEC sp_BlitzFirst @Seconds = 60, @ExpertMode = 1;

-- Active requests + blockers + plan handles (save to a table if you’ll compare later)
EXEC sp_WhoIsActive
     @get_plans = 1,
     @get_additional_info = 1,
     @find_block_leaders = 1;

 

If the server feels resource constrained (CPU/memory/tempdb/log), confirm with Erik Darling’s:

EXEC sp_PressureDetector;

 

Check Query Store (if enabled) for regressed queries over the last few hours. This is database level. Database Z that nobody thinks about could be sitting in the corner making life miserable for Database A.

 

Branch A — Obvious & Fixable Now (surgical)

 

A1) Confirm the bad actor (copy/paste or screenshot what you find)

  • sp_WhoIsActive: highest CPU/reads/duration, blocking leaders, tempdb usage.
    • You may get lucky and see an obvious long-running query blocking everything or hogging CPU
  • sp_BlitzFirst (0s mode) for a quick wait profile on demand:
    • EXEC sp_BlitzFirst @Seconds = 0, @ExpertMode = 1;

 

A2) Get permission & document the plan

  • Who’s affected, what’s the risk, what’s the rollback.

 

A3) Take the action (one change at a time)

  • Kill a true runaway query (after sign-off):
    • KILL <session_id>;
  • Fix low-hanging config already validated in earlier parts: MAXDOP, Cost Threshold, auto-close/auto-shrink OFF, compatibility/CE sanity.
  • Remove a bad plan:

-- Remove the specific plan from the cache (sample)

DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);
  • Update some stats
UPDATE STATISTICS dbo.TableName WITH FULLSCAN;  -- or sampled if huge
    • This will purge some plans from the cache, but new plans will have great stats on at least one table.

A4) Re-measure immediately

EXEC sp_BlitzFirst @Seconds = 30, @ExpertMode = 1;

 

Goal: waits normalize, blockers drop, and users confirm latency relief.

Branch B — Systemic & Chronic (rank → fix → re-measure)

 

B1) Rank the biggest fish

EXEC sp_BlitzFirst  @Seconds = 0, @ExpertMode = 1;   -- overall pain profile
EXEC sp_BlitzCache  @SortOrder = 'cpu';   -- try 'reads', 'avg cpu', 'memory grant', 'avg duration', etc.
EXEC sp_PressureDetector;                -- resource pressure confirmation

Use Query Store timelines to spot regressions and plan churn.

 

B2) Indexing reality check

EXEC sp_BlitzIndex @DatabaseName = 'YourDB', @Mode = 0;  -- database-wide health

Cross-check with native DMVs when you need specifics:

Top impact missing indexes (advisory; validate!)

EXEC sp_BlitzIndex @Mode = 3, @GetAllDatabases = 1, @BringThePain = 1;  -- missing index requests, all databases

Unused/rarely used indexes (drop/consolidate after monitoring)

-- Unused Index Script
-- Original Author: Pinal Dave
-- Edit the last Where clause to suit your situation

Create Table #Unused(
                [Database] varchar(1000),
                [Table] varchar (500),
                [IndexName] varchar(500),
                [IndexID] bigint,
                [UserSeek] bigint,
                [UserScans] bigint,
                [UserLookups] bigint,
                [UserUpdates] bigint,
                [TableRows] bigint,
                [Impact] bigint,
                [DropStatement] varchar(1500)
                )

exec sp_MSforeachdb

'use [?]
Insert #Unused

SELECT
                Db_Name(DB_ID()) as [Database]
                ,o.name AS [Table]
                , i.name AS IndexName
                , i.index_id AS IndexID
                , dm_ius.user_seeks AS UserSeek
                , dm_ius.user_scans AS UserScans
                , dm_ius.user_lookups AS UserLookups
                , dm_ius.user_updates AS UserUpdates
                , p.TableRows
                , dm_ius.user_updates * p.TableRows
                , ''DROP INDEX '' + QUOTENAME(i.name)
                + '' on '' + QUOTENAME(Db_Name(DB_ID())) + ''.''
                + QUOTENAME(s.name) + ''.''
                + QUOTENAME(OBJECT_NAME(dm_ius.OBJECT_ID)) AS ''drop statement''
FROM sys.dm_db_index_usage_stats dm_ius
                INNER JOIN sys.indexes i ON i.index_id = dm_ius.index_id
                                AND dm_ius.OBJECT_ID = i.OBJECT_ID
                INNER JOIN sys.objects o ON dm_ius.OBJECT_ID = o.OBJECT_ID
                INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
                INNER JOIN (SELECT SUM(p.rows) TableRows, p.index_id, p.OBJECT_ID
                                                                FROM sys.partitions p
                                                                GROUP BY p.index_id, p.OBJECT_ID) p
                                ON p.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = p.OBJECT_ID
WHERE
                OBJECTPROPERTY(dm_ius.OBJECT_ID,''IsUserTable'') = 1
                AND dm_ius.database_id = DB_ID()
                AND i.type_desc = ''nonclustered''
                AND i.is_primary_key = 0
                --AND i.is_unique_constraint = 0
                --AND o.name in (''CloverSummary'')
ORDER BY
                (dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups) ASC
--GO
'
Select *
from #unused
Where 1=1
                --and [IndexName] like '%_DDBA'
                --and [IndexName] IN ('')
                --and [database] Not in ('MSDB','tempdb')
                --and [database] in ('StackOverflow')
                --and UserSeek + UserScans + UserLookups < 1
                --and [Table] in ('')
Order By [Database] asc, UserSeek + userscans + UserLookups, impact desc

Drop table #Unused

 

B3) Bad Queries (identify → inspect plan → fix → handoff)

  • When is a query “bad”? Probably if it ranks in your top 10 by total CPU/reads in the target window, has high avg cost and high executions, triggers spills/memory grants, blocks, or just regressed in Query Store.
    • Using the SortOrder parameter, also check for ‘memory grants’ and ‘avg duration’
  • Find them fast:
EXEC sp_BlitzCache @SortOrder = 'cpu';           -- or 'reads', 'avg cpu', 'avg duration', etc.

EXEC sp_WhoIsActive @get_plans = 1
  • Classic face-palm: Key Lookup from “one more column”
    • What happens: a nonclustered index is used for a seek but doesn’t cover a new column added to SELECT, so the engine performs one lookup per row. This shows up in the Query Plan as a ‘Key Lookup’ or ‘RID Lookup’ operator.
    • Why it hurts: on hot paths, that lookup loop can hammer CPU/IO and tank a busy server.
    • Fix choices: INCLUDE the missing column(s) on the used index, reshape keys to cover, or trim the SELECT if it isn’t needed.

 

Quick hunt for plans with lookups:

-- The showplan XML namespace must be declared or .exist() silently finds nothing
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP 50
       DB_NAME(qp.dbid) AS dbname,
       (qs.total_logical_reads*1.0/NULLIF(qs.execution_count,0)) AS avg_reads,
       qs.execution_count, qt.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qp.query_plan.exist('//IndexScan[@Lookup="1"]') = 1           -- key lookups
   OR qp.query_plan.exist('//RelOp[@PhysicalOp="RID Lookup"]') = 1  -- heap (RID) lookups
ORDER BY avg_reads DESC;

 

Typical covering fix (example):

CREATE NONCLUSTERED INDEX IX_Sales_OrderDate_CustomerID
  ON dbo.Sales (OrderDate, CustomerID)
  INCLUDE (TotalDue);   -- add the “new SELECT column” here
  • Other common suspects: implicit conversions on join/filter columns, parameter sniffing, spills from oversized sorts/hashes, scalar UDFs/multi-stmt TVFs, RBAR triggers.
  • Developer handoff package (keep it short and useful):
    • Evidence: normalized query text + sample parameters, actual plan (.sqlplan or XML), metrics window (total/avg CPU/reads/duration/executions), warnings (spills, lookups, implicit conversions).
    • Hypothesis & options: e.g., new column caused Key Lookup on PK_MyTable_ID → options: covering INCLUDE, index reshape/filtered index, query/ORM change, or plan stability tactic.
    • Safety: size/impact estimate, rollback (drop index/unforce plan), and success criteria (Query Store deltas, BlitzFirst/Cache snapshots).

 

B4) Plan stability

  • Verify statistics are being updated “often enough” and at the right sample size
  • Look for plan cache bloat and churn
  • Parameter sniffing fixes: targeted OPTION (RECOMPILE), OPTIMIZE FOR cases, or a Query Store forced plan (monitor it and keep a rollback; see the sketch after this list).
  • Memory grants/spills: better indexes (narrow keys + good includes), stats refresh, and watch row-goal operators.
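For the forced-plan route, Query Store makes it a two-step: find the regressed query and its known-good plan, then force it. The IDs below are placeholders; pull real ones from the Query Store reports or catalog views.

-- Find candidate query_id / plan_id pairs (ranking by average duration here)
SELECT TOP (20) q.query_id, p.plan_id, rs.avg_duration, rs.count_executions
FROM sys.query_store_query q
JOIN sys.query_store_plan p           ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;

EXEC sp_query_store_force_plan   @query_id = 42, @plan_id = 7;   -- placeholder IDs
-- Rollback if it misbehaves:
-- EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;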

 

B5) Stats/CE/compat sanity

  • Ensure AUTO_UPDATE_STATISTICS (and consider ASYNC) fit the workload.
  • Recent compat level or CE changes? Compare before/after in Query Store (see the sketch below).
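A quick way to see where each database sits, plus a temporary lever if the newer cardinality estimator is the suspect (a diagnostic step, not a permanent fix):

-- Compatibility level per database
SELECT name, compatibility_level FROM sys.databases ORDER BY name;

-- Test the legacy CE for one database without changing its compat level (run in that database)
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;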

 

B6) Parallelism & CPU policy

  • Validate MAXDOP and Cost Threshold against core count + workload mix.
  • Use BlitzCache to spot skewed exchanges or thrashy parallel plans.

 

B7) Tempdb & log health

  • sp_PressureDetector will flag contention/pressure; confirm with file IO stats:
SELECT DB_NAME(vfs.database_id) AS dbname, mf.type_desc, mf.physical_name,
       vfs.num_of_reads, vfs.num_of_writes,
       vfs.io_stall_read_ms, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL,NULL) vfs
JOIN sys.master_files mf ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
ORDER BY (vfs.io_stall_read_ms + vfs.io_stall_write_ms) DESC;
  • Right-size VLFs (all active databases), ensure multiple equally sized tempdb data files, and watch the version store if RCSI/SI is on (VLF count query below).
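For the VLF check, sys.dm_db_log_info (SQL Server 2016 SP2 and later) makes it a quick query:

-- VLF count per database; very high counts usually mean lots of small log autogrowths
SELECT d.name AS database_name, COUNT(*) AS vlf_count
FROM sys.databases d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC;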

 

B8) New workload exposure

  • Correlate Query Store/BlitzFirst windows with deploys, ORM queries, reporting jobs, ETL shifts, or seasonal peaks. Fix that pattern first (indexing, parameterization, caching, schedule changes).

 

Change Control & Safety Net

  • One change at a time → measure → document → keep/rollback.
  • Save “before/after” artifacts: sp_WhoIsActive snapshots (to table), BlitzFirst output, BlitzCache exports, Query Store screenshots/DMVs.
  • Always include who/when/why and expected KPI movement.
  • Assume someone will want a post-mortem. I wrote one this morning for a client outage.

 

What “Good” Looks Like

  • Waits shift away from bottleneck classes to benign background.
  • Top 10 statements show reduced CPU/reads; fewer/shorter blocking chains.
  • Tempdb/log growth stabilize; fewer off-hours alerts.
  • Users say “fast enough,” matched to your baseline and SLAs.

 

Keep It from Coming Back

  • Maintain baselines (perf counters, waits, file IO, Query Store top queries).
  • Align index & stats maintenance to your workload.
  • Add deploy gates for schema/index/compat changes with pre/post metrics.
  • Keep lightweight Extended Event sessions for spills, long-running, parameter-sensitive queries.
  • Review Query Store regressions and any forced plan safety periodically.

 

The Bottom Line:

Slow SQL isn’t mysterious; it’s measurable. Rank the pain, fix the biggest offender or the pattern behind it, and prove the result with before/after metrics. Keep notes as you go, and be methodical.

Above all… be calm. Everyone else can panic while you get to be the hero.


First Month of Pocket DBA® Free!

Pocket DBA


— Thanks for reading!


“SQL Server is Slow” Part 3 of 4

October 1, 2025 by Kevin3NF

In Parts 1 and 2 of this series, we’ve gathered info and done the triage, just like anyone in almost any industry does.

At this point you’ve:

  • Defined what “slow” means and built a timeline (Part 1).
  • Checked things outside SQL Server like network, storage, and VM noise (Part 2).

Now it’s time to open the hood on SQL Server itself.


Step 1: Check Active Sessions

Run a quick session check (sp_WhoIsActive is a favorite):

  • Who’s running queries right now?
  • What queries have been running the longest? Is that normal?
  • Any blocking chains?
  • Are any queries hogging resources?

At this stage, you’re only identifying potential offenders. Next issue, we’ll dig into queries and indexes more deeply.

 

Step 2: Look at Wait Stats

Wait stats tell you what SQL Server has really been waiting for (everything in SQL Server is waiting for something else):

  • PAGEIOLATCH: slow storage reads.
  • LCK_M_X: blocking/locking.
  • CXPACKET/CXCONSUMER: parallelism info.
  • THREADPOOL: CPU threads
  • RESOURCE_SEMAPHORE: memory
  • ASYNC_NETWORK_IO: probably more of a client-side problem than a SQL-side one
    • The most comprehensive list of wait types and explanations comes from my friends at SQLskills

This isn’t about solving yet – it’s about categorizing where SQL feels the pain.
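A bare-bones look at cumulative waits since the last restart is below. Community scripts filter the benign wait types far more thoroughly; treat this as a starting point only.

SELECT TOP (15)
       wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0        AS wait_time_sec,
       signal_wait_time_ms / 1000.0 AS signal_wait_sec
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'CHECKPOINT_QUEUE',
                        N'XE_TIMER_EVENT', N'BROKER_TO_FLUSH', N'WAITFOR')
ORDER BY wait_time_ms DESC;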

 

Step 3: Review Agent Jobs & Error Logs

SQL may already be waving red flags:

  • Overlapping or stuck Agent jobs. A long-running purge job or index rebuild can cause all sorts of issues during the day.
  • Failed backups or CHECKDB runs. A failed CHECKDB could mean corruption. Read this
  • Errors or memory dumps tied to patching or system instability. Look in the same folder as your ERRORLOG location
    • Can’t find that folder? Watch this

 

Step 4: Don’t Forget the “Gotchas”

Other less obvious issues can cause system-wide drag:

  • High VLF count (often from failed or missing log backups).
  • Database compatibility or config changes – check SSMS reports like:
    • Server level: Configuration Changes History
    • Database level: All Blocking Transactions, Index Usage Statistics
  • Recent patching issues (especially if tied to errors or dump files).

These aren’t everyday culprits, but when they show up, they can cripple performance.

 

Step 5: Compare Against Your Baseline

Today’s “slow” may be tomorrow’s “normal.”

  • Track batch requests/sec, CPU Utilization, wait stats, I/O latency, and log file size/VLF count.
  • Without this baseline, every slowdown feels like a brand-new mystery.

If you don’t already have a baseline, NOW is the time to start, while the server is healthy.

  • Collect I/O stats and wait stats regularly.
  • Run sp_Blitz for a full health snapshot (free tool from Brent Ozar)
  • Capture DMV performance counters (sys.dm_os_performance_counters) on a schedule.

A baseline doesn’t need to be fancy, it just needs to exist so you know what “normal” looks like before things go sideways.
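A minimal sketch of what that could look like (the table and counter names are illustrative; schedule the INSERT with an Agent job):

-- One-time setup
CREATE TABLE dbo.PerfBaseline (
    capture_time  DATETIME2(0)  NOT NULL DEFAULT SYSDATETIME(),
    counter_name  NVARCHAR(128) NOT NULL,
    instance_name NVARCHAR(128) NULL,
    cntr_value    BIGINT        NOT NULL
);

-- Run on a schedule; note that "/sec" counters are cumulative, so compare deltas between captures
INSERT INTO dbo.PerfBaseline (counter_name, instance_name, cntr_value)
SELECT counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Batch Requests/sec', N'Page life expectancy', N'User Connections');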


The Bottom Line

Part 3 is about categorizing slowness inside SQL Server: sessions, waits, jobs, error logs, and configuration gotchas. Don’t jump straight into query rewrites yet. You’re still isolating the nature of the slowdown. Having a consistent process for this reduces panic and anxiety.

In Part 4, we’ll cover what to do when the culprit is truly inside SQL Server: queries, indexes, and design choices.

 

Thanks for reading!

