
DallasDBAs.com

SQL Server Database Consulting

Stop Wasting Time Rebuilding Indexes Every Night

May 14, 2025 by Kevin3NF

Too many IT teams run index rebuild jobs on autopilot. Every night. Every index. No questions asked.

Here’s the truth: if you’re doing this daily on indexes smaller than 10,000 pages, you might be chewing up CPU, bloating your logs, and annoying your users for zero gain.

Let’s fix that.

Internal vs. External Fragmentation Explained

SQL Server tracks two kinds of fragmentation:

  • Internal = too much free space inside data pages
  • External = out-of-order pages on disk. The logical sequence doesn’t match physical placement

Think:

  • Internal = wasted space
  • External = wasted reads

 

Does External Fragmentation Even Matter on SSDs?

Not really*.

On spinning disks, scattered reads meant lots of tiny movements by the drive head, which meant slower performance.

But on SSDs? Everything is basically a memory chip, so page order has minimal impact on read speed. Yet SQL Server still reports external fragmentation like it’s 2008.

That’s why blindly reacting to fragmentation numbers is often misleading.

* External fragmentation, that is. Internal fragmentation (wasted space) still matters regardless of storage type.

Don’t Worry About % – Look at the Page Counts

My rule of thumb? If it’s under 10,000 pages, skip it.

You won’t see a meaningful performance benefit, and you’re just burning maintenance time and log space.

Microsoft’s default page-count threshold for index maintenance is 1,000 pages. At 8 KB per page, that’s roughly 8 MB. That is tiny.

Rebuild vs. Reorganize

Rebuild = drops and recreates the index (resource-heavy, spikes log use, can block unless ONLINE)

Reorganize = shuffles leaf-level pages gently (lighter, but less effective)

Here’s a smarter guideline than just looking at fragmentation %:

 
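The classic rule of thumb (reorganize in the 10-30% range, rebuild above 30%, and only when the index clears the page-count bar) translates into a query like this sketch, which uses LIMITED mode to keep the scan itself cheap. Thresholds here are rules of thumb, not gospel:

-- A sketch: list only indexes big enough and fragmented enough to matter
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.page_count,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30
            THEN 'REBUILD' ELSE 'REORGANIZE' END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count >= 10000
  AND ips.avg_fragmentation_in_percent >= 10
  AND i.name IS NOT NULL   -- skip heaps
ORDER BY ips.page_count DESC;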

Don’t overlook statistics! They matter more than fragmentation in most cases.

SQL Server uses statistics to estimate how many rows a query will return. These estimates help the optimizer decide whether to use an index or scan the table, which join method to pick, and more.

If the stats are outdated or inaccurate, SQL Server can choose a bad query plan, which can lead to slow performance, excessive reads, or blocked processes.

That’s why I recommend updating stats daily in most transactional systems. It has a much higher ROI than blindly rebuilding indexes.

I usually let SQL Server start with its default sampling algorithm, then adjust if needed, especially if your data distribution is skewed or volatile.
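In practice, the stats side can be as simple as a nightly Agent job step. A minimal sketch (dbo.Orders is a placeholder table name):

-- Update statistics database-wide using SQL Server's default sampling
EXEC sp_updatestats;

-- Or hit one skewed or volatile table harder with a full scan
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;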

Recommended Tools for Smarter Index Maintenance

Use tools that let you filter by both fragmentation level and page count, like:

  • Ola Hallengren’s IndexOptimize (this can be tweaked to do Stats Only)
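For example, one way to configure IndexOptimize along those lines (a sketch; parameter values are illustrative, and the proc is assumed to be installed in your maintenance database):

EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,                 -- below Level1: leave it alone
    @FragmentationMedium = 'INDEX_REORGANIZE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 10,
    @FragmentationLevel2 = 30,
    @MinNumberOfPages = 10000,                -- skip the small stuff entirely
    @UpdateStatistics = 'ALL';                -- or set the three Fragmentation
                                              -- actions to NULL for a stats-only run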

These give you control rather than a nightly sledgehammer.

The Bottom Line

Stop defaulting to nightly index rebuilds.
If you’re on SSDs and under 10K pages, you’re solving a problem that doesn’t exist — and possibly creating new ones (like log growth and blocking).

Be strategic:

  • Focus on meaningful fragmentation
  • Consider index size before acting
  • Separate stats updates from index maintenance

Smarter maintenance = better performance with fewer headaches.




Thanks for reading!

–Kevin

Filed Under: Index, SQL Tagged With: syndicated

Your SQL Server Is Whispering…Are You Listening?

May 7, 2025 by Kevin3NF

Early Warnings of a Meltdown (and How to Catch Them)

Most SQL Server crashes don’t come out of nowhere.
They leave breadcrumbs – red flags that something’s not right. The problem? If you don’t know where to look, you miss the signs…until it’s 2am and your CEO’s calling.

Let’s talk about how to listen for those whispers before they turn into full-blown alarms.

The Canary in the Query Mine

Here are a few subtle warning signs I see often before a server falls over:

  • Backup jobs start taking longer for no obvious reason.
  • TempDB grows larger than usual
  • Login failures spike
  • High CPU or long-running queries suddenly become “normal.”
  • End users QUIT complaining…because they’ve gotten used to the problem.
  • DBCC CHECKDB gets skipped because “it blocks things”

 

If any of these ring a bell, your SQL Server might be trying to tell you something.

Logs You Shouldn’t Ignore

Quick reality check: most people never check the logs until the server is on fire, or only after the fire is out.

  • SQL Server Errorlog:
    • Corruption messages (Severity 24 and 25).
      • Run this: SELECT * FROM msdb.dbo.suspect_pages;
    • Failed login attempts (possible brute force or misconfigurations).
      • Some of these might be from vulnerability-testing tools and can be ignored. Might.
    • I/O warnings that your storage is too slow or unreliable.
    • How to find the SQL Server ErrorLog on disk
  • Agent job failures that are quietly failing in the background.
  • Windows Event logs
  • Cluster logs

Pro tip: Set a calendar reminder to review logs once a week. Seriously—it takes 5 minutes and could save your bacon.

 

DIY Monitoring: No Budget? No Excuse.

You don’t need fancy tools to catch problems early. Here are a few dead-simple tricks:

  • SQL Agent job that checks disk space and emails you when it drops below 10%.
  • Query to track the longest-running queries over time—schedule it to run daily.
  • Custom SQL alerts for CPU over 90% for more than X minutes.
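The disk-space check, for instance, can be a plain T-SQL step inside an Agent job. A minimal sketch using sys.dm_os_volume_stats (the 10% threshold and the email wiring are up to you):

-- Flag any volume hosting database files with less than 10% free space
SELECT DISTINCT
       vs.volume_mount_point,
       CAST(vs.available_bytes * 100.0 / vs.total_bytes AS decimal(5, 2)) AS pct_free
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
WHERE vs.available_bytes * 100.0 / vs.total_bytes < 10;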

 

Set Up Built-In SQL Alerts (Takes 5 Minutes)

SQL Server has a native Alerting feature. No third-party tools required.

Here’s how to set up critical alerts for things you never want to miss:

--Step 1:

EXEC msdb.dbo.sp_add_operator 
    @name = N'DBA_Alerts', 
    @enabled = 1, 
    @email_address = N'[email protected]';
-- This requires Database Mail and an SMTP server

--Step 2
-- Severity 19-25 (from “Something’s wrong” to “We’re toast”)
-- Change and run for each severity you want to track. Sev 20 may give false positives

EXEC msdb.dbo.sp_add_alert 
    @name = N'Severity 19', 
    @message_id = 0, 
    @severity = 19, 
    @notification_message = N'Severity 19 error occurred', 
    @enabled = 1, 
    @delay_between_responses = 300, 
    @include_event_description_in = 1;
-- Omitting @database_name makes the alert fire for errors in any database

-- sp_add_alert doesn't notify anyone by itself; attach the operator:
EXEC msdb.dbo.sp_add_notification 
    @alert_name = N'Severity 19', 
    @operator_name = N'DBA_Alerts', 
    @notification_method = 1;  -- 1 = email

Repeat this for severity levels 19 to 25, or use a script to loop through them.
Bonus: Create alerts for SQL error numbers like:

  • 823, 824, 825 – signs of potential data corruption
  • 9002 – log file full
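Here’s one way to script that loop (a sketch; it assumes the DBA_Alerts operator from Step 1 already exists):

-- Create alerts for severities 19 through 25 and attach the operator to each
DECLARE @sev int = 19, @alert sysname;
WHILE @sev <= 25
BEGIN
    SET @alert = N'Severity ' + CAST(@sev AS nvarchar(10));
    IF NOT EXISTS (SELECT 1 FROM msdb.dbo.sysalerts WHERE name = @alert)
    BEGIN
        EXEC msdb.dbo.sp_add_alert
            @name = @alert,
            @severity = @sev,
            @enabled = 1,
            @delay_between_responses = 300,
            @include_event_description_in = 1;

        EXEC msdb.dbo.sp_add_notification
            @alert_name = @alert,
            @operator_name = N'DBA_Alerts',
            @notification_method = 1;  -- 1 = email
    END;
    SET @sev += 1;
END;

-- Error-number alerts are the same call, keyed on @message_id instead:
-- EXEC msdb.dbo.sp_add_alert @name = N'Error 9002 - Log Full',
--      @message_id = 9002, @severity = 0, @enabled = 1,
--      @delay_between_responses = 300, @include_event_description_in = 1;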

The Bottom Line

SQL Server doesn’t generally go down without a fight. But it does give off signals.
Learn to listen, and you’ll dramatically reduce those “why is the website down?” calls.




Thanks for reading!

–Kevin

Filed Under: Emergency, SQL, Troubleshooting Tagged With: syndicated

SQL Server Budgets: How to Justify Cost to Management

May 1, 2025 by Kevin3NF

Ever tried to request SQL Server funding from the CFO?

Your systems, your data, your customer experience – they all rely on that “invisible” database engine humming along behind the scenes. And if you’re responsible for keeping it running, you need the budget to do it right.

Here’s how to make your case without getting buried in tech jargon or glazed-over stares.

Speak Their Language: Business Impact

Skip the CPU counts and version numbers. Start with outcomes:

  • “We’re running SQL Server 2014. It’s out of mainstream support, which means we’re vulnerable to security issues, and Microsoft won’t help us fix them.”
  • “We’ve hit memory and CPU limits on Standard Edition. That’s why reports take forever, and our systems slow down during busy hours.”
  • “We’re storing critical data on spinning hard drives. Replacing them with SSDs will improve performance significantly and cut risk of disk failure.”

Translation Tip: Focus on Risk, Cost, and Growth

Every budget conversation should touch at least one of these:

Risk Avoidance

  • Running an outdated version? That’s unsupported = risky.
  • No proper backups or DR? That’s a business continuity issue.
  • One bad disk or query can bring the whole app down.

Cost Efficiency

  • SQL Server Standard Edition: ~$3K.
  • Emergency consulting to recover a failed system: $10K–$20K.
  • SSD upgrade: one-time investment.
  • Hourly downtime across 20 staff: thousands.

Growth & Performance

  • Hitting the ceiling on CPU/memory? You’ve outgrown Standard Edition.
  • Enterprise Edition gives you scale-up options, better indexing, and more.
  • SSDs reduce query lag – users and customers notice speed.

Use Visuals or Simple Metrics

Try:

  • A chart of your SQL Server data growth over the last 3 years
  • A simple graph showing CPU/memory usage vs. Standard Edition limits
  • Pie chart showing how SQL touches Sales, Finance, HR, etc.

Anything to remind them this isn’t “just IT stuff”. It’s tied to the business’s success.

Offer Tiered Options

Sometimes, they just need a menu.

Good:

  • SSD storage upgrade
  • SQL patching and monitoring
  • Basic backup review

Better:

  • Upgrade to supported SQL Server version
  • Performance tuning
  • Regular DR tests

Best:

  • Edition upgrade (if needed)
  • SSDs, full DR plan
  • Proactive monthly health checks

 

You’re giving them choices, but you’re also giving them consequences. They pick the investment level; you just make sure they understand what each buys them.

The Bottom Line: Be Their Strategic Partner

Don’t just ask for money. Frame it as helping them sleep better at night. You’re not spending their budget… you’re protecting their business. That’s the real value of good data infrastructure.

Filed Under: SQL Tagged With: syndicated

SQL Server Index Primer

April 23, 2025 by Kevin3NF

Indexes 101: What, Why, and When?

“What Is an Index?”
I get this question a lot, especially from developers and sysadmins who’ve been handed a SQL Server and told, “Keep it running fast.” No pressure.

At a basic level, an index is a roadmap. Instead of scanning every house on the street to find the one you want, you check a list. Indexes help SQL Server find data faster – less scanning, more targeted seeking. All those pages at the back of a technical book? Those are indexes. The same idea applies to tables.

Now, not all indexes are created equal. Some are used every day. Others are…let’s say, special occasion. Let’s walk through them in order from most to least common (based on my experience).

Clustered Indexes

What it is: Defines how the data is stored on disk. SQL Server physically orders the table based on this. One per table.
Why it matters: Every seek operation relies on this index. If you don’t create one, SQL Server stores the table as a “heap,” which can add time to queries.
When to use: Pretty much always, unless you’re loading massive amounts of data fast and don’t care about retrieval speed, such as in an ETL/ELT process.
Pro tip: Pick a column (or set) that is narrow, static, and unique (like an identity column). Even better, use a column that is frequently used to retrieve sets of data such as date for “Last month’s sales” types of reports.
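A quick illustration (table and column names are made up):

-- Narrow, static, unique: an identity column as the clustering key
CREATE TABLE dbo.Sales
(
    SaleID   int IDENTITY(1, 1) NOT NULL,
    SaleDate date               NOT NULL,
    Amount   decimal(10, 2)     NOT NULL,
    CONSTRAINT PK_Sales PRIMARY KEY CLUSTERED (SaleID)
);
-- For "last month's sales" reporting patterns, clustering on SaleDate
-- instead is a reasonable alternative.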

Non-Clustered Indexes

What it is: Think of these like side tables (they do add to the database size) with pointers back to the main data. You can have many per table, but too many can be an issue.
Why it matters: Great for covering queries. Especially helpful if they include only a few columns.
When to use: When the same query pattern happens often and doesn’t match the clustered key.
Watch out for: Too many non-clustered indexes = slow inserts and updates. Balance is key.

…Included Columns…

What it is: Bonus data tacked onto a non-clustered index. Doesn’t affect sort order but helps avoid expensive trips to the table to find more data.
Why it matters: Lets you “cover” more queries without bloating the main index definition.
When to use: If the query needs extra columns but they don’t need to be sorted.
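Continuing the hypothetical dbo.Sales table from above, this index sorts on SaleDate and carries Amount along for the ride, so a monthly sales total never has to touch the base table:

CREATE NONCLUSTERED INDEX IX_Sales_SaleDate
ON dbo.Sales (SaleDate)
INCLUDE (Amount);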

Filtered Indexes

What it is: Indexes with WHERE clauses.
Why it matters: Small, fast, and efficient for predictable filters.
When to use: Perfect for things like IsActive = 1 or Status = ‘Pending’.
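A sketch (dbo.Orders and its columns are hypothetical):

-- Index only the rows the hot query actually asks for
CREATE NONCLUSTERED INDEX IX_Orders_Pending
ON dbo.Orders (OrderDate)
WHERE Status = 'Pending';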

Columnstore Indexes

What it is: Think column-based storage, not row-based. Optimized for analytics.
Why it matters: Game-changing for reporting queries across millions of rows.
When to use: Data warehouses, not your OLTP system.
Bonus: Clustered columnstore = high compression. Great for archiving.
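For instance, on a hypothetical fact table in a reporting database:

-- Clustered columnstore: the whole table stored column-wise, highly compressed
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;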

XML Indexes

What it is: Indexes specifically for querying XML data types.
Why it matters: Without them, XML queries crawl.
When to use: If you’re unlucky enough to be storing structured data as XML blobs.

Spatial Indexes

What it is: Indexes that help with geography and geometry data types.
Why it matters: SQL Server can’t efficiently process spatial queries without these.
When to use: Mapping, geolocation apps, routing engines—very niche but powerful. I’ve only seen a spatial index in production once in my career.

The Bottom Line:

Indexing is both art and science. You want just enough to make reads fly and not too many that writes crawl. Think of it like your car with just the right tire pressure—too little and you drag, too much and you lose control.

— Thanks for reading!

Kevin

 

Filed Under: Beginner, Index, SQL Tagged With: syndicated

SQL Server Wait Stats

April 16, 2025 by Kevin3NF

Everything in SQL Server is waiting for something…this is by design. And most people never think about it…

Until your application is waiting to complete its request, sell the product or provide “the report” to the CEO!

Most SQL Server performance issues come down to one thing: waiting. Every time a query slows down, SQL Server records what it was waiting on. These wait stats are one of the best diagnostic tools for DBAs, sysadmins, and developers – but only if you know how to use them.

What Are Wait Stats?

SQL Server tracks the time spent waiting on resources like CPU, memory, disk, or locks. Think of them as traffic reports – some delays are normal, while others indicate serious bottlenecks. If your database is slow, wait stats tell you where to start looking.

How to Read Wait Stats

A simple query can reveal where your server is spending the most time:

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms 
FROM sys.dm_os_wait_stats 
ORDER BY wait_time_ms DESC;

Key columns:

  • wait_type – The specific type of wait that might be affecting performance
  • waiting_tasks_count – Number of times this wait has occurred
  • wait_time_ms – Total time spent waiting (cumulative since the counters were last cleared)
  • signal_wait_time_ms – How much of that wait time was spent waiting for CPU after the resource became available (high values suggest CPU pressure)

 

Common Wait Types and What They Mean:

[Table: common SQL Server wait types and possible fixes]

 

Many other waits exist, but these are some of the most common. Microsoft provides a full reference here.

SQLskills has the best list of wait types, and what to know about each, that I’ve ever seen. Go there and read before changing anything. MANY recorded waits are benign and not worth looking into.
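To cut the noise down, most wait-stats scripts exclude the known-benign waits before ranking. A small sketch (this exclusion list is deliberately short; see the SQLskills library for a proper one):

SELECT TOP (10)
       wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'CHECKPOINT_QUEUE',
                        N'XE_TIMER_EVENT', N'BROKER_TASK_STOP',
                        N'SQLTRACE_BUFFER_FLUSH')
ORDER BY wait_time_ms DESC;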

Finding and Fixing Performance Issues: A guide for you to try in your development environment

  1. Run the wait stats query to capture the current state.
  2. Identify the top wait type and correlate it with system metrics. Research it before acting.
  3. Apply a tuning change (indexing, parallelism adjustments, memory tuning).
  4. Re-run the query to compare results.
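Since the counters are cumulative, clear them between runs (development only) so the comparison is clean:

-- Reset the cumulative wait stats counters (dev/test only)
DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);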

This process provides a clear before-and-after snapshot of how performance improves.

The Bottom Line

Wait stats don’t tell you exactly what’s broken, but they show where to investigate. The key is to correlate them with query execution plans, server configuration, and system resource usage. Tracking these waits over time can also help spot performance trends before they become critical issues.

SQL Server is always waiting for something. The question is – what are you going to do about it? What are you waiting for?!

Thanks for reading!

–Kevin

Get the first month of Pocket DBA® FREE from Dallas DBAs!

 

Filed Under: Performance, SQL Tagged With: performance, SQL, syndicated

Why Your SQL Server Backups Aren’t Enough

April 9, 2025 by Kevin3NF

SQL Server backups are database insurance – you don’t really appreciate them until disaster strikes. But here’s the kicker: having backups doesn’t mean your data is safe. If you’re not regularly doing test restores, your backup strategy is a ticking time bomb.

 

The False Sense of Security

Many IT leaders and system admins think, “We have full backups every night. We’re covered.” But when the time comes to restore, they discover:

  • The backup file is corrupt.
  • The storage location is inaccessible.
  • The restore process takes way longer than expected.
  • The recovery model wasn’t configured properly.
  • The point-in-time restore doesn’t actually bring back the data they need.

At that point, it’s not a “backup strategy.” It’s a data loss incident.

Why Restores Must Be Tested

SQL Server backups are only as good as your ability to restore them. If you’re not proactively testing restores, you’re just hoping they work when disaster strikes. Hope is not a strategy.

  • Test different restore scenarios: full, differential, and transaction log restores.
  • Validate integrity with DBCC CHECKDB after restores.
  • Ensure RTO and RPO align with business needs. (Translation: can you restore fast enough to avoid a business meltdown?)
  • Automate restore tests so you’re not guessing when the pressure is on.

Automating Restore Tests with dbatools

If you’re not testing restores because it’s “too time-consuming,” dbatools.io has your back. The Test-DbaLastBackup command automates the process by:

  1. Restoring the last backup of a database to a test environment.
  2. Running DBCC CHECKDB to ensure data integrity.
  3. Reporting on success or failure, so you know where issues exist before you actually need the backup.

Example Command:
Test-DbaLastBackup -SqlInstance YourSQLServer -Database YourDatabase

This takes a lot of manual effort out of the equation—no more excuses for not testing!
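If you prefer to script a test restore yourself in T-SQL, the core steps look roughly like this (paths and logical file names are placeholders; check yours with RESTORE FILELISTONLY):

-- 1. Sanity-check the backup file itself
RESTORE VERIFYONLY FROM DISK = N'X:\Backups\YourDatabase.bak';

-- 2. Restore to a throwaway name so production is untouched
RESTORE DATABASE YourDatabase_Test
FROM DISK = N'X:\Backups\YourDatabase.bak'
WITH MOVE N'YourDatabase'     TO N'X:\TestRestore\YourDatabase_Test.mdf',
     MOVE N'YourDatabase_log' TO N'X:\TestRestore\YourDatabase_Test_log.ldf',
     STATS = 10;

-- 3. Verify integrity of the restored copy
DBCC CHECKDB (N'YourDatabase_Test') WITH NO_INFOMSGS;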

Action Plan for SQL Server Resilience

  1. Schedule regular test restores (weekly or monthly) and document the process.
  2. Use DBCC CHECKDB to verify database integrity post-restore.
  3. Leverage Test-DbaLastBackup to automate restore verification.
  4. Confirm backup files are accessible and stored securely offsite.
  5. Train your team on recovery processes, because backups are useless if no one knows how to restore them.
  6. Monitor backup success, but more importantly, track restore success.

The Bottom Line:

A backup plan without restore testing is a disaster waiting to happen. Don’t wait until you’re knee-deep in downtime to realize your backups weren’t as solid as you thought. Test your restores. Validate your data. Sleep better at night.

Thanks for reading!

–Kevin

New Pocket DBA® clients get the first month FREE!

Get a Free SQL Server Health Check!

 

 

Filed Under: backup, HADR, Restore, SQL Tagged With: syndicated
