
DallasDBAs.com

SQL Server Database Consulting


Stop Wasting Time Rebuilding Indexes Every Night

May 14, 2025 by Kevin3NF

Too many IT teams run index rebuild jobs on autopilot. Every night. Every index. No questions asked.

Here’s the truth: if you’re doing this daily on indexes smaller than 10,000 pages, you might be chewing up CPU, bloating your logs, and annoying your users, all for zero gain.

Let’s fix that.

Internal vs. External fragmentation explained

SQL Server tracks two kinds of fragmentation:

  • Internal = too much free space inside data pages
  • External = out-of-order pages on disk; the logical sequence doesn’t match the physical placement

Think:

  • Internal = wasted space
  • External = wasted reads


Does External Fragmentation Even Matter on SSDs?

Not really*.

On spinning disks, scattered reads could mean lots of tiny movements by the drive head = slower performance.

But on SSDs? Everything is basically a memory chip, so page order has minimal impact on read speed. Yet SQL Server still reports external fragmentation like it’s 2008.

That’s why blindly reacting to fragmentation numbers is often misleading.

* Not to be confused with internal fragmentation, which wastes space regardless of storage type

Don’t Worry about % – look at the page counts

My rule of thumb? If it’s under 10,000 pages, skip it.

You won’t see a meaningful performance benefit, and you’re just burning maintenance time and log space.

Microsoft’s default page-count threshold for index maintenance is 1,000 pages. At 8 KB per page, that’s roughly 8 MB. That is tiny.
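To see which indexes actually clear the bar, you can filter on page count before you ever look at fragmentation. A minimal sketch using the standard DMV (the 10,000-page cutoff is the rule of thumb above, not a Microsoft default):

```sql
-- List only indexes big enough to be worth maintaining,
-- with their fragmentation for context.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.page_count,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count >= 10000        -- skip the small stuff entirely
ORDER BY ips.page_count DESC;
```

Anything that doesn’t show up in this list probably doesn’t belong in your nightly job.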

Rebuild vs. Reorganize

Rebuild = drops and recreates the index (resource-heavy, spikes log use, and can block unless run with ONLINE = ON)

Reorganize = shuffles leaf-level pages gently (lighter, but less effective)
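For reference, the two operations side by side (index and table names here are hypothetical):

```sql
-- Reorganize: lightweight, always online, compacts leaf-level pages in place
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

-- Rebuild: drops and recreates the index; ONLINE = ON reduces blocking
-- (online rebuilds require Enterprise Edition or Azure SQL Database)
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (ONLINE = ON);
```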

Here’s a smarter guideline than fragmentation % alone: check the page count first, and only then let the fragmentation level decide between reorganizing and rebuilding.

Don’t overlook statistics! They matter more than fragmentation in most cases.

SQL Server uses statistics to estimate how many rows a query will return. These estimates help the optimizer decide whether to use an index or scan the table, which join methods to pick, and more.

If the stats are outdated or inaccurate, SQL Server can choose a bad query plan, which can lead to slow performance, excessive reads, or blocked processes.

That’s why I recommend updating stats daily in most transactional systems. It has a much higher ROI than blindly rebuilding indexes.

I usually let SQL Server start with its default sampling algorithm, then adjust if needed, especially if your data distribution is skewed or volatile.
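A few ways to do that, from a single table up to the whole database (dbo.Orders is a stand-in table name):

```sql
-- Update stats on one table with SQL Server's default sampling
UPDATE STATISTICS dbo.Orders;

-- Force a full scan when skewed or volatile data makes sampled stats unreliable
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or sweep the whole database; only touches stats that have modifications
EXEC sp_updatestats;
```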

Recommended Tools for Smarter Index Maintenance

Use tools that let you filter by both fragmentation level and page count, like:

  • Ola Hallengren’s IndexOptimize (this can be tweaked to do Stats Only)

This gives you control rather than a nightly sledgehammer.
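As one example, the commonly documented pattern for running IndexOptimize as a stats-only job (parameter names per Ola Hallengren’s documentation; adjust to your environment):

```sql
-- Skip index maintenance entirely; just update statistics that changed
EXEC dbo.IndexOptimize
     @Databases = 'USER_DATABASES',
     @FragmentationLow = NULL,       -- NULL = take no index action
     @FragmentationMedium = NULL,
     @FragmentationHigh = NULL,
     @UpdateStatistics = 'ALL',
     @OnlyModifiedStatistics = 'Y';
```

Run this nightly and save the actual index maintenance for a less frequent, more targeted job.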

The Bottom Line

Stop defaulting to nightly index rebuilds.
If you’re on SSDs and under 10K pages, you’re solving a problem that doesn’t exist — and possibly creating new ones (like log growth and blocking).

Be strategic:

  • Focus on meaningful fragmentation
  • Consider index size before acting
  • Separate stats updates from index maintenance

Smarter maintenance = better performance with fewer headaches.




Thanks for reading!

–Kevin

Filed Under: Index, SQL Tagged With: syndicated

SQL Server Index Primer

April 23, 2025 by Kevin3NF

Indexes 101: What, Why, and When?

“What Is an Index?”
I get this question a lot, especially from developers and sysadmins who’ve been handed a SQL Server and told, “Keep it running fast.” No pressure.

At a basic level, an index is a roadmap. Instead of scanning every house on the street to find the one you want, you check a list. Indexes help SQL Server find data faster – less scanning, more targeted seeking. All those pages at the back of a technical book? Those are indexes. The same idea applies to tables.

Now, not all indexes are created equal. Some are used every day. Others are…let’s say, special occasion. Let’s walk through them in order from most to least common (based on my experience).

Clustered Indexes

What it is: Defines how the data is stored on disk. SQL Server physically orders the table based on this. One per table.
Why it matters: Most lookups ultimately depend on this index, because nonclustered indexes point back to it. If you don’t create one, the table is stored as an unordered “heap,” which can add time to queries.
When to use: Pretty much always, unless you’re loading massive amounts of data fast and don’t care about retrieval speed such as an ETL/ELT process.
Pro tip: Pick a column (or set) that is narrow, static, and unique (like an identity column). Even better, use a column that is frequently used to retrieve sets of data such as date for “Last month’s sales” types of reports.
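Putting that pro tip into code (table and index names are made up for illustration):

```sql
-- Narrow, static, unique clustering key: the identity column
CREATE TABLE dbo.Sales
(
    SaleID   INT IDENTITY(1,1) NOT NULL,
    SaleDate DATE NOT NULL,
    Amount   DECIMAL(10,2) NOT NULL
);

CREATE UNIQUE CLUSTERED INDEX CX_Sales_SaleID ON dbo.Sales (SaleID);

-- Alternative: cluster on the date column if range reports
-- ("last month's sales") dominate your workload
-- CREATE CLUSTERED INDEX CX_Sales_SaleDate ON dbo.Sales (SaleDate);
```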

Non-Clustered Indexes

What it is: Think of these like side tables (they do add to the database size) with pointers back to the main data. You can have many per table, but too many can be an issue.
Why it matters: Great for covering queries, especially when they only need a few columns.
When to use: When the same query pattern happens often and doesn’t match the clustered key.
Watch out for: Too many non-clustered indexes = slow inserts and updates. Balance is key.

…Included Columns…

What it is: Bonus data tacked onto a non-clustered index. Doesn’t affect sort order but helps avoid expensive trips to the table to find more data.
Why it matters: Lets you “cover” more queries without bloating the main index definition.
When to use: If the query needs extra columns but they don’t need to be sorted.
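A quick sketch of a nonclustered index with included columns, reusing the hypothetical dbo.Sales table from above:

```sql
-- The key column (SaleDate) drives the seek;
-- the INCLUDE column rides along at the leaf level, unsorted
CREATE NONCLUSTERED INDEX IX_Sales_SaleDate
    ON dbo.Sales (SaleDate)
    INCLUDE (Amount);

-- This query is now fully "covered" - no trip back to the table:
-- SELECT SaleDate, Amount FROM dbo.Sales WHERE SaleDate >= '2025-01-01';
```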

Filtered Indexes

What it is: Indexes with WHERE clauses.
Why it matters: Small, fast, and efficient for predictable filters.
When to use: Perfect for things like IsActive = 1 or Status = ‘Pending’.
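That Status = ‘Pending’ example as actual DDL (table and column names assumed):

```sql
-- Indexes only the pending rows; small and cheap to maintain
CREATE NONCLUSTERED INDEX IX_Orders_Pending
    ON dbo.Orders (OrderDate)
    WHERE Status = 'Pending';
```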

Columnstore Indexes

What it is: Think column-based storage, not row-based. Optimized for analytics.
Why it matters: Game-changing for reporting queries across millions of rows.
When to use: Data warehouses, not your OLTP system.
Bonus: Clustered columnstore = high compression. Great for archiving.
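Both flavors, sketched against hypothetical tables:

```sql
-- Clustered columnstore: the entire table stored column-wise, heavily compressed
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- Nonclustered columnstore: analytics over selected columns
-- on top of an existing rowstore table
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Sales
    ON dbo.Sales (SaleDate, Amount);
```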

XML Indexes

What it is: Indexes specifically for querying XML data types.
Why it matters: Without them, XML queries crawl.
When to use: If you’re unlucky enough to be storing structured data as XML blobs.
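The basic form, assuming an xml column named OrderDetailsXml (the table needs a clustered primary key first):

```sql
-- Primary XML index: shreds the XML into an internal structure
-- so queries don't have to parse the blob every time
CREATE PRIMARY XML INDEX PXML_Orders_Details
    ON dbo.Orders (OrderDetailsXml);
```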

Spatial Indexes

What it is: Indexes that help with geography and geometry data types.
Why it matters: SQL Server can’t efficiently process spatial queries without these.
When to use: Mapping, geolocation apps, routing engines—very niche but powerful. I’ve only seen a spatial index in production once in my career.
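For completeness, the shape of one (column and table names assumed; the table needs a clustered primary key):

```sql
-- Spatial index over a geography column, using grid tessellation
CREATE SPATIAL INDEX SIdx_Locations_Geo
    ON dbo.Locations (GeoPoint)
    USING GEOGRAPHY_GRID;
```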

The Bottom Line:

Indexing is both art and science. You want just enough to make reads fly and not too many that writes crawl. Think of it like your car with just the right tire pressure—too little and you drag, too much and you lose control.

— Thanks for reading!

Kevin

 

Filed Under: Beginner, Index, SQL Tagged With: syndicated

3rd Party Applications Have Issues

May 2, 2024 by Kevin3NF

As a SQL DBA, what do you do when a vendor application has performance problems that are code related?

Server settings don’t generally seem to be an issue.

Queries and vendor code…total hands off. I just point at code and say “There’s a great choice for optimizing in your next update!”

Indexes are the “Sticky Bits” in between client data and vendor code.

Seems like we get 30 indexes, or zero. Neither is usually right.

I get it…how do you tune an app for every possible use case for any client that may install it?

I always have my client get the official rules from the vendor. We don’t want to break an upgrade, or void a support agreement.

In some cases we’ve had to “Don’t Ask, Don’t Tell” our way into a solution with undo/redo scripts for very necessary indexes.

What do you do when tuning a 3rd party app?

 

— Kevin

Filed Under: Index, Performance, SQL Tagged With: syndicated

Duplicate Indexes Explained

November 13, 2019 by Kevin3NF

What are duplicate indexes, and why do I care?

This is an entry level explanation, with an analogy for new DBAs.

Duplicate indexes are those that exactly match the Key and Included columns. That’s easy.

Possible duplicate indexes are those that very closely match Key/Included columns.

Why do you care?

Indexes have to be maintained. When I say that, most people immediately think of Reorganizing, rebuilding and updating statistics, and they are not wrong.

But…don’t overlook the updates that happen to indexes when the data changes in the columns they are based on. If you have a duplicate index and you add, change, or delete a row, BOTH of the indexes are changed. This takes CPU, memory, and log space to do. Multiply by multiple indexes across databases with tables that have millions or billions of rows and you start feeling this effort.

Duplicates:

Consider the following two indexes:

-- DisplayName only
CREATE INDEX [NC_DisplayName] ON [dbo].[Users]
	([DisplayName] ASC)
GO

--DisplayName plus additional info
CREATE INDEX [NC_DisplayName_Includes] ON [dbo].[Users]
	([DisplayName] ASC)
INCLUDE ([Reputation],[CreationDate],[UpVotes],[Views]) 
GO

If you query the StackOverflow database for me (Kevin3NF) using

Select DisplayName
From Users
Where DisplayName = 'Kevin3NF'

It will use the first index *

If you add ‘UpVotes’ to the Select, it will use the second index.

So…how is this a duplicate index?

The Key column (DisplayName) is the same.

Drop the NC_DisplayName index, and your first query will use the second index because even though there are 4 additional columns, the Key column is still there and this index is better than a table scan.

You also get the benefit of not having to update NC_DisplayName any time data is changed.

An Analogy of Two Doors:

You have two doors:

The First door is simple.  It has one aspect to it…a handle.

The Second door has a handle, as well as all sorts of Included extras.

Both doors lead to the same place.  Assume some people like the simplicity of Door1 and some people really want to go through the Steampunk style Door2.  Both get what they want (destination), by picking the door that suits them.

If you take away Door1 (Drop Index), the folks that prefer it can simply go through the fancy Door2 and get where they needed to be. Door2 has included extras (columns) that the “Simple door” folks just ignore.

It really is that simple.

But…don’t go make an index with all possible columns!  That’s a whole different kind of bad indexing.

Now…this does not mean you need to go blindly dropping indexes.  Research, test, verify which indexes are being used/unused, etc.  Definitely don’t go and drop a Primary Key index just because it’s a possible duplicate.
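One hedged starting point for that research: group indexes by their key-column list, so exact key matches stand out (needs SQL Server 2017+ for STRING_AGG):

```sql
-- Indexes on the same table with identical key_columns are duplicate candidates
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       STRING_AGG(c.name, ',') WITHIN GROUP (ORDER BY ic.key_ordinal) AS key_columns
FROM sys.indexes i
JOIN sys.index_columns ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE ic.is_included_column = 0   -- key columns only
  AND i.index_id > 0              -- skip heaps
GROUP BY i.object_id, i.index_id, i.name
ORDER BY table_name, key_columns;
```

Remember to compare Included columns too before calling anything a true duplicate.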

I use sp_BlitzIndex for my initial info gathering when doing Health Checks, Index Tuning, etc. It’s free and solid.

Video:

* I have run into a weird issue I am trying to sort out. In my testing, it appears that SQL Server might be basing its index decisions on the Index Creation date, all other things being equal. I will update as I find out more. This was done on SQL 2016, SP2 with the StackOverflow public data dump.

Per my good friend Pinal Dave (b|t), this is a known behavior: ” if two index has same key columns the one created later is used.”

Go forth and research!

Thanks for reading!

Kevin3NF

Follow @Dallas_DBAs

Filed Under: Accidental DBA, Beginner, Index

Getting Started with Index and Execution Plan Basics

August 24, 2018 by Kevin3NF

If you are a developer or new SQL Server DBA and have never really worked with query tuning, this post is for you.  If you are an experienced DBA/Tuner and want to nit-pick me saying field vs. column, feel free to leave a comment that I will delete or never publish, lol.  This is basics, yo…

I concentrate on how to measure the performance of a query, as well as the basics of “seeing” how SQL Server is executing the query to return the results.

I could type all of this up, but watching the video is going to be a WHOLE lot better teacher, so here you go:

As always, feel free to reach out via the comments here or on the video.  Or, follow me on Twitter and ask away!

Thanks for reading!

Kevin3NF


Filed Under: Accidental DBA, Beginner, Index, Performance

Determine an Index Size in SSMS

July 28, 2017 by Kevin3NF

Quick post today…I needed to find a quick way to get the size of a specific index without using T-SQL.

Tables are easy…right-click >> Properties >> Storage. Indexes don’t have a storage option. You can get the sum of all indexes on a table from the “Disk Usage by Top Tables” built-in report, but not one by one.

To do it in the SSMS GUI, expand the Database>>Tables>>Indexes:

SQL Index Table Size

 

Right-Click the index in question, select “Properties”, then “Fragmentation.”:

index fragmentation

As we know, pages are 8 KB, so now it’s just a simple math problem:

46,236 pages × 8 KB = 369,888 KB ÷ 1,024 ≈ 361 MB (rounded for conversation’s sake):

SQl Server Index size calculator

If you need more than one, but not all…you’re going to need to Google up some T-SQL to run.  I found a variety of ways to do it, and didn’t like any of them.
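For what it’s worth, here’s one hedged T-SQL option using sys.dm_db_partition_stats (dbo.Users is a stand-in; swap in your own table):

```sql
-- Size of each index on one table, derived from its used page count
SELECT i.name                              AS index_name,
       SUM(ps.used_page_count)             AS pages,
       SUM(ps.used_page_count) * 8 / 1024  AS size_mb   -- 8 KB pages -> MB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.object_id = OBJECT_ID('dbo.Users')
GROUP BY i.name;
```

Same 8 KB-per-page math as the GUI approach, just without the clicking.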

Thanks for reading!

Kevin3NF

Follow Dallas DBAs on LinkedIn

 

Filed Under: Accidental DBA, Beginner, Index, SSMS
