
DallasDBAs.com

SQL Server Database Consulting



Your SQL Servers are Misconfigured

August 2, 2018 by Kevin3NF

Corgi pupper is sad your Server is slow

So are some of mine.

So are 90% of the servers I run across, especially in mid-size businesses (<1,000 employees, or typically 10-20 servers).

It’s probably not your fault.

There are a bunch of instance-level settings in SQL Server that need to be set specifically for your environment and the workload you are going to run against it.  Most of the defaults in the SQL Server installer are there to make sure performance is acceptable on a very modest server.  Until version 2016, the installer had not changed much.

The problem is that in a large number of organizations, the person installing SQL Server is a Developer, or a SysAdmin running a script.  Both are great at their core jobs, but does either of them know what Cost Threshold for Parallelism is?  It is not in the installer…it has to be configured post-install, or left at a very old default number that most likely makes no sense for you.

Likewise, if you are running SQL Server installed from a script that came with a 3rd party software product, you very likely have the “safest” possible settings in place.

Some of the most common things I see everywhere:

  • Instant File Initialization not enabled
  • Cost Threshold for Parallelism not set correctly
  • MAXDOP set to 1 or ALL CPUs
  • MAX Memory left at default
  • MAX Memory left at default on a multi-instance server
  • Default file locations for system databases, or all of them
  • No maintenance processes

That is the short list.
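For the parallelism and memory items on that list, here’s a hedged sketch of how to review and change them with sp_configure. The specific numbers below are illustrative only – the right values depend entirely on your hardware and workload, so test in Dev first:

```sql
-- Sketch only: the right values depend on YOUR workload, not on this blog post.
-- Advanced options have to be visible before you can see or change these.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Review the current values:
EXEC sp_configure 'cost threshold for parallelism';  -- default is 5, a very old number
EXEC sp_configure 'max degree of parallelism';       -- 0 = all CPUs
EXEC sp_configure 'max server memory (MB)';          -- default 2147483647 = effectively unlimited

-- Example changes (illustrative values, NOT recommendations):
EXEC sp_configure 'cost threshold for parallelism', 50;
EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'max server memory (MB)', 28672;   -- leave headroom for the OS
RECONFIGURE;
```

Instant File Initialization and file locations are set differently (a Windows security policy and per-database settings, respectively), which is exactly why they get missed by installer-only setups.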

Let’s talk.  Click HERE to set a meeting on my Calendar.

Thanks for reading!

Kevin3NF

Filed Under: Configuration, Install

T-SQL Tuesday: Code You Would Hate To Live Without

July 10, 2018 by Kevin3NF

 

T-SQL Tuesday is a monthly blog party for the SQL Server community. It is the brainchild of Adam Machanic (b|t) and this month’s edition is hosted by Bert Wagner (b|t) who has asked us to “write about code you’ve written that you would hate to live without.”

Interesting.  I go into my toolbox of code/scripts every day.  All day long.  Oddly, most of the things I’ve written are simple things I don’t want to re-type, or are client specific (some going back to 2006 for a custom log shipping troubleshooting process…).  A great many of them are written by others…most often in my case I use things from Adam, Brent and Pinal.  Some are customized, most not.

Recently, I’ve been getting a lot of performance tuning work, much of which is basically “things are slow…can you fix them?” type of requests.  Most experienced DBAs know that there are a few gazillion factors that can lead to this request, and I won’t re-hash them here.

Let’s assume that we’ve optimized or eliminated the server, disks, network, etc., and we’re now looking at SQL code and everyone’s favorite – indexes.

I have two scripts I use that give me a quick overview of the type of work SQL Server is doing.   These complement each other, and are used AS A STARTING POINT to locate the low-hanging fruit that can be causing excessive work for the disks and memory of the server.

First off – let’s look at whether any HEAPs and Clustered indexes are getting full scans (this is not perfect info…just a starting point, remember):

-- base query provided by Jonathan Fite:
-- https://dba.stackexchange.com/questions/210341/number-of-full-table-scans
-- Added sys.partitions for rowcount, Impact computed column, Where conditions and Sorting
-- Kevin3NF, https://dallasdbas.com

exec sp_MSforeachdb
'use [?]

--for just one database, choose the database context and simply highlight and run the below query
SELECT
	db_name() 
	, S.name AS SchemaName
    , O.name AS ObjectName
    , I.name AS IndexName
    --, I.type AS IndexType
    , I.type_desc AS IndexType_Desc
    , IUS.last_user_scan
	, IUS.user_scans
	, p.rows
	, IUS.user_scans * p.rows as [Impact]
FROM sys.objects AS O 
    INNER JOIN sys.schemas AS S ON S.schema_id = O.schema_id
    INNER JOIN sys.indexes AS I ON I.object_id = O.object_id
    INNER JOIN sys.dm_db_index_usage_stats AS IUS ON IUS.index_id = I.index_id AND IUS.object_id = I.object_id AND IUS.database_id = DB_ID() --limit to the current DB; this DMV is instance-wide and object_ids can repeat across databases
	JOIN sys.partitions p on p.object_id = I.object_id and p.index_id = I.index_id
WHERE 1=1 
	and I.type IN (0,1) --0=HEAP, 1=CLUSTERED (table)
	and IUS.user_scans > 5
	and p.rows > 1000
Order by 
	--IUS.user_scans desc
	--p.Rows Desc
	IUS.user_scans * p.rows DESC
'

When I run this in its entirety, I get a section of results for every database, for any scans returned by sys.dm_db_index_usage_stats. I can quickly scroll up and down that result set and eyeball any massive numbers for the database(s) in question, or all of them for the server as a whole.  On a sample server, I found 2 databases with crazy numbers, so I copied the results for those two into Excel and sorted by the Impact column (which is just user_scans * rows):

Results worth Investigating

As you can see on DatabaseB there is a 27 million row table that gets scanned…a lot.  I know some of this is a daily copy from one DB to another…but not that many, so I need to go find out what else is causing these.  While I’ve been writing this, it has been scanned 2000 more times.  On DatabaseA, I have 3 tables I want to look into…2 of them due to sheer size of the tables and one because it is hit so frequently.  In the SQL Query results, the last_user_scan column shows current date and time for all of these…copying to Excel didn’t give me the time…just the date, but that is just a formatting issue.

Note that you can change the results in the WHERE clause to suit your needs.

 

My next step in this process is to run a query I based on Pinal Dave’s (b|t) Missing Index query, which simply gets the top 5 missing indexes for each database, ordered by Impact:

-- Get the top 5 missing indexes for each database on an instance
-- and output to a format that can be read as-is, or pasted to Excel for filtering, etc.

-- Thanks to Pinal Dave at www.sqlauthority.com for providing the base query this is built on.
-- Pinal is a good guy and a great teacher...you should hire him for your performance issues
-- or me, lol

-- Test in your Dev environment before trying this in Prod.  
-- I make no guarantees on performance or SQL version compatibility

-- Filtering out various things is done in the WHERE clause (impact, last_seek, etc.)


-- Create a temp table to hold the results:
CREATE TABLE [dbo].[#MI](
	[dbname] [nvarchar](128) NULL,
	[object_id] [nvarchar](255) NOT NULL,
	[improvement_measure] [bigint] NULL,
	[create_index_statement] [nvarchar](4000) NULL,
	[group_handle] [bigint] NOT NULL,
	[unique_compiles] [bigint] NOT NULL,
	[user_seeks] [bigint] NOT NULL,
	[user_scans] [bigint] NOT NULL,
	[last_user_seek] [datetime] NULL,
	[last_user_scan] [datetime] NULL,
	[avg_total_user_cost] [float] NULL,
	[avg_user_impact] [float] NULL,
	[system_seeks] [bigint] NOT NULL,
	[system_scans] [bigint] NOT NULL,
	[last_system_seek] [datetime] NULL,
	[last_system_scan] [datetime] NULL,
	[avg_total_system_cost] [float] NULL,
	[avg_system_impact] [float] NULL,
	[database_id] [int] NOT NULL	
) 

-- Run through each db on the instance and record results to the temp table
-- yes, sp_MSforeachdb is both undocumented and unsupported, but for this it works just fine

exec master.sys.sp_MSforeachdb
'  use [?];
INSERT #MI
SELECT top 5
	db_name(mid.database_id) as [dbname],
	Object_name(mid.[object_id]),
	Cast(((migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans)))as INT) AS [improvement_measure],
	''CREATE INDEX [missing_index_'' + CONVERT (varchar, mig.index_group_handle) + ''_'' + CONVERT (varchar, mid.index_handle)
	+ ''_'' + LEFT (PARSENAME(mid.statement, 1), 32) + '']''
	+ '' ON '' + mid.statement
	+ '' ('' + ISNULL (mid.equality_columns,'''')
	+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN '','' ELSE '''' END
	+ ISNULL (mid.inequality_columns, '''')
	+ '')''
	+ ISNULL ('' INCLUDE ('' + mid.included_columns + '')'', '''') AS create_index_statement,
	migs.*, 
	mid.database_id
FROM sys.dm_db_missing_index_groups mig
	INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
	INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE 1=1
	--and migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 40000 --uncomment to filter by improvement measure
	--and migs.user_seeks > 250									-- change this for activity level of the index, leave commented out to get back everything
	and mid.database_id > 4										--Skip system databases
	and db_name(mid.database_id) = ''?''						--Get top 5 for only the current database
ORDER BY 
	migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
'

--now that the table is populated, show me the data!
--no data for a DB means no index recommendations.

Select
	[dbname] 
	,[object_id] 
	,[Improvement_Measure]
	,[create_index_statement]
	,[user_seeks]
	,[user_scans]
	,[last_user_seek]
	,[last_user_scan]
	,[avg_total_user_cost]
	,[avg_user_impact]
From #MI
Where 1=1
Order by [dbname],[improvement_measure] desc

--Clean up your mess, you weren't raised in a barn!
Drop Table #MI

Again, use the WHERE clause(s) to filter as you see fit.  In my case I see only one of the tables from the FULL SCAN query here, so I already know I have a good shot at finding at least a valid place to start looking:

I use the Missing Index script all the time, in a variety of ways.   Also, Unused indexes to see if newly created ones are actually in play and being used by the Query Optimizer.
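The Unused Index check I mentioned can be sketched roughly like this (this is the general shape, not my exact script – and remember that sys.dm_db_index_usage_stats resets when the instance restarts, so check your uptime before trusting it):

```sql
-- Hedged sketch: nonclustered indexes that are being written to but never read.
-- Stats reset on instance restart - verify uptime before acting on this.
SELECT o.name           AS TableName
     , i.name           AS IndexName
     , ISNULL(ius.user_seeks + ius.user_scans + ius.user_lookups, 0) AS Reads
     , ISNULL(ius.user_updates, 0) AS Writes
FROM sys.indexes i
    INNER JOIN sys.objects o ON o.object_id = i.object_id
    LEFT JOIN sys.dm_db_index_usage_stats ius
        ON  ius.object_id = i.object_id
        AND ius.index_id = i.index_id
        AND ius.database_id = DB_ID()   --this DMV is instance-wide
WHERE o.is_ms_shipped = 0
  AND i.type = 2                        --nonclustered only
  AND ISNULL(ius.user_seeks + ius.user_scans + ius.user_lookups, 0) = 0
ORDER BY ISNULL(ius.user_updates, 0) DESC;
```

If a newly created index shows zero reads here after a representative workload period, the Query Optimizer isn’t choosing it, and you should find out why before it just adds write overhead.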

And that’s all I have to say about that…except: USE THESE AT YOUR OWN RISK AND LEARN WHAT THEY DO FIRST! 🙂

Thanks for reading!

Kevin

Filed Under: Uncategorized

How I made money off Brent Ozar

June 29, 2018 by Kevin3NF

Dallas DBAs is a small database administration consulting firm. We don’t have a sales staff. It’s just me, a Junior DBA, and my wife to inject reality into my delusions of grandeur from time to time.

We live and die from referral business. Nothing beats a happy client sending you something to put on the site, or post on LinkedIn. Even better when they tell someone else.

So how did I make money off Brent Ozar…one of the best known SQL Server DBA/performance dudes in the world?  Well…without him knowing it of course!

Some months ago I got an email from my friend Russ.  A recruiter had approached him about a 4-6 week full time contract cleaning up some SQL Servers.  Russ is more on the Business Intelligence side of the SQL Server world.  I’m a pure DBA so he forwarded the email to me.  I didn’t have the bandwidth for a 40-hour week, but most DBA gigs I see are not actually working a full 8 hours a day, so I fired off an email to the recruiter offering remote, part-time as a choice.  I figured that would be the end of it.  I told him “Cleaning up SQL Server messes is my specialty, and I can probably do it 3x faster than most DBAs.  Plus I won’t milk the client for extra hours.”

Within an hour, the recruiter was setting up a phone call with the client. That day. They led with “We’ve read your blog, so we aren’t going to ask you technical questions.” (that was very cool…)

Essentially they told me they had signed up for SQL ConstantCare from Brent Ozar Unlimited, which in a nutshell reviews your servers, looks for bad things and tells you what to fix.  Prior to the call I only knew of this product’s existence, not the details. I told them I was well aware of the type of tools that Brent and team had created, and had been to a number of talks and classes from Brent, so I had a pretty good idea what they were looking at. I also told them that if they were basically asking for someone to come in and apply best practices that made sense for them, then I was the guy.

Done deal, send me a contract Mr. Recruiter Guy.

All of this happened in one business day.  From Brent to Client to Recruiter to Russ to me to Recruiter to Client to Me.

Total billing so far for what the client thought was a 4-6 week effort?  Right at 40 hours, done in my spare morning hours I keep open for simple tasks on smaller clients.

If you have a bunch of servers, no SQL DBA and you need to know where you stand…ConstantCare is a great idea.  Then ping me and I’ll fix them for you.

Update: This same client has been on a weekly minimum hours contract with me since October 2018 including on-call, which they have never needed to use.

Thanks for reading!

Kevin3NF


Filed Under: Uncategorized

Impact of FULL Scans in your SQL Server DB

June 22, 2018 by Kevin3NF

Full scans of a HEAP or Clustered index in SQL Server aren’t necessarily a bad thing.  But sometimes they are scanned because the Query Optimizer (“Optimus” here at Dallas DBAs HQ) cannot find a suitable index to use.  Sometimes it’s just because the table is tiny.  Either way, it’s worth investigating WHY all the pages are being read…especially if you are having performance issues or memory pressure.

Just for fun, I took the query found in this DBA StackExchange Answer and modified it to fit a client’s need.   Decided to share:


-- base query provided by Jonathan Fite:
-- https://dba.stackexchange.com/questions/210341/number-of-full-table-scans
-- Added sys.partitions for rowcount, Impact computed column, Where conditions and Sorting
-- Kevin3NF, https://dallasdbas.com

SELECT S.name AS SchemaName
  , O.name AS ObjectName
  , I.name AS IndexName
  , I.type_desc AS IndexType_Desc
  , IUS.last_user_scan
  , IUS.user_scans
  , P.rows
  , IUS.user_scans * P.rows as [Impact]
FROM sys.objects AS O 
  INNER JOIN sys.schemas AS S ON S.schema_id = O.schema_id
  INNER JOIN sys.indexes AS I ON I.object_id = O.object_id
  INNER JOIN sys.dm_db_index_usage_stats AS IUS ON IUS.index_id = I.index_id AND IUS.object_id = I.object_id AND IUS.database_id = DB_ID() --limit to the current DB; this DMV is instance-wide and object_ids can repeat across databases
  INNER JOIN sys.partitions P ON P.object_id = I.object_id AND P.index_id = I.index_id
WHERE 1=1 
  and I.type IN (0,1) --0=HEAP, 1=CLUSTERED (table)
  and IUS.user_scans > 5
  and P.rows > 1000
Order by 
--IUS.user_scans desc
--P.rows Desc
IUS.user_scans * P.rows DESC


The top rows are very likely good candidates for an index review.

Edit as you see fit for your environment, especially the WHERE clause. Please leave the headers.
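Once a table floats to the top, the next question is which queries are doing the scanning. One hedged way to start (assuming the plans are still in cache – 'MyBigTable' below is a placeholder for your actual table name) is to search the plan cache:

```sql
-- Sketch: find cached statements that reference a given table, heaviest readers first.
-- Plan cache searches can be expensive - run in Dev or off-hours first.
SELECT TOP 20
       qs.execution_count
     , qs.total_logical_reads
     , qs.last_execution_time
     , SUBSTRING(st.text, 1, 200) AS query_text_start
FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE '%MyBigTable%'   --placeholder: your scanned table here
ORDER BY qs.total_logical_reads DESC;
```

A simple LIKE on the text will get false positives (comments, similar names), but as a starting point it usually surfaces the repeat offenders quickly.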

Thanks for reading.

Kevin3NF

 

Filed Under: Uncategorized

SQL Saturday Dallas, 2018

June 4, 2018 by Kevin3NF

 

Quacky rides bikes!

 

SQL Saturday Dallas #734 is a wrap!

I’m still tired, 2 weeks later!

This is my 7th or 8th SQL Saturday as an attendee.  3rd as a pre-con presenter.  5th delivering a Saturday session, 1st as a sponsor and 1st as a co-organizer.

And it was a blast.

Months of prep work went into getting things ready…venue change, sponsor chasing, speakers, goodies for speakers and attendees, pre-con stuff, after-party, speaker dinner…you name it, I at least read an email about it.  This was my first time around, so I was following the lead of veterans on the NTSSUG board.  We had over 1000 people register…I don’t recall the final tally of actual attendees, but the place was packed all day!

My pre-con (DBA Fundamentals) went really well, with a record number of attendees. By design this class is usually small and intimate.   The BI session next door had 65 or so!  I’ve done this pre-con 3 times now, and this was the first time I actually had a technical issue (fonts went wacky) and a brain cramp in the middle.  But, I recovered and people (hopefully) learned something they can take back with them to the office.

I also did my Disaster recovery session on Saturday, which I think went really well, and got a bonus of meeting someone I knew through a previous client and only by phone.  I always think you’ve done your session well when people line up to chat with you afterwards:

Either something was really funny, or my backpack has way too much stuff in it…

 

Our Dallas DBAs sponsor table went really well…except everyone seemed to think I was raffling off my personal racing bike.  Lots of folks came by for the raffle and the free t-shirts (100 only…gone in 90 minutes).  Some even stopped to ask what we do, giving me the chance to promote our Apprentice program.  Thanks to everyone that came by!  And also to Richardson Bike Mart for providing the $150 gift card for our raffle.

Not the raffle item….just clickbait apparently:

The entire staff of Dallas DBAs, in official uniform

 

A very special shout out to my good friend and graphics/web guy, Jeff Miller of Igniss Images for shooting this event as a free service to the SQL Server Community.  See the pics!

Thanks to all the attendees for coming, and to the Sponsors that make it possible!

Kevin3NF

 

Filed Under: Apprentice, Cycling, Dallas DBAs, HADR, SQLSaturday, Training

T-SQL Tuesday: Giving Back

May 8, 2018 by Kevin3NF

T-SQL Tuesday is a monthly blog party for the SQL Server community. It is the brainchild of Adam Machanic (b|t) and this month’s edition is hosted by Riley Major (b|t) who has challenged us to “Pick some way you can help our community“…but not just give a few sentences on how you want to help, but to actually put feet, brain cells and dates to it.

Community service is something I’ve been involved in for decades in many different ways.  Within the SQL Server specific community, I got serious about it a couple of years ago when I changed the nature of this blog from “weird stuff I saw at work today” to “here is some info you can use to make your life better today.”  I also started speaking at SQL Saturdays last June in Houston.

Last year, I decided to take this a step further and set up an unofficial “Apprentice” program to help young people get into tech careers that don’t (for whatever reason) have the same opportunities most of us do.   My barrier to entry was low…others not so much.

The 1st Apprentice is ready to become a part-time Junior DBA and I couldn’t be more proud of him for his hard work.  He entered the program as a bit of a test case.  His barrier to entry – zero desire to go to a 4-year school and chase a possibly useless (for him) degree.

I will soon be looking for the next Apprentice.  Will s/he be like the first?  Maybe from a low income area?   Possibly aging out of the Foster Care system?  On the Autism spectrum?  Who knows.

For now, I will be working with one at a time.  I would like to get to the point where I have 2 or 3 that can learn together by working out solutions as a team.   This will help them not only learn Database Administration, but teamwork, trust, communication skills, etc.  Some of them may decide they hate databases but think Security is really cool.  Or Python.  Or Networking.  And that is great!  If they can find a career that they enjoy and provides a stable lifestyle financially, then I’m thrilled to be a step along a glorious path for them.

HOW YOU CAN HELP!

As these young people get to the point that I run out of things to teach them, they need real-world practice. Eventually I will have an “Apprentice” rate/offering on my Services and Pricing page for them to do the grunt work on your servers.  Part-time, with oversight from me or another Sr. Level DBA.  There are caveats and conditions on this I have not sorted out yet, but it’s coming sometime in the next 6-12 months.  Please keep an eye out for my posts here, on Twitter and on the Dallas DBAs LinkedIn Page.

I want your thoughts on this…privately or in the comments as you see fit.

Thanks for reading,

Kevin3NF

 

Filed Under: Apprentice, Beginner, Career, EntryLevel, Training, TSQL2sday

