
DallasDBAs.com

SQL Server Database Consulting


PASS Summit 2018 Game Night

August 16, 2018 by Kevin3NF

Game Night 3.0 is happening at PASS Summit 2018!!


Game night is an annual gathering at PASS Summit aimed at those attendees who are not necessarily the bar-hopping or karaoke sort (not that there is anything wrong with that…).  Or for those who just want to spend some non-technical time with like-minded people in a somewhat quieter setting and make some new friends.

Steve Jones and Andy Warren hosted the first-ever Summit Game Night in 2016, and it was a great success.  I was able to teach a few newbies how to play Spades, along with some other sharks 😉

They handed the reins over to PASS in 2017 and let them do the promotion, coordination with the convention center, etc.  PASS also bought quite a few games, partially influenced by a poll I ran last year.  They also graciously let me be the loud-mouthed spammer of the event on Twitter, Slack and wherever else SQL folks hang out.

IT’S BACK!  2 nights this year!!!

If you want more info about Game Night, or want to give feedback on previous events, PLEASE put something in a comment below and I will make sure you are heard.

This is not a free event, though…there will be a choice of alcoholic or non-alcoholic beverages.  $12 gets you in the door with drink tickets.  And of course the PASS Anti-Harassment Policy applies, so that everyone can have a good, safe time.  This includes the games.  You can bring your own, and are encouraged to bring a favorite…but it must be AHP-appropriate.  Assume there will be random 8-year-olds watching and you should be fine.

Updated list of the games that PASS owns and will provide:

  • 7 Wonders
  • Apples to Apples *
  • Brain Games
  • Carcassonne *
  • Catan
  • Chess *
  • Clue *
  • Codenames
  • Dominoes
  • Dominion
  • Exploding Kittens *
  • Fluxx *
  • Generic card decks *
  • Jenga *
  • Monopoly *
  • Munchkin *
  • Pandemic
  • Scrabble
  • Sequence
  • Stratego *
  • Ticket to Ride *
  • Uno *

* I’ve played this 🙂

Registration links are up:

Wednesday night, 8pm – 10:30pm

Thursday night, 7pm – 10:30pm

Thanks for reading!

Kevin3NF


Filed Under: Uncategorized

T-SQL Tuesday: Code You Would Hate To Live Without

July 10, 2018 by Kevin3NF


T-SQL Tuesday is a monthly blog party for the SQL Server community. It is the brainchild of Adam Machanic (b|t) and this month’s edition is hosted by Bert Wagner (b|t) who has asked us to “write about code you’ve written that you would hate to live without.”

Interesting.  I go into my toolbox of code/scripts every day.  All day long.  Oddly, most of the things I’ve written are simple things I don’t want to re-type, or are client-specific (some going back to 2006 for a custom log shipping troubleshooting process…).  A great many of them are written by others…most often in my case I use things from Adam, Brent, and Pinal.  Some are customized, most not.

Recently, I’ve been getting a lot of performance tuning work, much of which is basically “things are slow…can you fix them?” type of requests.  Most experienced DBAs know that there are a few gazillion factors that can lead to this request, and I won’t re-hash them here.

Let’s assume that we’ve optimized or eliminated the server, disks, network, etc., and we’re now looking at SQL code and everyone’s favorite – indexes.

I have two scripts I use that give me a quick overview of the type of work SQL Server is doing.   These complement each other, and are used AS A STARTING POINT to locate the low-hanging fruit that can be causing excessive work for the disks and memory of the server.

First off – let’s look at any HEAPs and Clustered indexes that are getting full scans (this is not perfect info…just a starting point, remember):

-- base query provided by Jonathan Fite:
-- https://dba.stackexchange.com/questions/210341/number-of-full-table-scans
-- Added sys.Partitions for rowcount, Impact computed column, Where conditions and Sorting
-- Kevin3NF, https://dallasdbas.com

exec sp_MSforeachdb
'use [?]

--for just one database, choose the database context and simply highlight and run the below query
SELECT
	db_name() 
	, S.name AS SchemaName
    , O.name AS ObjectName
    , I.name AS IndexName
    --, I.type AS IndexType
    , I.type_desc AS IndexType_Desc
    , IUS.last_user_scan
	, IUS.user_scans
	, p.rows
	, IUS.user_scans * p.rows as [Impact]
FROM sys.objects AS O 
    INNER JOIN sys.schemas AS S ON S.schema_id = O.schema_id
    INNER JOIN sys.indexes AS I ON I.object_id = O.object_id
    INNER JOIN sys.dm_db_index_usage_stats AS IUS ON IUS.index_id = I.index_id AND IUS.object_id = I.object_id
	JOIN sys.partitions p on p.object_id = I.object_id and p.index_id = I.index_id
WHERE 1=1 
	and I.type IN (0,1) --0=HEAP, 1=CLUSTERED (table)
	and IUS.user_scans > 5
	and p.rows > 1000
Order by 
	--IUS.user_scans desc
	--p.Rows Desc
	IUS.user_scans * p.rows DESC
'

When I run this in its entirety, I get a section of results for every database, for any scans returned by sys.dm_db_index_usage_stats. I can quickly scroll up and down that result set and eyeball any massive numbers for the database(s) in question, or all of them for the server as a whole.  On a sample server, I found 2 databases with crazy numbers, so I copied the results for those two into Excel and sorted by the Impact column (which is just user_scans * rows):

Results worth Investigating

As you can see, on DatabaseB there is a 27-million-row table that gets scanned…a lot.  I know some of this is a daily copy from one DB to another…but not that many, so I need to go find out what else is causing these.  While I’ve been writing this, it has been scanned 2,000 more times.  On DatabaseA, I have 3 tables I want to look into…two of them due to the sheer size of the tables and one because it is hit so frequently.  In the SQL query results, the last_user_scan column shows the current date and time for all of these…copying to Excel didn’t give me the time, just the date, but that is just a formatting issue.

Note that you can change the filters in the WHERE clause to suit your needs.
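For example, on a busy server you might tighten those filters to only large tables with recent, frequent scans.  The thresholds below are made up purely for illustration – pick ones that fit your environment:

```sql
-- Example only: tighter WHERE filters; every threshold here is illustrative
WHERE 1=1 
	and I.type IN (0,1)                                   --0=HEAP, 1=CLUSTERED (table)
	and IUS.user_scans > 100                              --raise the noise floor
	and p.rows > 100000                                   --large tables only
	and IUS.last_user_scan > DATEADD(DAY, -7, GETDATE())  --recent activity only
```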


My next step in this process is to run a query I based on Pinal Dave’s (b|t) Missing Index query, which simply gets the top 5 missing indexes for each database, ordered by Impact:

-- Get the top 5 missing indexes for each database on an instance
-- and output to a format that can be read as-is, or pasted to Excel for filtering, etc.

-- Thanks to Pinal Dave at www.sqlauthority.com for providing the base query this is built on.
-- Pinal is a good guy and a great teacher...you should hire him for your performance issues
-- or me, lol

-- Test in your Dev environment before trying this in Prod.  
-- I make no guarantees on performance or SQL version compatibility

-- Filtering out various things is done in the WHERE clause (impact, last_seek, etc.)


-- Create a temp table to hold the results:
CREATE TABLE [dbo].[#MI](
	[dbname] [nvarchar](128) NULL,
	[object_id] [nvarchar](255) NOT NULL,
	[improvement_measure] [bigint] NULL,
	[create_index_statement] [nvarchar](4000) NULL,
	[group_handle] [bigint] NOT NULL,
	[unique_compiles] [bigint] NOT NULL,
	[user_seeks] [bigint] NOT NULL,
	[user_scans] [bigint] NOT NULL,
	[last_user_seek] [datetime] NULL,
	[last_user_scan] [datetime] NULL,
	[avg_total_user_cost] [float] NULL,
	[avg_user_impact] [float] NULL,
	[system_seeks] [bigint] NOT NULL,
	[system_scans] [bigint] NOT NULL,
	[last_system_seek] [datetime] NULL,
	[last_system_scan] [datetime] NULL,
	[avg_total_system_cost] [float] NULL,
	[avg_system_impact] [float] NULL,
	[database_id] [int] NOT NULL	
) 

-- Run through each db on the instance and record results to the temp table
-- yes, sp_MSforeachdb is both undocumented and unsupported, but for this it works just fine

exec master.sys.sp_MSforeachdb
'  use [?];
INSERT #MI
SELECT top 5
	db_name(mid.database_id) as [dbname],
	Object_name(mid.[object_id]),
	Cast(((migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans)))as INT) AS [improvement_measure],
	''CREATE INDEX [missing_index_'' + CONVERT (varchar, mig.index_group_handle) + ''_'' + CONVERT (varchar, mid.index_handle)
	+ ''_'' + LEFT (PARSENAME(mid.statement, 1), 32) + '']''
	+ '' ON '' + mid.statement
	+ '' ('' + ISNULL (mid.equality_columns,'''')
	+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN '','' ELSE '''' END
	+ ISNULL (mid.inequality_columns, '''')
	+ '')''
	+ ISNULL ('' INCLUDE ('' + mid.included_columns + '')'', '''') AS create_index_statement,
	migs.*, 
	mid.database_id
FROM sys.dm_db_missing_index_groups mig
	INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
	INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE 1=1
	--and migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 40000 --uncomment to filter by improvement measure
	--and migs.user_seeks > 250									-- change this for activity level of the index, leave commented out to get back everything
	and mid.database_id > 4										--Skip system databases
	and db_name(mid.database_id) = ''?''						--Get top 5 for only the current database
ORDER BY 
	migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
'

--now that the table is populated, show me the data!
--no data for a DB means no index recommendations.

Select
	[dbname] 
	,[object_id] 
	,[Improvement_Measure]
	,[create_index_statement]
	,[user_seeks]
	,[user_scans]
	,[last_user_seek]
	,[last_user_scan]
	,[avg_total_user_cost]
	,[avg_user_impact]
From #MI
Where 1=1
Order by [dbname],[improvement_measure] desc

--Clean up your mess, you weren't raised in a barn!
Drop Table #MI

Again, use the WHERE clause(s) to filter as you see fit.  In my case I see only one of the tables from the FULL SCAN query here, so I already know I have a good shot at finding at least a valid place to start looking:

I use the Missing Index script all the time, in a variety of ways.  I also check for unused indexes, to see if newly created ones are actually in play and being used by the Query Optimizer.
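A minimal sketch of that unused-index check (this is my own variation, not a specific published script – and remember that these DMV counters reset when SQL Server restarts, so don’t drop anything on this evidence alone):

```sql
-- Sketch: non-clustered indexes with no reads recorded since the last restart.
-- Counters in sys.dm_db_index_usage_stats reset on a SQL Server restart,
-- so treat this as a starting point, not a drop list.
SELECT
	  o.name AS TableName
	, i.name AS IndexName
	, ISNULL(ius.user_seeks, 0)   AS user_seeks
	, ISNULL(ius.user_scans, 0)   AS user_scans
	, ISNULL(ius.user_lookups, 0) AS user_lookups
	, ISNULL(ius.user_updates, 0) AS user_updates  -- writes still pay the maintenance cost
FROM sys.indexes AS i
	INNER JOIN sys.objects AS o ON o.object_id = i.object_id
	LEFT JOIN sys.dm_db_index_usage_stats AS ius
		ON ius.object_id = i.object_id
		AND ius.index_id = i.index_id
		AND ius.database_id = DB_ID()
WHERE o.is_ms_shipped = 0
	AND i.type = 2  -- non-clustered only
	AND ISNULL(ius.user_seeks, 0) + ISNULL(ius.user_scans, 0) + ISNULL(ius.user_lookups, 0) = 0
ORDER BY ISNULL(ius.user_updates, 0) DESC
```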

And that’s all I have to say about that…except: USE THESE AT YOUR OWN RISK AND LEARN WHAT THEY DO FIRST! 🙂

Thanks for reading!

Kevin


How I made money off Brent Ozar

June 29, 2018 by Kevin3NF

Dallas DBAs is a small database administration consulting firm. We don’t have a sales staff. It’s just me, a Junior DBA, and my wife to inject reality into my delusions of grandeur from time to time.

We live and die by referral business. Nothing beats a happy client sending you something to put on the site, or post on LinkedIn. Even better when they tell someone else.

So how did I make money off Brent Ozar…one of the best known SQL Server DBA/performance dudes in the world?  Well…without him knowing it of course!

Some months ago I got an email from my friend Russ.  A recruiter had approached him about a 4-6 week full-time contract cleaning up some SQL Servers.  Russ is more on the Business Intelligence side of the SQL Server world.  I’m a pure DBA, so he forwarded the email to me.  I didn’t have the bandwidth for a 40-hour week, but most DBA gigs I see are not actually working a full 8 hours a day, so I fired off an email to the recruiter offering remote, part-time as an option.  I figured that would be the end of it.  I told him “Cleaning up SQL Server messes is my specialty, and I can probably do it 3x faster than most DBAs.  Plus I won’t milk the client for extra hours.”

Within an hour, the recruiter was setting up a phone call with the client. That day. They led with “We’ve read your blog, so we aren’t going to ask you technical questions.” (that was very cool…)

Essentially, they told me they had signed up for SQL ConstantCare from Brent Ozar Unlimited, which in a nutshell reviews your servers, looks for bad things, and tells you what to fix.  I knew of this product’s existence prior to the call, but no details. I told them I was well aware of the type of tools that Brent and team had created, and had been to a number of talks and classes from Brent, so I had a pretty good idea what they were looking at. I also told them that if they were basically asking for someone to come in and apply best practices that made sense for them, then I was the guy.

Done deal, send me a contract Mr. Recruiter Guy.

All of this happened in one business day.  From Brent to Client to Recruiter to Russ to me to Recruiter to Client to Me.

Total billing so far for what the client thought was a 4-6 week effort?  Right at 40 hours, done in my spare morning hours I keep open for simple tasks on smaller clients.

If you have a bunch of servers, no SQL DBA and you need to know where you stand…ConstantCare is a great idea.  Then ping me and I’ll fix them for you.

Update: This same client has been on a weekly minimum hours contract with me since October 2018 including on-call, which they have never needed to use.

Thanks for reading!

Kevin3NF



Impact of FULL Scans in your SQL Server DB

June 22, 2018 by Kevin3NF

Full scans of a HEAP or Clustered index in SQL Server aren’t necessarily a bad thing.  But sometimes they are scanned because the Query Optimizer (“Optimus” here at Dallas DBAs HQ) cannot find a suitable index to use.  Sometimes it’s just because the table is tiny.  Either way, it’s worth investigating WHY all the pages are being read…especially if you are having performance issues or memory pressure.

Just for fun, I took the query found in this DBA StackExchange Answer and modified it to fit a client’s need.   Decided to share:


-- base query provided by Jonathan Fite:
-- https://dba.stackexchange.com/questions/210341/number-of-full-table-scans
-- Added sys.Partitions for rowcount, Impact computed column, Where conditions and Sorting
-- Kevin3NF, https://dallasdbas.com

SELECT S.name AS SchemaName
  , O.name AS ObjectName
  , I.name AS IndexName
  , I.type_desc AS IndexType_Desc
  , IUS.last_user_scan
  , IUS.user_scans
  , P.rows
  , IUS.user_scans * P.rows as [Impact]
FROM sys.objects AS O 
  INNER JOIN sys.schemas AS S ON S.schema_id = O.schema_id
  INNER JOIN sys.indexes AS I ON I.object_id = O.object_id
  INNER JOIN sys.dm_db_index_usage_stats AS IUS ON IUS.index_id = I.index_id AND IUS.object_id = I.object_id
  INNER JOIN sys.partitions AS P ON P.object_id = I.object_id AND P.index_id = I.index_id
WHERE 1=1 
  and I.type IN (0,1) --0=HEAP, 1=CLUSTERED (table)
  and IUS.user_scans > 5
  and P.rows > 1000
Order by 
  --IUS.user_scans desc
  --P.rows Desc
  IUS.user_scans * P.rows DESC


The top rows are very likely good candidates for an index review.
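If you want to see which cached queries are actually driving the scans on a suspect table, one option (a rough sketch – ‘MyBigTable’ is a placeholder name, and a LIKE search of query text is crude but fine for a first look) is to check the plan cache:

```sql
-- Sketch: find cached statements that reference a suspect table.
-- 'MyBigTable' is a placeholder; replace it with the table from your results.
SELECT TOP 10
	  qs.execution_count
	, qs.total_logical_reads
	, st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
	CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%MyBigTable%'
ORDER BY qs.total_logical_reads DESC
```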

Edit as you see fit for your environment, especially the WHERE clause. Please leave the headers.

Thanks for reading.

Kevin3NF



DBA Fundamentals class Promo video

February 3, 2018 by Kevin3NF

Quick note…

If you are considering attending my day-long DBA Fundamentals class at a SQL Saturday, or any other venue, check out this video to help you decide if it is the right class for you.  I want to make sure you spend your time and money well!

You can always email me questions as well: [email protected]

— Kevin


SQL Server: PLE > 300 is nonsense

November 20, 2017 by Kevin3NF

I’ve been hearing throughout my entire DBA career that 300 seconds is a good number for PLE (Page Life Expectancy) to stay above.

Paul Randal calls this “utter nonsense“.  If anyone would have the right to say that, it’s Paul.

There was a time when this MIGHT have been a useful number…but the hardware has changed so much that it’s no longer so.  I “heard” that Paul actually made up that number just for documentation purposes…but that may just be a fun story.  Ask Paul sometime.

Go here to find a good number: Page Life Expectancy isn’t what you think
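As a rough worked example of the kind of math discussed over there – the (buffer pool GB / 4) * 300 rule of thumb is my recollection of the suggestion in that post, so verify it against the source – you can scale a baseline to your own buffer pool:

```sql
-- Rough sketch: scale a PLE floor to the size of YOUR buffer pool,
-- using a (buffer pool GB / 4) * 300 rule of thumb.
-- Counting sys.dm_os_buffer_descriptors can be slow on very large servers.
;WITH bp AS (
	SELECT COUNT(*) * 8.0 / 1024 / 1024 AS BufferPoolGB  -- 8KB pages -> GB
	FROM sys.dm_os_buffer_descriptors
)
SELECT
	  BufferPoolGB
	, (BufferPoolGB / 4.0) * 300 AS SuggestedPLEFloorSeconds
FROM bp
```

For example, a 64 GB buffer pool works out to (64 / 4) * 300 = 4,800 seconds – a long way from 300.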

While you are there, check out all of the posts that start with “SQL Server DBA Myth a Day” and see how many you thought were gospel truth 🙂

Thanks for reading,

Kevin3NF


