
DallasDBAs.com

SQL Server Database Consulting


Refreshing SQL Server development databases

August 10, 2016 by Kevin3NF

Refreshing the Dev environment

I started a new contract recently.  This is a global company, but I am on a relatively small product team within it.  10 developers, 3 admins (SQL, O/S, Storage), 2 Business Analysts.

The company has a formal Change Management process, which is great.  However, this team manages and operates outside of that structure…not so good.   Devs have sysadmin to production. For now.

I and the other DBAs are working on all of the things you would expect to be missing from this sort of environment…consistent backup strategy, documentation, proper security, etc.

As with most places, our Developers have a Dev copy of the database and are told not to touch the Pre-Prod version.  But, since we are test-loading data there, Dev is way out of date.  Also, there are objects in Dev not moved to Pre-Prod yet.  So, they regularly point their queries, procedures and even the website to the Pre-Prod database.  And then forget what they are connected to after lunch.

This makes for interesting hallway conversations!

The solution of course is to refresh the Dev db from Pre-prod, without losing any Dev code or users.

DBAs that have been around for a while know where I am going with this…

Basic steps:

1. Backup the Production (or Pre-prod in my case) database.   Or pull from your overnight backup.   You DO have an overnight backup, right?   RIGHT?!?!?   😉
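If you are taking a fresh backup rather than pulling the overnight one, a copy-only backup keeps the production backup chain intact. A minimal sketch (the path and filename here are placeholders):

```sql
-- Copy-only backup so the regular backup chain is not disturbed
BACKUP DATABASE MyDB
TO DISK = 'B:\Backup\FULL\MyDB_Refresh.bak'
WITH COPY_ONLY, COMPRESSION, CHECKSUM, STATS = 10;
```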

2. Restore that backup to the Dev server as MyDB_Refresh:

-- Get the logical and physical filenames
RESTORE FILELISTONLY
FROM DISK = 'B:\Backup\FULL\MyDB_FULL_20160810_000012.bak';

-- Restore, not overwriting the existing Dev db
RESTORE DATABASE MyDB_Refresh
FROM DISK = 'B:\Backup\FULL\MyDB_FULL_20160810_000012.bak'
WITH MOVE 'MyDB' TO 'B:\Data\MyDB_Refresh.mdf'
    ,MOVE 'MyDB_log' TO 'B:\Log\MyDB_Refresh_log.ldf'
    ,RECOVERY;

3.  Use SQL Server Data Tools/Visual Studio, or a 3rd party tool such as Red Gate SQL Compare to compare and synchronize the schema in MyDB (Dev copy) to MyDB_Refresh.  SSDT works well for this if you don’t have a license for a vendor tool, or don’t want to download a trial version.  The Refresh database is the Target…
4. Once you have the tables, procedures, USERS, and other objects exactly the same as the old Dev database, you need to rename both databases:
-- Make sure all connections are closed:
USE master;
GO
EXEC sp_renamedb 'MyDB', 'MyDB_Old';
EXEC sp_renamedb 'MyDB_Refresh', 'MyDB';

If you are having trouble getting exclusive use because Devs or apps keep re-connecting, you need to use a 2-step process (run both statements together as a single batch):

-- Take the database offline and kill connections
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- Bring it back online and rename all at once
ALTER DATABASE MyDB SET ONLINE;
USE master;
GO
EXEC sp_renamedb 'MyDB', 'MyDB_Old';

Change the database names if you need to do the same on MyDB_Refresh.  You can use the same offline/online code when you are doing an emergency restore in prod and the app servers won’t let go of their connections.

Compare a selection of tables between Prod and Dev to make sure the data is refreshed, and double-check using schema compare that Dev is different from Prod (likely, since that is what Devs do…).
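A quick sanity check along those lines: compare row counts in the refreshed database against the old Dev copy you just renamed. The table name below is hypothetical; substitute one of your larger tables:

```sql
-- Rough sanity check: the refreshed db should have far more rows than stale Dev
SELECT 'MyDB' AS db, COUNT(*) AS row_cnt FROM MyDB.dbo.SomeBigTable
UNION ALL
SELECT 'MyDB_Old', COUNT(*) FROM MyDB_Old.dbo.SomeBigTable;
```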

This is a basic process, best done in a maintenance window, early in the morning or late at night.  And you can automate most of it.  I have not tried to automate schema compares other than using SqlPackage.exe (part of SSDT) to “upgrade” a target data-tier application, or to create a drift report.   But that is way beyond the scope of this post.  Maybe another time 🙂

Please feel free to post up your processes, or enhancements to this one, that some new DBA or Windows admin may be able to use 3 years from now when they find this post on Google.

Have a fantastic day!

Kevin3NF


JOIN vs. NOT IN

August 9, 2016 by Kevin3NF

Experienced DBAs and Developers…might as well pass on by this one 🙂

Today, I want to turn SQL into English for (mostly) Developers and (some) DBAs that are asked to develop or tune queries.

No matter where I go, I always run into a Dev that loves to use NOT IN.   I get it.  It's simple, it makes sense, and it works.  You will get back the correct rows.

BUT!!!   Let's talk through a simple example as to why a JOIN is going to be far more efficient and make your DBAs and managers very happy….

Scenario:

You are a book lover, and want something new to read.  So you go to the local Narnes and Boble.   When you get there, you find something that sounds familiar, but you can’t remember if it is in your collection at home or not.

The Setup:

I created a table (AllBooks) with a thousand rows and another table (MyBooks) with 49.  Both have a BookID (pretend it's the ISBN, so the analogy works).

My two choices for determining what books I do not own…look for books NOT IN my current collection, or JOIN the two lists (yeah…the analogy breaks down a bit here, but stick with me).

The Queries:
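The original queries were posted as screenshots, which did not survive here. A sketch of what they likely looked like against the two tables described above (column names assumed):

```sql
-- Query 1: NOT IN
SELECT BookID
FROM AllBooks
WHERE BookID NOT IN (SELECT BookID FROM MyBooks);

-- Query 2: JOIN (a left anti-join: keep only rows with no match)
SELECT a.BookID
FROM AllBooks AS a
LEFT JOIN MyBooks AS m ON a.BookID = m.BookID
WHERE m.BookID IS NULL;
```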

Basic stuff, right?   Both will return 951 records (books) that I do not own.  And, very quickly…because the tables are tiny.   Sub-1 second is fast.

The issue here is HOW the rows are compared.

English version now, techy stuff later:

In the first query, this is equivalent to you standing at the bookstore and calling home to have someone check to see if the book in your hand is already in your collection.  EVERY time.  One by one.

In the second, you got really smart and brought a list with you, which you are comparing to the books on the shelf at the store.   You’ve got both “lists” in one place, so it is far more efficient.

Techy stuff/Execution Plans:

Disclaimer…I’m not trying to teach you to read execution plans or even know the deeper meaning of the various operators…just going to show the plans behind the queries above.  If you DO want to know more than you can ever use, go download Grant Fritchey‘s e-book.   And Follow/Stalk Grant on Twitter.  He loves that.

The first query uses Nested Loop operators to work through the two lists:
“For each row in the top (outer) input, scan the bottom (inner) input, and output matching rows”

The second uses a Hash:
“Use each row from the top input (MyBooks) to build a hash table, and each row from the bottom input (AllBooks) to probe into the hash table outputting all matching rows”

These were run at the same time.  85% of the time was spent on the NOT IN, compared to 15% on the JOIN.   This was for a very small sample of data…1000 rows and 49 rows.   Take this up to tens of thousands, or millions, of rows and you'll start to see significant delays in getting results using NOT IN.

Short story long…if you are writing or tuning queries, this is an easy win.

Go get some fries.  And some books….49 is a pitifully small number for someone of your intelligence and ability 🙂

Kevin3NF


Login Failed for user…

August 4, 2016 by Kevin3NF

We’ve all seen them.

Login failed for user 'MyDomain\Bob' (password issue)
Login failed for user 'MyDomain\Nancy' (default database issue)
Login failed for user 'blah, blah, blah…'

But what about Login failed for user '<insert Chinese characters here>', Reason: An attempt to log on using SQL Server Authentication failed.

Wait…nobody in the company has a username with Chinese characters.   And we don’t have SQL Authentication turned on….

Do not just let these messages pass you by!

These come with a client IP address at the end.  I did a ping -a on the one I got, and found:

Somebox.qualys.morestuff.mydomain.com, along with 4 replies.   So at least it was a valid internal IP address.

From here, I noticed Qualys in the machine’s FQDN.  As luck would have it I was recently on a Vulnerability Management team (elsewhere), and Qualys was the name of one of the scanning tools we used to look for Vulnerabilities on the servers, routers, etc.

Now…I can make assumptions, but I’m not going to when it comes to something like this.   I checked all the SQL Servers in my area of responsibility and found this on all but one of them.

I wrapped all of the data and findings in a nice package and sent it off to the boss to engage the security team for proper investigation and remediation.  I suspect the Qualys server has a problem…this doesn’t look like one of its checks, but I’m not the expert on that.

So the point of this is not to teach you about all the ways to trouble shoot login failed messages, but rather to make sure you are investigating who is failing to log into your SQL Server and WHY.

  • If you are logging successful logins, quit it.   You’re filling the ERRORLOG.
  • If you are not logging failed logins, start now.   Don’t ignore possible hacking attempts.
  • If you are not investigating login failed messages, start now, or you could be setting yourself up for this:  “Yeah, that data breach of user and HIPAA information was missed by our DBA.”
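As a sketch of how to act on those bullets: you can search the current ERRORLOG for failures, and the login-audit setting itself lives in the registry (2 = failed logins only, 3 = both; a service restart is needed for the change to take effect):

```sql
-- Search the current ERRORLOG (log 0, type 1 = SQL Server log) for failed logins
EXEC xp_readerrorlog 0, 1, N'Login failed';

-- Set login auditing to 'Failed logins only'
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'AuditLevel', REG_DWORD, 2;
```

The same setting is exposed in SSMS under Server Properties > Security > Login auditing.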

That is a serious RGE (résumé-generating event) and CLM (career-limiting move) you don't need.

That is all for today.

Waffle fries for lunch 🙂

Kevin3NF
The OnPurpose DBA


SQL Server 2016 Stretch Database

June 1, 2016 by Kevin3NF

New to SQL 2016 is the ability to send archived data off-premises to MS Azure storage, in the form of a “Stretch Database.”  Sounds like a great idea in theory but do your testing and bust out the calculators before you put production data in the cloud. 
Things I like:
  • Not buying storage, especially on a maxed-out server
  • Easy to query the full dataset across on-prem and Azure stretch
  • Ummm…all editions being supported, instead of Enterprise only, is a good thing
  • Nope.  That's it.
Not fond of:
  • "You are billed for each hour the Stretch database exists, regardless of activity or if the database exists for less than an hour, using the highest rate that applied during the hour."
  • Lowest performance rate is $1.25/hr, or just under $1K/mo.  It only goes up from there.
  • "Stretch Database currently does not support stretching to another SQL Server."  Azure only.
  • Lame/minimal filters…you have to roll your own functions, and they must be deterministic…no "GETDATE() - 30".  The GUI is only slightly better than the horrible nightmare that was Notification Services…
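For reference, rolling your own filter looks something like this. The table and column names are made up for illustration; the function must be schema-bound, deterministic (so a hard-coded cutoff date, not GETDATE()), and is then attached as the filter predicate:

```sql
-- Deterministic inline TVF used as a Stretch filter predicate
CREATE FUNCTION dbo.fn_StretchFilter (@SaleDate datetime2)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
       WHERE @SaleDate < CONVERT(datetime2, '20160101', 112);
GO

-- Apply it to a (hypothetical) Sales table
ALTER TABLE dbo.Sales
    SET (REMOTE_DATA_ARCHIVE = ON (
        FILTER_PREDICATE = dbo.fn_StretchFilter(SaleDate),
        MIGRATION_STATE = OUTBOUND));
```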
I’m pretty sure I could roll my own “stretch” function into a Azure SQL Database, and I’m an admin much more than a developer.
Maybe down the road this will be better, but right now its an expensive alternative to a USB drive from Fry’s, or a NAS/SAN upgrade.
Are/were you planning to use Stretch?  Have a differing opinion?  Let’s hear it!
Kevin3NF


Dudes…really?

February 19, 2016 by Kevin3NF

I was participating in an email interview about my SQL career and opinions and one of the questions was basically “What’s the worst SQL thing you’ve inherited?”

I gave them this list:

  • SQL 2008 R2, RTM
  • Incorrect memory configuration
  • Full recovery model on data that only changes once a week at most
  • ZERO documentation
  • New data is imported to a new table and a UNION ALL is modified to add that table
  • ZERO documentation
  • Stored procedures have no comments, poor formatting, and developer names in the name of the sproc
  • Autogrow is 1MB; data imports are hundreds of MB each
  • Everyone is sysadmin, probably including you…
  • Change control process is intentionally shortcut and ignored on this internal production system
  • Ownership changed to me in December, then was yanked back 3 weeks later with developers overwriting my fixes in prod

Really….all that on one server!

Upgrade your fries broseph!!!

Kevin3NF


SQL 101 Blocking vs. Deadlocking – in English for the Non-DBA

October 30, 2015 by Kevin3NF

Plain ol' English….

Blocking:

You walk into Starbucks and there's 1 guy ordering.  He finishes, then you order.  That's blocking, and it's perfectly normal, expected and acceptable.

You walk into Starbucks and there’s 1 guy ordering for 15 people in his office.  You’re willing to wait, but not forever.  That’s moderate blocking.

You walk into Starbucks and there’s 1 guy ordering, 7 people in line and they are all ordering drinks with names too long to remember….you walk out.  That’s excessive blocking with a failed transaction (you).

Blocking in SQL Server is just transactions that need the same resource lining up to take their turn.  It's by design and is a good thing.   Excessive blocking needs to be fixed.
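To see who is waiting on whom right now, a quick look at the DMVs does the trick (a generic check, not from the original post):

```sql
-- Sessions currently blocked, and who is blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```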

Deadlocking:

You have 2 children.  Each has a toy, and at the same time they each grab for the toy the other one has, but won't let go of the first one.  This is a Deadlock.   SQL Server has a built-in process to resolve this.  They gave it an official name, but I call it "Mom."   Mom steps in and breaks it up…automatically.  This is a code issue much of the time…profile it, trace flag it, whatever…but find and fix your application code.  If it's not your app code, it could be just performance related…tuning or rebuilding some indexes may get you there.  Start HERE for troubleshooting deadlocks.
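"Trace flag it" in practice usually means trace flag 1222 (or the older 1204), which writes deadlock graph detail to the ERRORLOG so you can see the two statements Mom broke up:

```sql
-- Write deadlock graph detail to the ERRORLOG, server-wide
DBCC TRACEON (1222, -1);

-- Verify which trace flags are currently active
DBCC TRACESTATUS (-1);
```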

Does that help?

They are NOT the same thing…Deadlock is bad, Blocking is part of the engine…to a point.

Next up (maybe): Replication and Log Shipping are not the same thing…quit using them interchangeably, especially when you mean “Clustered”

Have a nice day!

Kevin3NF

