Sometimes it's good to sit back, listen, nod, and hear what is being said before speaking. Actually, that is almost always the best idea.
Refreshing the Dev environment
I started a new contract recently. This is a global company, but I am on a relatively small product team within it. 10 developers, 3 admins (SQL, O/S, Storage), 2 Business Analysts.
The company has a formal Change Management process, which is great. However, this team manages and operates outside of that structure…not so good. Devs have sysadmin rights on production. For now.
The other DBAs and I are working on all of the things you would expect to be missing from this sort of environment…a consistent backup strategy, documentation, proper security, etc.
As with most places, our Developers have a Dev copy of the database and are told not to touch the Pre-Prod version. But since we are test-loading data there, Dev is way out of date. Also, there are objects in Dev that have not been moved to Pre-Prod yet. So they regularly point their queries, procedures, and even the website at the PP database. And then forget what they are connected to after lunch.
This makes for interesting hallway conversations!
The solution, of course, is to refresh the Dev db from Pre-Prod without losing any Dev code or users.
DBAs that have been around for a while know where I am going with this…
1. Backup the Production (or Pre-prod in my case) database. Or pull from your overnight backup. You DO have an overnight backup, right? RIGHT?!?!? 😉
2. Restore that backup to the Dev server as MyDB_Refresh:
Get the logical and physical file names from the backup, then restore using WITH MOVE so the restored files do not collide with the existing Dev database files.
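A minimal sketch of steps 1 and 2. All database names, logical file names, and paths here are placeholders; take the real logical names from the RESTORE FILELISTONLY output:

```sql
-- Step 1: back up Pre-Prod (or grab last night's backup file)
BACKUP DATABASE MyDB
    TO DISK = N'\\BackupShare\MyDB_Refresh.bak'
    WITH COPY_ONLY, COMPRESSION, STATS = 10;

-- Get the logical and physical file names from the backup
RESTORE FILELISTONLY
    FROM DISK = N'\\BackupShare\MyDB_Refresh.bak';

-- Step 2: restore to the Dev server as MyDB_Refresh, moving the
-- files so they don't collide with the existing Dev database
RESTORE DATABASE MyDB_Refresh
    FROM DISK = N'\\BackupShare\MyDB_Refresh.bak'
    WITH MOVE N'MyDB_Data' TO N'D:\SQLData\MyDB_Refresh.mdf',
         MOVE N'MyDB_Log'  TO N'L:\SQLLogs\MyDB_Refresh.ldf',
         STATS = 10;
```

COPY_ONLY keeps the ad-hoc backup from disturbing your differential backup chain.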
3. Swap the database names so that MyDB_Refresh takes over as the Dev database. The same offline/online code comes in handy when you are doing an emergency restore in prod and the app servers won't let go of their connections.
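A sketch of the swap, assuming placeholder names MyDB, MyDB_Old, and MyDB_Refresh. The OFFLINE/ONLINE pair with ROLLBACK IMMEDIATE is the same trick that severs stuck app-server connections during an emergency restore; do the renames inside your maintenance window so nobody reconnects mid-swap:

```sql
-- Sever any connections that won't let go, then bring Dev back
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MyDB SET ONLINE;

-- With the connections gone, swap the names:
-- old Dev becomes MyDB_Old, the refreshed copy becomes MyDB
ALTER DATABASE MyDB MODIFY NAME = MyDB_Old;
ALTER DATABASE MyDB_Refresh MODIFY NAME = MyDB;
```

Keeping MyDB_Old around for a few days gives you a cheap rollback if a Dev discovers something missing.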
4. Compare a select of tables between Prod and Dev to make sure the data is refreshed, and double-check, using a schema compare, that Dev differs from Prod where you expect it to (likely, since that is what Devs do…).
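A quick sketch for the spot-check (table names here are hypothetical; run the count query against each server, or use fully-qualified names across a linked server for the EXCEPT):

```sql
-- Row counts per table are a cheap first check
SELECT t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.partitions AS p
    ON p.object_id = t.object_id
   AND p.index_id IN (0, 1)   -- heap or clustered index only
GROUP BY t.name
ORDER BY t.name;

-- For a specific table, EXCEPT shows rows present on one side only
SELECT * FROM MyDB.dbo.SomeTable
EXCEPT
SELECT * FROM MyDB_Old.dbo.SomeTable;
```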
This is a basic process, best done in a maintenance window, early in the morning or late at night. And you can automate most of this. I have not tried to automate schema compares other than using SqlPackage.exe from SQL Server Data Tools (SSDT) to "upgrade" a target data-tier application, or to create a drift report. But that is way beyond the scope of this post. Maybe another time 🙂
Please feel free to post up your processes, or enhancements to this one, that some new DBA or Windows admin may be able to use three years from now when they find this post on Google.
Have a fantastic day!
Experienced DBAs and Developers…might as well pass on by this one 🙂
Today, I want to turn SQL into English for (mostly) Developers and (some) DBAs that are asked to develop or tune queries.
No matter where I go, I always run into a Dev who loves to use NOT IN. I get it. It's simple, it makes sense, and it works. You will get back the correct rows.
BUT!!! Let's talk through a simple example of why a JOIN is going to be far more efficient and make your DBAs and managers very happy…
You are a book lover, and want something new to read. So you go to the local Narnes and Boble. When you get there, you find something that sounds familiar, but you can’t remember if it is in your collection at home or not.
I created a table (AllBooks) with a thousand rows and another table (MyBooks) with 49. Both have a BookID (pretend it's the ISBN, so the analogy works).
My two choices for determining what books I do not own…look for books NOT IN my current collection, or JOIN the two lists (yeah…the analogy breaks down a bit here, but stick with me).
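A sketch of the two options, using the AllBooks and MyBooks tables above (column names beyond BookID are assumed):

```sql
-- Option 1: NOT IN -- correct, but compares row by row
SELECT a.BookID
FROM AllBooks AS a
WHERE a.BookID NOT IN (SELECT m.BookID FROM MyBooks AS m);

-- Option 2: LEFT JOIN, keeping only the books with no match in MyBooks
SELECT a.BookID
FROM AllBooks AS a
LEFT JOIN MyBooks AS m
    ON m.BookID = a.BookID
WHERE m.BookID IS NULL;
```

One caveat the analogy hides: if MyBooks.BookID can ever be NULL, the NOT IN form quietly returns no rows at all, which is one more reason to prefer the join.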
Basic stuff, right? Both will return 951 records (books) that I do not own. And, very quickly…because the tables are tiny. Sub-1 second is fast.
The issue here is HOW the rows are compared.
English version now, techy stuff later:
In the first query, this is equivalent to you standing at the bookstore and calling home to have someone check to see if the book in your hand is already in your collection. EVERY time. One by one.
In the second, you got really smart and brought a list with you, which you are comparing to the books on the shelf at the store. You’ve got both “lists” in one place, so it is far more efficient.
Techy stuff/Execution Plans:
Disclaimer…I'm not trying to teach you to read execution plans, or even the deeper meaning of the various operators…I am just going to show the plans behind the queries above. If you DO want to know more than you can ever use, go download Grant Fritchey's e-book. And Follow/Stalk Grant on Twitter. He loves that.
The first query uses Nested Loop operators to work through the two lists:
“For each row in the top (outer) input, scan the bottom (inner) input, and output matching rows”
The second uses a Hash:
“Use each row from the top input (MyBooks) to build a hash table, and each row from the bottom input (AllBooks) to probe into the hash table outputting all matching rows”
These were run at the same time: 85% of the batch cost went to the NOT IN, compared to 15% for the JOIN. And that was for a very small sample of data…1,000 rows and 49 rows. Take it up to tens of thousands, or millions, and you'll start to see significant delays in getting results with NOT IN.
Short story long…if you are writing or tuning queries, this is an easy win.
Go get some fries. And some books….49 is a pitifully small number for someone of your intelligence and ability 🙂
We’ve all seen them.
Login failed for user 'MyDomain\Bob' (password issue)
Login failed for user 'MyDomain\Nancy' (default database issue)
Login failed for user ‘blah, blah, blah…’
But what about Login failed for user 'Insert Chinese characters here'. Reason: An attempt to log on using SQL Authentication failed.
Wait…nobody in the company has a username with Chinese characters. And we don’t have SQL Authentication turned on….
Do not just let these messages pass you by!
These come with a client IP address at the end. I did a ping -a on the one I got, and found:
Somebox.qualys.morestuff.mydomain.com, along with 4 replies. So at least it was a valid internal IP address.
From here, I noticed Qualys in the machine's FQDN. As luck would have it, I was recently on a Vulnerability Management team (elsewhere), and Qualys was the name of one of the scanning tools we used to look for vulnerabilities on the servers, routers, etc.
Now…I can make assumptions, but I’m not going to when it comes to something like this. I checked all the SQL Servers in my area of responsibility and found this on all but one of them.
I wrapped all of the data and findings in a nice package and sent it off to the boss to engage the security team for proper investigation and remediation. I suspect the Qualys server has a problem…this doesn’t look like one of its checks, but I’m not the expert on that.
So the point of this is not to teach you all the ways to troubleshoot Login failed messages, but rather to make sure you are investigating who is failing to log into your SQL Server, and WHY.
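A quick sketch for digging these out of the current ERRORLOG. Fair warning: xp_readerrorlog is undocumented, so treat the parameter meanings as an assumption to verify on your version:

```sql
-- Search the current SQL Server error log for failed login messages;
-- the client IP address appears at the end of each matching entry
EXEC master.dbo.xp_readerrorlog
    0,               -- log file number: 0 = current, 1 = ERRORLOG.1, etc.
    1,               -- log type: 1 = SQL Server error log, 2 = SQL Agent log
    N'Login failed', -- first search string
    NULL;            -- optional second search string
```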
- If you are logging successful logins, quit it. You’re filling the ERRORLOG.
- If you are not logging failed logins, start now. Don’t ignore possible hacking attempts.
- If you are not investigating login failed messages, start now, or you could be setting yourself up for this: “Yeah, that data breach of user and HIPAA information was missed by our DBA.”
That is a serious RGE (résumé-generating event) and CLM (career-limiting move) you don't need.
That is all for today.
Waffle fries for lunch 🙂
The OnPurpose DBA
I was participating in an email interview about my SQL career and opinions and one of the questions was basically “What’s the worst SQL thing you’ve inherited?”
I gave them this list:
Really….all that on one server!
Upgrade your fries broseph!!!