What are you most concerned about in your database platforms?
Please pick from the list below, and expand on your choice in the comments if you like:

Thanks for reading!
We’ve all heard it before, in pretty much any job, anywhere. In our DBA slice of the IT world, it’s very relevant to how we manage SQL Servers and the databases on them, from scripts that check for Bad Things™ to writing efficient code.
In this video, I show a simple way to get SQL Server to generate scripts you can run. In this case, I needed to run the same command against a large number of databases. Running it manually, or worse, doing it in the GUI, is just not an option. Enjoy!
The code I use as a starter script:

-- Don't run this if you don't understand it 100%
SELECT 'ALTER DATABASE [' + [name] + '] SET PAGE_VERIFY CHECKSUM, RECOVERY FULL, AUTO_SHRINK OFF;'
FROM master..sysdatabases  -- legacy view; works on SQL 2000 and up
WHERE dbid > 4;            -- skip the system databases

Note that each ALTER DATABASE statement stands on its own here; a batch separator (GO) on the same line as the statement would not parse, so the semicolon does the job.
Thanks for reading!
This is a guest post by another friend of Dallas DBAs – Brendan Mason (L|T)
Upgrading and migrating databases can be a daunting exercise if it’s not something you practice regularly. As a result, upgrades routinely get dismissed as “not necessary right now” or otherwise put off until there is not really another option. Maybe the physical hardware the instance was running on had a motherboard meltdown, or perhaps the vendor of your ERP software is forcing your hand by no longer supporting that nearly decade-old version of SQL Server. Or maybe you’re running a current version but find yourself along for the ride when the application team decides they’re moving from on-premises to the cloud.
Everyone wants upgrades and migrations to go smoothly, particularly the folks on the business side of the house who control budgets for outside help. As consultants, we get called in to help with these projects a lot. I have personally been involved in hundreds of database migrations as both an employee and a consultant. It has exposed me to a lot of different scenarios, good and bad. In this post, I’m going to talk about the top 3 mistakes I see companies make when it comes to SQL Server upgrades and migrations.
It seems that everyone thinks they need AlwaysOn Availability Groups these days, whether or not they know what Availability Groups actually do or what the alternatives are. Most often, a kick-off meeting for a migration project starts with, “We need you to set up our new Availability Group so that we have multiple copies of the database and never have another outage. We have VMs built and ready for you to install SQL Server on.” This constitutes being proactive to a fault. Sometimes the entire architecture needs to be redone when we tell them that Availability Groups won’t solve their particular use case or give them what they were led to believe, or that their budget simply won’t accommodate Enterprise Edition. Getting too far ahead of yourself can also cost you extra money. I have seen companies go out and buy physical hardware for their new environment, only to discover during our conversation that the processors they paid extra for have more cores than the SQL Server Standard Edition instance they want to use will be able to utilize. It adds insult to injury when I tell them that Microsoft will expect them to license all those cores for SQL Server even when the lone instance won’t be using them all.
In a recent example of a pre-determined solution that was incompatible with the upgrade goals, a client wanted to perform an in-place upgrade of their SQL Server 2012 instance running on Windows Server 2012. They didn’t want to upgrade the operating system and had correctly noted that SQL Server 2019 was not supported on Windows Server 2012. Based on this, they had decided that 2017 Standard would work for them and they just wanted some outside expertise to help with the upgrade process. It wasn’t until we were days away from the upgrade that they shared that Transparent Data Encryption (TDE) was a must-have feature that they needed to enable post-upgrade. This is an easy misinterpretation for someone not well versed in SQL Server editions and features to make. TDE had been an Enterprise Edition feature until 2019 when it was finally included with Standard. The client had been looking at the edition comparison for 2019 and didn’t realize that TDE was not available in Standard edition for the earlier versions.
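Surprises like that TDE one are avoidable with a quick inventory before an edition is chosen. One lightweight check is the sketch below: `sys.dm_db_persisted_sku_features` lists edition-restricted features a database already depends on (run it in each user database; an empty result means no edition blockers for that database).

```sql
-- Edition-restricted features persisted in the current database
-- (empty result set = nothing here blocks a move to a lower edition)
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;

-- And confirm what you're actually running today
SELECT SERVERPROPERTY('Edition') AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version;
```

Neither query changes anything, so both are safe to run on a production instance during planning.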
Another common one I see is the decision being made up front to use a database Platform-as-a-Service (PaaS) solution. I’m a big fan of PaaS in general, but there can be a lot of gotchas – especially when it involves migrating a legacy database.
Remember, we’re here to help guide you during the planning phases, not just the execution, so I recommend engaging early in the process. The more details you share about what problems you’re trying to solve and what you want to get out of your upgrade/migration project, the more likely it is that you’ll land with a cost-effective solution that does exactly what you need.
You can do everything right as far as sizing your new environment and choosing the right SQL Server edition to meet your requirements, but if you don’t actually test before migrating, you’re taking a huge risk. There is almost always a forgotten dependent application or an old stored procedure that uses an obscure piece of syntax that’s no longer supported. There are also changes made to the database engine with each new version that usually improve query performance, but occasionally cause regressions for certain queries and workloads. If the testing plan is to just “click around” and make sure the application still functions, you can easily miss issues that can escalate under full user load.
Part of testing should be collecting baseline performance metrics. You can’t effectively compare performance of the upgraded databases without a baseline from the source. You can and should also take this opportunity to perform some basic performance tuning prior to upgrading. There have been several cases where I have been able to recommend a reduction in the number of CPU cores and associated cost for the migration just by looking at accurate baseline performance metrics and/or making a few minor performance tweaks.
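As one lightweight example of a baseline artifact, you can snapshot cumulative wait statistics before the migration and again afterward for comparison. A minimal sketch; the destination table name is just an illustration:

```sql
-- Snapshot cumulative wait stats into a baseline table
-- (dbo.BaselineWaitStats is a hypothetical name; pick your own)
SELECT GETDATE()            AS capture_time,
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
INTO   dbo.BaselineWaitStats
FROM   sys.dm_os_wait_stats;
```

These counters are cumulative since the last service restart, so capture them at a known point in time and diff snapshots rather than reading single values in isolation. A real baseline should also cover CPU, I/O, and key query durations, but even this much beats having nothing to compare against.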
Another often overlooked part of testing is the usual myriad of connected services and applications. Database Mail, SSIS, SSRS, linked servers, etc. all need to be validated during testing to avoid surprises at go-live.
Note that a solid testing plan is not a replacement for an equally solid rollback plan. You don’t want to be trying to come up with one on the fly. Once a SQL Server database is upgraded to a newer version, you can’t back it up and restore it back to the older version. That means that if you discover a major issue post-upgrade, you need to either work through it, roll back to the database backup that was taken immediately prior to upgrading, or have a process in place to reconcile the deltas.
Every new version of SQL Server comes with improvements to performance, security, and functionality. You’re paying for these things, so you might as well use them, right? Unfortunately, clients can often be reluctant to take full advantage of these enhancements as part of an upgrade project. Those who wait to upgrade until the SQL Server version they’re running is about to go out of support tend to just want to get upgraded with as little change as possible. However, you’re missing out on a lot of benefits when you take that approach. Everyone will have a slightly different situation that dictates which new features warrant trying out, but there are a few general recommendations I have for getting more value from your upgraded SQL Server instance:
Query Store – When upgrading from a SQL Server version prior to 2016, I suggest enabling Query Store after the upgrade. It’s a valuable tuning tool (the so-called “flight recorder”) and you don’t want to wait until you really need it to turn it on.
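Enabling Query Store is a small change per database. A minimal sketch, assuming a hypothetical database named [YourDb] on SQL Server 2016 or later:

```sql
-- Turn on Query Store in read/write mode ([YourDb] is a placeholder)
ALTER DATABASE [YourDb] SET QUERY_STORE = ON;
ALTER DATABASE [YourDb] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
```

The defaults are reasonable to start with; retention and capture settings can be tuned later once you see how much history you actually need.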
Compatibility level – In order to take advantage of all the latest enhancements and Query Optimizer fixes, your databases should be at the latest compatibility level. Leaving them at their legacy setting post-upgrade is the “good enough” approach, but if you’re doing proper testing, you might as well test changing the compatibility level while you’re at it. There are tools that will analyze database code and queries and identify incompatibilities, so there’s no good excuse not to at least test this. Unlike upgrading the database itself, changing the compatibility level is not a one-way operation, and it’s trivial to go back if necessary. Bonus points if you enable Query Store prior to changing the compatibility level – it can help identify performance regressions and target individual queries that might need attention. Just always be sure to follow vendor requirements if the database is used by third-party software.
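Checking and changing the level is equally simple. In this sketch, 150 assumes a SQL Server 2019 target and [YourDb] is a placeholder:

```sql
-- See where every database on the instance sits today
SELECT name, compatibility_level
FROM sys.databases;

-- Move one database to the SQL Server 2019 level (150);
-- reversible if testing turns up a regression
ALTER DATABASE [YourDb] SET COMPATIBILITY_LEVEL = 150;
```

Because the change takes effect immediately and can be reverted just as quickly, it fits naturally into the same test cycle as the rest of the upgrade validation.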
Keep in mind that I’m not advocating that you do these things on go-live night as part of the actual upgrade. I’ve simply witnessed too many upgrades that ended with the database online and passing all the smoke tests, but with no further effort devoted to maximizing value gained from the upgrade.
There you have it – those are the top 3 mistakes I see during upgrades and migrations. This is by no means comprehensive, and I have purposely left out some especially cringeworthy experiences to protect the guilty. Upgrading versions or moving to new hardware or platforms doesn’t have to be painful. In fact, with good processes and procedures it should be so predictable and straightforward that it becomes a regular part of keeping your systems up to date.
ED: Please feel free to leave some comments in the ummm…comments section!
We had a major upgrade to our main application recently, so I thought I would share some of the “thoughts from the aftermath.”
The good: We gave the application back to the business 1/2 of a business day earlier than expected. Win.
The bad: We were hoping to give it back a full day earlier.
The ugly: We wasted a full day having to roll back and start over.
Some lessons learned:
Those are the big items.
This post was inspired by Tim Mitchell’s post today: Studying Failures
Feel free to share your deployment/upgrade brilliance or derp moments in the comments below.
Kevin3NF
We are doing a major upgrade this weekend, so like any good DBA, I planned for a full backup the night before and need the ability to quickly restore if the upgrade goes sideways and has to be rolled back.
The catch is that there are close to 300 identical databases involved.
This is easy enough to do if you know where info is stored in MSDB related to the last backup.
I’ll just leave the code I used right here for your amusement:
Use MSDB;
go

--generate the list
Select max(backup_start_date) as Backup_Date,
       database_name,
       Max(physical_device_name) as physical_device_name
Into #lastFullBackup
From msdb..backupset bs
Join msdb..backupmediafamily bmf
    on bs.media_set_id = bmf.media_set_id
Where 1=1
  and backup_start_date > getdate() - 1
  and type = 'D'   -- full database backups only
Group By database_name
Order By database_name

--Create the restore commands
Select 'restore database [' + database_name + '] From disk = '''
       + physical_device_name + ''' With replace, recovery'
From #lastFullBackup
Where database_name like '%MyDB%[0-9]'   -- database1, database2, etc.

--Cleanup
Drop table #lastFullBackup
There are probably much easier or prettier ways to do this, but I’m only 2 cups of coffee in thus far, and one of them got cold…
If any of the above doesn’t make sense, please drop a comment. It works on my system and should be valid for SQL 2000 – 2016…the system tables referenced haven’t changed.
Have a bright, shiny day full of puppies or muscle cars….whichever you prefer!