
DallasDBAs.com

SQL Server Database Consulting


backup

TSQL2SDay: Backup Basics

December 13, 2016 by Kevin3NF 1 Comment


T-SQL Tuesday is a blog party started by Adam Machanic (b/t) just over five years ago. The first Tuesday of each month a blogger hosts the party and suggests a topic. Then anyone who is interested blogs on that topic. It can be a lot of fun and quite a challenge to blog on a topic you didn’t pick.

This month’s topic is presented by my friend Kenneth Fisher (b/t), who has challenged us to go #EntryLevel and talk about backups.  So I did…in a video just posted to YouTube:

You’ll want to go full screen to see what’s going on.  This video is targeted at non-DBAs, but give it a watch anyway…I can take the heckling!

Filed Under: Accidental DBA, backup, TSQL2sday

Back to Basics: Backups

November 10, 2016 by Kevin3NF 1 Comment

A database backup intro for: Accidental DBAs, Juniors, Developers, Managers, etc.

Backing up SQL Server databases has been documented in thousands of books, Microsoft documentation and blog posts since the product was released in the 90s.

I’m not about to try and tell you HOW to backup your SQL databases. Rather, I want to take you through the basic considerations of deciding what to back up, when, how often, etc. in clear language that non-SQL folks can work through.

Some basic terminology:

  • Database backup – sending all of the data, structure, objects, etc. into a different location (file/tape/virtual device). This is at the database level. Not Server or Instance wide.
  • Device – formerly called a dump device, this is the target for your backup.
  • Full backup – all of the contents, data and code.
  • Differential – Everything that has changed since the last Full
  • Transaction Log backup – All new entries in the .ldf file since the last t-log backup
  • Recovery model – 3 choices, depending on how much data loss you can tolerate (Full, Bulk-Logged, Simple)
  • RPO – Recovery Point Objective – the goal of what point you need to be able to recover to.
  • RTO – Recovery Time Objective – how fast you need to be able to restore the data.
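For reference, those three backup types map directly to three T-SQL statements. A minimal sketch (the database name and file paths here are made up; point them at your own server):

```sql
-- Full backup: all of the contents, data and code
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_Full.bak';

-- Differential: everything that has changed since the last Full
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_Diff.bak' WITH DIFFERENTIAL;

-- Transaction log backup: new .ldf entries since the last t-log backup
-- (only valid under the Full or Bulk-Logged recovery model)
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_Log.trn';
```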

Decisions you really need to make…simplified:

  • Do I need to back up this database?
  • How often?
  • What is my tolerance for data loss?
  • How long can we be down if we have to restore?
  • Where should I store my backups?
  • How long should I keep them, based on business and regulatory requirements?
  • Should I test my backups?

One by one:

Do I need to back up this database?

Most likely.  Unless this is a completely stale copy of read-only data that you can re-generate with a few clicks, back it up.   Restoring from backup is generally much faster than re-generating.  Back up the system databases as well, in case you have to build a new server.

How Often?

A very common standard in the industry to start with is a Full backup weekly, Differentials daily and T-logs throughout the day.   This speaks directly to your tolerance for data loss.   Adjust based on your specific business needs.  Slowly changing data needs fewer backups than a highly active system such as order fulfillment.

What is my tolerance for data loss?

The more “nines” you want, the more expensive your backup solution will be.   Be realistic, and sort out whether you can handle losing a minute, a day or a month of data.  I’ve seen all of the above as the appropriate answer.   There are not ANY generic backup solutions within SQL Server that will guarantee zero data loss.   That is a whole different conversation.

How long can we be down if we have to restore?

Commonly referred to by management in terms of how much money we are losing, due to lost productivity, lost sales, etc.   If you can be down for an hour and one person is affected, that is wildly different than your online store database being down for an hour when you are the size of an Amazon or eBay.  Be realistic.   Test restore your databases to determine the restore time.

Where should I store my backups?

Somewhere else!  Industry standard is to have “hot” backup file(s) local to the server (different drives than the database files), and copies of them somewhere else…tape, SAN, etc.  Again, the more resilient the solution needs to be, the more expensive.  Cloud storage is getting cheaper and cheaper all the time.

How long should I keep them, based on business and regulatory requirements?

I’ve had customers trash backups after a week, and one that had to keep 30 years.   True story.  Ask your auditors and legal…you don’t want to be the one that has to explain why your backups only go back a year when 3 were required by law.

Should I test my backups?

I forget who, but someone well respected in the SQL Server world said this years ago (paraphrasing): “If you don’t test your DR solution, you don’t have a DR solution.  You have a DR Hope.”  Do test restores on another server so you know the files are valid, and so you know how long the restores take.
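A test restore can be sketched like this (names and paths hypothetical). RESTORE VERIFYONLY only proves the file is readable; an actual restore proves the backup works and tells you your real RTO:

```sql
-- Quick sanity check: is the backup file readable and complete?
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyDB_Full.bak';

-- The real test: restore under a new name on a test server
RESTORE DATABASE MyDB_Test
FROM DISK = N'D:\Backups\MyDB_Full.bak'
WITH MOVE N'MyDB'     TO N'D:\Data\MyDB_Test.mdf',
     MOVE N'MyDB_log' TO N'D:\Logs\MyDB_Test.ldf',
     STATS = 10;  -- progress messages, handy for timing your RTO
```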

Other considerations:

You do not need a 3rd party solution or expensive toolset to implement basic backups.   SQL Server has a Maintenance Plan wizard that will walk you through getting a basic plan in place.   The more complex your needs, the less likely this is to be sufficient.

Setting up backups is a bare minimum starting point for any organization. Depending on industry, organizational and regulatory requirements, you may have to go above and beyond.   You may need to introduce compression, encryption, off-site storage, etc.

If your database/application/website uptime requirements have a lot of nines…you will be looking into HA (High Availability) solutions, of which there are a variety.  DR (Disaster Recovery) starts with backups at the database level.   Depending on the level of disaster you are trying to be able to recover from, you may need to protect more and more.   I start this conversation from small to large:  Database deleted, server dead, server room fire, building collapse, city offline.

This was not meant to be an exhaustive list of every possible backup and restore scenario in the SQL Server world.  But I do hope that if you are not a DBA you now have a basic understanding.  If you have a DBA, go ask her about your current strategy.  If you don’t have anyone doing this for you, it may be time to bring someone in to get you set up…and now you have a starting point for that conversation.

Please leave me any follow up questions in the comments!

Kevin3NF

Filed Under: Accidental DBA, backup

T-SQL Tuesday: Still dealing with the same problems

October 11, 2016 by Kevin3NF 2 Comments


This post is my contribution to the monthly blog party known as “T-SQL Tuesday.”  This month is being hosted by Andy Mallon, and covers the topic We’re still dealing with the same problems.

The topic was inspired by a Tweet from Allan Hirt.   Allan is exactly right.

I entered the IT field officially (meaning full-time) in 1998 as an Access Developer.  I’ve done desktop support, database dev, network, phone systems and for the last 17 years I’ve been working exclusively with SQL Server, v. 6.5 – 2016.

In all these years, we still have users who don’t know to reboot a frozen laptop, who store passwords nowhere but a sticky note, and who call me to “fix the internet” (pro tip…it’s broken.  Go read the comments on any news article.)

Narrowing this down to my slice of the IT world (SQL), then to Admin, and then even further took some time.   The thing that I see and deal with the most came down to “blindly clicking OK and accepting all the defaults, all the time.”

I get it.

On my first SQL installation, I clicked OK and yes right through the install wizard and had a functional SQL 2000 install on my desktop.   On the C drive, which was almost full already.

Installation defaults that are going to bite you (not version specific, and the installer is getting better):

  • Files all on the C drive
  • One TempDB data file (improved in SQL 2016)
  • Backups on C drive
  • No automated backups
  • Allow SQL to use ALL the memory
  • Allow SQL to use ALL the CPUs
  • Builtin\Administrators group not default*
  • Compressed backup set to OFF

If you have any of the above, please research each and sort out why I listed it…you will learn more along the way than I can teach you here.  Each of these is documented extremely well by both Microsoft and the SQL Community.

* let the religious debate start in 3…2…1…
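As one example of sorting these out, here is how I’d check and cap the memory default with sp_configure (the 32000 MB figure is purely illustrative; size it for your own server, leaving headroom for the OS):

```sql
-- Advanced options must be visible before you can touch max server memory
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Show the current setting (default is effectively "all the memory")
EXEC sp_configure 'max server memory (MB)';

-- Cap SQL Server so the OS keeps some memory for itself
EXEC sp_configure 'max server memory (MB)', 32000;
RECONFIGURE;
```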

Another issue arises not at install time, but shortly thereafter, when you create your first database: the VERY common failure of setting up a new database using the default FULL Recovery model without a Transaction Log backup plan in place.

There are hundreds, maybe thousands of forum posts asking the same question: “My database is 1GB, but my T-log is 500GB and filled the drive.”

I had this question brought to me just 2 weeks ago at SQL Saturday Dallas by a developer turned accidental DBA.  We spent 30 minutes discussing his database restore failures and why they took so long.   At the very end he mentioned that the .LDF file was HUGE compared to the .MDF file, and then walked off to the next session.  That HUGE .LDF was taking most of the restore time, because SQL Server has to write zeros into it long after the data is restored.
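If you suspect this is happening to you, here is a quick check against the system tables (a sketch; usual test-before-you-trust caveats apply). A FULL recovery database with no recent log backup in the results is your runaway .LDF candidate:

```sql
-- Which databases are in FULL recovery, and when was each log last backed up?
SELECT d.name,
       d.recovery_model_desc,
       MAX(b.backup_finish_date) AS last_log_backup   -- NULL = never
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
    ON b.database_name = d.name
   AND b.type = 'L'          -- 'L' = transaction log backup
GROUP BY d.name, d.recovery_model_desc;
```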

Long story short:  Make sure you understand the defaults before you install, and implement a proper backup plan.  You can learn these easily and I’m expensive if I need to come fix them for you 🙂

Kevin3NF


Filed Under: Accidental DBA, backup, Install, SQL, TSQL2sday

Analyzing an Upgrade after the fact

September 26, 2016 by Kevin3NF Leave a Comment

We had a major upgrade to our main application recently, so I thought I would share some of the “thoughts from the aftermath.”

The good:  We gave the application back to the business 1/2 of a business day earlier than expected.   Win.

The bad:  We were hoping to give it back a full day earlier

The ugly: we wasted a full day having to rollback and start over

Some lessons learned:

  • Full backups ahead of time are a great idea – check
  • Restore scripts for those backups are a great idea – check
  • Using SQLCMD to apply vendor .sql files is a great idea – check
  • Forgetting the -I parameter in SQLCMD is a bad idea (not -i, but -I for Quoted Identifiers, which is OFF by default in SQLCMD, but ON in SSMS)
  • Not having a full setup lab environment to test everything – bad
  • Using the default settings in SSDT Schema Compare – bad
  • Re-ordering columns in tables for cosmetic reasons – bad (this one is on the vendor…we had to re-write some code that broke)
  • Krispy Kreme donuts in the war room followed by a huge box of cupcakes – bad.  Didn’t affect the upgrade but I ate too much 😉

Those are the big items.

This post inspired by Tim Mitchell‘s post today : Studying Failures

Feel free to share your deployment/upgrade brilliance or derp moments in the comments below.

Kevin3NF


Filed Under: backup, Deployment, Restore, Uncategorized Tagged With: Deployment, SQL

SQL 101 – Recovery models in simple terms

January 10, 2008 by Kevin3NF Leave a Comment

A recovery model is simply a choice made on each database that determines how much data you can recover if your db goes “toes up”.

If you do not backup your database, you have chosen recovery model “terminated” or “update resume”

The 3 that are offered by Microsoft are:

  • Bulk-logged (rarely seen, and generally more advanced, so I’m skipping it)
  • Simple
  • Full

Simple vs. Full is very simply a matter of how much data you can afford to lose if the database is lost and you are forced to restore from your backups.

Simple recovery model: does not allow backing up the transaction logs, therefore you cannot restore any of the data in them. You can only restore to the point of the Full backup and any one Differential you may have.

Full recovery model: You can restore from t-log backups (assuming you have them), right up to a specific point in time. Reduced data loss, assuming the backup files are valid.
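That point-in-time restore looks like this, sketched out (names, paths and the timestamp are made up). Note the NORECOVERY on everything except the last step, so SQL Server keeps the database open for more log restores:

```sql
-- Restore the Full backup, leaving the database ready to accept log backups
RESTORE DATABASE MyDB
FROM DISK = N'D:\Backups\MyDB_Full.bak'
WITH NORECOVERY;

-- Roll the transaction log forward to an exact moment, then bring it online
RESTORE LOG MyDB
FROM DISK = N'D:\Backups\MyDB_Log.trn'
WITH STOPAT = '2008-01-10 14:30:00', RECOVERY;
```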

When to use:

Simple: When you do not care about the data, such as in a Development or Test environment where you can regenerate from another source. Also useful in a static, read-only database (assuming you have a full backup).

Full: Pretty much any live production database that has Inserts, Updates and Deletes happening.

Switching from one to the other:

Simple to Full: Immediately take a Full or Differential backup

Full to Simple: No specific action required, other than verifying regular data backups are in place and working.
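Both switches, sketched in T-SQL (MyDB is a placeholder):

```sql
-- Simple to Full: the switch itself...
ALTER DATABASE MyDB SET RECOVERY FULL;
-- ...followed immediately by a Full (or Differential) backup to start
-- the log chain; log backups will fail until you take one.
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_Full.bak';

-- Full to Simple: no backup required; the log now truncates on checkpoint
ALTER DATABASE MyDB SET RECOVERY SIMPLE;
```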

Maintenance plan considerations:

If you have both Simple and Full recovery model databases in your SQL instance, and you create a Maintenance Plan to back up data and logs, you may run into an issue (at least in SQL 2000) where the automated deletion of old t-log backups is failing. Make two plans: one for Full and one for Simple. I have no idea if this issue still presents in SQL 2005.

I hope this is clear…please feel free to comment.

Kevin3NF

Filed Under: Accidental DBA, backup, Beginner, EntryLevel, SQL

Do What?

October 25, 2007 by Kevin3NF Leave a Comment

Do what?

So….I’ve got this job that backs up the T-log every 15 minutes (SQL 2005). Works like a champ across all servers.

Out of the blue, it starts failing with a “No full backup detected”, despite the fact that one occurred less than an hour ago. It’s recorded in the msdb system tables and the file is right where the full backup job left it.

The customer had decided for reasons unknown to create a job that does this (every hour):

Use MyDB
DBCC SHRINKFILE(MyDB_Log, EMPTYFILE)
BACKUP LOG MyDB WITH TRUNCATE_ONLY
DBCC SHRINKFILE(MyDB_Log, EMPTYFILE)

go

I kid you not. And no…there is not a second file to EMPTY the log contents into.

For those of you new to the game, when you Truncate a T-log, the sequence is broken and SQL Server barfs on future log backups until you run a full.
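Re-establishing the chain is simple once you find the culprit; either of these restarts it, and on a big database a Differential is usually faster (names hypothetical):

```sql
-- Either of these restarts the log backup chain after a truncate:
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_Full.bak';
-- or, if a prior Full already exists:
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_Diff.bak' WITH DIFFERENTIAL;
```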

Kudos to my co-worker (code name Freakshow) for catching this as I was headed to the ERRORLOG to see what was going on. He nailed in 10 seconds what would have taken me 7 minutes. And then he knew it was a new job while I was still reading the log. Scary smart, that dude.

No fries, but ketchup squirted all over that client.

Kevin3NF

Filed Under: backup, Fries


