
DallasDBAs.com

SQL Server Database Consulting



Installing SQL 2016 Developer – Extreme Basics

December 29, 2016 by Kevin3NF

If you are a student, a developer, or a non-DBA wanting to get started learning more about SQL Server, and you need an install to play with, you have a couple of free choices:

  • SQL Server Express – free, but has some size and performance restrictions. Also, no SQL Server Agent, so no job scheduling.
  • SQL Server Developer – also free, with the full feature set of Enterprise Edition, but it cannot legally be used in production. Because it's a dev edition.

If you’ve never installed SQL Server before and are not sure what to do, this video is for you. It is targeted at new SQL Server users installing on their laptops/workstations. I take all the defaults and describe why. I install just the database engine (and the SQL Server Agent that comes along with it)…no Reporting Services, no Analysis Services, etc.

This is just so you can see it before you do it. Enjoy!

Thanks for watching!

Kevin3NF

Filed Under: Accidental DBA, Beginner, Install, SQL

Who Dropped my database?

December 20, 2016 by Kevin3NF

I borrowed the starting point for this effort from SQL Server expert and MVP Pinal Dave here: SQL SERVER – Who Dropped Table or Database?

We have a staging server with a database for each client, so we can test before working in their production databases. Recently one has gone missing a few times…but nobody will own up to it.

For a quick hit, I went to Pinal’s site and found most of the code below, which reads through all of the default trace files on the system and returns the Object Dropped, Altered, or Created event classes.

I have since modified it with a few comments, readability tweaks, commented reference links, etc.

I do not claim this work as my own (just the modifications), nor do I promise it will work in your environment.  Learn it, understand it, then use it…in that order 🙂

-- read all available traces,
-- including the current one
DECLARE @current VARCHAR(500);
DECLARE @start VARCHAR(500);
DECLARE @indx INT;

SELECT @current = path
FROM sys.traces
WHERE is_default = 1;

--remove the rollover number from the filename
--(i.e., log_157.trc becomes log.trc)
SET @current = REVERSE(@current);
SELECT @indx = PATINDEX('%\%', @current);
SET @current = REVERSE(@current);
SET @start = LEFT(@current, LEN(@current) - @indx) + '\log.trc';

-- CHANGE FILTER AS NEEDED
SELECT
    CASE EventClass
        WHEN 46 THEN 'Object:Created'
        WHEN 47 THEN 'Object:Deleted'
        WHEN 164 THEN 'Object:Altered'
        ELSE 'Other'
    END AS EventClass,
    DatabaseName,
    ObjectName,
    ObjectType, -- https://bit.ly/2hDOJ8V
    HostName,
    ApplicationName,
    LoginName,
    StartTime
FROM ::fn_trace_gettable(@start, DEFAULT)
WHERE 1=1
    AND EventClass IN
    (
    46,  --Object:Created
    47,  --Object:Deleted
    164  --Object:Altered
    ) --set events: https://bit.ly/2hMDz2r
    AND EventSubclass = 0
    AND DatabaseID <> 2 --eliminate tempdb
    --AND DatabaseName LIKE '%%'
ORDER BY StartTime DESC;

As a test, I created and dropped a ‘test’ database, then ran the above to verify both events were captured.

I am confident that the next time this DB goes away, I will find out who and how…maybe even why.   I can also run this for pretty much anything else just by modifying the Event Classes in the WHERE clause.
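
For example, a sketch (not tested here): the default trace also captures the file auto-growth event classes, 92 (Data File Auto Grow) and 93 (Log File Auto Grow), so swapping the IN list hunts growth events instead. You may need to drop the EventSubclass line for events that use subclasses differently.

    AND EventClass IN
    (
    92, --Data File Auto Grow
    93  --Log File Auto Grow
    )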

Happy hunting and thanks for reading!

Kevin3NF

 

Filed Under: SQL Tagged With: SQL Trace

TSQL2SDay: Backup Basics

December 13, 2016 by Kevin3NF


T-SQL Tuesday is a blog party started by Adam Machanic (b/t) seven years ago. On the first Tuesday of each month a blogger hosts the party and suggests a topic, and anyone who is interested blogs on that topic. It can be a lot of fun, and quite a challenge to blog on a topic you didn’t pick.

This month’s topic is presented by my friend Kenneth Fisher (b/t), who has challenged us to go #EntryLevel and talk about backups.  So I did…in a video just posted to YouTube:

You’ll want to go full screen to see what’s going on.  This video is targeted at non-DBAs, but give it a watch anyway…I can take the heckling!

Filed Under: Accidental DBA, backup, TSQL2sday

Getting to know PowerShell, from an old DBA

December 12, 2016 by Kevin3NF


My team was recently tasked with finding and resolving all Errors and Warnings for every server in our environment, from Prod all the way down to Dev (developers are on their own for their laptops). That’s a lot of servers for a small admin team that mostly does SQL Server, so I knew I needed automation.

I tried just exporting directly from each server’s Event Viewer, but that was far too cumbersome, and not really repeatable for the future.

Finally, I have an actual use for PowerShell, since SQL Server is not involved here. Best tool for the job, yes?

What follows is a step-by-step of how I expanded on the most basic command to get more and more out of it. I spent a lot of time on MSDN looking up cmdlets, and went to Stack Overflow twice with basic questions. I even went to my “DBA” Twitter feed early in the process using #sqlhelp, since a lot of my colleagues are veteran PoSH users.

Warning…it’s very possible I’ve gained a syntax error here and there when copying and formatting this into a post…run at your own peril 😉

Start here to see the specifics of the Get-EventLog cmdlet:
MSDN Get-EventLog

Step 1: Get a minimal command to actually execute:

#This will bring back every entry from your local machine's System log

Get-EventLog -LogName System

(Feel free to run these in your non-prod/test environment.)

Step 2: Cut that list down a bit:

#Let's filter out everything but Errors and Warnings...
Get-EventLog -LogName System -EntryType "Error","Warning"

Step 3: Add a date parameter and line wrap the code (note the ` at the end of line 1):

#Still too much, how about just the last three days?
Get-EventLog -LogName System -EntryType "Error","Warning" `
-After (Get-Date).AddDays(-3)

Step 4: Connect to remote computer:

#Switch from local pc to remote machine (local is default)
Get-EventLog -LogName System -EntryType "Error","Warning" `
-After (Get-Date).AddDays(-3) -ComputerName SERVER08

Step 5: Connect to multiple remote computers:

#Add a second server to the list; note the "" around each name
Get-EventLog -LogName System -EntryType "Error","Warning" `
-After (Get-Date).AddDays(-3) -ComputerName "SERVER08","SERVER07"

Results:

PS C:\Users\Kevin3NF> Get-EventLog -LogName System -EntryType "Error","Warning" `
-After (Get-Date).AddDays(-3) -ComputerName "SERVER08","SERVER07"

Index  Time          EntryType Source   InstanceID Message
-----  ----          --------- ------   ---------- -------
162296 Dec 06 03:40  Error     Schannel      36888 A fatal alert was generated and sent to the remote endpoint. This may r…
162295 Dec 06 03:40  Error     Schannel      36874 An TLS 1.2 connection request was received from a remote client applica…

Step 6: Pick and choose the columns we care about:

# Pipe results to Select-Object to choose the properties (columns) we want
Get-EventLog -LogName System -EntryType "Error","Warning" `
-After (Get-Date).AddDays(-3) -ComputerName "SERVER08","SERVER07" `
| Select-Object -Property machineName, EntryType, EventID, Source, TimeGenerated, Message

Results:

MachineName : SERVER08.DallasDBAs.com
EntryType : Error
EventID : 36888
Source : Schannel
TimeGenerated : 12/6/2016 3:40:24 AM
Message : A fatal alert was generated and sent to the remote endpoint. This may result in termination of the connection. The TLS
protocol defined fatal error code is 40. The Windows SChannel error state is 1205.

MachineName : SERVER08.DallasDBAs.com
EntryType : Error
EventID : 36874
Source : Schannel
TimeGenerated : 12/6/2016 3:40:24 AM
Message : An TLS 1.2 connection request was received from a remote client application, but none of the cipher suites supported by the
client application are supported by the server. The SSL connection request has failed.

Step 7: Let’s get something that looks more table-ish…

# Ugh. I want a table I can work with:
Get-EventLog -LogName System -EntryType "Error","Warning" `
-After (Get-Date).AddDays(-3) -ComputerName "SERVER08","SERVER07" `
| Select-Object -Property machineName, EntryType, EventID, Source, TimeGenerated, Message `
| Format-Table -Property MachineName, TimeGenerated, EntryType, Source, EventID, Message

Results:

SERVER08.DallasDBAs.com 12/6/2016 3:39:05 AM Error Schannel 36888 A fatal alert was ge…
SERVER08.DallasDBAs.com 12/6/2016 3:38:38 AM Warning LsaSrv 40968 The Security System …
SERVER07.DallasDBAs.com 12/6/2016 4:06:30 AM Error Schannel 36888 A fatal alert was ge…
SERVER07.DallasDBAs.com 12/6/2016 4:06:30 AM Error Schannel 36874 An TLS 1.2 connectio…

(this looks like a table when run in PowerShell)

Step 8: 2 servers? Yeah, right…more like 20+!

Make a ServerList_Sample.txt file (Name it whatever you want):


Save it somewhere.
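
For reference, the file is nothing fancy…just one server name per line, for example (these names are placeholders):

SERVER07
SERVER08
SERVER09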

Step 9: Reference that file in the PS script via a variable:

# Store each server in ServerList_Sample.txt in a variable
$computers = Get-Content "B:\DBA\ServerList_Sample.txt";

# Loop through each server and get the logs
foreach($computer in $computers)

    {
    Get-EventLog -LogName System -EntryType "Error","Warning" `
    -After (Get-Date).AddDays(-3) -ComputerName $computer `
    | Select-Object -Property machineName, EntryType, EventID, Source, TimeGenerated, Message `
    | Format-Table -Property MachineName, TimeGenerated, EntryType, Source, EventID, Message
    }

New items:
• foreach
• $computers
• -ComputerName $computer (variable call)
• Statement braces {}

In English: for every computer in the file I supply, get the event log info.

Result:

SERVERACA18.DallasDBAs.com 12/6/2016 2:45:28 PM Warning ebdrv 34 NTPNP_PCI0184 Unqual…
SERVERACA18.DallasDBAs.com 12/6/2016 2:45:28 PM Error hpipmibt 1001 The description for …
SERVERACA18.DallasDBAs.com 12/6/2016 2:34:41 PM Warning WinRM 10149 The description for …

MachineName TimeGenerated EntryType Source EventID Message


SERVERACA21.DallasDBAs.com 12/9/2016 10:34:48 AM Error DCOM 10016 The description for …
SERVERACA21.DallasDBAs.com 12/9/2016 10:13:03 AM Error Service Control Manager 7034 The Acme-Calc s…

Step 10: Oh crud…a server in my server list is offline for patching!

# Store each server in ServerList_Sample.txt in a variable
$computers = Get-Content "B:\DBA\ServerList_Sample.txt";

# loop through each server
foreach($computer in $computers)

{
  # Try to make a connection to each server

  if(Test-Connection $computer -Quiet -Count 1)

  # on successful connection, go get the logs
    {
    Try
        {
        Get-EventLog -LogName System -EntryType "Error","Warning" `
        -After (Get-Date).AddDays(-3) -ComputerName $computer `
        | Select-Object -Property machineName, EntryType, EventID, Source, TimeGenerated, Message `
        | Format-Table -Property MachineName, TimeGenerated, EntryType, Source, EventID, Message
        }
    # if an error in retrieving the logs, write it out
    Catch
        {
        Write-Verbose "Error $($error[0]) encountered when attempting to get events from $computer"
        }
    }
# if the Connection attempt failed, write this error…
    else
        {
        Write-Verbose "Failed to connect to $computer"
        }
}

New Items:
• if(Test-Connection…)/else…(basically a ping)
• Try / Catch blocks
• Write-Verbose for error reporting
• Nested statement Braces {}
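
One caveat: Write-Verbose output is hidden by default, so the messages in the Catch and else blocks won’t show up unless verbose output is enabled. If you want to see them while testing, set the preference first:

# Show Write-Verbose messages (they are suppressed by default)
$VerbosePreference = 'Continue'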

Step 11: Output this to a .txt file for use offline in Excel

 
# Store each server in ServerList_Sample.txt in a variable
$computers = Get-Content "B:\DBA\ServerList_Sample.txt";

# loop through each server
foreach($computer in $computers)
{
    # Try to make a connection to each server
    if(Test-Connection $computer -Quiet -Count 1)
    # on successful connection, go get the logs
        {
        Try
            {
            Get-EventLog -LogName System -EntryType "Error","Warning" `
            -After (Get-Date).AddDays(-3) -ComputerName $computer `
            | Select-Object -Property MachineName, EntryType, EventID, Source, TimeGenerated, Message `
            | Format-Table -Property MachineName, TimeGenerated, EntryType, Source, EventID, Message `
            | Out-File -Append B:\DBA\System.txt ;

            Get-EventLog -LogName Application -EntryType "Error","Warning" `
            -After (Get-Date).AddDays(-3) -ComputerName $computer `
            | Select-Object -Property MachineName, EntryType, EventID, Source, TimeGenerated, Message `
            | Format-Table -Property MachineName, TimeGenerated, EntryType, Source, EventID, Message `
            | Out-File -Append B:\DBA\Application.txt ;
            }
        # if an error in retrieving the logs, write it out
        Catch
            {
            Write-Verbose "Error $($error[0]) encountered when attempting to get events from $computer"
            }
        }
    # if the connection attempt failed, write this error…
    else
        {
        Write-Verbose "Failed to connect to $computer"
        }
}

New Items:
• Out-File -Append # simply sends the output to the specified file without overwriting
• Added a second statement to do all the same for the Application log, inside the foreach loop, writing to Application.txt

Results from system.txt:

SERVERACA18.DallasDBAs.com 12/8/2016 2:18:21 PM Error 10016 The description for…
SERVERACA18.DallasDBAs.com 12/8/2016 1:36:37 PM Warning 1014 Name resolution for…
SERVERACA18.DallasDBAs.com 12/8/2016 11:54:19 AM Warning 1014 Name resolution for…
SERVERACA18.DallasDBAs.com 12/8/2016 11:44:20 AM Error 10016 The description for…

MachineName TimeGenerated EntryType Source EventID Message


SERVERACA22.DallasDBAs.com 12/9/2016 10:11:00 AM Warning 1101 The SNMP Service is…
SERVERACA22.DallasDBAs.com 12/9/2016 10:11:00 AM Warning 1101 The SNMP Service is…
SERVERACA22.DallasDBAs.com 12/9/2016 10:10:55 AM Error 5719 This computer was n…

Not great…Message got chopped off, and I don’t really want the headers for each server…but still getting there.

Step 12 (final): Add variables, append the results to them, and write the contents out to a .csv

#Drop the existing files
Remove-Item B:\DBA\System_Test.csv
Remove-Item B:\DBA\Application_Test.csv

# SERVER LIST PROPERTIES
# Get the computer list. This is just a plain text file with the servers listed out.
$computers = Get-Content "B:\DBA\ServerList.txt";
$sys = @()
$app = @()

# QUERY THE SYSTEM AND APPLICATION EVENT LOGS ON EACH COMPUTER
foreach($computer in $computers)
{
    # Try to make a connection to each server
    if(Test-Connection $computer -Quiet -Count 1)
        {
        Try
            {
            $sys += Get-EventLog -LogName System -EntryType "Error","Warning" `
            -After (Get-Date).AddDays(-3) -ComputerName $computer `
            | Select-Object -Property MachineName, EntryType, EventID, Source, TimeGenerated, Message ;

            $app += Get-EventLog -LogName Application -EntryType "Error","Warning" `
            -After (Get-Date).AddDays(-3) -ComputerName $computer `
            | Select-Object -Property MachineName, EntryType, EventID, Source, TimeGenerated, Message ;
            }
        # if an error in retrieving the logs, write it out
        Catch
            {
            Write-Verbose "Error $($error[0]) encountered when attempting to get events from $computer"
            }
        }
    # If the connection attempt failed, write this error...
    else
        {
        Write-Verbose "Failed to connect to $computer"
        }
}
$sys | Export-Csv B:\DBA\System_Test.csv -NoTypeInformation -Append;
$app | Export-Csv B:\DBA\Application_Test.csv -NoTypeInformation -Append;

New Items:
• Remove-Item – delete the existing files
• $sys = @() – declare an empty array variable (one for System, one for Application)
• $sys += – append the results to the variable
• Format-Table – removed
• Out-File – removed
• $sys | Export-Csv – send the contents of the variable to a .csv file

Results:

Looking at the resulting System_Test.csv file in Excel:

There is only one set of headers, and the full message is there in column ‘F’, just not showing. Some details may be missing, as not everything is exposed to Get-EventLog.

This allows me to use Excel to sort, filter, group, count, and prioritize the effort needed to clean up dozens or hundreds of servers. I expect to run this weekly until we get down to minimal results, then set up alerts.

I hope you got something useful out of this. It all works on my machine, but it is heavily edited for public consumption. Work on your local machine or in a test lab before you even think about moving to Prod. This was written against PowerShell 3.0; it may or may not work in lower versions…I have no desire to check.

Shout out to Chris for helping with the very early parts of this process, as well as those who answered my Question1 and Question2 at Stack Overflow.

Update: 1/14/2017:

I decided to enhance this by dumping all the data into one file, and by changing the core cmdlet from Get-EventLog to Get-WinEvent, so I can pick up “Critical” events as well as “Error” and “Warning”:

#Drop the existing file
Remove-Item C:\Get_WinEvent.csv

# SERVER LIST PROPERTIES
# Get the computer list. This is just a plain text file with the servers listed out.
$computers = Get-Content "C:\ServerList.txt";
$sys = @()
$st = (Get-Date).AddDays(-7)

# QUERY THE SYSTEM AND APPLICATION EVENT LOGS ON EACH COMPUTER
foreach($computer in $computers)
{
    if(Test-Connection $computer -Quiet -Count 1)
        {
        Try
            {
            $sys += Get-WinEvent -ComputerName $computer -FilterHashtable @{logname="System","Application";level=1,2,3;starttime=$st} `
            | Select-Object -Property MachineName, LevelDisplayName, LogName, ID, ProviderName, TimeCreated, Message
            }
        Catch
            {
            Write-Verbose "Error $($error[0]) encountered when attempting to get events from $computer"
            }
        }
    else
        {
        Write-Verbose "Failed to connect to $computer"
        }
}
$sys | Export-Csv C:\Get_WinEvent.csv -NoTypeInformation -Append;

The part that defines Critical/Error/Warning is level=1,2,3 (1 = Critical, 2 = Error, 3 = Warning).

Thanks for reading!

Kevin3NF

Filed Under: Beginner, EntryLevel, PowerShell

TIL: Starting a recurring SQL trace

December 8, 2016 by Kevin3NF

Today I learned…

How to set up a recurring Server-side SQL trace that runs every hour for 10 minutes.

Issues:

  • 6 people in the room are staring at me, waiting for a last-second request to be done at the end of an 11-hour day (3 of them from the VBV – Very Big Vendor)
  • Trace file names must be different, or you get errors
  • Trace files cannot end with a number
  • I can’t tell time when I am hungry and tired

I knew I needed to script out the trace to T-SQL (which I had already done) and put it in a ‘start trace’ job on an hourly schedule. I have set these up before to run ad hoc, but not recurring.
I also needed code to stop the trace 10 minutes later that my boss understands…so we are going for fast and functional here…not pretty.

In the SQL script Profiler generates, the @tracefile parameter supplied to sp_trace_create is specified in the call:

exec @rc = sp_trace_create @TraceID output, 2, 'D:\MyTraceFiles\Trace', @maxfilesize, NULL

If you need different names for each execution, you have to get creative. I created a parameter to feed in that included a getdate():

declare @tracefile nvarchar(245)
set @tracefile = 'D:\MyTraceFiles\Trace_'+Cast (getdate() as varchar(20))+'_Trace'

This worked and gave this result in testing:

D:\MyTraceFiles\Trace_Dec 8 2016 7:08AM_Trace

But…running the full trace script generated files in the directory with this name:

Trace_Dec 8 2016 7

It took me some time playing with variable sizes and such before I asked one of the VBV guys to look at it. We figured out the ‘:’ in the time stamp was an issue, so I included a REPLACE:

set @tracefile = 'D:\MyTraceFiles\Trace_'+replace(Cast (getdate() as varchar(20)),':','')+'_Trace'

Which gave this result:

D:\MyTraceFiles\Trace_Dec 8 2016 711AM_Trace

More importantly, I had a functioning trace script I could schedule:

-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @tracefile nvarchar(245)

set @maxfilesize = 4096
set @tracefile = 'D:\MyTraceFiles\Trace_'+replace(Cast (getdate() as varchar(20)),':','')+'_Trace'

--select @tracefile

exec @rc = sp_trace_create @TraceID output, 2, @tracefile, @maxfilesize, NULL
if (@rc != 0) goto error

-- Client side File and Table cannot be scripted

-- Set the events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 10, 1, @on
exec sp_trace_setevent @TraceID, 10, 9, @on
exec sp_trace_setevent @TraceID, 10, 2, @on
-- ... (remaining sp_trace_setevent calls omitted)
exec sp_trace_setevent @TraceID, 13, 64, @on
exec sp_trace_setevent @TraceID, 13, 66, @on

-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint

exec sp_trace_setfilter @TraceID, 35, 0, 6, N'Database1'
exec sp_trace_setfilter @TraceID, 35, 1, 6, N'Database2'
--above, the '1' in the third parameter changes this to an OR instead of an AND,
--so: DatabaseName is Database1 OR Database2. Also new info for me today.

-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1

-- display trace id for future references
select TraceID=@TraceID
goto finish

error:
select ErrorCode=@rc

finish:
go

Scheduling was simple…drop the above into a ‘Start trace’ job and enable it to run every hour, on the hour.

Create a second job to run every hour, at 10 minutes past, with the following:

exec sp_trace_setstatus 2, 0 -- stop the trace
exec sp_trace_setstatus 2, 2 -- close the trace and delete its definition from the server

On this system, my trace will always be 2. Trace 1 is the default trace, and nobody else here runs traces.
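
If you can’t count on the trace ID, a hedged alternative for the stop job (assuming the trace you want is the most recently created non-default one) would be:

-- Find the highest-numbered non-default trace and stop it
declare @TraceID int
select @TraceID = max(id) from sys.traces where is_default = 0

exec sp_trace_setstatus @TraceID, 0 -- stop the trace
exec sp_trace_setstatus @TraceID, 2 -- close it and delete the definition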

Summary:

I already knew I don’t do well at the end of a long day or in a hurry, so I took my time while the others waited, rather than put a bad solution in place to run overnight. There are much more elegant ways to do this, but this worked for me and I learned it on the fly.

Thanks for reading!

Kevin3NF

Filed Under: Performance, TIL Tagged With: job, performance, SQL Trace, TIL

Back to Basics: Backups

November 10, 2016 by Kevin3NF

A database backup intro for: Accidental DBAs, Juniors, Developers, Managers, etc.

Backing up SQL Server databases has been documented in thousands of books, Microsoft documentation pages, and blog posts since the product was released in the 90s.

I’m not about to try to tell you HOW to back up your SQL databases. Rather, I want to take you through the basic considerations of deciding what to back up, when, how often, etc., in clear language that non-SQL folks can work through.

Some basic terminology:

  • Database backup – sending all of the data, structure, objects, etc. to a different location (file/tape/virtual device). This is at the database level, not server- or instance-wide.
  • Device – formerly called a dump device, this is the target for your backup.
  • Full backup – all of the contents, data and code.
  • Differential – everything that has changed since the last Full.
  • Transaction Log backup – all new entries in the .ldf file since the last t-log backup.
  • Recovery model – 3 choices (Full, Bulk-Logged, Simple), depending on how much data loss you can tolerate; see the sketch just after this list.
  • RPO – Recovery Point Objective – the point in time you need to be able to recover to.
  • RTO – Recovery Time Objective – how fast you need to be able to restore the data.
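
The sketch promised above: checking and changing the recovery model is a one-liner each (MyDB is a placeholder name):

-- See what each database is set to
SELECT name, recovery_model_desc FROM sys.databases;

-- Change it (FULL, BULK_LOGGED, or SIMPLE)
ALTER DATABASE MyDB SET RECOVERY FULL;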

Decisions you really need to make…simplified:

  • Do I need to back up this database?
  • How often?
  • What is my tolerance for data loss?
  • How long can we be down if we have to restore?
  • Where should I store my backups?
  • How long should I keep them, based on business and regulatory requirements?
  • Should I test my backups?

One by one:

Do I need to back up this database?

Most likely. Unless this is a completely stale copy of read-only data that you can re-generate with a few clicks: back it up. Restoring from backup is generally much faster than re-generating. Back up the system databases as well, in case you have to build up a new server.

How Often?

A very common industry standard to start with is a Full backup weekly, Differentials daily, and T-logs throughout the day. This speaks directly to your tolerance for data loss. Adjust based on your specific business needs. Slowly changing data can tolerate fewer backups than a highly active system such as order fulfillment.
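
As a minimal sketch of what those three look like in T-SQL (the database name and paths are placeholders; real jobs add date-stamped file names, verification, etc.):

-- Weekly
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_Full.bak';

-- Daily
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_Diff.bak' WITH DIFFERENTIAL;

-- Throughout the day (requires the Full or Bulk-Logged recovery model)
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_Log.trn';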

What is my tolerance for data loss?

The more “nines” you want, the more expensive your backup solution will be.   Be realistic, and sort out if you can handle a minute, a day or a month.  I’ve seen all of the above as the appropriate answer.   There are not ANY generic backup solutions within SQL Server that will guarantee zero data loss.   That is a whole different conversation.

How long can we be down if we have to restore?

Management commonly frames this as how much money we are losing due to lost productivity, lost sales, etc. If you can be down for an hour and one person is affected, that is wildly different from your online store database being down for an hour when you are the size of an Amazon or eBay. Be realistic. Test-restore your databases to determine the restore time.

Where should I store my backups?

Somewhere else!  Industry standard is to have “hot” backup file(s) local to the server (different drives than the database files), and copies of them somewhere else…tape, SAN, etc.  Again, the more resilient the solution needs to be, the more expensive.  Cloud storage is getting cheaper and cheaper all the time.

How long should I keep them, based on business and regulatory requirements?

I’ve had customers trash backups after a week, and one that had to keep them for 30 years. True story. Ask your auditors and legal…you don’t want to be the one who has to explain why your backups only go back a year when 3 were required by law.

Should I test my backups?

I forget who, but someone well respected in the SQL Server world said this years ago (paraphrasing): “If you don’t test your DR solution, you don’t have a DR solution.  You have a DR Hope.”  Do test restores on another server so you know the files are valid, and so you know how long the restores take.
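
At a minimum, verify the backup file is readable; better, do a timed test restore under a different name to learn your real RTO. A sketch (the paths and the logical file names 'MyDB' and 'MyDB_log' are assumptions; check yours with RESTORE FILELISTONLY):

-- Quick validity check of the backup file
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyDB_Full.bak';

-- Test restore under a different name
RESTORE DATABASE MyDB_Test
FROM DISK = N'D:\Backups\MyDB_Full.bak'
WITH MOVE 'MyDB' TO N'D:\Data\MyDB_Test.mdf',
     MOVE 'MyDB_log' TO N'D:\Data\MyDB_Test_log.ldf';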

Other considerations:

You do not need a 3rd-party solution or an expensive toolset to implement basic backups. SQL Server has a Maintenance Plan Wizard that will walk you through getting a basic plan in place. The more complex your needs, the less likely this is to be sufficient.

Setting up backups is a bare-minimum starting point for any organization. Depending on industry, organizational, and regulatory requirements, you may have to go above and beyond. You may need to introduce compression, encryption, off-site storage, etc.

If your database/application/website uptime requirements have a lot of nines…you will be looking into HA (High Availability) solutions, of which there are a variety.  DR (Disaster Recovery) starts with backups at the database level.   Depending on the level of disaster you are trying to be able to recover from, you may need to protect more and more.   I start this conversation from small to large:  Database deleted, server dead, server room fire, building collapse, city offline.

This was not meant to be an exhaustive list of every possible backup and restore scenario in the SQL Server world. But I do hope that if you are not a DBA, you now have a basic understanding. If you have a DBA, go ask her about your current strategy. If you don’t have anyone doing this for you, it may be time to bring someone in to get you set up…and now you have a starting point for that conversation.

Please leave me any follow up questions in the comments!

Kevin3NF

Filed Under: Accidental DBA, backup
