
DallasDBAs.com

SQL Server Database Consulting


SQL Server in the Cloud – Are DBAs Still Needed?

June 18, 2025 by Kevin3NF Leave a Comment

Things your cloud vendor may not tell you

 

Here’s a common theme I hear from small IT teams:

“Our SQL Server is in the cloud now. We don’t need a DBA.”

Not quite.

When you’re running SQL Server on cloud virtual machines like Azure SQL VM or AWS EC2, you’re still responsible for most of the same challenges you faced on-prem. The difference? Now you’re paying cloud rates for the same headaches, though you do shed the hardware and some of the other infrastructure hassles.

Let’s bust some of the top myths.

Cloud VM Myths That Can Hurt Your SQL Server

Myth 1: The Cloud Auto-Scales SQL Server for Me

  • Cloud VMs don’t automatically adjust CPU, memory, or storage. If you don’t right-size your SQL Server, you may be paying too much or running into performance bottlenecks.
  • Even with the right mix of vCPUs and memory, your disks may still be throttled on IOPS and throughput.
  • The VM size needs to be in the right family as well. For database workloads, look for “Memory Optimized” options, especially for reporting workloads.

 

Myth 2: Cloud Patching Means I’m Always Safe

  • Azure and AWS offer patching options for the O/S, but they don’t know your business hours or your workloads. A patch could restart your SQL Server at the worst possible time.
  • Azure (and AWS?) will not auto-patch your SQL Server unless you turn it on via the SQL Server IaaS Agent extension. Is this really a good idea for your system? Are patches automatically supported by your software vendors?
    • The IaaS extension will only apply “Important or Critical” patches.
  • As always, test all patches before applying to production.

 

Myth 3: Backups are Handled Automatically in the Cloud

  • Cloud VMs give you the option to back up SQL Server. They don’t configure backups or validate them for your RTO/RPO needs.
  • A DBA ensures backups run properly, are tested, and can meet your recovery goals.
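
If you inherit a cloud VM and want to know where you stand, msdb’s backup history is the first place to look. A minimal sketch (assumes standard msdb history retention; the thresholds you alert on are yours to set):

```sql
-- Most recent backup of each type, per database
-- D = Full, I = Differential, L = Log
SELECT d.name AS database_name,
       bs.type AS backup_type,
       MAX(bs.backup_finish_date) AS last_backup_finish
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset bs
       ON bs.database_name = d.name
WHERE d.name <> 'tempdb'
GROUP BY d.name, bs.type
ORDER BY d.name, bs.type;
```

A NULL last_backup_finish means no backup of that type is recorded at all.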

 

Myth 4: The Cloud Handles SQL Performance

  • The cloud gives you CPU, memory, and storage but it doesn’t tune your indexes, optimize queries, or troubleshoot blocking.
    • Some level of auto-tuning exists in the PaaS offering (Azure SQL DB), but I’ve seen it go hilariously wrong.
  • A DBA does that. Without them, you may see slow performance over time, higher costs, and user/customer frustration.

 

Myth 5: SQL Server Licensing is Simpler in the Cloud

  • Simple? Maybe. Cheaper? Probably not, even if you get it right.
  • Cloud billing for SQL Server licensing can be complex, and mistakes add up fast.
  • Overpaying for cores “just in case” is a silly way to spend. Start smaller and add resources, better disks, etc. as you determine your actual workload.
  • This is one of the key reasons Memory Optimized VMs exist: VMs are priced based more on vCPU count than anything else. Get that core count down by having a DBA tune your system periodically.

 

The Bottom Line:

SQL Server in the cloud can be a game changer. Wandering into the vendor’s portal or dashboard and blindly picking “something” is not the best approach. Get input from independent cloud experts and have a DBA review things from time to time.


Could Your SQL Server Survive a Failure Without Major Downtime?

Take the Quiz!

Find out how bullet-proof your DR plan really is!


 

SQL TidBits:

Check your SQL Server uptime:

SELECT 
   sqlserver_start_time 
FROM 
   sys.dm_os_sys_info;

Thanks for reading!

–Kevin

Filed Under: AWS, Azure, Cloud, SQL Tagged With: syndicated

SQL Server Agent Jobs: Set It and Forget It?

June 11, 2025 by Kevin3NF Leave a Comment

Useful tips to make your jobs more stable

Too many IT teams treat SQL Server Agent jobs like a coffee timer: “Set it and forget it!”

Unfortunately, that mindset only works if everything else is perfect forever. Whether it’s backup jobs failing silently, index maintenance running on the wrong replica, or nobody getting alerts when things break, unattended SQL Agent jobs are one of the sneakiest ways to rack up technical debt. Let’s dig into what DBAs and non-DBAs alike need to keep an eye on to avoid job-related headaches.

SQL Agent Jobs Are Not “Fire and Forget”

Just because a job is scheduled doesn’t mean it’s working.
Agent jobs can fail, silently skip steps (especially when new steps are added), hang indefinitely, or run at the wrong time (local vs. UTC), and nobody notices until something downstream explodes.

  • Why Job Monitoring Matters
    • An ETL job that fails silently doesn’t just impact reports – it can compromise business decisions. Visibility is key.
    • CheckDB job failing due to corruption found? That needs IMMEDIATE attention.
  • Common Signs of Neglect:
    • Look for jobs with old failure messages, mismatched schedules, or jobs that haven’t run successfully in days or weeks.
  • Real-world Examples
    • One client had a nightly report load job fail every third Wednesday because of a bad parameter. Nobody noticed for months until the CFO asked why sales numbers looked off.
    • Differential backup jobs were failing because they ran on an AG secondary. DAYS of potential data loss.

Job Ownership and Permissions

SQL Agent jobs need a stable, valid owner. If that account gets disabled or removed, some job steps might fail, as SQL Server checks for valid ownership every time a job runs. The job owner or team name should be in the job description field.

  • Default Ownership Pitfalls
    Too often, developers or DBAs create jobs under their own accounts (this is default behavior). When those folks leave the company, chaos follows.
  • Best Practice:
    Assign ownership to a dedicated Agent service account or ‘sa’, depending on your security policy.
  • What Can Go Wrong?
    DBA leaves the company (even on the best of terms), sysadmin disables her account, BLAM! SQL Agent jobs start failing all over the place the next time they run. Probably while you are at your kid’s soccer game.
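
One way to catch this before the soccer game: list every job whose owner isn’t your approved account. A hedged sketch (‘sa’ here stands in for whatever your security policy dictates):

```sql
-- Agent jobs owned by individual accounts instead of the approved owner
SELECT j.name AS job_name,
       SUSER_SNAME(j.owner_sid) AS job_owner,
       j.enabled
FROM msdb.dbo.sysjobs j
WHERE SUSER_SNAME(j.owner_sid) <> 'sa'   -- or your dedicated Agent service account
ORDER BY job_owner, job_name;
```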

Logging and Notifications: Built-In but Underused

SQL Agent offers good logging and alerting options. You just have to turn them on.

  • Enable Job History and Step Logging
    Make sure each step writes output so you can diagnose problems quickly. Even the bare minimum “Log to Table” in the job step Advanced menu is better than nothing.  Watch out for the “Append output to existing entry in table” option – that can blow out msdb in a hurry.
  • Set Up Alerts for Failures
    Database Mail + an Operator = Quick alerts when something breaks. Useless if the emails go to the wrong person or team though.
  • Pro Tip
    Use job steps that retry on failure with brief wait times. And yes, that includes your t-log backup jobs. Check the job logic so you don’t accidentally repeat something that worked.
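
Retries are configured per job step. A sketch using msdb’s stored procedure (the job name is hypothetical; adjust attempts and interval to your tolerance):

```sql
-- Retry step 1 up to 3 times, waiting 2 minutes between attempts
EXEC msdb.dbo.sp_update_jobstep
    @job_name       = N'TLog Backup - All Databases',  -- hypothetical job name
    @step_id        = 1,
    @retry_attempts = 3,
    @retry_interval = 2;   -- minutes
```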

AG-Aware Jobs: Run Only on the Primary

In an Availability Group, not all jobs belong on all nodes.

  • Why This Matters
    Write-heavy jobs, like index rebuilds, update stats, or differential backups must run on the Primary replica.
  • Implementation Tip
    Add a check step at the beginning of critical jobs. If not on the Primary, gracefully exit.
  • How to Check If You’re on the Primary
    Use sys.fn_hadr_is_primary_replica in a preliminary job step to verify before proceeding.

    -- Check if the local replica is the Primary
    IF NOT EXISTS (
        SELECT 1
        FROM sys.dm_hadr_availability_replica_states ars
        JOIN sys.availability_replicas ar ON ars.replica_id = ar.replica_id
        WHERE ars.role_desc = 'PRIMARY'
          AND ar.replica_server_name = @@SERVERNAME
    )
    BEGIN
        PRINT 'This is not the primary replica. Skipping job step.';
        RETURN;
    END

    -- If we get here, we're on the Primary
    PRINT 'This is the Primary replica. Continuing job logic...';

    BEGIN
        -- Your actual job logic goes here
        -- Example:
        SELECT GETDATE();
    END

Don’t Ignore Long-Running or Hanging Jobs

A job that’s still running isn’t necessarily still working.

  • Symptoms of a Hung Job
    No errors, but the runtime is wildly longer than average.
  • Root Causes
    Blocking, deadlocks, external API hangs, or backup file locks can all cause jobs to stall.
  • Solutions
    Implement a max run time policy per job or use third-party monitoring to alert on outliers.


SQL TidBits:

-- This is code from the internet, test before using
-- https://www.sqlservercentral.com/forums/topic/script-to-find-long-running-job

SELECT jobs.name AS [Job_Name]
    , CONVERT(VARCHAR(30),ja.start_execution_date,121) AS [Start_execution_date]
    , ja.stop_execution_date
    , [Duration_secs]
FROM msdb.dbo.sysjobs jobs
LEFT JOIN 
   (
       SELECT *
       FROM msdb.dbo.sysjobactivity
       WHERE session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions) AND
           start_execution_date IS NOT NULL AND
           stop_execution_date IS NULL
   ) AS ja ON ja.job_id = jobs.job_id
CROSS APPLY 
   (
       SELECT DATEDIFF(ss,ja.start_execution_date,GETDATE()) AS [Duration_secs]
   ) AS ca1
WHERE 1=1
    and jobs.name Not in ('test') -- set this up to include/exclude jobs that should/should not be alerted on
    AND Duration_secs > 300 -- 5 minutes

Thanks for reading!

–Kevin

Filed Under: Agent, SQL Tagged With: syndicated

SQL Server TempDB: The Most Overlooked Bottleneck

June 4, 2025 by Kevin3NF Leave a Comment

Configuration, Performance, and Unnecessary Usage

TempDB is the SQL Server equivalent of a junk drawer – everyone uses it, nobody monitors it, and eventually it becomes a bottleneck you can’t ignore.

Whether it’s poorly configured from the start or getting hammered by bad execution plans, TempDB often becomes the silent killer of performance. The good news? A few targeted changes can make a big impact.

TempDB Configuration:

  • File Count: “one file per core” is outdated. Only go past 8 if you have verified PFS or SGAM contention exists
  • File Size and Growth: Pre-size your files to avoid autogrow OR ensure Instant File Initialization is enabled
  • Trace Flags 1117 & 1118: relevant for SQL Server 2014 and older, not in newer versions
  • Placement: Should TempDB be on its own disk? Ideally, yes. Fast disk, Fastest RAID, separate controller/path to the drives
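
Pre-sizing is a one-time ALTER DATABASE per file. A sketch assuming the default logical file names (check sys.master_files for yours) and sizes picked purely for illustration:

```sql
-- Pre-size tempdb data and log files; adjust SIZE to your workload
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 4096MB, FILEGROWTH = 512MB);
```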

 

Identifying TempDB Bottlenecks and Stalls

  • Read/Write Latency: Query sys.dm_io_virtual_file_stats to see stats on ms/read and ms/write. Write is usually worse.
  • Spills: Look for queries that are spilling out to tempdb due to poorly estimated memory grants
  • Monitoring Tools: I use EXEC sp_BlitzCache @SortOrder = 'spills' for this (part of the First Responder Kit)
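
The latency check above can be a one-liner against the file stats DMV. A sketch scoped to tempdb (stats accumulate since the last restart):

```sql
-- Average ms per read/write for tempdb files
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS file_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_ms_per_read,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_ms_per_write
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) vfs
JOIN sys.master_files mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;
```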

 

Reducing Traffic TO TempDB

  • Avoid Unnecessary Sorts and Spills: Bad execution plans are a TempDB killer, and unneeded sorts make it worse
  • Test the Use of CTEs, Table Variables, and #Temp Tables: Test your code with the different types of temp objects rather than just blindly using the easiest one. Nobody notices the difference at 10 rows. Everyone notices at 10 million!
  • Version Store Traffic: If you are using RCSI on multiple databases, one transaction left open can blow out tempdb.
SELECT
   DB_NAME(database_id)
   , reserved_page_count
   , reserved_space_kb/1024/1024 AS [GB Used]
FROM
   sys.dm_tran_version_store_space_usage
ORDER BY
   reserved_space_kb DESC;
  • Cursors: Some (all?) cursors live in tempdb. At some point a cursor gets too big to be efficient and you’ll want to re-code for speed, as well as tempdb space

 

TempDB Best Practices and Quick Wins

  • Multiple Files with Equal Size: Equal size is key. AUTOGROW_ALL_FILES is on by default for tempdb since SQL Server 2016.
  • Instant File Initialization: Saves time on growth events.
  • Regular Review: TempDB isn’t “set it and forget it”. Schedule health checks, just like for the rest of your SQL Server

 

Bonus Content: What’s in your tempdb right now?

-- Determining the amount of free space in tempdb
SELECT SUM(unallocated_extent_page_count) AS [tempdb free pages],
  (SUM(unallocated_extent_page_count)*1.0/128) AS [tempdb free space in MB]
FROM tempdb.sys.dm_db_file_space_usage;

-- Determining the amount of space used by the version store
SELECT SUM(version_store_reserved_page_count) AS [version store pages used],
  (SUM(version_store_reserved_page_count)*1.0/128) AS [version store space in MB]
FROM tempdb.sys.dm_db_file_space_usage;

-- Determining the amount of space used by internal objects
SELECT SUM(internal_object_reserved_page_count) AS [internal object pages used],
  (SUM(internal_object_reserved_page_count)*1.0/128) AS [internal object space in MB]
FROM tempdb.sys.dm_db_file_space_usage;

-- Determining the amount of space used by user objects
SELECT SUM(user_object_reserved_page_count) AS [user object pages used],
  (SUM(user_object_reserved_page_count)*1.0/128) AS [user object space in MB]
FROM tempdb.sys.dm_db_file_space_usage;

Thanks for reading!

 

Filed Under: Configuration, Performance, SQL Tagged With: syndicated

SQL Server Logins vs. Users: Why You Really Need to Know the Difference

May 28, 2025 by Kevin3NF Leave a Comment

The Two-Layer Model Explained

“We added them to the database, but they still can’t connect.”

Sound familiar? That’s the kind of confusion SQL Server’s two-layer security model creates when people don’t understand the difference between a login and a user.

Let’s clear that up, because getting it wrong causes broken access, orphaned users, and frustrated help desk calls.

Two Layers: One to Get In, One to Do Anything

SQL Server controls access at two separate levels:

  1. Login (Server-Level)
  • Exists at the SQL Server instance level
  • Allows someone or something to connect to the server
  • Can be:
    • SQL login (username + password inside SQL)
    • Windows login (domain account)
    • Windows group (preferred for manageability)

But a login alone doesn’t grant access to databases.

  2. User (Database-Level)
  • Exists inside a specific database
  • Grants access to that database and defines roles/permissions
  • Must be mapped to a login, unless you’re using a contained database

So:
Login = authentication (Can you connect?)
User = authorization (What can you do inside the database?)
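
The two layers map to two CREATE statements. A sketch with hypothetical names (CONTOSO\AppTeam and MyDatabase are placeholders):

```sql
-- Layer 1: server-level authentication
CREATE LOGIN [CONTOSO\AppTeam] FROM WINDOWS;

-- Layer 2: database-level authorization
USE MyDatabase;
GO
CREATE USER [CONTOSO\AppTeam] FOR LOGIN [CONTOSO\AppTeam];
ALTER ROLE db_datareader ADD MEMBER [CONTOSO\AppTeam];
```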

What Happens When They Don’t Match?

Here’s a common scenario:

You restore a database from Prod to Test. A user exists in the database, but the matching login doesn’t exist on the new server.

Result? You’ve got an orphaned user.

SQL Server sees the user in the database, but there’s no login at the instance level with the same SID. The connection fails—even though everything “looks right.”

How to Fix Orphaned Users

Run this at the database level to find all orphaned users:

EXEC sp_change_users_login 'Report';

(Deprecated, but still handy in older versions.)

For newer systems, use:

USE MyDatabase;
GO

SELECT dp.name AS UserName
FROM sys.database_principals dp
LEFT JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE dp.type IN ('S', 'U') AND sp.sid IS NULL;

Use this command to remap a user to an existing login:

ALTER USER [username] WITH LOGIN = [loginname];

Or…this is a process I’ve been using for 20 years:

Transfer logins and passwords between instances – SQL Server | Microsoft Learn

Why You Should Care

  • Granting a login access to the server doesn’t give them database access.
  • Creating a user in a database without a login won’t allow connections.
  • Orphaned users silently break apps after restores or migrations.
  • Unused users or logins clutter security and increase audit risk.

 

Best Practices

  • Use AD groups as logins whenever possible. Much easier to manage access for teams.
  • Don’t grant permissions to logins directly. Always go through database users and roles.
  • When dropping users or logins, clean up both sides: DROP USER and DROP LOGIN.
  • Document mappings so you know which logins tie to which databases.

 

Database Emergencies Cost Thousands Per Minute

When SQL Server fails, you need expert help fast.
For a limited time: First month of Pocket DBA® FREE with a 6-month plan.

• Senior DBAs available 24/7 • 1-hour SLA • Proactive problem prevention

Don’t wait for a disaster. Be prepared.
CLAIM FREE MONTH
Limited availability.


Thanks for reading!

–Kevin

Filed Under: EntryLevel, Security, SQL Tagged With: syndicated

SQL Server Login Types: Knowing the Difference Really Matters

May 21, 2025 by Kevin3NF Leave a Comment

If your SQL Server has both SQL and Windows logins and you’re not sure why, you’re not alone. Many IT teams without a dedicated DBA bump into this sooner or later. And sometimes…things break just because someone checked the wrong box during setup.

Let’s break it down.

Two Login Types, Two Worlds

SQL Server supports two authentication modes:

Windows Authentication

  • Uses your Active Directory account (individual or group) to connect.
  • Most secure and manageable in domain environments.
  • Passwords are handled by Windows, so no extra password to manage in SQL Server.

SQL Server Authentication

  • Uses a separate username and password stored inside SQL Server.
  • Necessary for non-domain users, apps, or services outside your network.
  • Password policies are optional (and often ignored… oof).

 

Mixed Mode vs Windows-Only Mode

Your SQL Server can be set to:

  • Windows-only Mode (no SQL logins allowed), or
  • Mixed Mode (allows both types)

Most installs use Mixed Mode even when they don’t need to, because “that’s how it was installed” or “some app needed a SQL login once.”

Tip: Use Windows authentication whenever possible. It’s integrated, auditable, and far more secure.
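
Not sure which mode you’re in? SERVERPROPERTY will tell you:

```sql
SELECT CASE SERVERPROPERTY('IsIntegratedSecurityOnly')
           WHEN 1 THEN 'Windows-only Mode'
           ELSE 'Mixed Mode'
       END AS authentication_mode;
```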

Common Pitfalls

  • “sa” is enabled and still using the default password (Yes, we’ve seen this. More than once.)
  • SQL logins don’t expire or require password complexity, unless you set that option.
  • Orphaned users (exists in the database but not on the server).
  • Confusion about who can access what—especially after domain changes or user departures.

 

The Bottom Line:

  • Audit your logins and users regularly. Know what’s there.
  • Disable “sa” or rename it.
  • Use Windows logins/groups for real people whenever possible.
  • Use SQL logins only when Windows auth isn’t an option and treat them like sensitive credentials.
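
Disabling or renaming “sa” is a one-liner each (the new name below is just an example):

```sql
ALTER LOGIN sa DISABLE;

-- or rename it (hypothetical new name):
ALTER LOGIN sa WITH NAME = [srv_admin_disabled];
```

Test application connections first; anything hard-coded to “sa” will break.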

Security breaches often start with a misconfigured login. A little attention now saves a lot of pain later.




Thanks for reading!

–Kevin

Filed Under: Security, SQL Tagged With: syndicated

Stop Wasting Time Rebuilding Indexes Every Night

May 14, 2025 by Kevin3NF Leave a Comment

Too many IT teams run index rebuild jobs on autopilot. Every night. Every index. No questions asked.

Here’s the truth: if you’re doing this daily on indexes smaller than 10,000 pages, you might be chewing up CPU, bloating your logs, and annoying your users for zero gain.

Let’s fix that.

Internal vs. External fragmentation explained

SQL Server tracks two kinds of fragmentation:

  • Internal = too much free space inside data pages
  • External = out-of-order pages on disk. The logical sequence doesn’t match physical placement

Think:

  • Internal = wasted space
  • External = wasted reads

 

Does External Fragmentation Even Matter on SSDs?

Not really*.

On spinning disks, scattered reads could mean lots of tiny movements by the drive head = slower performance.

But on SSDs? Everything is basically a memory chip, so page order has minimal impact on read speed. Yet SQL Server still reports external fragmentation like it’s 2008.

That’s why blindly reacting to fragmentation numbers is often misleading.

* Not to be confused with internal

Don’t Worry about % – look at the page counts

My rule of thumb? If it’s under 10,000 pages, skip it.

You won’t see a meaningful performance benefit, and you’re just burning maintenance time and log space.

Microsoft’s default page count threshold for index maintenance is 1,000 8KB pages: 1,000 × 8KB ≈ 8MB. That is tiny.
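
To see which indexes actually clear the bar, filter the physical stats DMV by page count first. A sketch for the current database (the 10,000-page cutoff is the rule of thumb above):

```sql
-- Fragmentation for indexes big enough to matter
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.page_count,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count >= 10000
ORDER BY ips.avg_fragmentation_in_percent DESC;
```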

Rebuild vs. Reorganize

Rebuild = drops and recreates the index (resource-heavy, spikes log use, can block unless ONLINE)

Reorganize = shuffles leaf-level pages gently (lighter, but less effective)

Here’s a smarter guideline than just looking at fragmentation %:

 

Don’t overlook statistics! They matter more than fragmentation in most cases.

SQL Server uses statistics to estimate how many rows a query will return. These estimates help the optimizer decide whether to use an index, do a table scan, join methods, and more.

If the stats are outdated or inaccurate, SQL Server can choose a bad query plan, which can lead to slow performance, excessive reads, or blocked processes.

That’s why I recommend updating stats daily in most transactional systems. It has a much higher ROI than blindly rebuilding indexes.

I usually let SQL Server start with its default sampling algorithm, then adjust if needed, especially if your data distribution is skewed or volatile.
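
Updating stats with default sampling is simple enough to schedule nightly. A sketch (dbo.Orders is a hypothetical table name):

```sql
-- One table, default sampling:
UPDATE STATISTICS dbo.Orders;

-- Or the whole database (skips statistics with no row changes):
EXEC sp_updatestats;
```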

Recommended Tools for Smarter Index Maintenance

Use tools that let you filter by both fragmentation level and page count, like:

  • Ola Hallengren’s IndexOptimize (this can be tweaked to do Stats Only)

These give you control rather than a nightly sledgehammer.

The Bottom Line

Stop defaulting to nightly index rebuilds.
If you’re on SSDs and under 10K pages, you’re solving a problem that doesn’t exist — and possibly creating new ones (like log growth and blocking).

Be strategic:

  • Focus on meaningful fragmentation
  • Consider index size before acting
  • Separate stats updates from index maintenance

Smarter maintenance = better performance with fewer headaches.




Thanks for reading!

–Kevin

Filed Under: Index, SQL Tagged With: syndicated
