
DallasDBAs.com

SQL Server Database Consulting


SQL Server Logins vs. Users: Why You Really Need to Know the Difference

May 28, 2025 by Kevin3NF

The Two-Layer Model Explained

“We added them to the database, but they still can’t connect.”

Sound familiar? That’s the kind of confusion SQL Server’s two-layer security model creates when people don’t understand the difference between a login and a user.

Let’s clear that up, because getting it wrong causes broken access, orphaned users, and frustrated help desk calls.

Two Layers: One to Get In, One to Do Anything

SQL Server controls access at two separate levels:

  1. Login (Server-Level)
  • Exists at the SQL Server instance level
  • Allows someone or something to connect to the server
  • Can be:
    • SQL login (username + password inside SQL)
    • Windows login (domain account)
    • Windows group (preferred for manageability)

But a login alone doesn’t grant access to databases.

  2. User (Database-Level)
  • Exists inside a specific database
  • Grants access to that database and defines roles/permissions
  • Must be mapped to a login, unless you’re using a contained database

So:
Login = authentication (Can you connect?)
User = authorization (What can you do inside the database?)
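In code, the two layers are two separate CREATE statements. A minimal sketch, using hypothetical names (MyAppLogin, MyDatabase):

```sql
-- Server level: authentication ("Can you connect?")
CREATE LOGIN MyAppLogin WITH PASSWORD = 'UseAStrongPasswordHere!1';

-- Database level: authorization ("What can you do inside?")
USE MyDatabase;
CREATE USER MyAppLogin FOR LOGIN MyAppLogin;

-- Grant permissions through the user (ideally via a role), never the login
ALTER ROLE db_datareader ADD MEMBER MyAppLogin;
```

Skip the CREATE LOGIN step and the account can't connect; skip the CREATE USER step and it can connect but can't touch the database.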

What Happens When They Don’t Match?

Here’s a common scenario:

You restore a database from Prod to Test. A user exists in the database, but the matching login doesn’t exist on the new server.

Result? You’ve got an orphaned user.

SQL Server sees the user in the database, but there’s no login at the instance level with the same SID. The connection fails—even though everything “looks right.”

How to Fix Orphaned Users

Run this at the database level to find all orphaned users:

EXEC sp_change_users_login 'Report';

(Deprecated, but still handy in older versions.)

For newer systems, use:

USE MyDatabase;
GO

SELECT dp.name AS UserName
FROM sys.database_principals dp
LEFT JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE dp.type IN ('S', 'U') AND sp.sid IS NULL;

Use this command to remap a user to an existing login:

ALTER USER [username] WITH LOGIN = [loginname];

Or…this is a process I’ve been using for 20 years:

Transfer logins and passwords between instances – SQL Server | Microsoft Learn

Why You Should Care

  • Granting a login access to the server doesn’t give them database access.
  • Creating a user in a database without a login won’t allow connections.
  • Orphaned users silently break apps after restores or migrations.
  • Unused users or logins clutter security and increase audit risk.

 

Best Practices

  • Use AD groups as logins whenever possible. Much easier to manage access for teams.
  • Don’t grant permissions to logins directly. Always go through database users and roles.
  • When dropping users or logins, clean up both sides: DROP USER and DROP LOGIN.
  • Document mappings so you know which logins tie to which databases.
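The "clean up both sides" step can be sketched like this, with OldEmployee standing in for the departing account:

```sql
-- Database side first: drop the user
USE MyDatabase;
DROP USER IF EXISTS OldEmployee;

-- Then the server side: drop the login (guarded, in case it's already gone)
IF EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'OldEmployee')
    DROP LOGIN OldEmployee;
```

If the login was mapped into several databases, repeat the DROP USER in each one before dropping the login.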

 

Database Emergencies Cost Thousands Per Minute

When SQL Server fails, you need expert help fast.
For a limited time: First month of Pocket DBA® FREE with a 6-month plan.

• Senior DBAs available 24/7 • 1-hour SLA • Proactive problem prevention

Don’t wait for a disaster. Be prepared.
CLAIM FREE MONTH
Limited availability.


Thanks for reading!

–Kevin

Filed Under: EntryLevel, Security, SQL Tagged With: syndicated

SQL Server Login Types: Knowing the Difference Really Matters

May 21, 2025 by Kevin3NF

If your SQL Server has both SQL and Windows logins and you’re not sure why, you’re not alone. Many IT teams without a dedicated DBA bump into this sooner or later. And sometimes…things break just because someone checked the wrong box during setup.

Let’s break it down.

Two Login Types, Two Worlds

SQL Server supports two authentication modes:

Windows Authentication

  • Uses your Active Directory account (individual or group) to connect.
  • Most secure and manageable in domain environments.
  • Passwords are handled by Windows, so no extra password to manage in SQL Server.

SQL Server Authentication

  • Uses a separate username and password stored inside SQL Server.
  • Necessary for non-domain users, apps, or services outside your network.
  • Password policies are optional (and often ignored… oof).

 

Mixed Mode vs Windows-Only Mode

Your SQL Server can be set to:

  • Windows-only Mode (no SQL logins allowed), or
  • Mixed Mode (allows both types)

Most installs use Mixed Mode even when they don’t need to, because “that’s how it was installed” or “some app needed a SQL login once.”

Tip: Use Windows authentication whenever possible. It’s integrated, auditable, and far more secure.
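Not sure which mode your server is in? One quick check:

```sql
-- 1 = Windows-only authentication, 0 = Mixed Mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsOnlyMode;
```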

Common Pitfalls

  • “sa” is enabled and still using the default password (Yes, we’ve seen this. More than once.)
  • SQL logins don’t expire or require password complexity, unless you set that option.
  • Orphaned users (exists in the database but not on the server).
  • Confusion about who can access what—especially after domain changes or user departures.

 

The Bottom Line:

  • Audit your logins and users regularly. Know what’s there.
  • Disable “sa” or rename it.
  • Use Windows logins/groups for real people whenever possible.
  • Use SQL logins only when Windows auth isn’t an option and treat them like sensitive credentials.
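For the “sa” item above, both options are one-liners (the new name here is just a placeholder):

```sql
-- Disable it outright...
ALTER LOGIN sa DISABLE;

-- ...or rename it, so scripted attacks aimed at 'sa' hit nothing
ALTER LOGIN sa WITH NAME = [NotSA_Placeholder];
```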

Security breaches often start with a misconfigured login. A little attention now saves a lot of pain later.




Thanks for reading!

–Kevin

Filed Under: Security, SQL Tagged With: syndicated

Stop Wasting Time Rebuilding Indexes Every Night

May 14, 2025 by Kevin3NF

Too many IT teams run index rebuild jobs on autopilot. Every night. Every index. No questions asked.

Here’s the truth: if you’re doing this daily on indexes smaller than 10,000 pages, you might be chewing up CPU, bloating your logs, and annoying your users for zero gain.

Let’s fix that.

Internal vs. External fragmentation explained

SQL Server tracks two kinds of fragmentation:

  • Internal = too much free space inside data pages
  • External = out-of-order pages on disk. The logical sequence doesn’t match physical placement

Think:

  • Internal = wasted space
  • External = wasted reads

 

Does External Fragmentation Even Matter on SSDs?

Not really*.

On spinning disks, scattered reads could mean lots of tiny movements by the drive head = slower performance.

But on SSDs? Everything is basically a memory chip, so page order has minimal impact on read speed. Yet SQL Server still reports external fragmentation like it’s 2008.

That’s why blindly reacting to fragmentation numbers is often misleading.

* Not to be confused with internal fragmentation, which wastes space (and buffer pool memory) regardless of storage type.

Don’t Worry About the % – Look at the Page Counts

My rule of thumb? If it’s under 10,000 pages, skip it.

You won’t see a meaningful performance benefit, and you’re just burning maintenance time and log space.

Microsoft’s default page-count threshold for index maintenance is 1,000 8 KB pages. 8 KB x 1,000 ≈ 8 MB. That is tiny.
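To see which indexes actually clear the bar, a query along these lines (filtering on the 10,000-page rule of thumb) works in any database:

```sql
-- List indexes big enough and fragmented enough to care about
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name                     AS IndexName,
       ips.page_count,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.page_count >= 10000          -- skip the tiny ones
ORDER BY ips.avg_fragmentation_in_percent DESC;
```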

Rebuild vs. Reorganize

Rebuild = drops and recreates the index (resource-heavy, spikes log use, can block unless ONLINE)

Reorganize = shuffles leaf-level pages gently (lighter, but less effective)
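Both are a single ALTER INDEX statement; the index and table names here are hypothetical:

```sql
-- Heavy option: full rebuild (ONLINE = ON needs Enterprise Edition,
-- but avoids most blocking)
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (ONLINE = ON);

-- Light option: reorganize (always online, but less thorough)
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;
```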

Here’s a smarter guideline than just looking at fragmentation %:

  • Under 10,000 pages: skip it entirely
  • 10,000+ pages with moderate fragmentation (roughly 5–30%): reorganize
  • 10,000+ pages with heavy fragmentation (over 30%): rebuild, ideally ONLINE

Don’t overlook statistics! They matter more than fragmentation in most cases.

SQL Server uses statistics to estimate how many rows a query will return. These estimates help the optimizer decide whether to use an index or a table scan, which join methods to use, and more.

If the stats are outdated or inaccurate, SQL Server can choose a bad query plan, which can lead to slow performance, excessive reads, or blocked processes.

That’s why I recommend updating stats daily in most transactional systems. It has a much higher ROI than blindly rebuilding indexes.

I usually let SQL Server start with its default sampling algorithm, then adjust if needed, especially if your data distribution is skewed or volatile.
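A daily stats job can start as simply as this (the table name and sample size are just examples):

```sql
-- Default sampling across the whole database
EXEC sp_updatestats;

-- Or target a specific table with a bigger sample when distribution is skewed
UPDATE STATISTICS dbo.Orders WITH SAMPLE 50 PERCENT;
```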

Recommended Tools for Smarter Index Maintenance

Use tools that let you filter by both fragmentation level and page count, like:

  • Ola Hallengren’s IndexOptimize (this can be tweaked to do Stats Only)

Tools like these give you control rather than a nightly sledgehammer.

The Bottom Line

Stop defaulting to nightly index rebuilds.
If you’re on SSDs and under 10K pages, you’re solving a problem that doesn’t exist — and possibly creating new ones (like log growth and blocking).

Be strategic:

  • Focus on meaningful fragmentation
  • Consider index size before acting
  • Separate stats updates from index maintenance

Smarter maintenance = better performance with fewer headaches.




Thanks for reading!

–Kevin

Filed Under: Indexing Strategies, SQL Tagged With: syndicated

Your SQL Server Is Whispering…Are You Listening?

May 7, 2025 by Kevin3NF

Early Warnings of a Meltdown (and How to Catch Them)

Most SQL Server crashes don’t come out of nowhere.
They leave breadcrumbs – red flags that something’s not right. The problem? If you don’t know where to look, you miss the signs…until it’s 2am and your CEO’s calling.

Let’s talk about how to listen for those whispers before they turn into full-blown alarms.

The Canary in the Query Mine

Here are a few subtle warning signs I see often before a server falls over:

  • Backup jobs start taking longer for no obvious reason.
  • TempDB grows larger than usual
  • Login failures spike
  • High CPU or long-running queries suddenly become “normal.”
  • End users QUIT complaining…because they’ve gotten used to the problem.
  • DBCC CHECKDB gets skipped because “it blocks things”

 

If any of these ring a bell, your SQL Server might be trying to tell you something.

Logs You Shouldn’t Ignore

Quick reality check: most people never check the logs until the server is on fire, or after it has already crashed.

  • SQL Server Errorlog:
    • Corruption messages (Severity 24 and 25).
      • Run this: SELECT * FROM msdb.dbo.suspect_pages
    • Failed login attempts (possible brute force or misconfigurations).
      • Some of these might be from vulnerability testing tools and can be ignored. Might.
    • IO warnings that your storage is too slow or unreliable.
    • How to find the SQL Server ErrorLog on disk
  • Agent job failures that are just quietly failing in the background.
  • Windows Event logs
  • Cluster logs

Pro tip: Set a calendar reminder to review logs once a week. Seriously—it takes 5 minutes and could save your bacon.

 

DIY Monitoring: No Budget? No Excuse.

You don’t need fancy tools to catch problems early. Here are a few dead-simple tricks:

  • SQL Agent job that checks disk space and emails you when it drops below 10%.
  • Query to track the longest-running queries over time—schedule it to run daily.
  • Custom SQL alerts for CPU over 90% for more than X minutes.
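The long-running-query tracker can start with the plan cache, no extra tooling required:

```sql
-- Top 10 cached statements by average elapsed time
SELECT TOP (10)
       qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1, 200) AS query_snippet
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time / qs.execution_count DESC;
```

Dump the results into a table on a daily schedule and you have a free performance baseline.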

 

Set Up Built-In SQL Alerts (Takes 5 Minutes)

SQL Server has a native Alerting feature. No third-party tools required.

Here’s how to set up critical alerts for things you never want to miss:

--Step 1:

EXEC msdb.dbo.sp_add_operator 
    @name = N'DBA_Alerts', 
    @enabled = 1, 
    @email_address = N'[email protected]';
-- This requires Database Mail and an SMTP server

--Step 2
-- Severity 19-25 (from “Something’s wrong” to “We’re toast”)
-- Change and run for each severity you want to track. Sev 20 may give false positives

EXEC msdb.dbo.sp_add_alert 
    @name = N'Severity 19', 
    @message_id = 0, 
    @severity = 19, 
    @notification_message = N'Severity 19 error occurred', 
    @job_id = NULL, 
    @enabled = 1, 
    @delay_between_responses = 300, 
    @include_event_description_in = 1, 
    @database_name = N'master', 
    @operator_name = N'DBA_Alerts', 
    @notification_method = 1;

Repeat this for severity levels 19 to 25, or use a script to loop through them.
Bonus: Create alerts for SQL error numbers like:

  • 823, 824, 825 – signs of potential data corruption
  • 9002 – log file full
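Error-number alerts use the same procedure, just keyed on @message_id instead of @severity (severity must be 0 when a message ID is supplied). A sketch for 824:

```sql
EXEC msdb.dbo.sp_add_alert
    @name = N'Error 824 - corruption',
    @message_id = 824,
    @severity = 0,
    @enabled = 1,
    @delay_between_responses = 300,
    @include_event_description_in = 1;

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Error 824 - corruption',
    @operator_name = N'DBA_Alerts',
    @notification_method = 1;   -- 1 = email
```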

The Bottom Line

SQL Server doesn’t generally go down without a fight. But it does give off signals.
Learn to listen, and you’ll dramatically reduce those “why is the website down?” calls.




Thanks for reading!

–Kevin

Filed Under: Emergency & Troubleshooting, SQL, Troubleshooting Tagged With: syndicated

SQL Server Budgets: How to Justify Cost to Management

May 1, 2025 by Kevin3NF

Ever tried to request SQL Server funding from the CFO?

Your systems, your data, your customer experience – they all rely on that “invisible” database engine humming along behind the scenes. And if you’re responsible for keeping it running, you need the budget to do it right.

Here’s how to make your case without getting buried in tech jargon or glazed-over stares.

Speak Their Language: Business Impact

Skip the CPU counts and version numbers. Start with outcomes:

  • “We’re running SQL Server 2014. It’s out of mainstream support, which means we’re vulnerable to security issues, and Microsoft won’t help us fix them.”
  • “We’ve hit memory and CPU limits on Standard Edition. That’s why reports take forever, and our systems slow down during busy hours.”
  • “We’re storing critical data on spinning hard drives. Replacing them with SSDs will improve performance significantly and cut risk of disk failure.”

Translation Tip: every technical detail should map to one of three business levers.

Focus on Risk, Cost, and Growth

Every budget conversation should touch at least one of these:

Risk Avoidance

  • Running an outdated version? That’s unsupported = risky.
  • No proper backups or DR? That’s a business continuity issue.
  • One bad disk or query can bring the whole app down.

Cost Efficiency

  • SQL Server Standard Edition: ~$3K.
  • Emergency consulting to recover a failed system: $10K–$20K.
  • SSD upgrade: one-time investment.
  • Hourly downtime across 20 staff: thousands.

Growth & Performance

  • Hitting the ceiling on CPU/memory? You’ve outgrown Standard Edition.
  • Enterprise Edition gives you scale-up options, better indexing, and more.
  • SSDs reduce query lag – users and customers notice speed.

Use Visuals or Simple Metrics

Try:

  • A chart of your SQL Server data growth over the last 3 years
  • A simple graph showing CPU/memory usage vs. Standard Edition limits
  • Pie chart showing how SQL touches Sales, Finance, HR, etc.

Anything to remind them this isn’t “just IT stuff”. It’s tied to the business’s success.

Offer Tiered Options

Sometimes, they just need a menu.

Good:

  • SSD storage upgrade
  • SQL patching and monitoring
  • Basic backup review

Better:

  • Upgrade to supported SQL Server version
  • Performance tuning
  • Regular DR tests

Best:

  • Edition upgrade (if needed)
  • SSDs, full DR plan
  • Proactive monthly health checks

 

You’re giving them choices, but you’re also giving them consequences. They pick the investment level; you just make sure they understand what each buys them.

The Bottom Line: Be Their Strategic Partner

Don’t just ask for money. Frame it as helping them sleep better at night. You’re not spending their budget… you’re protecting their business. That’s the real value of good data infrastructure.

Filed Under: SQL Tagged With: syndicated

SQL Server Index Primer

April 23, 2025 by Kevin3NF

Indexes 101: What, Why, and When?

“What Is an Index?”
I get this question a lot, especially from developers and sysadmins who’ve been handed a SQL Server and told, “Keep it running fast.” No pressure.

At a basic level, an index is a roadmap. Instead of scanning every house on the street to find the one you want, you check a list. Indexes help SQL Server find data faster – less scanning, more targeted seeking. All those pages at the back of a technical book? Those are indexes. The same idea applies to tables.

Now, not all indexes are created equal. Some are used every day. Others are…let’s say, special occasion. Let’s walk through them in order from most to least common (based on my experience).

Clustered Indexes

What it is: Defines how the data is stored on disk. SQL Server physically orders the table based on this. One per table.
Why it matters: Every seek operation relies on this index. If you don’t create one, the table is stored as an unordered “heap,” which can add time to queries.
When to use: Pretty much always, unless you’re loading massive amounts of data fast and don’t care about retrieval speed such as an ETL/ELT process.
Pro tip: Pick a column (or set) that is narrow, static, and unique (like an identity column). Even better, use a column that is frequently used to retrieve sets of data such as date for “Last month’s sales” types of reports.
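Putting that pro tip into practice, an identity column as the clustered key looks like this (the table is hypothetical):

```sql
CREATE TABLE dbo.Orders (
    OrderID   INT IDENTITY(1,1) NOT NULL,
    OrderDate DATE  NOT NULL,
    Amount    MONEY NOT NULL,
    -- Narrow, static, unique: a textbook clustering key
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);
```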

Non-Clustered Indexes

What it is: Think of these like side tables (they do add to the database size) with pointers back to the main data. You can have many per table, but too many can be an issue.
Why it matters: Great for covering queries. Especially helpful if they include only a few columns.
When to use: When the same query pattern happens often and doesn’t match the clustered key.
Watch out for: Too many non-clustered indexes = slow inserts and updates. Balance is key.

…Included Columns…

What it is: Bonus data tacked onto a non-clustered index. Doesn’t affect sort order but helps avoid expensive trips to the table to find more data.
Why it matters: Lets you “cover” more queries without bloating the main index definition.
When to use: If the query needs extra columns but they don’t need to be sorted.
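Continuing the hypothetical dbo.Orders example, this index covers a “last month’s sales” query: OrderDate drives the seek, and Amount rides along without affecting sort order:

```sql
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (Amount);   -- returned, not sorted
```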

Filtered Indexes

What it is: Indexes with WHERE clauses.
Why it matters: Small, fast, and efficient for predictable filters.
When to use: Perfect for things like IsActive = 1 or Status = ‘Pending’.
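Assuming a Status column on the hypothetical dbo.Orders table, a filtered index is just a WHERE clause on the definition:

```sql
-- Indexes only the pending rows, so it stays small and fast
CREATE NONCLUSTERED INDEX IX_Orders_Pending
ON dbo.Orders (OrderDate)
WHERE Status = 'Pending';
```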

Columnstore Indexes

What it is: Think column-based storage, not row-based. Optimized for analytics.
Why it matters: Game-changing for reporting queries across millions of rows.
When to use: Data warehouses, not your OLTP system.
Bonus: Clustered columnstore = high compression. Great for archiving.
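The archiving variant from that bonus line is a single statement (table name hypothetical):

```sql
-- Whole table stored column-wise, heavily compressed
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesHistory
ON dbo.SalesHistory;
```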

XML Indexes

What it is: Indexes specifically for querying XML data types.
Why it matters: Without them, XML queries crawl.
When to use: If you’re unlucky enough to be storing structured data as XML blobs.

Spatial Indexes

What it is: Indexes that help with geography and geometry data types.
Why it matters: SQL Server can’t efficiently process spatial queries without these.
When to use: Mapping, geolocation apps, routing engines—very niche but powerful. I’ve only seen a spatial index in production once in my career.

The Bottom Line:

Indexing is both art and science. You want just enough to make reads fly and not too many that writes crawl. Think of it like your car with just the right tire pressure—too little and you drag, too much and you lose control.

Thanks for reading!

–Kevin

 

Filed Under: Beginner, Indexing Strategies, SQL Tagged With: syndicated

