Virtual Log Files

November 24, 2020 by Kevin3NF

Today’s post is a guest article from a friend of Dallas DBAs, writer, and fantastic DBA Jules Behrens (B|L)

One common performance issue that is not well known, but should still be on your radar as a DBA, is a high number of VLFs.

Virtual Log Files are the smaller units SQL Server divides a log file (MyDatabase_log.LDF) into and does its actual work in. It allocates new VLFs every time the log file grows. Perhaps you’ve already spotted the problem – if the log file is set to grow by a tiny increment and the file ever grows very large, you may end up with thousands of tiny little VLFs, and this can slow down performance at the database level. Think of it like a room (the log file) filled with boxes (the VLFs). If you just have a few boxes, it is more efficient to figure out where something (a piece of data in the log file) is than if you have thousands of tiny boxes. (Analogy courtesy of @SQLDork)

A VLF problem is especially evident when SQL Server takes a long time to recover a database after a restart. Other symptoms may be slowness with autogrowth, log shipping, replication, and general transactional slowness – anything that touches the log file, in other words.

The best solution is prevention – size your log file to handle its transaction load from the start, give it a sensible growth increment in proportion to its size, and you’ll never see this come up. But sometimes we inherit issues where best practices were not followed, and a high number of VLFs is certainly something to check when doing a health assessment on an unfamiliar environment.
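
For example, here is a minimal sketch of pre-sizing the log and giving it a fixed growth increment. The database name, logical file name, and sizes are illustrative only, not recommendations for your workload:

-- Pre-size the log and use a fixed growth increment instead of a tiny one
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 8GB, FILEGROWTH = 1GB);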

Find your VLF counts

The built-in DMV sys.dm_db_log_info is specifically for finding information about the log file, and the (deprecated) command DBCC LOGINFO will return a lot of useful information about VLFs as well. There is an excellent script from Kev Riley on Microsoft TechNet that pulls the count of VLFs using DBCC LOGINFO:

https://gallery.technet.microsoft.com/scriptcenter/SQL-Script-to-list-VLF-e6315249

There is also a great script by Steve Rezhener on SQLSolutionsGroup.com that utilizes the view:

https://sqlsolutionsgroup.com/capture-sql-server-vlf-information-using-a-dmv/
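
If you just want a quick number, a minimal sketch along the same lines, using the sys.dm_db_log_info DMF (available from roughly SQL Server 2016 SP2 / 2017 onward), returns the VLF count per database:

-- VLF count per database via sys.dm_db_log_info
SELECT d.[name]  AS database_name,
       COUNT(*)  AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.[name]
ORDER BY vlf_count DESC;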

Either one of these will tell you what you ultimately need to know – whether your VLFs are an issue. How many VLFs are too many? There isn’t an industry standard, but for the sake of a starting point, let’s say a tiny log file has 500 VLFs. That is high. A 5GB log file with 200 VLFs, on the other hand, is perfectly acceptable. You’ll likely know a VLF problem when you find it; you’ll run a count on the VLFs and it will return something atrocious like 20,000. (ed – someone at Microsoft support told me about one with 1,000,000 VLFs)

Resolution

If the database is in the Simple recovery model and doesn’t see much traffic, this is easy enough to fix. Manually shrink the log file as small as it will go, verify the autogrowth setting is appropriate, and grow it back to its normal size. If the database is in the Full recovery model and is in heavy use, it’s a little more complex. Follow these steps (you may have to do it more than once; a T-SQL sketch follows the list):

  • Take a transaction log backup.
  • Issue a CHECKPOINT manually.
  • Check the empty space in the transaction log to make sure you have room to shrink it.
  • Shrink the log file as small as it will go.
  • Grow the file back to its normal size.
  • Lather, Rinse, Repeat as needed
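
Here is a minimal sketch of that cycle, assuming a database named MyDatabase with a single log file whose logical name is MyDatabase_log – adjust the names, the backup destination, and the target size to your environment:

USE MyDatabase;

-- 1. Take a transaction log backup (hypothetical path)
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';

-- 2. Issue a checkpoint manually
CHECKPOINT;

-- 3. Check how much of the log is in use before shrinking
SELECT total_log_size_in_bytes, used_log_space_in_bytes
FROM sys.dm_db_log_space_usage;

-- 4. Shrink the log file as small as it will go
DBCC SHRINKFILE (MyDatabase_log, 1);

-- 5. Grow it back to its normal working size with a sensible growth increment
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 8GB, FILEGROWTH = 1GB);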

Now check your VLF counts again, and make sure you are down to a nice low number. Done!

If you need help, contact us

Thanks for reading!

Kevin3NF

Filed Under: HealthCheck, Performance, Troubleshooting Tagged With: performance, syndicated

Stories from the HealthCheck – part 4

January 10, 2020 by Kevin3NF

This week I have been highlighting the top items I see on almost every SQL Server I run across – things that need to be changed in almost every instance.

If you know me at all, or have read anything from me, you probably already know what’s coming – 

Number 4, for Freaky Friday:

Indexes

  • No indexes – bad – everything is slow
  • Lots of indexes – bad – everything is slow, despite having some great indexes
  • Duplicate indexes – bad – everything is slow and there is no benefit
  • Unused Indexes – bad – the junk drawer of SQL Server needs some attention
  • Goldilocks indexes (the just right ones) – Great!

You guys are great, and I applaud you for making the effort…but indexes are hard. The idea is easy, but the implementation is hard to get right and it changes over time with your workload.

New features and changing traffic patterns can mess up a perfectly good index strategy. Reviewing your indexes is like getting the oil changed in your car – do it regularly or risk everything grinding to a halt.

EVERY HealthCheck I have done since starting Dallas DBAs has pointed out that an indexing project was desperately needed. Some were better than others, but they all needed help.
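
If you want a first look at that junk drawer, here is a minimal sketch (run it in the target database) that lists nonclustered indexes with no recorded reads since the last restart – a starting point for a review, not a drop list:

-- Nonclustered indexes with no recorded seeks/scans/lookups since the last restart
SELECT OBJECT_NAME(i.[object_id])   AS table_name,
       i.[name]                     AS index_name,
       ISNULL(us.user_updates, 0)   AS writes_maintaining_it
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.[object_id] = i.[object_id]
      AND us.index_id    = i.index_id
      AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.[object_id], 'IsUserTable') = 1
  AND ISNULL(us.user_seeks, 0) + ISNULL(us.user_scans, 0) + ISNULL(us.user_lookups, 0) = 0
ORDER BY table_name, index_name;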

Please – Seek help! (DBAs…see what I did there?)

Thanks for reading!

Kevin3NF


Filed Under: HealthCheck, SQL

Stories from the HealthCheck – part 3

January 9, 2020 by Kevin3NF

This week I am highlighting the top items I see on almost every SQL Server I run across – things that need to be changed in almost every instance.

Number 3:

Instance level settings

Many of you (most?) are familiar with instance level settings such as MAXDOP, Backup Compression and Max Server Memory…and you should be. Good for you!

Are you also familiar with ‘Cost Threshold for Parallelism’ and ‘Optimize for Ad Hoc Workloads’? I’m guessing not as much as the others.

The default settings on these are what most clients I see are running, and they are usually WRONG.

Cost Threshold for Parallelism – deals with the question ‘How ugly does this query have to be before I use more than one processor?’ The default is ‘5’, but for most servers today it should be 25 or even 50 to get the best balance of processor usage at the query level.

Optimize for Ad Hoc Workloads – this setting may or may not apply to your server – it depends on the workload and the type of queries you run from your apps. If it’s not set properly, you may be churning your plan cache, which can be a performance killer.
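
For reference, both are visible and changeable through sp_configure. A minimal sketch follows – the values shown are only illustrative starting points, not recommendations for your workload:

-- Both are advanced options, so expose them first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism';      -- view the current value
EXEC sp_configure 'cost threshold for parallelism', 50;  -- illustrative, not a recommendation
EXEC sp_configure 'optimize for ad hoc workloads', 1;    -- only if your workload warrants it
RECONFIGURE;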

These are easy to change and easy to get wrong…consult a professional! Google is not a pro, but can help you find one. Like Me (b|t). Or Pinal (b|t). Or Brent (b|t).

Thanks for reading!

Kevin3NF


Filed Under: HealthCheck, SQL

Stories from the HealthCheck – part 2

January 8, 2020 by Kevin3NF

This week I am highlighting the top items I see on almost every SQL Server I run across – things that need to be changed in almost every instance.

Number 2:

CheckDB

I’ll give you a little leeway on this one, despite how rampant it is. I get that many of you have never heard of database corruption.

It’s a thing.

It’s a bad thing.

It’s a potential data-loss thing! (that got your attention, yeah?)

If your database(s) get corrupted, Microsoft’s standard answer is to restore from your last non-corrupt backup (see yesterday’s post).

Things you should be doing (a T-SQL sketch follows the list):

  • Set your Page Verification level to Checksum
  • Set up a recurring job to run DBCC CheckDB against ALL of your databases (SQL has built in tools for this, or use Ola Hallengren’s FREE scripts)
  • Turn on alerts for Errors 823, 824, and 825
    • Make sure an email goes to someone that knows how to respond to these errors
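
Here is a minimal sketch of those items in T-SQL. The database name is illustrative; schedule the CHECKDB through an Agent job or Ola Hallengren’s scripts, repeat the alert for errors 824 and 825, and note that tying the alert to an operator (sp_add_notification) is configured separately:

-- Page verification
ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;

-- What a recurring job would run against each database
DBCC CHECKDB (MyDatabase) WITH NO_INFOMSGS;

-- Alert on error 823; assumes SQL Server Agent is running
EXEC msdb.dbo.sp_add_alert
     @name = N'Error 823 - read/write failure',
     @message_id = 823,
     @severity = 0,
     @include_event_description_in = 1;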

Corruption can come from a number of places.

  • Storage subsystem – common
  • Bad versions of SQL Server – less common – patch your servers.
  • Funky Code situations – much less common
  • Other – no, I’m not going to list every possible option, lol

If you do all this and then get that dreaded email or alert from your monitoring tools – read this article before you do anything else.

Now go Check your CheckDBs.  Now.  Go.

Thanks for reading!

Kevin3NF


Filed Under: HealthCheck, SQL

Stories from the HealthCheck – part 1

January 7, 2020 by Kevin3NF

This week I will be highlighting the top items I see on almost every SQL Server I run across – things that need to be changed in almost every instance.

Number 1 by a landslide:

Backups

We’ve been backing up SQL Server the same way since at least SQL 7.0 in the late ’90s, yet…there are so many bad or missing backup plans:

  • No backups at all
  • Backups to the same drive as the databases
  • Backups to the same server as the databases
  • Backups with irregular frequency (manual?)
  • Production databases with no transaction log backups (so no ability to restore to a point in time)
  • VM backups only, and once a day at best

Every one of these exposes you to data loss that is almost certainly unacceptable to your customers.
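
If you are not sure where you stand, a minimal sketch against msdb’s backup history (assuming that history has not been purged) shows the last full and log backup per database – NULL means no backup on record:

SELECT d.[name] AS database_name,
       MAX(CASE WHEN bs.[type] = 'D' THEN bs.backup_finish_date END) AS last_full_backup,
       MAX(CASE WHEN bs.[type] = 'L' THEN bs.backup_finish_date END) AS last_log_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS bs
       ON bs.database_name = d.[name]
WHERE d.[name] <> 'tempdb'
GROUP BY d.[name]
ORDER BY last_full_backup;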

If you think you are covered because you are in the cloud, or because you have a great SysAdmin/Infra team, let me give you two reasons to completely rethink your strategy:

  • User error
  • Ransomware

Your expensive HA solution will not save you from accidentally deleting a table.  Ransomware can bankrupt a small business.

Thanks for reading!

Kevin3NF


Filed Under: HealthCheck, SQL
