Microsoft SQL Server Best Practices: Log File Configuration 101

Configuring a transaction log for your Microsoft SQL Server database depends on factors like your recovery model, data growth trends, and even your storage drive type. In this blog post, I explain which factors impact your transaction logs in the configuration stage.

In a previous blog post, we looked at the basics of a transaction log in Microsoft SQL Server and the different operations that can increase its size. Now, let’s look at the most important aspects of a database that need to be considered while configuring a transaction log:

  • Recovery Model
  • Size – Initial and Auto Growth
  • OLTP vs. OLAP Databases
  • Storage Type (Solid State Drives vs. Traditional Hard Drives)

Recovery Model

The recovery model of a database defines how a transaction log is used by the database server. The table below provides a quick comparison of the three recovery models available in SQL Server (as supplied by Microsoft):

Simple recovery model
Description: Does not require log backups. Automatically reclaims log space to keep space requirements small, essentially eliminating the need to manage the transaction log space.
Work loss exposure: Changes since the most recent backup are unprotected. In the event of a disaster, those changes must be redone.
Recover to a specific point in time? No. Can recover only to the end of a backup.

Full recovery model
Description: Requires log backups. No work is lost due to a lost or damaged data file. Can recover to an arbitrary point in time (for example, prior to application or user error).
Work loss exposure: Normally none. If the tail of the log is damaged, changes since the most recent log backup must be redone.
Recover to a specific point in time? Yes, assuming that your backups are complete up to that point in time.

Bulk-logged recovery model
Description: Requires log backups. An adjunct of the full recovery model that permits high-performance bulk copy operations. Reduces log space usage by using minimal logging for most bulk operations.
Work loss exposure: If the log is damaged or bulk-logged operations occurred since the most recent log backup, changes since that last backup must be redone. Otherwise, no work is lost.
Recover to a specific point in time? Can recover to the end of any backup; point-in-time recovery is not supported.

It is a common misconception that data modification operations are not written to the log file when a database uses the Simple recovery model, because the transaction log rarely appears to grow. However, this is not true: all operations are written to the log file before they are committed to the database.
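You can see this for yourself with the undocumented (but widely used) sys.fn_dblog function, which reads the active portion of the log. A minimal sketch, assuming a hypothetical test database TestDB in the Simple recovery model with an Orders table:

```sql
-- Hypothetical database and table names, for illustration only.
USE TestDB;
GO

-- Count the log records in the active log before the change.
SELECT COUNT(*) AS LogRecordCount FROM sys.fn_dblog(NULL, NULL);

-- Modify some data.
INSERT INTO dbo.Orders (OrderDate) VALUES (GETDATE());

-- The count goes up: the insert was written to the log,
-- even though the database uses the Simple recovery model.
SELECT COUNT(*) AS LogRecordCount FROM sys.fn_dblog(NULL, NULL);
```

Because sys.fn_dblog is undocumented, treat it as a learning tool, not something to build production processes on.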

Since SQL Server does not provide point-in-time recovery for a database using the Simple recovery model, all inactive transactions are truncated from the log when a checkpoint is issued. The database engine supports several types of checkpoints (automatic, indirect, manual, and internal), some of which can be configured.
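The configurable checkpoint types can be tuned as follows (database name SalesDB is a placeholder):

```sql
-- Indirect checkpoints: set a target recovery time for one database
-- (available from SQL Server 2012 onward).
ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 60 SECONDS;

-- Automatic checkpoints: the server-wide recovery interval
-- (0 keeps the default behavior).
EXEC sp_configure 'recovery interval (min)', 0;
RECONFIGURE;

-- Manual checkpoint in the current database.
CHECKPOINT;
```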

It is always recommended to configure databases in production environments to use the Full recovery model, as this provides the ability to perform point-in-time recovery. However, if bulk loads are performed every so often, consider changing the recovery model to Bulk Logged for the duration of the load operation to improve performance. Once the bulk load operation has completed, switch back to the Full recovery model.
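A sketch of that switch, using a placeholder database name and backup path, with a log backup on either side so the log chain stays usable for recovery:

```sql
-- Protect the log chain up to this point before switching.
BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB_PreBulk.trn';

ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

-- ... perform the bulk load here (BULK INSERT, bcp, SELECT INTO, etc.) ...

ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Back up the log immediately so point-in-time recovery is possible again.
BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB_PostBulk.trn';
```

Note that a log backup taken while bulk-logged operations are in it can only be restored in full; point-in-time recovery resumes with the next log backup after switching back.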

Size – Initial & Auto Growth

There is no magic bullet. However, the guiding principle should be to set the initial size of the transaction log to a value large enough that the log does not have to grow during business hours, as file growth can impact performance. Now, let's ask ourselves: when does a transaction log grow? It grows when the log file is full. There could be several reasons why the log file fills up: the initial size was too small, a few transactions have gone wild (for example, a long-running open transaction), et cetera.

The best method to determine the initial size of a log file is to understand the throughput of the operations that cause the transaction log to grow, then configure the log file so that you have enough headroom. Additionally, it is important to take regular transaction log backups, because it is the log backup (not the full database backup) that truncates the log file.
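To see how much of the log is actually being used before choosing a size, you can check the built-in log space reports and then resize the file. A sketch, with placeholder database and logical file names:

```sql
-- Current log size and percent used for every database on the instance.
DBCC SQLPERF(LOGSPACE);

-- Or per database (available from SQL Server 2012 onward):
SELECT total_log_size_in_bytes / 1048576 AS LogSizeMB,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- Resize the log once you know the expected peak usage.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_log, SIZE = 8192MB);
```

The logical file name (SalesDB_log here) comes from sys.database_files, not from the file name on disk.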

Setting the autogrowth increment to a very small value can lead to a fragmented log file, which will cause performance issues. If the log file grows through many small increments, it can end up with several hundred, if not thousands, of virtual log files (VLFs). This can slow down database startup, log backup, and restore operations.

It is always a good practice to determine your data growth patterns in order to choose the initial size and growth values for a log file. Always use a fixed value for the growth increment instead of a percentage.
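Setting a fixed growth increment and checking the resulting VLF count might look like this (placeholder names again):

```sql
-- Fixed growth increment instead of a percentage.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_log, FILEGROWTH = 512MB);

-- Check how many virtual log files (VLFs) the log currently has.
-- (sys.dm_db_log_info is available from SQL Server 2016 SP2 onward.)
SELECT COUNT(*) AS VlfCount
FROM sys.dm_db_log_info(DB_ID('SalesDB'));
```

If the VLF count is already in the thousands, shrinking the log and regrowing it in a few large increments is the usual remedy.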

OLTP vs. OLAP Databases

OLTP databases are transaction-intensive by nature, so it is always recommended to set up these types of databases in the Full recovery model. All data change operations are recorded in the log file, which is truncated each time the transaction log is backed up.
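In practice, that means pairing the Full recovery model with frequent scheduled log backups. A minimal sketch, with placeholder names:

```sql
ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Run on a schedule (every 15 minutes is a common starting point);
-- the log backup is what truncates the log and keeps it from growing.
BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB_log.trn';
```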

However, typical OLAP databases are not transaction-intensive. You can consider setting up an OLAP database in the Simple recovery model if the OLAP system is designed so that data loads occur only once a day, meaning data change operations happen only at a specific time. However, you should always run backups right after the loads to make sure there is minimal data loss.
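For such a warehouse, the pattern is Simple recovery plus a full backup immediately after each load (database name and path are placeholders):

```sql
ALTER DATABASE WarehouseDB SET RECOVERY SIMPLE;

-- ... nightly data load runs here ...

-- Full backup right after the load to minimize data loss exposure;
-- in the Simple model this backup is the recovery point.
BACKUP DATABASE WarehouseDB TO DISK = N'X:\Backups\WarehouseDB_PostLoad.bak';
```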

Storage Type (Solid State Drives vs. Traditional Hard Drives)

Storage type plays a key role in the performance of a transaction log. We all know that solid state drives (SSDs) are much faster than traditional hard disks. Transaction logs are write-intensive by nature, so on highly transaction-intensive systems, consider placing the transaction log file for a database on an SSD device.

I hope this blog post series was helpful in understanding the basics of a transaction log file and some of the key aspects to focus on while configuring log files for a database. Please comment below with your questions or experiences!
