Managing and Optimizing SQL Server

May 29, 2023
10 min read

There are many management and optimization settings that need review and modification to improve SQL Server performance once you’ve performed a basic default installation. This article introduces five settings to tweak for better maintenance and database performance:

  • System and database file locations
  • SQL Server memory allocation 
  • System database default settings
  • SQL Server alias 
  • Regularly scheduled backups that include backing up transaction logs

Database File Locations

The default installation of SQL Server places all SQL Server files on the system drive, most often the C:\ drive of the server where you perform the installation. This means all SQL Server system files (.dll, .exe, etc.), as well as all database-specific files, including the data file (.mdf), the transaction log files (.ldf), and, should you need to expand the size of a database by adding more files, the secondary data files (.ndf), end up on the C:\ drive. In addition, the default backup location that stores your SQL Server backups is also on the C:\ drive. This, to be polite, is horrible! 

It’s horrible because all of your SQL Server system files, your production database files, the transaction log files for your production databases, and the backups of your databases are all stored on one physical drive. If you lose the C:\ drive, you’ve lost all of your data and the backups of that data.

During the install, you can choose to specify the location of these files and it’s recommended that the SQL Server system files reside on the system drive, probably the C:\ drive, the data files (.mdf) on a dedicated physical drive, the transaction log files on a separate dedicated physical drive, and the backups on a network share on a different server in your network.  
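As a sketch of this layout (the Sales database name and the D:\ and E:\ drive letters are hypothetical; substitute your own dedicated drives), you can specify these locations explicitly when creating a database:

```sql
-- Place the data file and log file on separate dedicated physical drives.
-- Database name, logical names, and paths below are illustrative only.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_data, FILENAME = 'D:\SQLData\Sales.mdf')
LOG ON
    (NAME = Sales_log, FILENAME = 'E:\SQLLogs\Sales.ldf');
```

Backups can then be written to a network share on a different server, keeping them off the drives that hold the data and log files.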

Tip: In addition to the backups located on a local network share, a second copy should be available in a remote location, in the event of a natural disaster. 

Note: If you’re using a hardware high-availability option like RAID 5, the data is appropriately striped across all the drives. The redundancy and the ability to recover the data are already in place, so the placement of the .mdf, .ndf, and .ldf files is less of a concern. 

For additional information on specifying file locations of SQL Server components, visit: File Locations - SQL Server > Microsoft Learn.

SQL Server Memory Allocation

A Windows Server installation can contain multiple installations of SQL Server. The first is the default instance of SQL Server, and the rest are named instances, each requiring a unique name. You use the same tools to manage the multiple instances, but they share Windows Server resources like drive space, memory, and CPU time. The default memory setting for each instance of SQL Server is essentially unlimited, which means that if one of several instances consumes a large chunk of memory, that memory won’t be available to the other instances. You need to specify per instance how much memory you want allocated to it. 

Tip: Keep in mind when setting these per-instance memory settings that you need to reserve enough memory for your operating system to function efficiently. 

There are two memory settings you need to adjust per instance:

  • Min server memory (MB)
  • Max server memory (MB)

The min server memory (MB) setting has a default of 0, which can be fine, but if you have a SQL Server instance that you know will consume at least 16 GB of memory throughout the day, you can configure a minimum of 16 GB (16,384 MB). Once the instance reaches that minimum amount, it won’t release the memory back to the operating system, saving the operating system and SQL Server the work of allocating that memory again later in the day. 

The max server memory (MB) setting controls the maximum amount of memory an instance of SQL Server can consume. This is the biggie, because by default it can allocate up to 2,147,483,647 MB of memory per instance. The first one in wins, so if a large transaction occurs and this isn’t set, the instance of SQL Server will consume as much memory as it needs, leaving little to no memory for the other SQL Server instance(s), or even the operating system. The minimum configuration value is 128 MB. The recommended value is 75% of available system memory, but keep in mind other instances of SQL Server, and other processes running on the system that require memory. 
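Both settings can be adjusted per instance with sp_configure. A minimal sketch (the 16 GB minimum and 24 GB maximum shown here are example values; size them for your own server and its other instances):

```sql
-- min/max server memory are advanced options, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example values only: 16 GB floor, 24 GB ceiling for this instance.
EXEC sp_configure 'min server memory (MB)', 16384;
EXEC sp_configure 'max server memory (MB)', 24576;
RECONFIGURE;
```

The same settings are available in SSMS on the Memory page of the instance’s Server Properties dialog.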

Note: Beginning with SQL Server 2019, a standalone SQL Server installation provides a recommendation for this setting based on the percentage of available system memory detected during the installation. I strongly suggest that you configure this setting on a server hosting multiple instances of SQL Server while keeping in mind the memory requirements for all other processes on that server. 

For additional information about optimizing SQL Server by managing memory, please visit: Server memory configuration options - SQL Server > Microsoft Learn.

System Databases

There are five system databases created for you when you install SQL Server:

  • Master: Contains all system-level information for each SQL Server instance
  • Msdb: Used by SQL Server Agent for managing jobs and alerts in SQL Server
  • Resource: Read-only database containing all SQL Server system objects
  • Model: Template for all databases created in the SQL Server instance 
  • Tempdb: Workspace database for holding temporary objects or results

There are no configurations necessary for the first three databases: Master, Msdb, and Resource. However, a snapshot of the contents of the model database is used whenever you create a new database in SQL Server, so changes in the model database affect all new databases. These changes can include:

  • Permissions
  • Tables
  • Functions 
  • Stored procedures
  • Database options

The database options are where you can have an impact on future databases being created. One of the most beneficial is the initial size of the model.mdf, which is set to 8 MB by default. This means every new database starts at 8 MB, requiring autogrow to occur as soon as you start adding data to the new database, and every autogrow event slows performance. A setting you may want to change on the model.mdf is the initial file size: configure it to a size larger than 8 MB, reducing the number of times new databases have to autogrow.

Tip: It’s also strongly recommended to set a MAXSIZE on each database file to prevent a file from growing until it consumes the entire disk and causes SQL Server to fail. 
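Both changes can be made with ALTER DATABASE. A sketch, assuming the default logical file name modeldev for the model data file (the sizes below are illustrative; pick values that fit your typical databases):

```sql
-- Raise the initial size and growth increment, and cap growth with MAXSIZE
-- so a runaway database can't fill the disk. Sizes are example values.
ALTER DATABASE model
MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 128MB, MAXSIZE = 10GB);
```

Every database created after this change inherits the new initial size, growth increment, and size cap.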

Note: Be sure to immediately back up the model database after making any changes to it, so if you have to recreate the model database, you can restore the changes you made from the backup. 

For additional information about optimizing the model database, please visit: model Database - SQL Server > Microsoft Learn.

Now let’s explore the tempdb database that temporarily holds user objects, including:

  • Temporary tables
  • Indexes
  • Temporary stored procedures
  • Work tables
  • Work files

This database is recreated using the model database every time you restart the SQL Server instance. Because it’s a temporary holding area for content, it can have an impact on performance. To optimize SQL Server, it’s recommended that you place the tempdb.mdf on a different physical drive than your user databases. As with model.mdf, you can also increase the initial size of the tempdb files and set their autogrow increment larger than the default 64 MB. To further improve performance, you can create additional tempdb data files (.ndf) across multiple drives to spread the workload. For additional information about optimizing the tempdb database, please visit: tempdb database - SQL Server > Microsoft Learn.
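Adding a tempdb data file is a one-statement change. A sketch, with a hypothetical T:\ drive and file name (place each additional file on its own physical drive where you can):

```sql
-- Add a second tempdb data file on a separate drive to spread the workload.
-- Drive letter, logical name, and sizes are illustrative.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf',
          SIZE = 1GB, FILEGROWTH = 256MB);
```

Keeping all tempdb data files the same size and growth increment helps SQL Server spread work evenly across them.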

SQL Server Alias

There has been a lot of talk about location, location, location of files associated with SQL Server, both system files and database files. One helpful bit of advice that will be beneficial as you perform upgrades or migrate from server to server, or even from a server to the cloud, is the use of a SQL Server alias. This alias is what clients connect to and what admins reference in their scripts. The idea is a seamless transition from one server to another in the event that you need to move your data.

If you install an instance of SQL Server on a standalone server and then realize you need redundancy, you need to build a multi-server SQL Server farm and move everything to it. That server farm will have a completely different name, and at that point you may encounter difficulties because your users and all of your scripts point to the original standalone server hosting SQL Server. To avoid this scenario, have the users and your scripts point to a SQL Server alias. Then, when the new SQL Server farm is built out, you just need to modify the alias. 

For example, let’s say that today, your named instance of SQL Server is running on a standalone server called SQLSERVER1 with an instance name of TG_DATA. When you access the named instance, you use SQLSERVER1\TG_DATA. Instead of referencing SQLSERVER1\TG_DATA everywhere and then having to change the location of the SQL Server instance in all your scripts after a migration, you can create an alias, TG_SQLSERVER, that points to SQLSERVER1\TG_DATA, and have your users and your scripts connect using that alias. 

After you complete your migration to the new SQL Server farm, you simply change the alias name TG_SQLSERVER to point to NEWSQLFARM\TG_DATA_FARM and the transition will be seamless for your users and the several scripts that you have scheduled to run automatically. 

For additional information about how to manage your SQL Server alias, please visit: Create or delete a server alias for use by a client - SQL Server > Microsoft Learn.

Regularly Scheduled Backups 

Performing backups is a huge part of preventing data loss, and having a solid backup plan is instrumental to redundancy and data recovery in the event of a natural disaster or the accidental or intentional deletion of data. There are entire day-long classes on backups and restores for your SQL Server data, but in this article, I just want to reiterate the importance of performing these backups and also provide some resources to help you determine the best backup plan for your organization. 

An important area that I often see overlooked when discussing backup plans with DBAs is how often they back up the transaction logs associated with each database. Often, they have a specific schedule for backing up the database data files (.mdf), but not the transaction log files (.ldf) associated with each database. The reason it’s important to back up the transaction logs is that without doing so, they will grow until they run out of space on the hard drive, at which point SQL Server fails and the entire database is unreachable. Be sure that you have a foolproof backup plan that includes a solid and reliable backup of the database transaction logs. 
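At their simplest, both backup types are single statements. A sketch, using a hypothetical Sales database and UNC path (log backups require the database to use the full or bulk-logged recovery model):

```sql
-- Full database backup to a network share on a different server.
-- Database name and share path are illustrative.
BACKUP DATABASE Sales
    TO DISK = '\\BackupServer\SQLBackups\Sales_full.bak';

-- Transaction log backup; running this regularly also lets the
-- log file truncate instead of growing without bound.
BACKUP LOG Sales
    TO DISK = '\\BackupServer\SQLBackups\Sales_log.trn';
```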

If you’re new to SQL Server and want to schedule backups of your databases, there are a few different ways to do so:

  • SQL Server Management Studio (SSMS)
    • Maintenance plans 
    • Script Action to Job
  • SQL Server Agent (to manually create and schedule backup jobs)

You can use the SQL Server Management Studio (SSMS) to create and schedule maintenance plans, as well as Script to Job backup plans. Alternatively, you can manage your backups manually by creating and scheduling the jobs using the SQL Server Agent. 
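As a sketch of the manual SQL Server Agent route (the job name, database, and backup path below are hypothetical; the msdb stored procedures themselves are standard), a nightly log backup job looks roughly like this:

```sql
USE msdb;
GO
-- Create the job, give it one T-SQL step, schedule it nightly,
-- and attach it to the local server. Names and paths are examples.
EXEC dbo.sp_add_job @job_name = N'Nightly Sales Log Backup';
EXEC dbo.sp_add_jobstep
    @job_name  = N'Nightly Sales Log Backup',
    @step_name = N'Back up transaction log',
    @subsystem = N'TSQL',
    @command   = N'BACKUP LOG Sales TO DISK = ''\\BackupServer\SQLBackups\Sales_log.trn'';';
EXEC dbo.sp_add_jobschedule
    @job_name = N'Nightly Sales Log Backup',
    @name     = N'Nightly at 11 PM',
    @freq_type = 4,               -- daily
    @freq_interval = 1,
    @active_start_time = 230000;  -- 23:00:00
EXEC dbo.sp_add_jobserver @job_name = N'Nightly Sales Log Backup';
```

A maintenance plan in SSMS generates an equivalent Agent job for you through a wizard, which is the easier starting point if you’re new to SQL Server.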

The Microsoft Learn links provided throughout this article will also help you get started performing backups and restores in SQL Server. 

These are just the top five big areas I’ve encountered in my many years as a database administrator. They’re easy to modify, and doing so will reduce the risk of data loss as well as improve the performance of SQL Server. 

Brian Alderman

Brian Alderman is a former Microsoft MVP and has his Master's in Computer Information Systems. Since 1995, he has held several Microsoft certifications that currently include: MCT, MCSE for SharePoint, MCSA for Office 365, MCITP for SQL Server, MCSA and MCSE for Windows Server, and also his PMP certification. As a Senior Technical Instructor and Consultant, he has 30 years of experience in networking, SQL Server databases, SharePoint technologies, and project management. Brian's publications include Windows 2000 Professional, SQL Server 2000 Administration, SharePoint 2010 Administrator's Companion, and Microsoft SharePoint 2013 Administration Inside Out. He is an active speaker at SharePoint industry conferences including SharePoint Saturdays, SPLive, DevIntersection, SharePoint Fest, Microsoft Ignite, and several international SharePoint conferences including London, Milan, and Madrid.
