
SQL Server post-installation configuration
So far, we have configured our Windows Server and made a few configurations related to SQL Server, but we haven't configured anything inside SQL Server itself. For the post-installation configuration, there are plenty of settings worth exploring, some of which require careful consideration.
When the server is deployed, many configuration items are set to default values, which you may want to modify for your environment. We'll start with some basic configuration for the databases. During the installation, you had to enter the paths for the data, log, and backup file locations; you can modify these later if you need to change the default file locations.
In the Database Settings section of the server configuration, you can again configure all three paths, as shown in the following screenshot:
On this same settings page, you can configure two additional important parameters. The first one is backup compression. We'll talk more about backup settings and methods in a different chapter, but as part of post-installation configuration, you can turn this setting on for most servers.
Bear in mind that turning on backup compression puts additional load on the CPU while the backup runs, so you may see higher peaks in the performance monitor. If the server is already under heavy load, the extra CPU cost of backup compression might hurt response times for users. On the other hand, compression has clear benefits: a smaller backup file stored on disk and a shorter time to create the backup.
There's one more important benefit: the time to restore. Restoring a compressed backup is faster than restoring an uncompressed one, because the system reads a smaller file from the disk.
If you would like to configure these settings on just one server, the GUI of SQL Server Management Studio will serve you well, but if you are preparing a post-deployment configuration script for multiple servers, you will most probably use a SQL script. Most server-level configuration is performed with a stored procedure called sp_configure.
If you just run the procedure without any parameters, it displays all the basic parameters and the values configured on the server:
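Listing the current settings takes a single statement:

USE master
GO
EXEC sp_configure
GO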
You don't need to memorize all the options, as we won't configure all of them. This is just the basic set of items; as you can see, the second option from the bottom, show advanced options, will display more items for configuration. Backup compression is listed in the basic set and can be configured with the following code:
USE master
GO
EXEC sp_configure 'backup compression default',1
GO
RECONFIGURE
GO
Some other options that we will explore are visible only when you display the advanced features. To display them, simply run sp_configure again and set the show advanced options option, in the same way as in the previous example. With advanced options displayed, SQL Server lets you configure 77 options (on SQL Server 2017; this number may differ in other versions), in contrast to the 23 in the basic set.
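Enabling the advanced options follows the same pattern as the backup compression example:

USE master
GO
EXEC sp_configure 'show advanced options',1
GO
RECONFIGURE
GO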
With regard to post-installation configuration, we'll configure the memory and CPU settings for our server too. By default, SQL Server is allowed to use as much memory as possible, and it won't leave any larger reserve for other applications or even for the operating system itself, so you should limit the amount of memory available to SQL Server. Reserving memory for the operating system keeps it responsive even when SQL Server is under heavy load.
There have been many situations where a DBA could not connect to the operating system of a SQL Server machine because all the memory was allocated to SQL Server itself. You can limit the memory available to SQL Server with a setting called max server memory. This setting has to be considered carefully, as you need to leave some memory for the OS. As a general guideline, keep 1 to 2 GB for the OS, plus 1 GB for every 8 to 16 GB of RAM in the system. So, for a SQL Server with 256 GB of RAM, you would configure the max server memory setting to a value between 224 and 240 GB. The code to perform the configuration is as follows. Don't forget that the procedure uses megabytes as its unit of measure:
EXEC sp_configure 'max server memory',245760
GO
RECONFIGURE
GO
SQL Server editions impose different limits on how much system memory SQL Server can use. Standard Edition can use only up to 128 GB of RAM for the SQL Server buffer pool, whereas Enterprise Edition can use all the memory available on the server. You can find the limits for each edition in the online documentation. The differences between the editions are not only about RAM, but also about CPU socket and core support for the different SQL Server SKUs.
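To check which edition and build a given instance is running, you can query the built-in SERVERPROPERTY function:

SELECT SERVERPROPERTY('Edition') AS Edition,
    SERVERPROPERTY('ProductVersion') AS ProductVersion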
There is another setting that can be very useful when you're troubleshooting a SQL Server that has become unresponsive: the dedicated admin connection (DAC). By default, this connection is not allowed remotely; you can connect to the DAC only locally, while logged on to the server. If the system faces performance issues and Windows Server won't even let you connect via Remote Desktop, you can connect to the DAC remotely, provided you have enabled this setting. To enable the remote DAC, you need to run the following procedure:
sp_configure 'remote admin connections',1
GO
RECONFIGURE
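You can verify the change with the sys.configurations catalog view, where value_in_use shows the setting currently in effect:

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'remote admin connections'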
Additional items that we will configure affect SQL Server performance and require a deeper understanding of your workload, hardware, and the requirements of your applications. These include configuring parallelism at the server level.
There are two main configuration items that we're using to control parallelism on the server level and these are as follows:
- Max degree of parallelism (default is 0)
- Cost threshold for parallelism (default is 5)
The first one sets the maximum number of threads that can be used for a query when it is processed in parallel. It does not mean that every query will be processed with multiple threads, and even when one is, it may use fewer threads than the maximum. It's a default server-level setting, which you can later override at other levels, but as a general option, it's a good idea to configure this value. The proper value depends greatly on several factors, as follows:
- Your hardware: CPUs and cores
- Your workload: OLTP versus OLAP
- Your environment: Physical versus virtual
In most cases, you can examine the number of CPUs and cores on your system and set the value to the number of cores on one CPU. So, for example, if you have two 8-core CPUs in your SQL Server machine, you would configure max degree of parallelism to 8. Again, you can use SQL Server Management Studio or the sp_configure procedure. In the same GUI dialog, you can also update the cost threshold for parallelism value, which is the estimated query cost at which the query optimizer switches from generating a serial plan to a parallel one.
If the cost threshold is too low, there may be too many parallel plans, which can increase the load on the CPU and memory of your SQL Server. This configuration is subject to testing, but you can start with values ranging from 20 to 50 and then evaluate the load on your SQL Server and your application's performance:
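Both settings are advanced options, so show advanced options must be enabled first. The following script is a sketch using the example values discussed above (a max degree of parallelism of 8 for a server with two 8-core CPUs, and a starting cost threshold of 30); tune both numbers for your own hardware and workload:

USE master
GO
-- Both parallelism settings are advanced options
EXEC sp_configure 'show advanced options',1
GO
RECONFIGURE
GO
EXEC sp_configure 'max degree of parallelism',8
GO
EXEC sp_configure 'cost threshold for parallelism',30
GO
RECONFIGURE
GO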