
Speeding up database access - Part 8: Fixing memory, disk, and CPU issues


This is Part 8 of an 8-part series of articles about speeding up access to a SQL Server database. This series is based on chapter 8, "Speeding up Database Access", of my book ASP.NET Site Performance Secrets, available at amazon.com and other book sites.

In Part 2, we saw how to pinpoint bottlenecks related to the database server hardware - memory, disks, and CPU. In this last Part 8, we'll look at fixing those hardware issues.


Memory

These are the most common ways to relieve memory stress:

  • Add more physical memory.
  • Increase the amount of memory allocated to SQL Server. To see how much is currently allocated, run:

    SQL
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'max server memory (MB)'

    If more physical memory is available on the server, increase the allocation. For example, to increase the allocation to 3000 MB, run:

    SQL
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'max server memory (MB)', 3000
    RECONFIGURE WITH OVERRIDE

    Do not allocate all physical memory. Leave a few hundred MB free for the operating system and other software.

  • Reduce the amount of data read from disk. Each page read from disk needs to be stored and processed in memory. Table scans, aggregate queries, and joins can read large amounts of data. Refer to Parts 1 and 3 to see how to reduce the amount of data read from disk.
  • Promote reuse of execution plans to reduce memory needed for the plan cache. See Part 6.
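
Before and after taking these measures, you can check whether the server is actually under memory pressure by looking at the Page Life Expectancy counter - the number of seconds a data page stays in the buffer cache. A common rule of thumb is that sustained values below roughly 300 seconds indicate memory stress. Here is a minimal sketch of reading it; note that the exact object_name prefix varies per instance (for example 'SQLServer:Buffer Manager' on a default instance):

SQL
SELECT cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND counter_name = 'Page life expectancy'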

Disk usage

Here are the most common methods to reduce stress on the disk system:

  • Optimize query processing.
  • Move the log file to a dedicated physical disk.
  • Reduce fragmentation of the NTFS file system.
  • Consider moving the tempdb database to its own disk.
  • Split the data over two or more disks, to spread the load.
  • Alternatively, move heavily used database objects to another disk.
  • Use the optimal RAID configuration.

Let's go through these options one by one.

Optimize query processing

Make sure you have the correct indexes in place and optimize the most expensive queries. Refer to Parts 1, 3, and 4.

Move the log file to a dedicated physical disk

Moving the read/write head of a disk is a relatively slow process. The log file is written sequentially, which by itself requires little head movement. This doesn't help you, though, if the log file and data file are on the same disk, because then the head has to move back and forth between the log file and the data file.

However, if you put the log file on its own disk, head movement on that disk is minimized, leading to faster access to the log file. That in turn speeds up write operations such as UPDATEs, INSERTs, and DELETEs, which all write to the log.

To move the log file to another disk for an existing database, first detach the database. Move the log file to the dedicated disk. Then reattach the database, specifying the new location of the log file.
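
As a minimal sketch of that detach/move/reattach sequence (the database name TuneUp and the file paths are illustrative assumptions):

SQL
-- Detach the database; make sure no connections are using it
EXEC sp_detach_db 'TuneUp'
-- After moving the .ldf file to the dedicated disk, reattach,
-- listing all files with their (possibly new) locations
CREATE DATABASE TuneUp
ON (FILENAME = 'D:\Data\TuneUp.mdf'),
   (FILENAME = 'G:\TuneUp_log.ldf')
FOR ATTACH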

Reduce fragmentation of the NTFS file system

When the actual NTFS database files become fragmented, the read/write head has to hunt around the disk for the fragments when reading a file. To reduce fragmentation, set a large initial file size (for your database and log files) and a large increment size. Better still, set them large enough so neither file ever has to grow. You want to prevent growing and shrinking the files.
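
For example, to presize a database's data and log files so they should never have to grow (the logical file names and sizes here are illustrative assumptions - check your actual file names with sp_helpfile):

SQL
ALTER DATABASE TuneUp MODIFY FILE (NAME = TuneUp, SIZE = 50GB, FILEGROWTH = 5GB)
GO
ALTER DATABASE TuneUp MODIFY FILE (NAME = TuneUp_log, SIZE = 10GB, FILEGROWTH = 1GB)
GO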

If you do need to grow and shrink the database or log files, consider using a 64KB NTFS cluster size. SQL Server reads and writes data in 64KB extents (eight 8KB pages), so this matches its access patterns.

Consider moving the tempdb database to its own disk

tempdb is used for sorting, subqueries, temporary tables, aggregation, cursors, and so on. It can be very busy, so it may be a good idea to move the tempdb database to its own disk, or to a disk that is less busy.

To check the level of activity of the database and log files of tempdb and the other databases on the server, use the dm_io_virtual_file_stats DMV:

SQL
SELECT d.name, mf.physical_name, mf.type_desc, vfs.*
FROM sys.dm_io_virtual_file_stats(NULL,NULL) vfs
JOIN sys.databases d ON vfs.database_id = d.database_id
JOIN sys.master_files mf ON mf.database_id=vfs.database_id AND mf.file_id=vfs.file_id
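
To see which files are causing the most I/O wait time, you can order that result by the DMV's io_stall column, which holds the total time in milliseconds that I/O requests on the file had to wait. A sketch:

SQL
SELECT d.name, mf.physical_name, vfs.num_of_reads, vfs.num_of_writes, vfs.io_stall
FROM sys.dm_io_virtual_file_stats(NULL,NULL) vfs
JOIN sys.databases d ON vfs.database_id = d.database_id
JOIN sys.master_files mf ON mf.database_id=vfs.database_id AND mf.file_id=vfs.file_id
ORDER BY vfs.io_stall DESC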

To move the tempdb data and log files to, for example, the G: disk, setting their sizes to 10MB and 1MB, run this code and then restart SQL Server - the new location takes effect only after a restart.

SQL
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'G:\tempdb.mdf', SIZE = 10MB)
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'G:\templog.ldf', SIZE = 1MB)
GO

To reduce fragmentation, prevent growing and shrinking of the tempdb data and log files by giving them as much space as they are likely to ever need.
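
For example (the sizes here are illustrative assumptions; pick values based on your own workload):

SQL
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 10GB, FILEGROWTH = 1GB)
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2GB, FILEGROWTH = 512MB)
GO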

Split the database data over two or more disks

By splitting the database's data over two or more disks, you spread the load. And because you wind up with more but smaller files, this also makes backing up and moving the database easier.

To make this happen, add a file to the PRIMARY filegroup of the database. SQL Server then spreads the data over the existing file(s) and the new file. Put the new file on a new disk or a disk that isn't heavily used. If you can, make its initial size big enough so it doesn't have to grow further, thereby reducing fragmentation.

For example, to add a file to database TuneUp, on the G: disk, with an initial size of 20GB, run this command:

SQL
ALTER DATABASE TuneUp
ADD FILE (NAME = TuneUp_2, FILENAME = N'G:\TuneUp_2.ndf', SIZE = 20GB) 

Note that the file has the extension .ndf - the recommended extension for secondary files.

Move heavily used database objects to another disk

You could move heavily used database objects such as indexes to a new disk, or to less busy disks. In Part 1 "Pinpointing missing indexes and expensive queries", you saw how to use the DMV dm_db_index_usage_stats to determine the number of reads and writes executed on each index. There it was used to find unused indexes, but you can also use it to find the busiest indexes.

And if your server has multiple disks, in Part 2 "Pinpointing other bottlenecks", you saw how to measure the usage of your disks. Use this information to decide which objects to move to which disk.
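
As a sketch of how to rank the busiest indexes in the current database with that DMV (treating seeks, scans, and lookups as reads, and user_updates as writes):

SQL
SELECT OBJECT_NAME(s.[object_id]) AS table_name, i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_seeks + s.user_scans + s.user_lookups + s.user_updates DESC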

To move an index to another disk, first create a new user-defined filegroup. For example, this statement creates a filegroup FG2:

SQL
ALTER DATABASE TuneUp ADD FILEGROUP FG2

Then add a file to the filegroup:

SQL
ALTER DATABASE TuneUp
ADD FILE (NAME = TuneUp_Fg2, FILENAME = N'G:\TuneUp_Fg2.ndf', SIZE = 200MB)
TO FILEGROUP FG2

Finally move the object to the filegroup. For example, here is how to move a non-clustered index IX_Title on column Title in table Book to filegroup FG2:

SQL
CREATE NONCLUSTERED INDEX [IX_Title] ON [dbo].[Book]([Title] ASC)
WITH (DROP_EXISTING = ON)
ON FG2
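
To verify the move, you can check which filegroup each index of the table now lives on, using the catalog views sys.indexes and sys.data_spaces:

SQL
SELECT i.name AS index_name, ds.name AS filegroup_name
FROM sys.indexes i
JOIN sys.data_spaces ds ON i.data_space_id = ds.data_space_id
WHERE i.[object_id] = OBJECT_ID('dbo.Book')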

You can assign multiple objects to a filegroup. And you can add multiple files to a filegroup, allowing you to spread, for example, a very busy table or index over multiple disks.

Have tables and their non-clustered indexes on separate disks, so one task can read the index itself while another task is doing key lookups in the table.

Use the optimal RAID configuration

To improve performance and/or fault tolerance, many database servers use RAID (Redundant Array of Inexpensive Disks) subsystems instead of individual drives. RAID subsystems come in different configurations. Choosing the right configuration for your data files, log file, and tempdb files can greatly affect performance.

The most commonly used RAID configurations are:

RAID 0: Each file is spread ("striped") over each disk in the array. When reading or writing a file, all disks are accessed in parallel, leading to high transfer rates.

RAID 5: Each file is striped over all disks. Parity information for each disk is stored on the other disks, providing fault tolerance. File writes are slow - a single file write requires 1 data read + 1 parity read + 1 data write + 1 parity write = 4 accesses.

RAID 10: Each file is striped over half the disks. Those disks are mirrored by the other half, providing excellent fault tolerance. A file write requires 1 data write to a main disk + 1 data write to a mirror disk.

RAID 1: This is RAID 10 but with just 2 disks, a main disk and a mirror disk. That gives you fault tolerance but no striping.

This translates to the following performance characteristics compared with an individual disk. N is the number of disks in the array.

                  Read Speed   Write Speed   Fault Tolerant
Individual Disk   1            1             no
RAID 0            N            N             no
RAID 5            N            N/4           yes
RAID 10           N            N/2           yes
RAID 1            2            1             yes

So if you have a RAID 10 with 4 disks (2 main + 2 mirror), N = 4 and read performance will be 4 times better than an individual disk, while write performance will be 4 / 2 = 2 times better. This is assuming that the individual disk has the same speed as the disks in RAID 10.

From this follows the optimal RAID configuration to use for your tempdb, data, and log files:

tempdb: Requires good read and write performance for random access. Relatively small. Losing temporary data may be acceptable. Recommended: RAID 0, RAID 1, RAID 10.

log: Requires very good write performance and fault tolerance. Uses sequential access, so striping is no benefit. Recommended: RAID 1, RAID 10.

data (writes make up less than 10% of accesses): Requires fault tolerance. Random access means striping is beneficial. Large data volume. Recommended: RAID 5, RAID 10.

data (writes make up over 10% of accesses): Same as above, plus good write performance. Recommended: RAID 10.

Having a battery-backed caching RAID controller greatly improves write performance, because it allows SQL Server to hand over write requests to the cache without having to wait for the physical disk access to complete. The controller then executes the cached write requests in the background.

CPU

Common ways to resolve processor bottlenecks include:

  • Optimize CPU-intensive queries. In Part 1 "Pinpointing missing indexes and expensive queries", you saw how to identify the most expensive queries. The DMVs listed there give you the CPU usage of each query; a sketch of such a query follows this list. See the sections "Missing Indexes" and "Expensive Queries" on how to optimize these queries.
  • Building execution plans is highly CPU-intensive. Refer to Part 6 to improve reuse of execution plans.
  • Install more or faster processors, more L2/L3 cache, or more efficient drivers.
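
As a minimal sketch of such a DMV query, here is one way to list the top 10 queries by average CPU time (total_worker_time is reported in microseconds):

SQL
SELECT TOP 10
    qs.total_worker_time / qs.execution_count AS avg_cpu_time_microseconds,
    qs.execution_count,
    qt.text AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY qs.total_worker_time / qs.execution_count DESC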

Conclusion

In this part, we looked at optimizing the use of the available hardware, including memory, disks, and CPU.

This was the last part in this series. If you enjoyed reading these articles, consider buying my book ASP.NET Site Performance Secrets, available at amazon.com and other book sites. It shows how to fix performance issues in ASP.NET / SQL Server web sites in a structured and hands-on manner, by first pinpointing the biggest bottlenecks and then fixing those bottlenecks. It covers not only the database, but also the web server and the browser.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

