File/Filegroups Optimization Tips
- Set a reasonable size for your database.
Before creating the database, estimate how big it will be.
To arrive at a reasonable database size, estimate the size of each
table individually, and then add the values obtained.
See this link for more information:
Estimating the Size of a Table
- Set a reasonable size for the transaction log.
The general rule of thumb for setting the transaction log size is
to set it to 20-25 percent of the database size. The smaller your
database, the larger the transaction log should be as a proportion
of the database size, and vice versa. For example, if the estimated
database size is 10 MB, you can set the size of the transaction log
to 4-5 MB, but if the estimated database size is over 500 MB,
50 MB may be enough for the transaction log.
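These sizes can be set when the database is created. A minimal sketch,
with hypothetical database name and file paths:

```sql
-- Hypothetical example: estimated data size 500 MB,
-- log sized at about 10 percent of it.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_Data,
     FILENAME = 'C:\MSSQL\DATA\Sales_Data.mdf',
     SIZE = 500MB)
LOG ON
    (NAME = Sales_Log,
     FILENAME = 'C:\MSSQL\DATA\Sales_Log.ldf',
     SIZE = 50MB)
```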
- Leave the autogrow feature on for the data files and for the log files.
Leaving this feature on lets SQL Server automatically increase allocated
space when necessary, without DBA intervention. The autogrow feature
is especially useful when there is no DBA in your firm, or when your DBA
doesn't have a lot of experience.
- Set a reasonable size for the autogrow increment.
Setting a database to grow automatically results in some performance
degradation; therefore, set a reasonable size for the autogrow
increment to avoid growing too often. Try to set the initial size of
the database, and the size of the autogrow increment, so that automatic
growth occurs about once per week or less.
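For example, a fixed growth increment can be set with ALTER DATABASE
(database and file names here are hypothetical):

```sql
-- Grow the data file in fixed 50 MB steps instead of a small
-- increment, so automatic growth happens rarely.
ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data, FILEGROWTH = 50MB)
```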
- Don't turn the autoshrink feature on.
Autoshrinking results in some performance degradation; therefore, you
should shrink the database manually, or create a scheduled task to shrink
the database periodically during off-peak times, rather than set the
autoshrink feature to on.
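A sketch of the manual approach (names are hypothetical; on SQL Server
7.0 the autoshrink option is set with sp_dboption, on later versions
with ALTER DATABASE ... SET AUTO_SHRINK OFF):

```sql
-- Make sure the database does not shrink automatically.
EXEC sp_dboption 'Sales', 'autoshrink', 'false'

-- Run this from a scheduled task during off-peak hours:
-- shrink the database, leaving 10 percent free space.
DBCC SHRINKDATABASE (Sales, 10)
```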
- Set the maximum size of the data and log files.
Specify the maximum size to which the files can grow to prevent
disk drives from running out of space.
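For example (hypothetical names), MAXSIZE can be set per file:

```sql
-- Cap the data file at 2 GB so autogrow cannot fill the drive.
ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data, MAXSIZE = 2000MB)
```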
- Create a user-defined filegroup and make it the default filegroup.
It's a good decision in most cases to store and manage system and user
objects separately from one another, so that user objects will not
compete with system objects for space in the primary filegroup. Usually,
a user-defined filegroup is not created for small databases, for
example, if the database is less than 100 MB.
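A sketch of the steps, with hypothetical names and paths:

```sql
-- 1. Create a user-defined filegroup.
ALTER DATABASE Sales ADD FILEGROUP UserFG

-- 2. Add a file to it.
ALTER DATABASE Sales
ADD FILE
    (NAME = Sales_User1,
     FILENAME = 'D:\MSSQL\DATA\Sales_User1.ndf',
     SIZE = 200MB)
TO FILEGROUP UserFG

-- 3. Make it the default, so new user objects are created here
--    rather than in the primary filegroup.
ALTER DATABASE Sales MODIFY FILEGROUP UserFG DEFAULT
```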
- Create a user-defined filegroup and create some tables in it to run
maintenance tasks (backups, DBCC, update statistics, and so on)
against these tables.
LOAD TABLE and DUMP TABLE are no longer supported in SQL Server 7.0
(and higher), but you can place a table in its own filegroup and
back up and restore only this table. So you can group user objects
with similar maintenance requirements into the same filegroup.
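For example, a single filegroup can be backed up on its own (table,
filegroup, and path names here are hypothetical):

```sql
-- Create the table in its own filegroup...
CREATE TABLE OrderHistory
    (OrderID int NOT NULL, OrderDate datetime NOT NULL)
ON HistoryFG

-- ...then back up just that filegroup.
BACKUP DATABASE Sales
    FILEGROUP = 'HistoryFG'
TO DISK = 'E:\Backup\Sales_HistoryFG.bak'
```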
- If you have several physical disk arrays, try to create as many files as
there are physical disk arrays so that you have one file per disk array.
This will improve performance, because when a table is accessed
sequentially, a separate thread is created for each file on each
disk array in order to read the table's data in parallel.
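As a sketch, assuming four hypothetical disk arrays mounted as drives
E: through H::

```sql
-- One data file per disk array; SQL Server can then read the data
-- with a separate thread per file during sequential scans.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_Data1, FILENAME = 'E:\MSSQL\DATA\Sales_Data1.mdf', SIZE = 200MB),
    (NAME = Sales_Data2, FILENAME = 'F:\MSSQL\DATA\Sales_Data2.ndf', SIZE = 200MB),
    (NAME = Sales_Data3, FILENAME = 'G:\MSSQL\DATA\Sales_Data3.ndf', SIZE = 200MB)
LOG ON
    (NAME = Sales_Log, FILENAME = 'H:\MSSQL\DATA\Sales_Log.ldf', SIZE = 100MB)
```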
- Don't create many data and log files on the same physical disk array.
Leaving the autogrow feature on for the data and for the log files
can cause fragmentation of those files if there are many files
on the same physical disk array.
In most cases, it's enough to have 1-2 database files on the same
physical disk array.
- For heavily accessed tables, place these tables in one filegroup and
place the tables' indexes in a different filegroup on different physical
disks.
This will improve performance, because separate threads will be created
to access the tables and indexes.
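For example (hypothetical table and filegroup names), the ON clause of
CREATE TABLE and CREATE INDEX chooses the filegroup:

```sql
-- Table data in one filegroup...
CREATE TABLE Orders
    (OrderID int NOT NULL, CustomerID int NOT NULL)
ON DataFG

-- ...and its nonclustered index in another, placed on a
-- different physical disk.
CREATE INDEX IX_Orders_CustomerID
ON Orders (CustomerID)
ON IndexFG
```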
- For heavily accessed tables with text/image columns, place these tables
in one filegroup and place the text/image columns in a different
filegroup on different physical disks.
You can use CREATE TABLE statement with TEXTIMAGE_ON keyword to
place text/image columns in a different filegroup.
See the SQL Server BOL for details.
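A minimal sketch (table and filegroup names are hypothetical):

```sql
-- Row data goes to DataFG; the text column's pages go to BlobFG,
-- which can sit on a different physical disk.
CREATE TABLE Documents
    (DocID int NOT NULL PRIMARY KEY,
     Title varchar(100) NOT NULL,
     Body text NULL)
ON DataFG
TEXTIMAGE_ON BlobFG
```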
- Place the log files on physical disk arrays other than those with the
data files.
Because logging is more write-intensive, it's important that the disk
arrays containing the SQL Server log files have sufficient disk I/O
performance.
- If one of your join queries is used much more often than others,
place the tables used in this query in different filegroups on
different physical disk arrays.
- If you have read-only tables, place these tables in different filegroups
on different physical disk arrays and use the ALTER DATABASE statement
to make just this filegroup READ ONLY.
This not only increases read performance, it prevents any data changes
and allows you to control permissions to this data.
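For example (hypothetical names; on SQL Server 7.0 the keyword is
READONLY, while later versions also accept READ_ONLY):

```sql
-- Mark the filegroup holding the read-only tables as read-only;
-- no data modifications are allowed to tables stored in it.
ALTER DATABASE Sales MODIFY FILEGROUP StaticFG READONLY
```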
- Use the Windows NT Performance Monitor to determine the appropriate
number of data and log files on your server by checking the Disk Queue
Length counter.
Consider reducing the number of files and filegroups you have for
your databases if the Disk Queue Length on your server averages
above 3, and continue monitoring once you have made your changes
to ensure that your disk I/O is optimal over the long term.