August 07, 2008

Monitoring server disk space in SQL Server

DBAs must monitor their SQL Servers carefully to ensure that the system and user databases have sufficient disk space for the lifetime of the applications that use them. Failure to do so can result in the following problems:
All SQL Server jobs fail – if the msdb database has no room to grow, job information cannot be logged and none of your SQL Server jobs will start.
Sorts, aggregates, and other operations that make heavy use of tempdb may fail if the tempdb data or log files cannot grow to accommodate them.
The transaction log may fill up, causing all DML operations on your database to fail and point-in-time recovery to be lost.
The database files themselves may fill up, causing all DML operations on your database to fail.
A database experiencing out-of-control growth of its transaction log or data files may fill the entire disk, causing other databases and applications to fail.

Steps for monitoring disk space in SQL Server:

Auto-growth settings for data and log files
The best practice is to size your data and log files so that the number of auto-growth events is minimized. For data files, avoid Growth in Percent; use Growth in Megabytes instead, and select a value that suits your database.

File growth is a disk I/O–intensive operation, and if SQL Server has to wait for a data or log file to expand, performance and response times can suffer. The default growth increment is 1 MB for data files and 10 percent for log files, which can be insufficient for busy systems. Additionally, relying on auto-grow can lead to disk fragmentation because the data or log files are not contiguous on disk; response times may then be longer than necessary, as data is physically scattered across the disk.
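
As a minimal sketch, a fixed growth increment can be set with ALTER DATABASE (SalesDB and SalesDB_Data are hypothetical database and logical file names; substitute your own):

alter database SalesDB
modify file (name = SalesDB_Data, filegrowth = 256MB);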
To size your transaction log files you should:
Shrink the log as much as possible.
Back up (dump) the log every 10-20 minutes.
Monitor the log's growth over a week, noting the maximum size it reaches.
Lastly, dump the log, shrink it one more time to the minimum size, and then manually size it to the observed maximum.
This prevents the log from accumulating too many virtual log files (VLFs), which can lead to performance problems. A sketch of this workflow appears below.
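
Here is a minimal sketch of that workflow, assuming a database named SalesDB with a logical log file named SalesDB_Log and an observed weekly maximum of roughly 2 GB (all names, paths, and sizes are hypothetical):

use SalesDB;
go
backup log SalesDB to disk = N'D:\Backups\SalesDB_log.trn';   -- dump the log
go
dbcc shrinkfile (SalesDB_Log, 1);                             -- shrink to the minimum possible size
go
alter database SalesDB
modify file (name = SalesDB_Log, size = 2048MB);              -- set to the observed maximum
go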

Check server disk space:
xp_fixeddrives returns the amount of free space (in MB) on each local fixed drive.
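
For example:

exec master.dbo.xp_fixeddrives;
-- returns one row per local fixed drive: drive letter and free space in MB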

Check database file space:

select db_name(dbid) as database_name,
       name,
       drive = left(filename, 1),
       filename,
       filegroup = filegroup_name(groupid),
       'size' = convert(nvarchar(15), convert(bigint, size) * 8) + N'KB',     -- size is stored in 8 KB pages
       'maxsize' = (case maxsize
                        when -1 then N'Unlimited'
                        else convert(nvarchar(15), convert(bigint, maxsize) * 8) + N'KB'
                    end),
       'usage' = (case status & 0x40 when 0x40 then 'log only' else 'data only' end)
from sysaltfiles
where dbid = 7       -- example database id; use db_id('YourDatabaseName') for a specific database
order by usage
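
On SQL Server 2005 and later, the same information is exposed through the sys.master_files catalog view; a minimal equivalent sketch (tempdb is used here purely as an example) might look like:

select db_name(database_id) as database_name,
       name,
       physical_name,
       type_desc,                                  -- ROWS or LOG
       convert(bigint, size) * 8 as size_kb,       -- size is stored in 8 KB pages
       case max_size
           when -1 then N'Unlimited'
           else convert(nvarchar(15), convert(bigint, max_size) * 8) + N'KB'
       end as max_size
from sys.master_files
where database_id = db_id('tempdb')
order by type_desc;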
Monitor using Performance Monitor:
Performance Monitor can only be used to monitor log auto-growths – there is no counter for data file growth events. Use it to watch the SQLServer:Databases performance object counters Log Growths, Percent Log Used, and Data File(s) Size (KB).
Performance Monitor lets you watch specific databases or all databases and, if necessary, raise alerts that send net messages or write events to the Windows NT application log, which monitoring software (such as NetIQ) can then pick up and react to.
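
If you are on SQL Server 2005 or later, the same counters can also be read from T-SQL through the sys.dm_os_performance_counters view; a minimal sketch (filtering on tempdb purely as an example) is:

select object_name, counter_name, instance_name, cntr_value
from sys.dm_os_performance_counters
where counter_name in ('Log Growths', 'Percent Log Used', 'Data File(s) Size (KB)')
  and instance_name = 'tempdb';   -- example database; remove this filter to see all databases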

