March 26, 2010

TempDB in SQL Server 2005

Tempdb holds all temporary tables and temporary stored procedures. Tempdb grows automatically as the database engine needs space; it does not shrink on its own. Heavy use of temp tables and other operations can cause its files to grow, but if you stop and restart the service, tempdb is re-created at its last configured size.


Scenarios that can cause tempdb to fill up:

• Any sort that requires more memory than has been allocated to SQL Server is forced to do its work in tempdb; if the sort requires more space than tempdb has available, one of the errors listed below will occur.

• DBCC CHECKDB('any database') performs its work in tempdb -- on larger databases, this can consume quite a bit of space.

• DBCC DBREINDEX or similar DBCC commands run with the 'Sort in tempdb' option set can also potentially fill up tempdb.

• Large result sets involving unions, ORDER BY / GROUP BY, Cartesian joins, outer joins, cursors, temp tables, table variables, and hashing can often require help from tempdb.

• Any transactions left uncommitted and not rolled back can leave objects orphaned in tempdb.

• Queries that show a relatively high value in the WRITES data column of a Profiler trace. Seeing SPOOL operators in the query plan is another sign that SQL Server is performing large numbers of writes to tempdb: a SPOOL operator means that SQL Server is "spooling" an interim result set to a secondary location (tempdb) so that it can do something with that data later in the plan.

• Use of an ODBC DSN with the option 'create temporary stored procedures' set can leave objects in tempdb for the life of the connection.
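When one of these scenarios is suspected, a quick way to see which category of object is consuming the space is to query the sys.dm_db_file_space_usage DMV, which breaks tempdb usage down by user objects, internal objects, and the version store. This is a sketch; page counts are 8 KB pages, so the query converts them to MB.

```sql
-- Break down tempdb space usage by consumer.
-- Counts are in 8 KB pages; multiply by 8 and divide by 1024 for MB.
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_space_mb
FROM sys.dm_db_file_space_usage;
```

A large version-store figure points at long-running transactions under row versioning, while large internal-object usage points at sorts, hashes, and spools.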

Identifying a tempdb bottleneck using Perfmon counters:

Use the following performance counters to monitor tempdb issues in SQL Server.

• Temp Tables Creation Rate (SQL Server 2005): The number of temporary tables or variables created per second. The value depends on the workload. If your workload creates many temporary tables and the temporary table cache cannot be used, you may see high values. In such cases, investigate why the temporary table cache cannot be used; this might be by design in the application. For example, if there is explicit DDL after the temporary table is created, the table cannot be cached. In a perfect scenario, with 100% temporary table cache hits, this counter would be 0.

• Temp Tables For Destruction (SQL Server 2005): The number of temporary tables or variables waiting to be destroyed by the cleanup system thread. This value should be 0. Temporary spikes are expected, but should be rare; they can happen when the temporary table cache or the procedure cache is being cleaned up.
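Both counters are also exposed inside SQL Server through the sys.dm_os_performance_counters DMV, so you can check them from a query window without opening Perfmon. A minimal sketch (note that rate-type counters are stored as cumulative raw values in this DMV, so to get a per-second rate you would sample twice and divide the delta by the elapsed seconds):

```sql
-- Read the temp-table counters from the General Statistics object.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Temp Tables Creation Rate',
                       'Temp Tables For Destruction');
```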

Fixing tempdb full issue:

If tempdb is full, you will see one of the following errors:
• 1101 or 1105: A session has to allocate more space in tempdb in order to continue.
• 3959: The version store is full.
• 3967: The version store has been forced to shrink because tempdb is full.
• 3958 or 3966: A transaction is unable to find a required version record in tempdb.
Restarting SQL Server re-creates tempdb from scratch, returning it to its configured size. In and of itself, this solution is only effective in the very short term; the application and/or T-SQL code that caused tempdb to grow once will likely cause it to grow again.

To shrink tempdb, you can consider using DBCC SHRINKDATABASE or DBCC SHRINKFILE (for the data or the log file).

Shrinking files is not a recommended practice, since these files will probably grow again; in addition, shrink operations cause data fragmentation. Performing a shrink operation on tempdb has the following limitation:
• Shrink operations do not shrink the version store or internal objects. This means that you will not recover space if an internal-object or version-store page would need to be moved.
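For illustration, a shrink might look like the sketch below. The logical file names 'tempdev' and 'templog' are the SQL Server defaults and the size targets are examples; verify the names on your instance first, and remember that the targets are best-effort.

```sql
USE tempdb;
GO
-- Confirm the logical file names before shrinking:
SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;
GO
-- Shrink the whole database, leaving 10% free space:
DBCC SHRINKDATABASE (tempdb, 10);
GO
-- Or shrink individual files to target sizes in MB:
DBCC SHRINKFILE (tempdev, 100);  -- data file to ~100 MB
DBCC SHRINKFILE (templog, 50);   -- log file to ~50 MB
```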

Identifying the session that might be causing the issue, and fixing it by killing that session:

If you can't shrink the log, it might be due to an uncommitted transaction. Check whether you have any long-running transactions with the following command:

DBCC OPENTRAN -- or DBCC OPENTRAN('tempdb')

Check the oldest transaction (if any is returned) and note its SPID (there will be a line starting with 'SPID (Server Process ID) : '). Use that SPID in the following:

DBCC INPUTBUFFER (<spid>)

This will tell you at least a portion of the last SQL command executed by this SPID, and will help you determine if you want to end this process with:

KILL <spid>
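Beyond open transactions, you can also rank sessions by how much tempdb space they have allocated using the sys.dm_db_session_space_usage DMV, then inspect and (if justified) kill the worst offender. A sketch, where SPID 53 is a made-up example:

```sql
-- Top tempdb consumers by session (counts are 8 KB pages).
SELECT TOP 5
    session_id,
    user_objects_alloc_page_count,
    internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id > 50          -- skip system sessions
ORDER BY user_objects_alloc_page_count
       + internal_objects_alloc_page_count DESC;

-- Inspect the last statement a suspect session ran:
DBCC INPUTBUFFER (53);
-- Then, only if you are sure, terminate it:
-- KILL 53;
```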

Batch large, heavily logged operations (especially deletes) that *might* overflow into tempdb into reasonable 'chunks' of rows, especially when joins are involved.
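A common pattern for such chunking is a loop that deletes a fixed number of rows per iteration using DELETE TOP (n), which is available from SQL Server 2005 onward. The table name and predicate below are placeholders for illustration; each iteration commits as its own small transaction, keeping log and tempdb pressure bounded.

```sql
-- Delete in batches of 10,000 rows until no rows match.
-- dbo.BigTable and the date predicate are hypothetical examples.
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable
    WHERE CreatedDate < '2009-01-01';
    SET @rows = @@ROWCOUNT;
END
```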


