Tuesday 21 September 2010

Basic algorithm for the "auto update statistics" process

This is an excerpt from one of the support calls I read recently; I thought it would be beneficial to share it.

========================

Basically, the SQL Server database engine will not update statistics every time a record is changed, added or deleted. There is a change threshold for each table, and this is what triggers the statistics update. So if you have a customer with large transaction volumes, you could find that this update process is not triggered frequently enough.

The following describes the basic algorithm for the "auto update statistics" process:
  1. If the cardinality (in most cases this means row count) for a table is less than six and the table is in the tempdb database, auto update with every six modifications to the table.
  2. If the cardinality for a table is greater than 6, but less than or equal to 500, update statistics every 500 modifications.
  3. If the cardinality for a table is greater than 500, update statistics when (500 + 20 percent of the table's row count) changes have occurred (see the example after this list).
  4. For table variables, cardinality changes do not trigger auto update statistics.
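
To make these thresholds concrete: under rule 3, a table with 1,000,000 rows will only have its statistics refreshed after 500 + 200,000 = 200,500 modifications. The query below is a minimal sketch of how you could compare each table's modification counter against that threshold. It assumes the deprecated sys.sysindexes catalog view, whose rowmodctr column is only an approximation on SQL Server 2005 and later.

    -- Sketch: compare modifications since the last statistics update
    -- against the approximate auto-update threshold for each user table.
    -- rowmodctr comes from the deprecated sys.sysindexes view and is
    -- approximate on SQL Server 2005 and later.
    SELECT o.name      AS table_name,
           i.rowcnt    AS row_count,
           i.rowmodctr AS modifications,
           CASE
               WHEN i.rowcnt <= 500 THEN 500
               ELSE 500 + (i.rowcnt / 5)   -- 500 + 20 percent of the table
           END         AS approx_threshold
    FROM sys.sysindexes AS i
    JOIN sys.objects    AS o ON o.object_id = i.id
    WHERE i.indid IN (0, 1)   -- one row per table: heap (0) or clustered index (1)
      AND o.type = 'U';       -- user tables only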

Therefore you may find that the statistics on large tables are only updated when the row count increases by 20 percent. But this does not really take account of updates, which do not change the row count. Therefore you may find that if you make lots of updates, the statistics on the table are not in tune with the updated data. So in this case, updating the statistics manually from a script would help the query optimiser when the process is run.

========================
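
As a footnote to the excerpt: the manual refresh it recommends can be done with plain T-SQL. The sketch below assumes a hypothetical table named dbo.Orders, and FULLSCAN is just one sampling option among several.

    -- Refresh statistics for a single table (dbo.Orders is a hypothetical
    -- name), sampling every row rather than the default sample rate.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

    -- Or refresh statistics for every table in the current database.
    EXEC sp_updatestats;

sp_updatestats skips statistics that have had no modifications since the last update, so it is usually cheap enough to run from a scheduled maintenance job.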
