path: root/storage/innobase/include/buf0flu.h
author     Sunny Bains <Sunny.Bains@Oracle.Com>    2011-02-22 16:04:08 +1100
committer  Sunny Bains <Sunny.Bains@Oracle.Com>    2011-02-22 16:04:08 +1100
commit     b3c9cc6f212c650a18ca191a8889e2af68ffc9d6 (patch)
tree       e08735bb00ffdf755168e1ab09f21f26b6f19441 /storage/innobase/include/buf0flu.h
parent     be59ca8275a3e0c95001af3077e1b4f6dc736739 (diff)
download   mariadb-git-b3c9cc6f212c650a18ca191a8889e2af68ffc9d6.tar.gz
Bug #11766227: InnoDB purge lag much worse for 5.5.8 versus 5.1
Bug #11766501: Multiple RBS break the get rseg with minimum trx_t::no code during purge

Bug #59291 changes:

The main problem is that truncating the UNDO log at the completion of every
trx_purge() call becomes expensive as the number of rollback segments is
increased. We now truncate only after a configurable number of pages. The
innodb_purge_batch_size parameter is used to control when InnoDB does the
actual truncate: the truncate is done once every 128 (i.e. TRX_SYS_N_RSEGS)
iterations. In other words, we truncate after purging
128 * innodb_purge_batch_size pages; the smaller the batch size, the sooner
we truncate.

Introduce a new parameter that controls how many rollback segments are used
for storing UNDO information. This is really step 1 in giving the user
complete control over rollback space management.

New parameters:

 i) innodb_rollback_segments = number of rollback segments to use (default
    is now 128). Dynamic parameter, can be changed at any time. Currently
    there is little benefit in changing it from the default.

Optimisations in the patch:

 i.  Change the O(n) behaviour of trx_rseg_get_on_id() to O(log n).
     Backported from 5.6. Refactor some of the binary heap code and create
     a new include/ut0bh.ic file.
 ii. Avoid truncating the rollback segments after every purge.

Related changes that were moved to a separate patch:

 i.  Purge should not do any flushing, only wait for space to be freed, so
     that it only purges records unless it is held up by a long-running
     transaction that prevents it from progressing.
 ii. Give the purge thread preference over transactions when acquiring the
     rseg->mutex during commit. This avoids purge blocking unnecessarily
     when getting the next rollback segment to purge.

Bug #11766501 changes:

Add the rseg to the min binary heap under the cover of the kernel mutex and
the binary heap mutex. This ensures the ordering of the min binary heap.

The two changes have to be committed together because they share the same
code that fixes both issues.

rb://567 Approved by: Inaam Rana.
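The batched truncate is the core of the Bug #59291 part of this change. The
following is a minimal, standalone C sketch of that batching idea only; the
names purge_batch(), do_truncate(), purge_run_count and batch_size are
hypothetical stand-ins, not the actual InnoDB symbols (the real change lives
inside trx_purge() and keys the decision off TRX_SYS_N_RSEGS = 128).

    #include <stdio.h>

    /* Hypothetical stand-in for TRX_SYS_N_RSEGS. */
    #define N_RSEGS 128

    /* Number of purge batches completed so far. */
    static unsigned long purge_run_count;

    /* Placeholder for the (expensive) UNDO truncation step. */
    static void do_truncate(void)
    {
        printf("truncating undo after %lu purge batches\n", purge_run_count);
    }

    /* One purge call: process up to batch_size pages of undo records,
       but only truncate once every N_RSEGS calls, i.e. after roughly
       N_RSEGS * batch_size pages have been purged. */
    static void purge_batch(unsigned long batch_size)
    {
        /* ... purge up to batch_size pages of undo records here ... */
        (void) batch_size;

        if (++purge_run_count % N_RSEGS == 0) {
            do_truncate();
        }
    }

    int main(void)
    {
        unsigned long i;

        /* With batch_size = 20 the truncate fires after 128 * 20 pages,
           four times over these 512 batches. */
        for (i = 0; i < 512; i++) {
            purge_batch(20);
        }

        return 0;
    }

A smaller batch_size means fewer pages accumulate between truncates, which is
the "smaller batch size, quicker truncate" behaviour described in the message.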
Diffstat (limited to 'storage/innobase/include/buf0flu.h')
-rw-r--r--  storage/innobase/include/buf0flu.h  |  13
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/storage/innobase/include/buf0flu.h b/storage/innobase/include/buf0flu.h
index 28d9b90e755..ae27f5dab0e 100644
--- a/storage/innobase/include/buf0flu.h
+++ b/storage/innobase/include/buf0flu.h
@@ -138,7 +138,18 @@ UNIV_INTERN
void
buf_flush_wait_batch_end(
/*=====================*/
- buf_pool_t* buf_pool, /*!< buffer pool instance */
+ buf_pool_t* buf_pool, /*!< in: buffer pool instance */
+ enum buf_flush type); /*!< in: BUF_FLUSH_LRU
+ or BUF_FLUSH_LIST */
+/******************************************************************//**
+Waits until a flush batch of the given type ends. This is called by
+a thread that only wants to wait for a flush to end but doesn't do
+any flushing itself. */
+UNIV_INTERN
+void
+buf_flush_wait_batch_end_wait_only(
+/*===============================*/
+ buf_pool_t* buf_pool, /*!< in: buffer pool instance */
enum buf_flush type); /*!< in: BUF_FLUSH_LRU
or BUF_FLUSH_LIST */
/********************************************************************//**
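The hunk above adds the missing "in:" direction tag to the existing
buf_flush_wait_batch_end() parameter comment and declares the new
buf_flush_wait_batch_end_wait_only() function. The fragment below is a
hypothetical illustration of how the two declarations are meant to be used,
based only on the header comments shown; wait_for_lru_batch_to_finish() is an
invented helper, and the fragment assumes it is compiled inside the InnoDB
source tree with its usual headers.

#include "buf0buf.h"	/* buf_pool_t */
#include "buf0flu.h"	/* declarations added in this hunk */

/* Hypothetical helper: a thread that merely needs an in-progress LRU
batch to finish, without doing any flushing of its own, uses the new
wait-only variant. */
static
void
wait_for_lru_batch_to_finish(
/*=========================*/
	buf_pool_t*	buf_pool)	/*!< in: buffer pool instance */
{
	/* Blocks until the LRU flush batch currently running on this
	buffer pool instance (started by some other thread) has ended;
	it never starts or joins a batch itself. */
	buf_flush_wait_batch_end_wait_only(buf_pool, BUF_FLUSH_LRU);
}

/* A thread that takes part in flushing itself keeps calling the original
buf_flush_wait_batch_end(buf_pool, BUF_FLUSH_LIST) as before. */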