author     Sunny Bains <Sunny.Bains@Oracle.Com>    2011-02-22 16:04:08 +1100
committer  Sunny Bains <Sunny.Bains@Oracle.Com>    2011-02-22 16:04:08 +1100
commit     b3c9cc6f212c650a18ca191a8889e2af68ffc9d6 (patch)
tree       e08735bb00ffdf755168e1ab09f21f26b6f19441 /storage/innobase/sync/sync0sync.c
parent     be59ca8275a3e0c95001af3077e1b4f6dc736739 (diff)
download   mariadb-git-b3c9cc6f212c650a18ca191a8889e2af68ffc9d6.tar.gz
Bug #11766227: InnoDB purge lag much worse for 5.5.8 versus 5.1
Bug #11766501: Multiple RBS break the get rseg with minimum trx_t::no code during purge
Bug# 59291 changes:
The main problem is that truncating the UNDO log at the end of every
trx_purge() call becomes expensive as the number of rollback segments grows.
We now truncate only after a configurable number of pages. The
innodb_purge_batch_size parameter controls when InnoDB does the actual
truncate: the truncate is done once every 128 (i.e. TRX_SYS_N_RSEGS)
iterations, in other words after purging 128 * innodb_purge_batch_size pages.
The smaller the batch size, the sooner we truncate.
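A minimal sketch of that batching idea, in C. The names here
(purge_maybe_truncate, n_pages_since_truncate, purge_truncate) and the
standalone counter are hypothetical illustrations, not the identifiers used in
trx0purge.c:

#include <stdio.h>

/* Hypothetical stand-ins; not the real InnoDB identifiers. */
#define TRUNCATE_EVERY_N_BATCHES	128	/* i.e. TRX_SYS_N_RSEGS */

static unsigned long	n_pages_since_truncate = 0;

static void
purge_truncate(void)
{
	/* Stand-in for the (expensive) UNDO history truncation. */
	printf("truncating UNDO history\n");
}

/* Called once per purge batch with the number of pages just purged
and the configured innodb_purge_batch_size. */
static void
purge_maybe_truncate(unsigned long n_pages_purged, unsigned long batch_size)
{
	n_pages_since_truncate += n_pages_purged;

	/* Truncate roughly once per 128 * innodb_purge_batch_size pages
	instead of at the end of every trx_purge() call. */
	if (n_pages_since_truncate >= TRUNCATE_EVERY_N_BATCHES * batch_size) {
		purge_truncate();
		n_pages_since_truncate = 0;
	}
}

int
main(void)
{
	unsigned long	i;

	/* Simulate 1000 purge batches of 20 pages each. */
	for (i = 0; i < 1000; i++) {
		purge_maybe_truncate(20, 20);
	}

	return(0);
}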
Introduce a new parameter that lets the user choose how many rollback segments
to use for storing UNDO information. This is really step 1 in giving the user
complete control over rollback space management.
New parameters:
i) innodb_rollback_segments = number of rollback segments to use
(default is now 128). This is a dynamic parameter and can be changed at any time.
Currently there is little benefit in changing it from the default.
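A hedged sketch of how such a cap might be consulted when handing a rollback
segment to a new transaction; the array, counter and function names below are
illustrative only, not the identifiers used by the patch:

#include <stddef.h>

#define TRX_SYS_N_RSEGS	128

typedef struct rseg_struct {
	unsigned long	id;
} rseg_t;

/* All physically created rollback segments. */
static rseg_t		rseg_array[TRX_SYS_N_RSEGS];

/* Value of the (dynamic) innodb_rollback_segments setting. */
static unsigned long	n_rollback_segments = TRX_SYS_N_RSEGS;

/* Hand out rollback segments round-robin, but only from the first
n_rollback_segments slots. Lowering the setting at runtime simply
shrinks the pool used by new transactions. */
static rseg_t*
trx_assign_rseg_sketch(void)
{
	static unsigned long	next = 0;

	return(&rseg_array[next++ % n_rollback_segments]);
}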
Optimisations in the patch.
i. Change the O(n) behaviour of trx_rseg_get_on_id() to O(log n);
an illustrative sketch follows this list.
Backported from 5.6. Refactor some of the binary heap code.
Create a new include/ut0bh.ic file.
ii. Avoid truncating the rollback segments after every purge.
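As a rough illustration of the O(log n) lookup in optimisation (i): a binary
search over rollback segments kept sorted by id. This is only a sketch under
that assumption; the actual trx_rseg_get_on_id() and the ut0bh.ic binary heap
code (which orders purge work by trx_t::no) differ in detail:

#include <stddef.h>

#define TRX_SYS_N_RSEGS	128

typedef struct rseg_struct {
	unsigned long	id;
} rseg_t;

/* Hypothetical array of rollback segments, kept sorted by id. */
static rseg_t*		rseg_list[TRX_SYS_N_RSEGS];
static unsigned long	n_rsegs = 0;

/* O(log n) lookup by rollback segment id, replacing a linear scan. */
static rseg_t*
rseg_get_on_id_sketch(unsigned long id)
{
	unsigned long	low = 0;
	unsigned long	high = n_rsegs;

	while (low < high) {
		unsigned long	mid = low + (high - low) / 2;

		if (rseg_list[mid]->id == id) {
			return(rseg_list[mid]);
		} else if (rseg_list[mid]->id < id) {
			low = mid + 1;
		} else {
			high = mid;
		}
	}

	return(NULL);	/* no rollback segment with this id */
}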
Related changes that were moved to a separate patch:
i. Purge should not do any flushing; it should only wait for space to become
free, so that it is limited to purging records unless a long running
transaction prevents it from progressing.
ii. Give the purge thread preference over transactions when acquiring the
rseg->mutex during commit. This avoids purge blocking unnecessarily
when getting the next rollback segment to purge.
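One possible shape of change (ii), sketched with plain pthreads rather than
InnoDB's mutex API. This is not the code from that separate patch, only an
illustration of letting purge win contention on a shared mutex:

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t	rseg_mutex = PTHREAD_MUTEX_INITIALIZER;

/* The purge thread takes the mutex unconditionally and therefore
queues up for it immediately. */
static void
purge_lock_rseg(void)
{
	pthread_mutex_lock(&rseg_mutex);
}

/* A committing transaction first tries without blocking and yields on
contention, so a waiting purge thread tends to acquire the mutex first. */
static void
trx_commit_lock_rseg(void)
{
	while (pthread_mutex_trylock(&rseg_mutex) != 0) {
		sched_yield();
	}
}

static void
unlock_rseg(void)
{
	pthread_mutex_unlock(&rseg_mutex);
}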
Bug #11766501 changes:
Add the rseg to the min binary heap under the cover of the kernel mutex and
the binary heap mutex. This ensures the ordering of the min binary heap.
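A minimal sketch of that ordering guarantee, assuming hypothetical names
(purge_heap_push, rseg_elem, bh_mutex): the element is published to the
min-heap only while both mutexes are held, so heap order by trx_t::no matches
the order in which transaction numbers were assigned:

#include <pthread.h>

static pthread_mutex_t	kernel_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t	bh_mutex     = PTHREAD_MUTEX_INITIALIZER;

struct rseg_elem {
	unsigned long	trx_no;	/* smallest undo record no in the rseg */
	void*		rseg;
};

/* Stand-in for the ut0bh.ic min-heap insert, keyed on trx_no. */
static void
purge_heap_push(const struct rseg_elem* elem)
{
	(void) elem;
}

/* Assign the commit number and push the rseg while holding both the
kernel mutex and the heap mutex, so elements enter the heap in
trx_no order. */
static void
trx_add_rseg_to_purge_queue_sketch(struct rseg_elem* elem,
				   unsigned long next_trx_no)
{
	pthread_mutex_lock(&kernel_mutex);

	elem->trx_no = next_trx_no;

	pthread_mutex_lock(&bh_mutex);
	purge_heap_push(elem);
	pthread_mutex_unlock(&bh_mutex);

	pthread_mutex_unlock(&kernel_mutex);
}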
The two changes have to be committed together because they share the same
code that fixes both issues.
rb://567 Approved by: Inaam Rana.
Diffstat (limited to 'storage/innobase/sync/sync0sync.c')
-rw-r--r--  storage/innobase/sync/sync0sync.c  14
1 file changed, 10 insertions, 4 deletions
diff --git a/storage/innobase/sync/sync0sync.c b/storage/innobase/sync/sync0sync.c
index 453314f465d..0f6a60ca260 100644
--- a/storage/innobase/sync/sync0sync.c
+++ b/storage/innobase/sync/sync0sync.c
@@ -1,6 +1,6 @@
 /*****************************************************************************
-Copyright (c) 1995, 2010, Innobase Oy. All Rights Reserved.
+Copyright (c) 1995, 2011, Innobase Oy. All Rights Reserved.
 Copyright (c) 2008, Google Inc.
 Portions of this file contain modifications contributed and copyrighted by
@@ -1172,7 +1172,7 @@ sync_thread_add_level(
 	case SYNC_RSEG:
 	case SYNC_TRX_UNDO:
 	case SYNC_PURGE_LATCH:
-	case SYNC_PURGE_SYS:
+	case SYNC_PURGE_QUEUE:
 	case SYNC_DICT_AUTOINC_MUTEX:
 	case SYNC_DICT_OPERATION:
 	case SYNC_DICT_HEADER:
@@ -1239,10 +1239,16 @@ sync_thread_add_level(
 		       || sync_thread_levels_g(array, SYNC_FSP, TRUE));
 		break;
 	case SYNC_TRX_UNDO_PAGE:
+		/* Purge is allowed to read in as many UNDO pages as it likes,
+		there was a bogus rule here earlier that forced the caller to
+		acquire the purge_sys_t::mutex. The purge mutex did not really
+		protect anything because it was only ever acquired by the
+		single purge thread. The purge thread can read the UNDO pages
+		without any covering mutex. */
+
 		ut_a(sync_thread_levels_contain(array, SYNC_TRX_UNDO)
 		     || sync_thread_levels_contain(array, SYNC_RSEG)
-		     || sync_thread_levels_contain(array, SYNC_PURGE_SYS)
-		     || sync_thread_levels_g(array, SYNC_TRX_UNDO_PAGE, TRUE));
+		     || sync_thread_levels_g(array, level - 1, TRUE));
 		break;
 	case SYNC_RSEG_HEADER:
 		ut_a(sync_thread_levels_contain(array, SYNC_RSEG));