path: root/storage/innobase/include/sync0rw.ic
author    Marko Mäkelä <marko.makela@mariadb.com>  2020-06-18 13:38:30 +0300
committer Marko Mäkelä <marko.makela@mariadb.com>  2020-06-18 14:16:01 +0300
commit    5155a300fab85e97217c75e3ba3c3ce78082dd8a (patch)
tree      7180cc802900bbf2c5328835d3de39e78c469615 /storage/innobase/include/sync0rw.ic
parent    cfd3d70ccbbfcf3fdec034be446317741dfae824 (diff)
download  mariadb-git-5155a300fab85e97217c75e3ba3c3ce78082dd8a.tar.gz
MDEV-22871: Reduce InnoDB buf_pool.page_hash contention
The rw_lock_s_lock() calls for the buf_pool.page_hash became a clear bottleneck after MDEV-15053 reduced the contention on buf_pool.mutex. We will replace that use of rw_lock_t with a special implementation that is optimized for memory bus traffic. The hash_table_locks instrumentation will be removed.

buf_pool_t::page_hash: Use a special implementation whose API is compatible with hash_table_t, and store the custom rw-locks directly in buf_pool.page_hash.array, intentionally sharing cache lines with the hash table pointers.

rw_lock: A low-level rw-lock implementation based on std::atomic<uint32_t> where read_trylock() becomes a simple fetch_add(1).

buf_pool_t::page_hash_latch: The specialization of rw_lock for the page_hash.

buf_pool_t::page_hash_latch::read_lock(): Assert that buf_pool.mutex is not being held by the caller.

buf_pool_t::page_hash_latch::write_lock() may be called while not holding buf_pool.mutex. buf_pool_t::watch_set() is such a caller.

buf_pool_t::page_hash_latch::read_lock_wait(), page_hash_latch::write_lock_wait(): The spin loops. These will obey the global parameters innodb_sync_spin_loops and innodb_spin_wait_delay.

buf_pool_t::freed_page_hash: A singly linked list of copies of buf_pool.page_hash that ever existed. The fact that we never free any buf_pool.page_hash.array guarantees that all page_hash_latch that ever existed will remain valid until shutdown.

buf_pool_t::resize_hash(): Replaces buf_pool_resize_hash(). Prepend a shallow copy of the old page_hash to freed_page_hash.

buf_pool_t::page_hash_table::n_cells: Declare as Atomic_relaxed.

buf_pool_t::page_hash_table::lock(): Explain what prevents a race condition with buf_pool_t::resize_hash().
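The commit message names an rw_lock built on a single std::atomic<uint32_t> whose read_trylock() is a plain fetch_add(1). A minimal sketch of that idea, assuming one bit of the word marks the exclusive holder and the remaining bits count shared holders; the bit layout, member names, and method bodies are illustrative assumptions, not copies of the MariaDB sources:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Hypothetical sketch: the most significant bit marks an exclusive (X)
// holder; the low bits count shared (S) holders.
class rw_lock_sketch
{
  static constexpr uint32_t WRITER= 1U << 31;
  std::atomic<uint32_t> lock_word{0};
public:
  // S-latch attempt: a plain fetch_add(1); if an X holder is present,
  // undo the increment and report failure.
  bool read_trylock()
  {
    if (lock_word.fetch_add(1, std::memory_order_acquire) & WRITER)
    {
      lock_word.fetch_sub(1, std::memory_order_relaxed);
      return false;
    }
    return true;
  }
  void read_unlock() { lock_word.fetch_sub(1, std::memory_order_release); }
  // X-latch attempt: succeeds only when no S or X holder is present.
  bool write_trylock()
  {
    uint32_t expected= 0;
    return lock_word.compare_exchange_strong(expected, WRITER,
                                             std::memory_order_acquire);
  }
  void write_unlock()
  { lock_word.fetch_and(~WRITER, std::memory_order_release); }
};
```

On the uncontended path an S-latch costs a single atomic read-modify-write, which is what makes it cheaper in memory bus traffic than the rw_lock_t it replaces.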
Diffstat (limited to 'storage/innobase/include/sync0rw.ic')
-rw-r--r--  storage/innobase/include/sync0rw.ic  18
1 file changed, 2 insertions(+), 16 deletions(-)
diff --git a/storage/innobase/include/sync0rw.ic b/storage/innobase/include/sync0rw.ic
index 7fcac01e5ba..169cbdd9aa5 100644
--- a/storage/innobase/include/sync0rw.ic
+++ b/storage/innobase/include/sync0rw.ic
@@ -226,22 +226,8 @@ rw_lock_lock_word_decr(
 caused by concurrent executions of
 rw_lock_s_lock(). */
-#if 1 /* FIXME: MDEV-22871 Spurious contention between rw_lock_s_lock() */
-
- /* When the number of concurrently executing threads
- exceeds the number of available processor cores,
- multiple buf_pool.page_hash S-latch requests would
- conflict here, mostly in buf_page_get_low(). We should
- implement a simpler rw-lock where the S-latch
- acquisition would be a simple fetch_add(1) followed by
- either an optional load() loop to wait for the X-latch
- to be released, or a fetch_sub(1) and a retry.
-
- For now, we work around the problem with a delay in
- this loop. It helped a little on some systems and was
- reducing performance on others. */
- (void) LF_BACKOFF();
-#endif
+ /* Note: unlike this implementation, rw_lock::read_lock()
+ allows concurrent calls without a spin loop */
 }
 /* A real conflict was detected. */
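The comment removed by this patch proposes the replacement S-latch acquisition: a fetch_add(1) followed by either a load() loop waiting for the X-latch to be released, or a fetch_sub(1) and a retry. A minimal sketch of the fetch_sub-and-retry variant; the names X_FLAG and read_lock and the yield-based wait are illustrative assumptions, not the MariaDB implementation:

```cpp
#include <atomic>
#include <cstdint>
#include <thread>

// Assume the most significant bit of the lock word marks an exclusive
// (X) holder; the low bits count shared (S) holders.
static constexpr uint32_t X_FLAG= 1U << 31;

// S-latch acquisition: fetch_add(1); on conflict with an X holder,
// fetch_sub(1) and retry once the X-latch appears to be released.
inline void read_lock(std::atomic<uint32_t> &word)
{
  while (word.fetch_add(1, std::memory_order_acquire) & X_FLAG)
  {
    // An X holder is present: undo our increment, then wait.
    word.fetch_sub(1, std::memory_order_relaxed);
    while (word.load(std::memory_order_relaxed) & X_FLAG)
      std::this_thread::yield(); // stand-in for the tuned spin/delay loop
  }
}
```

Unlike the LF_BACKOFF() workaround being deleted, concurrent readers here never conflict with each other: only a genuine X holder sends the loop into the wait path.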