path: root/items.h
author:    dormando <dormando@rydia.net>  2014-12-27 18:56:55 -0800
committer: dormando <dormando@rydia.net>  2014-12-27 18:56:55 -0800
commit:    6af7aa0b1581b3bfac98e4fd7c67801cd1f8f1fb (patch)
tree:      d555172f17fdc8b17ced744573e7eeeae46b14a1 /items.h
parent:    d2676b4aa5840eef877705b23831f22b6166769e (diff)
download:  memcached-6af7aa0b1581b3bfac98e4fd7c67801cd1f8f1fb.tar.gz
Pause all threads while swapping hash table.

We used to hold a global lock around all modifications to the hash table. That was later changed to wrapping hash table accesses in a global lock during hash table expansion, toggled by notifying each worker thread to switch lock styles. This had a bug that caused trylocks to clobber each other, because the specific item locks were not held while the global lock was in effect: https://code.google.com/p/memcached/issues/detail?id=370

The patch previous to this one uses item locks during hash table expansion. Since the item lock table is always smaller than the hash table, an item lock always covers both an item's old and new buckets. However, we still need to pause all threads during the pointer swap and setup.

This patch pauses all background threads and worker threads, swaps the hash table pointer, then unpauses them. This trades the (possibly significant) slowdown during the hash table copy for a short total hang at the beginning of each expansion. As before, those worried about consistent performance can presize the hash table with `-o hashpower=n`.
Diffstat (limited to 'items.h')
-rw-r--r--  items.h  2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/items.h b/items.h
index d86c776..51ae39e 100644
--- a/items.h
+++ b/items.h
@@ -36,3 +36,5 @@ int start_item_crawler_thread(void);
int stop_item_crawler_thread(void);
int init_lru_crawler(void);
enum crawler_result_type lru_crawler_crawl(char *slabs);
+void lru_crawler_pause(void);
+void lru_crawler_resume(void);