author    dormando <dormando@rydia.net>  2015-09-29 02:48:02 -0700
committer dormando <dormando@rydia.net>  2015-11-18 23:14:35 -0800
commit    d6e96467051720197abf9eab8ca85d153ae06610 (patch)
tree      85ec27c356f2ef6fac98d30670e78ae16d84aa8d /items.h
parent    d5185f9c25e346417d0de1c8d704d945d76ea474 (diff)
first half of new slab automover
If any slab classes have more than two pages worth of free chunks, attempt to free one page back to a global pool.

Create new concept of a slab page move destination of "0", which is a global page pool. Pages can be re-assigned out of that pool during allocation.

Combined with item rescuing from the previous patch, we can safely shuffle pages back to the reassignment pool as chunks free up naturally. This should be a safe default going forward. Users should be able to decide to free or move pages based on eviction pressure as well; that is coming in another commit.

This also fixes a calculation of the NOEXP LRU size, and completely removes the old slab automover thread. Slab automove decisions will now be part of the LRU maintainer thread.
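For orientation, the decision described in the first paragraph can be read as a simple per-class threshold check: give a page back only when a class holds more than two pages worth of free chunks. The sketch below is illustrative only and does not reproduce memcached's internals; the names slab_class_t, free_chunks, chunks_per_page and SLAB_GLOBAL_PAGE_POOL are invented for this example.

    /* Illustrative sketch of the ">2 free pages" rule described above --
     * not memcached's actual code. All names here are assumptions. */
    #include <stdbool.h>
    #include <stdio.h>

    #define SLAB_GLOBAL_PAGE_POOL 0   /* hypothetical id for the "class 0" global page pool */

    typedef struct {
        unsigned int free_chunks;      /* chunks currently unused in this class */
        unsigned int chunks_per_page;  /* chunks that fit in one slab page */
    } slab_class_t;

    /* A class surrenders a page only when it holds more than two full pages
     * worth of free chunks, so it always keeps some slack for new items. */
    static bool should_release_page(const slab_class_t *p) {
        return p->free_chunks > 2 * p->chunks_per_page;
    }

    int main(void) {
        slab_class_t cls = { .free_chunks = 300, .chunks_per_page = 100 };
        if (should_release_page(&cls))
            printf("release one page to pool %d\n", SLAB_GLOBAL_PAGE_POOL);
        return 0;
    }

In the real patch the released page would land in the global pool (destination "0") and could later be re-assigned to any class during allocation, rather than being freed outright.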
Diffstat (limited to 'items.h')
-rw-r--r--  items.h | 1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/items.h b/items.h
index f47de8f..4e492b4 100644
--- a/items.h
+++ b/items.h
@@ -27,7 +27,6 @@ item *do_item_get(const char *key, const size_t nkey, const uint32_t hv);
item *do_item_touch(const char *key, const size_t nkey, uint32_t exptime, const uint32_t hv);
void item_stats_reset(void);
extern pthread_mutex_t lru_locks[POWER_LARGEST];
-void item_stats_evictions(uint64_t *evicted);
enum crawler_result_type {
CRAWLER_OK=0, CRAWLER_RUNNING, CRAWLER_BADCLASS, CRAWLER_NOTSTARTED