author    dormando <dormando@rydia.net>  2015-09-29 02:48:02 -0700
committer dormando <dormando@rydia.net>  2015-11-18 23:14:35 -0800
commit    d6e96467051720197abf9eab8ca85d153ae06610 (patch)
tree      85ec27c356f2ef6fac98d30670e78ae16d84aa8d /slabs.h
parent    d5185f9c25e346417d0de1c8d704d945d76ea474 (diff)
download  memcached-d6e96467051720197abf9eab8ca85d153ae06610.tar.gz
first half of new slab automover
If any slab classes have more than two pages worth of free chunks, attempt to free one page back to a global pool. Create new concept of a slab page move destination of "0", which is a global page pool. Pages can be re-assigned out of that pool during allocation.

Combined with item rescuing from the previous patch, we can safely shuffle pages back to the reassignment pool as chunks free up naturally. This should be a safe default going forward. Users should be able to decide to free or move pages based on eviction pressure as well. This is coming up in another commit.

This also fixes a calculation of the NOEXP LRU size, and completely removes the old slab automover thread. Slab automove decisions will now be part of the lru maintainer thread.
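As a rough illustration of the heuristic described above, the decision reduces to comparing a class's free chunks against two pages worth of chunks. This is a minimal sketch only; the struct, constant and function names are hypothetical and do not match memcached's actual code:

    /* Hypothetical sketch of the page-release heuristic; the real
     * implementation differs in names, locking and bookkeeping. */
    #define GLOBAL_PAGE_POOL 0   /* the new "class 0" global page pool */

    struct slabclass_info {
        unsigned int free_chunks;     /* chunks currently free in this class */
        unsigned int chunks_perslab;  /* chunks that fit in one slab page */
    };

    /* Return the destination class if a page should be moved, or -1 if not. */
    static int pick_automove_dst(const struct slabclass_info *cls) {
        /* More than two pages worth of free chunks: hand one page back to
         * the global pool so it can be re-assigned during allocation. */
        if (cls->free_chunks > cls->chunks_perslab * 2)
            return GLOBAL_PAGE_POOL;
        return -1;
    }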
Diffstat (limited to 'slabs.h')
-rw-r--r--  slabs.h | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/slabs.h b/slabs.h
index 1eac5c8..fb29cfa 100644
--- a/slabs.h
+++ b/slabs.h
@@ -34,7 +34,7 @@ bool get_stats(const char *stat_type, int nkey, ADD_STAT add_stats, void *c);
void slabs_stats(ADD_STAT add_stats, void *c);
/* Hints as to freespace in slab class */
-unsigned int slabs_available_chunks(unsigned int id, bool *mem_flag, unsigned int *total_chunks);
+unsigned int slabs_available_chunks(unsigned int id, bool *mem_flag, unsigned int *total_chunks, unsigned int *chunks_perslab);
int start_slab_maintenance_thread(void);
void stop_slab_maintenance_thread(void);
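For context, a minimal usage sketch of the widened accessor. Only the prototype above comes from the patch; the caller function and variable names are illustrative, and the assumption is that the return value reports the class's currently free chunks:

    #include <stdbool.h>
    #include "slabs.h"

    /* Illustrative caller, e.g. the lru maintainer gathering per-class hints. */
    static void inspect_class(unsigned int id) {
        bool mem_limit_reached = false;
        unsigned int total_chunks = 0;
        unsigned int chunks_perslab = 0;
        unsigned int free_chunks = slabs_available_chunks(id, &mem_limit_reached,
                                                          &total_chunks,
                                                          &chunks_perslab);
        /* free_chunks, total_chunks and chunks_perslab can then feed the
         * page-release heuristic sketched after the commit message above. */
        (void)free_chunks;
    }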