author    dormando <dormando@rydia.net>  2019-07-26 01:32:26 -0700
committer dormando <dormando@rydia.net>  2019-07-26 16:26:38 -0700
commit    c630eeb7f4502a9d96cbec7cc9bda7ce1b1f6476 (patch)
tree      3b054bd5e72b0e4da7f66455bdc01d97a9814f6b
parent    78eb7701e0823643d693c1a7a6fd8a0c75db74d8 (diff)
move mem_requested from slabs.c to items.c
mem_requested is an oddball counter: it's the total number of bytes "actually requested" from the slab's caller. It's mainly used for a stats counter, alerting the user that the slab factor may not be efficient if the gap between total_chunks * chunk_size and mem_requested is large.

However, since chunked items were added it's _also_ used to help the LRU balance itself. The total number of bytes used in the class vs. the total number of bytes in a sub-LRU is used to judge whether to move items between sub-LRUs. This is a layer violation, forcing slabs.c to know more about how items work, and forcing EXTSTORE to calculate item sizes from headers.

Further, it turns out it wasn't necessary for item allocation: if we need to evict an item we _always_ pull from COLD_LRU or force a move from HOT_LRU, so the total doesn't matter there.

The total does matter in the LRU maintainer background thread. However, that thread caches mem_requested to avoid hitting the slab lock too frequently. Since sizes_bytes[] within items.c is generally redundant with mem_requested, we now total sizes_bytes[] from each sub-LRU before starting a batch of LRU juggles.

This simplifies the code a bit, reduces the layer violations in slabs.c slightly, and actually speeds up some hot paths, as a number of branches and operations are removed completely.

This also fixes an issue I was having with the restartable memory branch :) recalculating p->requested while keeping a clean API is painful and slow.

NOTE: This will vary a bit compared to what mem_requested originally did, mostly for large chunked items. For items which fit inside a single slab chunk, the stat is identical. However, items constructed by chaining chunks will have a single large "nbytes" value and end up in the highest slab class. Chunked items can be capped with chunks from smaller slab classes; you will see utilization of those chunks, but no matching increase in mem_requested for them. I'm still thinking this through, but this is probably acceptable. Large chunked items should be accounted for separately, perhaps with new counters, so they can be discounted from normal calculations.
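To make the new accounting concrete, here is a minimal sketch of totaling sizes_bytes[] across a class's sub-LRUs before a juggle batch. The constant values, the lru_total_bytes() helper, and the array layout are assumptions modeled on the description above, not code lifted from this commit; the real maintainer thread would also take the per-LRU lock around each read.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical sub-LRU layout: each slab class owns HOT/WARM/COLD
 * sub-LRUs addressed as (class_id | lru_bits). Values are illustrative. */
#define HOT_LRU   0
#define WARM_LRU  64
#define COLD_LRU  128
#define POWER_LARGEST 256

/* Per-sub-LRU byte totals, normally maintained on item link/unlink. */
static uint64_t sizes_bytes[POWER_LARGEST];

/* Sum the sub-LRU byte counters for one slab class, replacing the old
 * cached mem_requested fetched from slabs.c under the slab lock. */
static uint64_t lru_total_bytes(int slabs_clsid) {
    const int lrus[] = { HOT_LRU, WARM_LRU, COLD_LRU };
    uint64_t total = 0;
    for (int i = 0; i < 3; i++) {
        /* real code would hold lru_locks[slabs_clsid | lrus[i]] here */
        total += sizes_bytes[slabs_clsid | lrus[i]];
    }
    return total;
}

int main(void) {
    /* pretend slab class 5 has items spread across its sub-LRUs */
    sizes_bytes[5 | HOT_LRU]  = 4096;
    sizes_bytes[5 | WARM_LRU] = 8192;
    sizes_bytes[5 | COLD_LRU] = 65536;
    printf("class 5 total bytes: %llu\n",
           (unsigned long long)lru_total_bytes(5));
    return 0;
}

Because the sub-LRU totals live entirely in items.c, the maintainer no longer needs slabs.c to report bytes at all, which is what lets the diff below drop an argument from slabs_available_chunks().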
Diffstat (limited to 'storage.c')
-rw-r--r--  storage.c  |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/storage.c b/storage.c
index 8b7764a..b387e37 100644
--- a/storage.c
+++ b/storage.c
@@ -154,7 +154,7 @@ static void *storage_write_thread(void *arg) {
         // Avoid extra slab lock calls during heavy writing.
         chunks_free = slabs_available_chunks(x, &mem_limit_reached,
-                NULL, NULL);
+                NULL);
         // storage_write() will fail and cut loop after filling write buffer.
         while (1) {
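The hunk above only shows the call site, so as a reference point, here is a hedged reconstruction of the prototype change; parameter names are guesses, since slabs.h is not part of this diff. The out-parameter that reported bytes requested is removed, and callers that passed NULL for it simply drop one argument.

#include <stdbool.h>
#include <stdint.h>

/* before (reconstructed from the call site, commented out so the two
 * declarations don't clash):
 * unsigned int slabs_available_chunks(const unsigned int id, bool *mem_flag,
 *                                     uint64_t *total_bytes,
 *                                     unsigned int *chunks_perslab);
 */

/* after: slabs.c no longer reports total_bytes/mem_requested */
unsigned int slabs_available_chunks(const unsigned int id, bool *mem_flag,
                                    unsigned int *chunks_perslab);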