author     dormando <dormando@rydia.net>    2017-05-29 22:51:20 -0700
committer  dormando <dormando@rydia.net>    2017-05-29 22:51:20 -0700
commit     d184b4b08f92d0171c8a3a3f03fa22e33c6aaa55 (patch)
tree       146248ce6468a6c5c4a6e19b24816adb823c5aa0 /logger.c
parent     7d72a92cf11bbea0aa2b40e277eb9c2c44b8ad41 (diff)
download   memcached-d184b4b08f92d0171c8a3a3f03fa22e33c6aaa55.tar.gz
LRU crawler scheduling improvements
When trying to manually run a crawl, the internal autocrawler is now blocked from restarting for 60 seconds. The internal autocrawl now independently schedules LRUs, and can re-schedule sub-LRUs while others are still running. This should allow much better memory control when some sub-LRUs (such as TEMP or WARM) are small, or slab classes are differently sized.

This also makes the crawler drop its lock frequently, which fixes an issue where a long crawl happening at the same time as a hash table expansion could hang the server until the crawl finished.

Still to improve:
- elapsed time can be wrong in the logger entry
- need to cap the number of entries scanned; with enough set pressure a crawl may never finish
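The lock-dropping behavior described above can be illustrated with a minimal, self-contained C sketch. This is not memcached's actual crawler code; names such as crawler_lock, crawl_one_item, and CRAWLS_PER_LOCK_HOLD are hypothetical and exist only to show the pattern of releasing the lock between batches so other lock holders (e.g. a hash table expansion) can make progress during a long crawl.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical batch size: how many items to examine per lock hold. */
#define CRAWLS_PER_LOCK_HOLD 1000

static pthread_mutex_t crawler_lock = PTHREAD_MUTEX_INITIALIZER;

/* Placeholder for examining/reclaiming one item from a given sub-LRU.
 * Returns true while there is more work left in that LRU. */
static bool crawl_one_item(int lru_id) {
    (void)lru_id;
    return false;
}

/* Crawl one sub-LRU, dropping the lock after every batch so that a
 * long crawl cannot hold it for the whole duration. */
static void crawl_lru(int lru_id) {
    bool more = true;
    while (more) {
        pthread_mutex_lock(&crawler_lock);
        for (int i = 0; i < CRAWLS_PER_LOCK_HOLD; i++) {
            more = crawl_one_item(lru_id);
            if (!more)
                break;
        }
        pthread_mutex_unlock(&crawler_lock);
        /* Lock released here: other threads waiting on crawler_lock
         * can run before the next batch starts. */
    }
}

int main(void) {
    crawl_lru(0);
    return 0;
}
```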
Diffstat (limited to 'logger.c')
-rw-r--r--   logger.c   2
1 file changed, 1 insertion, 1 deletion
diff --git a/logger.c b/logger.c
index 892bea8..b6072f0 100644
--- a/logger.c
+++ b/logger.c
@@ -50,7 +50,7 @@ static const entry_details default_entries[] = {
[LOGGER_ITEM_GET] = {LOGGER_ITEM_GET_ENTRY, 512, LOG_FETCHERS, NULL},
[LOGGER_ITEM_STORE] = {LOGGER_ITEM_STORE_ENTRY, 512, LOG_MUTATIONS, NULL},
[LOGGER_CRAWLER_STATUS] = {LOGGER_TEXT_ENTRY, 512, LOG_SYSEVENTS,
- "type=lru_crawler crawler=%d low_mark=%llu next_reclaims=%llu since_run=%u next_run=%d elapsed=%u examined=%llu reclaimed=%llu"
+ "type=lru_crawler crawler=%d lru=%s low_mark=%llu next_reclaims=%llu since_run=%u next_run=%d elapsed=%u examined=%llu reclaimed=%llu"
}
};
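For context, the diff adds an lru=%s field to the LOGGER_CRAWLER_STATUS format string so the entry reports which sub-LRU was crawled. The following sketch simply fills that format with made-up values via snprintf to show what the emitted line looks like; it does not use memcached's logger machinery, and the values (crawler id, "WARM", counters) are hypothetical.

```c
#include <stdio.h>

int main(void) {
    char out[512];
    /* Updated LOGGER_CRAWLER_STATUS format string from the diff above. */
    const char *fmt = "type=lru_crawler crawler=%d lru=%s low_mark=%llu "
                      "next_reclaims=%llu since_run=%u next_run=%d "
                      "elapsed=%u examined=%llu reclaimed=%llu";
    snprintf(out, sizeof(out), fmt,
             5,                         /* slab class being crawled */
             "WARM",                    /* new field: which sub-LRU ran */
             (unsigned long long)10,    /* low_mark */
             (unsigned long long)0,     /* next_reclaims */
             30u,                       /* since_run */
             60,                        /* next_run */
             2u,                        /* elapsed */
             (unsigned long long)1500,  /* examined */
             (unsigned long long)12);   /* reclaimed */
    puts(out);
    return 0;
}
```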