path: root/t/slabs-reassign2.t
* Find perl via /usr/bin/env instead of directly (David CARLIER, 2022-08-25, 1 file, -1/+1)
  At least FreeBSD has perl in /usr/local/bin/perl and no symlink by default.
* add a real slab automover algorithm (dormando, 2017-06-23, 1 file, -2/+2)
  Converts the python script to C, more or less.
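
  A generic sketch of what a windowed automove decision can look like, offered only to
  illustrate the idea; it is not the actual algorithm this commit ported, and every name
  in it (class_window, request_page_move, MIN_WINDOWS) is made up for the example.

      #include <stdint.h>
      #include <stdio.h>

      #define MIN_WINDOWS 3          /* windows of sustained pressure before acting */

      struct class_window {
          uint64_t evicted;          /* cumulative evictions at sample time */
          uint64_t free_chunks;      /* free chunks at sample time */
          unsigned starved_windows;  /* consecutive windows evicting with no free chunks */
      };

      /* hypothetical hook into the slab mover, stubbed for the sketch */
      static void request_page_move(int src_class, int dst_class) {
          printf("move one page: class %d -> class %d\n", src_class, dst_class);
      }

      /* called once per window with the previous and current samples */
      static void automove_window(struct class_window prev[], struct class_window cur[],
                                  int nclasses) {
          for (int i = 1; i < nclasses; i++) {
              if (cur[i].evicted > prev[i].evicted && cur[i].free_chunks == 0)
                  cur[i].starved_windows = prev[i].starved_windows + 1;
              else
                  cur[i].starved_windows = 0;

              if (cur[i].starved_windows >= MIN_WINDOWS) {
                  request_page_move(0, i);   /* class 0 is the global page pool */
                  cur[i].starved_windows = 0;
              }
          }
      }
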
* make test not flaky on 32bit OS. (dormando, 2016-07-12, 1 file, -5/+15)
  Slab classes don't necessarily align, so we should maybe use the explicit slab sizing
  feature to make the 32/64 bit tests more stable.
* too close to the line for forcing evictions. (dormando, 2016-07-12, 1 file, -1/+1)
* fix flakiness of slabs-reassign2.t test. (dormando, 2016-07-12, 1 file, -4/+29)
  slabs_automove=2 is fairly dumb and should be replaced with something better; I'm
  guessing a kickoff from the LRU juggler when free chunks hit zero, rather than inline
  in the lru_pull_tail call, would work better (see the sketch below). This test was
  checking rescues/inline evictions and other newer features of the slab rebalancer, so
  now that's the central point rather than racing the slab mover thread. Tests now seem
  reliable.
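
  A minimal sketch contrasting the two trigger points discussed above. The helpers are
  declared only so the sketch is self-contained; free_chunks() and the exact signature
  of slabs_reassign() are assumptions for illustration, not memcached's verbatim
  internals.

      #include <stdint.h>

      uint64_t free_chunks(int clsid);           /* free chunks left in a slab class */
      int slabs_reassign(int src, int dst);      /* src of -1 means "any" class */

      /* slab_automove=2 style (roughly): react inline from the allocation path,
       * racing the background slab mover thread */
      void inline_trigger(int clsid) {
          if (free_chunks(clsid) == 0)
              slabs_reassign(-1, clsid);
      }

      /* speculated alternative: the LRU maintainer ("juggler") thread scans the
       * classes periodically and kicks the mover outside the hot path */
      void lru_juggler_tick(int nclasses) {
          for (int i = 1; i < nclasses; i++) {
              if (free_chunks(i) == 0)
                  slabs_reassign(-1, i);
          }
      }
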
* make slab reassign tests more reliable (dormando, 2015-11-19, 1 file, -13/+20)
  On 32bit hardware with different pointer/slab class sizes, the tests would fail. Made
  a few adjustments to ensure reassign rescues happen and to keep items away from the
  default slab class borders. This makes the tests pass, but reliability needs further
  improvements, i.e. "fill until evicts", counting slab pages for reassignment, etc.
* fix over-inflation of total_malloced (dormando, 2015-11-18, 1 file, -1/+11)
  mem_alloced was getting increased every time a page was assigned out of either malloc
  or the global page pool. This meant total_malloced would inflate forever as pages were
  reused, and once limit_maxbytes was surpassed it would stop attempting to malloc more
  memory. The result is we would stop malloc'ing new memory too early if page reclaim
  happened before the whole thing filled. The test already caused this condition, so
  adding the extra checks was trivial.
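
  A minimal sketch of the accounting fix described above, assuming illustrative names
  (grab_page, get_page_from_global_pool): only count bytes toward the malloc limit when
  memory is genuinely newly allocated, never when a page is recycled from the pool.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdlib.h>

      static size_t mem_alloced = 0;                 /* reported as total_malloced */
      static size_t limit_maxbytes = 64 * 1024 * 1024;

      /* hypothetical: hand back a reusable page, or NULL if the pool is empty */
      static void *get_page_from_global_pool(void) {
          return NULL;   /* stub for the sketch */
      }

      static void *grab_page(size_t page_size) {
          void *page = get_page_from_global_pool();
          if (page != NULL)
              return page;                           /* recycled page: do NOT bump mem_alloced */

          if (mem_alloced + page_size > limit_maxbytes)
              return NULL;                           /* genuinely at the malloc limit */

          page = malloc(page_size);
          if (page != NULL)
              mem_alloced += page_size;              /* only newly malloc'd memory counts */
          return page;
      }
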
* split rebal_evictions into _nomem and _samepage (dormando, 2015-11-18, 1 file, -1/+1)
  It was a gross oversight to put two conditions into the same variable. Now we can tell
  whether we're evicting because we're hitting the bottom of the free memory pool, or
  because we keep trying to rescue items into the same page as the one being cleared.
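
  A small sketch of the split, keeping the counter names from the message; the
  surrounding struct and helper are invented for the example.

      #include <stdint.h>

      struct slab_rebal_stats {
          uint64_t rebal_evictions_nomem;     /* no free memory anywhere to rescue into */
          uint64_t rebal_evictions_samepage;  /* rescue would land in the page being cleared */
      };

      enum evict_reason { EVICT_NOMEM, EVICT_SAMEPAGE };

      void count_rebal_eviction(struct slab_rebal_stats *s, enum evict_reason why) {
          if (why == EVICT_NOMEM)
              s->rebal_evictions_nomem++;
          else
              s->rebal_evictions_samepage++;
      }
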
* first half of new slab automover (dormando, 2015-11-18, 1 file, -1/+25)
  If any slab class has more than two pages worth of free chunks, attempt to free one
  page back to a global pool. Creates a new concept of a slab page move destination of
  "0", which is a global page pool. Pages can be re-assigned out of that pool during
  allocation.
  Combined with item rescuing from the previous patch, we can safely shuffle pages back
  to the reassignment pool as chunks free up naturally. This should be a safe default
  going forward. Users should be able to decide to free or move pages based on eviction
  pressure as well; that is coming up in another commit.
  This also fixes a calculation of the NOEXP LRU size, and completely removes the old
  slab automover thread. Slab automove decisions will now be part of the lru maintainer
  thread.
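
  A minimal sketch of the page-reclaim rule described above: any class holding more than
  two pages' worth of free chunks offers one page back to the global pool (destination
  class 0). The struct and hook names are illustrative, not memcached's actual internals.

      #include <stdint.h>
      #include <stdio.h>

      #define GLOBAL_PAGE_POOL 0     /* the "class 0" destination described above */

      struct slabclass_view {
          uint64_t free_chunks;      /* currently free chunks in the class */
          uint64_t chunks_per_page;  /* chunks that fit in one slab page */
      };

      /* hypothetical hook asking the mover for one page, stubbed for the sketch */
      static void request_page_move(int src_class, int dst_class) {
          printf("reclaim one page: class %d -> class %d\n", src_class, dst_class);
      }

      /* any class with more than two pages' worth of free chunks gives one back */
      static void reclaim_spare_pages(struct slabclass_view cls[], int nclasses) {
          for (int i = 1; i < nclasses; i++) {
              if (cls[i].free_chunks > 2 * cls[i].chunks_per_page)
                  request_page_move(i, GLOBAL_PAGE_POOL);
          }
      }
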
* slab mover rescues valid items with free chunks (dormando, 2015-11-18, 1 file, -3/+6)
  During a slab page move, items are typically ejected regardless of their validity.
  Now, if an item is valid and free chunks are available in the same slab class, copy
  the item over and replace it. It's up to external systems to try to ensure free chunks
  are available before moving a slab page; if there is no memory it will simply evict
  them as normal. Also adds counters so we can finally tell how often these cases happen.
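
  A sketch of the rescue-or-evict decision during a page move, assuming the helper names
  shown; the real code lives in the slab rebalancer thread and also deals with refcounts,
  locks, and chunked items.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      struct item_view;   /* opaque stand-in for memcached's item structure */

      /* assumed helpers, declared so the decision logic below is self-contained */
      bool item_is_valid(struct item_view *it);      /* still linked and not expired */
      void *alloc_chunk_same_class(int clsid);       /* NULL if the class has no free chunks */
      void copy_and_replace(struct item_view *it, void *new_chunk);
      void evict_item(struct item_view *it);

      uint64_t rescues = 0;     /* new counters so these cases are finally visible */
      uint64_t evictions = 0;

      void move_item_off_page(struct item_view *it, int clsid) {
          void *dst;
          if (item_is_valid(it) && (dst = alloc_chunk_same_class(clsid)) != NULL) {
              copy_and_replace(it, dst);   /* rescue: the item survives the page move */
              rescues++;
          } else {
              evict_item(it);              /* no free memory or invalid item: evict as before */
              evictions++;
          }
      }
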
* restore slab_automove=2 support and add test (dormando, 2015-11-18, 1 file, -0/+61)
  The test is a port of a golang test submitted by Scott Mansfield. There used to be an
  "angry birds mode" for slabs_automove, which attempts to force a slab move from "any"
  slab into the one which just had an eviction. This is an imperfect but fast way of
  responding to shifts in memory requirements. This change adds it back in, plus a test
  which very quickly sets data via noreply. This isn't the end of improvements here;
  this commit is a starting point.
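
  A rough sketch of that "angry birds" response: the moment an eviction happens in a
  class, request a page from anywhere be moved into it. The slabs_reassign() signature
  and the on_eviction hook here are assumptions for illustration only.

      /* interface assumed for illustration; a source of -1 stands for "any" class */
      int slabs_reassign(int src_class, int dst_class);

      void on_eviction(int clsid, int slab_automove_mode) {
          if (slab_automove_mode == 2)
              slabs_reassign(-1, clsid);   /* imperfect but fast response to pressure */
      }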