At least FreeBSD installs perl as /usr/local/bin/perl, with no symlink by
default.
Allows tests to run faster, and lets users tune the sleep to be longer or
shorter. Also cuts the sleep time down when actively compacting after coming
from high idle.
If you accidentally start memcached with the same options twice, extstore is
initialized before the listener sockets and will happily truncate its own
file. This avoids that. Keep in mind any other process can still wipe the
file clean!
Just a Bunch Of Devices :P
Code exists for routing specific devices to specific buckets
(lowttl/compact/etc), but enabling it requires significant fixes to the
compaction algorithm, so it is disabled as of this writing.
Code cleanups and future work:
- pedantically freeing memory and closing fd's on exit
- unify and flatten the free_bucket code
- defines for free buckets
- page eviction adjustment (force min-free per free bucket)
- fix default calculation for compact_under and drop_under
- might require forcing this value only on the default bucket
Trying out a simplified slab class backoff algorithm. The LRU maintainer
individually schedules slab classes by time, which leads to multiple wakeups
in a steady state as they drift out of sync. This algorithm instead skips a
quiet class more often each time it runs the main loop, using a single
scheduled sleep.
If it goes to sleep for a long time, it also reduces the backoff for all
classes: if we're rarely awake, it should be fine to poke everything.
Enforce actually waiting for extstore to flush items before moving on, and
remove some pacing that was making things worse. The compactor occasionally
rescues the canary items.
Detunes the compactor on start, then ramps it up before the
compactor-specific tests. This seems to fix the flakiness, at least enough
that it's been passing in a loop on both fast and slow systems for me.
On 32-bit systems the items take a bit less space in a few dimensions, so we
need to fill harder. Also, these tend to be slower systems, so pace out the
inserts with occasional 1s sleeps, rather than all at once at the end.
* automover
* avoiding
* compress
* fails
* successfully
* success
* tidiness
There's now an optional ext_drop_under setting, which defaults to the same
value as compact_under; that should be fine. Now, if drop_unread is enabled,
it only kicks in if there are no pages matching the compaction threshold.
This allows you to set a lower compaction frag rate, then start rescuing only
non-COLD items if storage is too full. You can also compact up to a point,
then allow a buffer of pages to be used before dropping unread.
Previously, enabling drop_unread would always drop unread items even when
compacting normally, which limited the utility of the feature.
Was early-evicting from the HOT/WARM LRUs for item headers because the
*original* item size was being tracked, then compared to the actual byte
totals for the class.
Also adjusts drop_unread so it drops items which are currently in the
COLD_LRU. This is expected to be used with very low compact_under values, ie
2-5 depending on page count and write load: if you can't defrag-compact,
drop-compact.
But this is still subtly wrong, since drop_compact is now an option.
A couple of TODO items are left for a new issue I thought of. There's also a
hardcoded memory buffer size which should be fixed.
Also need to change the "free and re-init" logic to use a boolean, in case
any related option changes.
Was struggling to figure out how to automatically turn this on or off, but I
think it should be part of an outside process; ie, a mechanism should be able
to target a specific write rate, and one of its tools for reducing the write
rate should be flipping this on.
There's *still* a hole where you can't trigger a compaction attempt if
there's no fragmentation. I kind of want, if this feature is on, to attempt
a compaction on the oldest page while dropping unread items.
The LRU crawler was not marking reclaimed expired items as removed from the
storage engine. This could cause fragmentation to persist much longer than it
should, but would not cause any problems once compaction started.
Adds an "ext_low_ttl" option. Items with a remaining expiration age below
this value are grouped into special pages. If you have a mixed-TTL workload,
this helps prevent low-TTL items from causing excess
fragmentation/compaction. Pages with low-TTL items are excluded from
compaction.
Also fixes a bug where setting -U 0 would disable TCP automatically, and
vice versa.
Use ./configure --enable-extstore to compile the feature in, then
specify -o ext_path=/whatever to start.
Item size max must be <= wbuf_size.
Reads into iovecs, writes out of the same iovecs.
If < 2 free pages are left, "evict" objects which haven't been hit at all.
This should be better than evicting everything, if we can continue
compacting.
Until the compile option is properly in, this is kind of the only test that
should run properly.