leaked into a runtime bug :(
fixes #482
|
Queues were round-robin before. During sustained overload some queues can fall behind while others stay empty. Simply do a bit more work to track each queue's depth and pick the shallowest queue. This is fine for now since the bottleneck remains elsewhere.
Been meaning to do this; the benchmark work made it more obvious.
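A minimal sketch of the depth-tracking change described above. The struct and function names here are illustrative, not the actual memcached structures: the point is only that, instead of rotating round-robin, the dispatcher scans every queue's pending count and picks the smallest.

```c
#include <stddef.h>

/* Hypothetical queue handle: depth is the number of requests
 * currently pending on that queue. */
struct io_queue {
    unsigned int depth;
};

/* Round-robin ignores depth; instead, scan all queues and return the
 * index of the shallowest one so sustained overload spreads evenly. */
size_t pick_lowest_queue(const struct io_queue *queues, size_t count) {
    size_t best = 0;
    for (size_t i = 1; i < count; i++) {
        if (queues[i].depth < queues[best].depth)
            best = i;
    }
    return best;
}
```

This is O(n) per dispatch instead of O(1), which matches the commit's "do a bit more work" framing: acceptable while the queue count is small and the bottleneck sits elsewhere.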
|
Just a Bunch Of Devices :P
Code exists for routing specific devices to specific buckets (lowttl/compact/etc), but enabling it requires significant fixes to the compaction algorithm, so it is disabled as of this writing.
Code cleanups and future work:
- pedantically free memory and close fd's on exit
- unify and flatten the free_bucket code
- add defines for the free buckets
- adjust page eviction (force a minimum free count per free bucket)
- fix the default calculation for compact_under and drop_under
  - might require forcing this value only on the default bucket
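To make the disabled routing concrete, here is an illustrative sketch only: bucket names mirror the commit text (lowttl/compact), and the assignment table and function are hypothetical, not the actual extstore code. The idea is that each page bucket draws its free pages from a configured device, so e.g. low-TTL churn can live on its own disk.

```c
/* Page buckets named after the commit text; values are illustrative. */
enum page_bucket {
    PAGE_BUCKET_DEFAULT = 0,
    PAGE_BUCKET_COMPACT,
    PAGE_BUCKET_LOWTTL,
    PAGE_BUCKET_COUNT
};

/* Hypothetical bucket-to-device assignment table: index by bucket,
 * get the device id that bucket's free pages come from. */
static int bucket_to_device[PAGE_BUCKET_COUNT] = { 0, 0, 0 };

int device_for_bucket(enum page_bucket b) {
    return bucket_to_device[b];
}
```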
|
Can add more later; extremely unlikely to happen.
Can/should convert more functions to use extstore_err + codes.
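The extstore_err-plus-codes pattern referenced above can be sketched as an error enum with a matching string table. The specific code names below are assumptions for illustration, not the library's actual list; only the pattern (numeric codes, one lookup function for messages) is what the commit describes.

```c
/* Illustrative error codes; the real list lives in extstore. */
enum extstore_err_code {
    EXTSTORE_OK = 0,
    EXTSTORE_INIT_BAD_WBUF_SIZE,  /* assumed example code */
    EXTSTORE_INIT_OOM,            /* assumed example code */
    EXTSTORE_ERR_MAX
};

/* Map a code to a human-readable message, with bounds checking so
 * codes added later without a string don't crash the caller. */
const char *extstore_err(int code) {
    static const char *msgs[EXTSTORE_ERR_MAX] = {
        "success",
        "wbuf_size invalid",
        "out of memory",
    };
    if (code < 0 || code >= EXTSTORE_ERR_MAX)
        return "unknown error";
    return msgs[code];
}
```

Converting functions to return codes (instead of, say, -1 or bools) is what makes "add more later" cheap: callers just pass the code through extstore_err for logging.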
|
The LRU crawler was not marking reclaimed expired items as removed from the
storage engine. This could cause fragmentation to persist much longer than it
should, but would not cause any problems once compaction started.
Adds an "ext_low_ttl" option. Items with a remaining expiration age below this
value are grouped into special pages. If you have a mixed-TTL workload, this
helps prevent low-TTL items from causing excess fragmentation/compaction.
Pages with low-TTL items are excluded from compaction.
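The grouping rule above can be sketched as a small predicate. Names here are illustrative (the real option is ext_low_ttl; the item fields and helper are assumptions): an item routes to a low-TTL page when its remaining expiration age falls below the configured threshold.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of low-TTL classification, all times in seconds:
 * exptime      - item's absolute expiration time (0 = never expires)
 * current_time - server clock
 * ext_low_ttl  - configured threshold from the new option */
bool item_is_low_ttl(uint32_t exptime, uint32_t current_time,
                     uint32_t ext_low_ttl) {
    if (exptime == 0)            /* no expiration: never low-TTL */
        return false;
    if (exptime <= current_time) /* already expired */
        return true;
    /* Remaining age below the threshold: route to a low-TTL page so
     * its churn doesn't fragment normal pages, which compaction can
     * then skip entirely. */
    return (exptime - current_time) < ext_low_ttl;
}
```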
|
The item size max must be <= wbuf_size.
Reads into iovecs, writes out of the same iovecs.
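A minimal sketch of the shared-iovec idea, with illustrative names: the same iovec array the data was read into is handed to writev() unchanged. The size constraint above is what makes this safe — because an item never exceeds wbuf_size, it always fits in one write buffer and never spans iovec boundaries.

```c
#include <sys/uio.h>
#include <sys/types.h>

/* Flush a write buffer described by an iovec array. The iovecs are
 * exactly the regions earlier reads populated; nothing is copied or
 * re-sliced between the read and write paths. */
ssize_t flush_wbuf(int fd, struct iovec *iov, int iovcnt) {
    return writev(fd, iov, iovcnt);
}
```

Reusing one iovec array for both directions avoids an intermediate copy and keeps the buffer bookkeeping in a single place.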
|
write_request returns a buffer to write into, which lets us avoid corrupting the
active item with the hash and crc.
"Technically" we can save 24 bytes per item in storage, but I'll leave that
for a later optimization, in case we want to stuff more data into the header.
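The write_request shape described above can be sketched as follows. The header layout, sizes, and struct names are assumptions for illustration, not the actual extstore format: the key property is that write_request reserves space in the write buffer and hands back a pointer, so the hash and crc are stamped onto the stored copy while the live in-memory item is never touched.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical on-disk header: hash and crc live only in the stored
 * copy, never in the active item. */
struct store_hdr {
    uint32_t hash;
    uint32_t crc;
};

/* Hypothetical write buffer with a fixed capacity. */
struct wbuf {
    unsigned char data[4096];
    size_t used;
};

/* Reserve header + item space in the wbuf and return a pointer to the
 * item payload slot. The caller memcpy's the item into it; the header
 * is written on the copy, corrupting nothing still live. */
unsigned char *write_request(struct wbuf *w, size_t item_len,
                             uint32_t hash, uint32_t crc) {
    size_t need = sizeof(struct store_hdr) + item_len;
    if (w->used + need > sizeof(w->data))
        return NULL;  /* caller must rotate to a fresh wbuf */
    unsigned char *p = w->data + w->used;
    struct store_hdr hdr = { hash, crc };
    memcpy(p, &hdr, sizeof(hdr));
    w->used += need;
    return p + sizeof(hdr);  /* item payload goes here */
}
```

Returning a destination pointer (rather than taking a finished buffer) is what enables the "save 24 bytes per item" option later: the header can shrink or grow without changing any caller.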
|
Been squashing, reorganizing, and pulling code off to go upstream ahead
of merging the whole branch.