Commit messages:
|
allow users to differentiate thread functions externally to memcached.
Useful for setting priorities or pinning threads to CPUs.
|
With a low number of IO threads (i.e. 1) and a long IO depth (10k+)
that might suddenly appear during a disk hiccup, we can cause a slowdown
as each worker thread submission walks the entire queue.
This fully avoids walking objects while holding any lock, though we can
still do that a bit on the IO thread's end when reading the queue.
|
If you accidentally start memcached with the same options twice,
extstore is initialized before the listener sockets and will happily
truncate its own file.
So this avoids that. Keep in mind any other process can still wipe the
file clean!
|
debug message was already there.
|
pread[v]() is missing on some platforms. We had a test added to build
under OS X, but the lseek arguments were swapped and the tests would
never have passed. I never force-tested the replacement code until
checking this out for a cygwin build error :(
|
... instead to check the token. Less optimized than the usual memcmp,
especially since it always walks the whole buffers, but more resilient
against possible timing attacks.
While at it, constify a var which should have been const.
|
thanks to 'neatlife' on github.
|
leaked into a runtime bug :(
fixes #482
|
Queues were round-robin before. During sustained overload some queues
can get behind while others stay empty. Simply do a bit more work to
track depth and pick the lowest queue. This is fine for now since the
bottleneck remains elsewhere.
Been meaning to do this; the benchmark work made it more obvious.
|
Just a Bunch Of Devices :P
Code exists for routing specific devices to specific buckets
(lowttl/compact/etc), but enabling it requires significant fixes to the
compaction algorithm, so it is disabled as of this writing.
Code cleanups and future work:
- pedantically freeing memory and closing fds on exit
- unify and flatten the free_bucket code
- defines for free buckets
- page eviction adjustment (force min-free per free bucket)
- fix the default calculation for compact_under and drop_under
  - might require forcing this value only on the default bucket
|
The extstore maintenance thread takes permanent ownership of its mutex.
Grabbing an uninitialized mutex can result in undefined behavior. In
this case the memory is zeroed, so there is probably no harm.
|
* automover
* avoiding
* compress
* fails
* successfully
* success
* tidiness
|
Can add more later; extremely unlikely to happen.
Can/should convert more functions to use extstore_err + codes.
|
The LRU crawler was not marking reclaimed expired items as removed from
the storage engine. This could cause fragmentation to persist much
longer than it should, but would not cause any problems once compaction
started.
Adds an "ext_low_ttl" option. Items with a remaining expiration age
below this value are grouped into special pages. If you have a mixed-TTL
workload this should help prevent low-TTL items from causing excess
fragmentation/compaction. Pages with low-TTL items are excluded from
compaction.
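
Assuming the option keeps the name above, it is set like other extstore tunables via `-o`; the path, file size, and threshold below are only illustrative values, not recommendations:

```shell
# Illustrative only: enable extstore with a 64M file and group items
# whose remaining TTL is below one hour (3600s) into low-ttl pages.
memcached -o ext_path=/data/extstore:64m,ext_low_ttl=3600
```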
|
The item size max must be <= wbuf_size.
Reads go into iovecs, and writes come out of the same iovecs.
|
write_request returns a buffer to write into, which lets us avoid
corrupting the active item with the hash and crc. "Technically" we can
save 24 bytes per item in storage, but I'll leave that for a later
optimization, in case we want to stuff more data into the header.
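
The shape of that API: instead of handing the engine a pointer to the live item, the caller asks for a destination buffer, copies the item in, and the engine stamps its header fields on the copy only. A hypothetical sketch; names, layout, and the toy checksum are illustrative, not memcached's actual code:

```c
/* Sketch of the write_request pattern: the engine hands back a buffer,
 * the caller copies the item into it, and the crc is computed over the
 * copy, so the live in-memory item is never modified. */
#include <string.h>
#include <stdint.h>
#include <stddef.h>

struct wbuf {
    uint32_t crc;        /* engine-owned header field */
    char data[256];
};

/* "write_request": reserve space and return where the caller may write. */
char *write_request(struct wbuf *b, size_t len) {
    return len <= sizeof(b->data) ? b->data : NULL;
}

/* Engine finalizes the copy; the caller's original item is untouched. */
void write_commit(struct wbuf *b, size_t len) {
    uint32_t crc = 0;
    for (size_t i = 0; i < len; i++)   /* toy checksum standing in for crc32 */
        crc = crc * 31 + (unsigned char)b->data[i];
    b->crc = crc;
}
```

The key property is that hash/crc live in the engine's copy, not in the item header, which is why the live item cannot be corrupted mid-flight.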
|
Been squashing, reorganizing, and pulling code off to go upstream ahead
of merging the whole branch.