| Commit message | Author | Age | Files | Lines |
|
It has been observed in strace output that we sometimes call
sendmsg() with a request to send 0 bytes. The kernel obeys our
request, sends 0 bytes, and we treat that as an error.
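A minimal sketch of the idea, with illustrative names (classify_send and the enum are not memcached's own): a zero-byte request should be skipped before the syscall, and a 0 return from a 0-byte request should never be classified as a failure.

```c
#include <assert.h>
#include <sys/types.h>

/* Hypothetical sketch: how a transmit loop might interpret sendmsg()
 * results. Asking the kernel to send 0 bytes gets 0 bytes back, which
 * is not an error; the caller simply had nothing queued and should not
 * have issued the syscall. */
enum send_status { SEND_COMPLETE, SEND_PARTIAL, SEND_ERROR, SEND_NOTHING };

enum send_status classify_send(ssize_t res, size_t requested) {
    if (requested == 0)
        return SEND_NOTHING;       /* skip sendmsg() entirely */
    if (res < 0)
        return SEND_ERROR;         /* a real syscall failure */
    if ((size_t)res < requested)
        return SEND_PARTIAL;       /* short write: retry the remainder */
    return SEND_COMPLETE;
}
```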
|
since reading the code is probably incredibly confusing now.
|
freebsd9 is apparently the only platform that cares about this.
|
This doesn't reduce mutex contention much, if at all, for the global stats
lock, but it does remove a handful of instructions from the alloc hot path,
which is always worth doing.
Previous commits possibly added a handful of instructions for the loop and for
the bucket read-lock trylock, but this is still faster than .14 for writes
overall.
|
expansion requires switching to a global lock temporarily, so all buckets have
a covered read lock.
The slab rebalancer is paused during hash table expansion.
Internal item "trylocks" are always issued and tracked, since the hash power
variable can change out from under them.
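The "trylock and recheck" idea above can be sketched roughly as follows. This is an illustrative mock, not memcached's actual locking code: the bucket mapping, lock count, and names are assumptions; the point is that after grabbing a bucket lock you must verify hashpower didn't move underneath you.

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Hypothetical sketch: which lock a hash value maps to depends on
 * hashpower, and expansion can bump hashpower at any moment. So after
 * taking the bucket lock, re-read hashpower; if it changed, the lock we
 * hold may cover the wrong bucket, so drop it and recompute. */
#define LOCK_COUNT 4
static pthread_mutex_t bucket_locks[LOCK_COUNT] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER };
static volatile unsigned hashpower = 16;

pthread_mutex_t *item_trylock(uint32_t hv) {
    for (;;) {
        unsigned hp = hashpower;
        pthread_mutex_t *lock = &bucket_locks[(hv >> hp) % LOCK_COUNT];
        if (pthread_mutex_trylock(lock) != 0)
            return NULL;                /* busy: caller backs off */
        if (hp == hashpower)
            return lock;                /* mapping still valid */
        pthread_mutex_unlock(lock);     /* expansion raced us; retry */
    }
}
```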
|
Fixes a few issues with a restructuring... I think -M was broken before;
should be fixed now. It had a refcount leak.
Now walks up to five items from the bottom in case the bottommost items are
item_locked or refcount-locked. Helps avoid excessive OOM errors for some
oddball cases. Those happen more often if you're hammering on a handful
of pages in a very large class size (100k+).
The hash item lock ensures that while we're holding that lock, no other thread
can be incrementing the refcount at that time. It will mean more in
future patches.
The slab rebalancer gets a similar update.
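The tail-walk described above might look something like this minimal sketch. The struct, depth constant, and function name are illustrative assumptions, not memcached's real item code:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: instead of giving up (and returning OOM) when the
 * very bottom LRU item is busy, walk a few items toward the head looking
 * for one nobody holds a reference to. */
struct item { struct item *prev; int refcount; };

#define TAIL_SEARCH_DEPTH 5

struct item *find_evictable(struct item *tail) {
    struct item *it = tail;
    for (int tries = 0; it != NULL && tries < TAIL_SEARCH_DEPTH; tries++) {
        if (it->refcount == 0)
            return it;        /* nobody holds it; safe to evict */
        it = it->prev;        /* busy item: try the next one up */
    }
    return NULL;              /* give up; caller may return OOM */
}
```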
|
-pthread was added as part of setting up the gcov options
|
use both #defines when using the spinlock version of our locks. Not all locks
are designed to work that way, so this doesn't touch everything.
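The pattern of pairing the defines can be sketched like this. The macro and type names here are made up for illustration; the point is that the lock type and the lock/unlock macros must be switched together:

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical sketch: selecting a lock implementation at compile time.
 * Both the type and both macros must agree, which is why the #defines
 * are used together. */
#ifdef USE_SPINLOCKS
typedef pthread_spinlock_t stats_lock_t;
#define STATS_LOCK(l)   pthread_spin_lock(l)
#define STATS_UNLOCK(l) pthread_spin_unlock(l)
#else
typedef pthread_mutex_t stats_lock_t;
#define STATS_LOCK(l)   pthread_mutex_lock(l)
#define STATS_UNLOCK(l) pthread_mutex_unlock(l)
#endif
```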
|
I dunno why it litters .orgs then tries to run them. I'm not a magician.
|
fix warning in new gcc.
|
I don't care why it happened, just don't whitespace check the README files
anymore.
|
I broke 'em earlier
|
(http://code.google.com/p/memcached/issues/detail?id=275)
|
someone pointed out that cache_destroy wasn't freeing the cache_t pointer.
memcached itself never destroys a cache it creates, so this is fine, but it's
fixed for completeness...
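A minimal mock of the leak and its fix, with an illustrative cache_t layout (not the real cache.c structure): destroy must free the handle itself, not just what it points at.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of the fix: cache_destroy frees the cache_t
 * pointer as well as its freelist. Fields are illustrative. */
typedef struct {
    void **ptr;      /* freelist array */
    int freecurr;
} cache_t;

cache_t *cache_create(void) {
    cache_t *c = calloc(1, sizeof(cache_t));
    if (c != NULL)
        c->ptr = calloc(8, sizeof(void *));
    return c;
}

void cache_destroy(cache_t *cache) {
    free(cache->ptr);
    free(cache);     /* previously leaked: the cache_t itself */
}
```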
|
I'll probably get in trouble for removing DONT_PREALLOC_SLABS
... however tons of people like using the -L option, which does nothing under
linux. It should soon do *something* under linux, and when it does they'll
report the same errors of not being able to store things into certain slab
classes.
So just give them a useful error and bail instead.
|
I imagine most people on linux run this and then get both thumbs stuck up
their noses when weird bugs happen. Let's start by not lying to them.
|
(I removed the "honoured" fix as this is american english -ed)
|
bitrotted. It only existed to prove a point. We can add it back in better
shape later, or use a storage engine if we ever get onto 1.6.
|
slabs_reassign() calls now attempt to lock, and return busy if a thread is
already moving something.
|
also fix a bug causing slab rebalance thread to spin instead of waiting on the
condition... duhr.
|
specifying -1 as the src class for a slabs reassign will pick the first
available, rolling through the list on each request (so as to not bias against
the smaller classes).
So if you're in a hurry and have to move memory into class 5, you may now mash
it without thinking.
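The rolling pick described above amounts to a round-robin scan that resumes where the last one stopped. A minimal sketch under assumed names (pick_src_class, has_pages, MAX_CLASSES are all illustrative):

```c
#include <assert.h>

/* Hypothetical sketch of "-1 means pick a source for me": remember where
 * the previous scan stopped and resume from there, so the smaller classes
 * are not always raided first. */
#define MAX_CLASSES 8
static int slab_bucket_pick = 0;

/* has_pages[i] stands in for "class i has a reclaimable page".
 * Returns the chosen class id, or -1 if nothing is available. */
int pick_src_class(const int has_pages[MAX_CLASSES]) {
    for (int i = 0; i < MAX_CLASSES; i++) {
        int id = (slab_bucket_pick + i) % MAX_CLASSES;
        if (has_pages[id]) {
            slab_bucket_pick = id + 1;  /* next call starts after this */
            return id;
        }
    }
    return -1;
}
```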
|
slab rebalancer now chillaxes on a signal and waits a lot less time when
hitting a busy item. automove is its own thread now, and signals rebal when
necessary.
when entering the command "slabs reassign 1 2" it should start moving a
page instantly.
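The wake-on-signal pattern can be sketched with a standard condition variable. This is a generic illustration of the technique, not the rebalancer's actual code; the names are assumptions:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical sketch: the rebalance thread sleeps on a condition
 * variable instead of polling, and the automove thread (or a
 * "slabs reassign" command) wakes it immediately. */
static pthread_mutex_t rebal_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  rebal_cond = PTHREAD_COND_INITIALIZER;
static bool rebal_work = false;

void rebalance_signal(void) {
    pthread_mutex_lock(&rebal_lock);
    rebal_work = true;
    pthread_cond_signal(&rebal_cond);   /* wake the rebalancer now */
    pthread_mutex_unlock(&rebal_lock);
}

void rebalance_wait(void) {
    pthread_mutex_lock(&rebal_lock);
    while (!rebal_work)                 /* no spinning: sleep until work */
        pthread_cond_wait(&rebal_cond, &rebal_lock);
    rebal_work = false;
    pthread_mutex_unlock(&rebal_lock);
}
```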
|
slab memory assignment used to lazily split a new page into chunks as memory
was requested. now it doesn't, so drop all the related code.
Cuts the memory assignment hotpath a tiny bit, so that's exciting.
|
slab freelists used to be malloc'ed arrays. then they were changed into a
freelist. now we pre-split newly assigned/moved pages into a slabs freelist
instead of lazily pulling pointers as needed.
The loop is pretty darn direct and I can't measure a performance impact of
this relatively rare event.
In doing this, slab reassign can move memory without having to wait for a
class to chew through its recently assigned page first.
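The pre-split loop is about as direct as the message says. A minimal sketch with an assumed chunk/freelist layout (illustrative, not slabs.c's real structures):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of pre-splitting: carve a freshly assigned page
 * into fixed-size chunks and push each onto a singly linked freelist up
 * front, instead of handing out pointers lazily as memory is requested. */
struct chunk { struct chunk *next; };

struct chunk *split_page(void *page, size_t page_size, size_t chunk_size) {
    struct chunk *head = NULL;
    size_t n = page_size / chunk_size;
    for (size_t i = 0; i < n; i++) {
        struct chunk *c = (struct chunk *)((char *)page + i * chunk_size);
        c->next = head;           /* push onto the freelist */
        head = c;
    }
    return head;
}
```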
|
(ed note: yes it doesn't check for a NULL and die after 20 times. this should
mitigate until we can do better with writing the pidfile)
|
ed note: this needs to be redone in memcached.h as a static inline, or changed
to a define.
|
reported by jhpark. items at the bottom of the LRU would be popped for sets if
flush_all was set for the "future" but said future hadn't arrived yet.
item_get handled this correctly so the flush would not happen, but items at
the bottom of the LRU would be reclaimed early.
Added tests for this as well.
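The corrected reclaim check can be sketched as a small predicate. Names and signature are illustrative; the logic is the point: a flush scheduled for the future must not expire anything yet.

```c
#include <assert.h>
#include <stdbool.h>
#include <time.h>

/* Hypothetical sketch: an item counts as flushed only if the flush point
 * has already passed AND the item was last set before that point. */
bool item_is_flushed(time_t item_time, time_t flush_before, time_t now) {
    if (flush_before == 0)
        return false;      /* no flush_all issued */
    if (flush_before > now)
        return false;      /* flush is in the future: not yet */
    return item_time <= flush_before;
}
```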
|
This fails for various stupid platform-specific things. The SASL code
can be working correctly, but not in a way that is completely
predictable on every platform (for example, we may be missing a
particular auth mode).
|
saslpasswd2 does something a little magical when initializing the
structure that's different from what happens if you just pass NULL.
The magic is too great for the tests as is, so this code does the same
thing saslpasswd2 does to determine the fqdn.
|
They just changed this randomly with no way to really detect it. You
can read about it here:
http://lists.andrew.cmu.edu/pipermail/cyrus-sasl/2011-September/002340.html
|
echo "" | nc localhost 11211 would segfault the server.
The simple fix is to add the proper token check to the one place it's missing.
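A mock of the class of bug, with an assumed tokenizer (not memcached's process_command code): an empty line produces zero tokens, and blindly indexing the token array is the crash.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: guard against a command line that produced no
 * tokens before dispatching on tokens[0]. */
#define MAX_TOKENS 8

struct token { const char *value; size_t length; };

size_t tokenize(char *line, struct token *tokens) {
    size_t n = 0;
    for (char *p = strtok(line, " \r\n"); p != NULL && n < MAX_TOKENS;
         p = strtok(NULL, " \r\n")) {
        tokens[n].value = p;
        tokens[n].length = strlen(p);
        n++;
    }
    return n;
}

bool dispatch_ok(char *line) {
    struct token tokens[MAX_TOKENS];
    size_t ntokens = tokenize(line, tokens);
    if (ntokens == 0)
        return false;   /* the fix: reject instead of reading tokens[0] */
    return true;
}
```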
|
I was naive. GCC atomics were added in 4.1.2, and not easily detectable
without configure tests. 32bit platforms, centos5, etc.
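The usual fallback shape looks like this sketch: use the GCC __sync builtins when available, otherwise serialize the counter behind a mutex. The version-macro check here is an approximation standing in for a proper configure test, and the global counter is illustrative:

```c
#include <assert.h>
#include <pthread.h>

/* Approximate availability check; a configure test is the real answer. */
#if defined(__GNUC__) && (__GNUC__ > 4 || \
    (__GNUC__ == 4 && __GNUC_MINOR__ >= 2))
#define HAVE_GCC_ATOMICS 1
#endif

static unsigned short refcount;
#ifndef HAVE_GCC_ATOMICS
static pthread_mutex_t atomics_mutex = PTHREAD_MUTEX_INITIALIZER;
#endif

unsigned short refcount_incr(void) {
#ifdef HAVE_GCC_ATOMICS
    return __sync_add_and_fetch(&refcount, 1);
#else
    /* Fallback for old compilers / platforms without the builtins. */
    pthread_mutex_lock(&atomics_mutex);
    unsigned short res = ++refcount;
    pthread_mutex_unlock(&atomics_mutex);
    return res;
#endif
}
```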
|
Awesome bug goes like this:
let "c1" be the commit of the "good state" and "c2" be the commit
immediately after (in a bad state). "t1" is the state of the tree in "c1"
and "t2" is the state of the tree in "c2".
In their natural states, we have this:
c1 -> t1 -> success
c2 -> t2 -> fail
However, if you patch across:
c1 -> t1 -> patch to t2 -> success
c2 -> t2 -> patch to t1 -> fail
So t1 *and* t2 both succeed if the committed tree is c1, but both fail if
the committed tree is c2.
The difference? c1 has a tag that points to it, so the version number is
"1.4.10", whereas the version number for the unreleased c2 is
"1.4.10-1-gee486ab" -- a bit longer, which breaks stuff in tests that try to
print stats.
|
32bit pointers are smaller... need more items to fill the slabs, sigh.
|
Since spawn_and_wait doesn't use argc anyway, might as well just not
send a value in.
|
credit goes to anton.yuzhaninov for the report and patch
|
Thanks to Stephen Yang for the bug report.
|
bad practice.
|
Most credit to Dustin and Trond for showing me the way, though I have no way
of testing this myself.
These should probably just be defines...