Commit message (Author, Age; Files, Lines)
* Added support for automake-1.12 in autogen.sh (Eric McConville, 2012-07-29; 1 file, -1/+1)
* Use Markdown for README. (Toru Maesaka, 2012-07-29; 1 file, -5/+15)
* Fixed issue with invalid binary protocol touch command expiration time (Maksim Zhylinski, 2012-07-29; 1 file, -2/+2)
  (http://code.google.com/p/memcached/issues/detail?id=275)
* add a binary touch test that actually fails (dormando, 2012-07-29; 1 file, -0/+4)
* totally destroy test caches (dormando, 2012-07-29; 1 file, -0/+1)
  Someone pointed out that cache_destroy wasn't freeing the cache_t pointer. memcached itself never destroys a cache it creates, so this is fine, but it's fixed for completeness.
* Define touch command probe for DTrace support (yuryur, 2012-07-29; 1 file, -0/+11)
* If we're preallocating memory, prealloc slab pages (dormando, 2012-07-29; 3 files, -15/+10)
  I'll probably get in trouble for removing DONT_PREALLOC_SLABS... however, tons of people like using the -L option, which does nothing under Linux. It should soon do *something* under Linux, and when it does they'll report the same errors of not being able to store things into certain slab classes. So just give them a useful error and bail instead.
* Error and exit if we don't have hugetlb support (dormando, 2012-07-29; 1 file, -1/+4)
  I imagine most people on Linux run this and then get both thumbs stuck up their noses when weird bugs happen. Let's start by not lying to them.
* Fix doc/protocol.txt typo (Fordy, 2012-07-29; 1 file, -1/+1)
  (I removed the "honoured" fix as this is American English. -ed)
* update reassign/automove documentation (dormando, 2012-07-29; 1 file, -2/+12)
* Remove USE_SYSTEM_MALLOC define (dormando, 2012-07-27; 1 file, -17/+0)
  Bitrotted; it only existed to prove a point. We can add it back in better later, or use a storage engine if we ever get onto 1.6.
* remove rebalancer's race condition (dormando, 2012-07-27; 1 file, -11/+9)
  slabs_reassign() calls now attempt to lock, and return busy if a thread is already moving something.
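  (ed note: the "attempt to lock, return busy" pattern above can be sketched with pthread_mutex_trylock. This is a hypothetical illustration, not memcached's actual code; names are invented.)

  ```c
  #include <assert.h>
  #include <pthread.h>

  /* Hypothetical sketch of the "try to lock, return busy" pattern the
   * commit describes; names are illustrative, not memcached's internals. */
  enum reassign_result { REASSIGN_OK, REASSIGN_BUSY };

  static pthread_mutex_t slabs_rebalance_lock = PTHREAD_MUTEX_INITIALIZER;

  enum reassign_result slabs_reassign(int src, int dst) {
      (void)src; (void)dst;
      if (pthread_mutex_trylock(&slabs_rebalance_lock) != 0) {
          /* another thread is already moving a page: report busy
           * instead of queueing a second, racy move */
          return REASSIGN_BUSY;
      }
      /* ... record src/dst and wake the rebalance thread here ... */
      pthread_mutex_unlock(&slabs_rebalance_lock);
      return REASSIGN_OK;
  }
  ```

  Because trylock never blocks, a caller holding the lock makes concurrent reassign requests fail fast instead of racing.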
* automove levels are an int instead of bool now (dormando, 2012-07-27; 4 files, -12/+33)
  Also fixes a bug causing the slab rebalance thread to spin instead of waiting on the condition... duhr.
* slab rebalancing from random class (dormando, 2012-07-27; 1 file, -0/+26)
  Specifying -1 as the src class for a slabs reassign will pick the first available, rolling through the list on each request (so as to not bias against the smaller classes). So if you're in a hurry and have to move memory into class 5, you may now mash it without thinking.
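  (ed note: the rolling source-class selection above can be sketched roughly like this. The structure and names are hypothetical, not memcached's code.)

  ```c
  #include <assert.h>

  /* Hypothetical sketch of round-robin source selection when -1 is given
   * as the src class; names and sizes are illustrative. */
  static int slab_bump = 0; /* rolls forward across requests */

  /* pages[i] > 0 means class i has a page it could give up */
  int pick_src_class(const int *pages, int nclasses) {
      for (int i = 0; i < nclasses; i++) {
          int cand = (slab_bump + i) % nclasses;
          if (pages[cand] > 0) {
              slab_bump = cand + 1; /* resume after this class next time,
                                       so small classes aren't favored */
              return cand;
          }
      }
      return -1; /* no class has memory to give */
  }
  ```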
* split slab rebalance and automove threads (dormando, 2012-07-27; 1 file, -10/+53)
  The slab rebalancer now chillaxes on a signal and waits a lot less time when hitting a busy item. Automove is its own thread now, and signals the rebalancer when necessary. When entering the command "slabs reassign 1 2" it should start moving a page instantly.
* remove end_page_ptr business from slabs (dormando, 2012-07-27; 3 files, -35/+6)
  Slab memory assignment used to lazily split a new page into chunks as memory was requested. Now it doesn't, so drop all the related code. Cuts the memory assignment hotpath a tiny bit, so that's exciting.
* pre-split slab pages into slab freelists (dormando, 2012-07-27; 2 files, -9/+24)
  Slab freelists used to be malloc'ed arrays; then they were changed into a freelist. Now we pre-split newly assigned/moved pages into a slab's freelist instead of lazily pulling pointers as needed. The loop is pretty darn direct and I can't measure a performance impact of this relatively rare event. In doing this, slab reassign can move memory without having to wait for a class to chew through its recently assigned page first.
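  (ed note: the pre-split loop described above is "pretty darn direct"; a hypothetical sketch, with an invented slab-class struct rather than memcached's real one:)

  ```c
  #include <assert.h>
  #include <stddef.h>

  /* Hypothetical sketch of pre-splitting a newly assigned page into a
   * class's freelist; struct layout is illustrative, not memcached's. */
  typedef struct free_item {
      struct free_item *next;
  } free_item;

  typedef struct {
      size_t chunk_size;    /* bytes per chunk in this class */
      size_t perslab;       /* chunks per page */
      free_item *freelist;  /* head of the class's freelist */
      unsigned int sl_curr; /* number of free chunks */
  } slabclass_t;

  /* push every chunk of the page onto the freelist up front,
   * instead of lazily handing out chunks from an end pointer */
  void split_slab_page_into_freelist(char *page, slabclass_t *p) {
      for (size_t i = 0; i < p->perslab; i++) {
          free_item *it = (free_item *)(page + i * p->chunk_size);
          it->next = p->freelist;
          p->freelist = it;
          p->sl_curr++;
      }
  }
  ```

  Since a moved page is fully chunked the moment it is assigned, the mover never has to wait for lazy chunking to finish.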
* Avoid race condition in test by re-trying (Clint Byrum, 2012-07-27; 1 file, -3/+7)
  (ed note: yes, it doesn't check for a NULL and die after 20 times. This should mitigate until we can do better with writing the pidfile.)
* Fix inline issue with older compilers (gcc 4.2.2) [1.4.13] (Steve Wills, 2012-02-02; 1 file, -2/+2)
  (ed note: this needs to be redone in memcached.h as a static inline, or changed to a define.)
* Better detection of sasl_callback_ft (Dustin Sallings, 2012-02-01; 2 files, -1/+28)
* fix glitch with flush_all <future> [1.4.12] (dormando, 2012-02-01; 2 files, -3/+15)
  Reported by jhpark. Items at the bottom of the LRU would be popped for sets if flush_all was set for the "future" but said future hadn't arrived yet. item_get handled this correctly so the flush would not happen, but items at the bottom of the LRU would be reclaimed early. Added tests for this as well.
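  (ed note: the corrected "has the future arrived yet?" check can be sketched like this. Names and the exact comparison are illustrative, not lifted from memcached's source.)

  ```c
  #include <assert.h>
  #include <stdbool.h>

  /* Hypothetical sketch of the corrected check: a flush_all scheduled for
   * the future must not reclaim anything until that time arrives. */
  typedef int rel_time_t;

  /* flushed_at: flush_all deadline (0 = no flush scheduled)
   * item_time:  when the item was last linked/updated
   * now:        current time */
  bool item_is_flushed(rel_time_t flushed_at, rel_time_t item_time, rel_time_t now) {
      if (flushed_at == 0)
          return false;   /* no flush_all in effect */
      if (flushed_at > now)
          return false;   /* the "future" hasn't arrived yet */
      return item_time <= flushed_at;
  }
  ```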
* Skip SASL tests unless RUN_SASL_TESTS is defined. (Dustin Sallings, 2012-01-28; 1 file, -1/+6)
  This fails for various stupid platform-specific things. The SASL code can be working correctly, but not in a way that is completely predictable on every platform (for example, we may be missing a particular auth mode).
* Look around for saslpasswd2 (typically not in the user's path). (Dustin Sallings, 2012-01-28; 1 file, -1/+16)
* Specify hostname in sasl_server_new. (Dustin Sallings, 2012-01-27; 3 files, -1/+15)
  saslpasswd2 does something a little magical when initializing the structure that's different from what happens if you just pass NULL. The magic is too great for the tests as is, so this code does the same thing saslpasswd2 does to determine the fqdn.
* build fix: Define sasl_callback_ft on older versions of sasl. (Dustin Sallings, 2012-01-27; 1 file, -0/+4)
  They just changed this randomly with no way to really detect it. You can read about it here: http://lists.andrew.cmu.edu/pipermail/cyrus-sasl/2011-September/002340.html
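  (ed note: the shape of such a compatibility shim, sketched hypothetically. The HAVE_SASL_CALLBACK_FT guard name is invented for illustration; as the "Better detection" commit above shows, real detection needs a configure test since the header change isn't cleanly detectable.)

  ```c
  #include <assert.h>

  /* Hypothetical compat shim: newer cyrus-sasl declares callbacks via a
   * sasl_callback_ft typedef; older versions don't have it at all. */
  #ifndef HAVE_SASL_CALLBACK_FT
  typedef int (*sasl_callback_ft)(void);
  #endif

  /* example callback stored through the generic type, the way
   * sasl_callback_t initializer lists cast their procs */
  static int my_getopt_cb(void) { return 42; }
  ```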
* fix segfault when sending a zero byte command (dormando, 2012-01-26; 1 file, -1/+1)
  echo "" | nc localhost 11211 would segfault the server. The simple fix is to add the proper token check to the one place it's missing.
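  (ed note: the missing guard amounts to "check the token count before dereferencing". A hypothetical sketch with a toy tokenizer standing in for memcached's:)

  ```c
  #include <assert.h>
  #include <stdbool.h>
  #include <string.h>

  /* Hypothetical sketch: before dispatching on the command token, make
   * sure the tokenizer actually produced one. */
  #define MAX_TOKENS 8

  typedef struct { const char *value; size_t length; } token_t;

  static size_t tokenize_command(char *cmd, token_t *tokens) {
      size_t n = 0;
      for (char *p = strtok(cmd, " "); p != NULL && n < MAX_TOKENS;
           p = strtok(NULL, " ")) {
          tokens[n].value = p;
          tokens[n].length = strlen(p);
          n++;
      }
      return n;
  }

  /* an empty line (echo "" | nc localhost 11211) yields zero tokens and
   * must be rejected rather than dereferenced */
  bool command_has_tokens(char *cmd) {
      token_t tokens[MAX_TOKENS];
      return tokenize_command(cmd, tokens) > 0;
  }
  ```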
* fix warning in UDP test (dormando, 2012-01-26; 1 file, -1/+1)
* properly detect GCC atomics (dormando, 2012-01-25; 2 files, -5/+19)
  I was naive. GCC atomics were added in 4.1.2, and are not easily detectable without configure tests (32-bit platforms, centos5, etc).
* tests: loop on short binary packet reads (Dustin Sallings, 2012-01-25; 1 file, -3/+7)
  Awesome bug goes like this: let "c1" be the commit of the "good state" and "c2" be the commit immediately after (in a bad state). "t1" is the state of the tree in "c1" and "t2" is the state of the tree in "c2". In their natural states, we have this:

    c1 -> t1 -> success
    c2 -> t2 -> fail

  However, if you take:

    c1 -> t1 -> patch to t2 -> success
    c2 -> t2 -> patch to t1 -> fail

  So t1 *and* t2 both succeed if the committed tree is c1, but both fail if the committed tree is c2. The difference? c1 has a tag that points to it, so the version number is "1.4.10", whereas the version number for the unreleased c2 is "1.4.10-1-gee486ab": a bit longer, and it breaks stuff in tests that try to print stats.
* fix slabs_reassign tests on 32bit hosts (dormando, 2012-01-18; 1 file, -3/+3)
  32bit pointers are smaller... need more items to fill the slabs, sigh.
* update protocol.txt [1.4.11-rc1, 1.4.11] (dormando, 2012-01-11; 1 file, -0/+57)
* bug237: Don't compute incorrect argc for timedrun (Dustin Sallings, 2012-01-11; 1 file, -4/+2)
  Since spawn_and_wait doesn't use argc anyway, might as well just not send a value in.
* fix 'age' stat for stats items (dormando, 2012-01-11; 1 file, -1/+1)
  Credit goes to anton.yuzhaninov for the report and patch.
* binary deletes were not ticking stats counters (dormando, 2012-01-11; 1 file, -0/+6)
  Thanks to Stephen Yang for the bug report.
* test for the error code, not the full message (dormando, 2012-01-11; 1 file, -2/+2)
  Matching the full message is bad practice.
* more portable refcount atomics (dormando, 2012-01-10; 4 files, -8/+48)
  Most credit to Dustin and Trond for showing me the way, though I have no way of testing this myself. These should probably just be defines...
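  (ed note: "portable refcount atomics" here means using GCC's __sync builtins when present and a lock otherwise. A hypothetical sketch; the version check merely stands in for what a configure test would decide, and the names are invented.)

  ```c
  #include <assert.h>
  #include <pthread.h>

  /* Hypothetical sketch of a portable refcount increment: GCC __sync
   * builtins when available (added around 4.1.2), mutex fallback
   * otherwise. */
  #if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 2))
  #define HAVE_GCC_ATOMICS 1
  #endif

  static pthread_mutex_t atomics_mutex = PTHREAD_MUTEX_INITIALIZER;

  unsigned short refcount_incr(unsigned short *refcount) {
  #ifdef HAVE_GCC_ATOMICS
      return __sync_add_and_fetch(refcount, 1);
  #else
      pthread_mutex_lock(&atomics_mutex);
      unsigned short res = ++*refcount;
      pthread_mutex_unlock(&atomics_mutex);
      return res;
  #endif
  }
  ```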
* Fix a race condition from 1.4.10 on item_remove [1.4.11-beta1] (dormando, 2012-01-08; 2 files, -48/+71)
  Updates the slab mover for the new method. 1.4.10 lacks some crucial protection around item freeing and removal, resulting in some potential crashes. Moving the cache_lock around item_remove caused a 30% performance drop, so it's been reimplemented with GCC atomics. A refcount of 1 now means an item is linked but has no reference, which allows us to test an atomic sub-and-fetch of 0 as a clear indicator of when to free an item.
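  (ed note: the "sub-and-fetch reaching 0 means free it" scheme can be sketched as below. The item struct is a hypothetical stand-in, not memcached's.)

  ```c
  #include <assert.h>
  #include <stdbool.h>

  /* Hypothetical sketch of the refcount scheme described above: refcount
   * 1 means "linked in the LRU but unreferenced", so an atomic decrement
   * reaching 0 is an unambiguous signal that the item can be freed. */
  typedef struct {
      unsigned short refcount;
      bool freed;
  } item;

  void do_item_remove(item *it) {
      /* GCC atomic sub-and-fetch, per the commit message */
      if (__sync_sub_and_fetch(&it->refcount, 1) == 0) {
          it->freed = true; /* stand-in for item_free(it) */
      }
  }
  ```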
* fix braindead linked list fail (dormando, 2012-01-08; 1 file, -0/+1)
  I re-implemented a linked list for the slab freelist since we don't need to manage the tail, check the previous item, or use it as a FIFO. However, prev/next must be managed so the slab mover is safe. I neglected to clear prev on a fetch, so if the slab mover was zeroing the head of the freelist it would relink the next item in the freelist with one in the main LRU. Which results in chaos.
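  (ed note: the one-line fix boils down to clearing both link pointers when popping the freelist head. A hypothetical sketch, with an invented struct:)

  ```c
  #include <assert.h>
  #include <stddef.h>

  /* Hypothetical sketch: when popping the freelist head, clear *both*
   * link pointers so the slab mover can never follow a stale prev from a
   * live item back into the freelist. */
  typedef struct fitem {
      struct fitem *next;
      struct fitem *prev;
  } fitem;

  fitem *freelist_pop(fitem **head) {
      fitem *it = *head;
      if (it == NULL)
          return NULL;
      *head = it->next;
      if (*head != NULL)
          (*head)->prev = NULL;
      it->next = NULL;
      it->prev = NULL; /* the neglected clear that caused the chaos */
      return it;
  }
  ```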
* close some idiotic race conditions (dormando, 2012-01-06; 1 file, -4/+7)
  do_item_update could decide to update an item, then wait on the cache_lock, but the item could be unlinked in the meantime. Caused this to happen on purpose by flooding with sets, then flushing repeatedly; flush has to unlink items until it hits the previous second.
* reap items on read for slab mover (dormando, 2012-01-05; 1 file, -1/+11)
  Popular items could stall the slab mover forever, so if a move is in progress, check to see if the item we're fetching should be unlinked instead.
* no same-class reassignment, better errors (dormando, 2012-01-03; 5 files, -11/+18)
  Add human-parseable strings to the errors for slabs reassign. Also prevent reassigning memory to the same source and destination.
* initial slab automover (dormando, 2012-01-03; 5 files, -36/+402)
  Enable at startup with -o slab_reassign,slab_automove. Enable or disable at runtime with "slabs automove 1\r\n". Has many weaknesses: it only pulls from slabs which have had zero recent evictions, it is slow, it is not tunable, etc. Use the scripts/mc_slab_mover example to write your own external automover if this doesn't satisfy.
* slab reassignment (dormando, 2011-12-19; 8 files, -21/+457)
  Adds a "slabs reassign src dst" manual command, and a thread to safely process slab moves in the background.

  - The slab freelist is now a linked list, reusing the item structure.
  - If -o slab_reassign is enabled, an extra background thread is started.
  - The thread attempts to safely free up items when it's been told to move a page from one slab to another.
  - -o slab_automove is stubbed.

  There are some limitations. Most notable is that you cannot repeatedly move pages around without first having items use up the memory. Slabs with newly assigned memory work off of a pointer, handing out chunks individually. We would need to change that to quickly split chunks for all newly assigned pages into that slab's freelist. Further testing is required to ensure such is possible without impacting performance.
* clean do_item_get logic a bit. fix race. (dormando, 2011-12-15; 1 file, -28/+28)
  The race here is absolutely insane:

  - do_item_get and do_item_alloc are called at the same time, against different items
  - do_item_get wins the cache_lock race and returns an item for internal testing
  - do_item_alloc runs next, and pulls the same item do_item_get just got off the tail of a slab class
  - do_item_alloc sees refcount == 0, since do_item_get incrs it at the bottom, and starts messing with the item
  - do_item_get runs its tests, maybe even refcount++'s, and returns the item
  - evil shit happens

  This race is much more likely to hit during the slab reallocation work, so I'm fixing it even though it's almost impossible to cause. Also cleaned up the logic so it's not testing the item for NULL more than once. Far fewer branches now, though I did not examine gcc's output to see if it is optimized differently.
* clean up the do_item_alloc logic (dormando, 2011-12-15; 1 file, -11/+7)
  Fix an unlikely bug where search == NULL and the first alloc fails, which then attempts to use search. Also reorders branches from most likely to least likely, and removes all redundant tests that I can see. No longer double-checks things like refcount or exptime for the eviction case.
* shorten lock for item allocation more (dormando, 2011-12-12; 1 file, -3/+5)
  After pulling an item off of the LRU, there's no reason to hold the cache lock while we initialize a few values and memcpy some junk.
* Fix to build with cyrus sasl 2.1.25 (Steve Wills, 2011-12-03; 1 file, -1/+2)
* disable issue 140's test. [1.4.10] (dormando, 2011-11-09; 1 file, -1/+5)
  The fix for issue 140 only helped in the case of you poking at memcached with a handful of items (or this particular test). On real instances you could easily exhaust the 50-item search and still come up with a crap item. It was removed because adding the proper locks back in that place is difficult, and it makes "stats items" take longer under a gross lock anyway.
* Use a proper hash mask for item lock table (dormando, 2011-11-09; 1 file, -6/+24)
  Directly use the hash for accessing the table. Performance seems unchanged from before, but this is more proper. It also scales the hash table a bit as worker threads are increased.
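  (ed note: "use the hash directly with a proper mask" means sizing the lock table as a power of two so the index is a bitwise AND rather than a modulo. A hypothetical sketch; the sizes are invented, not memcached's.)

  ```c
  #include <assert.h>
  #include <stdint.h>

  /* Hypothetical sketch of indexing an item lock table directly with the
   * item's hash value. */
  #define ITEM_LOCK_HASHPOWER 4 /* 2^4 = 16 locks */
  #define ITEM_LOCK_COUNT (1u << ITEM_LOCK_HASHPOWER)
  #define ITEM_LOCK_MASK (ITEM_LOCK_COUNT - 1)

  /* low bits of the hash pick which lock guards the item */
  uint32_t item_lock_index(uint32_t hv) {
      return hv & ITEM_LOCK_MASK;
  }
  ```

  Scaling with worker threads then just means raising the hashpower so more locks share the contention.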
* push cache_lock deeper into item_alloc (dormando, 2011-11-09; 2 files, -2/+5)
  Easy win without restructuring item_alloc more: push the lock down until after it's done fiddling with snprintf.