path: root/thread.c
...
* allow redispatching sidethread conn to worker (dormando, 2016-08-19, 1 file, -12/+19)
  Also fixes a bug where metadump was closing the client connection after a
  single slab class. Not ported to the logger yet.
* remove some dead thread code (dormando, 2016-08-19, 1 file, -14/+1)
  is_listen_thread() was removed from service after the new listen sockets
  were added. This removes the rest of the code.
* typo in thread.c (dormando, 2016-08-11, 1 file, -1/+1)
  Reported by ramkrsna.
* fix zero hash items eviction (Eiichi Tsukata, 2016-07-13, 1 file, -1/+1)
  If all hash values of the five tail items are zero on the specified slab
  class, the expiry check is unintentionally skipped and the items stay
  without being evicted. Consequently, new item allocations consume memory
  every time an item is set, which leads to slab OOM errors.
* use X macros to remove several struct iterations (dormando, 2016-06-27, 1 file, -79/+17)
  Adding new stats used to require updating too many locations; no longer.
  Could extend this a bit to do the actual stats printing, but for clarity's
  sake I'll decide on that later.
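The X-macro technique this commit describes can be sketched roughly as below. The field and function names are hypothetical illustrations, not memcached's actual ones: each stat field is listed exactly once, and both the struct definition and the aggregation loop are generated from that single list, so a new stat needs only one new line.

```c
/* Minimal X-macro sketch (hypothetical names): the single field list
 * drives both the struct layout and the per-field aggregation. */
#define THREAD_STATS_FIELDS \
    X(get_cmds)             \
    X(get_misses)           \
    X(set_cmds)

struct thread_stats {
#define X(name) unsigned long long name;
    THREAD_STATS_FIELDS
#undef X
};

/* Summing per-thread stats no longer needs one hand-written line
 * per field. */
static void stats_aggregate(struct thread_stats *out,
                            const struct thread_stats *in)
{
#define X(name) out->name += in->name;
    THREAD_STATS_FIELDS
#undef X
}
```

Adding a stat then means adding one `X(...)` line; the struct and every generated loop pick it up automatically.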
* clean up global stats code a little (dormando, 2016-06-27, 1 file, -2/+2)
  Tons of stats were left in the global stats structure that are no longer
  used, and it looks like we kept accidentally adding new ones there. There
  was also an unused mutex. Split global stats into `stats` and
  `stats_state`: initialize both via memset, but reset only `stats` via
  memset, removing several places where stats values were repeated. Looks
  much cleaner and should be less error prone.
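A minimal sketch of the split described above, with illustrative field names rather than the real set: resettable counters live in one struct, live state that must survive a stats reset in another, so a blanket memset can never clobber state.

```c
#include <string.h>

/* Hypothetical field names; the point is the two-struct split. */
struct stats       { unsigned long long get_cmds, set_cmds, total_items; };
struct stats_state { unsigned int curr_conns, hash_power_level; };

struct stats stats;
struct stats_state stats_state;

/* Startup zeroes everything. */
void stats_init(void) {
    memset(&stats, 0, sizeof(stats));
    memset(&stats_state, 0, sizeof(stats_state));
}

/* A user-requested reset clears only the counters, never the
 * live state. */
void stats_reset(void) {
    memset(&stats, 0, sizeof(stats));
}
```

Because each struct is reset as a whole, no per-field reset code is needed, which is the "removing several places where stats values were repeated" part.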
* add get_flushed counter, fix expired_unfetched (dormando, 2016-06-27, 1 file, -0/+2)
  get_flushed ticks when directly fetching something that was classified as
  expired. The LRU crawler was ticking expired_unfetched for unfetched
  _flushed_ values; it should no longer do that. Also removes get_expired
  from the global stats list, which I missed in review.
* memcached idle connection killer (Jay Grizzard, 2016-06-21, 1 file, -1/+15)
* indent fixes in thread_libevent_process to make future changes easier (Jay Grizzard, 2016-06-21, 1 file, -19/+20)
* manage logger watcher flags (dormando, 2016-06-16, 1 file, -1/+1)
  Very temporary user control: allows watching either fetchers or evictions,
  but not both, and always with timestamps.
* initial logger code (dormando, 2016-06-16, 1 file, -0/+46)
  Logs are written to per-thread buffers. A new background thread aggregates
  the logs, further processes them, then writes them to any "watchers". Logs
  can have the time added to them, and all have a GID so they can be put
  back into strict order.
  This is an early preview. The code needs refactoring and a more complete
  set of options. All watchers are also stuck viewing the global feed of
  logs, even if they asked for different data. As of this commit there's no
  way to toggle the "stderr" watcher.
* Implement get_expired stats (sergiocarlos, 2016-05-28, 1 file, -4/+6)
* Setting the pthread id of the LIBEVENT_THREAD on thread creation (Saman Barghi, 2015-11-18, 1 file, -2/+1)
* remove duplicated "#include" (zhoutai, 2015-11-18, 1 file, -1/+0)
* ding-dong the cache_lock is dead (dormando, 2015-01-09, 1 file, -4/+1)
* first pass at LRU maintainer thread (dormando, 2015-01-03, 1 file, -0/+2)
  The basics work, but tests still do not pass. A background thread wakes up
  once per second, or when signaled. It is signaled if a slab class gets an
  allocation request and has fewer than N chunks free.
  The background thread shuffles the LRUs: HOT, WARM, COLD. HOT is where new
  items land. HOT and WARM flow into COLD; active items in COLD flow back to
  WARM. Evictions are pulled from COLD. item_update calls no longer do
  anything (and need to be fixed to tick it->time). Items are reshuffled
  within or around the LRUs as they reach the bottom.
  Ratios of HOT/WARM memory are hardcoded, as are the low/high watermarks.
  The thread is not fast enough right now; sets cannot block on it.
* Beginning work for LRU rework (dormando, 2015-01-02, 1 file, -36/+4)
  Primarily splitting cache_lock into a lock per LRU, and making the
  it->slab_clsid lookup indirect. cache_lock is now more or less gone.
  Stats are still wrong; they need to internally summarize over each
  sub-class.
* flush_all was not thread safe (dormando, 2015-01-01, 1 file, -9/+0)
  Unfortunately, if you disable CAS, all items set in the same second as a
  flush_all will immediately expire. This is the old (2006-ish) behavior.
  However, if CAS is enabled (as is the default), it will still be more or
  less exact.
  The locking issue is that if the LRU lock is held, you may not be able to
  modify an item if the item lock is also held. This means that some items
  may not be flushed if locking is done correctly. In the current code, it
  could lead to corruption, as an item could be locked and in use while the
  expunging is happening.
* cache_lock refactoring (dormando, 2015-01-01, 1 file, -11/+11)
  item_lock() now protects accesses to item structures; cache_lock is just
  for the LRU and LRU stats. This patch removes cache_lock from a number of
  places where it's no longer needed.
  Some pre-existing bugs became obvious: flush_all, cachedump, and slab
  reassignment's do_item_get short-circuit all need repairs.
* Fix issue #369 - uninitialized stats_lock (dormando, 2014-12-27, 1 file, -2/+1)
  "stats_lock is used in the assoc_init() function called in memcached.c,
  but it is only initialized in thread_init(), which is called after
  assoc_init()."
* Pause all threads while swapping hash table (dormando, 2014-12-27, 1 file, -46/+39)
  We used to hold a global lock around all modifications to the hash table.
  Then it was switched to wrapping hash table accesses in a global lock
  during hash table expansion, set by notifying each worker thread to change
  lock styles. There was a bug here which caused trylocks to clobber, due to
  the specific item locks not being held during the global lock:
  https://code.google.com/p/memcached/issues/detail?id=370
  The patch previous to this one uses item locks during hash table
  expansion. Since the item lock table is always smaller than the hash
  table, an item lock will always cover both its new and old buckets.
  However, we still need to pause all threads during the pointer swap and
  setup.
  This patch pauses all background and worker threads, swaps the hash table,
  then unpauses them. It trades the (possibly significant) slowdown during
  the hash table copy for a short total hang at the beginning of each
  expansion. As before, those worried about consistent performance can
  presize the hash table with `-o hashpower=n`.
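With every background and worker thread paused, the swap step itself needs no per-bucket locking at all. A rough sketch of just that step, with hypothetical names modeled loosely on hash-table expansion code, not the actual memcached functions:

```c
#include <stdlib.h>

typedef struct item item; /* opaque here; buckets are chains of items */

static item **primary_hashtable;
static item **old_hashtable;
static unsigned int hashpower = 16;
static int expanding = 0;

/* Caller guarantees all other threads are paused for the duration. */
void assoc_swap_tables_paused(void)
{
    old_hashtable = primary_hashtable;
    primary_hashtable = calloc((size_t)1 << (hashpower + 1),
                               sizeof(item *));
    if (primary_hashtable) {
        hashpower++;
        expanding = 1; /* buckets are migrated incrementally later */
    } else {
        /* allocation failed: keep serving from the old table */
        primary_hashtable = old_hashtable;
        old_hashtable = NULL;
    }
}
```

The expensive part, migrating buckets from `old_hashtable`, can then proceed with threads running again, which is why only this short swap needs the global pause.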
* use item lock instead of global lock when hash expanding (Jason CHAN, 2014-12-26, 1 file, -1/+1)
* rename thread_init to avoid runtime failure on AIX (dormando, 2014-12-14, 1 file, -2/+2)
  Patch by 'gwachter'.
* use the right hashpower for the item_locks table (dormando, 2014-04-27, 1 file, -3/+5)
  For some reason I can't understand, I was accessing item_locks via the
  main hash table's power level, then modulus'ed to the number of item
  locks. hashpower can change as the hash table grows, except it only ever
  changes while no item locks are being held (via the item_global_lock
  synchronization bits). The item_locks hashpower is static for the
  duration.
  So this isn't a safety issue, but instead just using the hash table wrong
  and doing an extra modulus. As an aside, this does improve benchmarks by a
  tiny bit.
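The fixed lookup can be sketched as below. The names are illustrative (memcached has similar `hashsize`/`hashmask` macros, but treat the rest as hypothetical): the item-lock table has its own fixed power, independent of the hash table's growing `hashpower`, so the bucket index is one mask and no extra modulus.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

#define hashsize(n) ((uint32_t)1 << (n))
#define hashmask(n) (hashsize(n) - 1)

/* Fixed for the process lifetime, unlike the hash table's power. */
static unsigned int item_lock_hashpower = 13;
static pthread_mutex_t *item_locks;

void item_locks_init(void)
{
    item_locks = calloc(hashsize(item_lock_hashpower),
                        sizeof(pthread_mutex_t));
    for (uint32_t i = 0; i < hashsize(item_lock_hashpower); i++)
        pthread_mutex_init(&item_locks[i], NULL);
}

/* Index directly with the item-lock mask; do NOT go through the
 * hash table's (growing) hashpower first and then take a modulus. */
pthread_mutex_t *item_lock_for(uint32_t hv)
{
    return &item_locks[hv & hashmask(item_lock_hashpower)];
}
```

Because the lock table never resizes, the same hash value always maps to the same mutex, which is what makes this safe without extra synchronization.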
* Make hash table algorithm selectable (dormando, 2014-04-16, 1 file, -8/+8)
  jenkins hash is old; let's try murmur3 to start! Default is the old one,
  so people aren't surprised.
* Add statistics for allocation failures (Trond Norbye, 2013-12-08, 1 file, -1/+5)
  This patch adds a new stat, "malloc_fails": a counter of how many times
  malloc/realloc/calloc returned NULL when we _needed_ it to return
  something else (resulting in closing the connection or something like
  that). Conditions where we could live without malloc returning a new chunk
  of memory are not tracked by this counter.
* Issue 294: Check for allocation failure (Trond Norbye, 2013-12-08, 1 file, -0/+7)
* Issue 293: Remove unused condition variable (Trond Norbye, 2013-12-08, 1 file, -3/+0)
* remove global stats lock from item allocation (dormando, 2012-09-03, 1 file, -0/+6)
  This doesn't reduce mutex contention much, if at all, for the global stats
  lock, but it does remove a handful of instructions from the alloc hot
  path, which is always worth doing. Previous commits possibly added a
  handful of instructions for the loop and for the bucket read-lock trylock,
  but this is still faster than .14 for writes overall.
* item locks now lock hash table buckets (dormando, 2012-09-03, 1 file, -17/+114)
  Expansion requires switching to a global lock temporarily, so all buckets
  have a covered read lock. The slab rebalancer is paused during hash table
  expansion. Internal item "trylocks" are always issued, and tracked, since
  the hash power variable can change out from under them.
* alloc loop now attempts an item_lock (dormando, 2012-09-03, 1 file, -1/+5)
  Fixes a few issues with a restructuring... I think -M was broken before
  (it had a refcount leak) and should be fixed now.
  The loop now walks up to five items from the bottom in case the bottommost
  items are item_locked or refcount-locked. This helps avoid excessive OOM
  errors in some oddball cases, which happen more often if you're hammering
  on a handful of pages in a very large class size (100k+).
  The hash item lock ensures that if we're holding that lock, no other
  thread can be incrementing the refcount lock at that time. It will mean
  more in future patches.
  The slab rebalancer gets a similar update.
* call mutex_unlock() when we use mutex_lock() (tag: 1.4.14; dormando, 2012-07-30, 1 file, -7/+7)
  Use both #defines when using the spinlock version of our locks. Not all
  locks are designed to be that way, so this doesn't touch the whole thing.
* Fix inline issue with older compilers (gcc 4.2.2) (tag: 1.4.13; Steve Wills, 2012-02-02, 1 file, -2/+2)
  Ed. note: this needs to be redone in memcached.h as a static inline, or
  changed to a #define.
* properly detect GCC atomics (dormando, 2012-01-25, 1 file, -5/+5)
  I was naive: GCC atomics were added in 4.1.2, and are not easily
  detectable without configure tests (32-bit platforms, CentOS 5, etc.).
* more portable refcount atomics (dormando, 2012-01-10, 1 file, -0/+38)
  Most credit to Dustin and Trond for showing me the way, though I have no
  way of testing this myself. These should probably just be defines...
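The portability approach behind this change can be sketched as below: use GCC's `__sync` atomic builtins where available, and fall back to a single mutex otherwise. Per the later "properly detect GCC atomics" commit, availability is best decided by a configure test rather than version macros; `HAVE_GCC_ATOMICS` here stands in for that test, and the function names are only modeled on the idea.

```c
#include <pthread.h>

#if defined(HAVE_GCC_ATOMICS)
unsigned short refcount_incr(unsigned short *refcount)
{
    return __sync_add_and_fetch(refcount, 1);
}
unsigned short refcount_decr(unsigned short *refcount)
{
    return __sync_sub_and_fetch(refcount, 1);
}
#else
/* Fallback for compilers without the builtins: one shared mutex
 * serializes all refcount changes. Slower, but correct everywhere. */
static pthread_mutex_t atomics_mutex = PTHREAD_MUTEX_INITIALIZER;

unsigned short refcount_incr(unsigned short *refcount)
{
    pthread_mutex_lock(&atomics_mutex);
    unsigned short res = ++*refcount;
    pthread_mutex_unlock(&atomics_mutex);
    return res;
}
unsigned short refcount_decr(unsigned short *refcount)
{
    pthread_mutex_lock(&atomics_mutex);
    unsigned short res = --*refcount;
    pthread_mutex_unlock(&atomics_mutex);
    return res;
}
#endif
```

Both paths return the post-operation value, so callers can test for "refcount hit zero" the same way regardless of which path was compiled in.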
* Use a proper hash mask for item lock table (dormando, 2011-11-09, 1 file, -6/+24)
  Directly use the hash for accessing the table. Performance seems unchanged
  from before, but this is more proper. It also scales the hash table a bit
  as worker threads are increased.
* push cache_lock deeper into item_alloc (dormando, 2011-11-09, 1 file, -2/+1)
  Easy win without restructuring item_alloc more: push the lock down until
  after it's done fiddling with snprintf.
* use item partitioned lock for as much as possible (dormando, 2011-11-09, 1 file, -16/+43)
  Push cache_lock deeper into the abyss.
* move hash calls outside of cache_lock (dormando, 2011-11-09, 1 file, -8/+20)
  This has been hard to measure while using the Intel hash (since it's very
  fast), but should help with the software hash.
* Use spinlocks for main cache lock (dormando, 2011-11-09, 1 file, -13/+13)
  Partly by Ripduman Sohan. Appears to significantly help prevent
  performance dropoff from additional threads, but only when the locks are
  frequently contested and held briefly.
* experimental maxconns_fast option (dormando, 2011-09-27, 1 file, -0/+2)
  Also fixes the -c option to allow reducing the maximum connection limit.
  This adds a new option, "-o maxconns_fast", which changes how memcached
  handles hitting the maximum connection limit. By default, it disables the
  accept listener, and new connections wait in the listen queue. With
  maxconns_fast enabled, new connections over the limit have an error
  written to them and are immediately closed by the listener thread.
  This is currently experimental, as we aren't sure how clients will handle
  the change. It may become the default in the future.
* Backport binary TOUCH/GAT/GATQ commands (dormando, 2011-09-27, 1 file, -0/+17)
  Taken from the 1.6 branch, partly written by Trond. I hope the CAS
  handling is correct.
* fix incr/decr race conditions for binary prot (dormando, 2011-07-11, 1 file, -2/+3)
  There were two race conditions in the incr/decr binary protocol handler.
  One was the original "fetches item outside of add_delta", and the second
  was in the initializer. I went for the quick fix by changing the semantics
  of the store request to be an ADD instead of a SET, so if someone beat
  them in that very narrow race the request simply bounces. Not perfect, but
  this is an improvement and good enough for now.
* fix incr/decr race conditions for ASCII prot (dormando, 2011-07-11, 1 file, -2/+3)
  binprot requires more work, since it touches CAS and also has a race for
  initializing a missed incr.
* Simplify stats aggregation code (Dan McGee, 2010-11-02, 1 file, -16/+4)
  We can use memset, unlike what the previous comment said, because this is
  a one-time allocated thread_stats struct that doesn't actually use the
  mutex for anything. This simplifies the setup code a decent amount and
  makes one fewer place where things need to be added if a new stat is
  introduced.
  Signed-off-by: Dan McGee <dan@archlinux.org>
* Added new stats to track sasl authentication (Matt Ingenthron, 2009-11-26, 1 file, -0/+6)
  Two new stats, auth_cmds and auth_unknowns, have been added to allow end
  users to track how often authentication commands are submitted and when
  they "fail". Successes can be calculated by clients. Renamed to
  auth_errors and added to protocol.txt.
* Cleanup of number of threads declarations (issue 91) (Dmitry Isaykin, 2009-09-18, 1 file, -16/+13)
  * Change settings.num_threads (-t option) meaning: it is now the number of
    worker threads.
  * Fix bug in -t option checking.
  * Simple data struct for dispatcher (no thread-local stats and so on).
  * No special threads[0] for dispatcher thread info.
  * thread_local_stats_{reset|aggregate} no longer cycles over the unused
    dispatcher thread stats.
  * Simplify thread initialization and connection dispatching logic.
  (notes from Dustin): A list in a commit is typically a red flag, but this
  isn't really listing a bunch of distinct things that were done, so much as
  a bunch of ways things were made better by a simple refactoring. I also
  added a test that verifies the server fails if you pass "-t 0". Before, it
  did not fail, but the whole server would crash when you connected to it.
  This test doesn't confirm the server crashed in that case, but at least
  confirms the exact issue 91 case: that it does the right thing when
  "-t 0" is specified.
* Issue 61: reqs_per_event handling (-R) is incorrect, leading to client lockups (Trond Norbye, 2009-07-09, 1 file, -0/+3)
* add_delta should return a proper status indicator (Dustin Sallings, 2009-06-29, 1 file, -3/+3)
  Before, it was returning text protocol responses, requiring special
  handling in the binary protocol.
* fix and test for issue 38 (server does not respond to binary requests) (Eric Lambert, 2009-05-02, 1 file, -5/+5)