path: root/src/expire.c
Commit log (newest first). Each entry: commit message (author, date, files changed, lines -/+).
* Tracking: NOLOOP internals implementation. (antirez, 2020-04-21, 1 file, -3/+3)

* PERSIST should notify a keyspace event (Guy Benoish, 2020-03-29, 1 file, -0/+1)

* Fix active expire division by zero. (antirez, 2020-01-01, 1 file, -4/+7)
  Likely fixes #6723. What happens, AFAIK, is this: we enter the main loop, where we keep expiring keys until the fraction still found to be logically expired falls below a given percentage. There are, however, other potential exit conditions, and the "sampled" variable is not always incremented inside the loop, because we may find no valid slot while scanning the hash table, only NULL dict entries. So when the do/while condition is evaluated at the end, we compute (expired*100/sampled), dividing by zero if we sampled 0 keys.
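  A minimal sketch of the guard this fix implies (names are illustrative, not the actual Redis code): the stale percentage must only be computed when at least one key was actually sampled.

      /* Sketch: a scan that hits only NULL buckets leaves `sampled` at
       * zero, so the loop condition must check it before dividing. */
      static int should_keep_expiring(long long expired, long long sampled,
                                      long long acceptable_stale_pct) {
          if (sampled == 0) return 0; /* nothing sampled: stop, no division */
          return (expired * 100 / sampled) > acceptable_stale_pct;
      }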
* Expire cycle: set a buckets limit as well. (antirez, 2019-11-18, 1 file, -2/+13)

* Expire cycle: fix parameters computation. (antirez, 2019-11-18, 1 file, -3/+2)

* Expire cycle: introduce configurable effort. (antirez, 2019-11-18, 1 file, -9/+28)
* Expire cycle: tolerate fewer stale keys, expire cycle CPU in INFO. (antirez, 2019-11-15, 1 file, -12/+19)
* Expire cycle: scan hash table buckets directly. (antirez, 2019-11-15, 1 file, -29/+68)

* Expire cycle: introduce the new state needed for the new algo. (antirez, 2019-11-14, 1 file, -0/+5)

* Client side caching: call the invalidation functions always. (antirez, 2019-07-22, 1 file, -1/+1)
  Otherwise the tracking table will never get garbage collected once there are no longer any clients with tracking enabled. Now the invalidation function immediately checks whether any table is allocated at all, and returns ASAP otherwise, so the overhead when the feature is not used should be near zero.
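  A minimal sketch of the early-return pattern described here (names are illustrative; in Redis the tracking table is allocated lazily, on first use):

      #include <stddef.h>

      /* Hypothetical stand-in for the lazily allocated tracking table. */
      static void *TrackingTable = NULL;

      /* Sketch: if no client ever enabled tracking, the table was never
       * allocated and invalidation costs a single NULL check. */
      void trackingInvalidateKeySketch(const char *key) {
          if (TrackingTable == NULL) return; /* feature unused: near-zero cost */
          /* ... otherwise look up `key` and notify tracking clients ... */
          (void)key;
      }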
* Client side caching: implement trackingInvalidateKey(). (antirez, 2019-07-03, 1 file, -0/+1)

* Fix typo. (shenlongxing, 2018-06-21, 1 file, -1/+1)

* Track number of logically expired keys still in memory. (antirez, 2018-02-19, 1 file, -1/+20)
  This commit adds two new fields to the INFO output, stats section:

      expired_stale_perc:0.34
      expired_time_cap_reached_count:58

  The first field is an estimate of the percentage of keys that are still in memory but already logically expired. The reason these keys are not yet reclaimed is that the active expire cycle can't spend more time reclaiming them, and at the same time nobody is accessing them. As the active expire cycle runs, and before it eventually returns to the caller (because of the time limit, or because fewer than 25% of the keys in each given database are logically expired), it collects the stats needed to populate this INFO field. Note that expired_stale_perc is a running average, where the current sample accounts for 5% and the history for 95%, so you'll see it changing smoothly over time.

  The other field, expired_time_cap_reached_count, counts the number of times the expire cycle had to stop because of the time limit, even though it was still finding a sizeable number of keys to expire. This lets people handling operations understand whether the Redis server, during mass-expiration events, is able to collect keys fast enough. It is normal for this field to increment during mass expires, but otherwise it should increment very rarely. When it increments constantly, it means the current workload is using a very significant percentage of CPU time to expire keys.

  This feature was created thanks to hints from Rashmi Ramesh and Bart Robinson of Twitter. In private email exchanges, they noted how important it was to improve the observability of this parameter in the Redis server: in big deployments, keys that are logically expired but not yet collected may account for a very large amount of wasted memory.
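  A minimal sketch of the 5%/95% running average described above (variable names are illustrative; the weights come from the commit message):

      /* Sketch: each new sample contributes 5% and the accumulated
       * history 95%, so the INFO field changes smoothly over time. */
      static double expired_stale_perc = 0.0;

      void update_stale_perc(double current_sample_perc) {
          expired_stale_perc = current_sample_perc * 0.05 +
                               expired_stale_perc * 0.95;
      }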
* Expire & latency: fix the missing latency records generated by expire. (zhaozhao.zz, 2017-11-21, 1 file, -8/+11)

* Issue #4027: unify comment and modify return value in freeMemoryIfNeeded(). (antirez, 2017-06-23, 1 file, -2/+3)
  It looks safer to return C_OK from freeMemoryIfNeeded() when clients are paused, because returning C_ERR may prevent writes from succeeding. There may be no difference in practice, since clients cannot execute writes while paused, but it looks more correct this way, at least conceptually. Related to PR #4028.
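  A minimal sketch of the return-value convention discussed here (the C_OK/C_ERR constants mirror Redis's; the function body is illustrative, not the actual implementation):

      #define C_OK 0
      #define C_ERR -1

      /* Sketch: while clients are paused no writes can run anyway, so
       * reporting success avoids refusing writes once the pause ends. */
      int freeMemoryIfNeededSketch(int clients_paused, int freed_enough) {
          if (clients_paused) return C_OK; /* treat the paused state as success */
          return freed_enough ? C_OK : C_ERR;
      }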
* Merge pull request #4028 from zintrepid/prevent_expirations_while_paused (Salvatore Sanfilippo, 2017-06-23, 1 file, -0/+4)
  Prevent expirations and evictions while paused.
* Prevent expirations and evictions while paused (Zachary Marquez, 2017-06-01, 1 file, -0/+4)
  Proposed fix to https://github.com/antirez/redis/issues/4027.
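  A minimal sketch of the guard this change describes (function and flag names are illustrative): while clients are paused the dataset must appear static, so the expiration path simply refuses to act.

      /* Sketch: an expired key is not deleted while clients are paused,
       * so reads keep seeing a static dataset until the pause ends. */
      int keyShouldExpireSketch(int clients_are_paused,
                                long long expire_at_ms, long long now_ms) {
          if (now_ms < expire_at_ms) return 0; /* not expired yet */
          if (clients_are_paused) return 0;    /* paused: defer expiration */
          return 1;                            /* expired, safe to delete */
      }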
* Fix PERSIST expired key resuscitation issue #4048. (antirez, 2017-06-13, 1 file, -6/+3)
* Expire: Update comment of activeExpireCycle function (lorneli, 2017-04-08, 1 file, -1/+1)
  The macro REDIS_EXPIRELOOKUPS_TIME_PERC has been replaced by ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC in commit 6500fabfb881a7ffaadfbff74ab801c55d4591fc.
* Writable slaves expires: fix leak in key tracking. (antirez, 2016-12-13, 1 file, -2/+11)
  We need to use a dictionary type that frees the key, since we copy the keys into the dictionary we use to track expires created on the slave side.
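  A minimal sketch of the fix, assuming a dict API with per-type destructor callbacks like Redis's dictType (names are illustrative; Redis would free an sds copy of the key name):

      #include <stdlib.h>

      /* Illustrative dict type: because the tracking dict stores its own
       * copy of each key name, the type must free that copy when an
       * entry is removed, or every copied key leaks. */
      typedef struct dictTypeSketch {
          void (*keyDestructor)(void *key);
      } dictTypeSketch;

      static void trackedKeyDestructor(void *key) {
          free(key); /* in Redis: sdsfree() on the copied key name */
      }

      dictTypeSketch slaveKeysDictType = { trackedKeyDestructor };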
* INFO: show num of slave-expires keys tracked. (antirez, 2016-12-13, 1 file, -0/+6)

* Fix created->created typo in expire.c (antirez, 2016-12-13, 1 file, -1/+1)

* Replication: fix the infamous key leakage of writable slaves + EXPIRE. (antirez, 2016-12-13, 1 file, -1/+134)
  BACKGROUND AND USE CASE

  Redis slaves are normally read only; however, they support a "writable" mode which is very handy when scaling reads on slaves that actually need write operations in order to access data. For instance, imagine slaves replicating certain Sets keys from the master. When accessing the data on the slave, we want to perform intersections between such Sets values. However we don't want to compute the intersection on every access: caching the result for some time is often a good idea. To do so, it is possible to set up a slave as a writable slave and perform the intersection on the slave side, perhaps setting a TTL on the resulting key so that it will expire after some time.

  THE BUG

  In order to have consistent replication, expiring keys is up to the master, which synthesizes DEL operations to send in the replication stream. However, slaves logically expire keys by hiding them from read attempts, so that if the master has not yet sent a DEL, clients still see logically expired keys as non existing. Because slaves don't actively expire keys by actually evicting them, but just mask them from the point of view of read operations, a key created on a writable slave with an expire set will be leaked forever:

  1. No DEL will ever be received from the master, which does not know about such a key at all.
  2. No eviction will be performed by the slave, since eviction must stay disabled on slaves (it is up to masters), otherwise consistency of data is lost.

  THE FIX

  In order to fix the problem, the slave should be able to tag, in some way, keys that were created on the slave side with an expire set. My solution uses a single additional dictionary, created by the writable slave only if needed. The dictionary is keyed by the key name we need to track: all keys that are set with an expire directly by a client writing to the slave are tracked. The value in the dictionary is a bitmap of all the DBs where such a key name needs to be tracked, so that a single dictionary can track keys across all the DBs used by the slave (this limits the solution to the first 64 DBs, but the Redis default is 16 DBs). The solution costs a small amount of complexity and CPU, and exactly zero when the feature is not used. The slave-side eviction is encapsulated in code that is not coupled with the rest of the Redis core, except for the hook used to track the keys.

  TODO

  I'm doing the first smoke tests to see if the feature works as expected: so far so good. Unit tests should be added before merging into the 4.0 branch.
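  A minimal sketch of the per-key DB bitmap described in THE FIX (names are illustrative, not the actual Redis code; the 64-bit mask is exactly what limits the scheme to the first 64 DBs):

      #include <stdint.h>

      /* One bit per database: bit `dbid` set means this key name got an
       * expire on the slave in DB `dbid` and must be tracked there. */
      uint64_t dbmask_add(uint64_t mask, int dbid) {
          return mask | (1ULL << dbid);
      }

      int dbmask_has(uint64_t mask, int dbid) {
          return (mask & (1ULL << dbid)) != 0;
      }

  The tracking dictionary would map each key name to such a mask, so a single lookup answers "is this key tracked in this DB?" for any of the first 64 databases.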
* Add expire.c and evict.c. (antirez, 2016-07-06, 1 file, -0/+354)