path: root/src/redis.h
* HyperLogLog API prefix modified from "P" to "PF". (antirez, 2014-03-31; 1 file, -4/+4)
  Using both the initials of Philippe Flajolet instead of just "P".
* HyperLogLog: make API use the P prefix in honor of Philippe Flajolet. (antirez, 2014-03-31; 1 file, -4/+4)
* HLLMERGE implemented. (antirez, 2014-03-31; 1 file, -0/+1)
  Merge N HLL data structures by selecting the max value for every M[i] register among the set of HLLs.
* String value unsharing refactored into proper function. (antirez, 2014-03-30; 1 file, -0/+1)
  All the Redis functions that need to modify the string value of a key in a destructive way (APPEND, SETBIT, SETRANGE, ...) require making the object unshared (if refcount > 1) and encoded in raw format (if the encoding is not already REDIS_ENCODING_RAW). This logic was cut & pasted in multiple places of the code. This commit puts the small logic needed into a function called dbUnshareStringValue().
* HLLCOUNT implemented. (antirez, 2014-03-28; 1 file, -0/+1)
* HLLADD implemented. (antirez, 2014-03-28; 1 file, -0/+1)
* HLLSELFTEST command implemented. (antirez, 2014-03-28; 1 file, -0/+1)
  Testing the bitfield counter array set/get macros from the Redis Tcl suite is hard, so a specialized command able to test the internals was developed.
* Add REDIS_MIN_RESERVED_FDS define for open fds. (Matt Stancliff, 2014-03-24; 1 file, -2/+3)
  Also update the original REDIS_EVENTLOOP_FDSET_INCR to include REDIS_MIN_RESERVED_FDS. REDIS_EVENTLOOP_FDSET_INCR exists to make sure more than (maxclients+RESERVED) entries are allocated, but we can only guarantee that if we include the current value of REDIS_MIN_RESERVED_FDS as a minimum for the INCR size.
* Fix maxclients error handling. (Matt Stancliff, 2014-03-24; 1 file, -1/+1)
  Everywhere in the Redis code base, maxclients is treated as an int with (int)maxclients or maxclients = atoi(source), so let's make maxclients an int. This fixes a bug where someone could specify a negative maxclients on startup and it would work (as well as setting maxclients very high) because:

      unsigned int maxclients;
      char *update = "-300";
      maxclients = atoi(update);
      if (maxclients < 1) goto fail;

  But (maxclients < 1) can only catch the case when maxclients is exactly 0. maxclients happily sets itself to -300, which isn't really -300 but rather 4294966996, which isn't < 1, so... everything "worked."
  maxclients config parsing checks for the case of < 1, but maxclients CONFIG SET parsing was checking for the case of < 0 (allowing maxclients to be set to 0). CONFIG SET parsing is now updated to match config parsing of < 1. It's tempting to add a MINIMUM_CLIENTS define, but... I didn't.
  These changes were inspired by antirez#356, but this doesn't fix that issue.
* Sample and cache RSS in serverCron(). (antirez, 2014-03-24; 1 file, -0/+1)
  Obtaining the RSS (Resident Set Size) info is slow on Linux and OSX, and this slowed down the generation of the INFO 'memory' section. Since the RSS does not need to be a real-time measurement, we now sample it with server.hz frequency (10 times per second by default) and use this value both to show the INFO rss field and to compute the fragmentation ratio. Practically this makes no difference for memory profiling of Redis but speeds up the INFO call significantly.
* The default maxmemory policy is now noeviction. (antirez, 2014-03-21; 1 file, -1/+1)
  This is safer, as by default maxmemory should just set a memory limit without deleting any key, unless the policy is set to something more relaxed.
* Use 24 bits for the lru object field and improve resolution. (antirez, 2014-03-20; 1 file, -3/+2)
  There were 2 spare bits inside the Redis object structure that are now used to enlarge the range of the LRU field 4x. At the same time the resolution was improved from 10 to 1 second: this still provides 194 days before the LRU counter overflows (restarting from zero). This is not a problem since it only causes a lack of eviction precision for objects not touched for a very long time, and the lack of precision is only temporary.
* Default LRU samples is now 5. (antirez, 2014-03-20; 1 file, -1/+1)
* LRU eviction pool implementation. (antirez, 2014-03-20; 1 file, -1/+18)
  This is an improvement over the previous eviction algorithm: we now use an eviction pool that is persistent across evictions of keys and gets populated with the best eviction candidates found so far. It approximates LRU eviction at a given number of samples better than the previous algorithm.
* Obtain LRU clock in a resolution dependent way. (antirez, 2014-03-20; 1 file, -1/+8)
  For testing purposes it is handy to have a very high resolution of the LRU clock, so that it is possible to experiment, with scripts running in just a few seconds, how the eviction algorithm works. This commit allows Redis to use the cached LRU clock, or a value computed on demand, depending on the resolution. So normally we get the good performance of a precomputed value, and a clock that wraps in many days using the normal resolution, but if needed, changing a define will switch behavior to a high-resolution LRU clock.
* Specify lruclock in redisServer structure via REDIS_LRU_BITS. (antirez, 2014-03-20; 1 file, -2/+1)
  The padding field was totally useless: removed.
* Specify LRU resolution in milliseconds. (antirez, 2014-03-20; 1 file, -1/+1)
* Set LRU parameters via REDIS_LRU_BITS define. (antirez, 2014-03-20; 1 file, -2/+3)
* Unify stats reset for CONFIG RESETSTAT / initServer(). (antirez, 2014-03-19; 1 file, -1/+2)
  Now CONFIG RESETSTAT makes sure to reset all the fields, and in the future it will be simpler to avoid missing new fields.
* Cluster: SORT get keys helper implemented. (antirez, 2014-03-10; 1 file, -0/+1)
* Cluster: evalGetKeys() added for EVAL/EVALSHA. (antirez, 2014-03-10; 1 file, -0/+1)
  Previously we used zunionInterGetKeys(); however, after this function was fixed to account for the destination key (not needed when the API was designed for "diskstore"), the two sets of commands can no longer be served by a single keys-extraction function.
* Cluster: getKeysFromCommand() API cleaned up. (antirez, 2014-03-10; 1 file, -7/+3)
  This API originated from the "diskstore" experiment, not from Redis Cluster itself, so there were legacy/useless things trying to differentiate between keys that are going to be overwritten and keys that need to be fetched from disk (preloaded). All useless with Cluster, so removed, with the result of code simplification.
* Fix REDIS_LRU_CLOCK_MAX: updateLRUClock's comment says it is 22 bits, but #define REDIS_LRU_CLOCK_MAX ((1<<21)-1) is only 21 bits. (zhanghailei, 2014-03-04; 1 file, -1/+1)
* Initial implementation of BITPOS. (antirez, 2014-02-27; 1 file, -0/+1)
  It appears to work, but more stress testing, plus both unit tests and fuzz testing, is needed to ensure the implementation is sane.
* Update cached time in rdbLoad() callback. (antirez, 2014-02-13; 1 file, -0/+1)
  server.unixtime and server.mstime are cached, less precise timestamps that we use whenever we don't need an accurate time representation and a syscall would be too slow for the number of calls we require. One such example is the initialization and update of the last interaction time with the client, which is used for timeouts. However, rdbLoad() can take some time to load the DB, and it did not update the time during DB loading. This resulted in the bug described in issue #1535, where during the replication process the slave loads the DB and creates the redisClient representation of its master, but the timestamp is so old that the master, under certain conditions, is sensed as already "timed out". Thanks to @yoav-steinberg and Redis Labs Inc for the bug report and analysis.
* AOF: don't abort on write errors unless fsync is 'always'. (antirez, 2014-02-12; 1 file, -0/+2)
  A system similar to the RDB write error handling is used: when we can't write to the AOF file, writes are no longer accepted until we are able to write again. For fsync == always we still abort on errors, since there is currently no easy way to avoid replying with success to the user otherwise, and this would violate the contract with the user of only acknowledging data already secured on disk.
* CLIENT PAUSE and related API implemented. (antirez, 2014-02-04; 1 file, -0/+4)
  The API is one of the building blocks of the CLUSTER FAILOVER command that executes a manual failover in Redis Cluster. However, exposed as a command that the user can call directly, it makes it much simpler to upgrade a standalone Redis instance using a slave in a safer way.
  The command works like this:

      CLIENT PAUSE <milliseconds>

  All the clients that are not slaves and not in MONITOR state are paused for the specified number of milliseconds. This means that slaves are normally served in the meantime. At the end of the specified amount of time all the clients are unblocked and will continue operations normally.
  This command has no effect on the population of the slow log, since clients are not blocked in the middle of operations but only when there is new data to process. Note that while the pause is active, new commands are still accepted and queued in the client buffer, so clients will likely not block while writing to the server.
* Scripting: use mstime() and mstime_t for lua_time_start. (antirez, 2014-02-03; 1 file, -2/+2)
  server.lua_time_start is expressed in milliseconds. Use mstime_t instead of long long, and populate it with mstime() instead of ustime()/1000. Functionally identical but more natural.
* Option "backlog" renamed "tcp-backlog". (antirez, 2014-01-31; 1 file, -2/+2)
  This is especially important since we already have a concept of backlog (the replication backlog).
* Add support for listen(2) backlog definition. (Nenad Merdanovic, 2014-01-31; 1 file, -0/+2)
  In high RPS environments, the default listen backlog is not sufficient, so giving users the power to configure it is the right approach, especially since it requires only minor modifications to the code.
* Cluster: configurable replicas migration barrier. (antirez, 2014-01-31; 1 file, -0/+1)
  It is possible to configure the minimum number of additional working slaves a master should be left with, for a slave to migrate to an orphaned master.
* Cluster: function clusterGetSlaveRank() added. (antirez, 2014-01-29; 1 file, -0/+1)
  Return the number of slaves of the same master having a better replication offset than the current slave; that is, the slave "rank" used to pick a delay before the request for election.
* Cluster: support to read from slave nodes. (antirez, 2014-01-14; 1 file, -0/+3)
  A client can enter a special cluster read-only mode using the READONLY command: if the client reads from a slave instance after this command, queries for slots that are actually served by the instance's master will be processed without redirection, allowing clients to read from slaves (but without any kind of read-after-write guarantee). The READWRITE command can be used to exit the read-only state.
* Set REDIS_AOF_REWRITE_MIN_SIZE to 64mb. (antirez, 2014-01-14; 1 file, -1/+1)
  64mb is the default value in redis.conf. For some reason the hard-coded default was instead 1mb, which is too small.
* Don't send REPLCONF ACK to old masters. (antirez, 2014-01-08; 1 file, -1/+1)
  Masters not understanding REPLCONF ACK will reply with errors to our requests, causing a number of possible issues. This commit detects a global replication offset set to -1 at the end of the replication handshake and marks the client representing the master with the REDIS_PRE_PSYNC flag. Note that this flag was called REDIS_PRE_PSYNC_SLAVE, but now it is just REDIS_PRE_PSYNC, as starting with this commit it is used for both slaves and masters. This commit fixes issue #1488.
* CONFIG REWRITE: don't throw away some options on config rewrite. (Yubao Liu, 2013-12-19; 1 file, -0/+1)
  Without this patch, these options would be dropped on rewrite: include, rename-command, min-slaves-to-write, min-slaves-max-lag, appendfilename.
* Fix wrong repldboff type which causes dropped replication in rare cases. (Yossi Gottlieb, 2013-12-11; 1 file, -1/+1)
* Slaves heartbeats during sync improved. (antirez, 2013-12-10; 1 file, -0/+1)
  The previous fix for false positive timeouts detected by the master was not complete. There is another blocking stage while loading data for the first synchronization with the master: flushing away the current data from the DB memory. This commit uses the newly introduced dict.c callback to do some incremental work (sending "\n" heartbeats to the master) while flushing the old data from memory.
  It is unfortunately hard to write a regression test for this issue. More support for debugging would be needed in the Redis core, in terms of functionality to simulate slow DB loading/deletion.
* dict.c: added optional callback to dictEmpty(). (antirez, 2013-12-10; 1 file, -1/+1)
  The Redis hash table implementation has many non-blocking features like incremental rehashing; however, while deleting a large hash table there was no way to have a callback invoked to do some incremental work. This commit adds this support, as an optional callback argument to dictEmpty() that is currently called at a fixed interval (once every 65k deletions).
* WAIT command: synchronous replication for Redis. (antirez, 2013-12-04; 1 file, -1/+9)
* BLPOP blocking code refactored to be generic & reusable. (antirez, 2013-12-03; 1 file, -2/+25)
* Removed old comments and dead code from freeClient(). (antirez, 2013-12-03; 1 file, -2/+0)
* Cluster: basic data structures for nodes black list. (antirez, 2013-11-29; 1 file, -0/+1)
* Sentinel: test for writable config file. (antirez, 2013-11-21; 1 file, -0/+1)
  This commit introduces a function called when Sentinel is ready for normal operations, to avoid putting Sentinel-specific stuff in redis.c.
* Sentinel: sentinelFlushConfig() to CONFIG REWRITE + fsync. (antirez, 2013-11-19; 1 file, -0/+1)
* Sentinel: CONFIG REWRITE support for Sentinel config. (antirez, 2013-11-19; 1 file, -0/+2)
* SCAN code refactored to parse cursor first. (antirez, 2013-11-05; 1 file, -1/+2)
  The previous implementation of SCAN parsed the cursor in the generic function implementing SCAN, SSCAN, HSCAN and ZSCAN. The actual higher-level command implementation only checked for empty keys and returned ASAP in that case. The result was that inverting the arguments of, for instance, SSCAN, and writing:

      SSCAN 0 key

  instead of:

      SSCAN key 0

  resulted in no error, since "0" is very likely a non-existing key name; the iterator simply returned no elements at all.
  To fix this issue the code was refactored to extract the function that parses the cursor and returns the error. Every higher-level command implementation now parses the cursor first, and only later checks whether the key exists or not.
* ZSCAN implemented. (antirez, 2013-10-28; 1 file, -0/+1)
* HSCAN implemented. (antirez, 2013-10-28; 1 file, -0/+1)
* SSCAN implemented. (antirez, 2013-10-28; 1 file, -1/+3)