path: root/src/redis.c
* RDMF (Redis/Disque merge friendliness) refactoring, WIP 1. (antirez, 2015-07-26, 1 file, -3908/+0)
* Merge pull request #2636 from badboy/cluster-lock-fix (Salvatore Sanfilippo, 2015-07-17, 1 file, -1/+0)
  Cluster lock fix
* Don't include sysctl header (Jan-Erik Rediger, 2015-06-24, 1 file, -1/+0)
  It's not needed (anymore) and is not available on Solaris.
* Merge pull request #2644 from MOON-CLJ/command_info_fix (Salvatore Sanfilippo, 2015-07-17, 1 file, -1/+1)
  pfcount support multi keys
* pfcount support multi keys (MOON_CLJ, 2015-06-26, 1 file, -1/+1)
* Fix: aof_delayed_fsync is not reset (Tom Kiemes, 2015-07-17, 1 file, -0/+1)
  aof_delayed_fsync was not set to 0 when calling CONFIG RESETSTAT.
* Client timeout handling improved. (antirez, 2015-07-16, 1 file, -12/+20)
  The previous attempt to process each client at least once every ten seconds
  was not a good idea, because:
  1. Usually, because of the past minimum of 50 iterations, you got a much
     better processing period most of the time.
  2. However, when there are many clients and a normal setting for server.hz,
     the edge case is triggered, and waiting 10 seconds for a BLPOP that asked
     for 1 second is not ok.
  3. Moreover, because of the high min-iterations limit of 50, when HZ was set
     to a high value the actual behavior was to process a lot of clients per
     second.
  Also, the function checking for timeouts called gettimeofday() at each
  iteration, which can be costly.
  The new implementation tries to process each client once per second, gets the
  current time as an argument, and does not attempt to process more than 5
  clients per iteration if not needed. So now:
  1. The CPU usage of an idle Redis process is the same or better.
  2. The CPU usage of a busy Redis process is the same or better.
  3. However, a non-trivial amount of work may be performed per iteration when
     there are very many clients. In this particular case the user may want to
     raise the "HZ" value if needed.
  With 4000 clients it was still not possible to notice any actual latency
  created by processing 400 clients per second, since the work performed for
  each client is pretty small.
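  A standalone sketch of the approach described above (hypothetical names and
  state, not the actual clientsCron() code): each cron call handles about
  numclients/hz clients, with a small minimum, so every client is visited
  roughly once per second, and the current time is taken once per call and
  passed down instead of calling gettimeofday() for every client.

      #include <stddef.h>
      #include <time.h>

      #define CLIENTS_CRON_MIN_ITERATIONS 5

      struct client { time_t last_interaction; };

      /* Hypothetical stand-ins for the server state. */
      #define NUM_CLIENTS 32
      static struct client clients[NUM_CLIENTS];
      static size_t numclients = NUM_CLIENTS;
      static int hz = 10;             /* cron calls per second */
      static size_t cron_cursor = 0;  /* where the previous call stopped */

      static void check_client_timeout(struct client *c, time_t now) {
          (void)c; (void)now;         /* would close the client if idle too long */
      }

      static void clients_cron_sketch(void) {
          size_t iterations = numclients / (size_t)hz;
          if (iterations < CLIENTS_CRON_MIN_ITERATIONS)
              iterations = numclients < CLIENTS_CRON_MIN_ITERATIONS
                         ? numclients : CLIENTS_CRON_MIN_ITERATIONS;

          time_t now = time(NULL);    /* taken once per cron call */
          while (iterations--) {
              check_client_timeout(&clients[cron_cursor], now);
              cron_cursor = (cron_cursor + 1) % numclients;
          }
      }

      int main(void) {
          clients_cron_sketch();      /* one cron tick */
          return 0;
      }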
* Clarify a comment in clientsCron(). (antirez, 2015-07-16, 1 file, -5/+5)
* EXISTS is now variadic. (antirez, 2015-07-13, 1 file, -1/+1)
  The new return value is the number of existing keys among the ones specified
  on the command line, counting the same key multiple times if it is given
  multiple times (and exists). See PR #2667.
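  A tiny standalone illustration of the counting semantics (the lookup below is
  a stand-in array, not the real keyspace):

      #include <stdio.h>
      #include <string.h>

      static int key_exists(const char **db, size_t dbsize, const char *key) {
          for (size_t i = 0; i < dbsize; i++)
              if (strcmp(db[i], key) == 0) return 1;
          return 0;
      }

      int main(void) {
          const char *db[] = { "foo", "bar" };            /* pretend keyspace */
          /* EXISTS foo foo nosuchkey bar */
          const char *args[] = { "foo", "foo", "nosuchkey", "bar" };
          long count = 0;
          for (size_t j = 0; j < sizeof(args) / sizeof(args[0]); j++)
              if (key_exists(db, sizeof(db) / sizeof(db[0]), args[j]))
                  count++;                                /* duplicates counted */
          printf("(integer) %ld\n", count);               /* prints 3 */
          return 0;
      }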
* Geo: fix command table keys position indexes for three commands. (antirez, 2015-07-13, 1 file, -3/+3)
  GEOHASH, GEOPOS and GEODIST were declared as commands not accepting keys, so
  the Redis Cluster redirection did not work. Closes #2671.
* GEOENCODE / GEODECODE commands removed. (antirez, 2015-07-09, 1 file, -2/+0)
  Rationale:
  1. The commands look like internals exposed without a really strong use case.
  2. Even where there is a use case, the client would implement the commands
     client-side instead of paying an RTT just to use a simple-to-reimplement
     library.
  3. They add complexity to an otherwise quite straightforward API.
  So for now, KILLED ;-)
* Geo: GEODIST and tests. (antirez, 2015-06-29, 1 file, -0/+1)
* Geo: command function names converted to lowercase, as elsewhere. (antirez, 2015-06-29, 1 file, -6/+6)
  In Redis, MULTIWORDCOMMANDNAME is mapped to a function where the command name
  is all lowercase: multiwordcommandnameCommand().
* Geo: GEOPOS command and tests. (antirez, 2015-06-29, 1 file, -0/+1)
* Geo: GEOHASH command added, returning standard geohash strings. (antirez, 2015-06-24, 1 file, -0/+1)
* Geo: JSON features removed (antirez, 2015-06-22, 1 file, -1/+1)
  The command can only return data in the normal Redis protocol. It is up to
  the caller to translate to JSON if needed.
* [In-Progress] Add Geo Commands (Matt Stancliff, 2015-06-22, 1 file, -0/+5)
  Current todo:
  - replace functions in zset.{c,h} with a new unified Redis zset access API.
  Once we get the zset interface fixed, we can squash relevant commits in this
  branch and have one nice commit to merge into unstable.
  This commit adds:
  - Geo commands
  - Tests; runnable with: ./runtest --single unit/geo
  - Geo helpers in deps/geohash-int/
  - src/geo.{c,h} and src/geojson.{c,h} implementing geo commands
  - Updated build configurations to get everything working
  - TEMPORARY: src/zset.{c,h} implementing zset score and zset range reading
    without writing to client output buffers
  - Modified linkage of one t_zset.c function for use in zset.c
  Conflicts:
    src/Makefile
    src/redis.c
* fix compile error for struct msghdr (clark.kang, 2015-05-05, 1 file, -0/+1)
* Cluster: redirection refactoring + handling of blocked clients. (antirez, 2015-03-24, 1 file, -21/+9)
  There was a bug in Redis Cluster caused by clients blocked in a blocking list
  pop operation, for keys no longer handled by the instance, or in a condition
  where the cluster became down after the client blocked. A typical situation
  is:
  1) BLPOP <somekey> 0
  2) <somekey> hash slot is resharded to another master.
  The client will block forever in this case.
  A symmetrical, non-cluster-specific bug happens when an instance is turned
  from master to slave. In that case it is more serious, since it will
  desynchronize data between slaves and masters. This other bug was discovered
  as a side effect of thinking about the bug explained and fixed in this
  commit, but will be fixed in a separate commit.
* Cluster: fix Lua scripts replication to slave nodes. (antirez, 2015-03-22, 1 file, -0/+2)
* Fix typo in beforeSleep() comment. (antirez, 2015-03-21, 1 file, -1/+1)
* Cluster: better cluster state transition handling. (antirez, 2015-03-20, 1 file, -0/+2)
  Before, we relied on the global cluster state to make sure all the hash slots
  are linked to some node when getNodeByQuery() is called, so finding a hash
  slot unbound was checked with an assertion. However, this is fragile. The
  cluster state is often updated in the clusterBeforeSleep() function, and not
  ASAP on state change, so it may happen that we process clients with a cluster
  state that is 'ok' but with certain hash slots set to NULL. With this commit
  the condition is also checked in getNodeByQuery() and reported with the
  identical error code of -CLUSTERDOWN but a slightly different error message,
  so that we have more debugging clues in the future. Root cause of issue
  #2288.
* Cluster: move clusterBeforeSleep() call before unblocked clients processing. (antirez, 2015-03-20, 1 file, -3/+6)
  Related to issue #2288.
* Net: better Unix socket error. Issue #2449. (antirez, 2015-03-11, 1 file, -1/+1)
* CONFIG refactoring: configEnum abstraction. (antirez, 2015-03-11, 1 file, -1/+1)
  Still many things to convert inside config.c in the next commits. Also some
  const safety in string object creation and the addReply() family of
  functions.
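  A minimal standalone sketch of the name<->value enum-table idea such an
  abstraction suggests (the table, field and function names below are
  illustrative, not the actual config.c API):

      #include <stdio.h>
      #include <string.h>
      #include <strings.h>

      typedef struct config_enum {
          const char *name;
          int value;
      } config_enum;

      /* Example table, NULL-terminated so lookups know where to stop. */
      static const config_enum loglevel_enum[] = {
          {"debug", 0}, {"verbose", 1}, {"notice", 2}, {"warning", 3},
          {NULL, 0}
      };

      static int enum_get_value(const config_enum *table, const char *name) {
          for (int i = 0; table[i].name != NULL; i++)
              if (strcasecmp(table[i].name, name) == 0) return table[i].value;
          return -1;  /* unknown name */
      }

      static const char *enum_get_name(const config_enum *table, int value) {
          for (int i = 0; table[i].name != NULL; i++)
              if (table[i].value == value) return table[i].name;
          return NULL;
      }

      int main(void) {
          printf("%d\n", enum_get_value(loglevel_enum, "notice"));  /* 2 */
          printf("%s\n", enum_get_name(loglevel_enum, 3));          /* warning */
          return 0;
      }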
* Hash: HSTRLEN (was HVSTRLEN) improved. (antirez, 2015-02-27, 1 file, -1/+1)
  1. HVSTRLEN -> HSTRLEN. It's unlikely one needs the length of the key, it is
     not clear how the API would work "by value", and there will be better
     names anyway.
  2. Default is to return 0 when the field is missing.
  3. Default is to return 0 when the key is missing.
  4. The implementation was slower than needed and produced unnecessary COW.
  Related to issue #2415.
* Merge pull request #2415 from landmime/unstable (Salvatore Sanfilippo, 2015-02-27, 1 file, -0/+1)
  added a new hvstrlen command
* added a new hvstrlen command (Jason Roth, 2015-02-21, 1 file, -0/+1)
  The HVSTRLEN command returns the length of a hash field value.
* Merge pull request #2273 from mattsta/improve/consistency/INFO/memory (Salvatore Sanfilippo, 2015-02-24, 1 file, -5/+20)
  Improve consistency of INFO MEMORY
* Add maxmemory limit to INFO MEMORY (Matt Stancliff, 2015-01-09, 1 file, -4/+10)
  Since we have the eviction policy, we should have the memory limit too.
* Improve consistency of INFO MEMORY fields (Matt Stancliff, 2015-01-09, 1 file, -1/+10)
  Adds used_memory_rss_human and used_memory_lua_human to match all the other
  fields reporting human-readable memory too.
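  A standalone sketch of how a "_human" field can be produced: format a raw
  byte count into a short human-readable string (this shows the general idea,
  not the exact helper used in redis.c):

      #include <stdio.h>

      static void bytes_to_human(char *buf, size_t buflen, unsigned long long n) {
          if (n < 1024ULL)
              snprintf(buf, buflen, "%lluB", n);
          else if (n < 1024ULL * 1024)
              snprintf(buf, buflen, "%.2fK", (double)n / 1024);
          else if (n < 1024ULL * 1024 * 1024)
              snprintf(buf, buflen, "%.2fM", (double)n / (1024.0 * 1024));
          else
              snprintf(buf, buflen, "%.2fG", (double)n / (1024.0 * 1024 * 1024));
      }

      int main(void) {
          char hmem[64];
          unsigned long long used_memory_rss = 7423049728ULL;  /* example value */
          bytes_to_human(hmem, sizeof(hmem), used_memory_rss);
          printf("used_memory_rss:%llu\n", used_memory_rss);
          printf("used_memory_rss_human:%s\n", hmem);
          return 0;
      }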
* alsoPropagate: handle REDIS_CALL_PROPAGATE and AOF loading. (antirez, 2015-02-11, 1 file, -4/+9)
* Change alsoPropagate() behavior to make it more usable. (antirez, 2015-02-11, 1 file, -2/+19)
  Now the API automatically creates its own copy of argv and increments the
  reference count of the passed objects.
* SPOP with count: initial fixes to the implementation. (antirez, 2015-02-11, 1 file, -5/+18)
  Several problems are addressed, but a few are still missing. Since
  replication of this command is more complex than for others, as it needs to
  replicate multiple SREM commands, an old API able to do this was reused (it
  was kept inside the implementation since it was pretty obvious that sooner or
  later it would be useful). The API was improved a bit so that a command may
  now opt out of the standard command replication performed when the
  server.dirty counter is incremented, in order to "manually" replicate what it
  wants.
* Separate latency monitoring of eviction loop and eviction DELs. (antirez, 2015-02-11, 1 file, -1/+5)
* Remove optional single-key path from evictionPoolPopulate(). (antirez, 2015-02-11, 1 file, -6/+0)
* dict.c: add dictGetSomeKeys(), specialized for eviction. (antirez, 2015-02-11, 1 file, -1/+1)
* Handle redis-check-rdb as a standalone program. (antirez, 2015-02-03, 1 file, -18/+6)
  This also keeps the usage backward compatible, except for the command name.
  However, the old command name was less obvious, so it is probably worth
  breaking it. With the new setup the program's main() can perform argument
  parsing and everything else useful for an RDB check regardless of the Redis
  server itself.
* Convert check-dump to Redis check-rdb mode (Matt Stancliff, 2015-01-28, 1 file, -0/+24)
  redis-check-dump is now named redis-check-rdb and it runs as a mode of
  redis-server instead of an independent binary. You can now use
  'redis-server redis.conf --check-rdb' to check the RDB defined in redis.conf.
  Using the argument --check-rdb checks the RDB and exits. We could potentially
  also allow the server to continue starting if the RDB check succeeds. This
  change also enables us to use RDB checking programmatically from inside Redis
  for certain failure conditions.
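  A rough standalone sketch of the "same binary, different mode" dispatch the
  two commits above describe; the checker stub, the flag handling details and
  the default file name are placeholders, not the actual redis-server code:

      #include <stdio.h>
      #include <string.h>

      static int check_rdb_main(const char *rdb_file) {
          printf("checking RDB file %s...\n", rdb_file);
          return 0;  /* a real checker would parse the file and report errors */
      }

      int main(int argc, char **argv) {
          /* Mode selected by the executable name (e.g. via a symlink)... */
          if (strstr(argv[0], "redis-check-rdb") != NULL)
              return check_rdb_main(argc > 1 ? argv[1] : "dump.rdb");

          /* ...or by an explicit flag: check, then exit instead of serving. */
          for (int i = 1; i < argc; i++)
              if (strcmp(argv[i], "--check-rdb") == 0)
                  return check_rdb_main(i + 1 < argc ? argv[i + 1] : "dump.rdb");

          printf("normal server startup would continue here\n");
          return 0;
      }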
* Supervise redis processes only if configured (Matt Stancliff, 2015-01-09, 1 file, -22/+51)
  Adds the configuration option 'supervised [no | upstart | systemd | auto]'.
  Also removed 'bzero' from the previous implementation, because it's 2015.
  (We could actually statically initialize those structs, but clang throws an
  invalid warning when we try, so it looks bad even though it isn't bad.)
  Fixes #2264.
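  For context, the systemd side of such supervision is the sd_notify protocol:
  send "READY=1" as a datagram to the socket named by $NOTIFY_SOCKET (upstart
  supervision instead expects the daemon to raise SIGSTOP). A minimal
  standalone sketch of the systemd notification, not the exact code added to
  redis.c:

      #include <stddef.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/socket.h>
      #include <sys/un.h>
      #include <unistd.h>

      static int notify_systemd_ready(void) {
          const char *path = getenv("NOTIFY_SOCKET");
          if (path == NULL || path[0] == '\0') return 0;  /* not under systemd */

          struct sockaddr_un su;
          memset(&su, 0, sizeof(su));
          su.sun_family = AF_UNIX;
          strncpy(su.sun_path, path, sizeof(su.sun_path) - 1);
          if (path[0] == '@') su.sun_path[0] = '\0';      /* abstract namespace */
          socklen_t addrlen =
              (socklen_t)(offsetof(struct sockaddr_un, sun_path) + strlen(path));

          int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
          if (fd == -1) return -1;

          const char *msg = "READY=1\n";
          ssize_t sent = sendto(fd, msg, strlen(msg), 0,
                                (struct sockaddr *)&su, addrlen);
          close(fd);
          return sent == (ssize_t)strlen(msg) ? 1 : -1;
      }

      int main(void) {
          return notify_systemd_ready() >= 0 ? 0 : 1;
      }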
* Define default pidfile when creating pid (Matt Stancliff, 2015-01-09, 1 file, -1/+5)
  We want pidfile to be NULL on startup so we can detect if the user set an
  explicit value versus only using the default value.
  Closes #1967. Fixes #2076.
* Create PID file even if in foreground (rebx, 2015-01-09, 1 file, -3/+4)
  Previously, Redis only wrote the pid file if it was daemonizing, but many
  times it's useful to have the pid written out even if you're in the
  foreground. Some background for this: I usually run redis via daemontools,
  which entails running redis-server in the foreground. Given that, I'd also
  want redis-server to create a pidfile so other processes (e.g. nagios) can
  run checks against it. Closes #463.
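  A tiny standalone sketch of writing a PID file regardless of whether the
  process daemonized (the path below is only an example):

      #include <stdio.h>
      #include <unistd.h>

      static void create_pid_file(const char *path) {
          FILE *fp = fopen(path, "w");
          if (fp) {
              fprintf(fp, "%d\n", (int)getpid());
              fclose(fp);
          }
      }

      int main(void) {
          create_pid_file("/tmp/redis.pid");  /* example path */
          return 0;
      }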
* Config: Add quicklist, remove old list options (Matt Stancliff, 2015-01-02, 1 file, -2/+2)
  This removes:
  - list-max-ziplist-entries
  - list-max-ziplist-value
  This adds:
  - list-max-ziplist-size
  - list-compress-depth
  Also updates the config file with new sections and updates tests to use
  quicklist settings instead of the old list settings.
* Add quicklist implementation (Matt Stancliff, 2015-01-02, 1 file, -0/+2)
  This replaces the individual ziplist vs. linkedlist representations for Redis
  list operations. Big thanks for all the reviews and feedback from everybody
  in https://github.com/antirez/redis/pull/2143.
* Allow all code tests to run using Redis args (Matt Stancliff, 2014-12-23, 1 file, -0/+24)
  Previously, many files had individual main() functions for testing, but each
  required being compiled with its own testing flags. That gets difficult when
  you have 8 different flags you need to set just to run all tests (plus, some
  test files required other files to be compiled against them, and it seems
  some didn't build at all without including the rest of Redis). Now all
  individual test main() functions are renamed to a test function for the file
  itself, and one global REDIS_TEST define enables testing across the entire
  codebase. Tests can now be run with:
  - ./redis-server test <test>
  e.g. ./redis-server test ziplist
  If REDIS_TEST is not defined, then no test code is compiled into the final
  redis-server binary.
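  A standalone sketch of the single-define pattern described above (the test
  function and dispatcher below are illustrative, not the actual redis-server
  code):

      #include <stdio.h>
      #include <string.h>

      #define REDIS_TEST  /* normally supplied by the build, e.g. -DREDIS_TEST */

      #ifdef REDIS_TEST
      /* What used to be a private main() in a file becomes a named test. */
      static int ziplist_test(void) {
          printf("running ziplist tests...\n");
          return 0;
      }
      #endif

      int main(int argc, char **argv) {
      #ifdef REDIS_TEST
          if (argc == 3 && strcmp(argv[1], "test") == 0) {
              if (strcmp(argv[2], "ziplist") == 0) return ziplist_test();
              fprintf(stderr, "unknown test: %s\n", argv[2]);
              return 1;
          }
      #endif
          printf("normal server startup would continue here\n");
          return 0;
      }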
* Add addReplyBulkSds() function (Matt Stancliff, 2014-12-23, 1 file, -5/+1)
  Refactor a common pattern into one function so we don't end up with
  copy/paste programming.
* INFO loading stats: three fixes. (antirez, 2014-12-23, 1 file, -3/+3)
  1. Server unixtime may remain not updated while loading the AOF, so the ETA
     is not updated correctly.
  2. The number of processed bytes was not initialized.
  3. Possible division-by-zero condition (likely the cause of issue #1932).
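  A minimal sketch of the kind of ETA computation these loading stats need,
  with an explicit guard against dividing by zero before any bytes have been
  processed (variable and function names are illustrative):

      #include <stdio.h>
      #include <time.h>

      static long compute_load_eta(long long total_bytes, long long loaded_bytes,
                                   time_t start_time, time_t now) {
          if (loaded_bytes <= 0) return -1;      /* nothing loaded yet: avoid /0 */
          double elapsed = difftime(now, start_time);
          double rate = (double)loaded_bytes / (elapsed > 0 ? elapsed : 1);
          return (long)((double)(total_bytes - loaded_bytes) / rate);
      }

      int main(void) {
          time_t start = time(NULL) - 10;        /* pretend 10 seconds elapsed */
          printf("eta: %lds\n", compute_load_eta(1000000, 250000, start, time(NULL)));
          printf("eta: %lds\n", compute_load_eta(1000000, 0, start, time(NULL)));
          return 0;
      }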
* Fix adjustOpenFilesLimit() logging to match real state. (antirez, 2014-12-19, 1 file, -12/+12)
  Fixes issue #2225.
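  For context, a standalone sketch of what an adjustOpenFilesLimit()-style
  helper does: try to raise RLIMIT_NOFILE toward the requested value and report
  the limit actually obtained rather than the one merely requested (the
  step-down strategy and numbers below are illustrative, not the redis.c code):

      #include <stdio.h>
      #include <sys/resource.h>

      static rlim_t adjust_open_files_limit(rlim_t wanted) {
          struct rlimit limit;
          if (getrlimit(RLIMIT_NOFILE, &limit) == -1) return 0;
          if (limit.rlim_cur >= wanted) return limit.rlim_cur;  /* already enough */

          /* Try the full request, then step down toward the current value. */
          rlim_t req = wanted;
          while (req > limit.rlim_cur) {
              struct rlimit attempt = { req, req };
              if (setrlimit(RLIMIT_NOFILE, &attempt) == 0) return req;
              req = req > 256 ? req - 256 : limit.rlim_cur;
          }
          return limit.rlim_cur;  /* could not raise it: report the real value */
      }

      int main(void) {
          rlim_t got = adjust_open_files_limit(10032);  /* e.g. 10000 clients + margin */
          printf("open files limit now: %llu\n", (unsigned long long)got);
          return 0;
      }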
* Merge pull request #2215 from advance512/spopWithCount (Salvatore Sanfilippo, 2014-12-17, 1 file, -1/+1)
  SPOP optional count argument. (issue #1793, supersedes pull request #1803)
* Added <count> parameter to SPOP. (Alon Diamant, 2014-12-14, 1 file, -1/+1)
  spopCommand() now runs spopWithCountCommand() in case the <count> param is
  found.
  Added intsetRandomMembers() to Intset: copies N random members from the set
  into the inputted 'values' array. Uses either the Knuth or Floyd sampling
  algorithm depending on the ratio count/size.
  Added setTypeRandomElements() to the SET type: returns a number of random
  elements from a non-empty set. This is a version of setTypeRandomElement()
  modified in order to return multiple entries, using dictGetRandomKeys() and
  intsetRandomMembers().
  Added tests for SPOP with <count>: unit/type/set, unit/scripting,
  integration/aof.
  Also cleaned up the code a bit to match the required Redis coding style.
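  A standalone sketch of Floyd's algorithm mentioned above: sampling `count`
  distinct indexes out of `size` without replacement. This is a generic
  illustration of the technique, not the intset code itself (it assumes
  count <= size):

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      /* Returns 1 if v is already among the first len picked values. */
      static int already_picked(const long *picked, int len, long v) {
          for (int i = 0; i < len; i++)
              if (picked[i] == v) return 1;
          return 0;
      }

      /* Floyd: for j = size-count .. size-1, draw t in [0, j]; take t unless it
       * was taken before, in which case take j (which cannot have been yet). */
      static void floyd_sample(long size, int count, long *out) {
          int len = 0;
          for (long j = size - count; j < size; j++) {
              long t = rand() % (j + 1);
              long pick = already_picked(out, len, t) ? j : t;
              out[len++] = pick;
          }
      }

      int main(void) {
          srand((unsigned)time(NULL));
          long sample[5];
          floyd_sample(100, 5, sample);  /* 5 distinct values in [0, 99] */
          for (int i = 0; i < 5; i++) printf("%ld ", sample[i]);
          printf("\n");
          return 0;
      }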