Commit log — format: subject [tag] — author, date (files changed, -removed/+added lines)
* Better SIGCHLD handling for #2897 debugging. [waitpid-fix] — antirez, 2015-11-30 (2 files, -1/+3)
* Fix renamed define after merge. — antirez, 2015-11-27 (1 file, -1/+1)
* Handle wait3() errors. — antirez, 2015-11-27 (1 file, -1/+7)

  My guess was that wait3() with WNOHANG could never return -1 with an error. However, issue #2897 may indicate that this can happen under unclear conditions. While we try to understand this better, it is safer to handle a return value of -1 explicitly: otherwise, if a BGREWRITE is in progress and wait3() returns -1, the first branch of the if/else block matches (since server.rdb_child_pid is -1) and backgroundSaveDoneHandler() is called without a good reason, which in turn crashes the Redis server with an assertion.
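The guarded branch can be sketched as pure decision logic. This is a toy model, not Redis's actual code: the enum and `classify_wait3_result` are illustrative names.

```c
/* Toy classification of what wait3(&statloc, WNOHANG, NULL) returned.
 * The fix is the explicit -1 branch: previously -1 fell through to the
 * "RDB child done" handler, because server.rdb_child_pid is also -1. */
typedef enum { CHILD_NONE, CHILD_EXITED, CHILD_WAIT_ERROR } child_status;

child_status classify_wait3_result(int pid) {
    if (pid == -1) return CHILD_WAIT_ERROR; /* log strerror(errno) and bail out */
    if (pid == 0)  return CHILD_NONE;       /* WNOHANG: no child has exited yet */
    return CHILD_EXITED;                    /* pid identifies the finished child */
}
```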
* Redis Cluster: hint about validity factor when slave can't failover. — antirez, 2015-11-27 (1 file, -2/+4)
* Remove "s" flag for MIGRATE in command table. — antirez, 2015-11-17 (1 file, -1/+1)

  There may be legitimate use cases for MIGRATE inside Lua scripts, so allow it at least for now. When the command is executed in an asynchronous fashion (planned), it is possible we'll no longer be able to permit it from within Lua scripts.
* Update redis-cli help and the script to generate it. — antirez, 2015-11-17 (2 files, -9/+183)
* Fix MIGRATE entry in command table. — antirez, 2015-11-17 (1 file, -1/+1)

  Thanks to Oran Agra (@oranagra) for reporting. Key extraction would not work otherwise, and it makes no sense to keep wrong data in the command table.
* Fix error reply in subscribed Pub/Sub mode. — antirez, 2015-11-09 (1 file, -1/+1)

  PING is now a valid command to issue in this context.
* CONTRIBUTING updated. — antirez, 2015-10-27 (1 file, -5/+9)
* Redis 3.0.5 [3.0.5] — antirez, 2015-10-15 (2 files, -1/+25)
* Add back blank line — David Thomson, 2015-10-15 (1 file, -0/+1)
* Update import command to optionally use copy and replace parameters — David Thomson, 2015-10-15 (1 file, -3/+7)
* Cluster: redis-trib fix, coverage for migrating=1 case. — antirez, 2015-10-15 (1 file, -2/+12)

  Kinda related to #2770.
* Redis.conf example: make clear user must pass its path as argument. — antirez, 2015-10-15 (1 file, -1/+6)
* Regression test for issue #2813. — antirez, 2015-10-15 (1 file, -0/+53)
* Move end-comment of handshake states. — antirez, 2015-10-15 (1 file, -1/+1)

  By mistake I missed the last handshake state. Related to issue #2813.
* Make clear that slave handshake states must be ordered. — antirez, 2015-10-15 (1 file, -0/+2)

  Make sure that future contributors will not break this rule. Related to issue #2813.
* Minor changes to PR #2813. — antirez, 2015-10-15 (1 file, -35/+22)

  * Function to test for slave handshake renamed slaveIsInHandshakeState.
  * Function no longer accepts arguments since it always tests the same global state.
  * Test for state translated to a range test since defines are guaranteed to stay in order in the future.
  * Use the new function in the ROLE command implementation as well.
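The range-test idea can be sketched like this. The state names and values below are illustrative placeholders, not the real defines from the Redis source:

```c
/* Ordered handshake states: every state between the START and END
 * markers is part of the handshake, so membership is a range check
 * that stays correct when new states are inserted in between. */
#define REPL_STATE_CONNECT          1
#define REPL_STATE_HANDSHAKE_START  2
#define REPL_STATE_RECEIVE_PONG     3
#define REPL_STATE_HANDSHAKE_END    4
#define REPL_STATE_CONNECTED        5

int repl_state = REPL_STATE_CONNECT; /* stands in for server.repl_state */

/* No arguments: it always tests the same global state. */
int slaveIsInHandshakeState(void) {
    return repl_state >= REPL_STATE_HANDSHAKE_START &&
           repl_state <= REPL_STATE_HANDSHAKE_END;
}
```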
* Fix master timeout during handshake — Kevin McGehee, 2015-10-15 (1 file, -3/+19)

  This change allows a slave to properly time out a dead master during the extended asynchronous synchronization state machine. Now, slaves will record their last interaction with the master and apply the replication timeout before a response to the PSYNC request is received.
* redis-cli pipe mode: don't stay in the write loop forever. — antirez, 2015-09-30 (1 file, -1/+6)

  The code was broken: redis-cli --pipe would, most of the time, write everything received on standard input to the Redis connection socket without ever reading back the replies, until all the content to write was written. This meant Redis had to accumulate all the output in the client's output buffers, consuming a lot of memory. Fixed thanks to the original report of anomalies in the behavior provided by Twitter user @fsaintjacques.
* Test: fix false positive in HSTRLEN test. — antirez, 2015-09-15 (1 file, -5/+5)

  HINCRBY* tests later used the value "tmp" that was sometimes generated by the random key generation function. The result was overwriting what Tcl expected to be inside Redis with another value, causing the next HSTRLEN test to fail.
* Test: MOVE expire test improved. — antirez, 2015-09-14 (1 file, -0/+13)

  Related to #2765.
* MOVE re-add TTL check fixed. — antirez, 2015-09-14 (1 file, -1/+1)

  getExpire() returns -1 when no expire exists. Related to #2765.
* MOVE now can move TTL metadata as well. — antirez, 2015-09-14 (2 files, -1/+16)

  MOVE was not able to move the TTL: when a key was moved to a different database number, it became persistent, as if PERSIST had been used. Incredibly (I guess almost nobody uses Redis MOVE), this bug remained unnoticed inside Redis internals for many years. Finally Andy Grunwald discovered it and opened an issue. This commit fixes the bug and adds a regression test. Close #2765.
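A toy model of the fix follows. The struct and function names are invented for illustration; Redis actually implements this with its db/dict API and setExpire():

```c
#include <string.h>

/* expire_ms == -1 means "no TTL", mirroring getExpire()'s convention. */
typedef struct { char name[32]; long long expire_ms; int in_use; } toy_key;

/* Move a key from one toy database slot to another. */
void toy_move(toy_key *src, toy_key *dst) {
    memcpy(dst->name, src->name, sizeof(dst->name));
    dst->in_use = 1;
    /* The fix: carry the TTL over. The buggy version skipped this step,
     * so the moved key became persistent as if PERSIST had been used. */
    dst->expire_ms = src->expire_ms;
    src->in_use = 0;
}
```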
* Release note typo fixed: senitel -> sentinel. — antirez, 2015-09-08 (1 file, -1/+1)
* Redis 3.0.4 [3.0.4] — antirez, 2015-09-08 (2 files, -1/+49)
* Sentinel: command arity check added where missing. — antirez, 2015-09-08 (1 file, -0/+2)
* Check args before running ckquorum. Fix issue #2635 — Rogerio Goncalves, 2015-09-08 (1 file, -0/+1)
* Fix merge issues in 490847c. — antirez, 2015-09-07 (1 file, -2/+2)
* Undo slaves state change on failed rdbSaveToSlavesSockets(). — antirez, 2015-09-07 (1 file, -10/+26)

  As Oran Agra suggested, in startBgsaveForReplication(), when the BGSAVE attempt returns an error we scan the list of slaves in order to remove them, since there is no way to serve them currently. However, we checked for the replication state BGSAVE_START, which was modified by rdbSaveToSlavesSockets() before forking. So when fork fails, the state of the slaves remains BGSAVE_END and no cleanup is performed. This commit fixes the problem by making rdbSaveToSlavesSockets() able to undo the state change on fork failure.
* Sentinel: fix bug in config rewriting during failover — antirez, 2015-09-07 (1 file, -1/+1)

  We have a check to rewrite the config properly when a failover is in progress, in order to add the current (already failed over) master as slave, and not to include in the slave list the promoted slave itself. However there was an issue: the variable with the right address was computed but never used when the code was modified, and no tests are available for this feature, for two reasons:

  1. The Sentinel unit test currently does not test Sentinel's ability to persist its state at all.
  2. It is a very hard state to trigger, since it lasts for little time in the context of the testing framework.

  However this feature should be covered in the test in some way. The bug was found by @badboy using the clang static analyzer.

  Effects of the bug on safety of Sentinel
  ===

  This bug results in severe issues in the following case:

  1. A Sentinel is elected leader.
  2. During the failover, it persists a wrong config with a known-slave entry listing the master address.
  3. The Sentinel crashes and restarts, reading the invalid configuration from disk.
  4. It sees that the slave now does not obey the logical configuration (it should replicate from the current master), so it sends a SLAVEOF command to the master (since the slave and the master are the same instance), creating a replication loop (an attempt to replicate from itself) which Redis is currently unable to detect.
  5. This means that the master is no longer available because of the bug.

  However the lack of availability should be only transient (at least in my tests, but other states could be possible where the problem is not recovered automatically) because:

  6. Sentinels treat masters reporting to be slaves as failing.
  7. A new failover is triggered, and a slave is promoted to master.

  Bug lifetime
  ===

  The bug has been there forever. Commit 16237d78 actually tried to fix it, but in the wrong way (the computed variable was never used! My fault). So this bug has been there basically since the start of Sentinel. Since it is hard to trigger, I remember few reports matching this condition, but at least some. Also, in automated tests where instances were stopped and restarted multiple times automatically, I remember hitting this issue; however I was not able to reproduce it, nor to determine with the information I had at the time what was causing it.
* Sentinel: clarify effect of resetting failover_start_time. — antirez, 2015-09-07 (1 file, -2/+4)
* SCAN iter parsing changed from atoi to chartoull — ubuntu, 2015-09-07 (1 file, -1/+1)
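A sketch of why the parser change matters (the function name here is illustrative, not the actual Redis code): SCAN cursors are unsigned 64-bit integers, while atoi() is limited to int and silently misparses cursors above INT_MAX.

```c
#include <stdlib.h>

/* Parse a SCAN cursor over the full unsigned 64-bit range. A production
 * version would also validate endptr and errno for malformed input. */
unsigned long long parse_cursor(const char *s) {
    return strtoull(s, NULL, 10);
}
```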
* Log client details on SLAVEOF command having an effect. — antirez, 2015-08-21 (1 file, -3/+8)
* startBgsaveForReplication(): handle waiting slaves state change. — antirez, 2015-08-21 (1 file, -47/+60)

  Before this commit, after triggering a BGSAVE it was up to the caller of startBgsaveForReplication() to handle slaves in WAIT_BGSAVE_START in order to update them accordingly. However, when the replication target is the socket, this is not possible, since the process of updating the slaves and sending the FULLRESYNC reply must be coupled with the process of starting an RDB save (the reason is, we need to send the FULLRESYNC reply and spawn a child that will start to send RDB data to the slaves ASAP). This commit moves the responsibility of handling slaves in WAIT_BGSAVE_START to startBgsaveForReplication(), so that for both diskless and disk-based replication we have the same chain of responsibility. To accommodate this change, syncCommand() also needs to put the client in the slave list ASAP (just after the initial checks) and not at the end, so that startBgsaveForReplication() can find the new slave already in the list. Another related change is what happens if the BGSAVE fails because of fork() or other errors: we now remove the slave from the list of slaves, send an error, and schedule the slave connection to be terminated. As a side effect of this change, the following errors found by Oran Agra are fixed (thanks!):

  1. rdbSaveToSlavesSockets() on failed fork will get the slaves cleaned up, otherwise they remain in a wrong state forever, since we set them up for full resync before actually trying to fork.
  2. updateSlavesWaitingBgsave() with the replication target set to "socket" was broken, since the function changed the slaves' state from WAIT_BGSAVE_START to WAIT_BGSAVE_END via replicationSetupSlaveForFullResync(), so later rdbSaveToSlavesSockets() would not find any slave in the right state (WAIT_BGSAVE_START) to feed.
* Fixed issues introduced during last merge. — antirez, 2015-08-20 (1 file, -2/+2)
* Force slaves to resync after unsuccessful PSYNC. — antirez, 2015-08-20 (1 file, -6/+10)

  Using chained replication, where C is a slave of B which is in turn a slave of A, if B reconnects the replication link with A but discovers it is no longer possible to PSYNC, slaves of B must be disconnected and PSYNC not allowed, since the new B dataset may be completely different after the synchronization with the master. Note that there are various semantic differences in the way this is handled now compared to the past. In the past the semantics were:

  1. When a slave lost the connection with its master, it disconnected the chained slaves ASAP. This is not needed, since after a successful PSYNC with the master, the slaves can continue and don't need to resync in turn.
  2. However, after a failed PSYNC the replication backlog was not reset, so a slave was able to PSYNC successfully even if the instance did a full sync with its master, now containing an entirely different data set.

  Now, instead, chained slaves are not disconnected when the slave loses the connection with its master, but only when it is forced to do a full SYNC with its master. This means that if the slave having chained slaves does a successful PSYNC, all its slaves can continue without trouble. See issue #2694 for more details.
* replicationHandleMasterDisconnection() belongs to replication.c. — antirez, 2015-08-20 (2 files, -14/+14)
* flushSlavesOutputBuffers(): details clarified via comments. — antirez, 2015-08-20 (1 file, -0/+6)

  Talking with @oranagra, we had to reason a little bit to understand whether this function could ever flush the output buffers of the wrong slaves: slaves in online state but actually not ready to receive writes before the first ACK is received from them (this happens with diskless replication). Next time we'll just read this comment.
* checkTcpBacklogSetting() now called in Sentinel mode too. — antirez, 2015-08-20 (1 file, -1/+1)
* slaveTryPartialResynchronization and syncWithMaster: better synergy. — antirez, 2015-08-07 (1 file, -14/+16)

  It is simpler if removing the read event handler from the FD is up to slaveTryPartialResynchronization(), after all it is only called in the context of syncWithMaster(). This commit also makes sure that on error all the event handlers are removed from the socket before closing it.
* syncWithMaster(): non blocking state machine. — antirez, 2015-08-07 (3 files, -101/+215)
* startBgsaveForReplication(): log what you really do. — antirez, 2015-08-06 (1 file, -3/+4)
* Replication: add REPLCONF CAPA EOF support. — antirez, 2015-08-06 (3 files, -11/+51)

  Add the concept of slave capabilities to Redis: the slave now presents itself to the Redis master with a set of capabilities in the form:

      REPLCONF capa SOMECAPA capa OTHERCAPA ...

  This has the effect of setting slave->slave_capa with the corresponding SLAVE_CAPA macros that the master can test later to understand whether the slave will understand certain formats and protocols of the replication process. This makes it much simpler to introduce new replication capabilities in the future in a way that doesn't break old slaves or masters. This patch was designed and implemented together with Oran Agra (@oranagra).
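The capability negotiation can be sketched as a bitmask. The SLAVE_CAPA macro names follow the commit's description; the parsing helper below is an illustrative assumption, not the actual Redis code:

```c
#include <strings.h>

#define SLAVE_CAPA_NONE 0
#define SLAVE_CAPA_EOF  (1<<0)  /* slave understands EOF-delimited diskless RDB */

/* Fold one "REPLCONF capa <name>" argument into the accumulated mask.
 * Unknown capabilities are silently ignored, so old and new sides
 * keep interoperating when new capabilities are introduced. */
int slave_add_capa(int capa, const char *name) {
    if (strcasecmp(name, "eof") == 0) capa |= SLAVE_CAPA_EOF;
    return capa;
}
```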
* Fix synchronous readline "\n" handling. — antirez, 2015-08-05 (1 file, -0/+3)

  Our function to read a line with a timeout handles newlines as requests to refresh the timeout; however, because of a bug in the loop logic, the code kept subtracting from the remaining buffer size every time a newline was received. Fixed by this commit.
* Fix replication slave pings period. — antirez, 2015-08-05 (1 file, -20/+26)

  For PINGs we use the period configured by the user, but for the newlines sent to slaves waiting for an RDB to be created (including slaves waiting for the FULLRESYNC reply) we need to ping with a frequency of 1 second, since the timeout is fixed and needs to be refreshed.
* Fix RDB encoding test for new csvdump format. — antirez, 2015-08-05 (1 file, -13/+13)
* Remove slave state change handled by replicationSetupSlaveForFullResync(). — antirez, 2015-08-05 (1 file, -1/+0)
* Make sure we re-emit SELECT after each new slave full sync setup. — antirez, 2015-08-05 (3 files, -17/+26)

  In previous commits we moved the FULLRESYNC to the moment we start the BGSAVE, so that the offset we provide is the right one. However, this also means that we need to re-emit the SELECT statement every time a new slave starts to accumulate the changes. To obtain this effect in a cleaner way, the function that sends the FULLRESYNC reply was overloaded with the more important role of also doing this and changing the slave state. So it was renamed to replicationSetupSlaveForFullResync() to better reflect what it does now.
* Test: csvdump now scans all DBs. — antirez, 2015-08-05 (1 file, -32/+36)