path: root/tests/integration/replication.tcl
Commit history (each entry: commit message, author, date, files changed, lines -removed/+added):
* prevent diskless replica from terminating on short read (Oran Agra, 2019-07-17, 1 file, -7/+90)

  Now that the replica can read the RDB directly from the socket, it should avoid exiting on a short read and instead try to re-sync. This commit tries to have a minimal effect on non-diskless RDB reading, and includes a test that tries to trigger this scenario in various read cases.
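
  For illustration, a minimal sketch (not the committed test) of the kind of check this scenario calls for, using the suite's wait_for_condition and s helpers and assuming the replica is the current server (level 0): after the transfer is cut short, the replica should retry the sync rather than exit, so the link eventually comes back up.

      # Hypothetical check, assuming the replica's RDB transfer was just
      # interrupted mid-stream: up to 100 retries, 100 ms apart.
      wait_for_condition 100 100 {
          [s 0 master_link_status] eq {up}
      } else {
          fail "replica did not re-sync after the interrupted RDB transfer"
      }
      assert_equal {slave} [s 0 role]   ;# the process is still alive and replicating
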
* diskless replication on slave side (don't store rdb to file), plus some other related fixes (Oran Agra, 2019-07-08, 1 file, -70/+144)

  The implementation of diskless replication was previously diskless only on the master side; the slave side still stored the received RDB file on disk before loading it back in and parsing it. This commit adds two modes for loading the RDB directly from the socket: 1) when-empty, and 2) using "swapdb". A third mode, making the slave diskless via flushdb, is risky and is currently not included.

  Other changes:
  - Distinguish between AOF configuration and AOF state, so that AOF is re-enabled only when the sync eventually succeeds (and not when exiting from readSyncBulkPayload after a failed attempt); previously CONFIG GET and INFO during RDB loading would also have lied.
  - When loading the RDB from the network, don't kill the server on a short read (which can be a network error).
  - Fix the RDB check when performed on a preamble AOF.
  - Tests: run the replication tests for the diskless slave too, make the replication test a bit more aggressive, and add a test for diskless load with swapdb.
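
  A hedged sketch of how such a diskless-load test could be wired up with the suite's nested start_server blocks; the directive names used here (repl-diskless-sync, repl-diskless-sync-delay, repl-diskless-load and its swapdb value) are assumptions based on this description, not necessarily the exact names that shipped.

      start_server {tags {"repl"}} {
          start_server {} {
              set master  [srv -1 client]
              set replica [srv 0 client]

              # Master serves the RDB without touching disk; replica loads it
              # straight from the socket, swapping the db only on success.
              $master  config set repl-diskless-sync yes
              $master  config set repl-diskless-sync-delay 0
              $replica config set repl-diskless-load swapdb

              $replica slaveof [srv -1 host] [srv -1 port]
              wait_for_condition 50 100 {
                  [s 0 master_link_status] eq {up}
              } else {
                  fail "replica did not complete the diskless sync"
              }
          }
      }
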
* Remove debugging printf from replication.tcl test. (antirez, 2018-12-12, 1 file, -1/+0)
* Slave removal: remove slave from integration tests descriptions. (antirez, 2018-09-11, 1 file, -16/+16)
* Test: processing of master stream in slave -BUSY state. (antirez, 2018-08-31, 1 file, -0/+44)
  See #5297.
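 
  A hedged sketch (an assumed shape, not the committed test) of how a replica can be pushed into the -BUSY state for such a check: a deferring client runs a script past lua-time-limit on the replica while the master keeps streaming writes, then the script is killed and the replica is expected to catch up. The $master and $replica handles and the server layout are assumptions.

      # Assumes the replica is the current server (redis_deferring_client
      # connects to level 0 by default).
      $replica config set lua-time-limit 50
      set rd [redis_deferring_client]
      $rd eval {while true do end} 0      ;# never returns, so the replica starts replying -BUSY

      for {set j 0} {$j < 100} {incr j} { $master set key:$j $j }
      after 200
      catch {$replica script kill}        ;# SCRIPT KILL is still accepted while -BUSY

      wait_for_condition 50 100 {
          [$replica get key:99] eq {99}
      } else {
          fail "replica did not apply the master stream after leaving -BUSY"
      }
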
* Regression test for issue #2813. (antirez, 2015-10-15, 1 file, -0/+53)
* Test: regression for issue #2473. (antirez, 2015-03-27, 1 file, -8/+44)
* Attempt to prevent false positives in replication test. (antirez, 2014-11-24, 1 file, -11/+15)
* Diskless replication tested with the multiple slaves consistency test. (antirez, 2014-10-24, 1 file, -64/+67)
* Remove trailing spaces from tests (Matt Stancliff, 2014-09-29, 1 file, -5/+5)
* Test: AOF rewrite during write load. (antirez, 2014-07-10, 1 file, -9/+0)
* Fixed assert conditional in ROLE command test. (antirez, 2014-06-26, 1 file, -1/+1)
* Basic tests for the ROLE command. (antirez, 2014-06-23, 1 file, -0/+17)
* Make tests compatible with new INFO replication output. (antirez, 2013-05-30, 1 file, -1/+1)
* Use `info nameofexecutable` to find current executable (Johan Bergström, 2013-01-24, 1 file, -1/+2)
* A reimplementation of blocking operation internals. (antirez, 2012-09-17, 1 file, -3/+7)

  Redis provides support for blocking operations such as BLPOP or BRPOP. These operations are identical to normal LPOP and RPOP as long as there are elements in the target list, but if the list is empty they block waiting for new data to arrive. All the clients blocked waiting for the same list are served in a FIFO way, so the first that blocked is the first to be served when more data is pushed by another client into the list.

  The previous implementation of blocking operations was conceived to serve clients in the context of push operations. For instance:

  1) There is a client "A" blocked on list "foo".
  2) The client "B" performs `LPUSH foo somevalue`.
  3) The client "A" is served in the context of the "B" LPUSH, synchronously.

  Processing things in a synchronous way was useful because if "B" pushes a value that is immediately served to "A", from the point of view of the database it is a NOP (no operation): nothing is replicated, nothing is written to the AOF file, and so forth.

  However, we later implemented two things:

  1) Variadic LPUSH, which can add multiple values to a list in the context of a single call.
  2) BRPOPLPUSH, a version of BRPOP that also provides a "PUSH" side effect when receiving data.

  This forced us to make the synchronous implementation more complex. If client "B" is waiting for data and "A" pushes three elements in a single call, we needed to propagate an LPUSH with a missing argument to the AOF and the replication link. We also needed to make sure to replicate the LPUSH side of BRPOPLPUSH, but only if it did not in turn happen to serve another client blocked on another list ;)

  This was complex, but with a few mutually recursive functions everything worked as expected... until one day we introduced scripting in Redis.

  Scripting + synchronous blocking operations = Issue #614. Basically you can't "rewrite" a script to have just a partial effect on the replicas and the AOF file if the script happened to serve a few blocked clients.

  The solution to all these problems, implemented by this commit, is to change the way we serve blocked clients. Instead of serving them synchronously, in the context of the command performing the PUSH operation, it is now an asynchronous and iterative process:

  1) If a key that has clients blocked waiting for data is the subject of a list push operation, we simply mark the key as "ready" and put it into a queue.
  2) Every command pushing to lists, be it a variadic LPUSH, a script, or whatever, is replicated verbatim without any rewriting.
  3) Every time a Redis command, a MULTI/EXEC block, or a script completes its execution, we run the list of keys that are ready to serve blocked clients (as more data arrived), and process this list serving the blocked clients.
  4) As a result of "3", more keys may become ready again for other clients (as a result of BRPOPLPUSH we may have push operations), so we iterate back to step "3" if needed.

  The new code has much simpler semantics and a simpler-to-understand implementation, with the disadvantage of not being able to "optimize out" a PUSH+BPOP as a no-op. This commit will be tested with care before the final merge; more tests will likely be added.
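 
  Seen from a replication test like this file, the user-visible consequence can be sketched roughly as follows (an assumed test shape, using this file's usual layout where the current server is the master and r -1 is the replica): a blocked client is served out of a variadic LPUSH, the push is replicated verbatim, and master and replica converge on the same list.

      set rd [redis_deferring_client]
      $rd blpop mylist 0                    ;# block on the still-empty list
      r lpush mylist a b c                  ;# one variadic push serves the blocked client
      assert_equal {mylist c} [$rd read]    ;# the head element went to the blocked client

      # The LPUSH was propagated without rewriting, so both sides must converge.
      wait_for_condition 50 100 {
          [r lrange mylist 0 -1] eq [r -1 lrange mylist 0 -1]
      } else {
          fail "master and replica list contents diverged"
      }
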
* Properly wait the slave to sync with master in BRPOPLPUSH test. (antirez, 2012-04-30, 1 file, -3/+7)
* A more lightweight implementation of issue 141 regression test. (antirez, 2012-04-29, 1 file, -13/+26)
* Redis test: More reliable BRPOPLPUSH replication test. (antirez, 2012-04-26, 1 file, -2/+5)

  Now it uses the new wait_for_condition testing primitive. The wait_for_condition implementation was also fixed in this commit to properly escape the expr command and its argument.
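 
  For reference, the primitive is used along these lines in the suite (a representative shape, not the exact committed test body): retry an expression up to a maximum number of times, with a delay in milliseconds between attempts, and fail with a message if it never becomes true.

      # 50 attempts, 100 ms apart; the -1 level (the replica here) is an assumption.
      wait_for_condition 50 100 {
          [s -1 master_link_status] eq {up}
      } else {
          fail "Replication not started."
      }
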
* On slow computers, 10 seconds are not enough for this heavy replication test. (antirez, 2012-04-04, 1 file, -1/+1)
* Possible fix for false positives in issue 141 regression test (antirez, 2012-01-12, 1 file, -1/+5)
* Regression test for the main problem causing issue #141. Minor changes/fixes/additions to the test suite itself needed to write the test. (antirez, 2012-01-06, 1 file, -0/+68)
* Regression test for issue #142 added (antirez, 2011-10-17, 1 file, -0/+6)
* new test engine valgrind support (antirez, 2011-07-11, 1 file, -0/+1)
* replication test split into three parts in order to improve test execution time. Random fixes and improvements. (antirez, 2011-07-11, 1 file, -38/+0)
* Fix for bug 561 and other related problems (antirez, 2011-06-20, 1 file, -0/+18)
* Comment typo fixed (antirez, 2011-05-24, 1 file, -1/+1)
* replication with expire test modified to produce fewer or no false failures (antirez, 2011-05-12, 1 file, -0/+2)
* replication test with expires (antirez, 2010-08-03, 1 file, -0/+18)
* better random dataset creation function in test. master-slave replication test now is able to save the two datasets in CSV when an inconsistency is detected. (antirez, 2010-07-28, 1 file, -0/+13)
* First implementation of a replication consistency test (antirez, 2010-07-06, 1 file, -0/+15)
* tags for existing tests (Pieter Noordhuis, 2010-06-02, 1 file, -1/+1)
* changed how server.tcl accepts options to support more directives without requiring more arguments to the proc (Pieter Noordhuis, 2010-06-02, 1 file, -2/+2)
* initial rough integration test for replication (Pieter Noordhuis, 2010-05-14, 1 file, -0/+32)