path: root/src
Commit message (author, date; files changed, lines -/+)
* Fix synchronous readline "\n" handling. (antirez, 2015-08-05; 1 file, -0/+3)
|     Our function to read a line with a timeout handles newlines as
|     requests to refresh the timeout; however, due to a bug in the loop
|     logic, the code kept subtracting from the remaining buffer size every
|     time a newline was received. Fixed by this commit.
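
A minimal sketch of the fixed loop shape, assuming a hypothetical
readByteWithTimeout() helper rather than the actual syncio.c code: a bare
newline refreshes the timeout but must not consume the remaining buffer
budget.

    #include <poll.h>
    #include <unistd.h>
    #include <sys/types.h>

    /* Hypothetical helper: read one byte, waiting at most timeout_ms. */
    static ssize_t readByteWithTimeout(int fd, char *c, int timeout_ms) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        if (poll(&pfd, 1, timeout_ms) <= 0) return -1;
        return read(fd, c, 1);
    }

    /* Read a line of at most 'size'-1 chars; empty lines are keepalives. */
    static ssize_t readLineSketch(int fd, char *ptr, size_t size, int timeout_ms) {
        size_t nread = 0;
        if (size == 0) return -1;
        size--; /* leave room for the terminating '\0' */
        while (size) {
            char c;
            if (readByteWithTimeout(fd, &c, timeout_ms) != 1) return -1;
            if (c == '\n') {
                /* A newline with no payload just refreshes the timeout:
                 * crucially, 'size' is NOT decremented here (the bug). */
                if (nread == 0) continue;
                *ptr = '\0';
                if (*(ptr-1) == '\r') *(ptr-1) = '\0';
                return (ssize_t)nread;
            }
            *ptr++ = c; *ptr = '\0';
            nread++;
            size--; /* only payload bytes consume buffer space */
        }
        return (ssize_t)nread;
    }
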
* Fix replication slave pings period. (antirez, 2015-08-05; 1 file, -20/+26)
|     For PINGs we use the period configured by the user, but for the
|     newlines sent to slaves waiting for an RDB to be created (including
|     slaves waiting for the FULLRESYNC reply) we need to ping at a
|     frequency of one second, since the timeout is fixed and needs to be
|     refreshed.
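
A sketch of the split described above, with illustrative names rather than
the actual replicationCron() code: real PINGs follow the user-configured
period, while pre-synchronization slaves get a raw newline every second,
matching the fixed timeout they refresh.

    #include <time.h>
    #include <unistd.h>

    typedef enum { WAIT_BGSAVE_START, WAIT_BGSAVE_END, ONLINE } slave_state;
    typedef struct { int fd; slave_state state; } slave_t;

    void pingSlavesSketch(slave_t *slaves, int n, int ping_period_secs,
                          time_t *last_ping, time_t *last_newline) {
        time_t now = time(NULL);
        int do_ping = (now - *last_ping) >= ping_period_secs;
        int do_newline = (now - *last_newline) >= 1; /* fixed 1s period */
        for (int j = 0; j < n; j++) {
            if (slaves[j].state == ONLINE) {
                /* Online slaves receive real PINGs through the stream. */
                if (do_ping) write(slaves[j].fd, "PING\r\n", 6);
            } else {
                /* Slaves waiting for the RDB (or for the FULLRESYNC reply)
                 * only get newlines, which refresh their timeout without
                 * interfering with the protocol. */
                if (do_newline) write(slaves[j].fd, "\n", 1);
            }
        }
        if (do_ping) *last_ping = now;
        if (do_newline) *last_newline = now;
    }
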
* Remove slave state change handled by replicationSetupSlaveForFullResync(). (antirez, 2015-08-05; 1 file, -1/+0)
|
* Make sure we re-emit SELECT after each new slave full sync setup. (antirez, 2015-08-05; 3 files, -17/+26)
|     In previous commits we moved the FULLRESYNC to the moment we start
|     the BGSAVE, so that the offset we provide is the right one. However,
|     this also means that we need to re-emit the SELECT statement every
|     time a new slave starts to accumulate the changes. To obtain this
|     effect in a cleaner way, the function that sends the FULLRESYNC reply
|     was also given the more important role of doing this and of changing
|     the slave state. It was therefore renamed to
|     replicationSetupSlaveForFullResync() to better reflect what it does
|     now.
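
A hedged sketch of the shape such a function plausibly takes after this
change (field and helper names are illustrative, not the exact Redis
source): it records the offset, flips the slave state, forces SELECT
re-emission, and sends the +FULLRESYNC reply.

    #include <stdio.h>
    #include <unistd.h>

    #define C_OK  0
    #define C_ERR -1
    typedef enum { WAIT_BGSAVE_START, WAIT_BGSAVE_END, ONLINE } repl_state;
    typedef struct {
        int fd;
        repl_state replstate;
        long long psync_initial_offset;
    } slave_t;

    static int slaveseldb = -1; /* db last SELECTed on the replication stream */

    int setupSlaveForFullResyncSketch(slave_t *slave, long long offset) {
        char buf[128];
        slave->psync_initial_offset = offset; /* offset as of RDB creation */
        slave->replstate = WAIT_BGSAVE_END;   /* state change now lives here */
        slaveseldb = -1; /* forget the last SELECTed db, so the next command
                          * propagated to slaves is preceded by a fresh
                          * SELECT that the new slave will also receive */
        int len = snprintf(buf, sizeof(buf),
                           "+FULLRESYNC <runid> %lld\r\n", offset);
        if (write(slave->fd, buf, len) != len) return C_ERR;
        return C_OK;
    }
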
* Don't send SELECT to slaves in WAIT_BGSAVE_START state. (antirez, 2015-08-05; 1 file, -0/+1)
|
* syncCommand() comments improved. (antirez, 2015-08-05; 1 file, -1/+8)
|
* PSYNC initial offset fix. (antirez, 2015-08-04; 4 files, -17/+61)
|     This commit attempts to fix a bug involving PSYNC and diskless
|     replication (currently experimental) found by Yuval Inbar from Redis
|     Labs, and that was later found to have even more far reaching effects
|     (the bug also exists when diskstore is off). The gist of the bug is
|     that a Redis master replies with +FULLRESYNC to a PSYNC attempt that
|     fails and requires a full resynchronization. However, the baseline
|     offset sent along with FULLRESYNC was always the current master
|     replication offset. This is not ok, because there are many reasons
|     that may delay the RDB file creation, and the master offset we
|     communicate must be the one as of the time the RDB was created. For
|     example:
|     1) When the BGSAVE for replication is delayed, since one is already
|        in progress but is not good for replication.
|     2) When the BGSAVE is not needed, because we attach to one currently
|        ongoing.
|     3) When, because of diskless replication, the BGSAVE is delayed.
|     In all the above cases the PSYNC reply is wrong and the slave may
|     later reconnect claiming to need a wrong offset: this may cause data
|     corruption later.
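
A sketch of the invariant being restored, under simplified assumptions
(names are illustrative): the offset paired with +FULLRESYNC is sampled
when the RDB that will actually be served is created, or copied from the
ongoing BGSAVE, never read at PSYNC-reply time.

    /* Advances with every write command propagated to slaves. */
    long long master_repl_offset = 0;

    /* Captured at the moment a replication BGSAVE actually starts. */
    long long bgsave_repl_offset = -1;

    void startBgsaveForReplicationSketch(void) {
        bgsave_repl_offset = master_repl_offset; /* sample at RDB creation */
        /* ... fork and produce the RDB (or stream it disklessly) ... */
    }

    long long fullResyncOffsetSketch(int attach_to_ongoing_bgsave) {
        if (attach_to_ongoing_bgsave)
            return bgsave_repl_offset; /* case 2: reuse that save's offset */
        /* Cases 1 and 3: the BGSAVE may start later; the offset must be
         * sampled then, not when the PSYNC command was received. */
        startBgsaveForReplicationSketch();
        return bgsave_repl_offset;
    }
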
* Sentinel: add more commonly useful sections to INFO. (antirez, 2015-07-29; 1 file, -6/+15)
|     Debugging is hard without those when there are problems like the one
|     investigated in issue #2700.
* checkTcpBacklogSetting() now called in Sentinel mode too. (antirez, 2015-07-29; 1 file, -1/+1)
|
* Support for CLIENT KILL TYPE MASTER. (antirez, 2015-07-28; 1 file, -3/+1)
|
* CLIENT_MASTER introduced. (antirez, 2015-07-28; 4 files, -10/+20)
|
* Force slaves to resync after unsuccessful PSYNC. (antirez, 2015-07-28; 1 file, -6/+10)
|     Using chained replication, where C is a slave of B which is in turn
|     a slave of A, if B reconnects the replication link with A but
|     discovers it is no longer possible to PSYNC, the slaves of B must be
|     disconnected and PSYNC must not be allowed, since the new B dataset
|     may be completely different after the synchronization with the
|     master. Note that there are various semantic differences in the way
|     this is handled now compared to the past. In the past the semantics
|     were:
|     1. When a slave lost the connection with its master, the chained
|        slaves were disconnected ASAP. This is not needed, since after a
|        successful PSYNC with the master the slaves can continue and
|        don't need to resync in turn.
|     2. However, after a failed PSYNC the replication backlog was not
|        reset, so a slave was able to PSYNC successfully even if the
|        instance had done a full sync with its master and now contained
|        an entirely different data set.
|     Now chained slaves are instead not disconnected when the slave loses
|     the connection with its master, but only when it is forced to do a
|     full SYNC with its master. This means that if the slave having
|     chained slaves does a successful PSYNC, all its slaves can continue
|     without trouble. See issue #2694 for more details.
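
A compact sketch of the new rule (helper bodies are stubs, names
illustrative): sub-slaves survive a master link loss and are cut loose
only when a full resync is forced.

    static void disconnectChainedSlavesSketch(void) { /* drop our slaves */ }
    static void resetReplicationBacklogSketch(void)  { /* old stream invalid */ }

    /* Master link lost: keep chained slaves attached. If the later PSYNC
     * succeeds, their replication stream stays valid and nothing happens. */
    void onMasterLinkLostSketch(void) { }

    /* PSYNC failed and a full SYNC is required: our dataset may become
     * entirely different, so chained slaves must resync in turn, and the
     * old backlog must not be able to serve their PSYNC attempts. */
    void onForcedFullSyncSketch(void) {
        disconnectChainedSlavesSketch();
        resetReplicationBacklogSketch();
    }
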
* replicationHandleMasterDisconnection() belongs to replication.c. (antirez, 2015-07-28; 2 files, -14/+14)
|
* RDMF: Redis -> Server in adjustOpenFilesLimit(). (antirez, 2015-07-28; 1 file, -2/+2)
|
* Avoid magic "0" argument to prepareForShutdown(). (antirez, 2015-07-28; 2 files, -4/+5)
|     Backported from Disque.
* RDMF: dictRedisObjectDestructor -> dictObjectDestructor. (antirez, 2015-07-28; 1 file, -8/+8)
|
* Use mstime_t as return value of mstime(). (antirez, 2015-07-28; 1 file, -1/+1)
|
* RDMF: use representClusterNodeFlags() generic name. (antirez, 2015-07-27; 1 file, -4/+4)
|
* RDMF: more names updated. (antirez, 2015-07-27; 8 files, -271/+271)
|
* RDMF: More consistent define names. (antirez, 2015-07-27; 33 files, -1740/+1740)
|
* RDMF: REDIS_OK REDIS_ERR -> C_OK C_ERR. (antirez, 2015-07-26; 24 files, -512/+512)
|
* RDMF: redisAssert -> serverAssert. (antirez, 2015-07-26; 24 files, -209/+209)
|
* RDMF: OBJ_ macros for object related stuff. (antirez, 2015-07-26; 23 files, -578/+578)
|
* RDMF: use client instead of redisClient, like Disque. (antirez, 2015-07-26; 29 files, -619/+619)
|
* RDMF: redisLog -> serverLog. (antirez, 2015-07-26; 14 files, -395/+395)
|
* RDMF (Redis/Disque merge friendliness) refactoring WIP 1. (antirez, 2015-07-26; 37 files, -178/+178)
|
* SDS: Copyright updated further. (antirez, 2015-07-25; 2 files, -0/+2)
|
* SDS: changes to unify Redis SDS with antirez/sds repo. (antirez, 2015-07-25; 4 files, -27/+75)
|
* SDS: Copyright notice updated. (antirez, 2015-07-25; 2 files, -4/+6)
|
* SDS: sdsjoinsds() call ported from antirez/sds fork. (antirez, 2015-07-25; 2 files, -0/+13)
|
* SDS: avoid compiler warning in sdsIncrLen(). (antirez, 2015-07-24; 1 file, -0/+1)
|
* Merge branch 'sds' into unstable (antirez, 2015-07-24; 9 files, -159/+434)
|\
| * SDS: use type 8 if we are likely to append to the string. [sds] (antirez, 2015-07-23; 1 file, -0/+11)
| |     When empty strings are created, or when sdsMakeRoomFor() is called,
| |     we are likely in an appending pattern. Use at least type 8 SDS
| |     strings, since type 5 does not remember the free allocation size
| |     and requires calling sdsMakeRoomFor() for every new piece appended.
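
A simplified sketch of the policy (close in spirit to sds.c, but not a
copy): type 5 cannot remember free space, so strings likely to be appended
to start at type 8. Empty-string creation and sdsMakeRoomFor() are the two
"likely to append" call sites named above.

    #include <stddef.h>

    #define SDS_TYPE_5 0
    #define SDS_TYPE_8 1

    static char sdsReqTypeSketch(size_t initlen, int likely_to_append) {
        char type = (initlen < 32) ? SDS_TYPE_5 : SDS_TYPE_8;
        /* Type 5 stores only the length, never the free space: an
         * appending pattern would need sdsMakeRoomFor() for every piece,
         * so upgrade to type 8 up front. */
        if (type == SDS_TYPE_5 && likely_to_append) type = SDS_TYPE_8;
        return type;
    }
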
| * Fix SDS type 5 sdsIncrLen() bug and added test. (antirez, 2015-07-20; 1 file, -15/+27)
| |     Thanks to @oranagra for spotting this error.
| * Add sdshdr5 to DEBUG structsize. (antirez, 2015-07-16; 1 file, -0/+1)
| |
| * SDS: New sds type 5 implemented. (antirez, 2015-07-15; 2 files, -57/+83)
| |     This is an attempt to use the refcount feature of the sds.c fork
| |     provided in Pull Request #2509. A new type, SDS_TYPE_5, is
| |     introduced, having a one-byte header with just the string length
| |     and no information about the available additional space at the end
| |     of the string (this means that sdsMakeRoomFor() will be required
| |     each time we want to append something, since the string will always
| |     report having 0 bytes available). More work is needed to avoid
| |     having common SDS functions pay the cost of this type: for example,
| |     both sdscatprintf() and sdscatfmt() should try to upgrade to
| |     SDS_TYPE_8 ASAP when appending chars.
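
A sketch of the one-byte header just described (the 3-bit/5-bit split is
an assumption consistent with a 0..31 length; macro names illustrative):

    #include <stddef.h>

    #define SDS_TYPE_5    0
    #define SDS_TYPE_BITS 3

    struct __attribute__((__packed__)) sdshdr5 {
        unsigned char flags; /* 3 lsb: type tag, 5 msb: length (0..31) */
        char buf[];          /* string bytes follow the single header byte */
    };

    static inline size_t sdshdr5LenSketch(const struct sdshdr5 *sh) {
        return sh->flags >> SDS_TYPE_BITS;
    }
    /* No 'free' field exists, hence every append must go through
     * sdsMakeRoomFor(), as noted in the message above. */
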
| * Fix redis-benchmark sds binding. (antirez, 2015-07-14; 2 files, -2/+2)
| |     Like redis-cli, redis-benchmark now needs to use the hiredis copy
| |     of sds, since it is different from the memory-optimized fork of sds
| |     used by Redis.
| * Fix DEBUG structsize output. (antirez, 2015-07-14; 1 file, -7/+7)
| |
| * sds size classes - memory optimization (Oran Agra, 2015-07-14; 8 files, -139/+364)
| |
* | Merge pull request #2636 from badboy/cluster-lock-fix (Salvatore Sanfilippo, 2015-07-17; 2 files, -1/+7)
|\ \
| | |     Cluster lock fix
| * | Don't include sysctl header (Jan-Erik Rediger, 2015-06-24; 1 file, -1/+0)
| | |     It's not needed (anymore) and is not available on Solaris.
| * | Do not attempt to lock on Solaris (Jan-Erik Rediger, 2015-06-24; 1 file, -0/+7)
| | |
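
The portability pattern involved, as a hedged sketch (assuming
flock()-style locking of a config file, which is what the cluster lock fix
guards): on Solaris the locking step is skipped rather than attempted.

    #include <sys/file.h>

    /* Take an exclusive, non-blocking lock on a config file descriptor;
     * on Solaris no lock is attempted at all. */
    int lockConfigFileSketch(int fd) {
    #if defined(__sun)
        (void)fd;
        return 0; /* pretend success: Solaris path skips locking */
    #else
        return flock(fd, LOCK_EX | LOCK_NB);
    #endif
    }
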
* | | Merge pull request #2644 from MOON-CLJ/command_info_fix (Salvatore Sanfilippo, 2015-07-17; 1 file, -1/+1)
|\ \ \
| | | |     pfcount support multi keys
| * | | pfcount support multi keys (MOON_CLJ, 2015-06-26; 1 file, -1/+1)
| |/ /
* | | bugfix: errno might change before logging (Yongyue Sun, 2015-07-17; 2 files, -2/+2)
| | |     Signed-off-by: Yongyue Sun <abioy.sun@gmail.com>
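
The bug class being fixed, as a minimal sketch: errno must be captured
immediately after the failing call, because anything executed while
preparing the log line may overwrite it.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    void logSyscallFailureSketch(const char *what) {
        int saved = errno; /* capture first: calls below may clobber it */
        fprintf(stderr, "%s failed: %s\n", what, strerror(saved));
    }
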
* | | Fix: aof_delayed_fsync is not reset (Tom Kiemes, 2015-07-17; 1 file, -0/+1)
| | |     aof_delayed_fsync was not set to 0 when calling CONFIG RESETSTAT.
* | | Merge pull request #2676 from july2993/unstable (Salvatore Sanfilippo, 2015-07-17; 2 files, -4/+4)
|\ \ \
| | | |     config tcp-keepalive should be numerical field not bool
| * | | config tcp-keepalive should be numerical field not bool (Jiahao Huang, 2015-07-16; 2 files, -4/+4)
| | |/
| |/|
* | | Client timeout handling improved. (antirez, 2015-07-16; 1 file, -12/+20)
| | |     The previous attempt to process each client at least once every
| | |     ten seconds was not a good idea, because:
| | |     1. Usually, thanks to the past minimum of 50 iterations, you get
| | |        a much better processing period most of the time.
| | |     2. However, when there are many clients and a normal setting for
| | |        server.hz, the edge case is triggered, and waiting 10 seconds
| | |        for a BLPOP that asked for 1 second is not ok.
| | |     3. Moreover, because of the high min-iterations limit of 50, when
| | |        HZ was set to a high value the actual behavior was to process
| | |        a lot of clients per second.
| | |     Also, the function checking for timeouts called gettimeofday() at
| | |     each iteration, which can be costly.
| | |     The new implementation tries to process each client once per
| | |     second, gets the current time as an argument, and does not
| | |     attempt to process more than 5 clients per iteration if not
| | |     needed. So now:
| | |     1. The CPU usage of an idle Redis process is the same or better.
| | |     2. The CPU usage of a busy Redis process is the same or better.
| | |     3. However, a non-trivial amount of work may be performed per
| | |        iteration when there are many, many clients: in this case the
| | |        user may want to raise the "HZ" value if needed. Btw, with
| | |        4000 clients it was still not possible to notice any actual
| | |        latency created by processing 400 clients per second, since
| | |        the work performed for each client is pretty small.
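
A sketch of the per-call budget implied above (the constant name is an
assumption; the floor of 5 comes from the description): visit about
numclients/hz clients per cron call, so each client is processed roughly
once per second.

    #define CLIENTS_CRON_MIN_ITERATIONS 5 /* assumed small floor */

    int clientsCronIterationsSketch(int numclients, int hz) {
        int iterations = numclients / hz; /* cron runs hz times/second */
        if (iterations < CLIENTS_CRON_MIN_ITERATIONS)
            iterations = (numclients < CLIENTS_CRON_MIN_ITERATIONS) ?
                         numclients : CLIENTS_CRON_MIN_ITERATIONS;
        return iterations;
    }

For example, with 4000 clients and the default hz of 10 this yields 400
clients per call.
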
* | | Clarify a comment in clientsCron(). (antirez, 2015-07-16; 1 file, -5/+5)
|/ /