Commit message | Author | Age | Files | Lines
* assert.h replaced with redisassert.h when appropriate.issue-1240antirez2013-08-195-5/+5
| | | | | Also a warning was suppressed by including unistd.h in redisassert.h (needed for _exit()).
* Added redisassert.h as drop in replacement for assert.h.antirez2013-08-191-0/+45
| | | | | By using redisassert.h version of assert() you get stack traces in the log instead of a process disappearing on assertions.
* dictFingerprint() fingerprinting made more robust.antirez2013-08-191-9/+29
| | | | | | | | | | | | | | | | The previous hashing used the trivial algorithm of xoring the integers together. This is not optimal as it is very likely that different hash table setups will hash the same; for instance a hash table at the start of the rehashing process and at the end will have the same fingerprint. Now we hash the N integers in a smarter way, by summing every integer to the previous hash and hashing the resulting integer again (see the code for further details). This way a collision is a lot less likely. Moreover this way of hashing explicitly protects against the same set of integers in a different order hashing to the same number. This commit is related to issue #1240.
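The scheme described above can be sketched as follows; the mixing function here is an assumed Thomas Wang-style 64-bit integer hash, not necessarily the exact one used in dict.c:

```c
#include <stdint.h>

/* Assumed 64-bit integer mixer (Thomas Wang style); the exact hash
 * used by dict.c may differ. */
static uint64_t mix64(uint64_t key) {
    key = (~key) + (key << 21);
    key = key ^ (key >> 24);
    key = (key + (key << 3)) + (key << 8);
    key = key ^ (key >> 14);
    key = (key + (key << 2)) + (key << 4);
    key = key ^ (key >> 28);
    key = key + (key << 31);
    return key;
}

/* Sum each integer into the running hash, then hash again: a plain
 * xor would let reorderings of the same integers collide, while this
 * is order sensitive. */
uint64_t fingerprint_ints(const uint64_t *v, int n) {
    uint64_t hash = 0;
    for (int i = 0; i < n; i++) {
        hash += v[i];
        hash = mix64(hash);
    }
    return hash;
}
```

Note that xoring `{1, 2, 3}` and `{3, 2, 1}` yields the same value, while the sum-then-rehash scheme distinguishes them.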
* Fix comments for correctness in zunionInterGenericCommand().antirez2013-08-161-3/+5
| | | | Related to issue #1240.
* Properly init/release iterators in zunionInterGenericCommand().antirez2013-08-161-19/+18
| | | | | | | | | | | | | This commit does mainly two things: 1) It fixes zunionInterGenericCommand() by removing mass-initialization of all the iterators used, so that we don't violate the unsafe iterator API of dictionaries. This fixes issue #1240. 2) Since the zui* APIs required the iterator to be initialized in the zsetopsrc structure even in order to use non-iterator related APIs, this commit removes this strict requirement by accessing objects directly via the op->subject->ptr pointer we have to the object.
* dict.c iterator API misuse protection.antirez2013-08-162-4/+31
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | dict.c allows the user to create unsafe iterators, that is, iterators that do not touch the dictionary data structure in any way, preventing copy on write, but at the same time are limited in their usage. The limitation is that when iterating with an unsafe iterator, no call to other dictionary functions must be made inside the iteration loop, otherwise the dictionary may be incrementally rehashed, resulting in missing elements in the set of elements returned by the iterator. However after introducing this kind of iterator a number of bugs were found due to misuses of the API, and we are still finding bugs about this issue. The bugs are not trivial to track because the effect is just missing elements during the iteration. This commit introduces auto-detection of the API misuse. The idea is that an unsafe iterator has a contract: from initialization to the release of the iterator the dictionary should not change. So we take a fingerprint of the dictionary state, xoring a few important dict properties, when the unsafe iterator is initialized. We later check, when the iterator is released, whether the fingerprint is still the same. If it is not, we found a misuse of the iterator, as not-allowed API calls changed the internal state of the dictionary. This code was checked against a real bug, issue #1240. This is what Redis prints (aborting) when a misuse is detected: Assertion failed: (iter->fingerprint == dictFingerprint(iter->d)), function dictReleaseIterator, file dict.c, line 587.
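A toy model of the contract described above (the names and the trivial fingerprint are illustrative, not Redis's actual code): the iterator snapshots a fingerprint of the dict state at creation, and release checks the state did not change.

```c
#include <stdint.h>

/* Stand-in for the dictionary: only the fields that would be
 * fingerprinted. */
typedef struct {
    uint64_t size, used;
} toydict;

static uint64_t toy_fingerprint(const toydict *d) {
    return d->size * 31 + d->used;  /* real code mixes more fields */
}

typedef struct {
    const toydict *d;
    uint64_t fingerprint;           /* snapshot taken at init */
} toyiter;

toyiter toy_iter_init(const toydict *d) {
    toyiter it = { d, toy_fingerprint(d) };
    return it;
}

/* Returns 0 on misuse: some call modified the dict mid-iteration. */
int toy_iter_release_ok(const toyiter *it) {
    return it->fingerprint == toy_fingerprint(it->d);
}
```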
* Revert "Temporary commit: Track dict.c unsafe iterator misuse."antirez2013-08-162-16/+4
| | | | This reverts commit a9b3c3eb16e85384087c97f94e746042458563b7.
* Temporary commit: Track dict.c unsafe iterator misuse.antirez2013-08-162-4/+16
|
* Use precomputed objects for bulk and mbulk prefixes.antirez2013-08-121-2/+9
|
* replicationFeedSlaves() func name typo: feedReplicationBacklogWithObject -> ↵antirez2013-08-121-1/+1
| | | | feedReplicationBacklog.
* replicationFeedSlave() reworked for correctness and speed.antirez2013-08-121-98/+42
| | | | | | | | | | | | | | The previous code using a static buffer as an optimization was lame: 1) Premature optimization: it was actually *slower* than the naive code because it resulted in the creation / destruction of the object encapsulating the output buffer. 2) The code was very hard to test, since specific tests were needed for command lines exceeding the size of the static buffer. 3) As a result of "2" the code was bugged, as the existing tests were not able to stress specific corner cases. It was replaced with easy-to-understand code that is safer and faster.
* Fix a PSYNC bug caused by a variable name typo.antirez2013-08-121-1/+1
|
* Fix sdsempty() prototype in sds.h.antirez2013-08-122-2/+2
|
* Replication: better way to send a preamble before RDB payload.antirez2013-08-123-15/+26
| | | | | | | | | | | | | | | During the replication full resynchronization process, the RDB file is transferred from the master to the slave. However there is a short preamble to send, which is currently just the bulk payload length of the file in the usual Redis form $..length..<CR><LF>. This preamble used to be sent with a direct write call, assuming that there was always room in the socket output buffer to hold the few bytes needed; however this does not scale in case we'll need to send more stuff, and is not very robust code in general. This commit introduces a more general mechanism to send a preamble up to 2GB in size (the max length of an sds string) in a non blocking way.
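A minimal sketch of the non-blocking idea (the struct, field names, and helper are assumptions, not the actual replication code): the preamble is kept as a counted buffer in the per-slave state and drained incrementally by the write handler before any RDB bytes are sent, instead of one blocking write().

```c
#include <stddef.h>

typedef struct {
    const char *preamble;   /* e.g. "$1024\r\n"; an sds in the real code */
    size_t preamble_len;
    size_t preamble_sent;
} slavestate;

/* Consume up to 'writable' bytes of the preamble; returns how many
 * preamble bytes are still pending. A real implementation would pass
 * the bytes to write() and handle short writes the same way. */
size_t send_preamble_step(slavestate *s, size_t writable) {
    size_t left = s->preamble_len - s->preamble_sent;
    size_t n = left < writable ? left : writable;
    s->preamble_sent += n;
    return s->preamble_len - s->preamble_sent;
}
```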
* redis-benchmark: changes to random arguments substitution.antirez2013-08-081-28/+80
| | | | | | | | | | | | | | | | | | | | | Before this commit redis-benchmark supported random arguments in the form :rand:000000000000. In every string of that form, the zeros were replaced with a random number of 12 digits at every command invocation. However this was far from perfect as it did not allow generating simply random numbers as arguments; there was always the :rand: prefix. Now instead every argument in the form __rand_int__ is replaced with a 12-digit number. Note that "__rand_int__" is 12 characters itself. In order to implement the new semantics it was necessary to change a few things in the internals of redis-benchmark, as new clients are created by cloning old clients, so without a stable prefix such as ":rand:" the old way of cloning was no longer able to understand, from the old command line, the positions of the random strings to substitute. Now instead a client structure is passed as a reference for cloning, so that we can directly clone the offsets inside the command line.
* redis-benchmark: replace snprintf()+memcpy with faster code.antirez2013-08-081-5/+10
| | | | | This change was profiler-driven, but the actual effect is hard to measure in real-world redis benchmark runs.
* Merge pull request #1234 from badboy/patch-2Salvatore Sanfilippo2013-08-071-1/+1
|\ | | | | Little typo
| * Little typoJan-Erik Rediger2013-08-071-1/+1
|/
* redis-benchmark: fix memory leak introduced by 346256fantirez2013-08-071-0/+1
|
* redis-benchmark: max pipeline length hardcoded limit removed.antirez2013-08-071-7/+15
|
* redis-benchmark: fix db selection when :rand: feature is used.antirez2013-08-061-0/+6
|
* redis-benchmark: ability to SELECT a specific db number.antirez2013-08-061-3/+43
|
* Add per-db average TTL information in INFO output.expirealgoantirez2013-08-062-4/+30
| | | | | | | | | | | | Example: db0:keys=221913,expires=221913,avg_ttl=655 The algorithm uses a running average with only two samples (current and previous). Keys found to be expired are counted as TTL zero even if the actual TTL is negative. The TTL is reported in milliseconds.
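The two-sample running average could look like the following sketch; the exact weighting and the use of 0 as the "no previous sample" sentinel are assumptions, not necessarily the code in activeExpireCycle():

```c
/* Already-expired keys are clamped to TTL zero before averaging,
 * matching the description above. */
long long update_avg_ttl(long long avg_ttl, long long ttl) {
    if (ttl < 0) ttl = 0;            /* expired key: count as TTL 0 */
    if (avg_ttl == 0) return ttl;    /* first sample (assumed sentinel) */
    return (avg_ttl + ttl) / 2;      /* average with the previous value */
}
```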
* activeExpireCycle(): fix about fast cycle early start.antirez2013-08-061-1/+1
| | | | | | | | | | | | | | We don't want to repeat a fast cycle too soon: the previous code was broken, we need to wait two times the period *since the start* of the previous cycle so that cycle starts are evenly spaced:
|
|   .-> start                   .-> second start
|   |                           |
|   +-------------+-------------+--------------+
|   | first cycle |    pause    | second cycle |
|   +-------------+-------------+--------------+
|
| The second and the first start must be PERIOD*2 microseconds apart, hence the *2 in the new code.
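The spacing rule amounts to a check like this (names and units are illustrative, not the actual variables in expire.c):

```c
/* A new fast cycle may start only when two periods have elapsed since
 * the *start* (not the end) of the previous one, so cycle starts stay
 * evenly spaced however long a cycle ran. */
int can_start_fast_cycle(long long now_us, long long prev_start_us,
                         long long period_us) {
    return now_us - prev_start_us >= period_us * 2;
}
```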
* Some activeExpireCycle() refactoring.antirez2013-08-062-21/+32
|
* Remove dead code and fix comments for new expire code.antirez2013-08-061-56/+10
|
* Draft #2 for key collection algo: more improvements.antirez2013-08-051-2/+14
| | | | | | | This commit makes the fast collection cycle time configurable, and at the same time does not allow a new fast collection cycle to run again for the same amount of time as the max duration of the fast collection cycle.
* Draft #1 of a new expired keys collection algorithm.antirez2013-08-052-21/+100
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The main idea here is that when we are no longer able to expire keys at the rate they are created, we can't block more in the normal expire cycle as this would result in too big latency spikes. For this reason the commit introduces a "fast" expire cycle that does not run for more than 1 millisecond but is called in the beforeSleep() hook of the event loop, so much more often, and with a frequency bound to the frequency of executed commands. The fast expire cycle is only called when the standard expiration algorithm runs out of time, that is, it consumed more than REDIS_EXPIRELOOKUPS_TIME_PERC of CPU in a given cycle without being able to bring the number of already expired but not yet collected keys below 25% of the number of keys. You can test this commit with different loads, but a simple way is to use the following: Extreme load with pipelining: redis-benchmark -r 100000000 -n 100000000 \ -P 32 set ele:rand:000000000000 foo ex 2 Remove the -P32 in order to avoid pipelining for a more real-world load. In another terminal tab you can monitor the Redis behavior with: redis-cli -i 0.1 -r -1 info keyspace and redis-cli --latency-history Note: this commit makes Redis print a lot of debug messages; it is not a good idea to use it in production.
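The trigger condition described above can be sketched like this; the helper and its inputs are assumptions made for illustration, not the actual code:

```c
/* The 1 ms fast cycle is armed only when the standard cycle exhausted
 * its CPU budget while more than 25% of the sampled keys were expired
 * but not yet collected. */
int should_run_fast_cycle(int std_cycle_timed_out,
                          long long expired_not_collected,
                          long long sampled_keys) {
    return std_cycle_timed_out &&
           expired_not_collected * 4 > sampled_keys;  /* > 25% */
}
```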
* Test: regression test for issue #1221.antirez2013-07-291-0/+38
|
* Fix replicationFeedSlaves() off-by-one bug.antirez2013-07-281-2/+2
| | | | This fixes issue #1221.
* Remove dead variable bothsds from object.c.antirez2013-07-281-3/+0
| | | | | | | | | | Thanks to @run and @badboy for spotting this. Trivia: clang was not able to give me a warning about that when compiling. This closes #1024 and #1207; I'm committing the change myself as the pull requests no longer apply cleanly after other changes to the same function.
* Use latest sds.c in the hiredis library under deps.antirez2013-07-254-120/+422
|
* Ignore sdsrange return value.antirez2013-07-241-2/+2
|
* sdsrange() does not need to return a value.antirez2013-07-245-11/+10
| | | | | | Actually the string is modified in-place and a reallocation is never needed, so there is no need to return the new sds string pointer as the return value of the function, which is now just "void".
* Inline protocol improved to accept quoted strings.antirez2013-07-241-2/+4
|
* Every function inside sds.c is now commented.antirez2013-07-231-7/+145
|
* getStringObjectSdsUsedMemory() function added.antirez2013-07-231-19/+17
| | | | | | | | | | Now that EMBSTR encoding exists we calculate the amount of memory used by the SDS part of a Redis String object in two different ways: 1) For raw string object, the size of the allocation is considered. 2) For embstr objects, the length of the string itself is used. The new function takes care of this logic.
* Test: regression test for issue #1208.antirez2013-07-221-0/+7
|
* Fix setDeferredMultiBulkLength() c->reply_bytes handling with EMBSTRantirez2013-07-221-1/+4
| | | | | | | | This function missed proper handling of reply_bytes when gluing to the previous object was used. The issue was introduced with the EMBSTR new string object encoding. This fixes issue #1208.
* Fixed a possible bug in client->reply_bytes computation.antirez2013-07-221-0/+1
|
* Fix replicationFeedSlaves() to use sdsEncodedObject() macro.antirez2013-07-221-2/+3
|
* Introduction of a new string encoding: EMBSTRantirez2013-07-2217-71/+157
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Previously two string encodings were used for string objects: 1) REDIS_ENCODING_RAW: a string object with obj->ptr pointing to an sds string. 2) REDIS_ENCODING_INT: a string object where the obj->ptr void pointer is cast to a long. This commit introduces an experimental new encoding called REDIS_ENCODING_EMBSTR that implements an object represented by an sds string that is not modifiable but allocated in the same memory chunk as the robj structure itself. The chunk looks like the following:
|
|   +--------------+-----------+------------+--------+----+
|   | robj data... | robj->ptr | sds header | string | \0 |
|   +--------------+-----+-----+------------+--------+----+
|                        |                  ^
|                        +------------------+
|
| The robj->ptr points to the contiguous sds string data, so the object can be manipulated with the same functions used to manipulate plain string objects; however we need just one malloc and one free in order to allocate or release this kind of object. Moreover it has better cache locality. This new allocation strategy should benefit both memory usage and performance. A performance gain between 60 and 70% was observed during micro-benchmarks; however there is more work to do to evaluate the performance impact and the memory usage behavior.
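A toy version of the single-allocation layout (illustrative structures, not the real robj/sds ones): header and string data live in one chunk, with the pointer aimed just past the header, so one malloc/free covers the whole object.

```c
#include <stdlib.h>
#include <string.h>

typedef struct {
    int type;       /* stand-in for the robj header fields */
    char *ptr;      /* points into the same allocation */
} toyobj;

toyobj *make_embstr(const char *s, size_t len) {
    /* One allocation for header + string + terminator. */
    toyobj *o = malloc(sizeof(toyobj) + len + 1);
    if (o == NULL) return NULL;
    o->type = 0;
    o->ptr = (char *)(o + 1);   /* string data is contiguous */
    memcpy(o->ptr, s, len);
    o->ptr[len] = '\0';
    return o;                   /* free(o) releases header and string */
}
```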
* addReplyDouble(): format infinite in a libc agnostic way.antirez2013-07-171-3/+10
| | | | | | | | There are systems that, when printing +/- infinite with printf-family functions, will not use the usual "inf" / "-inf" but different strings. Handle that explicitly. Fixes issue #930.
* Fixed typo in rio.h, simgle -> single.antirez2013-07-161-1/+1
|
* Chunked loading of RDB to prevent Redis from stalling when reading very large keys.yoav2013-07-165-15/+45
|
* Make sure that ZADD can accept the full range of double values.antirez2013-07-161-2/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | This fixes issue #1194, which contains many details. In short, it was possible for ZADD to reject score values that were nevertheless possible to obtain with multiple calls to ZINCRBY, as in the following example: redis 127.0.0.1:6379> zadd k 2.5e-308 m (integer) 1 redis 127.0.0.1:6379> zincrby k -2.4e-308 m "9.9999999999999694e-310" redis 127.0.0.1:6379> zscore k m "9.9999999999999694e-310" redis 127.0.0.1:6379> zadd k 9.9999999999999694e-310 m1 (error) ERR value is not a valid float The problem was due to strtod() returning ERANGE in the following case specified by POSIX: "If the correct value would cause an underflow, a value whose magnitude is no greater than the smallest normalized positive number in the return type shall be returned and errno set to [ERANGE].". Now the returned value is accepted even when ERANGE is returned, as long as the return value of the function is not negative or positive HUGE_VAL or zero.
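The accepting logic can be sketched with a hypothetical helper (this is not the actual Redis parsing function): ERANGE from strtod() is only treated as an error when the result is +/-HUGE_VAL or zero, so POSIX underflow results (subnormal doubles) are accepted.

```c
#include <stdlib.h>
#include <errno.h>
#include <math.h>

int parse_score(const char *s, double *out) {
    char *end;
    double v;
    errno = 0;
    v = strtod(s, &end);
    if (end == s || *end != '\0') return -1;          /* not a number */
    if (errno == ERANGE &&
        (v == HUGE_VAL || v == -HUGE_VAL || v == 0))  /* true over/underflow */
        return -1;
    *out = v;                                         /* subnormals pass */
    return 0;
}
```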
* Merge pull request #1193 from tnm/patch-1Salvatore Sanfilippo2013-07-121-1/+1
|\ | | | | Make sure the log standardizes on 'timeout'
| * Make sure the log standardizes on 'timeout'Ted Nyman2013-07-121-1/+1
|/
* Use the environment locale for strcoll() collation.antirez2013-07-121-0/+2
|
* SORT ALPHA: use collation instead of binary comparison.antirez2013-07-122-3/+15
| | | | | | | | | Note that we only do it when STORE is not used; otherwise we want an absolutely locale-independent and binary-safe sort in order to ensure AOF / replication consistency. This is probably unexpected behavior violating the least-surprise rule, but there is currently no other simple / good alternative.