Commit message | Author | Date | Files | Lines
...
* HyperLogLog: apply bias correction using a polynomial. (antirez, 2014-04-16, 1 file, -11/+18)
  Better results can be achieved by compensating for the bias of the raw approximation just above 2.5m (when LINEARCOUNTING is no longer used) with a polynomial that approximates the bias at a given cardinality. The curve was found using http://www.xuru.org/rt/PR.asp, which performs polynomial regression given a set of values.
* HLLMERGE implemented. (antirez, 2014-04-16, 3 files, -1/+67)
  Merge N HLL data structures by selecting the max value for every M[i] register among the set of HLLs.
* HLLCOUNT is technically a write command. (antirez, 2014-04-16, 1 file, -0/+6)
  When we update the cached value, we need to propagate the command and signal the key as modified for WATCH.
* HLLADD: propagate write when only the variable name is given. (antirez, 2014-04-16, 1 file, -0/+1)
  In the form HLLADD myhll, no element is provided, so if the 'myhll' variable does not exist the result is just the creation of an empty HLL structure, and no update is performed on the registers. In this case the DB should still be set dirty and the command propagated.
* hll-gnuplot-graph.rb: use |error| when filter is :max. (antirez, 2014-04-16, 1 file, -0/+1)
* Ignore txt files inside utils/hyperloglog. (antirez, 2014-04-16, 1 file, -0/+1)
  These files are generated when plotting graphs with gnuplot.
* HyperLogLog: use LINEARCOUNTING up to 3m. (antirez, 2014-04-16, 1 file, -3/+11)
  The original HyperLogLog paper suggests using LINEARCOUNTING for cardinalities < 2.5m; however, for P=14 the median/max error curves show that a threshold of 3m is the best pick for m = 16384.
* hll-gnuplot-graph.rb added to plot HyperLogLog error graphs. (antirez, 2014-04-16, 1 file, -0/+68)
* HyperLogLog approximated cardinality caching. (antirez, 2014-04-16, 1 file, -3/+51)
  The more elements we add to a HyperLogLog counter, the smaller the probability that we actually update some register. From this observation it is easy to see how a previously computed cardinality can be cached and reused to serve HLLCOUNT queries as long as no register is updated in the data structure.
  This commit does exactly that, using just 8 additional bytes in the data structure to store a 64-bit unsigned integer holding the cached cardinality. When the most significant bit of the integer is set, the cached value is no longer usable, since at least one register was modified, and it must be recomputed at the next HLLCOUNT call. The value is always stored in little endian format regardless of the actual CPU endianness.
* String value unsharing refactored into a proper function. (antirez, 2014-04-16, 6 files, -31/+44)
  All the Redis functions that destructively modify the string value of a key (APPEND, SETBIT, SETRANGE, ...) need to make the object unshared (if refcount > 1) and encoded in raw format (if the encoding is not already REDIS_ENCODING_RAW). This logic was cut & pasted in multiple places of the code. This commit puts it into a small function called dbUnshareStringValue().
* Use an endian-neutral hash function for HyperLogLog. (antirez, 2014-04-16, 1 file, -14/+28)
  We need to be sure that a dataset saved in one Redis instance can be reloaded on a different architecture, continuing to count in the same HyperLogLog structure. So 32 and 64 bit, little or big endian, must all be guaranteed to output the same hash for the same element.
* HyperLogLog internal representation modified. (antirez, 2014-04-16, 1 file, -57/+87)
  The new representation is more obvious: it starts from the LSB of the first byte, uses bits going toward the MSB, and passes to the next byte as needed. There was also a subtle error: the first two bits were unused and everything was shifted right by two bits, which only worked because the code requires an extra byte at the end. During the rewrite the code was made safer, avoiding undefined behavior due to shifting a uint8_t by more than 8 bits.
* Remove a few useless operations from the hllCount() fast path. (antirez, 2014-04-16, 1 file, -4/+4)
* HLLCOUNT 3x faster by taking a fast path for default params. (antirez, 2014-04-16, 1 file, -11/+47)
* Use processor base types in HLL_(GET|SET)_REGISTER. (antirez, 2014-04-16, 1 file, -8/+8)
  This speeds up the macros by a noticeable factor.
* HyperLogLog: use a precomputed table for 2^(-M[i]). (antirez, 2014-04-16, 1 file, -1/+14)
* hll-err.rb: speedup using pipelining. (antirez, 2014-04-16, 1 file, -10/+16)
* hll-err.rb added to test the error rate of Redis HyperLogLog. (antirez, 2014-04-16, 1 file, -0/+21)
* HyperLogLog algorithm fixed in two ways. (antirez, 2014-04-16, 1 file, -3/+8)
  There was an error in the computation of 2^register, and the sequence of zeroes computed after hashing did not include the "1".
* HLLCOUNT implemented. (antirez, 2014-04-16, 3 files, -2/+68)
* HLLADD implemented. (antirez, 2014-04-16, 3 files, -4/+94)
* hllAdd() low level HyperLogLog "add" implemented. (antirez, 2014-04-16, 1 file, -0/+45)
* HyperLogLog: redefine constants using "P". (antirez, 2014-04-16, 1 file, -2/+3)
* HLL_SET_REGISTER fixed. (antirez, 2014-04-16, 1 file, -13/+18)
  There was an error in the first version of the macro. Now the HLLSELFTEST test reports success.
* Use REDIS_HLL_REGISTER_MAX when possible. (antirez, 2014-04-16, 1 file, -1/+1)
* HLL_(SET|GET)_REGISTER types fixed. (antirez, 2014-04-16, 1 file, -7/+9)
* HLLSELFTEST command implemented. (antirez, 2014-04-16, 4 files, -8/+53)
  Testing the set/get macros for the bitfield array of counters from the Redis Tcl suite is hard, so a specialized command was developed to test the internals.
* HyperLogLog: initial sketch of registers access. (antirez, 2014-04-16, 1 file, -0/+175)
* Document TTL changes in the 2.6 -> 2.8 migration doc. (antirez, 2014-03-27, 1 file, -0/+3)
* Redis 2.8.8. (tag: 2.8.8) (antirez, 2014-03-25, 2 files, -1/+40)
* Test: do not complain when "leaks" can't run because the process died. (antirez, 2014-03-25, 1 file, -0/+4)
* adjustOpenFilesLimit() refactoring. (antirez, 2014-03-25, 1 file, -15/+17)
  In this commit:
  * Decrement steps are semantically differentiated from the reserved FDs. Previously both values were 32 but the meaning was different.
  * Make it clear that we save the setrlimit errno.
  * Don't explicitly handle wrapping of 'f', but prevent it from happening.
  * Add comments to make the function flow more readable.
  This integrates PR #1630.
* Fix potentially incorrect errno usage. (Matt Stancliff, 2014-03-25, 1 file, -1/+2)
  errno may be reset by the previous call to redisLog, so capture the original value for proper error reporting.
* Add REDIS_MIN_RESERVED_FDS define for open fds. (Matt Stancliff, 2014-03-25, 2 files, -7/+8)
  Also update the original REDIS_EVENTLOOP_FDSET_INCR to include REDIS_MIN_RESERVED_FDS. REDIS_EVENTLOOP_FDSET_INCR exists to make sure more than (maxclients+RESERVED) entries are allocated, but we can only guarantee that if we include the current value of REDIS_MIN_RESERVED_FDS as a minimum for the INCR size.
* Fix infinite loop on startup if ulimit too low. (Matt Stancliff, 2014-03-25, 1 file, -0/+11)
  Fun fact: rlim_t is an unsigned long long on all platforms. Continually subtracting from a rlim_t makes it get smaller and smaller until it wraps, then you're up to 2^64-1. This was causing an infinite loop on Redis startup if your ulimit was extremely (almost comically) low.
  The case of (f > oldlimit) would never be met in a case like:
      f = 150
      while (f > 20) f -= 128
  Since f is unsigned, it can't go negative and would take on values of:
      Iteration 1: 150 - 128 => 22
      Iteration 2: 22 - 128 => 18446744073709551510
      Iterations 3-∞: ...
  To catch the wraparound, we use the previous value of f stored in limit.rlim_cur. If we subtract from f and get a larger number than it had previously, we print an error and exit, since we don't have enough file descriptors to help the user at this point.
  Thanks to @bs3g for the inspiration to fix this problem. Patches existed from @bs3g at antirez#1227, but I needed to repair a few other parts of Redis simultaneously, so I didn't get a chance to use them.
* Improve error handling around setting ulimits. (Matt Stancliff, 2014-03-25, 1 file, -4/+22)
  The log messages about open file limits have always been slightly opaque and confusing. Here's an attempt to fix their wording, detail, and meaning. Users will have a better understanding of how to fix very common problems with these reworded messages.
  Also, we handle a new error case when maxclients becomes less than one, essentially rendering the server unusable. We now exit on startup instead of leaving the user with a server unable to handle any connections.
  This fixes antirez#356 as well.
* Replace magic 32 with REDIS_EVENTLOOP_FDSET_INCR. (Matt Stancliff, 2014-03-25, 1 file, -7/+7)
  32 was the additional number of file descriptors Redis would reserve when managing a too-low ulimit. The number 32 appeared statically in too many places, so now we use a macro instead that looks more appropriate.
  When Redis sets up the server event loop, it uses server.maxclients+REDIS_EVENTLOOP_FDSET_INCR, so when reserving file descriptors it makes sense to reserve at least REDIS_EVENTLOOP_FDSET_INCR FDs instead of only 32. Currently, REDIS_EVENTLOOP_FDSET_INCR is set to 128 in redis.h.
  Also, I replaced the static 128 in the "while f < old" loop with REDIS_EVENTLOOP_FDSET_INCR as well, which results in no change since it was already 128.
  Impact: users now need at least maxclients+128 as their open file limit instead of maxclients+32 to obtain the actual "maxclients" number of clients. Redis will carve the extra REDIS_EVENTLOOP_FDSET_INCR file descriptors it needs out of the "maxclients" range instead of failing to start (unless the local ulimit -n is too low to accommodate the request).
* Fix maxclients error handling. (Matt Stancliff, 2014-03-25, 2 files, -2/+2)
  Everywhere in the Redis code base, maxclients is treated as an int with (int)maxclients or maxclients = atoi(source), so let's make maxclients an int.
  This fixes a bug where someone could specify a negative maxclients on startup and it would work (as well as set maxclients very high) because:
      unsigned int maxclients;
      char *update = "-300";
      maxclients = atoi(update);
      if (maxclients < 1) goto fail;
  But (maxclients < 1) can only catch the case when maxclients is exactly 0. maxclients happily sets itself to -300, which isn't -300, but rather 4294966996, which isn't < 1, so... everything "worked."
  The maxclients config parsing checks for the case of < 1, but maxclients CONFIG SET parsing was checking for < 0 (allowing maxclients to be set to 0). CONFIG SET parsing is now updated to match the config parsing of < 1.
  It's tempting to add a MINIMUM_CLIENTS define, but... I didn't.
  These changes were inspired by antirez#356, but this doesn't fix that issue.
* Sentinel: remove variable causing warning. (Matt Stancliff, 2014-03-25, 1 file, -3/+1)
  GCC 4.9 warned about this, but clang didn't. This commit fixes:
      sentinel.c: In function 'sentinelReceiveHelloMessages':
      sentinel.c:2156:43: warning: variable 'master' set but not used [-Wunused-but-set-variable]
      sentinelRedisInstance *ri = c->data, *master;
* Fixed undefined variable value with certain code paths. (antirez, 2014-03-25, 1 file, -2/+1)
  In sentinelFlushConfig(), fd could be undefined when the following if statement was true:
      if (rewrite_status == -1) goto werr;
  This could cause random file descriptors to get closed.
* Sentinel: notify user when config can't be saved. (Matt Stancliff, 2014-03-25, 1 file, -9/+12)
* Small typo fixed. (Jan-Erik Rediger, 2014-03-25, 1 file, -1/+1)
* Fix data loss when saving AOF/RDB with no free space. (Matt Stancliff, 2014-03-24, 2 files, -6/+6)
  Previously, the (!fp) check would only catch lack of free space under OS X. Linux waits to discover it can't write until it actually flushes contents to disk. (fwrite() returns success even if the underlying file has no free space to write into; the errors only show up at flush/sync/close time.)
  Fixes antirez/redis#1604.
* Finally fix the `install_server.sh` script. (Jan-Erik Rediger, 2014-03-24, 3 files, -73/+140)
  Includes changes from a dozen bug reports and pull requests. Tested on Ubuntu, Debian, and CentOS.
* Sample and cache RSS in serverCron(). (antirez, 2014-03-24, 4 files, -5/+10)
  Obtaining the RSS (Resident Set Size) is slow on Linux and OS X, and it slowed down the generation of the INFO 'memory' section. Since the RSS does not need to be a real-time measurement, we now sample it with server.hz frequency (10 times per second by default) and use this value both to show the INFO rss field and to compute the fragmentation ratio. In practice this makes no difference for memory profiling of Redis but speeds up the INFO call significantly.
* sdscatvprintf(): try to use a static buffer. (antirez, 2014-03-24, 1 file, -6/+18)
  For small content the function now tries to use a static buffer, avoiding a malloc/free cycle that is too costly when the function is used in performance-critical code paths such as INFO output generation. This change was verified to have positive effects on the execution speed of the INFO command.
* Cache uname() output across INFO calls. (antirez, 2014-03-24, 1 file, -2/+9)
  uname was profiled to be a slow syscall. It always produces the same output in the context of a single execution of Redis, so calling it at every INFO output generation does not make much sense. The utsname structure was made a static variable, and a static integer was added to track whether uname still needs to be called the first time.
* sdscatvprintf(): guess buflen using format length. (antirez, 2014-03-24, 1 file, -1/+2)
  sdscatvprintf() uses a loop where it tries to output the formatted string into a buffer of the initial length; if there is not enough room, a buffer of doubled size is tried, and so forth. The initial guess for the buffer length was very poor: a hardcoded "16". This caused the printf to be processed multiple times without a good reason. Given that printf functions are already not fast, the overhead was significant.
  The new heuristic is to use a buffer 4 times the length of the format string, with 32 as the minimum size. This appears to be a good balance for typical uses of the function inside the Redis code base. This change improved INFO command performance 3x.
* Use 24 bits for the lru object field and improve resolution. (antirez, 2014-03-21, 1 file, -3/+2)
  There were 2 spare bits inside the Redis object structure that are now used to enlarge the range of the LRU field 4x. At the same time the resolution was improved from 10 seconds to 1 second: this still provides 194 days before the LRU counter overflows (restarting from zero). This is not a problem, since it only causes a lack of eviction precision for objects not touched for a very long time, and the imprecision is only temporary.
* Specify lruclock in the redisServer structure via REDIS_LRU_BITS. (antirez, 2014-03-21, 1 file, -2/+1)
  The padding field was totally useless: removed.