path: root/src/debug.c
Commit message (Author, Age, Files, Lines)
* PSYNC2: make persisting replication info more solid (zhaozhao.zz, 2017-09-20, 1 file, -1/+3)
  This commit is a reinforcement of commit c1c99e9.

  1. Replication information can be stored when the RDB file is
     generated by a master using server.slaveseldb when
     server.repl_backlog is not NULL, or repl_stream_db is set to -1.
     That's safe, because a NULL server.repl_backlog will trigger full
     synchronization, and the master will then send a SELECT command to
     the replication stream.
  2. Only do rdbSave* when rsiptr is not NULL; if we do rdbSave* without
     rdbSaveInfo, the slave will miss repl-stream-db.
  3. Save the replication information also in the case of the SAVE
     command, the FLUSHALL command, and DEBUG reload.
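  Point 2 above refers to the rdbSaveInfo calling pattern. A minimal
  sketch of that convention, assuming the Redis 4.0
  rdbPopulateSaveInfo()/rdbSave() API introduced around these commits
  (the surrounding handler is illustrative, not code from the patch):

      rdbSaveInfo rsi, *rsiptr;
      rsiptr = rdbPopulateSaveInfo(&rsi);  /* NULL when no replication
                                            * info can be stored safely */
      /* Always pass rsiptr through, so the RDB gets the repl-stream-db
       * auxiliary field whenever it is available. */
      if (rdbSave(server.rdb_filename, rsiptr) != C_OK) {
          addReplyError(c, "Error saving the RDB file");
          return;
      }
      addReply(c, shared.ok);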
* Modules: DEBUG DIGEST interface. (antirez, 2017-07-06, 1 file, -0/+9)
* Use SipHash hash function to mitigate HashDoS attempts. (antirez, 2017-02-20, 1 file, -2/+0)
  This change attempts to switch to a hash function which mitigates the
  effects of the HashDoS attack (a denial of service attack trying to
  force data structures into worst case behavior) while at the same time
  providing Redis with a hash function that does not expect the input
  data to be word aligned, a condition no longer true now that sds.c
  strings have a variable length header.

  Note that even using a hash function for which collisions cannot be
  generated without knowing the seed, specific implementation details or
  the exposure of the seed in an indirect way (for example the ability
  to add elements to a Set and check the order in which Redis returns
  them with SMEMBERS) may sometimes make the attacker's life simpler in
  the process of trying to guess the correct seed; however the next step
  would be to switch to a log(N) data structure when too many items in a
  single bucket are detected: this seems like overkill in the case of
  Redis.

  SPEED REGRESSION TESTS: In order to verify that switching from
  MurmurHash to SipHash had no impact on speed, a set of benchmarks
  involving fast insertion of 5 million keys was performed. The result
  shows Redis with SipHash in high pipelining conditions to be about 4%
  slower compared to the previous hash function. However this could
  partially be related to the fact that the current implementation does
  not attempt to hash whole words at a time but reads single bytes, in
  order to have an output which is endian-neutral and at the same time
  works on systems where unaligned memory accesses are a problem.
  Further x86 specific optimizations should be tested; the function may
  easily reach the same level as MurmurHash2 if a few optimizations are
  performed.
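  A sketch of the seeded keyed-hash setup this change enables. The
  siphash() prototype matches the 128-bit-key signature Redis uses in
  siphash.c, while the surrounding glue is illustrative:

      #include <stdint.h>
      #include <stddef.h>

      /* Implemented in siphash.c; k points to a 16-byte key. */
      uint64_t siphash(const uint8_t *in, const size_t inlen,
                       const uint8_t *k);

      static uint8_t dict_hash_seed[16]; /* filled with random bytes
                                          * once at startup */

      uint64_t dictGenHashFunction(const void *key, size_t len) {
          /* Without knowing the 16-byte seed, an attacker cannot
           * precompute colliding keys. */
          return siphash(key, len, dict_hash_seed);
      }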
* Merge pull request #3712 from oranagra/fix_assert_debug_digest (Salvatore Sanfilippo, 2017-01-20, 1 file, -1/+1)
  fix rare assertion in DEBUG DIGEST
* fix rare assertion in DEBUG DIGEST (oranagra, 2016-12-24, 1 file, -1/+1)
  getExpire calls dictFind, which can do rehashing. Found by calling
  computeDatasetDigest from serverCron and running the test suite.
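  For context, the rehash step that makes this dangerous is the one
  dictFind() performs on every lookup; the guard below is the relevant
  fragment of Redis's dict.c (quoted from memory, so treat it as a
  sketch):

      /* Perform a single step of incremental rehashing, but only if
       * no safe iterators are bound to the hash table. */
      static void _dictRehashStep(dict *d) {
          if (d->iterators == 0) dictRehash(d, 1);
      }

  Computing the dataset digest walks the dict with an unsafe iterator,
  so a lookup that rehashes under it can change the table layout and
  trip the iterator consistency assertion.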
* serverPanic(): allow printf()-like formatting. (antirez, 2017-01-18, 1 file, -2/+12)
  This is of great interest because it allows us to print debugging
  information that can be useful when debugging, like in the following
  example:

      serverPanic("Unexpected encoding for object %d, %d",
                  obj->type, obj->encoding);
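  A sketch of how such a printf()-style panic helper can be wired up;
  the macro/function split follows what the commit describes, but the
  body here is illustrative:

      #include <stdarg.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define serverPanic(...) _serverPanic(__FILE__, __LINE__, __VA_ARGS__)

      void _serverPanic(const char *file, int line, const char *msg, ...) {
          char fmtmsg[256];
          va_list ap;
          va_start(ap, msg);
          vsnprintf(fmtmsg, sizeof(fmtmsg), msg, ap);
          va_end(ap);
          fprintf(stderr, "PANIC at %s:%d: %s\n", file, line, fmtmsg);
          abort(); /* the real server dumps a full crash report first */
      }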
* active defrag improvements (oranagra, 2017-01-02, 1 file, -1/+2)
* active memory defragmentation (oranagra, 2016-12-30, 1 file, -3/+12)
* DEBUG: new "ziplist" subcommand added. Dumps a ziplist on stdout.antirez2016-12-161-0/+14
| | | | | | The commit improves ziplistRepr() and adds a new debugging subcommand so that we can trigger the dump directly from the Redis API. This command capability was used while investigating issue #3684.
* PSYNC2: different improvements to Redis replication. (antirez, 2016-11-09, 1 file, -2/+2)
  The gist of the changes is that now, partial resynchronizations
  between slaves and masters (without the need of a full resync with RDB
  transfer and so forth) work in a number of cases when it was
  impossible in the past. For instance:

  1. When a slave is promoted to master, the slaves of the old master
     can partially resynchronize with the new master.
  2. Chained slaves (slaves of slaves) can be moved to replicate to
     other slaves or the master itself, without requiring a full resync.
  3. The master itself, after being turned into a slave, is able to
     partially resynchronize with the new master, when it joins
     replication again.

  In order to obtain this, the following main changes were operated:

  * Slaves also take a replication backlog, not just masters.
  * Same stream replication for all the slaves and sub-slaves. The
    replication stream is identical from the top level master to its
    slaves, and is also the same from the slaves to their sub-slaves and
    so forth. This means that if a slave is later promoted to master, it
    has the same replication backlog, and can partially resynchronize
    with its slaves (that were previously slaves of the old master).
  * A given replication history is no longer identified by the `runid`
    of a Redis node. There is instead a `replication ID` which changes
    every time the instance has a new history no longer coherent with
    the past one. So, for example, slaves publish the same replication
    history as their master, however when they are turned into masters,
    they publish a new replication ID, but still remember the old ID, so
    that they are able to partially resynchronize with slaves of the old
    master (up to a given offset).
  * The replication protocol was slightly modified so that a new
    extended +CONTINUE reply from the master is able to inform the slave
    of a replication ID change.
  * REPLCONF CAPA is used in order to notify masters that a slave is
    able to understand the new +CONTINUE reply.
  * The RDB file was extended with an auxiliary field that is able to
    select a given DB after loading in the slave, so that the slave can
    continue receiving the replication stream from the point it was
    disconnected without requiring the master to insert "SELECT"
    statements. This is useful in order to guarantee the "same stream"
    property, because the slave must be able to accumulate an identical
    backlog.
  * Slave pings to sub-slaves are now sent in a special form, when the
    top-level master is disconnected, in order not to interfere with the
    replication stream. We just use out of band "\n" bytes as in other
    parts of the Redis protocol.

  An old design document is available here:
  https://gist.github.com/antirez/ae068f95c0d084891305
  However the implementation is not identical to the description
  because, during the work to implement it, different changes were
  needed in order to make things work well.
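  The "remember the old replication ID" trick in the third bullet boils
  down to shifting the current ID into a secondary slot at promotion
  time. A sketch modeled on Redis's shiftReplicationId() (field and
  function names follow the real code, but consider this illustrative):

      void shiftReplicationId(void) {
          memcpy(server.replid2, server.replid, sizeof(server.replid));
          /* Offsets up to and including the current one belong to the
           * old history, so sub-slaves asking to continue from there
           * can still be served. */
          server.second_replid_offset = server.master_repl_offset + 1;
          changeReplicationId(); /* start a brand new history */
      }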
* debug.c: include dlfcn.h regardless of BACKTRACE support. (antirez, 2016-09-27, 1 file, -1/+1)
* add zmalloc used mem to DEBUG SDSLEN (oranagra, 2016-09-16, 1 file, -3/+5)
* Memory related subcommands of DEBUG moved to MEMORY. (antirez, 2016-09-16, 1 file, -36/+0)
* debug.c: no need to define _GNU_SOURCE, is defined in fmacros.h. (antirez, 2016-09-09, 1 file, -1/+0)
* crash log - improve code dump with more info and called symbols. (antirez, 2016-09-09, 1 file, -20/+59)
* crash log - add hex dump of function code (oranagra, 2016-09-08, 1 file, -0/+22)
* Use const in Redis Module API where possible. (Yossi Gottlieb, 2016-06-20, 1 file, -7/+7)
* DEBUG command self documentation. (antirez, 2016-05-04, 1 file, -1/+48)
* add DEBUG JEMALLOC PURGE and JEMALLOC INFO, cleanup (Oran Agra, 2016-04-25, 1 file, -1/+16)
* Hopefully better memory test on crash. (antirez, 2015-12-16, 1 file, -47/+46)
  The old test, designed to do an invertible transformation on the bits
  in order to avoid touching the original memory content, was not as
  effective as redis-server --test-memory: the former often reported OK
  while the latter was able to spot the error. So the test was replaced
  with one that may perform better; however the new one must back up the
  memory tested, so it tests memory in small pieces. This limits the
  effectiveness because of the CPU caches. However some attempt is made
  to trash the CPU cache between the fill and the check stages, but not
  for the addressing test, unfortunately. We'll see if this test will be
  able to find errors where the old one failed.
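  A simplified sketch of the backup-and-check approach described above
  (illustrative; the real test also trashes the CPU cache between the
  fill and check stages, which is omitted here):

      #include <stddef.h>
      #include <string.h>

      /* Test `len` bytes at `p`, restoring the original content when
       * done. `backup` must point to at least `len` spare bytes.
       * Returns 1 if the region held every pattern, 0 otherwise. */
      int memtest_piece(unsigned char *p, size_t len, unsigned char *backup) {
          static const unsigned char patterns[] = {0x00, 0xff, 0x55, 0xaa};
          int ok = 1;
          memcpy(backup, p, len);                    /* save original bytes */
          for (size_t i = 0; ok && i < sizeof(patterns); i++) {
              memset(p, patterns[i], len);           /* fill stage */
              for (size_t j = 0; j < len; j++) {     /* check stage */
                  if (p[j] != patterns[i]) { ok = 0; break; }
              }
          }
          memcpy(p, backup, len);                    /* restore */
          return ok;
      }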
* Suppress harmless warnings. (antirez, 2015-12-16, 1 file, -2/+2)
* Crash report format improvements. (antirez, 2015-12-16, 1 file, -24/+35)
* Log address causing SIGSEGV. (antirez, 2015-12-15, 1 file, -0/+4)
* fix sprintf and snprintf format string (antirez, 2015-11-28, 1 file, -1/+1)
  There are some cases of printing an unsigned integer with the %d
  conversion specifier, and vice versa (a signed integer with the %u
  specifier). Patch by Sergey Polovko. Backported to Redis from Disque.
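  The class of bug being fixed, in miniature (a self-contained example,
  not code from the patch):

      #include <stdio.h>

      int main(void) {
          unsigned int used = 4000000000U;
          printf("used: %d\n", used);  /* wrong: %d reinterprets the
                                        * value as signed and can print
                                        * a negative number */
          printf("used: %u\n", used);  /* correct specifier for
                                        * unsigned int */
          return 0;
      }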
* More reliable DEBUG loadaof. (antirez, 2015-10-30, 1 file, -0/+1)
  Make sure to flush the AOF output buffers before reloading. Result:
  fewer timing-related false positives in AOF tests.
* Scripting: ability to turn on Lua commands style replication globally. (antirez, 2015-10-30, 1 file, -0/+5)
  Currently this feature is only accessible via DEBUG for testing,
  since otherwise, depending on the instance configuration, a given
  script works or is broken, which is against the Redis philosophy.
* DEBUG RESTART/CRASH-AND-RECOVER [delay] implemented. (antirez, 2015-10-13, 1 file, -0/+14)
* Lazyfree: ability to free whole DBs in background. (antirez, 2015-10-01, 1 file, -2/+2)
* Lazyfree: Hash converted to use plain SDS WIP 4. (antirez, 2015-10-01, 1 file, -10/+8)
* DEBUG DIGEST Set type memory leak fixed. (antirez, 2015-10-01, 1 file, -0/+1)
* Lazyfree: Sorted sets converted to plain SDS. (several commits squashed) (antirez, 2015-10-01, 1 file, -6/+3)
* Lazyfree: Convert Sets to use plain SDS (several commits squashed). (antirez, 2015-10-01, 1 file, -1/+3)
* RDMF: More consistent define names. (antirez, 2015-07-27, 1 file, -62/+62)
* RDMF: REDIS_OK REDIS_ERR -> C_OK C_ERR. (antirez, 2015-07-26, 1 file, -6/+6)
* RDMF: redisAssert -> serverAssert. (antirez, 2015-07-26, 1 file, -11/+11)
* RDMF: OBJ_ macros for object related stuff. (antirez, 2015-07-26, 1 file, -18/+18)
* RDMF: use client instead of redisClient, like Disque. (antirez, 2015-07-26, 1 file, -4/+4)
* RDMF: redisLog -> serverLog. (antirez, 2015-07-26, 1 file, -60/+60)
* RDMF (Redis/Disque merge friendliness) refactoring WIP 1. (antirez, 2015-07-26, 1 file, -1/+1)
* Add sdshdr5 to DEBUG structsize. (antirez, 2015-07-16, 1 file, -0/+1)
* Fix DEBUG structsize output. (antirez, 2015-07-14, 1 file, -7/+7)
* sds size classes - memory optimization (Oran Agra, 2015-07-14, 1 file, -1/+4)
* DEBUG HTSTATS <dbid> added. (antirez, 2015-07-14, 1 file, -0/+21)
  The command reports information about the hash table internal state
  representing the specified database ID. This can be used in order to
  investigate rehashings, memory usage issues and for other debugging
  purposes.
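  A sketch of the kind of statistics involved: walking one chained hash
  table and measuring its bucket chains. The dictht/dictEntry fields
  follow Redis's dict.h layout, but the function itself is illustrative,
  not the HTSTATS implementation:

      #include <stdio.h>

      void htChainStats(dictht *ht) {
          unsigned long nonempty = 0, maxchain = 0;
          for (unsigned long i = 0; i < ht->size; i++) {
              unsigned long chainlen = 0;
              for (dictEntry *de = ht->table[i]; de != NULL; de = de->next)
                  chainlen++;
              if (chainlen) nonempty++;
              if (chainlen > maxchain) maxchain = chainlen;
          }
          printf("buckets: %lu, non-empty: %lu, longest chain: %lu\n",
                 ht->size, nonempty, maxchain);
      }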
* Merge pull request #2301 from mattsta/fix/lengths (Salvatore Sanfilippo, 2015-02-24, 1 file, -2/+2)
  Improve type correctness
* Improve RDB type correctness (Matt Stancliff, 2015-01-19, 1 file, -2/+2)
  It's possible large objects could be larger than 'int', so let's
  upgrade all size counters to ssize_t.

  This also fixes the rdbSaveObject serialized bytes calculation. Since
  entire serializations of data structures can be large, we don't want
  to limit their calculated size to a 32 bit signed max.

  This commit increases object size calculation and cascades the change
  back up to serializedlength printing.

  Before:
      127.0.0.1:6379> debug object hihihi
      ... encoding:quicklist serializedlength:-2147483559 ...

  After:
      127.0.0.1:6379> debug object hihihi
      ... encoding:quicklist serializedlength:2147483737 ...
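  The before/after pair is exactly a 32-bit wraparound: 2147483737
  exceeds INT_MAX (2147483647), so stored in a 32-bit signed int it
  shows up as 2147483737 - 2^32 = -2147483559. A tiny standalone
  demonstration (not code from the patch):

      #include <stdio.h>

      int main(void) {
          long long real = 2147483737LL;  /* true serialized length */
          int truncated = (int)real;      /* what a 32-bit counter stores
                                           * (wraps on the two's
                                           * complement targets Redis
                                           * runs on) */
          printf("%lld -> %d\n", real, truncated);
          /* prints: 2147483737 -> -2147483559 */
          return 0;
      }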
* DEBUG structsize (antirez, 2015-01-23, 1 file, -0/+7)
  Show sizes of a few important data structures in Redis. More missing.
* Revert "Use REDIS_SUPERVISED_NONE instead of 0." (antirez, 2015-01-12, 1 file, -2/+1)
  This reverts commit 2c925b0c302ad8612cc4ac6261549482d3c69846.
  Nevermind.
* Use REDIS_SUPERVISED_NONE instead of 0. (antirez, 2015-01-12, 1 file, -1/+2)
* Add more quicklist info to DEBUG OBJECT (Matt Stancliff, 2015-01-02, 1 file, -2/+23)
  Adds: ql_compressed (boolean, 1 if compression enabled for list, 0
  otherwise)
  Adds: ql_uncompressed_size (actual uncompressed size of all
  quicklistNodes)
  Adds: ql_ziplist_max (quicklist max ziplist fill factor)

  Compression ratio of the list is then
  ql_uncompressed_size / serializedlength.

  We report ql_uncompressed_size for all quicklists because
  serializedlength is a _compressed_ representation anyway.

  Sample output from a large list:
      127.0.0.1:6379> llen abc
      (integer) 38370061
      127.0.0.1:6379> debug object abc
      Value at:0x7ff97b51d140 refcount:1 encoding:quicklist
      serializedlength:19878335 lru:9718164 lru_seconds_idle:5
      ql_nodes:21945 ql_avg_node:1748.46 ql_ziplist_max:-2
      ql_compressed:0 ql_uncompressed_size:1643187761
      (1.36s)

  The 1.36s result time is because rdbSavedObjectLen() is serializing
  the object, not because of any new stats reporting. If we run DEBUG
  OBJECT on a compressed list, DEBUG OBJECT takes almost *zero* time
  because rdbSavedObjectLen() reuses already-compressed ziplists:
      127.0.0.1:6379> debug object abc
      Value at:0x7fe5c5800040 refcount:1 encoding:quicklist
      serializedlength:19878335 lru:9718109 lru_seconds_idle:5
      ql_nodes:21945 ql_avg_node:1748.46 ql_ziplist_max:-2
      ql_compressed:1 ql_uncompressed_size:1643187761
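  Plugging the sample numbers into that ratio:
  ql_uncompressed_size / serializedlength = 1643187761 / 19878335
  which is about 82.7, i.e. the serialized form is roughly 83 times
  smaller than the raw uncompressed ziplist bytes for this list.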
* Allow compression of interior quicklist nodes (Matt Stancliff, 2015-01-02, 1 file, -1/+1)
  Let user set how many nodes to *not* compress.

  We can specify a compression "depth" of how many nodes to leave
  uncompressed on each end of the quicklist.

  Depth 0 = disable compression.
  Depth 1 = only leave head/tail uncompressed.
    (read as: "skip 1 node on each end of the list before compressing")
  Depth 2 = leave head, head->next, tail->prev, tail uncompressed.
    ("skip 2 nodes on each end of the list before compressing")
  Depth 3 = Depth 2 + head->next->next + tail->prev->prev
    ("skip 3 nodes...")
  etc.

  This also:
  - updates RDB storage to use native quicklist compression (if node is
    already compressed) instead of uncompressing, generating the RDB
    string, then re-compressing the quicklist node.
  - internalizes the "fill" parameter for the quicklist so we don't need
    to pass it to _every_ function. Now it's just a property of the
    list.
  - allows a runtime-configurable compression option, so we can expose a
    compression parameter in the configuration file if people want to
    trade slight request-per-second performance for up to 90%+ memory
    savings in some situations.
  - updates the quicklist tests to do multiple passes: 200k+ tests now.
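  The depth rule above reduces to a simple predicate on a node's
  position. A sketch assuming Redis-quicklist-style `compress` (depth)
  and `len` (node count) fields, with the helper name invented for
  illustration:

      /* Return 1 if the node at `index` must stay uncompressed because
       * it falls within `compress` nodes of either end of the list. */
      int quicklistIndexInDepth(const quicklist *ql, unsigned long index) {
          if (ql->compress == 0) return 1;  /* compression disabled:
                                             * everything stays plain */
          return index < (unsigned long)ql->compress ||       /* near head */
                 index + (unsigned long)ql->compress >= ql->len; /* near tail */
      }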