path: root/tests
Commit message | Author | Date | Files | Lines
* Add RedisModule_KeyExists (#9600) | Viktor Söderqvist | 2021-10-18 | 2 files | -0/+16
The LRU of the key is not touched. Logically expired keys are treated as non-existing.
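For illustration, a minimal sketch of using the new API from a module command (the handler name is hypothetical):
```
#include "redismodule.h"

/* Hypothetical command handler: reply 1/0 depending on key existence,
 * without touching the key's LRU. */
int ExistsCheck_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);
    /* 1 if the key exists and is not logically expired. */
    int exists = RedisModule_KeyExists(ctx, argv[1]);
    return RedisModule_ReplyWithLongLong(ctx, exists);
}
```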
* Skip Active-defrag edge case test until we fix it. (#9645) | yoav-steinberg | 2021-10-18 | 1 file | -0/+4
Test started failing consistently in 32bit builds after upgrading to jemalloc 5.2.1 (#9623).
* Attempt to fix a valgrind test failure due to timing (#9643) | Oran Agra | 2021-10-18 | 1 file | -1/+1
In the past few days I've seen two failures in the valgrind daily test:
*** [err]: slave fails full sync and diskless load swapdb recovers it in tests/integration/replication.tcl
    Replica didn't get into loading mode
Can't reproduce it, but I'm hoping it's just too slow (to start loading within 5 seconds).
* Modify mem_usage2 module callback to take a sample_size argument (#9612) | Hanna Fadida | 2021-10-17 | 1 file | -1/+2
This is useful for approximating the size computation of complex module types. Note that the mem_usage2 callback is new and has not been released yet, which is why we can modify it.
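For illustration, a sketch of such a callback, assuming the shape `size_t (*mem_usage2)(RedisModuleKeyOptCtx *ctx, const void *value, size_t sample_size)`; the `MyType` struct is hypothetical:
```
#include "redismodule.h"

/* Hypothetical module type: an array of blobs with a recorded size each. */
typedef struct MyType {
    size_t len;        /* number of elements */
    size_t *elem_size; /* allocated bytes per element */
} MyType;

/* Sample up to sample_size elements and extrapolate to the whole value. */
size_t MyType_MemUsage2(RedisModuleKeyOptCtx *ctx, const void *value, size_t sample_size) {
    (void)ctx;
    const MyType *o = value;
    if (o->len == 0) return sizeof(*o);
    size_t samples = (sample_size && sample_size < o->len) ? sample_size : o->len;
    size_t bytes = 0;
    for (size_t i = 0; i < samples; i++) bytes += o->elem_size[i];
    return sizeof(*o) + bytes / samples * o->len; /* avg element size * count */
}
```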
* Fix daily failures due to macos-latest change. (#9637) | Yossi Gottlieb | 2021-10-17 | 1 file | -2/+3
* Fix test modules linking on macOS 11.x.
* Use macOS 10.x for FreeBSD VM as VirtualBox is not yet supported on 11.
* Improved the reliability of cluster replica sync tests (#9628) | Madelyn Olson | 2021-10-13 | 2 files | -23/+40
* Move config `cluster-config-file` to generic configs (#9597) | Bjorn Svensson | 2021-10-07 | 1 file | -0/+1
* obuf based eviction tests run until eviction occurs (#9611) | yoav-steinberg | 2021-10-07 | 1 file | -34/+33
The obuf based eviction tests now run until eviction occurs, instead of assuming a certain amount of writes will fill the obuf enough for eviction to occur. This handles the kernel buffering written data and emptying the obuf even though no one actually reads from it. The tests have a new timeout of 20sec: if a test doesn't pass after 20 sec it'll fail. Hopefully this is enough for our slow CI targets. This also eliminates the need to skip some tests in TLS.
* Make tracking invalidation messages always after command's reply (#9422) | Huang Zhw | 2021-10-07 | 1 file | -0/+95
Tracking invalidation messages were sometimes sent in an inconsistent order, before the command's reply rather than after it. In addition, they were sometimes embedded inside other commands' responses, like MULTI-EXEC and MGET.
* Hide empty and loading replicas from CLUSTER SLOTS responses (#9287) | GutovskyMaria | 2021-10-06 | 1 file | -0/+103
* Test fails when flushdb triggers a bgsave (#9535) | yoav-steinberg | 2021-10-06 | 1 file | -1/+1
Flush db and *then* wait for the bgsave to complete.
* Attempt to fix rare pubsub obuf maxmemory eviction test failure (#9603) | yoav-steinberg | 2021-10-05 | 1 file | -2/+4
* Reduce delay between publishes to allow less time to write the obufs.
* More subscribed clients to buffer more data per publish.
* Make sure the main connection isn't evicted (it has a large qbuf).
* argv mem leak during multi command execution. (#9598) | yoav-steinberg | 2021-10-05 | 1 file | -0/+17
Changes in #9528 led to a memory leak if the command implementation used rewriteClientCommandArgument inside MULTI-EXEC. Adding an explicit test for that case, since the test that uncovered it didn't specifically target this scenario.
* Fix invalid memory write on lua stack overflow (CVE-2021-32626) (#9591) | Meir Shpilraien (Spielrein) | 2021-10-04 | 1 file | -0/+30
When Lua calls our C code, by default, the Lua stack has room for 10 elements. In most cases this is more than enough, but sometimes it's not, and the caller must verify the Lua stack size before pushing elements. In 3 places in the code there was no verification of the Lua stack size. On specific inputs this missing verification could have led to an invalid memory write:
1. In 'luaReplyToRedisReply', one might return a nested reply that will explode the Lua stack.
2. In 'redisProtocolToLuaType', the Redis reply might be deep enough to explode the Lua stack (notice that currently there is no such command in Redis that returns such a nested reply, but modules might do it).
3. In 'ldbRedis', one might give a command with enough arguments to explode the Lua stack (all the arguments will be pushed to the Lua stack).
This commit solves all 3 issues by calling 'lua_checkstack' to verify that there is enough room in the Lua stack to push elements. In case 'lua_checkstack' returns an error (there is not enough room in the Lua stack and it's not possible to increase the stack), we do the following:
1. In 'luaReplyToRedisReply', we return an error to the user.
2. In 'redisProtocolToLuaType', we exit with a panic (we assume this scenario is rare because it can only happen with a module).
3. In 'ldbRedis', we return an error.
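The defensive pattern, sketched with the standard Lua C API (the surrounding push logic is illustrative, not the exact Redis code):
```
#include <lua.h>

/* Defensive push: grow the Lua stack before using it and fail
 * gracefully if it can't grow (lua_checkstack returns 0 on failure). */
static int push_reply_array(lua_State *lua, const char **elems, int n) {
    /* Room for the table plus one pushed element at a time. */
    if (!lua_checkstack(lua, 2)) return 0; /* report error, don't overflow */
    lua_newtable(lua);
    for (int i = 0; i < n; i++) {
        lua_pushstring(lua, elems[i]);
        lua_rawseti(lua, -2, i + 1); /* pops the string */
    }
    return 1;
}
```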
* Fix protocol parsing on 'ldbReplParseCommand' (CVE-2021-32672) (#9590) | Oran Agra | 2021-10-04 | 1 file | -1/+16
The protocol parsing in 'ldbReplParseCommand' (Lua debugging) assumed protocol correctness. This means that if the following is given:
*1
$100
test
the parser would try to read an additional 94 unallocated bytes after the client buffer. This commit fixes the issue by validating that there are actually enough bytes to read. It also limits the amount of data that can be sent by the debugger client to 1M so the client will not be able to explode the memory.
Co-authored-by: meir@redislabs.com <meir@redislabs.com>
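A sketch of the kind of bounds check involved (illustrative; not the actual ldbReplParseCommand code):
```
#include <stddef.h>
#include <string.h>

/* Read a bulk string of declared length `len` starting at `p`, but only
 * if the buffer actually contains len bytes plus the trailing CRLF. */
static const char *read_bulk(const char *p, const char *buf_end,
                             long len, const char **out) {
    if (len < 0 || buf_end - p < len + 2) return NULL;   /* truncated input */
    if (memcmp(p + len, "\r\n", 2) != 0) return NULL;    /* malformed */
    *out = p;
    return p + len + 2; /* position just past the bulk */
}
```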
* Fix ziplist and listpack overflows and truncations (CVE-2021-32627, CVE-2021-32628) (#9589) | Oran Agra | 2021-10-04 | 1 file | -0/+156
- Fix possible heap corruption in ziplist and listpack resulting from trying to allocate more than the maximum size of 4GB.
- Prevent ziplist (hash and zset) from reaching a size above 1GB; it will be converted to HT encoding (that's not a useful size).
- Prevent listpack (stream) from reaching a size above 1GB.
- XADD will start a new listpack if the new record may cause the previous listpack to grow over 1GB.
- XADD will respond with an error if a single stream record is over 1GB.
- List type (ziplist in quicklist) was truncating strings that were over 4GB; now it'll respond with an error.
Co-authored-by: sundb <sundbcn@gmail.com>
* Prevent unauthenticated client from easily consuming lots of memory (CVE-2021-32675) (#9588) | Oran Agra | 2021-10-04 | 1 file | -0/+16
This change sets a low limit for multibulk and bulk length in the protocol for unauthenticated connections, so that they can't easily cause redis to allocate massive amounts of memory by sending just a few characters on the network. The new limits are 10 arguments of 16kb each (instead of 1m of 512mb).
* improve the stability and correctness of "Test child sending info" (#9562) | YaacovHazan | 2021-10-04 | 1 file | -5/+16
Since we measure the COW size in this test by changing some keys and reading the reported COW size, we need to ensure that the "dismiss mechanism" (#8974) will not free memory and reduce the COW size. For that, this commit changes the size of the keys to 512B (less than a page), and because some keys may fall into the same page, we modify ten keys on each iteration and check for at least a 50% change in the COW size.
* decrby LLONG_MIN caused negation overflow. (#9577) | yoav-steinberg | 2021-10-03 | 1 file | -0/+6
Note that this breaks compatibility because in the past doing: DECRBY x -9223372036854775808 would succeed (and create an invalid result) and now this returns an error.
* Remove argument count limit, dynamically grow argv. (#9528) | yoav-steinberg | 2021-10-03 | 1 file | -1/+9
Remove the hard coded multi-bulk limit (was 1,048,576); the new limit is INT_MAX. When a client sends an m-bulk that's higher than 1024, we initially allocate the argv array for only 1024 arguments, and gradually grow that allocation as arguments are received.
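The gradual-growth idea in a minimal sketch (names and the doubling policy are illustrative, not the actual parser code):
```
#include <stdlib.h>

#define ARGV_INITIAL_CAP 1024

/* Grow the argv allocation on demand instead of allocating up front
 * whatever huge count the client announced in the multibulk header. */
static void **ensure_argv_capacity(void **argv, size_t *cap, size_t needed) {
    if (needed <= *cap) return argv;
    size_t newcap = *cap ? *cap : ARGV_INITIAL_CAP;
    while (newcap < needed) newcap *= 2; /* grow as arguments arrive */
    void **grown = realloc(argv, newcap * sizeof(*grown));
    if (grown != NULL) *cap = newcap;
    return grown;
}
```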
* Modules: add RM_LoadDataTypeFromStringEncver (#9537) | Hanna Fadida | 2021-09-30 | 2 files | -11/+19
Adding an advanced API to enable loading data that was serialized with a specific encoding version.
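A hedged usage sketch, assuming a signature like `void *RedisModule_LoadDataTypeFromStringEncver(const RedisModuleString *str, const RedisModuleType *mt, int encver)`; the type and encver constant are hypothetical:
```
#include "redismodule.h"

#define MYTYPE_ENCVER_V1 1 /* hypothetical older encoding version */

/* Deserialize a payload that was saved with encoding version 1 of a
 * hypothetical module type, regardless of the type's current encver. */
void *load_v1_payload(RedisModuleType *mytype, RedisModuleString *payload) {
    return RedisModule_LoadDataTypeFromStringEncver(payload, mytype, MYTYPE_ENCVER_V1);
}
```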
* verbose debug print in test to debug rare CI failure. (#9563) | yoav-steinberg | 2021-09-29 | 1 file | -1/+5
* Fix stream sanitization for non-int first value (#9553) | Oran Agra | 2021-09-26 | 1 file | -1/+1
This was recently broken in #9321 when we validated stream IDs to be integers, but did that after stepping to the next record instead of before it.
* Client eviction ci issues (#9549) | yoav-steinberg | 2021-09-26 | 4 files | -17/+20
Fixing CI test issues introduced in #8687:
- valgrind warnings in readQueryFromClient when the client was freed by processInputBuffer
- adding DEBUG pause-cron for tests not to be time dependent
- skipping a test that depends on socket buffers / events not compatible with TLS
- making sure the client got subscribed by not using a deferring client
* Add --skipfile and --skiptest regex support. (#9555) | Yossi Gottlieb | 2021-09-26 | 2 files | -4/+4
Empty patterns are not considered and are skipped. Also, improve the help text.
* Fix test randstring: comparing string and int is wrong. (#9544) | Huang Zhw | 2021-09-24 | 1 file | -2/+3
This caused the generated string to contain "\". Fixes a broken change in #8687.
* Add RM_TrimStringAllocation(). (#9540) | Yossi Gottlieb | 2021-09-23 | 1 file | -0/+1
This commit makes it possible to explicitly trim the allocation of a RedisModuleString. Currently, Redis automatically trims strings that have been retained by a module command when it returns. However, this is not thread safe and may result in corruption in threaded modules. Supporting explicit trimming offers a backwards compatible workaround to this problem.
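A hedged sketch of the intended use from a threaded module (assuming `void RedisModule_TrimStringAllocation(RedisModuleString *str)`):
```
#include "redismodule.h"

/* After retaining a string for background work, trim any spare
 * allocation explicitly instead of relying on the automatic trim,
 * which is not thread safe. */
void retain_for_background_work(RedisModuleCtx *ctx, RedisModuleString *s) {
    RedisModule_RetainString(ctx, s);
    RedisModule_TrimStringAllocation(s);
    /* ... hand `s` off to a worker thread ... */
}
```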
* Client eviction (#8687) | yoav-steinberg | 2021-09-23 | 4 files | -6/+652
### Description
A mechanism for disconnecting clients when the sum of all connected clients is above a configured limit. This prevents eviction or OOM caused by accumulated used memory between all clients. It's a complementary mechanism to the `client-output-buffer-limit` mechanism which takes into account not only a single client and not only output buffers but rather all memory used by all clients.
#### Design
The general design is as follows:
* We track memory usage of each client, taking into account all memory used by the client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date after reading from the socket, after processing commands and after writing to the socket.
* Based on the used memory we sort all clients into buckets. Each bucket contains all clients using up to 2x the memory of the clients in the bucket below it. For example: up to 1m, up to 2m, up to 4m, ...
* Before processing a command and before sleep we check if we're over the configured limit. If we are, we start disconnecting clients from larger buckets downwards until we're under the limit.
#### Config
`maxmemory-clients`: max memory all clients are allowed to consume; above this threshold we disconnect clients. This config can either be set to 0 (meaning no limit), a size in bytes (possibly with MB/GB suffix), or as a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%` would mean 10% of `maxmemory`).
#### Important code changes
* During the development I encountered yet more situations where our io-threads access global vars, and needed to fix them. I also had to keep the clients sorted into the (global) memory buckets while their memory usage changes in the io-thread. To achieve this I decided to simplify how we check if we're in an io-thread and make it much more explicit. I removed the `CLIENT_PENDING_READ` flag used for checking if the client is in an io-thread (it wasn't used for anything else) and just used the global `io_threads_op` variable the same way to check during writes.
* I optimized the cleanup of the client from the `clients_pending_read` list on client freeing. We now store a pointer in the `client` struct to this list so we don't need to search in it (`pending_read_list_node`).
* Added `evicted_clients` stat to the `INFO` command.
* Added `CLIENT NO-EVICT ON|OFF` subcommand to exclude a specific client from the client eviction mechanism, and a corresponding 'e' flag in the client info string.
* Added a `multi-mem` field in the client info string to show how much memory is used up by buffered multi commands.
* Client `tot-mem` now accounts for buffered multi-commands, pubsub patterns and channels (partially), and tracking prefixes (partially).
* The CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function so clients will be disconnected between processing different clients and not only before sleep. This new function can be used in the future for work we want to do outside the command processing loop but don't want to wait for all clients to be processed before we get to it. Specifically, I wanted to handle output-buffer-limit related closing before we process client eviction in case the two race with each other.
* Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction buckets.
* Each client now holds a pointer to the client eviction memory usage bucket it belongs to and a listNode to itself in that bucket for quick removal.
* The global `io_threads_op` variable can now contain an `IO_THREADS_OP_IDLE` value indicating no io-threading is currently being executed.
* In order to track memory used by each client in real time we can't rely on updating these stats in `clientsCron()` alone anymore. So now I call `updateClientMemUsage()` (used to be `clientsCronTrackClientsMemUsage()`) after command processing, after writing data to pubsub clients, after writing the output buffer and after reading from the socket (and maybe other places too). The function is written to be fast.
* Clients are evicted if needed (with an appropriate log line) in `beforeSleep()` and before processing a command (before performing oom-checks and key-eviction).
* All clients' memory usage buckets are grouped as follows: all clients using less than 64k; 64K..128K; 128K..256K; ...; 2G..4G; all clients using 4g and up.
* Added client-eviction.tcl with a bunch of tests for the new mechanism.
* Extended maxmemory.tcl to test the interaction between the maxmemory and maxmemory-clients settings.
* Added an option to flag a numeric configuration variable as a "percent"; this means that if we encounter a '%' after the number in the config file (or CONFIG SET command) we consider it valid. Such a number is stored internally as a negative value. This way an integer value can be interpreted as either a percent (negative) or an absolute value (positive). This is useful, for example, if some numeric configuration can optionally be set to a percentage of something else.
Co-authored-by: Oran Agra <oran@redislabs.com>
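For reference, a configuration sketch based on the description above (values are illustrative):
```
# Cap the total memory used by all clients at 10% of maxmemory:
maxmemory 4gb
maxmemory-clients 10%

# Or an absolute cap (0 disables the limit):
# maxmemory-clients 256mb

# At runtime, exempt a critical connection from eviction:
# redis-cli> CLIENT NO-EVICT on
```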
* Adding ACL support for modules (#9309) | YaacovHazan | 2021-09-23 | 3 files | -0/+243
This commit introduces a new flag to RM_Call: 'C' - check if the command can be executed according to the ACLs associated with it.
Also, three new APIs were added to check if a command, key, or channel can be executed or accessed by a user, according to the ACLs associated with it:
- RM_ACLCheckCommandPerm
- RM_ACLCheckKeyPerm
- RM_ACLCheckChannelPerm
The user for these APIs is a RedisModuleUser object, which for a module user is returned by the RM_CreateModuleUser API, or for a general ACL user can be retrieved by these two new APIs:
- RM_GetCurrentUserName - retrieve the user name of the client connection behind the current context.
- RM_GetModuleUserFromUserName - get a RedisModuleUser from a user name.
As a result of getting a RedisModuleUser from a name, a module can now also access the general ACL users (not just the ones it created). This means the already existing API RM_SetModuleUserACL() can be used to change the ACL rules for such users.
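For illustration, a hedged sketch of the 'C' flag (the format-string usage and NULL-on-denial handling are assumptions based on the description above; the command is hypothetical):
```
#include "redismodule.h"

/* Run GET on behalf of the current client, letting RM_Call enforce
 * the ACLs of the user attached to the context ('C' flag). */
int ACLGet_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);
    RedisModuleCallReply *reply = RedisModule_Call(ctx, "GET", "Cs", argv[1]);
    if (reply == NULL)
        return RedisModule_ReplyWithError(ctx, "ERR permission denied or call failed");
    RedisModule_ReplyWithCallReply(ctx, reply);
    RedisModule_FreeCallReply(reply);
    return REDISMODULE_OK;
}
```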
* Add ZMPOP/BZMPOP commands. (#9484) | Binbin | 2021-09-23 | 3 files | -92/+592
This is similar to the recent addition of LMPOP/BLMPOP (#9373), but for zset.
Syntax for the new ZMPOP command: `ZMPOP numkeys [<key> ...] MIN|MAX [COUNT count]`
Syntax for the new BZMPOP command: `BZMPOP timeout numkeys [<key> ...] MIN|MAX [COUNT count]`
Some background:
- ZPOPMIN/ZPOPMAX take only one key, and can return multiple elements.
- BZPOPMIN/BZPOPMAX take multiple keys, but return only one element from just one key.
- ZMPOP/BZMPOP can take multiple keys, and can return multiple elements from just one key.
Note that although ZMPOP/BZMPOP can take multiple keys, they eventually operate on just one key. And they will propagate as ZPOPMIN or ZPOPMAX with the COUNT option.
As new commands, if we cannot pop any elements, the response is:
- ZMPOP: Return a NIL in both RESP2 and RESP3, unlike ZPOPMIN/ZPOPMAX which return an empty array.
- BZMPOP: Return a NIL in both RESP2 and RESP3 when the timeout is reached, like BZPOPMIN/BZPOPMAX.
The normal response is nested arrays in RESP2 and RESP3:
```
ZMPOP/BZMPOP
1) keyname
2) 1) 1) member1
      2) score1
   2) 1) member2
      2) score2

In RESP2:
1) "myzset"
2) 1) 1) "three"
      2) "3"
   2) 1) "two"
      2) "2"

In RESP3:
1) "myzset"
2) 1) 1) "three"
      2) (double) 3
   2) 1) "two"
      2) (double) 2
```
* tune lazyfree test timeout (#9527) | Oran Agra | 2021-09-22 | 1 file | -3/+3
I've seen this CI failure a couple of times on MacOS:
*** [err]: lazy free a stream with all types of metadata in tests/unit/lazyfree.tcl
    lazyfree isn't done
The only reason I can think of is that 500ms is sometimes not enough on slow systems.
* fix replication test failure, probing the wrong log file (#9513) | Oran Agra | 2021-09-19 | 1 file | -1/+1
* Adds limit to SINTERCARD/ZINTERCARD. (#9425) | Binbin | 2021-09-16 | 2 files | -3/+54
Implements the [LIMIT limit] variant of SINTERCARD/ZINTERCARD. Now with LIMIT, we can stop the searching when the cardinality reaches the limit, and return the cardinality ASAP.
Note that in SINTERCARD, the old syntax was: `SINTERCARD key [key ...]`
In order to add an optional parameter, we must break the old syntax. So the new syntax of SINTERCARD will be consistent with ZINTERCARD. New syntax: `SINTERCARD numkeys key [key ...] [LIMIT limit]`. Note that this means that SINTERCARD has a different syntax than SINTER and SINTERSTORE (taking a numkeys argument).
As for ZINTERCARD, we can easily add an optional parameter to it. New syntax: `ZINTERCARD numkeys key [key ...] [LIMIT limit]`
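A short illustrative session (keys are made up):
```
> SADD s1 a b c d
(integer) 4
> SADD s2 b c d e
(integer) 4
> SINTERCARD 2 s1 s2
(integer) 3
> SINTERCARD 2 s1 s2 LIMIT 2
(integer) 2
```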
* Sentinel: Fix failed daily tests, due to race condition (#9501) | Wen Hui | 2021-09-15 | 2 files | -5/+15
* A better approach for COMMAND INFO for movablekeys commands (#8324) | guybe7 | 2021-09-15 | 3 files | -0/+132
Fix #7297
The problem: Today, there is no way for a client library or app to know the key name indexes for commands such as ZUNIONSTORE/EVAL and others with "numkeys", since COMMAND INFO returns no useful info for them. For cluster-aware redis clients, this requires 'patching' the client library code specifically for each of these commands or resolving each execution of these commands with COMMAND GETKEYS.
The solution: Introducing key specs other than the legacy "range" (first,last,step).
The 8th element of the command info array, if it exists, holds an array of key specs. The array may be empty, which indicates the command doesn't take any key arguments, or may contain one or more key specs, each of which may lead to the discovery of 0 or more key arguments.
A client library that doesn't support this key-spec feature will keep using the first,last,step and movablekeys flag, which will obviously remain unchanged.
A client that supports this key-specs feature needs only to look at the key-specs array. If it finds an unrecognized spec, it must resort to using COMMAND GETKEYS if it wishes to get all key name arguments, but if all it needs is one key in order to know which cluster node to use, then maybe another spec (if the command has several) can supply that, and there's no need to use GETKEYS.
Each spec is an array of arguments: the first one is the spec name, the second is an array of flags, and the third is an array containing details about the spec (with a specific meaning for each spec type). The initial flags we support are "read" and "write", indicating if the keys that this key-spec finds are used for read or for write. Clients should ignore any unfamiliar flags.
In order to easily find the positions of keys in a given array of args we introduce key specs. There are two logical steps of key specs:
1. `start_search`: Given an array of args, indicate where we should start searching for keys.
2. `find_keys`: Given the output of start_search and an array of args, indicate all possible indices of keys.
### start_search step specs
- `index`: specify an argument index explicitly
  - `index`: 0 based index (1 means the first command argument)
- `keyword`: specify a string to match in `argv`. We should start searching for keys just after the keyword appears.
  - `keyword`: the string to search for
  - `start_search`: an index from which to start the keyword search (can be negative, which means to search from the end)
Examples:
- `SET` has start_search of type `index` with value `1`
- `XREAD` has start_search of type `keyword` with value `[“STREAMS”,1]`
- `MIGRATE` has start_search of type `keyword` with value `[“KEYS”,-2]`
### find_keys step specs
- `range`: specify `[lastkey, step, limit]`.
  - `lastkey`: index of the last key, relative to the index returned from begin_search. -1 indicates up to the last argument, -2 one before the last
  - `step`: how many args should we skip after finding a key, in order to find the next one
  - `limit`: if `lastkey` is -1, we use limit to stop the search by a factor. 0 and 1 mean no limit. 2 means ½ of the remaining args, 3 means ⅓, and so on.
- `keynum`: specify `[keynum_index, first_key_index, step]`.
  - `keynum_index`: relative to the return of the `start_search` spec.
  - `first_key_index`: relative to `keynum_index`.
  - `step`: how many args should we skip after finding a key, in order to find the next one
Examples:
- `SET` has `range` of `[0,1,0]`
- `MSET` has `range` of `[-1,2,0]`
- `XREAD` has `range` of `[-1,1,2]`
- `ZUNION` has `start_search` of type `index` with value `1` and `find_keys` of type `keynum` with value `[0,1,1]`
- `AI.DAGRUN` has `start_search` of type `keyword` with value `[“LOAD“,1]` and `find_keys` of type `keynum` with value `[0,1,1]` (see https://oss.redislabs.com/redisai/master/commands/#aidagrun)
Note: this solution is not perfect as the module writers can come up with anything, but at least we will be able to find the key args of the vast majority of commands. If one of the above specs can't describe the key positions, the module writer can always fall back to the `getkeys-api` option.
Some keys cannot be found easily (`KEYS` in `MIGRATE`: imagine the argument for `AUTH` is the string "KEYS" - we will start searching in the wrong index). The guarantee is that the specs may be incomplete (`incomplete` will be specified in the spec to denote that) but we never report false information (assuming the command syntax is correct). For `MIGRATE` we start searching from the end - `startfrom=-1` - and if one of the keys is actually called "keys" we will report only a subset of all keys - hence the `incomplete` flag.
Some `incomplete` specs can be completely empty (i.e. UNKNOWN begin_search), which should tell the client that COMMAND GETKEYS (or any other way to get the keys) must be used (example: for `SORT` there is no way to describe the STORE keyword spec, as the word "store" can appear anywhere in the command).
We will expose these key specs in the `COMMAND` command so that clients can learn, on startup, where the keys are for all commands, instead of holding hardcoded tables or using `COMMAND GETKEYS` at runtime.
Comments:
1. Redis doesn't internally use the new specs; they are only used for COMMAND output.
2. In order to support the current COMMAND INFO format (reply array indices 4, 5, 6) we created a synthetic range, called legacy_range, that, if possible, is built according to the new specs.
3. Redis currently uses only getkeys_proc or the legacy_range to get the key indices (in COMMAND GETKEYS for example).
"incomplete" specs: the commands we have issues with are MIGRATE, STRALGO, and SORT. For MIGRATE, because the token KEYS, if it exists, must be the last token, we can search in reverse; if one of the keys is actually the string "keys" we will return just a subset of the keys (hence, it's "incomplete"). For SORT and STRALGO we can't use this heuristic (the keys can be anywhere in the command) and therefore we added a key spec that is both "incomplete" and of "unknown type". If a client encounters an "incomplete" spec it means that it must find a different way (either COMMAND GETKEYS or its own parser) to retrieve the keys. Please note that all commands, apart from the three mentioned above, have "complete" key specs.
* Added URI support to redis-benchmark (cli and benchmark share the same uri-parsing methods) (#9314) | filipe oliveira | 2021-09-14 | 2 files | -56/+64
- Add `-u <uri>` command line option to support the `redis://` URI scheme.
- Included a server connection information object (`struct cliConnInfo`), used to describe an ip:port pair, db num user input, and user:pass, to avoid a large number of function arguments.
- Using sds on connection info strings for redis-benchmark/redis-cli.
Co-authored-by: yoav-steinberg <yoav@monfort.co.il>
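An illustrative invocation (credentials and counts are made up):
```
# redis://[user:password@]host[:port][/db]
redis-benchmark -u redis://user:secret@127.0.0.1:6379/0 -n 100000 -t set,get
```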
* Modules: Add remaining list API functions (#8439) | Viktor Söderqvist | 2021-09-14 | 3 files | -0/+303
List functions operating on elements by index:
* RM_ListGet
* RM_ListSet
* RM_ListInsert
* RM_ListDelete
Iteration is done using a simple for loop over indices. The index based functions use an internal iterator as an optimization. This is explained in the docs:
```
 * Many of the list functions access elements by index. Since a list is in
 * essence a doubly-linked list, accessing elements by index is generally an
 * O(N) operation. However, if elements are accessed sequentially or with
 * indices close together, the functions are optimized to seek the index from
 * the previous index, rather than seeking from the ends of the list.
 *
 * This enables iteration to be done efficiently using a simple for loop:
 *
 *     long n = RM_ValueLength(key);
 *     for (long i = 0; i < n; i++) {
 *         RedisModuleString *elem = RedisModule_ListGet(key, i);
 *         // Do stuff...
 *     }
```
* Fix memory leak due to missing freeCallback in blockonbackground moduleapi test (#9499) | sundb | 2021-09-14 | 1 file | -1/+6
Before #9497, we did not manually shut down all the clients before redis-server was shut down, which prevented valgrind from detecting a memory leak in the client's argc.
* Fixed leaked client for "start_server" when running in --loop (#9497) | yoav-steinberg | 2021-09-13 | 2 files | -1/+8
* On `kill_server` make sure we close the default `"client"` connection.
* Don't reconnect when trying to execute the client's `close` command.
* On `restart_server` make sure to remove the (closed) default `"client"` after killing the old server.
* PSYNC2: make partial sync possible after master reboot (#8015) | zhaozhao.zz | 2021-09-13 | 3 files | -0/+184
The main idea is to allow a master to load replication info from the RDB file when rebooting. If the master can load replication info, it means that replicas may have the chance to psync with the master, which can save much traffic. The key point is that we need to guarantee safety and consistency, so there are two differences between master and replica:
1. The master loads the replication info as secondary ID and offset, in case other masters have the same replid.
2. When the master is loading the RDB, it propagates expired keys as DEL commands to the replication backlog, so replicas can receive these commands to delete stale keys.
p.s. the keys expired during RDB loading are useful info for users, so we show them as `rdb_last_load_keys_expired` and `rdb_last_load_keys_loaded` in info persistence.
Moreover, after loading replication info, the master should update `no_replica_time` in case loading the RDB takes a long time.
* bitpos/bitcount add bit index (#9324) | Huang Zhw | 2021-09-12 | 2 files | -21/+183
Make bitpos/bitcount support a bit index:
```
BITPOS key bit [start [end [BIT|BYTE]]]
BITCOUNT key [start end [BIT|BYTE]]
```
The default behavior is `BYTE`, so these commands are still compatible with the old behavior.
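A short illustrative session (key and values are made up):
```
> SET mykey "foobar"
OK
> BITCOUNT mykey 0 0
(integer) 4
> BITCOUNT mykey 5 30 BIT
(integer) 17
```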
* Fix RedisModule_Call tests on 32bit (#9481) | Meir Shpilraien (Spielrein) | 2021-09-09 | 1 file | -1/+1
* Replace all usage of ziplist with listpack for t_zset (#9366) | sundb | 2021-09-09 | 9 files | -24/+51
Part two of implementing #8702 (zset), after #8887.
## Description of the feature
Replaced all uses of ziplist with listpack in t_zset, and optimized some of the code to improve performance.
## Rdb format changes
New `RDB_TYPE_ZSET_LISTPACK` rdb type.
## Rdb loading improvements
1) Pre-expansion of dict for validation of duplicate data for listpack and ziplist.
2) Simplifying the release of empty key objects when RDB loading.
3) Unify ziplist and listpack data verify methods for zset and hash, and move the code to rdb.c.
## Interface changes
1) New `zset-max-listpack-entries` config is an alias for `zset-max-ziplist-entries` (same with `zset-max-listpack-value`).
2) OBJECT ENCODING will return listpack instead of ziplist.
## Listpack improvements
1) Add `lpDeleteRange` and `lpDeleteRangeWithEntry` functions to delete a range of entries from listpack.
2) Improve the performance of `lpCompare`; converting from string to integer is faster than converting from integer to string.
3) Replace `snprintf` with `ll2string` to improve performance in converting numbers to strings in `lpGet()`.
## Zset improvements
1) Improve the performance of the `zzlFind` method; use `lpFind` instead of `lpCompare` in a loop.
2) Use `lpDeleteRangeWithEntry` instead of calling `lpDelete` twice to delete an element of a zset.
## Tests
1) Add some unit tests for the `lpDeleteRange` and `lpDeleteRangeWithEntry` functions.
2) Add zset RDB loading test.
3) Add benchmark test for `lpCompare` and `ziplistCompare`.
4) Add empty listpack zset corrupt dump test.
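A quick illustrative check of the encoding change (key name is made up):
```
> ZADD smallzset 1 a 2 b
(integer) 2
> OBJECT ENCODING smallzset
"listpack"
```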
* Remove redundant validation and prevent duplicate users during ACL load (#9330) | Madelyn Olson | 2021-09-09 | 1 file | -0/+50
Throw an error when a user is provided multiple times on the command line instead of silently throwing one of them away. Remove unneeded validation of users on ACL load.
* Add LMPOP/BLMPOP commands. (#9373) | Binbin | 2021-09-09 | 2 files | -59/+385
We want to add a COUNT option to BLPOP, but we can't do it without breaking compatibility due to the command arguments syntax. So this commit introduces two new commands.
Syntax for the new LMPOP command: `LMPOP numkeys [<key> ...] LEFT|RIGHT [COUNT count]`
Syntax for the new BLMPOP command: `BLMPOP timeout numkeys [<key> ...] LEFT|RIGHT [COUNT count]`
Some background:
- LPOP takes one key, and can return multiple elements.
- BLPOP takes multiple keys, but returns one element from just one key.
- LMPOP can take multiple keys and return multiple elements from just one key.
Note that although LMPOP/BLMPOP can take multiple keys, they eventually operate on just one key. And they will propagate as LPOP or RPOP with the COUNT option.
As a new command, it still returns NIL if we can't pop any elements. The normal response is nested arrays in RESP2 and RESP3, like:
```
LMPOP/BLMPOP
1) keyname
2) 1) element1
   2) element2
```
I.e. unlike BLPOP that returns a key name and one element so it uses a flat array, and LPOP that returns multiple elements with no key name and again uses a flat array, this one has to return a nested array, and it does so for both RESP2 and RESP3 (like SCAN does).
Some discussion can be seen in #766 and #8824.
* Delay to discard cached master when full synchronization (#9398) | Wang Yuan | 2021-09-09 | 1 file | -0/+68
* Delay discarding the cached master during full synchronization.
* Don't disconnect from replicas before loading the transferred RDB during full sync.
Previously, once a replica needed to start full synchronization with the master, it would discard the cached master whether full synchronization failed or not. Now we discard the cached master only when transferring the RDB is finished and we start to change the data space; this lets the replica start partial resynchronization with another new master if the new master fails during full synchronization.
* Fix callReplyParseCollection memleak when using AutoMemory (#9446) | chenyang8094 | 2021-09-09 | 1 file | -0/+40
When parsing an array-type reply, ctx will be lost when recursively parsing its elements, which will cause a memory leak in automemory mode. This is a result of the changes in #9202.
Added a test for the callReplyParseCollection fix.
* Fix wrong offset when replica pause (#9448) | zhaozhao.zz | 2021-09-08 | 1 file | -0/+51
When a replica is paused, it does not apply any commands, even if the command comes from the master. If we feed a non-applied command to the replication stream, the replication offset would be wrong, and data would be lost after failover (since the replica's `master_repl_offset` grows but the command is not applied).
To fix it, here are the changes:
* Don't update the replica's replication offset or propagate commands to sub-replicas when it's paused in `commandProcessed`.
* Show `slave_read_repl_offset` in the info reply.
* Add an assert to make sure a master client should never be blocked unless paused or in a module (some modules may use the block way to do background (parallel) processing and forward the original blocked module command to the replica; it's not a good way but it can work, so the assert excludes modules for now, but someday in the future all modules should rewrite the blocking command to propagate like `BLPOP` does).
* Optimize quicklistIndex to seek from the nearest end (#9454) | Viktor Söderqvist | 2021-09-06 | 1 file | -2/+2
Until now, giving a negative index seeks from the end of a list and a positive one seeks from the beginning. This change makes it seek from the nearest end, regardless of the sign of the given index.
quicklistIndex is used by all list commands which operate by index.
LINDEX key 999999 in a list of 1M elements is greatly optimized by this change. Latency is cut by 75%. LINDEX key -1000000 in a list of 1M elements, likewise.
LRANGE key -1 -1 is affected by this, since LRANGE converts the indices to positive numbers before seeking.
The tests for corrupt dumps are updated to make sure the corrupt data is seeked in the same direction as before.
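The idea in a minimal C sketch (names simplified; not the actual quicklist code):
```
#include <stdbool.h>

/* Decide seek direction: walk from the head if the (normalized) index
 * falls in the first half of the list, otherwise walk from the tail.
 * A negative index counts from the tail, as in the list commands. */
static bool seek_from_head(long index, long count) {
    long idx = index < 0 ? index + count : index; /* normalize negatives */
    return idx < count / 2;
}
```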
* Speed up sentinel tests (#9408) | Wen Hui | 2021-09-05 | 7 files | -6/+35
Use SENTINEL DEBUG to reduce default timeouts and allow tests to execute faster.
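An illustrative invocation (the parameter names here are an assumption; consult SENTINEL DEBUG's own help output for the real list):
```
# Shorten sentinel's internal timers so failover tests converge faster:
> SENTINEL DEBUG info-period 100 ping-period 100
OK
```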