path: root/src/util.c
Commit message | Author | Age | Files | Lines
* String pattern matching had exponential time complexity on pathological patterns (CVE-2022-36021) (#11858) (Oran Agra, 2023-02-28, 1 file, -4/+23)
  Authenticated users can use string matching commands with a specially crafted pattern to trigger a denial-of-service attack on Redis, causing it to hang and consume 100% CPU time.
  Co-authored-by: Tom Levy <tomlevy93@gmail.com>
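  For context, a minimal sketch of why a naive glob matcher goes exponential on such patterns. This illustrates the problem class only; it is not Redis's stringmatchlen() nor the commit's fix:
  ```c
  #include <stdio.h>
  #include <string.h>

  static long calls = 0;

  /* Naive glob matcher: on '*', recursively try every split point. */
  static int naive_match(const char *p, const char *s) {
      calls++;
      if (*p == '\0') return *s == '\0';
      if (*p == '*') {
          for (size_t i = 0; i <= strlen(s); i++)
              if (naive_match(p+1, s+i)) return 1;
          return 0;
      }
      if (*s != '\0' && (*p == '?' || *p == *s)) return naive_match(p+1, s+1);
      return 0;
  }

  int main(void) {
      /* Many '*'s plus a final char that never matches force the matcher to
       * re-explore the same suffixes exponentially many times. */
      naive_match("a*a*a*a*a*a*b", "aaaaaaaaaaaaaaaaaaaaaaaaa");
      printf("recursive calls: %ld\n", calls);
      return 0;
  }
  ```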
* Skip new page cache reclaim unit test when running in valgrind (#11808) (Oran Agra, 2023-02-16, 1 file, -1/+5)
  The new test is incompatible with valgrind. Added a new `--valgrind` argument to `redis-server tests` mode, which causes that test to be skipped.
* Reclaim page cache of RDB file (#11248) (Tian, 2023-02-12, 1 file, -1/+58)
  # Background
  The RDB file is usually generated and used once and seldom used again, but its content resides in the page cache until the OS evicts it. A potential problem is that once free memory is exhausted, the OS has to reclaim memory from the page cache or swap anonymous pages out, which may cause jitter in the Redis service. Consider a concrete scenario: a high-capacity machine hosts many Redis instances, and we upgrade them together. The page cache on the host grows as RDBs are generated. Once free memory drops to the low watermark (more likely on older Linux kernels like 3.10: before [watermark_scale_factor](https://lore.kernel.org/lkml/1455813719-2395-1-git-send-email-hannes@cmpxchg.org/) was introduced, the low watermark was linear in the min watermark, leaving little buffer for `kswapd` to wake up and reclaim memory), a direct reclaim happens, meaning the process stalls waiting for memory allocation.
  # What the PR does
  The PR introduces the ability to reclaim the page cache when the RDB is operated on. There are two cases: reading and writing the RDB. For reads it's messy to do the reclaim incrementally, so the reclaim is done in one go in the background after the load finishes, to avoid blocking the worker thread. For writes, incremental reclaim amortizes the work, so there is no need to push it to the background, and the peak cache watermark is reduced this way. Two cases are addressed specially, replication and restart; for both, the cache is leveraged to speed up processing, so the reclaim is postponed to an appropriate time. To do this, a flag is added to `rdbSave` and `rdbLoad` to control whether the cache should be kept, with a default of false.
  # Something worth noting
  1. Though `posix_fadvise` is in the POSIX standard, only a few platforms support it, e.g. Linux and FreeBSD 10.0.
  2. On Linux `posix_fadvise` only takes effect on written-back pages, so a `sync` (or `fsync`, `fdatasync`) is needed to flush dirty pages before `posix_fadvise` if we reclaim the write cache.
  # About tests
  A unit test is added to verify the effect of `posix_fadvise`. The integration test checks the overall cache increase, as well as the cache backed by the RDB; that specific TCL test is executed in an isolated GitHub Actions job.
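  A minimal sketch of the reclaim idea described above (an assumed shape, not the PR's actual helper): flush dirty pages, then ask the kernel to drop the file's page cache.
  ```c
  #include <fcntl.h>
  #include <unistd.h>

  int reclaim_file_cache(int fd) {
  #if defined(__linux__)
      /* On Linux, POSIX_FADV_DONTNEED only drops clean pages, so write-back
       * must be forced first when we produced the file ourselves. */
      if (fdatasync(fd) == -1) return -1;
      return posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
  #else
      (void)fd;
      return 0; /* posix_fadvise is not available everywhere; no-op elsewhere. */
  #endif
  }
  ```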
* Optimize ZRANGE replies WITHSCORES in case of integer scores (#11779) (filipe oliveira, 2023-02-06, 1 file, -0/+5)
  If the sorted set has integer scores, we were not using the fastest way to reply: calling `d2string`, which uses `double2ll` and `ll2string` when it can, instead of `fpconv_dtoa`. This change yields around a 50% performance improvement in certain integer-score cases, for both RESP2 and RESP3, with no apparent impact on double scores.
  Co-authored-by: Oran Agra <oran@redislabs.com>
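  A hedged sketch of the integer fast path described above. `format_score` is an illustrative name, not the Redis function, and the range check mirrors the `double2ll` bounds discussed further down this log:
  ```c
  #include <limits.h>
  #include <stdio.h>

  /* Format a double score, taking a cheap integer path when the value has no
   * fractional part and fits comfortably in a long long. */
  int format_score(char *buf, size_t len, double value) {
      if (value > (double)(-LLONG_MAX/2) && value < (double)(LLONG_MAX/2) &&
          value == (double)(long long)value) {
          /* Integer fast path: long long to string is much cheaper than a
           * general double-to-string algorithm. */
          return snprintf(buf, len, "%lld", (long long)value);
      }
      /* General path: full double-to-string conversion (Redis uses grisu2 via
       * fpconv_dtoa here; %.17g is the portable stand-in). */
      return snprintf(buf, len, "%.17g", value);
  }
  ```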
* Fixed small distance replies on GEODIST and GEO commands WITHDIST (#11631) (filipe oliveira, 2022-12-15, 1 file, -7/+17)
  Fixes a regression introduced by #11552 in 7.0.6 that caused replies in the GEO commands to contain garbage when the result is a very small distance (less than 1). Includes a test confirming that, even with junk in the buffer, we now reply properly.
* Normalize NAN to a single nan type, like we do with inf (#11597) (Binbin, 2022-12-08, 1 file, -0/+19)
  From https://en.wikipedia.org/wiki/NaN#Display, apart from nan and -nan, we can also get NAN and even nan(char-sequence) from libc. In #11482, our conclusion was that we want to normalize these in Redis to a single nan type, like we already normalized inf. For this, we also reverted the assert_match part of the test added in #11506, using assert_equal to validate the changes.
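  A hedged sketch of normalizing the libc textual forms of special doubles (illustrative helper, not the exact Redis routine):
  ```c
  #include <math.h>

  /* Returns a normalized constant for special values, or NULL for finite
   * values that need regular formatting. */
  const char *special_double_repr(double d) {
      if (isnan(d)) return "nan";            /* NAN, -nan, nan(0x...) all map here */
      if (isinf(d)) return d > 0 ? "inf" : "-inf";
      return NULL;
  }
  ```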
* Speedup GEODIST with fixedpoint_d2string as an optimized version of snprintf %.4f (#11552) (filipe oliveira, 2022-12-04, 1 file, -0/+163)
  GEODIST used snprintf("%.4f") for the reply via addReplyDoubleDistance, which was slow. This PR optimizes it without breaking compatibility by following the approach of ll2string, with some changes to match the distance/precision use case: we multiply by 10000, format the result as an integer, and then insert a decimal point. This achieves about a 35% increase in the achievable ops/sec.
  Co-authored-by: Oran Agra <oran@redislabs.com>
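  A hedged sketch of the fixed-point idea (multiply, print as an integer, insert the decimal point); illustrative only, not the actual fixedpoint_d2string:
  ```c
  #include <stdio.h>

  /* Format 'd' with exactly 4 fractional digits, like snprintf "%.4f" for the
   * value ranges GEODIST cares about. Distances are non-negative, which keeps
   * the sketch simple. */
  int fixedpoint4(char *buf, size_t len, double d) {
      long long scaled = (long long)(d * 10000.0 + 0.5); /* round to 4 digits */
      /* %04lld keeps leading zeros so small distances like 0.0042 don't lose
       * digits (the regression fixed in #11631 was in this area). */
      return snprintf(buf, len, "%lld.%04lld", scaled / 10000, scaled % 10000);
  }
  ```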
* Optimizing d2string() and addReplyDouble() with grisu2: double to string conversion based on Florian Loitsch's Grisu algorithm (#10587) (filipe oliveira, 2022-10-15, 1 file, -2/+5)
  All commands / use cases that heavily rely on double-to-string conversion (i.e. take a double-precision floating-point number like 1.5 and return a string like "1.5") can get a performance boost by swapping snprintf(buf,len,"%.17g",value) for the equivalent [fpconv_dtoa](https://github.com/night-shift/fpconv) or any other algorithm that ensures 100% conversion coverage. This is a well-studied topic, and projects like MongoDB, Redpanda, and PyTorch leverage libraries (fmtlib) that use an optimized double-to-string conversion underneath. The positive impact can be substantial. This PR uses the grisu2 approach (grisu is explained in https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf, section 5).
  Test suite changes: despite being compatible, in some cases it produces a different result from printf, and some tests had to be adjusted. One case is that `%.17g` (which means %e or %f, whichever is shorter) chose to use `5000000000` instead of 5e+9, which sounds like a bug? In other cases, we changed TCL to compare numbers instead of strings, to ignore minor rounding issues (`expr 0.8 == 0.79999999999999999`).
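  A hedged usage sketch of the swap, assuming the fpconv API as documented in the linked repo (it writes up to 24 chars and returns the length, without NUL-terminating; the header name is an assumption):
  ```c
  #include <stdio.h>
  #include "fpconv.h"   /* assumed header from the fpconv project */

  size_t double_to_string(char *buf, size_t buflen, double value) {
      if (buflen < 25) return 0;        /* 24 chars + room for '\0' */
      int len = fpconv_dtoa(value, buf);
      buf[len] = '\0';                  /* fpconv does not NUL-terminate */
      return (size_t)len;
  }
  ```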
* Avoid using unsafe C functions (#10932) (ranshid, 2022-07-18, 1 file, -32/+49)
  Replace use of:
  sprintf --> snprintf
  strcpy/strncpy --> redis_strlcpy
  strcat/strncat --> redis_strlcat
  **Why are we making this change?** Much of the code uses unsafe variants or deprecated buffer-handling functions. While most cases probably present no issue on the known paths, programming errors and unterminated strings might lead to potential buffer overflows which are not covered by tests.
  **As part of this PR we change:**
  1. Added implementations of redis_strlcpy and redis_strlcat based on the strl implementation: https://linux.die.net/man/3/strl
  2. Changed all occurrences of sprintf to snprintf.
  3. Changed occurrences of strcpy/strncpy to redis_strlcpy.
  4. Changed occurrences of strcat/strncat to redis_strlcat.
  5. Changed the behavior of ll2string/ull2string/ld2string so that they always place a null terminator ('\0') at the first index of the output buffer. This makes these functions safer to use in cases where the caller does not check the value they return (for example in rdbRemoveTempFile).
  6. Added a compiler directive to issue a deprecation error if a use of sprintf/strcpy/strcat is found during compilation. Keep in mind that since the deprecation attribute is not supported on all compilers, this is expected to fail on some push workflows.
  **NOTE:** This is only an initial milestone. We might also consider using the *_s implementations provided by the C11 extensions (however not yet widely supported). I would also suggest looking at static code analyzers to track unsafe use cases; for example the LLVM clang checker supports security.insecureAPI.DeprecatedOrUnsafeBufferHandling, which can help locate unsafe function usage: https://clang.llvm.org/docs/analyzer/checkers.html#security-insecureapi-deprecatedorunsafebufferhandling-c. The main reason not to onboard it at this stage is that the alternative accepted by clang is to use the C11 extensions, which are not always supported by stdlib.
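  A hedged sketch of an strlcpy-style helper with the OpenBSD semantics the linked man page describes (always NUL-terminates when dstsize > 0, returns the length it tried to create); illustrative, not necessarily the exact redis_strlcpy:
  ```c
  #include <stddef.h>

  size_t my_strlcpy(char *dst, const char *src, size_t dstsize) {
      size_t srclen = 0;
      while (src[srclen] != '\0') srclen++;       /* strlen(src) */
      if (dstsize > 0) {
          size_t n = srclen < dstsize - 1 ? srclen : dstsize - 1;
          for (size_t i = 0; i < n; i++) dst[i] = src[i];
          dst[n] = '\0';                          /* truncation still terminates */
      }
      return srclen; /* callers can detect truncation: return >= dstsize */
  }
  ```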
* Fsync directory while persisting AOF manifest, RDB file, and config file (#10737) (Tian, 2022-06-20, 1 file, -0/+49)
  The current process to persist files is to `write` the data, `fsync`, and `rename` the file, but an underlying problem is that the rename may be lost on a sudden crash (like a power outage) when the directory hasn't been persisted. The article [Ensuring data reaches disk](https://lwn.net/Articles/457667/) mentions that a safe way to update a file is:
  1. create a new temp file (on the same file system!)
  2. write data to the temp file
  3. fsync() the temp file
  4. rename the temp file to the appropriate name
  5. fsync() the containing directory
  This commit handles CONFIG REWRITE, the AOF manifest, and the RDB file (both for persistence, and the one the replica gets from the master). It doesn't handle ACL SAVE and cluster configs (yet), since these don't follow this pattern.
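  A hedged sketch of the five-step recipe from the cited LWN article (an illustrative helper, not the commit's actual code):
  ```c
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int safe_write_file(const char *dir, const char *tmpname,
                      const char *finalname, const void *data, size_t len) {
      char tmppath[1024], finalpath[1024];
      snprintf(tmppath, sizeof(tmppath), "%s/%s", dir, tmpname);
      snprintf(finalpath, sizeof(finalpath), "%s/%s", dir, finalname);

      /* 1-2: create a temp file on the same filesystem and write the data. */
      int fd = open(tmppath, O_WRONLY|O_CREAT|O_TRUNC, 0644);
      if (fd == -1) return -1;
      if (write(fd, data, len) != (ssize_t)len) { close(fd); return -1; }
      /* 3: fsync the temp file so the data itself is durable. */
      if (fsync(fd) == -1) { close(fd); return -1; }
      close(fd);

      /* 4: atomically rename into place. */
      if (rename(tmppath, finalpath) == -1) return -1;

      /* 5: fsync the containing directory so the rename survives a crash. */
      int dfd = open(dir, O_RDONLY);
      if (dfd == -1) return -1;
      int ret = fsync(dfd);
      close(dfd);
      return ret;
  }
  ```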
* Replace float zero comparison with FP_ZERO comparison (#10675) (Mariya Markova, 2022-05-10, 1 file, -2/+2)
  I suggest using [fpclassify](https://en.cppreference.com/w/cpp/numeric/math/fpclassify) for float comparison with zero, because the expression "value == 0" can evaluate to true for values very close to zero under some performance-oriented compiler optimizations. Note: this code was introduced by 9d520a7f to accept zset scores that get ERANGE in conversion due to precision loss near 0. But with the Intel compilers ICC and ICX, where optimizations of the zero check are more aggressive, "== 0" is true for the mentioned values when it should not be. The behavior is seen starting from O2, and it leads to a failure in the ZSCAN test in scan.tcl.
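  A minimal sketch of the replacement check: classify instead of comparing, so aggressive FP optimizations can't fold a tiny value into "== 0":
  ```c
  #include <math.h>

  int is_exact_zero(double value) {
      return fpclassify(value) == FP_ZERO; /* matches +0.0 and -0.0 only */
  }
  ```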
* Fix long long to double implicit conversion warning (#10595) (Binbin, 2022-04-18, 1 file, -1/+1)
  There is an implicit conversion warning in clang:
  ```
  util.c:574:23: error: implicit conversion from 'long long' to 'double' changes value from -4611686018427387903 to -4611686018427387904 [-Werror,-Wimplicit-const-int-float-conversion]
  if (d < -LLONG_MAX/2 || d > LLONG_MAX/2)
  ```
  introduced in #10486.
  Co-authored-by: sundb <sundbcn@gmail.com>
* Optimize integer zset scores in listpack (converting to string and back) (#10486) (Oran Agra, 2022-04-17, 1 file, -15/+34)
  When the score doesn't have a fractional part and can be stored as an integer, we use the integer capabilities of listpack to store it, rather than converting it to a string. This already existed before this PR (lpInsert does that conversion implicitly), but to do that we would have first converted the score from double to string (calling `d2string`), then passed the string to `lpAppend`, which identified it as being an integer and converted it back to an int. Now, instead of converting it to a string, we store it using `lpAppendInteger`.
  Unrelated:
  * Fix the double2ll range check (the negative and positive ranges, and also the comparison operands, were slightly off; also, the range could be made much larger, see comment).
  * Unify the double-to-string conversion code in rdb.c with the one in util.c.
  * Small optimization in lpStringToInt64: don't attempt to convert strings that are obviously too long.
  Benchmark:
  Up to 20% improvement in certain tight loops doing zzlInsert with large integers (if the listpack is pre-allocated to avoid realloc, and insertion is sorted from largest to smallest).
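  A hedged sketch of the integer-or-string decision (the range check mirrors the double2ll fix from #10595 above; lpAppendInteger/lpAppend/d2string are the real API names, the helper is illustrative):
  ```c
  #include <limits.h>

  /* Return 1 and store the integer value if 'd' has no fractional part and
   * fits safely in a long long; return 0 otherwise. */
  static int double_to_ll(double d, long long *out) {
      if (d < (double)(-LLONG_MAX/2) || d > (double)(LLONG_MAX/2)) return 0;
      long long ll = (long long)d;
      if ((double)ll != d) return 0;  /* fractional part present */
      *out = ll;
      return 1;
  }

  /* Usage inside the insert path (sketch):
   *   long long ll;
   *   if (double_to_ll(score, &ll)) lp = lpAppendInteger(lp, ll);
   *   else { len = d2string(buf, sizeof(buf), score);
   *          lp = lpAppend(lp, (unsigned char*)buf, len); }
   */
  ```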
* Improve string2ll() to avoid extra conversion for long integer strings. (#10408) (DarrenJiang13, 2022-03-14, 1 file, -2/+2)
  For an integer string like "123456789012345678901", which would cause an overflow failure in the string2ll() conversion, we can compare its length at the beginning to avoid the extra work.
  * Move LONG_STR_SIZE to be declared in util.h, next to MAX_LONG_DOUBLE_CHARS.
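  A hedged sketch of the early length rejection: a long long in base 10 needs at most 19 digits plus a sign, so longer strings can't possibly fit (LONG_STR_SIZE's value here is the one util.h uses, but treat the constant as illustrative):
  ```c
  #include <stddef.h>

  #define LONG_STR_SIZE 21 /* "-9223372036854775808" + '\0' */

  /* Anything longer than LONG_STR_SIZE-1 characters overflows for sure,
   * so the digit-by-digit conversion can be skipped entirely. */
  int could_be_long_long(const char *s, size_t slen) {
      (void)s;
      return slen != 0 && slen < LONG_STR_SIZE;
  }
  ```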
* Fix additional AOF filename issues. (#10110) (Yossi Gottlieb, 2022-01-18, 1 file, -10/+0)
  This extends the previous fix (#10049) to address any form of non-printable or whitespace character (including newlines, quotes, non-printables, etc.). Also removes the limitation on appenddirname, to align with the way filenames are handled elsewhere in Redis.
* Support whitespace characters in appendfilename, and ban them in appenddirname (#10049) (chenyang8094, 2022-01-10, 1 file, -0/+10)
  1. Ban whitespace characters in `appenddirname`.
  2. Handle the case where `appendfilename` contains spaces (for backwards compatibility).
* Changed latency histogram output to omit trailing 0s and periods (#10075) (Madelyn Olson, 2022-01-09, 1 file, -0/+14)
  Changed latency percentile output to omit trailing 0s and periods.
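  A hedged sketch of the trimming idea, turning a fixed-precision string like "5.000" into "5" and "5.250" into "5.25" (illustrative, not the commit's code; assumes the buffer was NUL-terminated, so there is room to re-terminate):
  ```c
  #include <string.h>

  size_t trim_double(char *buf, size_t len) {
      if (memchr(buf, '.', len) == NULL) return len; /* nothing to trim */
      while (len > 0 && buf[len-1] == '0') len--;    /* drop trailing zeros */
      if (len > 0 && buf[len-1] == '.') len--;       /* drop a bare period too */
      buf[len] = '\0';
      return len;
  }
  ```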
* Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788) (chenyang8094, 2022-01-03, 1 file, -0/+80)
  Implement the Multi-Part AOF mechanism to avoid overheads during AOFRW, introducing a folder with multiple AOF files tracked by a manifest file.
  The main issues with the original AOFRW mechanism are:
  * buffering of commands that are processed during the rewrite (consuming a lot of RAM)
  * freezes of the main process when the AOFRW completes, to drain the remaining part of the buffer and fsync it
  * double disk IO for the data that arrives during AOFRW (it had to be written to both the old and new AOF files)
  The main modifications of this PR:
  1. Remove the AOF rewrite buffer and related code.
  2. Divide the AOF into multiple files, classified as two types. The `BASE` type represents the full amount of data (maybe AOF or RDB format) after each AOFRW; there is at most one `BASE` file. The `INCR` type may have more than one file; these represent the incremental commands since the last AOFRW.
  3. Use an AOF manifest file to record and manage the AOF files mentioned above (a sketch of its contents follows this entry).
  4. The original `appendfilename` configuration becomes the base part of the new file names, for example: `appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`.
  5. Add manifest-related TCL tests, and modify some existing tests that depend on `appendfilename`.
  6. Remove the `aof_rewrite_buffer_length` field in INFO.
  7. Add the `aof-disable-auto-gc` configuration. By default we automatically delete HISTORY type AOFs; this gives users the opportunity to preserve the history AOFs, just for testing use now.
  8. Add an AOFRW limiting measure. When the number of AOFRW failures reaches the threshold (3 times for now), we delay the execution of the next AOFRW by 1 minute. If the next AOFRW also fails, it is delayed by 2 minutes, then 4, 8, 16, up to a maximum delay of 60 minutes (1 hour). During the limit period, AOFRW can still be executed immediately with the 'bgrewriteaof' command.
  9. Support upgrading (loading) data from old-version Redis.
  10. Add the `appenddirname` configuration as the directory name for the append-only files. All AOF files and the manifest file are placed in this directory.
  11. Only the last AOF file (BASE or INCR) can be truncated; otherwise Redis exits even if `aof-load-truncated` is enabled.
  Co-authored-by: Oran Agra <oran@redislabs.com>
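  For illustration, a manifest under this scheme looks roughly like this (file names and sequence numbers are example values; type b = BASE, i = INCR):
  ```
  file appendonly.aof.1.base.rdb seq 1 type b
  file appendonly.aof.1.incr.aof seq 1 type i
  ```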
* Replace ziplist with listpack in quicklist (#9740) (sundb, 2021-11-24, 1 file, -1/+1)
  Part three of implementing #8702, following #8887 and #9366.
  ## Description of the feature
  1. Replace the ziplist container of quicklist with listpack.
  2. Convert existing quicklist ziplists at RDB loading time, an O(n) operation.
  ## Interface changes
  1. The new `list-max-listpack-size` config is an alias for `list-max-ziplist-size`.
  2. Replace the `debug ziplist` command with `debug listpack`.
  ## Internal changes
  1. Add `lpMerge` to merge two listpacks (same as `ziplistMerge`).
  2. Add `lpRepr` to print info about a listpack, used in debugCommand and `quicklistRepr` (same as `ziplistRepr`).
  3. Replace `QUICKLIST_NODE_CONTAINER_ZIPLIST` with `QUICKLIST_NODE_CONTAINER_PACKED` (following #9357). It represents that a quicklistNode is a packed node, as opposed to a plain node.
  4. Remove the `createZiplistObject` method, which is never used.
  5. Calculate listpack entry size using overhead overestimation in `quicklistAllowInsert`. We prefer an overestimation, which would at worst lead to a few bytes below the lowest limit of 4k.
  ## Improvements
  1. Call `lpShrinkToFit` after converting a ziplist to a listpack, which was missed in #9366.
  2. Optimize `quicklistAppendPlainNode` to avoid a memcpy of the data.
  ## Bugfix
  1. Fix a crash in `quicklistRepr` when the ziplist is compressed, introduced in #9366.
  ## Test
  1. Add a unit test for `lpMerge`.
  2. Modify the old quicklist ziplist corrupt dump test.
  Co-authored-by: Oran Agra <oran@redislabs.com>
* Add --large-memory flag for REDIS_TEST to enable tests that consume more than 100mb (#9784) (sundb, 2021-11-16, 1 file, -2/+2)
  This is a preparation step in order to add a new test in quicklist.c; see #9776.
* Client eviction (#8687) (yoav-steinberg, 2021-09-23, 1 file, -1/+4)
  ### Description
  A mechanism for disconnecting clients when the sum of all connected clients is above a configured limit. This prevents eviction or OOM caused by memory accumulated across all clients. It's a complementary mechanism to the `client-output-buffer-limit` mechanism, which takes into account not only a single client and not only output buffers, but rather all memory used by all clients.
  #### Design
  The general design is as follows:
  * We track the memory usage of each client, taking into account all memory used by the client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date after reading from the socket, after processing commands, and after writing to the socket.
  * Based on the used memory we sort all clients into buckets. Each bucket contains all clients using up to 2x the memory of the clients in the bucket below it. For example: up to 1m clients, up to 2m clients, up to 4m clients, ... (a sketch of the bucket math follows this entry).
  * Before processing a command and before sleep we check if we're over the configured limit. If we are, we start disconnecting clients from larger buckets downwards until we're under the limit.
  #### Config
  `maxmemory-clients` is the maximum memory all clients are allowed to consume; above this threshold we disconnect clients. This config can be set to 0 (meaning no limit), a size in bytes (possibly with an MB/GB suffix), or a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%` would mean 10% of `maxmemory`).
  #### Important code changes
  * During development I encountered yet more situations where our io-threads access global vars, and needed to fix them. I also had to keep the clients sorted into the (global) memory buckets while their memory usage changes in the io-thread. To achieve this I decided to simplify how we check if we're in an io-thread and make it much more explicit: I removed the `CLIENT_PENDING_READ` flag used for checking if the client is in an io-thread (it wasn't used for anything else) and just used the global `io_threads_op` variable the same way to check during writes.
  * I optimized the cleanup of the client from the `clients_pending_read` list on client freeing. We now store a pointer to this list in the `client` struct so we don't need to search in it (`pending_read_list_node`).
  * Added the `evicted_clients` stat to the `INFO` command.
  * Added the `CLIENT NO-EVICT ON|OFF` subcommand to exclude a specific client from the client eviction mechanism, with a corresponding 'e' flag in the client info string.
  * Added a `multi-mem` field in the client info string to show how much memory is used up by buffered multi commands.
  * Client `tot-mem` now accounts for buffered multi-commands, pubsub patterns and channels (partially), and tracking prefixes (partially).
  * The CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function, so clients are disconnected between processing different clients and not only before sleep. This new function can be used in the future for work we want to do outside the command processing loop but don't want to wait for all clients to be processed before we get to it. Specifically I wanted to handle output-buffer-limit related closing before we process client eviction, in case the two race with each other.
  * Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction buckets.
  * Each client now holds a pointer to the client eviction memory usage bucket it belongs to, and a listNode to itself in that bucket, for quick removal.
  * The global `io_threads_op` variable can now contain an `IO_THREADS_OP_IDLE` value, indicating no io-threading is currently being executed.
  * In order to track the memory used by each client in real time we can't rely on updating these stats in `clientsCron()` alone anymore. So now I call `updateClientMemUsage()` (which used to be `clientsCronTrackClientsMemUsage()`) after command processing, after writing data to pubsub clients, after writing the output buffer, and after reading from the socket (and maybe other places too). The function is written to be fast.
  * Clients are evicted if needed (with an appropriate log line) in `beforeSleep()` and before processing a command (before performing oom-checks and key-eviction).
  * All client memory usage buckets are grouped as follows: all clients using less than 64K; 64K..128K; 128K..256K; ...; 2G..4G; and all clients using 4G and up.
  * Added client-eviction.tcl with a bunch of tests for the new mechanism.
  * Extended maxmemory.tcl to test the interaction between the maxmemory and maxmemory-clients settings.
  * Added an option to flag a numeric configuration variable as a "percent": if we encounter a '%' after the number in the config file (or CONFIG SET command) we consider it valid. Such a number is stored internally as a negative value, so an integer value can be interpreted as either a percent (negative) or an absolute value (positive). This is useful, for example, if some numeric configuration can optionally be set to a percentage of something else.
  Co-authored-by: Oran Agra <oran@redislabs.com>
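  A hedged sketch of the doubling memory buckets described above: bucket 0 holds clients under 64K, each next bucket doubles the bound, and the last bucket holds everything from 4G up (bucket count and constants are illustrative, not the actual Redis values):
  ```c
  #include <stddef.h>

  #define MIN_BUCKET_BYTES (64*1024)  /* < 64K -> bucket 0 */
  #define NUM_BUCKETS 18              /* <64K, 64K..128K, ..., 2G..4G, >=4G */

  int mem_usage_bucket(size_t mem) {
      int bucket = 0;
      size_t bound = MIN_BUCKET_BYTES;
      while (bucket < NUM_BUCKETS-1 && mem >= bound) {
          bound *= 2;   /* each bucket covers up to 2x the one below it */
          bucket++;
      }
      return bucket;
  }
  ```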
* config memory limits: handle values larger than (signed) LLONG_MAX (#9313) (Wen Hui, 2021-08-23, 1 file, -28/+38)
  This aims to solve the issue that CONFIG SET maxmemory could only set maxmemory up to 9223372036854775807 (2^63-1), while maxmemory should be a ULLONG. Added a memtoull function to convert a string representing an amount of memory into the number of bytes (similar to memtoll but for unsigned long long). Also added ull2string to convert an unsigned long long to a string (similar to ll2string).
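  A hedged sketch of a memtoull-style parser: a number plus an optional unit suffix, returning bytes. The suffix set mirrors memtoll's k/kb/m/mb/g/gb convention; multiply-overflow checking is omitted for brevity, which real code would need:
  ```c
  #include <errno.h>
  #include <stdlib.h>
  #include <strings.h>

  unsigned long long my_memtoull(const char *p, int *err) {
      char *end;
      *err = 0;
      errno = 0;
      unsigned long long val = strtoull(p, &end, 10);
      if (errno == ERANGE || end == p) { *err = 1; return 0; }
      if (*end == '\0') return val;                     /* plain bytes */
      else if (!strcasecmp(end, "k"))  return val * 1000ULL;
      else if (!strcasecmp(end, "kb")) return val * 1024ULL;
      else if (!strcasecmp(end, "m"))  return val * 1000ULL * 1000ULL;
      else if (!strcasecmp(end, "mb")) return val * 1024ULL * 1024ULL;
      else if (!strcasecmp(end, "g"))  return val * 1000ULL * 1000ULL * 1000ULL;
      else if (!strcasecmp(end, "gb")) return val * 1024ULL * 1024ULL * 1024ULL;
      *err = 1;                                         /* unknown suffix */
      return 0;
  }
  ```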
* Add run all test support with define REDIS_TEST (#8570) (sundb, 2021-03-10, 1 file, -1/+2)
  1. Add `redis-server test all` support to run all tests.
  2. Add the redis test to daily CI.
  3. Add an `--accurate` option to run slow tests for more iterations (so that by default we run fewer cycles: shorter time, and fewer prints).
  4. Move the dict benchmark to REDIS_TEST.
  5. Fix some leaks in tests.
  6. Make quicklist tests run on a specific set of fill options rather than huge ranges.
  7. Move some prints in the quicklist test outside their loops to reduce prints.
  8. Remove sds.h from dict.c since it is now used in both redis-server and redis-cli (which uses hiredis sds).
* Escape unsafe field name characters in INFO. (#8492) (Yossi Gottlieb, 2021-02-15, 1 file, -0/+27)
  Fixes #8489.
* Update getTimeZone to long (#8346) (Raghav Muddur, 2021-01-18, 1 file, -2/+2)
* Several (mostly Solaris-related) cleanups (#8171) (Yossi Gottlieb, 2020-12-13, 1 file, -2/+1)
  * Allow runtest-moduleapi to use a different 'make', for systems where GNU Make is 'gmake'.
  * Fix issue with builds on Solaris re-building everything from scratch due to CFLAGS/LDFLAGS not being stored.
  * Fix compile failure on Solaris due to atomicvar and a bunch of warnings.
  * Fix garbled log timestamps on Solaris.
* stringmatchlen() should not expect null terminated strings. (antirez, 2020-05-06, 1 file, -2/+2)
* Remove unreachable branch. (Brad Dunbar, 2020-05-05, 1 file, -2/+0)
* getRandomBytes(): use HMAC-SHA256. (antirez, 2020-04-23, 1 file, -10/+30)
  Now that we have an interface to use this API directly, via ACL GENPASS, we are no longer sure what people may do with it. So why not make it a strong primitive exported by Redis, in order to create unique IDs and so forth? The implementation was tested against the test vectors that can be found in RFC 4231.
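  A hedged sketch of a counter-mode construction of this kind: a secret seed is HMAC'ed with an incrementing counter to produce a random byte stream. `hmac_sha256` here is an assumed helper (not a real Redis API name), and the seed length is illustrative:
  ```c
  #include <stdint.h>
  #include <string.h>

  #define SEED_LEN 64

  /* Assumed helper: out receives the 32-byte HMAC-SHA256 of msg under key. */
  void hmac_sha256(unsigned char *out, const unsigned char *key, size_t keylen,
                   const unsigned char *msg, size_t msglen);

  void get_random_bytes(unsigned char *out, size_t len,
                        const unsigned char seed[SEED_LEN],
                        uint64_t *counter) {
      unsigned char block[32]; /* HMAC-SHA256 output size */
      while (len > 0) {
          /* block = HMAC-SHA256(key=seed, message=counter), then bump the
           * counter so the next block differs. */
          hmac_sha256(block, seed, SEED_LEN,
                      (const unsigned char*)counter, sizeof(*counter));
          (*counter)++;
          size_t n = len < sizeof(block) ? len : sizeof(block);
          memcpy(out, block, n);
          out += n; len -= n;
      }
  }
  ```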
* Merge pull request #6546 from guybe7/fix_neg_zero (Salvatore Sanfilippo, 2020-04-02, 1 file, -0/+4)
  Make sure Redis does not reply with negative zero
* Make sure Redis does not reply with negative zero (Guy Benoish, 2019-11-05, 1 file, -0/+4)
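  A hedged sketch of folding -0.0 into +0.0 before formatting, so replies never contain "-0" (illustrative, not the exact commit):
  ```c
  #include <stdio.h>

  void reply_double(char *buf, size_t len, double value) {
      /* IEEE-754: -0.0 == 0.0 compares true, so this catches both and
       * rewrites the sign bit away. */
      if (value == 0.0) value = 0.0;
      snprintf(buf, len, "%.17g", value);
  }
  ```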
* ld2string should fail if string contains \0 in the middle (Guy Benoish, 2020-01-30, 1 file, -1/+2)
  This bug affected RM_StringToLongDouble and HINCRBYFLOAT. I added tests for both cases. Main changes:
  1. Fixed string2ld to fail if the string contains \0 in the middle.
  2. Use string2ld in getLongDoubleFromObject; no point in having duplicated code here.
  The two changes above broke RM_SaveLongDouble/RM_LoadLongDouble because the long double string was saved with length+1 (an innocent mistake, but it's actually a bug: the length passed to RM_SaveLongDouble should not include the last \0).
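  A hedged sketch of the embedded-NUL rejection: strtold() stops at a '\0', so the only way to know the whole buffer was consumed is to compare the end pointer against the full length (illustrative helper, not the actual string2ld):
  ```c
  #include <stdlib.h>
  #include <string.h>

  int my_string2ld(const char *s, size_t slen, long double *out) {
      char buf[5120];                   /* a long double can need ~5k chars */
      if (slen >= sizeof(buf)) return 0;
      memcpy(buf, s, slen);
      buf[slen] = '\0';
      char *end;
      long double v = strtold(buf, &end);
      if (end != buf + slen) return 0;  /* trailing junk or embedded '\0' */
      *out = v;
      return 1;
  }
  ```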
* Merge branch 'unstable' into rm_get_server_info (Salvatore Sanfilippo, 2019-11-21, 1 file, -24/+36)
* Module API for loading and saving long double (Oran Agra, 2019-11-03, 1 file, -24/+36)
  It looks like each platform implements long double differently (different bit counts), so we can't save them as binary, and we also want to avoid creating a new RDB format version, so we save these as hex strings using "%La". This commit includes a change in the arguments of ld2string to support this, as well as tests for coverage and short reads. Coded by @guybe7.
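  A hedged sketch of the "%La" hex round trip: the hex form is exact (no decimal rounding) and portable as text even when the binary layout of long double differs between platforms:
  ```c
  #include <stdio.h>

  int roundtrip_long_double(long double in, long double *out) {
      char buf[64];
      snprintf(buf, sizeof(buf), "%La", in); /* e.g. "0x1.91eb851eb851eb8p+1" */
      return sscanf(buf, "%La", out) == 1;   /* parses the hex form back */
  }
  ```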
* Add RM_ServerInfoGetFieldUnsigned (Oran Agra, 2019-11-04, 1 file, -0/+20)
  Rename RM_ServerInfoGetFieldNumerical to RM_ServerInfoGetFieldSigned, move string2ull to util.c, and fix a leak in RM_GetServerInfo when duplicate info fields exist.
* Add module api for looking into INFO fields (Oran Agra, 2019-11-03, 1 file, -0/+21)
  - Add RM_GetServerInfo and friends.
  - Add auto memory for the new opaque struct.
  - Add tests for the new APIs.
  Other minor fixes:
  - Add const to various char pointers.
  - requested_section in modulesCollectInfo was actually not sds but char*.
  - Extract the new string2d out of getDoubleFromObject for code reuse.
* Increase string2ld's buffer size (and fix HINCRBYFLOAT) (Guy Benoish, 2019-01-28, 1 file, -1/+1)
  The string representation of a `long double` may take up to ~5000 chars (see PR #3745). Before this fix HINCRBYFLOAT would never overflow (since the string could not exceed 256 chars). Now it can.
* stringmatchlen() fuzz test added. (antirez, 2018-12-11, 1 file, -0/+16)
  Verified to be able to trigger at least #5632. Does not report other issues.
* Fix stringmatchlen() read past buffer bug. (antirez, 2018-12-11, 1 file, -1/+1)
  See #5632.
* Fix comment typo in util.c (Weiliang Li, 2018-11-15, 1 file, -1/+1)
* needs it for the global (David Carlier, 2018-10-26, 1 file, -0/+1)
* Fix non-Linux build. (David Carlier, 2018-10-26, 1 file, -0/+18)
  The timezone global is a Linux-ism, whereas it is a function under BSD. Here is a helper to get the timezone value in a more portable manner.
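  A hedged sketch of such a portable helper: use the glibc/Solaris global where it exists, and derive the offset from `tm_gmtoff` on BSDs (illustrative, not the exact commit):
  ```c
  #include <time.h>

  /* Seconds west of UTC. */
  long get_timezone_offset(void) {
      tzset();
  #if defined(__linux__) || defined(__sun)
      return timezone;          /* glibc/Solaris expose a global */
  #else
      time_t t = time(NULL);
      struct tm lt;
      localtime_r(&t, &lt);
      return -lt.tm_gmtoff;     /* BSD: tm_gmtoff is seconds east of UTC */
  #endif
  }
  ```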
* string2ll(): better commenting. (antirez, 2018-07-24, 1 file, -0/+6)
* Removing redundant check (dsomeshwar, 2018-07-21, 1 file, -3/+0)
* Fix typo (Jack Drogon, 2018-07-03, 1 file, -1/+1)
* Modules API: RM_GetRandomBytes() / GetRandomHexChars(). (antirez, 2018-04-05, 1 file, -59/+43)
* Prevent off-by-one read in stringmatchlen() (fixes #4527) (nashe, 2017-12-12, 1 file, -1/+1)
* Modules: first preview 31 March 2016. (antirez, 2016-05-10, 1 file, -1/+1)
* Fix HINCRBYFLOAT to work with long doubles. (antirez, 2015-11-04, 1 file, -3/+3)
  During the refactoring needed for lazy free, specifically the conversion of t_hash from struct robj to plain SDS strings, HINCRBYFLOAT was accidentally moved away from long doubles to doubles for the internal processing of increments and formatting. The diminished precision created more obvious artifacts in the way small numbers are formatted once we convert from a decimal number in radix 10 to double and back to its string in radix 10. By using more precision, we now have less surprising results, at least with small numbers like "1.23", exactly like in the previous versions of Redis. See issue #2846.
* Lazyfree: Hash converted to use plain SDS WIP 5. (antirez, 2015-10-01, 1 file, -3/+3)