author     antirez <antirez@gmail.com>    2017-01-30 16:34:08 +0100
committer  antirez <antirez@gmail.com>    2017-02-01 15:01:59 +0100
commit     c495d095ae495ea5253443ee4562aaa30681a854 (patch)
tree       82d9356afc4687d772fcd203f706b090537e2566
parent     3a7410a8a69dca0ff5d4cd4e48abf974d2e53b9f (diff)
Ziplist: insertion bug under particular conditions fixed.
Ziplists had a bug that was discovered while investigating a different issue, resulting in a corrupted ziplist representation, and a likely segmentation fault and/or data corruption of the last element of the ziplist, once the ziplist is accessed again.

The bug happens when a specific set of insertions / deletions is performed so that an entry is encoded with a "prevlen" field (the length of the previous entry) of 5 bytes, but with a count that could be encoded in a "prevlen" field of a single byte. This can happen when the "cascading update" process called by ziplistInsert()/ziplistDelete() in certain conditions forces the prevlen to be bigger than necessary in order to avoid moving too much data around.

Once such an entry is generated, inserting a very small entry immediately before it results in resizing the ziplist to a count smaller than the current ziplist length (which is a violation: the inserting code expects the ziplist to get bigger). As a result an FF byte is written in a misplaced position. Moreover a realloc() is performed with a count smaller than the current ziplist length, so the final bytes could be trashed as well.

SECURITY IMPLICATIONS:

Currently it looks like an attacker can only crash a Redis server by providing specifically chosen commands. However an FF byte is written and there are other memory operations that depend on a wrong count, so even if it is not immediately apparent how to mount an attack in order to execute code remotely, it is not impossible at all that this could be done. Attacks always get better... and we did not spend enough time thinking about how to exploit this issue, but security researchers or malicious attackers could.
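For readers unfamiliar with the encoding described above, here is a minimal sketch of how the prevlen header is sized and how the byte-size difference that drives the resize is computed. This is not the upstream source: the constant ZIP_BIG_PREVLEN and the helper names are assumptions made for illustration only.

#include <stdint.h>

#define ZIP_BIG_PREVLEN 254   /* assumed threshold for the 1-byte/5-byte split */

/* A previous-entry length below 254 fits in a single header byte; anything
 * larger needs the 5-byte form (a 0xFE marker followed by a 4-byte length). */
static unsigned int prevlen_bytes(uint32_t len) {
    return (len < ZIP_BIG_PREVLEN) ? 1 : 5;
}

/* Difference in header size when the previous entry's length changes from
 * what is currently stored (stored_bytes: 1 or 5) to what newlen requires.
 * A result of -4 means the header would shrink from 5 bytes to 1 byte,
 * which is exactly the case the fix below has to treat specially. */
static int prevlen_byte_diff(unsigned int stored_bytes, uint32_t newlen) {
    return (int)prevlen_bytes(newlen) - (int)stored_bytes;
}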
-rw-r--r--   src/ziplist.c   10
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/src/ziplist.c b/src/ziplist.c
index 81d23ca38..0da1b0ca6 100644
--- a/src/ziplist.c
+++ b/src/ziplist.c
@@ -778,7 +778,12 @@ unsigned char *__ziplistInsert(unsigned char *zl, unsigned char *p, unsigned cha
     /* When the insert position is not equal to the tail, we need to
      * make sure that the next entry can hold this entry's length in
      * its prevlen field. */
+    int forcelarge = 0;
     nextdiff = (p[0] != ZIP_END) ? zipPrevLenByteDiff(p,reqlen) : 0;
+    if (nextdiff == -4 && reqlen < 4) {
+        nextdiff = 0;
+        forcelarge = 1;
+    }

     /* Store offset because a realloc may change the address of zl. */
     offset = p-zl;
@@ -791,7 +796,10 @@ unsigned char *__ziplistInsert(unsigned char *zl, unsigned char *p, unsigned cha
         memmove(p+reqlen,p-nextdiff,curlen-offset-1+nextdiff);

         /* Encode this entry's raw length in the next entry. */
-        zipStorePrevEntryLength(p+reqlen,reqlen);
+        if (forcelarge)
+            zipStorePrevEntryLengthLarge(p+reqlen,reqlen);
+        else
+            zipStorePrevEntryLength(p+reqlen,reqlen);

         /* Update offset for tail */
         ZIPLIST_TAIL_OFFSET(zl) =
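For context on why the forcelarge branch must call the Large variant: below is a hedged approximation of the two store helpers named in the diff (the bodies and the ZIP_BIG_PREVLEN constant are assumptions, not the upstream implementation). The plain helper picks the 1-byte form whenever the value fits, so with forcelarge set it would shrink the already-accounted-for 5-byte header; the Large helper always writes the 5-byte form, keeping the header size consistent with nextdiff having been forced to 0.

#include <stdint.h>
#include <string.h>

#define ZIP_BIG_PREVLEN 254   /* assumed threshold for the 1-byte/5-byte split */

/* Approximation of zipStorePrevEntryLengthLarge(): always write the 5-byte
 * form, i.e. the 0xFE marker followed by the length as a 4-byte integer. */
static unsigned int store_prevlen_large(unsigned char *p, uint32_t len) {
    if (p != NULL) {
        p[0] = ZIP_BIG_PREVLEN;
        memcpy(p+1, &len, sizeof(len));
    }
    return 1 + sizeof(len);
}

/* Approximation of zipStorePrevEntryLength(): use the 1-byte form whenever
 * the length fits, otherwise fall back to the large form. When forcelarge is
 * set the entry already carries a 5-byte header and nextdiff was forced to 0,
 * so shrinking the header here would desynchronize the layout; that is why
 * the patched code calls the Large variant in that branch. */
static unsigned int store_prevlen(unsigned char *p, uint32_t len) {
    if (len < ZIP_BIG_PREVLEN) {
        if (p != NULL) p[0] = (unsigned char)len;
        return 1;
    }
    return store_prevlen_large(p, len);
}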