path: root/src/stream.h
* XPENDING should not update consumer's seen-time (Guy Benoish, 2020-05-04, 1 file, -1/+6)
  The same goes for XGROUP DELCONSUMER (though in that case it has no
  visible effect).
* Stream: Handle streamID-related edge cases (Guy Benoish, 2019-12-26, 1 file, -0/+1)
  This commit solves several edge cases related to exhausting the
  streamID limits: we must correctly calculate the succeeding streamID
  instead of blindly incrementing 'seq'. This affects both XREAD and
  XADD. Other (unrelated) change: reply with a better error message when
  trying to add an entry to a stream whose last_id is exhausted.
* Support streams in general module API functions (Guy Benoish, 2019-11-06, 1 file, -0/+1)
  Fixes GitHub issue #6492. Added stream support to RM_KeyType and
  RM_ValueLength. Also updated moduleDelKeyIfEmpty, even though it has no
  effect yet (it will become relevant once the stream type's direct API,
  i.e. RM_StreamAdd, is implemented).
* stream.h: fix typo (Jamison Judge, 2019-10-07, 1 file, -1/+1)
* Prevent diskless replica from terminating on short read (Oran Agra, 2019-07-17, 1 file, -0/+1)
  Now that a replica can read the RDB directly from the socket, it should
  avoid exiting on a short read and instead try to re-sync. This commit
  tries to have minimal effect on non-diskless RDB reading, and includes
  a test that tries to trigger this scenario in various read cases.
* Streams: add streamCompareID() declaration in stream.h. (dejun.xdj, 2018-07-14, 1 file, -0/+1)
* Streams: iterator entry deletion abilities. (antirez, 2018-04-17, 1 file, -0/+2)
* CG: AOF rewriting implemented. (antirez, 2018-03-23, 1 file, -0/+1)
* CG: Replication WIP 1: XREADGROUP and XCLAIM propagated as XCLAIM. (antirez, 2018-03-19, 1 file, -1/+8)
* CG: RDB loading first implementation. (antirez, 2018-03-15, 1 file, -0/+2)
* CG: XPENDING should not create consumers and obey to count. (antirez, 2018-03-15, 1 file, -1/+1)
* CG: Now XREADGROUP + blocking operations work. (antirez, 2018-03-15, 1 file, -0/+2)
* CG: first draft of streamReplyWithRangeFromConsumerPEL(). (antirez, 2018-03-15, 1 file, -1/+1)
* CG: creation of NACK entries in PELs. (antirez, 2018-03-15, 1 file, -5/+5)
* CG: consumer lookup + initial streamReplyWithRange() work to support CG. (antirez, 2018-03-15, 1 file, -2/+2)
* CG: XREADGROUP group option parsing and groups lookup. (antirez, 2018-03-15, 1 file, -1/+1)
* CG: data structures design + XGROUP CREATE implementation. (antirez, 2018-03-15, 1 file, -1/+40)
* Streams: state machine for reverse iteration WIP 1. (antirez, 2017-12-01, 1 file, -3/+4)
* Streams: items compression implemented. (antirez, 2017-12-01, 1 file, -0/+5)
  The approach used is to set a fixed header at the start of every
  listpack blob (which contains many entries). The header contains a
  "master" ID and master fields, initially taken from the first entry
  inserted in the listpack, so that the first entry is always well
  compressed. Every later entry is checked against these fields, and if
  it matches, the SAMEFIELDS flag is set in the entry so that we know to
  just reuse the master entry's fields. The IDs are always delta-encoded
  against the first entry. This approach avoids cascading effects in
  which entries are encoded depending on the previous entries, in order
  to avoid complexity and rewriting of the data when data is removed in
  the middle (which is a planned feature).
* Streams: export iteration API. (antirez, 2017-12-01, 1 file, -0/+31)
* Streams: RDB saving. (antirez, 2017-12-01, 1 file, -0/+1)
* Streams: 12 commits squashed into the initial Streams implementation. (antirez, 2017-12-01, 1 file, -0/+21)