| author | antirez <antirez@gmail.com> | 2012-11-30 15:41:26 +0100 |
|---|---|---|
| committer | antirez <antirez@gmail.com> | 2012-11-30 16:36:42 +0100 |
| commit | f50e658455f6455ac443e185e5fc738ef15093b3 (patch) | |
| tree | 974395fec7f8c402348b4f3118ed7ff4ddad32f3 /tests | |
| parent | b4abbaf755d723d33b0e81880fb035ab88f3544f (diff) | |
| download | redis-f50e658455f6455ac443e185e5fc738ef15093b3.tar.gz | |
SDIFF is now able to select between two algorithms for speed.
SDIFF used an algorithm that was O(N) where N is the total number
of elements of all the sets involved in the operation.
The algorithm worked like this:
ALGORITHM 1:
1) For the first set, add all the members to an auxiliary set.
2) For all the other sets, remove all the members of the set from the
auxiliary set.
So it is an O(N) algorithm where N is the total number of elements in
all the sets involved in the diff operation.
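The real implementation is C code in t_set.c; as an illustration only, algorithm 1 can be sketched in Python like this:

```python
def sdiff_algo1(sets):
    """ALGORITHM 1 sketch: seed the result with the first set, then
    remove every member of every remaining set from it. Each element
    is touched once, hence O(N) in the total number of elements."""
    result = set(sets[0])           # 1) add all members of the first set
    for other in sets[1:]:          # 2) subtract each remaining set
        for member in other:
            result.discard(member)  # no-op if member is absent
    return result

# SDIFF {1,2,3,4} {2,4} {4,5} leaves {1, 3}
print(sdiff_algo1([{1, 2, 3, 4}, {2, 4}, {4, 5}]))
```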
Cristobal Viedma suggested modifying the algorithm as follows:
ALGORITHM 2:
1) Iterate all the elements of the first set.
2) For every element, check if the element also exists in all the other
remaining sets.
3) Add the element to the auxiliary set only if it does not exist in any
of the other sets.
The complexity of this algorithm in the worst case is O(N*M), where N is
the size of the first set and M is the total number of sets involved in
the operation.
However, when there are elements in common, this algorithm stops the
computation for a given element as soon as it finds a duplicate in
another set.
I (antirez) added an additional step to algorithm 2 to make it faster:
sort the sets to subtract from the biggest to the smallest, so that a
duplicate is more likely to be found in the larger sets, which are
checked before the smaller ones.
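Again as a sketch, not the actual C implementation, algorithm 2 with the sorting step looks roughly like this:

```python
def sdiff_algo2(sets):
    """ALGORITHM 2 sketch: keep an element of the first set only if it
    is missing from every other set. Worst case O(N*M), with N the size
    of the first set and M the number of sets, but the scan of a given
    element stops at the first duplicate found."""
    # Extra step: check the biggest sets first, since a duplicate is
    # more likely to turn up there, ending the scan for that element.
    others = sorted(sets[1:], key=len, reverse=True)
    result = set()
    for member in sets[0]:
        # any() short-circuits on the first set containing the member
        if not any(member in other for other in others):
            result.add(member)
    return result

# Same inputs as before, same answer: {1, 3}
print(sdiff_algo2([{1, 2, 3, 4}, {2, 4}, {4, 5}]))
```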
WHAT IS BETTER?
Neither, of course: for instance, if the first set is much larger than
the other sets, the second algorithm does a lot more work than the
first one.
Similarly, if the first set is much smaller than the other sets, the
original algorithm will do less work.
So this commit makes Redis able to guess the number of operations
required by each algorithm, and select the best at runtime according
to the input received.
However, since the second algorithm has better constant factors and can
do less work when there are duplicated elements, an advantage is given
to the second algorithm.
Diffstat (limited to 'tests')
-rw-r--r-- | tests/unit/type/set.tcl | 7 |
1 file changed, 4 insertions(+), 3 deletions(-)
```diff
diff --git a/tests/unit/type/set.tcl b/tests/unit/type/set.tcl
index 9603eb568..a77759e5d 100644
--- a/tests/unit/type/set.tcl
+++ b/tests/unit/type/set.tcl
@@ -199,9 +199,10 @@ start_server {
         test "SDIFFSTORE with three sets - $type" {
             r sdiffstore setres set1 set4 set5
-            # The type is determined by type of the first key to diff against.
-            # See the implementation for more information.
-            assert_encoding $type setres
+            # When we start with intsets, we should always end with intsets.
+            if {$type eq {intset}} {
+                assert_encoding intset setres
+            }
             assert_equal {1 2 3 4} [lsort [r smembers setres]]
         }
     }
```