author     leveldb Team <no-reply@google.com>        2021-01-11 15:32:34 +0000
committer  Victor Costan <costan@google.com>         2021-01-11 15:41:38 +0000
commit     8cce47e450b365347769959c53b8836ef0216df9 (patch)
tree       a7be7012dc37327a1bb0bc3abc1d99a2de50800f /table
parent     fdc8f72895544cfbc05bb7fa50d4f235e5808ae0 (diff)
download   leveldb-8cce47e450b365347769959c53b8836ef0216df9.tar.gz
Optimize leveldb block seeks to utilize the current iterator location.
This is beneficial when iterators are reused and seeks are not random but
increasing. It is additionally beneficial with larger block sizes and keys with
common prefixes.

Add a benchmark "seekordered" to db_bench that reuses iterators across
increasing seeks. Add support to the benchmark to count comparisons made and to
support a common key prefix length.

Change benchmark random seeds to be reproducible for entire benchmark suite
executions but unique for threads in different benchmark runs. This changes a
benchmark suite of readrandom,seekrandom from having a 100% found ratio, as
previously it had the same seed used for fillrandom.

./db_bench --benchmarks=fillrandom,compact,seekordered --block_size=262144 --comparisons=1 --key_prefix=100

without this change (though with benchmark changes):
seekrandom   :  55.309 micros/op;  (631820 of 1000000 found)  Comparisons: 27001049
seekordered  :   1.732 micros/op;  (631882 of 1000000 found)  Comparisons: 26998402

with this change:
seekrandom   :  55.866 micros/op;  (631820 of 1000000 found)  Comparisons: 26952143
seekordered  :   1.686 micros/op;  (631882 of 1000000 found)  Comparisons: 25549369

For ordered seeking, this is a reduction of 5% in comparisons and a 3% speedup.
For random seeking (with single-use iterators) the differences in comparisons
and speed are less than 1% and likely noise.

PiperOrigin-RevId: 351149832
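The access pattern this optimization targets is a single iterator reused for seeks over increasing keys, rather than a fresh iterator per lookup (the commit message notes that random seeking uses single-use iterators). A minimal sketch of that pattern against the public leveldb API follows; the database path and key values are hypothetical placeholders, not part of the commit.

#include <cassert>
#include <string>
#include <vector>

#include "leveldb/db.h"

int main() {
  leveldb::DB* db = nullptr;
  leveldb::Options options;
  options.create_if_missing = true;
  leveldb::Status status = leveldb::DB::Open(options, "/tmp/seek_demo_db", &db);
  assert(status.ok());

  // Hypothetical lookup keys, already sorted in increasing order.
  std::vector<std::string> sorted_keys = {"key001", "key042", "key097"};

  // Reuse one iterator for every seek. With this change, each Seek() can start
  // its restart-point binary search from the iterator's current position
  // instead of searching the whole block again.
  leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
  for (const std::string& k : sorted_keys) {
    it->Seek(k);
    if (it->Valid() && it->key() == k) {
      // Hit: it->value() holds the stored value.
    }
  }
  delete it;
  delete db;
  return 0;
}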
Diffstat (limited to 'table')
-rw-r--r--  table/block.cc  27
1 file changed, 26 insertions(+), 1 deletion(-)
diff --git a/table/block.cc b/table/block.cc
index 2fe89ea..3b15257 100644
--- a/table/block.cc
+++ b/table/block.cc
@@ -166,6 +166,24 @@ class Block::Iter : public Iterator {
     // with a key < target
     uint32_t left = 0;
     uint32_t right = num_restarts_ - 1;
+    int current_key_compare = 0;
+
+    if (Valid()) {
+      // If we're already scanning, use the current position as a starting
+      // point. This is beneficial if the key we're seeking to is ahead of the
+      // current position.
+      current_key_compare = Compare(key_, target);
+      if (current_key_compare < 0) {
+        // key_ is smaller than target
+        left = restart_index_;
+      } else if (current_key_compare > 0) {
+        right = restart_index_;
+      } else {
+        // We're seeking to the key we're already at.
+        return;
+      }
+    }
+
     while (left < right) {
       uint32_t mid = (left + right + 1) / 2;
       uint32_t region_offset = GetRestartPoint(mid);
@@ -189,8 +207,15 @@ class Block::Iter : public Iterator {
       }
     }

+    // We might be able to use our current position within the restart block.
+    // This is true if we determined the key we desire is in the current block
+    // and is after the current key.
+    assert(current_key_compare == 0 || Valid());
+    bool skip_seek = left == restart_index_ && current_key_compare < 0;
+    if (!skip_seek) {
+      SeekToRestartPoint(left);
+    }
     // Linear search (within restart block) for first key >= target
-    SeekToRestartPoint(left);
     while (true) {
       if (!ParseNextKey()) {
         return;
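Stripped of leveldb's restart-point bookkeeping, the idea in the hunks above is a binary search whose bounds are seeded from the previous position. The sketch below is a simplified, self-contained illustration over a plain sorted vector, not the leveldb code: it returns the first key >= target rather than the last restart point with a key < target, and all names are invented for the example.

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// A Seek() that remembers its previous position and, when the new target is
// ahead of (or behind) the key at that position, narrows the binary-search
// bounds accordingly before searching.
class OrderedSeeker {
 public:
  explicit OrderedSeeker(std::vector<std::string> keys)
      : keys_(std::move(keys)) {}

  // Returns the index of the first key >= target, or keys_.size() if none.
  std::size_t Seek(const std::string& target) {
    std::size_t left = 0;
    std::size_t right = keys_.size();
    if (pos_ < keys_.size()) {
      int cmp = keys_[pos_].compare(target);
      if (cmp < 0) {
        left = pos_;   // Target is ahead of the current key: raise the lower bound.
      } else if (cmp > 0) {
        right = pos_;  // Target is behind the current key: lower the upper bound.
      } else {
        return pos_;   // Already positioned exactly at the target.
      }
    }
    while (left < right) {
      std::size_t mid = left + (right - left) / 2;
      if (keys_[mid].compare(target) < 0) {
        left = mid + 1;
      } else {
        right = mid;
      }
    }
    pos_ = left;
    return pos_;
  }

 private:
  std::vector<std::string> keys_;
  std::size_t pos_ = 0;  // Position left by the previous Seek(), reused as a hint.
};

For example, with OrderedSeeker s({"a", "c", "e"}), calling s.Seek("b") and then s.Seek("d") resolves the second call with its lower bound already advanced by the first, which is the same effect the patch achieves for increasing seeks within a block.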