author	Alexandra (Sasha) Fedorova <fedorova@users.noreply.github.com>	2016-12-28 08:07:35 -0800
committer	sueloverso <sue@mongodb.com>	2016-12-28 11:07:35 -0500
commit	74cc96ce14d386d9f81a45cca7adddaaab5fb9d5 (patch)
tree	1ebaad1423cff080e5b43fe0dba806210340c615
parent	4d0b97a7f138f4079024c23ce9cfb70827bc133c (diff)
download	mongo-74cc96ce14d386d9f81a45cca7adddaaab5fb9d5.tar.gz
WT-2898 evict dynamic workers (#3039)
* WT-2898. NOT ready for review. Initial implementation of dynamically tuning the number of eviction workers.
* Not ready for review. All the code is there. Still need to test/tune on different machines.
* Remove debugging prints.
* Style police
* Spelling.
* Fix up merge issue and compiler warning
* Sulabh and David, do a review!
* Fix compiler warning. Not ready for review. There is a performance regression after merging with develop. I'm on it.
* Conversion to signed values in the percent calculation to make sure that we always correctly compute the percent difference, which can be negative, regardless of how the compiler performs sign extension. Change thresholds so we have less churn.
* Fix more compiler warnings. Sorry about the churn, I don't see the same failures locally as on the autobuild even though I compile with -Werror.
* Replace 1/0 with true/false
* More compiler warning and style fixes. Configured with --enable-strict, so hopefully I have caught everything this time.
* Minor nitpick, init a variable
* Rename free to free_thread, otherwise it hides a global
* Fix indentation
* Fixes to the changes. The percent difference must be signed, as it can be negative if the number of pages evicted per second decreased since the last period.
* Added stats and log messages to reflect eviction worker tuning activity. Fixed a bug in the code that checks the group bounds when stopping a thread.
* Removed a verbose message, because we already have a statistic tracking evictions per second, so this is probably redundant.
* whitespace
* KNF
* More aggressive addition/removal of eviction workers. We used to add/remove them one at a time; it's difficult to see the effects of extra workers with such an incremental change, because eviction throughput is affected by other noise, such as what happens in the kernel and in the I/O system. Now we add and remove eviction workers in batches.
* Style fixes.
* Fix compiler warning.
* Simplified the tuning logic. Addressed Sulabh's comments.
* A tuning parameter change
* Fixed a bug where we needed a random value but were not getting it via the random number generator, so it was not random and the code did not have the right behaviour. Added stats.
* Move the call to tune eviction workers into __evict_pass, so we can begin the tuning earlier.
* NOT READY FOR REVIEW. Changed defaults for the number of eviction workers, so I can experiment with larger values.
* NOT READY FOR REVIEW. A parameter to put a cap on how many threads we add at a time.
* Revert the changes of the last commit. That change hurt performance.
* Changed all wtperf runners that set an eviction thread maximum to 30, so we could evaluate the effects of the dynamic branch.
* Updated the number reserved for internal sessions to 40, since we can now create up to 30 eviction worker threads by default.
* Fix spellchecker complaints
* KNF
* NOT READY FOR REVIEW. Revised the algorithm to settle on a good value of eviction workers once we have sufficiently explored the configuration space, using a gradient-descent approach with random adjustments. The algorithm successfully finds the best static number of workers, but performs worse. I suspect there is an issue with how threads are removed; suspect a bug in the thread support code. Have not chased it yet.
* Remove prints, add stats.
* Fix a copy-paste bug where a code line was inadvertently eliminated.
* Reduce the maximum for eviction workers to 30. Prevent dereferencing a NULL pointer if we dynamically grow a thread group after we've shrunk it and freed the associated memory.
* Cleaned up and simplified the code.
* NOT READY FOR REVIEW. A new version of the tuning algorithm that fixes a memory issue when we try to pre-allocate a large eviction thread group. Still need to tune and clean up the code.
* Clean up the code.
* Get rid of s_label warnings. Remove unused code.
* Fix various style errors.
* Fixed the logic for figuring out the maximum value for eviction threads upon cache creation or reconfiguration, which had caused a crash in one of the tests.
* Changed the default max for the number of eviction threads to eight.
* Fix ranges for the minimum number of eviction threads
* Fix eviction thread ranges to make the csuite happy
* Commit automatic changes by s_all
* Review: KNF, whitespace and renamed a few things.
* Fix lock usage
* KNF
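The tuning described above boils down to a feedback loop: measure eviction throughput over a fixed period, remember the best-performing worker count, keep growing while throughput improves, and fall back to the best count once it stops improving. Below is a minimal standalone C sketch of that loop — not the WiredTiger implementation (that is in src/evict/evict_lru.c in the diff that follows); names such as tune_state and tune_step are hypothetical, and the real code adds and removes workers in batches rather than one at a time.

	#include <stdbool.h>
	#include <stdint.h>

	struct tune_state {
		uint32_t workers;	/* Currently running workers */
		uint32_t workers_best;	/* Best-performing count so far */
		uint64_t pgs_sec_max;	/* Best throughput seen so far */
		bool	 stable;	/* Settled on a final value */
	};

	/* One tuning step: keep adding workers while throughput improves. */
	static void
	tune_step(struct tune_state *ts,
	    uint64_t pgs_sec_cur, uint32_t max_workers)
	{
		if (ts->stable)
			return;
		if (pgs_sec_cur > ts->pgs_sec_max) {
			/* Still climbing the curve: remember and grow. */
			ts->pgs_sec_max = pgs_sec_cur;
			ts->workers_best = ts->workers;
			if (ts->workers < max_workers)
				ts->workers++;	/* Real code adds a batch */
		} else {
			/* Past the inflection point: fall back and settle. */
			ts->workers = ts->workers_best;
			ts->stable = true;
		}
	}

	int
	main(void)
	{
		struct tune_state ts = { 1, 1, 0, false };

		/* Feed in a made-up throughput sample per tuning period. */
		tune_step(&ts, 1000, 8);	/* improves: grow to 2 */
		tune_step(&ts, 900, 8);		/* worse: settle back on 1 */
		return (ts.stable ? 0 : 1);
	}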
-rw-r--r--bench/wtperf/runners/500m-btree-50r50u.wtperf2
-rw-r--r--bench/wtperf/runners/500m-btree-80r20u.wtperf2
-rw-r--r--bench/wtperf/runners/500m-btree-populate.wtperf2
-rw-r--r--bench/wtperf/runners/500m-btree-rdonly.wtperf2
-rw-r--r--bench/wtperf/runners/checkpoint-stress.wtperf2
-rw-r--r--bench/wtperf/runners/evict-btree-readonly.wtperf2
-rw-r--r--bench/wtperf/runners/evict-btree-stress-multi.wtperf2
-rw-r--r--bench/wtperf/runners/evict-btree-stress.wtperf2
-rw-r--r--bench/wtperf/runners/evict-btree.wtperf2
-rw-r--r--bench/wtperf/runners/evict-lsm-readonly.wtperf2
-rw-r--r--bench/wtperf/runners/evict-lsm.wtperf2
-rw-r--r--bench/wtperf/runners/log.wtperf2
-rw-r--r--bench/wtperf/runners/mongodb-secondary-apply.wtperf2
-rw-r--r--bench/wtperf/runners/multi-btree-read-heavy-stress.wtperf2
-rw-r--r--bench/wtperf/runners/multi-btree-stress.wtperf2
-rw-r--r--bench/wtperf/runners/multi-btree-zipfian-populate.wtperf2
-rw-r--r--bench/wtperf/runners/multi-btree-zipfian-workload.wtperf2
-rw-r--r--bench/wtperf/stress/btree-split-stress.wtperf2
-rw-r--r--dist/api_data.py2
-rw-r--r--dist/stat_data.py4
-rw-r--r--src/config/config_def.c10
-rw-r--r--src/conn/conn_cache.c4
-rw-r--r--src/evict/evict_lru.c204
-rw-r--r--src/include/connection.h14
-rw-r--r--src/include/extern.h1
-rw-r--r--src/include/stat.h4
-rw-r--r--src/include/wiredtiger.in384
-rw-r--r--src/support/stat.c16
-rw-r--r--src/support/thread_group.c55
-rw-r--r--tools/wtstats/stat_data.py2
30 files changed, 497 insertions, 239 deletions
diff --git a/bench/wtperf/runners/500m-btree-50r50u.wtperf b/bench/wtperf/runners/500m-btree-50r50u.wtperf
index 536127f0dd8..4d2a70f1107 100644
--- a/bench/wtperf/runners/500m-btree-50r50u.wtperf
+++ b/bench/wtperf/runners/500m-btree-50r50u.wtperf
@@ -5,7 +5,7 @@
#
# Set cache to half of memory of AWS perf instance. Enable logging and
# checkpoints. Collect wiredtiger stats for ftdc.
-conn_config="cache_size=16G,checkpoint=(wait=60,log_size=2GB),session_max=20000,log=(enabled),statistics=(fast),statistics_log=(wait=30,json),eviction=(threads_max=4)"
+conn_config="cache_size=16G,checkpoint=(wait=60,log_size=2GB),session_max=20000,log=(enabled),statistics=(fast),statistics_log=(wait=30,json),eviction=(threads_max=8)"
create=false
compression="snappy"
sess_config="isolation=snapshot"
diff --git a/bench/wtperf/runners/500m-btree-80r20u.wtperf b/bench/wtperf/runners/500m-btree-80r20u.wtperf
index d6218c44af0..6645df835df 100644
--- a/bench/wtperf/runners/500m-btree-80r20u.wtperf
+++ b/bench/wtperf/runners/500m-btree-80r20u.wtperf
@@ -5,7 +5,7 @@
#
# Set cache to half of memory of AWS perf instance. Enable logging and
# checkpoints. Collect wiredtiger stats for ftdc.
-conn_config="cache_size=16G,checkpoint=(wait=60,log_size=2GB),session_max=20000,log=(enabled),statistics=(fast),statistics_log=(wait=30,json),eviction=(threads_max=4)"
+conn_config="cache_size=16G,checkpoint=(wait=60,log_size=2GB),session_max=20000,log=(enabled),statistics=(fast),statistics_log=(wait=30,json),eviction=(threads_max=8)"
create=false
compression="snappy"
# close_conn as false allows this test to close/finish faster, but if running
diff --git a/bench/wtperf/runners/500m-btree-populate.wtperf b/bench/wtperf/runners/500m-btree-populate.wtperf
index f9aed094aa1..ab7b17ca683 100644
--- a/bench/wtperf/runners/500m-btree-populate.wtperf
+++ b/bench/wtperf/runners/500m-btree-populate.wtperf
@@ -9,7 +9,7 @@
#
# This generates about 80 Gb of uncompressed data. But it should compress
# well and be small on disk.
-conn_config="cache_size=16G,checkpoint=(wait=60,log_size=2GB),session_max=20000,log=(enabled),statistics=(fast),statistics_log=(wait=30,json),eviction=(threads_max=4)"
+conn_config="cache_size=16G,checkpoint=(wait=60,log_size=2GB),session_max=20000,log=(enabled),statistics=(fast),statistics_log=(wait=30,json),eviction=(threads_max=8)"
compact=true
compression="snappy"
sess_config="isolation=snapshot"
diff --git a/bench/wtperf/runners/500m-btree-rdonly.wtperf b/bench/wtperf/runners/500m-btree-rdonly.wtperf
index 2c9540ff589..e8958d20e2c 100644
--- a/bench/wtperf/runners/500m-btree-rdonly.wtperf
+++ b/bench/wtperf/runners/500m-btree-rdonly.wtperf
@@ -5,7 +5,7 @@
#
# Set cache to half of memory of AWS perf instance. Enable logging and
# checkpoints. Collect wiredtiger stats for ftdc.
-conn_config="cache_size=16G,checkpoint=(wait=60,log_size=2GB),session_max=20000,log=(enabled),statistics=(fast),statistics_log=(wait=30,json),eviction=(threads_max=4)"
+conn_config="cache_size=16G,checkpoint=(wait=60,log_size=2GB),session_max=20000,log=(enabled),statistics=(fast),statistics_log=(wait=30,json),eviction=(threads_max=8)"
create=false
compression="snappy"
sess_config="isolation=snapshot"
diff --git a/bench/wtperf/runners/checkpoint-stress.wtperf b/bench/wtperf/runners/checkpoint-stress.wtperf
index bbd3a3ba5ed..5daa276e622 100644
--- a/bench/wtperf/runners/checkpoint-stress.wtperf
+++ b/bench/wtperf/runners/checkpoint-stress.wtperf
@@ -1,6 +1,6 @@
# A stress configuration to create long running checkpoints while doing a lot
# of updates.
-conn_config="cache_size=16GB,eviction=(threads_max=4),log=(enabled=false)"
+conn_config="cache_size=16GB,eviction=(threads_max=8),log=(enabled=false)"
table_config="leaf_page_max=32k,internal_page_max=16k,allocation_size=4k,split_pct=90,type=file"
# Enough data to fill the cache. 150 million 1k records results in two ~11GB
# tables
diff --git a/bench/wtperf/runners/evict-btree-readonly.wtperf b/bench/wtperf/runners/evict-btree-readonly.wtperf
index 25599fadd8d..972bc371f2d 100644
--- a/bench/wtperf/runners/evict-btree-readonly.wtperf
+++ b/bench/wtperf/runners/evict-btree-readonly.wtperf
@@ -1,5 +1,5 @@
# wtperf options file: evict btree configuration
-conn_config="cache_size=50M,eviction=(threads_max=4),mmap=false"
+conn_config="cache_size=50M,eviction=(threads_max=8),mmap=false"
table_config="type=file"
icount=10000000
report_interval=5
diff --git a/bench/wtperf/runners/evict-btree-stress-multi.wtperf b/bench/wtperf/runners/evict-btree-stress-multi.wtperf
index a5a29f66fa0..5a2cad6d78e 100644
--- a/bench/wtperf/runners/evict-btree-stress-multi.wtperf
+++ b/bench/wtperf/runners/evict-btree-stress-multi.wtperf
@@ -1,4 +1,4 @@
-conn_config="cache_size=1G,eviction=(threads_max=4),session_max=2000"
+conn_config="cache_size=1G,eviction=(threads_max=8),session_max=2000"
table_config="type=file"
table_count=100
close_conn=false
diff --git a/bench/wtperf/runners/evict-btree-stress.wtperf b/bench/wtperf/runners/evict-btree-stress.wtperf
index 740fb88c050..96e3f01b325 100644
--- a/bench/wtperf/runners/evict-btree-stress.wtperf
+++ b/bench/wtperf/runners/evict-btree-stress.wtperf
@@ -1,5 +1,5 @@
# wtperf options file: evict btree configuration
-conn_config="cache_size=50M,eviction=(threads_max=4)"
+conn_config="cache_size=50M,eviction=(threads_max=8)"
table_config="type=file"
icount=10000000
report_interval=5
diff --git a/bench/wtperf/runners/evict-btree.wtperf b/bench/wtperf/runners/evict-btree.wtperf
index e7d967e5c63..3810e6a8294 100644
--- a/bench/wtperf/runners/evict-btree.wtperf
+++ b/bench/wtperf/runners/evict-btree.wtperf
@@ -1,5 +1,5 @@
# wtperf options file: evict btree configuration
-conn_config="cache_size=50M,eviction=(threads_max=4)"
+conn_config="cache_size=50M,eviction=(threads_max=8)"
table_config="type=file"
icount=10000000
report_interval=5
diff --git a/bench/wtperf/runners/evict-lsm-readonly.wtperf b/bench/wtperf/runners/evict-lsm-readonly.wtperf
index 661b8e21924..470dca695dd 100644
--- a/bench/wtperf/runners/evict-lsm-readonly.wtperf
+++ b/bench/wtperf/runners/evict-lsm-readonly.wtperf
@@ -1,5 +1,5 @@
# wtperf options file: evict lsm configuration
-conn_config="cache_size=50M,lsm_manager=(worker_thread_max=6),eviction=(threads_max=4)"
+conn_config="cache_size=50M,lsm_manager=(worker_thread_max=6),eviction=(threads_max=8)"
table_config="type=lsm,lsm=(chunk_size=2M),os_cache_dirty_max=16MB"
compact=true
icount=10000000
diff --git a/bench/wtperf/runners/evict-lsm.wtperf b/bench/wtperf/runners/evict-lsm.wtperf
index b872d429046..a0f2a78d013 100644
--- a/bench/wtperf/runners/evict-lsm.wtperf
+++ b/bench/wtperf/runners/evict-lsm.wtperf
@@ -1,5 +1,5 @@
# wtperf options file: evict lsm configuration
-conn_config="cache_size=50M,lsm_manager=(worker_thread_max=6),eviction=(threads_max=4)"
+conn_config="cache_size=50M,lsm_manager=(worker_thread_max=6),eviction=(threads_max=8)"
table_config="type=lsm,lsm=(chunk_size=2M),os_cache_dirty_max=16MB"
compact=true
icount=10000000
diff --git a/bench/wtperf/runners/log.wtperf b/bench/wtperf/runners/log.wtperf
index 6cf50dfb5a5..4379ba22373 100644
--- a/bench/wtperf/runners/log.wtperf
+++ b/bench/wtperf/runners/log.wtperf
@@ -16,7 +16,7 @@
# - Config + "-C "checkpoint=(wait=0)": no checkpoints
# - Config + "-C "log=(enabled,prealloc=false,file_max=1M)": no pre-allocation
#
-conn_config="cache_size=5G,log=(enabled=true),checkpoint=(log_size=500M),eviction=(threads_max=4)"
+conn_config="cache_size=5G,log=(enabled=true),checkpoint=(log_size=500M),eviction=(threads_max=8)"
table_config="type=file"
icount=1000000
report_interval=5
diff --git a/bench/wtperf/runners/mongodb-secondary-apply.wtperf b/bench/wtperf/runners/mongodb-secondary-apply.wtperf
index f9e41184f95..58bd1a76b97 100644
--- a/bench/wtperf/runners/mongodb-secondary-apply.wtperf
+++ b/bench/wtperf/runners/mongodb-secondary-apply.wtperf
@@ -1,5 +1,5 @@
# Simulate the MongoDB oplog apply threads on a secondary.
-conn_config="cache_size=10GB,session_max=1000,eviction=(threads_min=4,threads_max=4),log=(enabled=false),transaction_sync=(enabled=false),checkpoint_sync=true,checkpoint=(wait=60),statistics=(fast),statistics_log=(json,wait=1)"
+conn_config="cache_size=10GB,session_max=1000,eviction=(threads_min=4,threads_max=8),log=(enabled=false),transaction_sync=(enabled=false),checkpoint_sync=true,checkpoint=(wait=60),statistics=(fast),statistics_log=(json,wait=1)"
table_config="allocation_size=4k,memory_page_max=5MB,prefix_compression=false,split_pct=75,leaf_page_max=32k,internal_page_max=16k,type=file"
# Spread the workload out over several tables.
table_count=4
diff --git a/bench/wtperf/runners/multi-btree-read-heavy-stress.wtperf b/bench/wtperf/runners/multi-btree-read-heavy-stress.wtperf
index d7b27f8fda4..f07e6c80b39 100644
--- a/bench/wtperf/runners/multi-btree-read-heavy-stress.wtperf
+++ b/bench/wtperf/runners/multi-btree-read-heavy-stress.wtperf
@@ -2,7 +2,7 @@
# up by dividing the workload across a lot of threads. This needs to be
# tuned to the particular machine so the workload is close to capacity in the
# steady state, but not overwhelming.
-conn_config="cache_size=20GB,session_max=1000,eviction=(threads_min=4,threads_max=4),log=(enabled=false),transaction_sync=(enabled=false),checkpoint_sync=true,checkpoint=(wait=60),statistics=(fast),statistics_log=(json,wait=1)"
+conn_config="cache_size=20GB,session_max=1000,eviction=(threads_min=4,threads_max=8),log=(enabled=false),transaction_sync=(enabled=false),checkpoint_sync=true,checkpoint=(wait=60),statistics=(fast),statistics_log=(json,wait=1)"
table_config="allocation_size=4k,memory_page_max=10MB,prefix_compression=false,split_pct=90,leaf_page_max=32k,internal_page_max=16k,type=file"
# Divide original icount by database_count.
table_count=8
diff --git a/bench/wtperf/runners/multi-btree-stress.wtperf b/bench/wtperf/runners/multi-btree-stress.wtperf
index b10b08f6035..bee1f431043 100644
--- a/bench/wtperf/runners/multi-btree-stress.wtperf
+++ b/bench/wtperf/runners/multi-btree-stress.wtperf
@@ -1,7 +1,7 @@
# wtperf options file: multi-database configuration attempting to
# trigger slow operations by overloading CPU and disk.
# References Jira WT-2131
-conn_config="cache_size=2GB,eviction=(threads_min=2,threads_max=2),log=(enabled=false),direct_io=(data,checkpoint),buffer_alignment=4096,checkpoint_sync=true,checkpoint=(wait=60)"
+conn_config="cache_size=2GB,eviction=(threads_min=2,threads_max=8),log=(enabled=false),direct_io=(data,checkpoint),buffer_alignment=4096,checkpoint_sync=true,checkpoint=(wait=60)"
table_config="allocation_size=4k,prefix_compression=false,split_pct=75,leaf_page_max=4k,internal_page_max=16k,leaf_item_max=1433,internal_item_max=3100,type=file"
# Divide original icount by database_count.
database_count=5
diff --git a/bench/wtperf/runners/multi-btree-zipfian-populate.wtperf b/bench/wtperf/runners/multi-btree-zipfian-populate.wtperf
index ddd9c055eac..1fdba049779 100644
--- a/bench/wtperf/runners/multi-btree-zipfian-populate.wtperf
+++ b/bench/wtperf/runners/multi-btree-zipfian-populate.wtperf
@@ -1,5 +1,5 @@
# Create a set of tables with uneven distribution of data
-conn_config="cache_size=1G,eviction=(threads_max=4),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics=(fast),statistics_log=(wait=5,json),session_max=1000"
+conn_config="cache_size=1G,eviction=(threads_max=8),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics=(fast),statistics_log=(wait=5,json),session_max=1000"
table_config="type=file"
table_count=100
icount=0
diff --git a/bench/wtperf/runners/multi-btree-zipfian-workload.wtperf b/bench/wtperf/runners/multi-btree-zipfian-workload.wtperf
index 380350c88c8..dfb3306a7a5 100644
--- a/bench/wtperf/runners/multi-btree-zipfian-workload.wtperf
+++ b/bench/wtperf/runners/multi-btree-zipfian-workload.wtperf
@@ -1,5 +1,5 @@
# Read from a set of tables with uneven distribution of data
-conn_config="cache_size=1G,eviction=(threads_max=4),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics=(fast),statistics_log=(wait=5,json),session_max=1000"
+conn_config="cache_size=1G,eviction=(threads_max=8),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics=(fast),statistics_log=(wait=5,json),session_max=1000"
table_config="type=file"
table_count=100
icount=0
diff --git a/bench/wtperf/stress/btree-split-stress.wtperf b/bench/wtperf/stress/btree-split-stress.wtperf
index deb8c70d12f..86bb288fc6d 100644
--- a/bench/wtperf/stress/btree-split-stress.wtperf
+++ b/bench/wtperf/stress/btree-split-stress.wtperf
@@ -1,4 +1,4 @@
-conn_config="cache_size=2GB,statistics=[fast,clear],statistics_log=(wait=10),eviction=(threads_max=4,threads_min=4)"
+conn_config="cache_size=2GB,statistics=[fast,clear],statistics_log=(wait=10),eviction=(threads_max=8,threads_min=4)"
table_config="type=file,leaf_page_max=8k,internal_page_max=8k,memory_page_max=2MB,split_deepen_min_child=250"
icount=200000
report_interval=5
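For reference, the conn_config strings changed in the wtperf runners above are ordinary wiredtiger_open() configuration. A minimal sketch of opening a connection with the same eviction thread bounds from application code, assuming WiredTiger is installed and "WT_HOME" stands in for a real database directory:

	#include <stdlib.h>
	#include <wiredtiger.h>

	int
	main(void)
	{
		WT_CONNECTION *conn;
		int ret;

		/* Bound the eviction worker pool; tuning happens within. */
		ret = wiredtiger_open("WT_HOME", NULL,
		    "create,cache_size=1G,"
		    "eviction=(threads_min=1,threads_max=8)", &conn);
		if (ret != 0)
			return (EXIT_FAILURE);

		/* ... run the workload ... */

		ret = conn->close(conn, NULL);
		return (ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE);
	}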
diff --git a/dist/api_data.py b/dist/api_data.py
index 98f9b5a230a..04071a84332 100644
--- a/dist/api_data.py
+++ b/dist/api_data.py
@@ -406,7 +406,7 @@ connection_runtime_config = [
Config('eviction', '', r'''
eviction configuration options''',
type='category', subconfig=[
- Config('threads_max', '1', r'''
+ Config('threads_max', '8', r'''
maximum number of threads WiredTiger will start to help evict
pages from cache. The number of threads started will vary
depending on the current eviction load. Each eviction worker
diff --git a/dist/stat_data.py b/dist/stat_data.py
index c481382dafc..0af5d6d017e 100644
--- a/dist/stat_data.py
+++ b/dist/stat_data.py
@@ -193,6 +193,7 @@ connection_stats = [
CacheStat('cache_bytes_other', 'bytes not belonging to page images in the cache', 'no_clear,no_scale,size'),
CacheStat('cache_bytes_read', 'bytes read into cache', 'size'),
CacheStat('cache_bytes_write', 'bytes written from cache', 'size'),
+ CacheStat('cache_eviction_active_workers', 'eviction worker thread active', 'no_clear'),
CacheStat('cache_eviction_aggressive_set', 'eviction currently operating in aggressive mode', 'no_clear,no_scale'),
CacheStat('cache_eviction_app', 'pages evicted by application threads'),
CacheStat('cache_eviction_app_dirty', 'modified pages evicted by application threads'),
@@ -222,12 +223,15 @@ connection_stats = [
CacheStat('cache_eviction_slow', 'eviction server unable to reach eviction goal'),
CacheStat('cache_eviction_split_internal', 'internal pages split during eviction'),
CacheStat('cache_eviction_split_leaf', 'leaf pages split during eviction'),
+ CacheStat('cache_eviction_stable_state_workers', 'eviction worker thread stable number', 'no_clear'),
CacheStat('cache_eviction_state', 'eviction state', 'no_clear,no_scale'),
CacheStat('cache_eviction_walk', 'pages walked for eviction'),
CacheStat('cache_eviction_walks_abandoned', 'eviction walks abandoned'),
CacheStat('cache_eviction_walks_active', 'files with active eviction walks', 'no_clear,no_scale'),
CacheStat('cache_eviction_walks_started', 'files with new eviction walks started'),
CacheStat('cache_eviction_worker_evicting', 'eviction worker thread evicting pages'),
+ CacheStat('cache_eviction_worker_created', 'eviction worker thread created'),
+ CacheStat('cache_eviction_worker_removed', 'eviction worker thread removed'),
CacheStat('cache_hazard_checks', 'hazard pointer check calls'),
CacheStat('cache_hazard_max', 'hazard pointer maximum array length', 'max_aggregate,no_scale'),
CacheStat('cache_hazard_walks', 'hazard pointer check entries walked'),
diff --git a/src/config/config_def.c b/src/config/config_def.c
index e4fd7937a40..83c1436eade 100644
--- a/src/config/config_def.c
+++ b/src/config/config_def.c
@@ -1050,7 +1050,7 @@ static const WT_CONFIG_ENTRY config_entries[] = {
{ "WT_CONNECTION.reconfigure",
"async=(enabled=false,ops_max=1024,threads=2),cache_overhead=8,"
"cache_size=100MB,checkpoint=(log_size=0,wait=0),error_prefix=,"
- "eviction=(threads_max=1,threads_min=1),"
+ "eviction=(threads_max=8,threads_min=1),"
"eviction_checkpoint_target=5,eviction_dirty_target=5,"
"eviction_dirty_trigger=20,eviction_target=80,eviction_trigger=95"
",file_manager=(close_handle_minimum=250,close_idle_time=30,"
@@ -1261,7 +1261,7 @@ static const WT_CONFIG_ENTRY config_entries[] = {
",builtin_extension_config=,cache_overhead=8,cache_size=100MB,"
"checkpoint=(log_size=0,wait=0),checkpoint_sync=true,"
"config_base=true,create=false,direct_io=,encryption=(keyid=,"
- "name=,secretkey=),error_prefix=,eviction=(threads_max=1,"
+ "name=,secretkey=),error_prefix=,eviction=(threads_max=8,"
"threads_min=1),eviction_checkpoint_target=5,"
"eviction_dirty_target=5,eviction_dirty_trigger=20,"
"eviction_target=80,eviction_trigger=95,exclusive=false,"
@@ -1285,7 +1285,7 @@ static const WT_CONFIG_ENTRY config_entries[] = {
",builtin_extension_config=,cache_overhead=8,cache_size=100MB,"
"checkpoint=(log_size=0,wait=0),checkpoint_sync=true,"
"config_base=true,create=false,direct_io=,encryption=(keyid=,"
- "name=,secretkey=),error_prefix=,eviction=(threads_max=1,"
+ "name=,secretkey=),error_prefix=,eviction=(threads_max=8,"
"threads_min=1),eviction_checkpoint_target=5,"
"eviction_dirty_target=5,eviction_dirty_trigger=20,"
"eviction_target=80,eviction_trigger=95,exclusive=false,"
@@ -1309,7 +1309,7 @@ static const WT_CONFIG_ENTRY config_entries[] = {
",builtin_extension_config=,cache_overhead=8,cache_size=100MB,"
"checkpoint=(log_size=0,wait=0),checkpoint_sync=true,direct_io=,"
"encryption=(keyid=,name=,secretkey=),error_prefix=,"
- "eviction=(threads_max=1,threads_min=1),"
+ "eviction=(threads_max=8,threads_min=1),"
"eviction_checkpoint_target=5,eviction_dirty_target=5,"
"eviction_dirty_trigger=20,eviction_target=80,eviction_trigger=95"
",extensions=,file_extend=,file_manager=(close_handle_minimum=250"
@@ -1330,7 +1330,7 @@ static const WT_CONFIG_ENTRY config_entries[] = {
",builtin_extension_config=,cache_overhead=8,cache_size=100MB,"
"checkpoint=(log_size=0,wait=0),checkpoint_sync=true,direct_io=,"
"encryption=(keyid=,name=,secretkey=),error_prefix=,"
- "eviction=(threads_max=1,threads_min=1),"
+ "eviction=(threads_max=8,threads_min=1),"
"eviction_checkpoint_target=5,eviction_dirty_target=5,"
"eviction_dirty_trigger=20,eviction_target=80,eviction_trigger=95"
",extensions=,file_extend=,file_manager=(close_handle_minimum=250"
diff --git a/src/conn/conn_cache.c b/src/conn/conn_cache.c
index fe5f94ea03d..9b07b46abcd 100644
--- a/src/conn/conn_cache.c
+++ b/src/conn/conn_cache.c
@@ -143,7 +143,9 @@ __wt_cache_config(WT_SESSION_IMPL *session, bool reconfigure, const char *cfg[])
if (reconfigure)
WT_RET(__wt_thread_group_resize(
session, &conn->evict_threads,
- conn->evict_threads_min, conn->evict_threads_max,
+ conn->evict_threads_min,
+ WT_MAX(conn->evict_threads_min,
+ WT_MIN(conn->evict_threads_max, EVICT_GROUP_INCR)),
WT_THREAD_CAN_WAIT | WT_THREAD_PANIC_FAIL));
return (0);
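The clamp above means the thread group is initially sized to at most EVICT_GROUP_INCR threads (but never fewer than threads_min), with the tuning code growing it later as needed. A small self-contained sketch of that arithmetic, with WT_MIN/WT_MAX redefined locally only to keep the example standalone:

	#include <assert.h>
	#include <stdint.h>

	#define	WT_MIN(a, b)	((a) < (b) ? (a) : (b))
	#define	WT_MAX(a, b)	((a) < (b) ? (b) : (a))
	#define	EVICT_GROUP_INCR 4	/* Group size grown in batches */

	static uint32_t
	initial_group_max(uint32_t threads_min, uint32_t threads_max)
	{
		return (WT_MAX(threads_min,
		    WT_MIN(threads_max, EVICT_GROUP_INCR)));
	}

	int
	main(void)
	{
		assert(initial_group_max(1, 8) == 4);	/* capped at increment */
		assert(initial_group_max(6, 8) == 6);	/* never below minimum */
		assert(initial_group_max(1, 2) == 2);	/* never above maximum */
		return (0);
	}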
diff --git a/src/evict/evict_lru.c b/src/evict/evict_lru.c
index b4cb2cc229a..485fd0e6d40 100644
--- a/src/evict/evict_lru.c
+++ b/src/evict/evict_lru.c
@@ -15,6 +15,7 @@ static int __evict_lru_walk(WT_SESSION_IMPL *);
static int __evict_page(WT_SESSION_IMPL *, bool);
static int __evict_pass(WT_SESSION_IMPL *);
static int __evict_server(WT_SESSION_IMPL *, bool *);
+static int __evict_tune_workers(WT_SESSION_IMPL *session);
static int __evict_walk(WT_SESSION_IMPL *, WT_EVICT_QUEUE *);
static int __evict_walk_file(
WT_SESSION_IMPL *, WT_EVICT_QUEUE *, u_int, u_int *);
@@ -389,10 +390,19 @@ __wt_evict_create(WT_SESSION_IMPL *session)
/* Set first, the thread might run before we finish up. */
F_SET(conn, WT_CONN_EVICTION_RUN);
- /* Create the eviction thread group */
+ /*
+ * Create the eviction thread group.
+ * We don't set the group size to the maximum allowed sessions,
+ * because this may have adverse memory effects. Instead,
+ * we set the group's maximum to a small value. The code
+ * that tunes the number of workers will increase the
+ * maximum if necessary.
+ */
WT_RET(__wt_thread_group_create(session, &conn->evict_threads,
"eviction-server", conn->evict_threads_min,
- conn->evict_threads_max, WT_THREAD_CAN_WAIT | WT_THREAD_PANIC_FAIL,
+ WT_MAX(conn->evict_threads_min,
+ WT_MIN(conn->evict_threads_max, EVICT_GROUP_INCR)),
+ WT_THREAD_CAN_WAIT | WT_THREAD_PANIC_FAIL,
__wt_evict_thread_run));
/*
@@ -548,6 +558,8 @@ __evict_pass(WT_SESSION_IMPL *session)
if (loop == 0)
prev = now;
+ if (conn->evict_threads.threads[0]->session == session)
+ __evict_tune_workers(session);
/*
* Increment the shared read generation. Do this occasionally
* even if eviction is not currently required, so that pages
@@ -573,14 +585,6 @@ __evict_pass(WT_SESSION_IMPL *session)
if (!__evict_update_work(session))
break;
- /*
- * Try to start a new thread if we have capacity and haven't
- * reached the eviction targets.
- */
- if (F_ISSET(cache, WT_CACHE_EVICT_ALL))
- WT_RET(__wt_thread_group_start_one(
- session, &conn->evict_threads, false));
-
__wt_verbose(session, WT_VERB_EVICTSERVER,
"Eviction pass with: Max: %" PRIu64
" In use: %" PRIu64 " Dirty: %" PRIu64,
@@ -844,6 +848,182 @@ __wt_evict_file_exclusive_off(WT_SESSION_IMPL *session)
__wt_spin_unlock(session, &cache->evict_walk_lock);
}
+#define EVICT_TUNE_BATCH 1 /* Max workers to add each period */
+#define EVICT_TUNE_DATAPT_MIN 3 /* Data points needed before deciding
+ if we should keep adding workers or
+ settle on an earlier value. */
+#define EVICT_TUNE_PERIOD 2 /* Tune period in seconds */
+
+/*
+ * __evict_tune_workers --
+ * Find the right number of eviction workers. Gradually ramp up the number of
+ * workers increasing the number in batches indicated by the setting above.
+ * Store the number of workers that gave us the best throughput so far and
+ * the number of data points we have tried.
+ *
+ * Every once in a while when we have the minimum number of data points
+ * we check whether the eviction throughput achieved with the current number
+ * of workers is the best we have seen so far. If so, we will keep increasing
+ * the number of workers. If not, we are past the inflection point on the
+ * eviction throughput curve. In that case, we will set the number of workers
+ * to the best observed so far and settle into a stable state.
+ */
+static int
+__evict_tune_workers(WT_SESSION_IMPL *session)
+{
+ struct timespec current_time;
+ WT_CACHE *cache;
+ WT_CONNECTION_IMPL *conn;
+ uint64_t cur_threads, delta_msec, delta_pages, i, target_threads;
+ uint64_t pgs_evicted_cur, pgs_evicted_persec_cur;
+ uint32_t new_max, thread_surplus;
+
+ conn = S2C(session);
+ cache = conn->cache;
+
+ WT_ASSERT(session, conn->evict_threads.threads[0]->session == session);
+ pgs_evicted_persec_cur = 0;
+
+ if (conn->evict_tune_stable)
+ return (0);
+
+ __wt_epoch(session, &current_time);
+
+ /*
+ * Every EVICT_TUNE_PERIOD seconds record the number of
+ * pages evicted per second observed in the previous period.
+ */
+ if (WT_TIMEDIFF_SEC(
+ current_time, conn->evict_tune_last_time) < EVICT_TUNE_PERIOD)
+ return (0);
+
+ pgs_evicted_cur = cache->pages_evict;
+
+ /*
+ * If we have recorded the number of pages evicted at the end of
+ * the previous measurement interval, we can compute the eviction
+ * rate in evicted pages per second achieved during the current
+ * measurement interval.
+ * Otherwise, we just record the number of evicted pages and return.
+ */
+ if (conn->evict_tune_pgs_last == 0)
+ goto out;
+
+ delta_msec = WT_TIMEDIFF_MS(current_time, conn->evict_tune_last_time);
+ delta_pages = pgs_evicted_cur - conn->evict_tune_pgs_last;
+ pgs_evicted_persec_cur = (delta_pages * WT_THOUSAND) / delta_msec;
+ conn->evict_tune_num_points++;
+
+ /*
+ * Keep track of the maximum eviction throughput seen and the number
+ * of workers corresponding to that throughput.
+ */
+ if (pgs_evicted_persec_cur > conn->evict_tune_pg_sec_max) {
+ conn->evict_tune_pg_sec_max = pgs_evicted_persec_cur;
+ conn->evict_tune_workers_best =
+ conn->evict_threads.current_threads;
+ }
+
+ /*
+ * Compare the current number of data points with the number
+ * needed variable. If they are equal, we will check whether
+ * we are still going up on the performance curve, in which
+ * case we will continue increasing the number of workers, or
+ * we are past the inflection point on the curve, in which case
+ * we will go back to the best observed number of workers and
+ * settle into a stable state.
+ */
+ if (conn->evict_tune_num_points >= conn->evict_tune_datapts_needed) {
+ if ((conn->evict_tune_workers_best ==
+ conn->evict_threads.current_threads) &&
+ (conn->evict_threads.current_threads <
+ conn->evict_threads_max)) {
+ /*
+ * Keep adding workers. We will check again
+ * at the next check point.
+ */
+ conn->evict_tune_datapts_needed +=
+ WT_MIN(EVICT_TUNE_DATAPT_MIN,
+ (conn->evict_threads_max
+ - conn->evict_threads.current_threads)/
+ EVICT_TUNE_BATCH);
+ } else {
+ /*
+ * We are past the inflection point. Choose the
+ * best number of eviction workers observed and
+ * settle into a stable state.
+ */
+ thread_surplus =
+ conn->evict_threads.current_threads -
+ conn->evict_tune_workers_best;
+
+ for (i = 0; i < thread_surplus; i++) {
+ WT_RET(__wt_thread_group_stop_one(session,
+ &conn->evict_threads, true));
+ WT_STAT_CONN_INCR(session,
+ cache_eviction_worker_removed);
+ }
+ WT_STAT_CONN_SET(session,
+ cache_eviction_stable_state_workers,
+ conn->evict_tune_workers_best);
+ conn->evict_tune_stable = true;
+ WT_STAT_CONN_SET(session, cache_eviction_active_workers,
+ conn->evict_threads.current_threads);
+ return (0);
+ }
+ }
+
+ /*
+ * If we have not added any worker threads in the past, we set the
+ * number needed equal to the number of data points that we must
+ * accumulate before deciding if we should keep adding workers or settle
+ * on a previously tried value of workers.
+ */
+ if (conn->evict_tune_last_action_time.tv_sec == 0)
+ conn->evict_tune_datapts_needed = WT_MIN(EVICT_TUNE_DATAPT_MIN,
+ (conn->evict_threads_max -
+ conn->evict_threads.current_threads) / EVICT_TUNE_BATCH);
+
+ if (F_ISSET(cache, WT_CACHE_EVICT_ALL)) {
+ cur_threads = conn->evict_threads.current_threads;
+ target_threads = WT_MIN(cur_threads + EVICT_TUNE_BATCH,
+ conn->evict_threads_max);
+ /*
+ * Resize the group to allow for an additional batch of threads.
+ * We resize the group in increments of a few sessions.
+ * Allocating the group to accommodate the maximum number of
+ * workers has adverse effects on performance due to memory
+ * effects, so we gradually ramp up the allocation.
+ */
+ if (conn->evict_threads.max < target_threads) {
+ new_max = WT_MIN(conn->evict_threads.max +
+ EVICT_GROUP_INCR, conn->evict_threads_max);
+
+ WT_RET(__wt_thread_group_resize(
+ session, &conn->evict_threads,
+ conn->evict_threads_min, new_max,
+ WT_THREAD_CAN_WAIT | WT_THREAD_PANIC_FAIL));
+ }
+
+ /* Now actually start the new threads. */
+ for (i = 0; i < (target_threads - cur_threads); ++i) {
+ WT_RET(__wt_thread_group_start_one(session,
+ &conn->evict_threads, false));
+ WT_STAT_CONN_INCR(session,
+ cache_eviction_worker_created);
+ __wt_verbose(session, WT_VERB_EVICTSERVER,
+ "added worker thread");
+ }
+ conn->evict_tune_last_action_time = current_time;
+ }
+
+ WT_STAT_CONN_SET(session, cache_eviction_active_workers,
+ conn->evict_threads.current_threads);
+
+out: conn->evict_tune_last_time = current_time;
+ conn->evict_tune_pgs_last = pgs_evicted_cur;
+ return (0);
+}
+
/*
* __evict_lru_pages --
* Get pages from the LRU queue to evict.
@@ -1282,8 +1462,8 @@ __evict_push_candidate(WT_SESSION_IMPL *session,
* Get a few page eviction candidates from a single underlying file.
*/
static int
-__evict_walk_file(WT_SESSION_IMPL *session, WT_EVICT_QUEUE *queue,
- u_int max_entries, u_int *slotp)
+__evict_walk_file(WT_SESSION_IMPL *session,
+ WT_EVICT_QUEUE *queue, u_int max_entries, u_int *slotp)
{
WT_BTREE *btree;
WT_CACHE *cache;
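The throughput measurement in __evict_tune_workers above reduces to scaling a page-counter delta by the elapsed milliseconds. A standalone sketch of that calculation (rate_per_sec is a hypothetical stand-in; the real code uses WT_TIMEDIFF_MS and the connection's evict_tune_pgs_last field, and WT_THOUSAND is 1000 in the tree):

	#include <stdint.h>

	#define	WT_THOUSAND 1000

	/* Pages evicted per second over an interval in milliseconds. */
	static uint64_t
	rate_per_sec(uint64_t pgs_prev, uint64_t pgs_cur, uint64_t delta_msec)
	{
		if (delta_msec == 0)
			return (0);
		return (((pgs_cur - pgs_prev) * WT_THOUSAND) / delta_msec);
	}

For example, 5,000 pages evicted over a 2,000 ms tuning period yields 2,500 pages per second.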
diff --git a/src/include/connection.h b/src/include/connection.h
index 6818633d816..665275440cf 100644
--- a/src/include/connection.h
+++ b/src/include/connection.h
@@ -301,6 +301,16 @@ struct __wt_connection_impl {
uint32_t evict_threads_max;/* Max eviction threads */
uint32_t evict_threads_min;/* Min eviction threads */
+#define EVICT_GROUP_INCR 4 /* Evict group size increased in batches */
+ uint32_t evict_tune_datapts_needed;/* Data needed to tune */
+ struct timespec evict_tune_last_action_time;/* Time of last action */
+ struct timespec evict_tune_last_time; /* Time of last check */
+ uint32_t evict_tune_num_points; /* Number of values tried */
+ uint64_t evict_tune_pgs_last; /* Number of pages evicted */
+ uint64_t evict_tune_pg_sec_max; /* Max throughput encountered */
+ bool evict_tune_stable; /* Are we stable? */
+ uint32_t evict_tune_workers_best;/* Best performing value */
+
#define WT_STATLOG_FILENAME "WiredTigerStat.%d.%H"
WT_SESSION_IMPL *stat_session; /* Statistics log session */
wt_thread_t stat_tid; /* Statistics log thread */
@@ -326,11 +336,11 @@ struct __wt_connection_impl {
bool log_tid_set; /* Log server thread set */
WT_CONDVAR *log_file_cond; /* Log file thread wait mutex */
WT_SESSION_IMPL *log_file_session;/* Log file thread session */
- wt_thread_t log_file_tid; /* Log file thread thread */
+ wt_thread_t log_file_tid; /* Log file thread */
bool log_file_tid_set;/* Log file thread set */
WT_CONDVAR *log_wrlsn_cond;/* Log write lsn thread wait mutex */
WT_SESSION_IMPL *log_wrlsn_session;/* Log write lsn thread session */
- wt_thread_t log_wrlsn_tid; /* Log write lsn thread thread */
+ wt_thread_t log_wrlsn_tid; /* Log write lsn thread */
bool log_wrlsn_tid_set;/* Log write lsn thread set */
WT_LOG *log; /* Logging structure */
WT_COMPRESSOR *log_compressor;/* Logging compressor */
diff --git a/src/include/extern.h b/src/include/extern.h
index bcad3580e25..566eb386c29 100644
--- a/src/include/extern.h
+++ b/src/include/extern.h
@@ -728,6 +728,7 @@ extern int __wt_thread_group_resize( WT_SESSION_IMPL *session, WT_THREAD_GROUP *
extern int __wt_thread_group_create( WT_SESSION_IMPL *session, WT_THREAD_GROUP *group, const char *name, uint32_t min, uint32_t max, uint32_t flags, int (*run_func)(WT_SESSION_IMPL *session, WT_THREAD *context)) WT_GCC_FUNC_DECL_ATTRIBUTE((warn_unused_result)) WT_GCC_FUNC_DECL_ATTRIBUTE((visibility("hidden")));
extern int __wt_thread_group_destroy(WT_SESSION_IMPL *session, WT_THREAD_GROUP *group) WT_GCC_FUNC_DECL_ATTRIBUTE((warn_unused_result)) WT_GCC_FUNC_DECL_ATTRIBUTE((visibility("hidden")));
extern int __wt_thread_group_start_one( WT_SESSION_IMPL *session, WT_THREAD_GROUP *group, bool wait) WT_GCC_FUNC_DECL_ATTRIBUTE((warn_unused_result)) WT_GCC_FUNC_DECL_ATTRIBUTE((visibility("hidden")));
+extern int __wt_thread_group_stop_one( WT_SESSION_IMPL *session, WT_THREAD_GROUP *group, bool wait) WT_GCC_FUNC_DECL_ATTRIBUTE((warn_unused_result)) WT_GCC_FUNC_DECL_ATTRIBUTE((visibility("hidden")));
extern void __wt_txn_release_snapshot(WT_SESSION_IMPL *session) WT_GCC_FUNC_DECL_ATTRIBUTE((visibility("hidden")));
extern void __wt_txn_get_snapshot(WT_SESSION_IMPL *session) WT_GCC_FUNC_DECL_ATTRIBUTE((visibility("hidden")));
extern int __wt_txn_update_oldest(WT_SESSION_IMPL *session, uint32_t flags) WT_GCC_FUNC_DECL_ATTRIBUTE((warn_unused_result)) WT_GCC_FUNC_DECL_ATTRIBUTE((visibility("hidden")));
diff --git a/src/include/stat.h b/src/include/stat.h
index 3dcdf68b8d5..fd3e3290d95 100644
--- a/src/include/stat.h
+++ b/src/include/stat.h
@@ -310,7 +310,11 @@ struct __wt_connection_stats {
int64_t cache_eviction_slow;
int64_t cache_eviction_state;
int64_t cache_eviction_walks_abandoned;
+ int64_t cache_eviction_active_workers;
+ int64_t cache_eviction_worker_created;
int64_t cache_eviction_worker_evicting;
+ int64_t cache_eviction_worker_removed;
+ int64_t cache_eviction_stable_state_workers;
int64_t cache_eviction_force_fail;
int64_t cache_eviction_walks_active;
int64_t cache_eviction_walks_started;
diff --git a/src/include/wiredtiger.in b/src/include/wiredtiger.in
index 9ee28317bc4..7c27baa9395 100644
--- a/src/include/wiredtiger.in
+++ b/src/include/wiredtiger.in
@@ -1855,7 +1855,7 @@ struct __wt_connection {
* threads WiredTiger will start to help evict pages from cache. The
* number of threads started will vary depending on the current eviction
* load. Each eviction worker thread uses a session from the configured
- * session_max., an integer between 1 and 20; default \c 1.}
+ * session_max., an integer between 1 and 20; default \c 8.}
* @config{&nbsp;&nbsp;&nbsp;&nbsp;threads_min, minimum number of
* threads WiredTiger will start to help evict pages from cache. The
* number of threads currently running will vary depending on the
@@ -2331,7 +2331,7 @@ struct __wt_connection {
* WiredTiger will start to help evict pages from cache. The number of threads
* started will vary depending on the current eviction load. Each eviction
* worker thread uses a session from the configured session_max., an integer
- * between 1 and 20; default \c 1.}
+ * between 1 and 20; default \c 8.}
* @config{&nbsp;&nbsp;&nbsp;&nbsp;threads_min,
* minimum number of threads WiredTiger will start to help evict pages from
* cache. The number of threads currently running will vary depending on the
@@ -4429,396 +4429,404 @@ extern int wiredtiger_extension_terminate(WT_CONNECTION *connection);
#define WT_STAT_CONN_CACHE_EVICTION_STATE 1051
/*! cache: eviction walks abandoned */
#define WT_STAT_CONN_CACHE_EVICTION_WALKS_ABANDONED 1052
+/*! cache: eviction worker thread active */
+#define WT_STAT_CONN_CACHE_EVICTION_ACTIVE_WORKERS 1053
+/*! cache: eviction worker thread created */
+#define WT_STAT_CONN_CACHE_EVICTION_WORKER_CREATED 1054
/*! cache: eviction worker thread evicting pages */
-#define WT_STAT_CONN_CACHE_EVICTION_WORKER_EVICTING 1053
+#define WT_STAT_CONN_CACHE_EVICTION_WORKER_EVICTING 1055
+/*! cache: eviction worker thread removed */
+#define WT_STAT_CONN_CACHE_EVICTION_WORKER_REMOVED 1056
+/*! cache: eviction worker thread stable number */
+#define WT_STAT_CONN_CACHE_EVICTION_STABLE_STATE_WORKERS 1057
/*! cache: failed eviction of pages that exceeded the in-memory maximum */
-#define WT_STAT_CONN_CACHE_EVICTION_FORCE_FAIL 1054
+#define WT_STAT_CONN_CACHE_EVICTION_FORCE_FAIL 1058
/*! cache: files with active eviction walks */
-#define WT_STAT_CONN_CACHE_EVICTION_WALKS_ACTIVE 1055
+#define WT_STAT_CONN_CACHE_EVICTION_WALKS_ACTIVE 1059
/*! cache: files with new eviction walks started */
-#define WT_STAT_CONN_CACHE_EVICTION_WALKS_STARTED 1056
+#define WT_STAT_CONN_CACHE_EVICTION_WALKS_STARTED 1060
/*! cache: hazard pointer blocked page eviction */
-#define WT_STAT_CONN_CACHE_EVICTION_HAZARD 1057
+#define WT_STAT_CONN_CACHE_EVICTION_HAZARD 1061
/*! cache: hazard pointer check calls */
-#define WT_STAT_CONN_CACHE_HAZARD_CHECKS 1058
+#define WT_STAT_CONN_CACHE_HAZARD_CHECKS 1062
/*! cache: hazard pointer check entries walked */
-#define WT_STAT_CONN_CACHE_HAZARD_WALKS 1059
+#define WT_STAT_CONN_CACHE_HAZARD_WALKS 1063
/*! cache: hazard pointer maximum array length */
-#define WT_STAT_CONN_CACHE_HAZARD_MAX 1060
+#define WT_STAT_CONN_CACHE_HAZARD_MAX 1064
/*! cache: in-memory page passed criteria to be split */
-#define WT_STAT_CONN_CACHE_INMEM_SPLITTABLE 1061
+#define WT_STAT_CONN_CACHE_INMEM_SPLITTABLE 1065
/*! cache: in-memory page splits */
-#define WT_STAT_CONN_CACHE_INMEM_SPLIT 1062
+#define WT_STAT_CONN_CACHE_INMEM_SPLIT 1066
/*! cache: internal pages evicted */
-#define WT_STAT_CONN_CACHE_EVICTION_INTERNAL 1063
+#define WT_STAT_CONN_CACHE_EVICTION_INTERNAL 1067
/*! cache: internal pages split during eviction */
-#define WT_STAT_CONN_CACHE_EVICTION_SPLIT_INTERNAL 1064
+#define WT_STAT_CONN_CACHE_EVICTION_SPLIT_INTERNAL 1068
/*! cache: leaf pages split during eviction */
-#define WT_STAT_CONN_CACHE_EVICTION_SPLIT_LEAF 1065
+#define WT_STAT_CONN_CACHE_EVICTION_SPLIT_LEAF 1069
/*! cache: lookaside table insert calls */
-#define WT_STAT_CONN_CACHE_LOOKASIDE_INSERT 1066
+#define WT_STAT_CONN_CACHE_LOOKASIDE_INSERT 1070
/*! cache: lookaside table remove calls */
-#define WT_STAT_CONN_CACHE_LOOKASIDE_REMOVE 1067
+#define WT_STAT_CONN_CACHE_LOOKASIDE_REMOVE 1071
/*! cache: maximum bytes configured */
-#define WT_STAT_CONN_CACHE_BYTES_MAX 1068
+#define WT_STAT_CONN_CACHE_BYTES_MAX 1072
/*! cache: maximum page size at eviction */
-#define WT_STAT_CONN_CACHE_EVICTION_MAXIMUM_PAGE_SIZE 1069
+#define WT_STAT_CONN_CACHE_EVICTION_MAXIMUM_PAGE_SIZE 1073
/*! cache: modified pages evicted */
-#define WT_STAT_CONN_CACHE_EVICTION_DIRTY 1070
+#define WT_STAT_CONN_CACHE_EVICTION_DIRTY 1074
/*! cache: modified pages evicted by application threads */
-#define WT_STAT_CONN_CACHE_EVICTION_APP_DIRTY 1071
+#define WT_STAT_CONN_CACHE_EVICTION_APP_DIRTY 1075
/*! cache: overflow pages read into cache */
-#define WT_STAT_CONN_CACHE_READ_OVERFLOW 1072
+#define WT_STAT_CONN_CACHE_READ_OVERFLOW 1076
/*! cache: overflow values cached in memory */
-#define WT_STAT_CONN_CACHE_OVERFLOW_VALUE 1073
+#define WT_STAT_CONN_CACHE_OVERFLOW_VALUE 1077
/*! cache: page split during eviction deepened the tree */
-#define WT_STAT_CONN_CACHE_EVICTION_DEEPEN 1074
+#define WT_STAT_CONN_CACHE_EVICTION_DEEPEN 1078
/*! cache: page written requiring lookaside records */
-#define WT_STAT_CONN_CACHE_WRITE_LOOKASIDE 1075
+#define WT_STAT_CONN_CACHE_WRITE_LOOKASIDE 1079
/*! cache: pages currently held in the cache */
-#define WT_STAT_CONN_CACHE_PAGES_INUSE 1076
+#define WT_STAT_CONN_CACHE_PAGES_INUSE 1080
/*! cache: pages evicted because they exceeded the in-memory maximum */
-#define WT_STAT_CONN_CACHE_EVICTION_FORCE 1077
+#define WT_STAT_CONN_CACHE_EVICTION_FORCE 1081
/*! cache: pages evicted because they had chains of deleted items */
-#define WT_STAT_CONN_CACHE_EVICTION_FORCE_DELETE 1078
+#define WT_STAT_CONN_CACHE_EVICTION_FORCE_DELETE 1082
/*! cache: pages evicted by application threads */
-#define WT_STAT_CONN_CACHE_EVICTION_APP 1079
+#define WT_STAT_CONN_CACHE_EVICTION_APP 1083
/*! cache: pages queued for eviction */
-#define WT_STAT_CONN_CACHE_EVICTION_PAGES_QUEUED 1080
+#define WT_STAT_CONN_CACHE_EVICTION_PAGES_QUEUED 1084
/*! cache: pages queued for urgent eviction */
-#define WT_STAT_CONN_CACHE_EVICTION_PAGES_QUEUED_URGENT 1081
+#define WT_STAT_CONN_CACHE_EVICTION_PAGES_QUEUED_URGENT 1085
/*! cache: pages queued for urgent eviction during walk */
-#define WT_STAT_CONN_CACHE_EVICTION_PAGES_QUEUED_OLDEST 1082
+#define WT_STAT_CONN_CACHE_EVICTION_PAGES_QUEUED_OLDEST 1086
/*! cache: pages read into cache */
-#define WT_STAT_CONN_CACHE_READ 1083
+#define WT_STAT_CONN_CACHE_READ 1087
/*! cache: pages read into cache requiring lookaside entries */
-#define WT_STAT_CONN_CACHE_READ_LOOKASIDE 1084
+#define WT_STAT_CONN_CACHE_READ_LOOKASIDE 1088
/*! cache: pages requested from the cache */
-#define WT_STAT_CONN_CACHE_PAGES_REQUESTED 1085
+#define WT_STAT_CONN_CACHE_PAGES_REQUESTED 1089
/*! cache: pages seen by eviction walk */
-#define WT_STAT_CONN_CACHE_EVICTION_PAGES_SEEN 1086
+#define WT_STAT_CONN_CACHE_EVICTION_PAGES_SEEN 1090
/*! cache: pages selected for eviction unable to be evicted */
-#define WT_STAT_CONN_CACHE_EVICTION_FAIL 1087
+#define WT_STAT_CONN_CACHE_EVICTION_FAIL 1091
/*! cache: pages walked for eviction */
-#define WT_STAT_CONN_CACHE_EVICTION_WALK 1088
+#define WT_STAT_CONN_CACHE_EVICTION_WALK 1092
/*! cache: pages written from cache */
-#define WT_STAT_CONN_CACHE_WRITE 1089
+#define WT_STAT_CONN_CACHE_WRITE 1093
/*! cache: pages written requiring in-memory restoration */
-#define WT_STAT_CONN_CACHE_WRITE_RESTORE 1090
+#define WT_STAT_CONN_CACHE_WRITE_RESTORE 1094
/*! cache: percentage overhead */
-#define WT_STAT_CONN_CACHE_OVERHEAD 1091
+#define WT_STAT_CONN_CACHE_OVERHEAD 1095
/*! cache: tracked bytes belonging to internal pages in the cache */
-#define WT_STAT_CONN_CACHE_BYTES_INTERNAL 1092
+#define WT_STAT_CONN_CACHE_BYTES_INTERNAL 1096
/*! cache: tracked bytes belonging to leaf pages in the cache */
-#define WT_STAT_CONN_CACHE_BYTES_LEAF 1093
+#define WT_STAT_CONN_CACHE_BYTES_LEAF 1097
/*! cache: tracked dirty bytes in the cache */
-#define WT_STAT_CONN_CACHE_BYTES_DIRTY 1094
+#define WT_STAT_CONN_CACHE_BYTES_DIRTY 1098
/*! cache: tracked dirty pages in the cache */
-#define WT_STAT_CONN_CACHE_PAGES_DIRTY 1095
+#define WT_STAT_CONN_CACHE_PAGES_DIRTY 1099
/*! cache: unmodified pages evicted */
-#define WT_STAT_CONN_CACHE_EVICTION_CLEAN 1096
+#define WT_STAT_CONN_CACHE_EVICTION_CLEAN 1100
/*! connection: auto adjusting condition resets */
-#define WT_STAT_CONN_COND_AUTO_WAIT_RESET 1097
+#define WT_STAT_CONN_COND_AUTO_WAIT_RESET 1101
/*! connection: auto adjusting condition wait calls */
-#define WT_STAT_CONN_COND_AUTO_WAIT 1098
+#define WT_STAT_CONN_COND_AUTO_WAIT 1102
/*! connection: files currently open */
-#define WT_STAT_CONN_FILE_OPEN 1099
+#define WT_STAT_CONN_FILE_OPEN 1103
/*! connection: memory allocations */
-#define WT_STAT_CONN_MEMORY_ALLOCATION 1100
+#define WT_STAT_CONN_MEMORY_ALLOCATION 1104
/*! connection: memory frees */
-#define WT_STAT_CONN_MEMORY_FREE 1101
+#define WT_STAT_CONN_MEMORY_FREE 1105
/*! connection: memory re-allocations */
-#define WT_STAT_CONN_MEMORY_GROW 1102
+#define WT_STAT_CONN_MEMORY_GROW 1106
/*! connection: pthread mutex condition wait calls */
-#define WT_STAT_CONN_COND_WAIT 1103
+#define WT_STAT_CONN_COND_WAIT 1107
/*! connection: pthread mutex shared lock read-lock calls */
-#define WT_STAT_CONN_RWLOCK_READ 1104
+#define WT_STAT_CONN_RWLOCK_READ 1108
/*! connection: pthread mutex shared lock write-lock calls */
-#define WT_STAT_CONN_RWLOCK_WRITE 1105
+#define WT_STAT_CONN_RWLOCK_WRITE 1109
/*! connection: total fsync I/Os */
-#define WT_STAT_CONN_FSYNC_IO 1106
+#define WT_STAT_CONN_FSYNC_IO 1110
/*! connection: total read I/Os */
-#define WT_STAT_CONN_READ_IO 1107
+#define WT_STAT_CONN_READ_IO 1111
/*! connection: total write I/Os */
-#define WT_STAT_CONN_WRITE_IO 1108
+#define WT_STAT_CONN_WRITE_IO 1112
/*! cursor: cursor create calls */
-#define WT_STAT_CONN_CURSOR_CREATE 1109
+#define WT_STAT_CONN_CURSOR_CREATE 1113
/*! cursor: cursor insert calls */
-#define WT_STAT_CONN_CURSOR_INSERT 1110
+#define WT_STAT_CONN_CURSOR_INSERT 1114
/*! cursor: cursor next calls */
-#define WT_STAT_CONN_CURSOR_NEXT 1111
+#define WT_STAT_CONN_CURSOR_NEXT 1115
/*! cursor: cursor prev calls */
-#define WT_STAT_CONN_CURSOR_PREV 1112
+#define WT_STAT_CONN_CURSOR_PREV 1116
/*! cursor: cursor remove calls */
-#define WT_STAT_CONN_CURSOR_REMOVE 1113
+#define WT_STAT_CONN_CURSOR_REMOVE 1117
/*! cursor: cursor reset calls */
-#define WT_STAT_CONN_CURSOR_RESET 1114
+#define WT_STAT_CONN_CURSOR_RESET 1118
/*! cursor: cursor restarted searches */
-#define WT_STAT_CONN_CURSOR_RESTART 1115
+#define WT_STAT_CONN_CURSOR_RESTART 1119
/*! cursor: cursor search calls */
-#define WT_STAT_CONN_CURSOR_SEARCH 1116
+#define WT_STAT_CONN_CURSOR_SEARCH 1120
/*! cursor: cursor search near calls */
-#define WT_STAT_CONN_CURSOR_SEARCH_NEAR 1117
+#define WT_STAT_CONN_CURSOR_SEARCH_NEAR 1121
/*! cursor: cursor update calls */
-#define WT_STAT_CONN_CURSOR_UPDATE 1118
+#define WT_STAT_CONN_CURSOR_UPDATE 1122
/*! cursor: truncate calls */
-#define WT_STAT_CONN_CURSOR_TRUNCATE 1119
+#define WT_STAT_CONN_CURSOR_TRUNCATE 1123
/*! data-handle: connection data handles currently active */
-#define WT_STAT_CONN_DH_CONN_HANDLE_COUNT 1120
+#define WT_STAT_CONN_DH_CONN_HANDLE_COUNT 1124
/*! data-handle: connection sweep candidate became referenced */
-#define WT_STAT_CONN_DH_SWEEP_REF 1121
+#define WT_STAT_CONN_DH_SWEEP_REF 1125
/*! data-handle: connection sweep dhandles closed */
-#define WT_STAT_CONN_DH_SWEEP_CLOSE 1122
+#define WT_STAT_CONN_DH_SWEEP_CLOSE 1126
/*! data-handle: connection sweep dhandles removed from hash list */
-#define WT_STAT_CONN_DH_SWEEP_REMOVE 1123
+#define WT_STAT_CONN_DH_SWEEP_REMOVE 1127
/*! data-handle: connection sweep time-of-death sets */
-#define WT_STAT_CONN_DH_SWEEP_TOD 1124
+#define WT_STAT_CONN_DH_SWEEP_TOD 1128
/*! data-handle: connection sweeps */
-#define WT_STAT_CONN_DH_SWEEPS 1125
+#define WT_STAT_CONN_DH_SWEEPS 1129
/*! data-handle: session dhandles swept */
-#define WT_STAT_CONN_DH_SESSION_HANDLES 1126
+#define WT_STAT_CONN_DH_SESSION_HANDLES 1130
/*! data-handle: session sweep attempts */
-#define WT_STAT_CONN_DH_SESSION_SWEEPS 1127
+#define WT_STAT_CONN_DH_SESSION_SWEEPS 1131
/*! lock: checkpoint lock acquisitions */
-#define WT_STAT_CONN_LOCK_CHECKPOINT_COUNT 1128
+#define WT_STAT_CONN_LOCK_CHECKPOINT_COUNT 1132
/*! lock: checkpoint lock application thread wait time (usecs) */
-#define WT_STAT_CONN_LOCK_CHECKPOINT_WAIT_APPLICATION 1129
+#define WT_STAT_CONN_LOCK_CHECKPOINT_WAIT_APPLICATION 1133
/*! lock: checkpoint lock internal thread wait time (usecs) */
-#define WT_STAT_CONN_LOCK_CHECKPOINT_WAIT_INTERNAL 1130
+#define WT_STAT_CONN_LOCK_CHECKPOINT_WAIT_INTERNAL 1134
/*! lock: handle-list lock acquisitions */
-#define WT_STAT_CONN_LOCK_HANDLE_LIST_COUNT 1131
+#define WT_STAT_CONN_LOCK_HANDLE_LIST_COUNT 1135
/*! lock: handle-list lock application thread wait time (usecs) */
-#define WT_STAT_CONN_LOCK_HANDLE_LIST_WAIT_APPLICATION 1132
+#define WT_STAT_CONN_LOCK_HANDLE_LIST_WAIT_APPLICATION 1136
/*! lock: handle-list lock internal thread wait time (usecs) */
-#define WT_STAT_CONN_LOCK_HANDLE_LIST_WAIT_INTERNAL 1133
+#define WT_STAT_CONN_LOCK_HANDLE_LIST_WAIT_INTERNAL 1137
/*! lock: metadata lock acquisitions */
-#define WT_STAT_CONN_LOCK_METADATA_COUNT 1134
+#define WT_STAT_CONN_LOCK_METADATA_COUNT 1138
/*! lock: metadata lock application thread wait time (usecs) */
-#define WT_STAT_CONN_LOCK_METADATA_WAIT_APPLICATION 1135
+#define WT_STAT_CONN_LOCK_METADATA_WAIT_APPLICATION 1139
/*! lock: metadata lock internal thread wait time (usecs) */
-#define WT_STAT_CONN_LOCK_METADATA_WAIT_INTERNAL 1136
+#define WT_STAT_CONN_LOCK_METADATA_WAIT_INTERNAL 1140
/*! lock: schema lock acquisitions */
-#define WT_STAT_CONN_LOCK_SCHEMA_COUNT 1137
+#define WT_STAT_CONN_LOCK_SCHEMA_COUNT 1141
/*! lock: schema lock application thread wait time (usecs) */
-#define WT_STAT_CONN_LOCK_SCHEMA_WAIT_APPLICATION 1138
+#define WT_STAT_CONN_LOCK_SCHEMA_WAIT_APPLICATION 1142
/*! lock: schema lock internal thread wait time (usecs) */
-#define WT_STAT_CONN_LOCK_SCHEMA_WAIT_INTERNAL 1139
+#define WT_STAT_CONN_LOCK_SCHEMA_WAIT_INTERNAL 1143
/*! lock: table lock acquisitions */
-#define WT_STAT_CONN_LOCK_TABLE_COUNT 1140
+#define WT_STAT_CONN_LOCK_TABLE_COUNT 1144
/*!
* lock: table lock application thread time waiting for the table lock
* (usecs)
*/
-#define WT_STAT_CONN_LOCK_TABLE_WAIT_APPLICATION 1141
+#define WT_STAT_CONN_LOCK_TABLE_WAIT_APPLICATION 1145
/*!
* lock: table lock internal thread time waiting for the table lock
* (usecs)
*/
-#define WT_STAT_CONN_LOCK_TABLE_WAIT_INTERNAL 1142
+#define WT_STAT_CONN_LOCK_TABLE_WAIT_INTERNAL 1146
/*! log: busy returns attempting to switch slots */
-#define WT_STAT_CONN_LOG_SLOT_SWITCH_BUSY 1143
+#define WT_STAT_CONN_LOG_SLOT_SWITCH_BUSY 1147
/*! log: consolidated slot closures */
-#define WT_STAT_CONN_LOG_SLOT_CLOSES 1144
+#define WT_STAT_CONN_LOG_SLOT_CLOSES 1148
/*! log: consolidated slot join races */
-#define WT_STAT_CONN_LOG_SLOT_RACES 1145
+#define WT_STAT_CONN_LOG_SLOT_RACES 1149
/*! log: consolidated slot join transitions */
-#define WT_STAT_CONN_LOG_SLOT_TRANSITIONS 1146
+#define WT_STAT_CONN_LOG_SLOT_TRANSITIONS 1150
/*! log: consolidated slot joins */
-#define WT_STAT_CONN_LOG_SLOT_JOINS 1147
+#define WT_STAT_CONN_LOG_SLOT_JOINS 1151
/*! log: consolidated slot unbuffered writes */
-#define WT_STAT_CONN_LOG_SLOT_UNBUFFERED 1148
+#define WT_STAT_CONN_LOG_SLOT_UNBUFFERED 1152
/*! log: log bytes of payload data */
-#define WT_STAT_CONN_LOG_BYTES_PAYLOAD 1149
+#define WT_STAT_CONN_LOG_BYTES_PAYLOAD 1153
/*! log: log bytes written */
-#define WT_STAT_CONN_LOG_BYTES_WRITTEN 1150
+#define WT_STAT_CONN_LOG_BYTES_WRITTEN 1154
/*! log: log files manually zero-filled */
-#define WT_STAT_CONN_LOG_ZERO_FILLS 1151
+#define WT_STAT_CONN_LOG_ZERO_FILLS 1155
/*! log: log flush operations */
-#define WT_STAT_CONN_LOG_FLUSH 1152
+#define WT_STAT_CONN_LOG_FLUSH 1156
/*! log: log force write operations */
-#define WT_STAT_CONN_LOG_FORCE_WRITE 1153
+#define WT_STAT_CONN_LOG_FORCE_WRITE 1157
/*! log: log force write operations skipped */
-#define WT_STAT_CONN_LOG_FORCE_WRITE_SKIP 1154
+#define WT_STAT_CONN_LOG_FORCE_WRITE_SKIP 1158
/*! log: log records compressed */
-#define WT_STAT_CONN_LOG_COMPRESS_WRITES 1155
+#define WT_STAT_CONN_LOG_COMPRESS_WRITES 1159
/*! log: log records not compressed */
-#define WT_STAT_CONN_LOG_COMPRESS_WRITE_FAILS 1156
+#define WT_STAT_CONN_LOG_COMPRESS_WRITE_FAILS 1160
/*! log: log records too small to compress */
-#define WT_STAT_CONN_LOG_COMPRESS_SMALL 1157
+#define WT_STAT_CONN_LOG_COMPRESS_SMALL 1161
/*! log: log release advances write LSN */
-#define WT_STAT_CONN_LOG_RELEASE_WRITE_LSN 1158
+#define WT_STAT_CONN_LOG_RELEASE_WRITE_LSN 1162
/*! log: log scan operations */
-#define WT_STAT_CONN_LOG_SCANS 1159
+#define WT_STAT_CONN_LOG_SCANS 1163
/*! log: log scan records requiring two reads */
-#define WT_STAT_CONN_LOG_SCAN_REREADS 1160
+#define WT_STAT_CONN_LOG_SCAN_REREADS 1164
/*! log: log server thread advances write LSN */
-#define WT_STAT_CONN_LOG_WRITE_LSN 1161
+#define WT_STAT_CONN_LOG_WRITE_LSN 1165
/*! log: log server thread write LSN walk skipped */
-#define WT_STAT_CONN_LOG_WRITE_LSN_SKIP 1162
+#define WT_STAT_CONN_LOG_WRITE_LSN_SKIP 1166
/*! log: log sync operations */
-#define WT_STAT_CONN_LOG_SYNC 1163
+#define WT_STAT_CONN_LOG_SYNC 1167
/*! log: log sync time duration (usecs) */
-#define WT_STAT_CONN_LOG_SYNC_DURATION 1164
+#define WT_STAT_CONN_LOG_SYNC_DURATION 1168
/*! log: log sync_dir operations */
-#define WT_STAT_CONN_LOG_SYNC_DIR 1165
+#define WT_STAT_CONN_LOG_SYNC_DIR 1169
/*! log: log sync_dir time duration (usecs) */
-#define WT_STAT_CONN_LOG_SYNC_DIR_DURATION 1166
+#define WT_STAT_CONN_LOG_SYNC_DIR_DURATION 1170
/*! log: log write operations */
-#define WT_STAT_CONN_LOG_WRITES 1167
+#define WT_STAT_CONN_LOG_WRITES 1171
/*! log: logging bytes consolidated */
-#define WT_STAT_CONN_LOG_SLOT_CONSOLIDATED 1168
+#define WT_STAT_CONN_LOG_SLOT_CONSOLIDATED 1172
/*! log: maximum log file size */
-#define WT_STAT_CONN_LOG_MAX_FILESIZE 1169
+#define WT_STAT_CONN_LOG_MAX_FILESIZE 1173
/*! log: number of pre-allocated log files to create */
-#define WT_STAT_CONN_LOG_PREALLOC_MAX 1170
+#define WT_STAT_CONN_LOG_PREALLOC_MAX 1174
/*! log: pre-allocated log files not ready and missed */
-#define WT_STAT_CONN_LOG_PREALLOC_MISSED 1171
+#define WT_STAT_CONN_LOG_PREALLOC_MISSED 1175
/*! log: pre-allocated log files prepared */
-#define WT_STAT_CONN_LOG_PREALLOC_FILES 1172
+#define WT_STAT_CONN_LOG_PREALLOC_FILES 1176
/*! log: pre-allocated log files used */
-#define WT_STAT_CONN_LOG_PREALLOC_USED 1173
+#define WT_STAT_CONN_LOG_PREALLOC_USED 1177
/*! log: records processed by log scan */
-#define WT_STAT_CONN_LOG_SCAN_RECORDS 1174
+#define WT_STAT_CONN_LOG_SCAN_RECORDS 1178
/*! log: total in-memory size of compressed records */
-#define WT_STAT_CONN_LOG_COMPRESS_MEM 1175
+#define WT_STAT_CONN_LOG_COMPRESS_MEM 1179
/*! log: total log buffer size */
-#define WT_STAT_CONN_LOG_BUFFER_SIZE 1176
+#define WT_STAT_CONN_LOG_BUFFER_SIZE 1180
/*! log: total size of compressed records */
-#define WT_STAT_CONN_LOG_COMPRESS_LEN 1177
+#define WT_STAT_CONN_LOG_COMPRESS_LEN 1181
/*! log: written slots coalesced */
-#define WT_STAT_CONN_LOG_SLOT_COALESCED 1178
+#define WT_STAT_CONN_LOG_SLOT_COALESCED 1182
/*! log: yields waiting for previous log file close */
-#define WT_STAT_CONN_LOG_CLOSE_YIELDS 1179
+#define WT_STAT_CONN_LOG_CLOSE_YIELDS 1183
/*! reconciliation: fast-path pages deleted */
-#define WT_STAT_CONN_REC_PAGE_DELETE_FAST 1180
+#define WT_STAT_CONN_REC_PAGE_DELETE_FAST 1184
/*! reconciliation: page reconciliation calls */
-#define WT_STAT_CONN_REC_PAGES 1181
+#define WT_STAT_CONN_REC_PAGES 1185
/*! reconciliation: page reconciliation calls for eviction */
-#define WT_STAT_CONN_REC_PAGES_EVICTION 1182
+#define WT_STAT_CONN_REC_PAGES_EVICTION 1186
/*! reconciliation: pages deleted */
-#define WT_STAT_CONN_REC_PAGE_DELETE 1183
+#define WT_STAT_CONN_REC_PAGE_DELETE 1187
/*! reconciliation: split bytes currently awaiting free */
-#define WT_STAT_CONN_REC_SPLIT_STASHED_BYTES 1184
+#define WT_STAT_CONN_REC_SPLIT_STASHED_BYTES 1188
/*! reconciliation: split objects currently awaiting free */
-#define WT_STAT_CONN_REC_SPLIT_STASHED_OBJECTS 1185
+#define WT_STAT_CONN_REC_SPLIT_STASHED_OBJECTS 1189
/*! session: open cursor count */
-#define WT_STAT_CONN_SESSION_CURSOR_OPEN 1186
+#define WT_STAT_CONN_SESSION_CURSOR_OPEN 1190
/*! session: open session count */
-#define WT_STAT_CONN_SESSION_OPEN 1187
+#define WT_STAT_CONN_SESSION_OPEN 1191
/*! session: table alter failed calls */
-#define WT_STAT_CONN_SESSION_TABLE_ALTER_FAIL 1188
+#define WT_STAT_CONN_SESSION_TABLE_ALTER_FAIL 1192
/*! session: table alter successful calls */
-#define WT_STAT_CONN_SESSION_TABLE_ALTER_SUCCESS 1189
+#define WT_STAT_CONN_SESSION_TABLE_ALTER_SUCCESS 1193
/*! session: table alter unchanged and skipped */
-#define WT_STAT_CONN_SESSION_TABLE_ALTER_SKIP 1190
+#define WT_STAT_CONN_SESSION_TABLE_ALTER_SKIP 1194
/*! session: table compact failed calls */
-#define WT_STAT_CONN_SESSION_TABLE_COMPACT_FAIL 1191
+#define WT_STAT_CONN_SESSION_TABLE_COMPACT_FAIL 1195
/*! session: table compact successful calls */
-#define WT_STAT_CONN_SESSION_TABLE_COMPACT_SUCCESS 1192
+#define WT_STAT_CONN_SESSION_TABLE_COMPACT_SUCCESS 1196
/*! session: table create failed calls */
-#define WT_STAT_CONN_SESSION_TABLE_CREATE_FAIL 1193
+#define WT_STAT_CONN_SESSION_TABLE_CREATE_FAIL 1197
/*! session: table create successful calls */
-#define WT_STAT_CONN_SESSION_TABLE_CREATE_SUCCESS 1194
+#define WT_STAT_CONN_SESSION_TABLE_CREATE_SUCCESS 1198
/*! session: table drop failed calls */
-#define WT_STAT_CONN_SESSION_TABLE_DROP_FAIL 1195
+#define WT_STAT_CONN_SESSION_TABLE_DROP_FAIL 1199
/*! session: table drop successful calls */
-#define WT_STAT_CONN_SESSION_TABLE_DROP_SUCCESS 1196
+#define WT_STAT_CONN_SESSION_TABLE_DROP_SUCCESS 1200
/*! session: table rebalance failed calls */
-#define WT_STAT_CONN_SESSION_TABLE_REBALANCE_FAIL 1197
+#define WT_STAT_CONN_SESSION_TABLE_REBALANCE_FAIL 1201
/*! session: table rebalance successful calls */
-#define WT_STAT_CONN_SESSION_TABLE_REBALANCE_SUCCESS 1198
+#define WT_STAT_CONN_SESSION_TABLE_REBALANCE_SUCCESS 1202
/*! session: table rename failed calls */
-#define WT_STAT_CONN_SESSION_TABLE_RENAME_FAIL 1199
+#define WT_STAT_CONN_SESSION_TABLE_RENAME_FAIL 1203
/*! session: table rename successful calls */
-#define WT_STAT_CONN_SESSION_TABLE_RENAME_SUCCESS 1200
+#define WT_STAT_CONN_SESSION_TABLE_RENAME_SUCCESS 1204
/*! session: table salvage failed calls */
-#define WT_STAT_CONN_SESSION_TABLE_SALVAGE_FAIL 1201
+#define WT_STAT_CONN_SESSION_TABLE_SALVAGE_FAIL 1205
/*! session: table salvage successful calls */
-#define WT_STAT_CONN_SESSION_TABLE_SALVAGE_SUCCESS 1202
+#define WT_STAT_CONN_SESSION_TABLE_SALVAGE_SUCCESS 1206
/*! session: table truncate failed calls */
-#define WT_STAT_CONN_SESSION_TABLE_TRUNCATE_FAIL 1203
+#define WT_STAT_CONN_SESSION_TABLE_TRUNCATE_FAIL 1207
/*! session: table truncate successful calls */
-#define WT_STAT_CONN_SESSION_TABLE_TRUNCATE_SUCCESS 1204
+#define WT_STAT_CONN_SESSION_TABLE_TRUNCATE_SUCCESS 1208
/*! session: table verify failed calls */
-#define WT_STAT_CONN_SESSION_TABLE_VERIFY_FAIL 1205
+#define WT_STAT_CONN_SESSION_TABLE_VERIFY_FAIL 1209
/*! session: table verify successful calls */
-#define WT_STAT_CONN_SESSION_TABLE_VERIFY_SUCCESS 1206
+#define WT_STAT_CONN_SESSION_TABLE_VERIFY_SUCCESS 1210
/*! thread-state: active filesystem fsync calls */
-#define WT_STAT_CONN_THREAD_FSYNC_ACTIVE 1207
+#define WT_STAT_CONN_THREAD_FSYNC_ACTIVE 1211
/*! thread-state: active filesystem read calls */
-#define WT_STAT_CONN_THREAD_READ_ACTIVE 1208
+#define WT_STAT_CONN_THREAD_READ_ACTIVE 1212
/*! thread-state: active filesystem write calls */
-#define WT_STAT_CONN_THREAD_WRITE_ACTIVE 1209
+#define WT_STAT_CONN_THREAD_WRITE_ACTIVE 1213
/*! thread-yield: application thread time evicting (usecs) */
-#define WT_STAT_CONN_APPLICATION_EVICT_TIME 1210
+#define WT_STAT_CONN_APPLICATION_EVICT_TIME 1214
/*! thread-yield: application thread time waiting for cache (usecs) */
-#define WT_STAT_CONN_APPLICATION_CACHE_TIME 1211
+#define WT_STAT_CONN_APPLICATION_CACHE_TIME 1215
/*! thread-yield: page acquire busy blocked */
-#define WT_STAT_CONN_PAGE_BUSY_BLOCKED 1212
+#define WT_STAT_CONN_PAGE_BUSY_BLOCKED 1216
/*! thread-yield: page acquire eviction blocked */
-#define WT_STAT_CONN_PAGE_FORCIBLE_EVICT_BLOCKED 1213
+#define WT_STAT_CONN_PAGE_FORCIBLE_EVICT_BLOCKED 1217
/*! thread-yield: page acquire locked blocked */
-#define WT_STAT_CONN_PAGE_LOCKED_BLOCKED 1214
+#define WT_STAT_CONN_PAGE_LOCKED_BLOCKED 1218
/*! thread-yield: page acquire read blocked */
-#define WT_STAT_CONN_PAGE_READ_BLOCKED 1215
+#define WT_STAT_CONN_PAGE_READ_BLOCKED 1219
/*! thread-yield: page acquire time sleeping (usecs) */
-#define WT_STAT_CONN_PAGE_SLEEP 1216
+#define WT_STAT_CONN_PAGE_SLEEP 1220
/*! transaction: number of named snapshots created */
-#define WT_STAT_CONN_TXN_SNAPSHOTS_CREATED 1217
+#define WT_STAT_CONN_TXN_SNAPSHOTS_CREATED 1221
/*! transaction: number of named snapshots dropped */
-#define WT_STAT_CONN_TXN_SNAPSHOTS_DROPPED 1218
+#define WT_STAT_CONN_TXN_SNAPSHOTS_DROPPED 1222
/*! transaction: transaction begins */
-#define WT_STAT_CONN_TXN_BEGIN 1219
+#define WT_STAT_CONN_TXN_BEGIN 1223
/*! transaction: transaction checkpoint currently running */
-#define WT_STAT_CONN_TXN_CHECKPOINT_RUNNING 1220
+#define WT_STAT_CONN_TXN_CHECKPOINT_RUNNING 1224
/*! transaction: transaction checkpoint generation */
-#define WT_STAT_CONN_TXN_CHECKPOINT_GENERATION 1221
+#define WT_STAT_CONN_TXN_CHECKPOINT_GENERATION 1225
/*! transaction: transaction checkpoint max time (msecs) */
-#define WT_STAT_CONN_TXN_CHECKPOINT_TIME_MAX 1222
+#define WT_STAT_CONN_TXN_CHECKPOINT_TIME_MAX 1226
/*! transaction: transaction checkpoint min time (msecs) */
-#define WT_STAT_CONN_TXN_CHECKPOINT_TIME_MIN 1223
+#define WT_STAT_CONN_TXN_CHECKPOINT_TIME_MIN 1227
/*! transaction: transaction checkpoint most recent time (msecs) */
-#define WT_STAT_CONN_TXN_CHECKPOINT_TIME_RECENT 1224
+#define WT_STAT_CONN_TXN_CHECKPOINT_TIME_RECENT 1228
/*! transaction: transaction checkpoint scrub dirty target */
-#define WT_STAT_CONN_TXN_CHECKPOINT_SCRUB_TARGET 1225
+#define WT_STAT_CONN_TXN_CHECKPOINT_SCRUB_TARGET 1229
/*! transaction: transaction checkpoint scrub time (msecs) */
-#define WT_STAT_CONN_TXN_CHECKPOINT_SCRUB_TIME 1226
+#define WT_STAT_CONN_TXN_CHECKPOINT_SCRUB_TIME 1230
/*! transaction: transaction checkpoint total time (msecs) */
-#define WT_STAT_CONN_TXN_CHECKPOINT_TIME_TOTAL 1227
+#define WT_STAT_CONN_TXN_CHECKPOINT_TIME_TOTAL 1231
/*! transaction: transaction checkpoints */
-#define WT_STAT_CONN_TXN_CHECKPOINT 1228
+#define WT_STAT_CONN_TXN_CHECKPOINT 1232
/*!
* transaction: transaction checkpoints skipped because database was
* clean
*/
-#define WT_STAT_CONN_TXN_CHECKPOINT_SKIPPED 1229
+#define WT_STAT_CONN_TXN_CHECKPOINT_SKIPPED 1233
/*! transaction: transaction failures due to cache overflow */
-#define WT_STAT_CONN_TXN_FAIL_CACHE 1230
+#define WT_STAT_CONN_TXN_FAIL_CACHE 1234
/*!
* transaction: transaction fsync calls for checkpoint after allocating
* the transaction ID
*/
-#define WT_STAT_CONN_TXN_CHECKPOINT_FSYNC_POST 1231
+#define WT_STAT_CONN_TXN_CHECKPOINT_FSYNC_POST 1235
/*!
* transaction: transaction fsync duration for checkpoint after
* allocating the transaction ID (usecs)
*/
-#define WT_STAT_CONN_TXN_CHECKPOINT_FSYNC_POST_DURATION 1232
+#define WT_STAT_CONN_TXN_CHECKPOINT_FSYNC_POST_DURATION 1236
/*! transaction: transaction range of IDs currently pinned */
-#define WT_STAT_CONN_TXN_PINNED_RANGE 1233
+#define WT_STAT_CONN_TXN_PINNED_RANGE 1237
/*! transaction: transaction range of IDs currently pinned by a checkpoint */
-#define WT_STAT_CONN_TXN_PINNED_CHECKPOINT_RANGE 1234
+#define WT_STAT_CONN_TXN_PINNED_CHECKPOINT_RANGE 1238
/*!
* transaction: transaction range of IDs currently pinned by named
* snapshots
*/
-#define WT_STAT_CONN_TXN_PINNED_SNAPSHOT_RANGE 1235
+#define WT_STAT_CONN_TXN_PINNED_SNAPSHOT_RANGE 1239
/*! transaction: transaction sync calls */
-#define WT_STAT_CONN_TXN_SYNC 1236
+#define WT_STAT_CONN_TXN_SYNC 1240
/*! transaction: transactions committed */
-#define WT_STAT_CONN_TXN_COMMIT 1237
+#define WT_STAT_CONN_TXN_COMMIT 1241
/*! transaction: transactions rolled back */
-#define WT_STAT_CONN_TXN_ROLLBACK 1238
+#define WT_STAT_CONN_TXN_ROLLBACK 1242
/*!
* @}
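
Because these keys are compile-time constants, the renumbering above is source-compatible but requires dependent applications to be rebuilt against the new header. For reference, any of these statistics is read through a statistics cursor by its WT_STAT_CONN_* key; a minimal sketch following the documented cursor API (the helper name and the elided error handling are illustrative, not part of this change):

#include <inttypes.h>
#include <stdio.h>
#include <wiredtiger.h>

/*
 * Illustrative only: print one connection statistic by its key.
 * Assumes "session" is an open WT_SESSION; error checks elided.
 */
static void
print_txn_commits(WT_SESSION *session)
{
	WT_CURSOR *cursor;
	const char *desc, *pvalue;
	int64_t value;

	/* Open a cursor over the connection-level statistics. */
	(void)session->open_cursor(
	    session, "statistics:", NULL, NULL, &cursor);

	/* Position on a single statistic using its WT_STAT_CONN_* key. */
	cursor->set_key(cursor, WT_STAT_CONN_TXN_COMMIT);
	(void)cursor->search(cursor);

	/* Each entry is a description, a printable string and the value. */
	(void)cursor->get_value(cursor, &desc, &pvalue, &value);
	printf("%s: %" PRId64 "\n", desc, value);

	(void)cursor->close(cursor);
}
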
diff --git a/src/support/stat.c b/src/support/stat.c
index 66710473ab9..167d17137ce 100644
--- a/src/support/stat.c
+++ b/src/support/stat.c
@@ -677,7 +677,11 @@ static const char * const __stats_connection_desc[] = {
"cache: eviction server unable to reach eviction goal",
"cache: eviction state",
"cache: eviction walks abandoned",
+ "cache: eviction worker thread active",
+ "cache: eviction worker thread created",
"cache: eviction worker thread evicting pages",
+ "cache: eviction worker thread removed",
+ "cache: eviction worker thread stable number",
"cache: failed eviction of pages that exceeded the in-memory maximum",
"cache: files with active eviction walks",
"cache: files with new eviction walks started",
@@ -958,7 +962,11 @@ __wt_stat_connection_clear_single(WT_CONNECTION_STATS *stats)
stats->cache_eviction_slow = 0;
/* not clearing cache_eviction_state */
stats->cache_eviction_walks_abandoned = 0;
+ /* not clearing cache_eviction_active_workers */
+ stats->cache_eviction_worker_created = 0;
stats->cache_eviction_worker_evicting = 0;
+ stats->cache_eviction_worker_removed = 0;
+ /* not clearing cache_eviction_stable_state_workers */
stats->cache_eviction_force_fail = 0;
/* not clearing cache_eviction_walks_active */
stats->cache_eviction_walks_started = 0;
@@ -1232,8 +1240,16 @@ __wt_stat_connection_aggregate(
to->cache_eviction_state += WT_STAT_READ(from, cache_eviction_state);
to->cache_eviction_walks_abandoned +=
WT_STAT_READ(from, cache_eviction_walks_abandoned);
+ to->cache_eviction_active_workers +=
+ WT_STAT_READ(from, cache_eviction_active_workers);
+ to->cache_eviction_worker_created +=
+ WT_STAT_READ(from, cache_eviction_worker_created);
to->cache_eviction_worker_evicting +=
WT_STAT_READ(from, cache_eviction_worker_evicting);
+ to->cache_eviction_worker_removed +=
+ WT_STAT_READ(from, cache_eviction_worker_removed);
+ to->cache_eviction_stable_state_workers +=
+ WT_STAT_READ(from, cache_eviction_stable_state_workers);
to->cache_eviction_force_fail +=
WT_STAT_READ(from, cache_eviction_force_fail);
to->cache_eviction_walks_active +=
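
The "not clearing" markers above separate gauges from counters: "eviction worker thread active" and "eviction worker thread stable number" report current state, so zeroing them between statistics snapshots would destroy the value, while the created/removed stats are ordinary counters that clear normally. (The same two gauges are added to wtstats' no_clear_list further below for the same reason.) A minimal sketch of the convention, with hypothetical field names:

#include <stdint.h>

/*
 * Illustrative only: a reduced statistics struct mixing counters and
 * gauges, mirroring the clear-single convention above. These field
 * names are hypothetical, not WiredTiger's.
 */
struct example_stats {
	int64_t workers_created;	/* counter: reset on each snapshot */
	int64_t workers_removed;	/* counter: reset on each snapshot */
	int64_t workers_active;		/* gauge: current state, never reset */
};

static void
example_stats_clear(struct example_stats *stats)
{
	stats->workers_created = 0;
	stats->workers_removed = 0;
	/* not clearing workers_active: it reports a current value */
}
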
diff --git a/src/support/thread_group.c b/src/support/thread_group.c
index a89468c367a..d04f8977a9a 100644
--- a/src/support/thread_group.c
+++ b/src/support/thread_group.c
@@ -71,12 +71,12 @@ __thread_group_grow(
/*
* __thread_group_shrink --
- * Decrease the number of running threads in the group, and free any
+ * Decrease the number of running threads in the group. Optionally free any
* memory associated with slots larger than the new count.
*/
static int
__thread_group_shrink(WT_SESSION_IMPL *session,
- WT_THREAD_GROUP *group, uint32_t new_count)
+ WT_THREAD_GROUP *group, uint32_t new_count, bool free_thread)
{
WT_DECL_RET;
WT_SESSION *wt_session;
@@ -105,14 +105,15 @@ __thread_group_shrink(WT_SESSION_IMPL *session,
WT_TRET(__wt_thread_join(session, thread->tid));
thread->tid = 0;
}
-
- if (thread->session != NULL) {
- wt_session = (WT_SESSION *)thread->session;
- WT_TRET(wt_session->close(wt_session, NULL));
- thread->session = NULL;
+ if (free_thread) {
+ if (thread->session != NULL) {
+ wt_session = (WT_SESSION *)thread->session;
+ WT_TRET(wt_session->close(wt_session, NULL));
+ thread->session = NULL;
+ }
+ __wt_free(session, thread);
+ group->threads[current_slot] = NULL;
}
- __wt_free(session, thread);
- group->threads[current_slot] = NULL;
}
/* Update the thread group state to match our changes */
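
The new free_thread flag splits "stop the thread" from "discard its slot": permanent shrinks close the per-thread session and free the WT_THREAD structure, while the dynamic eviction tuner leaves the slot allocated so the worker can be restarted cheaply when the group grows again. The two modes as used elsewhere in this patch:

/*
 * Permanent shrink (resize/destroy): stop the threads and also close
 * each per-thread session and free its WT_THREAD slot.
 */
WT_RET(__thread_group_shrink(session, group, new_max, true));

/*
 * Temporary shrink (dynamic tuning): stop one thread but keep its
 * slot and session so a later grow can restart it cheaply.
 */
WT_TRET(__thread_group_shrink(
    session, group, group->current_threads - 1, false));
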
@@ -145,11 +146,14 @@ __thread_group_resize(
if (new_min == group->min && new_max == group->max)
return (0);
+ if (new_min > new_max)
+ return (EINVAL);
+
/*
- * Coll shrink to reduce the number of thread structures and running
+ * Call shrink to reduce the number of thread structures and running
* threads if required by the change in group size.
*/
- WT_RET(__thread_group_shrink(session, group, new_max));
+ WT_RET(__thread_group_shrink(session, group, new_max, true));
/*
* Only reallocate the thread array if it is the largest ever, since
@@ -289,7 +293,7 @@ __wt_thread_group_destroy(WT_SESSION_IMPL *session, WT_THREAD_GROUP *group)
WT_ASSERT(session, __wt_rwlock_islocked(session, &group->lock));
/* Shut down all threads and free associated resources. */
- WT_TRET(__thread_group_shrink(session, group, 0));
+ WT_TRET(__thread_group_shrink(session, group, 0, true));
__wt_free(session, group->threads);
@@ -332,3 +336,30 @@ __wt_thread_group_start_one(
return (ret);
}
+
+/*
+ * __wt_thread_group_stop_one --
+ * Stop one thread if possible.
+ */
+int
+__wt_thread_group_stop_one(
+ WT_SESSION_IMPL *session, WT_THREAD_GROUP *group, bool wait)
+{
+ WT_DECL_RET;
+
+ if (group->current_threads <= group->min)
+ return (0);
+
+ if (wait)
+ __wt_writelock(session, &group->lock);
+ else if (__wt_try_writelock(session, &group->lock) != 0)
+ return (0);
+
+ /* Recheck the bounds now that we hold the lock. */
+ if (group->current_threads > group->min)
+ WT_TRET(__thread_group_shrink(
+ session, group, group->current_threads - 1, false));
+ __wt_writeunlock(session, &group->lock);
+
+ return (ret);
+}
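
__wt_thread_group_stop_one above follows the unlocked-check, lock, recheck idiom: the first bounds test is only a cheap fast path, the authoritative decision is repeated once the write lock is held, and a contended try-lock simply gives up rather than stall the caller. A generic sketch of the same idiom using POSIX locks (all names here are illustrative, not WiredTiger code):

#include <pthread.h>
#include <stdbool.h>

struct pool {
	pthread_rwlock_t lock;
	unsigned int current, min;
};

static bool
pool_stop_one(struct pool *p, bool wait)
{
	bool stopped = false;

	/* Fast path: nothing to do, avoid taking the lock at all. */
	if (p->current <= p->min)
		return (false);

	if (wait)
		(void)pthread_rwlock_wrlock(&p->lock);
	else if (pthread_rwlock_trywrlock(&p->lock) != 0)
		return (false);	/* contended: let the caller retry later */

	/* Recheck under the lock: the count may have changed meanwhile. */
	if (p->current > p->min) {
		--p->current;	/* stands in for stopping one worker */
		stopped = true;
	}
	(void)pthread_rwlock_unlock(&p->lock);
	return (stopped);
}
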
diff --git a/tools/wtstats/stat_data.py b/tools/wtstats/stat_data.py
index 5d385cda705..a94ce524ae3 100644
--- a/tools/wtstats/stat_data.py
+++ b/tools/wtstats/stat_data.py
@@ -128,6 +128,8 @@ no_clear_list = [
'cache: eviction currently operating in aggressive mode',
'cache: eviction empty score',
'cache: eviction state',
+ 'cache: eviction worker thread active',
+ 'cache: eviction worker thread stable number',
'cache: files with active eviction walks',
'cache: maximum bytes configured',
'cache: maximum page size at eviction',