path: root/src/third_party/wiredtiger/src/include/stat.h
author:    Luke Chen <luke.chen@mongodb.com>  2019-10-15 00:55:34 +0000
committer: evergreen <evergreen@mongodb.com>  2019-10-15 00:55:34 +0000
commit:    d819272026fd1c21b9fb18ae692648cfa590aa68 (patch)
tree:      e2a59514b2c5f84f76b7aec09b2ec850d0f753dc /src/third_party/wiredtiger/src/include/stat.h
parent:    ac0b3c4052e88c6626b4bd3129d4190e2d0aa179 (diff)
download:  mongo-d819272026fd1c21b9fb18ae692648cfa590aa68.tar.gz
Import wiredtiger: af2cb8f052184a94635c3bfc358620dd36df5828 from branch mongodb-4.4
ref: 3818fb14bd..af2cb8f052 for: 4.3.1
WT-4972 Add new RHEL Evergreen build variant
WT-4975 Migrate Jenkins "wiredtiger-pull-request-asan" job to Evergreen
WT-5123 Fill multi-paragraph comments
WT-5150 LAS sweep is not removing the entries that are no longer required
WT-5161 Remove deprecated git.apply_patch
WT-5163 Fix ignored failure return in WT_CURSOR.next when random retrieval configured
WT-5164 Fix inconsistent underscore/dash output
WT-5166 Allow configuring configure flags in evergreen builds
Diffstat (limited to 'src/third_party/wiredtiger/src/include/stat.h')
-rw-r--r--  src/third_party/wiredtiger/src/include/stat.h | 119
1 file changed, 54 insertions, 65 deletions
diff --git a/src/third_party/wiredtiger/src/include/stat.h b/src/third_party/wiredtiger/src/include/stat.h
index 53d3f2126ae..14665c4df75 100644
--- a/src/third_party/wiredtiger/src/include/stat.h
+++ b/src/third_party/wiredtiger/src/include/stat.h
@@ -9,74 +9,67 @@
/*
* Statistics counters:
*
- * We use an array of statistics structures; threads write different structures
- * to avoid writing the same cache line and incurring cache coherency overheads,
- * which can dramatically slow fast and otherwise read-mostly workloads.
+ * We use an array of statistics structures; threads write different structures to avoid writing the
+ * same cache line and incurring cache coherency overheads, which can dramatically slow fast and
+ * otherwise read-mostly workloads.
*
- * With an 8B statistics value and 64B cache-line alignment, 8 values share the
- * same cache line. There are collisions when different threads choose the same
- * statistics structure and update values that live on the cache line. There is
- * likely some locality however: a thread updating the cursor search statistic
- * is likely to update other cursor statistics with a chance of hitting already
- * cached values.
+ * With an 8B statistics value and 64B cache-line alignment, 8 values share the same cache line.
+ * There are collisions when different threads choose the same statistics structure and update
+ * values that live on the cache line. There is likely some locality however: a thread updating the
+ * cursor search statistic is likely to update other cursor statistics with a chance of hitting
+ * already cached values.
*
- * The actual statistic value must be signed, because one thread might increment
- * the value in its structure, and then another thread might decrement the same
- * value in another structure (where the value was initially zero), so the value
- * in the second thread's slot will go negative.
+ * The actual statistic value must be signed, because one thread might increment the value in its
+ * structure, and then another thread might decrement the same value in another structure (where the
+ * value was initially zero), so the value in the second thread's slot will go negative.
*
- * When reading a statistics value, the array values are summed and returned to
- * the caller. The summation is performed without locking, so the value read
- * may be inconsistent (and might be negative, if increments/decrements race
- * with the reader).
+ * When reading a statistics value, the array values are summed and returned to the caller. The
+ * summation is performed without locking, so the value read may be inconsistent (and might be
+ * negative, if increments/decrements race with the reader).
*
- * Choosing how many structures isn't easy: obviously, a smaller number creates
- * more conflicts while a larger number uses more memory.
+ * Choosing how many structures isn't easy: obviously, a smaller number creates more conflicts while
+ * a larger number uses more memory.
*
- * Ideally, if the application running on the system is CPU-intensive, and using
- * all CPUs on the system, we want to use the same number of slots as there are
- * CPUs (because their L1 caches are the units of coherency). However, in
- * practice we cannot easily determine how many CPUs are actually available to
- * the application.
+ * Ideally, if the application running on the system is CPU-intensive, and using all CPUs on the
+ * system, we want to use the same number of slots as there are CPUs (because their L1 caches are
+ * the units of coherency). However, in practice we cannot easily determine how many CPUs are
+ * actually available to the application.
*
- * Our next best option is to use the number of threads in the application as a
- * heuristic for the number of CPUs (presumably, the application architect has
- * figured out how many CPUs are available). However, inside WiredTiger we don't
- * know when the application creates its threads.
+ * Our next best option is to use the number of threads in the application as a heuristic for the
+ * number of CPUs (presumably, the application architect has figured out how many CPUs are
+ * available). However, inside WiredTiger we don't know when the application creates its threads.
*
- * For now, we use a fixed number of slots. Ideally, we would approximate the
- * largest number of cores we expect on any machine where WiredTiger might be
- * run, however, we don't want to waste that much memory on smaller machines.
- * As of 2015, machines with more than 24 CPUs are relatively rare.
+ * For now, we use a fixed number of slots. Ideally, we would approximate the largest number of
+ * cores we expect on any machine where WiredTiger might be run, however, we don't want to waste
+ * that much memory on smaller machines. As of 2015, machines with more than 24 CPUs are relatively
+ * rare.
*
- * Default hash table size; use a prime number of buckets rather than assuming
- * a good hash (Reference Sedgewick, Algorithms in C, "Hash Functions").
+ * Default hash table size; use a prime number of buckets rather than assuming a good hash
+ * (Reference Sedgewick, Algorithms in C, "Hash Functions").
*/
#define WT_COUNTER_SLOTS 23
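The comment above describes the core trick: an array of per-slot signed counters, written by different threads and summed without locking on read. A minimal standalone sketch (the names `counter_add`, `counter_read`, and `N_SLOTS` are illustrative, not WiredTiger's API):

```c
#include <stdint.h>

/* One signed counter per slot; threads update different slots to avoid
 * sharing cache lines. 23 mirrors WT_COUNTER_SLOTS above. */
#define N_SLOTS 23

static int64_t slots[N_SLOTS];

static void
counter_add(int slot, int64_t v)
{
    slots[slot] += v;
}

/* Reading sums every slot without locking; with concurrent updates the
 * total may be momentarily inconsistent, or even negative. */
static int64_t
counter_read(void)
{
    int64_t sum = 0;
    for (int i = 0; i < N_SLOTS; i++)
        sum += slots[i];
    return (sum);
}
```

Note why the slots must be signed: a thread that increments in slot 0 and a different thread that decrements the same logical statistic in slot 1 leave slot 1 at -1, even though the aggregate is correct.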
/*
* WT_STATS_SLOT_ID is the thread's slot ID for the array of structures.
*
- * Ideally, we want a slot per CPU, and we want each thread to index the slot
- * corresponding to the CPU it runs on. Unfortunately, getting the ID of the
- * current CPU is difficult: some operating systems provide a system call to
- * acquire a CPU ID, but not all (regardless, making a system call to increment
- * a statistics value is far too expensive).
+ * Ideally, we want a slot per CPU, and we want each thread to index the slot corresponding to the
+ * CPU it runs on. Unfortunately, getting the ID of the current CPU is difficult: some operating
+ * systems provide a system call to acquire a CPU ID, but not all (regardless, making a system call
+ * to increment a statistics value is far too expensive).
*
- * Our second-best option is to use the thread ID. Unfortunately, there is no
- * portable way to obtain a unique thread ID that's a small-enough number to
- * be used as an array index (portable thread IDs are usually a pointer or an
- * opaque chunk, not a simple integer).
+ * Our second-best option is to use the thread ID. Unfortunately, there is no portable way to obtain
+ * a unique thread ID that's a small-enough number to be used as an array index (portable thread IDs
+ * are usually a pointer or an opaque chunk, not a simple integer).
*
- * Our solution is to use the session ID; there is normally a session per thread
- * and the session ID is a small, monotonically increasing number.
+ * Our solution is to use the session ID; there is normally a session per thread and the session ID
+ * is a small, monotonically increasing number.
*/
#define WT_STATS_SLOT_ID(session) (((session)->id) % WT_COUNTER_SLOTS)
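The macro above hashes a session into a slot by taking the session ID modulo the slot count. A small sketch of the same idea, using a stand-in `struct session` with only the `id` field (the real WT_SESSION_IMPL carries much more):

```c
#include <stdint.h>

#define COUNTER_SLOTS 23

/* Hypothetical stand-in for the session structure: only id matters here. */
struct session {
    uint32_t id;
};

/* Mirror of WT_STATS_SLOT_ID: sessions map to slots modulo the slot count,
 * so sessions 5 and 28 collide on slot 5. */
#define STATS_SLOT_ID(s) (((s)->id) % COUNTER_SLOTS)
```

Because session IDs are small and monotonically increasing, the first 23 sessions each get a private slot; beyond that, collisions are accepted as the price of a fixed-size array.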
/*
- * Statistic structures are arrays of int64_t's. We have functions to read/write
- * those structures regardless of the specific statistic structure we're working
- * with, by translating statistics structure field names to structure offsets.
+ * Statistic structures are arrays of int64_t's. We have functions to read/write those structures
+ * regardless of the specific statistic structure we're working with, by translating statistics
+ * structure field names to structure offsets.
*
* Translate a statistic's value name to an offset in the array.
*/
@@ -109,20 +102,17 @@ __wt_stats_aggregate(void *stats_arg, int slot)
aggr_v += stats[i][slot];
/*
- * This can race. However, any implementation with a single value can
- * race as well, different threads could set the same counter value
- * simultaneously. While we are making races more likely, we are not
- * fundamentally weakening the isolation semantics found in updating a
- * single value.
+ * This can race. However, any implementation with a single value can race as well, different
+ * threads could set the same counter value simultaneously. While we are making races more
+ * likely, we are not fundamentally weakening the isolation semantics found in updating a single
+ * value.
*
- * Additionally, the aggregation can go negative (imagine a thread
- * incrementing a value after aggregation has passed its slot and a
- * second thread decrementing a value before aggregation has reached
- * its slot).
+ * Additionally, the aggregation can go negative (imagine a thread incrementing a value after
+ * aggregation has passed its slot and a second thread decrementing a value before aggregation
+ * has reached its slot).
*
- * For historic API compatibility, the external type is a uint64_t;
- * limit our return to positive values, negative numbers would just
- * look really, really large.
+ * For historic API compatibility, the external type is a uint64_t; limit our return to positive
+ * values, negative numbers would just look really, really large.
*/
if (aggr_v < 0)
aggr_v = 0;
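The hunk above shows the aggregation path: sum a statistic across all slots, then clamp negative results before returning, since the external type is unsigned. A self-contained sketch of that logic (simplified to a flat array of one statistic's slot values; `aggregate` is a hypothetical name):

```c
#include <stdint.h>

/* Sum one statistic's per-slot values and clamp to zero, mirroring the
 * tail of __wt_stats_aggregate. Racing increments/decrements can drive
 * the sum negative; the external type is uint64_t, so returning a
 * negative sum unclamped would look really, really large. */
static uint64_t
aggregate(const int64_t *vals, int n)
{
    int64_t aggr_v = 0;

    for (int i = 0; i < n; i++)
        aggr_v += vals[i];
    if (aggr_v < 0)
        aggr_v = 0;
    return ((uint64_t)aggr_v);
}
```

A transiently negative sum arises exactly as the comment says: aggregation passes slot A, a thread increments in A, another decrements in a not-yet-visited slot B, and the reader sees only the decrement.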
@@ -223,12 +213,11 @@ __wt_stats_clear(void *stats_arg, int slot)
#define WT_STAT_CONN_SET(session, fld, value) WT_STAT_SET(session, S2C(session)->stats, fld, value)
/*
- * Update data-source handle statistics if statistics gathering is enabled
- * and the data-source handle is set.
+ * Update data-source handle statistics if statistics gathering is enabled and the data-source
+ * handle is set.
*
- * XXX
- * We shouldn't have to check if the data-source handle is NULL, but it's
- * necessary until everything is converted to using data-source handles.
+ * XXX We shouldn't have to check if the data-source handle is NULL, but it's necessary until
+ * everything is converted to using data-source handles.
*/
#define WT_STAT_DATA_DECRV(session, fld, value) \
do { \
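The macro is cut off here, but the comment above it states the shape: update the data-source statistic only when statistics gathering is enabled and the handle is non-NULL. A hypothetical miniature of that guard (all names here are illustrative, not WiredTiger's):

```c
#include <stddef.h>
#include <stdint.h>

struct dhandle {
    int64_t stat_fld;
};

struct session {
    struct dhandle *dhandle;
    int stats_enabled;
};

/* Guarded decrement: a no-op when statistics are off or the data-source
 * handle has not been set yet, matching the XXX note above. */
#define DATA_DECRV(s, v)                                    \
    do {                                                    \
        if ((s)->stats_enabled && (s)->dhandle != NULL)     \
            (s)->dhandle->stat_fld -= (int64_t)(v);         \
    } while (0)
```

Wrapping the body in `do { ... } while (0)` keeps the macro usable as a single statement, e.g. inside an un-braced `if`.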