path: root/HOWTO/BENCHMARKS.md
author     Thomas Depierre <depierre.thomas@gmail.com>  2021-01-14 15:17:04 +0100
committer  Thomas Depierre <depierre.thomas@gmail.com>  2021-01-14 15:18:04 +0100
commit     8abd67fa2ef741d935380784eab4849916021533 (patch)
tree       70ef62fee0cdc72ff347279053740afae4b38687 /HOWTO/BENCHMARKS.md
parent     e0e9d8bad472fa94f4919e6db798e45913dc9eec (diff)
download   erlang-8abd67fa2ef741d935380784eab4849916021533.tar.gz
Bring HOWTO/BENCHMARKS.md formatting to modern markdown
Diffstat (limited to 'HOWTO/BENCHMARKS.md')
-rw-r--r--  HOWTO/BENCHMARKS.md  48
1 file changed, 25 insertions(+), 23 deletions(-)
diff --git a/HOWTO/BENCHMARKS.md b/HOWTO/BENCHMARKS.md
index 28a590dfd2..b82f57ed24 100644
--- a/HOWTO/BENCHMARKS.md
+++ b/HOWTO/BENCHMARKS.md
@@ -7,25 +7,25 @@ run benchmarks you have to [release the tests][] just as you normally would.
Note that many of these benchmarks were developed to test a specific feature
under a specific setting. We strive to keep the benchmarks up-to-date, but alas
-time is not an endless resource so some benchmarks will be outdated and
+time is not an endless resource, so some benchmarks will be outdated and
irrelevant.
Running the benchmarks
----------------------
-As with testing, `ts` is used to run the benchmarks. Before running any
-benchmarks you have to [install the tests][]. To get a listing of all
+As with testing, `ts` is used to run the benchmarks. Before running any
+benchmarks, you have to [install the tests][]. To get a listing of all
benchmarks you have available, call `ts:benchmarks()`.
-To run all benchmarks call `ts:bench()`. This will run all benchmarks using
+To run all benchmarks, call `ts:bench()`. This will run all benchmarks using
the emulator which is in your `$PATH` (Note that this does not have to be the
-same as from which the benchmarks were built from). All the results of the
-benchmarks are put in a folder in `$TESTROOT/test_server/` called
-`YYYY_MO_DDTHH_MI_SS`.
+same as the one the benchmarks were built with). All the results of the
+benchmarks are put in a folder in `$TESTROOT/test_server/` called
+`YYYY_MO_DDTHH_MI_SS`.
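+For example, a session could look like the following sketch (the exact
+listing depends on which test suites you have installed):
+
+```erlang
+%% In the Erlang shell, after installing the tests:
+ts:benchmarks().  %% list the available benchmarks
+ts:bench().       %% run them all with the emulator found in $PATH;
+                  %% results end up under $TESTROOT/test_server/
+```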
Each benchmark is run multiple times and the data for all runs is collected in
-the files within the benchmark folder. All benchmarks are written so that
-higher values are better.
+the files within the benchmark folder. All benchmarks are written so that
+higher values are better.
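+If what you actually measure is a duration, report a rate instead so that
+higher stays better. A minimal sketch, assuming `NumOps` operations took
+`ElapsedMicroSecs` microseconds in total:
+
+```erlang
+%% Convert a measured duration into a rate before reporting it.
+OpsPerSec = NumOps / (ElapsedMicroSecs / 1000000).
+```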
Writing benchmarks
------------------
@@ -37,24 +37,28 @@ might want to add a skip clause to `AppName.spec` for the benchmarks if you do
not want them to be run in the nightly tests.
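+Such a skip clause could look like the following sketch (the directory and
+suite name are hypothetical):
+
+```erlang
+%% In AppName.spec: skip the benchmark suite in the regular test runs.
+{skip_suites, "../my_app_test", [my_app_bench_SUITE],
+ "Benchmarks are skipped in the nightly tests"}.
+```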
Results of benchmarks are sent using the ct_event mechanism and automatically
-collected and formatted by ts.
+collected and formatted by `ts`.
-    ct_event:notify(
-        #event{name = benchmark_data,
-               data = [{value,TPS}]}).
+```erlang
+ct_event:notify(
+    #event{name = benchmark_data,
+           data = [{value,TPS}]}).
+```
The application, suite and testcase associated with the value is automatically
detected. If you want to supply your own, you can include `suite` and/or `name`
with the data, e.g.:
-    ct_event:notify(
-        #event{name = benchmark_data,
-               data = [{suite,"erts_bench"},
-                       {name,"ets_transactions_per_sec"},
-                       {value,TPS}]}).
-
-The reason for using the internal ct_event and not ct is because the benchmark
-code has to be backwards compatible with at least R14.
+```erlang
+ct_event:notify(
+    #event{name = benchmark_data,
+           data = [{suite,"erts_bench"},
+                   {name,"ets_transactions_per_sec"},
+                   {value,TPS}]}).
+```
+
+The reason for using the internal `ct_event` and not `ct` is that the benchmark
+code has to be backwards compatible with at least R14.
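+Putting this together, a benchmark testcase could look like the following
+sketch (the testcase, table and the suite/name strings are hypothetical, and
+the `#event{}` record is assumed to come from common_test's `ct_event.hrl`):
+
+```erlang
+-include_lib("common_test/include/ct_event.hrl").
+
+ets_insert_bench(_Config) ->
+    Tab = ets:new(bench_tab, [public]),
+    N = 100000,
+    %% timer:tc/1 returns the elapsed time in microseconds.
+    {MicroSecs, _} = timer:tc(fun() ->
+                                  [ets:insert(Tab, {I, I}) ||
+                                      I <- lists:seq(1, N)]
+                              end),
+    %% Report the raw rate; ts averages over the runs itself.
+    TPS = N / (MicroSecs / 1000000),
+    ct_event:notify(
+        #event{name = benchmark_data,
+               data = [{suite,"erts_bench"},
+                       {name,"ets_inserts_per_sec"},
+                       {value,TPS}]}).
+```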
The value that is reported should be as raw as possible, i.e. you should not
do any averaging of the value before reporting. The tools we use to collect the
@@ -67,7 +71,5 @@ Viewing benchmarks
At the time of writing this HOWTO, the tool for viewing benchmark results is
not available as open source. This will hopefully change in the near future.
-
[release the tests]: TESTING.md#releasing-tests
[install the tests]: TESTING.md#configuring-the-test-environment
-