path: root/perf/README
author    Chris Wilson <chris@chris-wilson.co.uk>  2009-06-10 08:49:39 +0100
committer Chris Wilson <chris@chris-wilson.co.uk>  2009-06-10 08:52:50 +0100
commit    81b5dc42b0e754d602506a8ccd231df9afd71593 (patch)
tree      5a08dae49652b52bbede6a05b4658d59472d52a9 /perf/README
parent    ec92e633edd377747155b60aa225b266c38bc498 (diff)
download  cairo-81b5dc42b0e754d602506a8ccd231df9afd71593.tar.gz
[perf] Expand the section on cairo-perf-trace in the README
Promote the information on how to use cairo-perf-trace and include it immediately after the details on cairo-perf. This should make it much clearer how to replay the traces, and what the difference is between the two benchmarks.
Diffstat (limited to 'perf/README')
-rw-r--r--  perf/README | 37
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/perf/README b/perf/README
index abf0687d1..fd6eb308d 100644
--- a/perf/README
+++ b/perf/README
@@ -9,8 +9,20 @@ more details on other options for running the suite below.
Running the cairo performance suite
-----------------------------------
-The lowest-level means of running the test suite is with the
-cairo-perf program, (which is what "make perf" executes). Some
+The performance suite is composed of two types of tests, micro- and
+macro-benchmarks. The micro-benchmarks are a series of hand-written,
+short, synthetic tests that measure the speed of doing a simple
+operation such as painting a surface or showing glyphs. These aim to
+give very good feedback on whether a performance related patch is
+successful without causing any performance degradations elsewhere. The
+second type of benchmark consists of replaying a cairo-trace from a
+large application during typical usage. These aim to give an overall
+feel as to whether cairo is faster for everyday use.
+
+Running the micro-benchmarks
+----------------------------
+The micro-benchmarks are compiled into a single executable called
+cairo-perf, which is what "make perf" executes. Some
examples of running it:
# Report on all tests with default number of iterations:
@@ -29,6 +41,25 @@ when using cairo-perf-diff to compare separate runs (see more
below). The advantage of using the raw mode is that test runs can be
generated incrementally and appended to existing reports.
+Running the macro-benchmarks
+----------------------------
+The macro-benchmarks are run by a single program called
+cairo-perf-trace, which is also executed by "make perf".
+cairo-perf-trace loops over the series of traces stored beneath
+cairo-traces/. cairo-perf-trace produces the same output and takes the
+same arguments as cairo-perf. Some examples of running it:
+
+ # Report on all tests with default number of iterations:
+ ./cairo-perf-trace
+
+ # Report on 100 iterations of all firefox tests:
+ ./cairo-perf-trace -i 100 firefox
+
+ # Generate raw results for 10 iterations into cairo.perf
+ ./cairo-perf-trace -r -i 10 > cairo.perf
+ # Append 10 more iterations of the poppler tests
+ ./cairo-perf-trace -r -i 10 poppler >> cairo.perf
+
Generating comparisons of separate runs
---------------------------------------
It's often useful to generate a chart showing the comparison of two
@@ -180,7 +211,7 @@ added:
64x64.
-How to benchmark traces
+How to record new traces
-----------------------
Using cairo-trace you can record the exact sequence of graphic operations
made by an application and replay them later. These traces can then be
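The recording-and-replay workflow described in that final section can be sketched as follows, in the same style as the examples above. (This is an illustrative sketch: cairo-trace is invoked by prefixing it to the application's command line, but the exact name of the trace file it writes, shown here as a placeholder, depends on the application name and process id.)

    # Record the cairo operations made by an application:
    cairo-trace some-application

    # Replay the resulting trace (placeholder filename) with the
    # macro-benchmark harness:
    ./cairo-perf-trace some-application.trace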