path: root/t/perf/testsuite-recheck.sh
author    Stefano Lattarini <stefano.lattarini@gmail.com>  2012-05-29 11:58:02 +0200
committer Stefano Lattarini <stefano.lattarini@gmail.com>  2012-05-29 12:04:54 +0200
commit    ff022f46d098098bc17100332e2c96ebef715dff (patch)
tree      b32b43823c0412ff4d889a697d40205db68b0b0d /t/perf/testsuite-recheck.sh
parent    e6184b2ce04c2cb2aab55377b0fbd615a99d62c8 (diff)
download  automake-ff022f46d098098bc17100332e2c96ebef715dff.tar.gz
perf: beginning of a performance testsuite
Some tests in the Automake testsuite already aim only at verifying the
performance, rather than the correctness, of some operations.  Still,
they are somewhat shoehorned and forced into the PASS/FAIL framework
(say, with the 'ulimit' shell builtin used to verify that some operation
doesn't take up too much time or memory), but that is conceptually a
stretch, and has already caused problems in practice (see automake
bug#11512 for an example).

So we start moving the "performance tests" out of the testsuite proper,
and make them run only "on demand" (when the user exports the variable
'AM_TESTSUITE_PERF' to "yes").

Ideally, we should provide those tests with a custom runner/driver that
measures and displays the relevant performance information, but doing
that correctly and with the right APIs is definitely more difficult, so
we leave it for a later step (and hope we'll take such a step
eventually).

* t/cond29.sh: Move ...
* t/perf/cond.sh: ... here, and adjust.
* t/testsuite-recheck-speed.sh: Move ...
* t/perf/testsuite-recheck.sh: ... here.
* t/testsuite-summary-speed.sh: Move ...
* t/perf/testsuite-summary.sh: ... here.
* t/list-of-tests.mk (perf_TESTS): New variable, listing the tests
in the 't/perf' directory.
(handwritten_TESTS): Adjust.
* defs: Skip any tests in the 't/perf/' subdirectory unless the
'AM_TESTSUITE_PERF' variable is set to "yes" or "y".
* .gitignore: Update.

Signed-off-by: Stefano Lattarini <stefano.lattarini@gmail.com>
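The opt-in gate described for 'defs' can be sketched roughly as follows.
This is a hypothetical simplification, not the actual 'defs' code; the
helper name 'maybe_skip_perf_test' is invented for illustration, and
exit status 77 is used because it is the harness's conventional
"skipped" status.

```shell
#!/bin/sh
# Hypothetical sketch of the gating described above: tests living
# under 't/perf/' run only when AM_TESTSUITE_PERF is "yes" or "y".
# (The helper name is invented; the real check is inline in 'defs'.)
maybe_skip_perf_test ()
{
  case $1 in
    */perf/*)
      case ${AM_TESTSUITE_PERF-} in
        y|yes) ;;        # explicitly enabled: let the test run
        *) return 77 ;;  # 77 = conventional "skipped" exit status
      esac ;;
  esac
  return 0
}

AM_TESTSUITE_PERF=yes
maybe_skip_perf_test t/perf/testsuite-recheck.sh && echo "perf test runs"
AM_TESTSUITE_PERF=no
maybe_skip_perf_test t/perf/testsuite-recheck.sh || echo "perf test skipped"
```

Ordinary tests (anything outside 't/perf/') are unaffected by the
variable and always proceed.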
Diffstat (limited to 't/perf/testsuite-recheck.sh')
-rwxr-xr-x  t/perf/testsuite-recheck.sh  98
1 file changed, 98 insertions, 0 deletions
diff --git a/t/perf/testsuite-recheck.sh b/t/perf/testsuite-recheck.sh
new file mode 100755
index 000000000..50cc03ba2
--- /dev/null
+++ b/t/perf/testsuite-recheck.sh
@@ -0,0 +1,98 @@
+#! /bin/sh
+# Copyright (C) 2012 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# Check performance of recheck target in the face of many failed tests.
+# FIXME: this test is not currently able to detect whether the measured
+# FIXME: performance is too low, and FAIL accordingly; it just offers an
+# FIXME: easy way to verify how effective a performance optimization is.
+
+. ./defs || Exit 1
+
+count=5000
+
+cat >> configure.ac <<'END'
+AC_OUTPUT
+END
+
+cat > Makefile.am <<END
+count_expected = $count
+TEST_EXTENSIONS = .t
+## Updated later.
+TESTS =
+END
+
+# This should ensure that our timing won't be disturbed by the time
+# that would be actually required to run any of:
+# - the test cases
+# - the test driver executing them
+# - the recipe to create the final test-suite log.
+cat >> Makefile.am << 'END'
+AUTOMAKE_OPTIONS = -Wno-override
+## These should never be run.
+T_LOG_COMPILER = false
+T_LOG_DRIVER = false
+
+# The recipe of this target also serves as a sanity check.
+$(TEST_SUITE_LOG):
+## For debugging.
+ @echo "RE-RUN:"; for i in $(TEST_LOGS); do echo " $$i"; done
+## All the test cases should have been re-run.
+ @count_got=`for i in $(TEST_LOGS); do echo $$i; done | wc -l` \
+ && echo "Count expected: $(count_expected)" \
+ && echo "Count obtained: $$count_got" \
+ && test $$count_got -eq $(count_expected)
+## Pre-existing log files of the tests to re-run should have been
+## removed by the 'recheck' target.
+	@for i in $(TEST_LOGS); do \
+	  test ! -f $$i || { echo "$$i exists!"; exit 1; }; \
+	done
+## Actually create the target file, for extra safety.
+ @echo dummy > $@
+END
+
+# Updated later.
+: > all
+
+# Temporarily disable shell traces, to avoid bloating the log file.
+set +x
+
+for i in `seq_ 1 $count`; do
+ echo dummy $i > $i.log
+ echo :global-test-result: PASS > $i.trs
+ echo :test-result: PASS >> $i.trs
+ echo :recheck: yes >> $i.trs
+ echo TESTS += $i.t >> Makefile.am
+ echo $i >> all
+done
+
+# Re-enable shell traces.
+set -x
+
+# So that we don't need to create a ton of dummy tests.
+echo '$(TESTS):' >> Makefile.am
+
+head -n 100 Makefile.am || : # For debugging.
+tail -n 100 Makefile.am || : # Likewise.
+cat $count.trs # Likewise, just the last specimen though.
+
+$ACLOCAL
+$AUTOCONF
+$AUTOMAKE -a
+
+./configure
+$MAKE recheck
+
+: