The Automake test suite
User interface
==============
Running the tests
-----------------
To run all tests:
  make -k check
By default, verbose output of a test 't/foo.sh' or 't/foo.tap' is retained
in the log file 't/foo.log'. Also, a summary log is created in the file
'test-suite.log' (in the top-level directory).
You can use '-jN' for faster completion (it even helps on a uniprocessor
system, due to unavoidable sleep delays, as noted below):
  make -k -j4
To rerun only failed tests:
  make -k recheck
To run only tests that are newer than their last results:
  make -k check RECHECK_LOGS=
To run only selected tests:
  make -k check TESTS="t/foo.sh t/bar.tap"         (GNU make)
  env TESTS="t/foo.sh t/bar.tap" make -e -k check  (non-GNU make)
To run the tests in cross-compilation mode, you should first configure
the automake source tree for a cross-compilation setup. For example, to
run with a Linux-to-MinGW cross compiler, you will need something like
this:
  ./configure --host i586-mingw32msvc --build i686-pc-linux-gnu
To avoid possible spurious errors, you really have to *explicitly* specify
'--build' in addition to '--host'; the 'lib/config.guess' script can help
determine the correct value to pass to '--build'.
Then you can just run the testsuite in the usual way, and the test cases
using a compiler should automatically use a cross-compilation setup.
Interpretation
--------------
Successes:
  PASS  - success
  XFAIL - expected failure
Failures:
  FAIL  - failure
  XPASS - unexpected success
Other:
  SKIP  - skipped test (e.g., a required third-party tool is not available)
  ERROR - some unexpected error condition
About the tests
---------------
There are two kinds of tests in the Automake testsuite (both implemented
as shell scripts). The scripts with the '.sh' suffix are "simple"
tests, whose outcome is completely determined by their exit status. Those
with the '.tap' suffix use the TAP protocol. If you want to run a test
by hand, you can do so directly if it is a simple test:
  ./t/nogzip.sh
(it will be verbose by default), while if it is a TAP test you can pass
it to your preferred TAP runner, for example:
  prove --verbose --merge ./t/add-missing.tap
The tests can also be run directly in a VPATH build, as with:
  /path/to/srcdir/t/nogzip.sh
  prove --verbose --merge /path/to/srcdir/t/add-missing.tap
Supported shells
----------------
By default, the tests are run by the $SHELL detected at configure
time. They also take care to re-execute themselves with that shell,
unless told not to. So, to run the tests with a different shell, say
'/path/to/another/sh', you must use:
  AM_TESTS_REEXEC=no /path/to/another/sh ./t/foo.sh
  AM_TESTS_REEXEC=no prove -v -e /path/to/another/sh ./t/bar.tap
to run a test directly, and:
  make check LOG_COMPILER=/path/to/sh      (GNU make)
  LOG_COMPILER=/path/to/sh make -e check   (non-GNU make)
to run the test(s) through the makefile test driver.
The test scripts are written with portability in mind, so that they
should run with any decent Bourne-compatible shell. However, it is
worth noting that older versions of Zsh (pre-4.3) exhibited several
bugs and incompatibilities with our uses, and are thus not supported
for running Automake's test scripts.
Reporting failures
------------------
Send the verbose output of failing tests, i.e., the contents of
test-suite.log, to <bug-automake@gnu.org>, along with the usual version
numbers (which Automake, which Autoconf, which operating system, which
make version, which shell, etc.).
Writing test cases
==================
* If you plan to fix a bug, write the test case first. This way you can
verify that the test catches the bug, and that it succeeds once the
bug has been fixed.
* Add a copyright/license paragraph.
* Explain what the test does, i.e., which features it checks, which
invariants it verifies, or what bugs/issues it guards against.
* Cite the PR number (if any), and the original reporter (if any), so
we can find or ask for information if needed.
* If a test checks examples or idioms given in the documentation, make
sure the documentation references the test in a comment, as with:
  @c Keep in sync with autodist-config-headers.sh
  @example
  ...
  @end example
* Use "required=..." for required tools. Do not explicitly require
tools which can be taken for granted because they're listed in the
GNU Coding Standards (for example, 'gzip').
* Include ./defs in every test script (see existing tests for examples
of how to do this).
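For instance, a minimal test-script preamble combining the two previous
points might look like the following sketch (the 'bison' requirement and
the comments are merely illustrative; copy the exact ./defs inclusion
idiom from existing tests):
  #! /bin/sh
  # Copyright (C) ...  (usual copyright and license notice)
  # Test that <the feature or bug under test> behaves as documented.
  required=bison
  . ./defs || exit 1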
* Use the 'skip_' function to skip tests, with a meaningful message if
possible. Where convenient, use the 'warn_' function to print generic
warnings, the 'fail_' function for test failures, and the 'fatal_'
function for hard errors. In case a hard error is due to a failed
set-up of a test scenario, you can use the 'framework_fail_' function
instead.
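For instance (a sketch; 'useful_feature_available' is a stand-in for
whatever probe the test actually needs, and the messages are only
illustrative):
  useful_feature_available || skip_ "required feature not available"
  mkdir sub || framework_fail_ "cannot create scratch directory 'sub'"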
* For those tests checking the Automake-provided test harnesses that
are expected to work also when the 'serial-tests' Automake option
is used (thus causing the serial testsuite harness to be used in the
generated Makefile), place a line containing "try-with-serial-tests"
somewhere in the file (usually in a comment).
That will ensure that the 'gen-testsuite-part' script generates a
sibling of that test which uses the serial harness instead of the
parallel one. For those tests that are *not* meant to work with the
parallel testsuite harness at all (these should be very very few),
set the shell variable 'am_serial_tests' to "yes" before including
./defs.
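For instance, a test exercising the harness under both modes could
simply contain (a sketch):
  # try-with-serial-tests
  . ./defs || exit 1
while a (rare) test meant only for the serial harness would instead
start with:
  am_serial_tests=yes
  . ./defs || exit 1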
* Some tests in the Automake testsuite are auto-generated; those tests
might have custom extensions, but their basename (that is, with such
extension stripped) is expected to end with the string "-w", optionally
followed by decimal digits. For example, 'color-w.sh' and
'tap-signal-w09.tap' are valid names for auto-generated tests.
Please don't name hand-written tests in a way that could cause them
to be confused with auto-generated tests; for example, 'u-v-w.sh'
or 'option-w0.tap' are *not* valid names for hand-written tests.
* ./defs brings in some commonly required files, and sets a skeleton
configure.ac. If possible, append to this file. In some cases
you'll have to overwrite it, but this should be the exception. Note
that configure.ac registers Makefile.in but does not output anything by
default. If you need ./configure to create Makefile, append AC_OUTPUT
to configure.ac. In case you don't want ./defs to pre-populate your
test directory (which is a rare occurrence), set the 'am_create_testdir'
shell variable to "empty" before sourcing ./defs.
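For instance, a test that needs ./configure to actually create a
Makefile could extend the skeleton like this (a sketch; the Makefile.am
contents and the README data file are only illustrative):
  echo 'AC_OUTPUT' >> configure.ac
  echo 'dist_doc_DATA = README' > Makefile.am
  : > README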
* By default, the testcases are run with the errexit shell flag on,
to make it easier to catch failures you might not have thought of.
If this is undesirable in some testcase, you can use "set +e" to
disable the errexit flag (but please do so only if you have a very
good reason).
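For example, a test that must examine a specific nonzero exit status
might do something like the following sketch (the command name and the
expected status 63 are stand-ins):
  set +e
  some_command_that_may_fail
  status=$?
  set -e
  test $status -eq 63 || exit 1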
* End the test script with a ':' command. Otherwise, when somebody
changes the test by adding a failing command after the last command,
the test will spuriously fail because '$?' is nonzero at the end.
Note that this is relevant even if the errexit shell flag is on, in
case the test contains commands like "grep ... Makefile.in && exit 1"
(and there are indeed a lot of such tests).
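For instance, without the final ':' below, the script's exit status
would be 1 whenever the grep (correctly) finds no match (the grepped
variable name is just an illustration):
  $AUTOMAKE
  grep 'forbidden_variable' Makefile.in && exit 1
  :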
* Use $ACLOCAL, $AUTOMAKE, $AUTOCONF, $AUTOUPDATE, $AUTOHEADER,
$PERL, $MAKE, $EGREP, and $FGREP, instead of the corresponding
commands.
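For example, a typical autotools run in a test script reads (a sketch):
  $ACLOCAL
  $AUTOCONF
  $AUTOMAKE --add-missing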
* Use '$sleep' when you have to make sure that some file is newer
than another.
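For example (a sketch; the file names are illustrative):
  : > foo.txt
  $sleep
  : > bar.txt    # bar.txt is now strictly newer than foo.txt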
* Use cat or grep or similar commands to display (part of) files that
may be interesting for debugging, so that when a user sends us verbose
output we don't have to ask for more details. Display stderr
output on the stderr file descriptor. If some redirected command is
likely to fail, display its output even in the failure case, before
exiting.
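A common shape for this is (a sketch; the 'stderr' file name and the
grepped warning text are illustrative):
  $AUTOMAKE -Wportability 2> stderr || { cat stderr >&2; exit 1; }
  cat stderr >&2
  grep 'expected warning text' stderr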
* Use '$PATH_SEPARATOR', not hard-coded ':', as the separator of
PATH's entries.
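For example, to prepend a directory of stub tools to the search path
(a sketch; the 'bin' directory name is illustrative):
  PATH=$(pwd)/bin$PATH_SEPARATOR$PATH
  export PATH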
* It's more important to make sure that a feature works than to make
sure that Automake's output looks correct. It might look correct
and still fail to work. In other words, prefer running 'make' over
grepping Makefile.in (or do both).
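For instance, rather than only grepping Makefile.in for an install
rule, actually run it (a sketch; the installed file name is
illustrative):
  ./configure --prefix="$(pwd)/inst"
  $MAKE install
  test -f inst/share/doc/foo/README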
* If you run $ACLOCAL, $AUTOMAKE or $AUTOCONF several times in the
same test and change configure.ac in the meantime, do
  rm -rf autom4te*.cache
before the following runs. On fast machines the new configure.ac
could otherwise have the same timestamp as the old autom4te.cache.
* Use filenames with two consecutive spaces when testing that some
code preserves filenames with spaces. This will catch errors like
`echo $filename | ...`.
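For example, the double spaces below will expose unquoted expansions
like `echo $filename` (a sketch):
  mkdir 'dir  with  spaces'
  : > 'dir  with  spaces/foo  bar.c'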
* Make sure your test script can be used to faithfully check an
installed version of automake (as with "make installcheck"). For
example, if you need to copy or grep an automake-provided script,
do not assume that it can be found in the '$top_srcdir/lib'
directory, but use '$am_scriptdir' instead. The complete list of
such "$am_...dir" variables can be found in the 'defs-static.in'
file.
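For instance, to work with a copy of the 'depcomp' script (a sketch,
assuming 'depcomp' is among the scripts found there):
  cp "$am_scriptdir/depcomp" . || framework_fail_ "cannot fetch depcomp"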
* When writing input for lex, include the following in the definitions
section:
  %{
  #define YY_NO_UNISTD_H 1
  %}
to accommodate non-ANSI systems, since GNU flex generates code that
includes unistd.h otherwise. Also add:
  int isatty (int fd) { return 0; }
to the definitions section if the generated code is to be compiled
by a C++ compiler, for similar reasons (i.e., the isatty(3) function
from that same unistd.h header would be required otherwise).
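Putting these pieces together, a minimal lexer input for such a test
might look like the following sketch (the single catch-all rule, the
yywrap stub and the main function are only illustrative, and the isatty
stub is needed only for the C++ case described above):
  %{
  #define YY_NO_UNISTD_H 1
  int isatty (int fd) { return 0; }
  %}
  %%
  .|\n      { /* discard input */ }
  %%
  int yywrap (void) { return 1; }
  int main (void) { return yylex (); }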
* Before committing: make sure the test is executable, add it to TESTS
in Makefile.am (and to XFAIL_TESTS as well, if needed), write a
ChangeLog entry, and send the diff to <automake-patches@gnu.org>.
* In test scripts, prefer using POSIX constructs over their old
Bourne-only equivalents:
- use $(...), not `...`, for command substitution;
- use $((...)), not `expr ...`, for arithmetic processing;
- liberally use '!' to invert the exit status of a command, e.g.,
in idioms like "if ! CMD; then ...", instead of relying on clumsy
paraphrases like "if CMD; then :; else ...".
- prefer use of ${param%pattern} and ${param#pattern} parameter
expansions over processing by 'sed' or 'expr'.
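For example (a sketch; the variable and file names are illustrative):
  list=$(ls *.in)             # not: list=`ls *.in`
  count=$((count + 1))        # not: count=`expr $count + 1`
  if ! grep 'some text' output; then exit 1; fi
  base=${file%.c}             # not: base=`echo $file | sed 's/\.c$//'`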
* Note however that, when writing Makefile recipes or shell code in a
configure.ac, you should still use `...` instead, because
Autoconf-generated configure scripts do not ensure they will find a
truly POSIX shell (even though they will prefer and use it *if* found).
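For instance, shell code inside configure.ac or a Makefile recipe would
still read (a sketch; the variable name is illustrative):
  my_timestamp=`date`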
* Do not test an Automake error with "$AUTOMAKE && exit 1", or in three
years we'll discover that this test failed for some other bogus reason.
This has happened many times. It is better to use something like
  AUTOMAKE_fails
  grep 'expected diagnostic' stderr
Note this doesn't prevent the test from failing for another reason,
but at least it makes sure the original error is still there.
* Do not override Makefile variables using make arguments, as in e.g.:
  $MAKE prefix=/opt install
This is not portable for recursive targets (targets that call a
sub-make may not pass "prefix=/opt" along). Use the following
instead:
  prefix=/opt $MAKE -e install