diff --git a/doc/check.texi b/doc/check.texi
new file mode 100644
index 0000000..22d419e
--- /dev/null
+++ b/doc/check.texi
@@ -0,0 +1,1422 @@
+\input texinfo @c -*-texinfo-*-
+@c %**start of header
+@setfilename check.info
+@include version.texi
+@settitle Check @value{VERSION}
+@syncodeindex fn cp
+@syncodeindex tp cp
+@syncodeindex vr cp
+@c %**end of header
+
+@copying
+This manual is for Check
+(version @value{VERSION}, @value{UPDATED}),
+a unit testing framework for C.
+
+Copyright @copyright{} 2001--2009 Arien Malec, Chris Pickett, Fredrik
+Hugosson, and Robert Lemmen.
+
+@quotation
+Permission is granted to copy, distribute and/or modify this document
+under the terms of the @acronym{GNU} Free Documentation License,
+Version 1.2 or any later version published by the Free Software
+Foundation; with no Invariant Sections, no Front-Cover texts, and no
+Back-Cover Texts. A copy of the license is included in the section
+entitled ``@acronym{GNU} Free Documentation License.''
+@end quotation
+@end copying
+
+@dircategory Software development
+@direntry
+* Check: (check)Introduction.
+@end direntry
+
+@titlepage
+@title Check
+@subtitle A Unit Testing Framework for C
+@subtitle for version @value{VERSION}, @value{UPDATED}
+@author Arien Malec
+@author Chris Pickett
+@author Fredrik Hugosson
+@author Robert Lemmen
+@author Robert Collins
+
+@c The following two commands start the copyright page.
+@page
+@vskip 0pt plus 1filll
+@insertcopying
+@end titlepage
+
+@c Output the table of contents at the beginning.
+@contents
+
+@ifnottex
+@node Top, Introduction, (dir), (dir)
+@top Check
+
+@insertcopying
+
+Please send corrections to this manual to
+@email{check-devel AT lists.sourceforge.net}. We'd prefer it if you could
+send a unified diff (@command{diff -u}) against the
+@file{doc/check.texi} file that ships with Check, but something is
+still better than nothing if you can't manage that.
+@end ifnottex
+
+@menu
+* Introduction::
+* Unit Testing in C::
+* Tutorial::
+* Advanced Features::
+* Conclusion and References::
+* AM_PATH_CHECK::
+* Copying This Manual::
+* Index::
+
+@detailmenu
+ --- The Detailed Node Listing ---
+
+Unit Testing in C
+
+* Other Frameworks for C::
+
+Tutorial: Basic Unit Testing
+
+* How to Write a Test::
+* Setting Up the Money Build::
+* Test a Little::
+* Creating a Suite::
+* SRunner Output::
+
+Advanced Features
+
+* Running Multiple Cases::
+* No Fork Mode::
+* Test Fixtures::
+* Multiple Suites in one SRunner::
+* Testing Signal Handling and Exit Values::
+* Looping Tests::
+* Test Timeouts::
+* Determining Test Coverage::
+* Test Logging::
+* Subunit Support::
+
+Test Fixtures
+
+* Test Fixture Examples::
+* Checked vs Unchecked Fixtures::
+
+Test Logging
+
+* XML Logging::
+
+Copying This Manual
+
+* GNU Free Documentation License:: License for copying this manual.
+
+@end detailmenu
+@end menu
+
+@node Introduction, Unit Testing in C, Top, Top
+@chapter Introduction
+@cindex introduction
+
+Check is a unit testing framework for C. It was inspired by similar
+frameworks that currently exist for most programming languages; the
+most famous example being @uref{http://www.junit.org, JUnit} for Java.
+There is a list of unit test frameworks for multiple languages at
+@uref{http://www.xprogramming.com/software.htm}. Unit testing has a
+long history as part of formal quality assurance methodologies, but
+has recently been associated with the lightweight methodology called
+Extreme Programming. In that methodology, the characteristic practice
+involves interspersing unit test writing with coding (``test a
+little, code a little''). While the incremental unit test/code
+approach is indispensable to Extreme Programming, it is also
+applicable, and perhaps indispensable, outside of that methodology.
+
+The incremental test/code approach provides three main benefits to the
+developer:
+
+@enumerate
+@item
+Because the unit tests use the interface to the unit being tested,
+they allow the developer to think about how the interface should be
+designed for usage early in the coding process.
+
+@item
+They help the developer think early about aberrant cases, and code
+accordingly.
+
+@item
+By providing a documented level of correctness, they allow the
+developer to refactor (see @uref{http://www.refactoring.com})
+aggressively.
+@end enumerate
+
+That third reason is the one that turns people into unit testing
+addicts. There is nothing so satisfying as doing a wholesale
+replacement of an implementation, and having the unit tests reassure
+you at each step of that change that all is well. It is like the
+difference between exploring the wilderness with and without a good
+map and compass: without the proper gear, you are more likely to
+proceed cautiously and stick to the marked trails; with it, you can
+take the most direct path to where you want to go.
+
+Look at the Check homepage for the latest information on Check:
+@uref{http://check.sourceforge.net}.
+
+The Check project page is at:
+@uref{http://sourceforge.net/projects/check/}.
+
+@node Unit Testing in C, Tutorial, Introduction, Top
+@chapter Unit Testing in C
+@cindex C unit testing
+
+The approach to unit testing frameworks used for Check originated with
+Smalltalk, which is a late binding object-oriented language supporting
+reflection. Writing a framework for C requires solving some special
+problems that frameworks for Smalltalk, Java or Python don't have to
+face. In all of those languages, the worst that a unit test can do is
+fail miserably, throwing an exception of some sort. In C, a unit test
+is just as likely to trash its address space as it is to fail to meet
+its test requirements, and if the test framework sits in the same
+address space, goodbye test framework.
+
+To solve this problem, Check uses the @code{fork()} system call to
+create a new address space in which to run each unit test, and then
+uses message queues to send information on the testing process back to
+the test framework. That way, your unit test can do all sorts of
+nasty things with pointers, and throw a segmentation fault, and the
+test framework will happily note a unit test error, and chug along.
+
+The Check framework is also designed to play happily with common
+development environments for C programming. The author designed Check
+around Autoconf/Automake (thus the name Check: @command{make check} is
+the idiom used for testing with Autoconf/Automake), and the test
+failure messages thrown up by Check use the common idiom of
+@samp{filename:linenumber:message} used by @command{gcc} and family to
+report problems in source code. With (X)Emacs, the output of Check
+allows one to quickly navigate to the location of the unit test that
+failed; presumably that also works in VI and IDEs.
+
+@menu
+* Other Frameworks for C::
+@end menu
+
+@node Other Frameworks for C, , Unit Testing in C, Unit Testing in C
+@section Other Frameworks for C
+@cindex other frameworks
+@cindex frameworks
+
+The authors know of the following additional unit testing frameworks
+for C:
+
+@table @asis
+
+@item AceUnit
+AceUnit (Advanced C and Embedded Unit) bills itself as a comfortable C
+code unit test framework. It tries to mimic JUnit 4.x and includes
+reflection-like capabilities. AceUnit can be used in
+resource-constrained environments, e.g. embedded software development,
+and importantly it runs fine in environments where you cannot include
+a single standard header file and cannot invoke a single standard C
+function from the ANSI/ISO C libraries. It also has a Windows port.
+It does not use forks to trap signals, although the authors have
+expressed interest in adding such a feature. See the
+@uref{http://aceunit.sourceforge.net/, AceUnit homepage}.
+
+@item GNU Autounit
+Much along the same lines as Check, including forking to run unit
+tests in a separate address space (in fact, the original author of
+Check borrowed the idea from @acronym{GNU} Autounit). @acronym{GNU}
+Autounit uses GLib extensively, which means that linking and such need
+special options, but this may not be a big problem to you, especially
+if you are already using GTK or GLib. See the
+@uref{http://www.recursism.com/s2004/zp/products/gnu+autounit, GNU
+Autounit homepage}.
+
+@item cUnit
+Also uses GLib, but does not fork to protect the address space of unit
+tests. See the
+@uref{http://web.archive.org/web/*/http://people.codefactory.se/~spotty/cunit/,
+archived cUnit homepage}.
+
+@item CUnit
+Standard C, with plans for a Win32 GUI implementation. Does not
+currently fork or otherwise protect the address space of unit tests.
+In early development. See the @uref{http://cunit.sourceforge.net,
+CUnit homepage}.
+
+@item CppUnit
+The premier unit testing framework for C++; you can also use it to test C
+code. It is stable, actively developed, and has a GUI interface. The
+primary reasons not to use CppUnit for C are first that it is quite
+big, and second you have to write your tests in C++, which means you
+need a C++ compiler. If these don't sound like concerns, it is
+definitely worth considering, along with other C++ unit testing
+frameworks. See the
+@uref{http://cppunit.sourceforge.net/cppunit-wiki, CppUnit homepage}.
+
+@item embUnit
+embUnit (Embedded Unit) is another unit test framework for embedded
+systems. This one appears to be superseded by AceUnit. See the
+@uref{https://sourceforge.net/projects/embunit/, Embedded Unit
+homepage}.
+
+@item MinUnit
+A minimal set of macros and that's it! The point is to
+show how easy it is to unit test your code. See the
+@uref{http://www.jera.com/techinfo/jtns/jtn002.html, MinUnit
+homepage}.
+
+@item CUnit for Mr. Ando
+A CUnit implementation that is fairly new, and apparently still in
+early development. See the
+@uref{http://park.ruru.ne.jp/ando/work/CUnitForAndo/html/, CUnit for
+Mr. Ando homepage}.
+@end table
+
+This list was last updated in March 2008. If you know of other C unit
+test frameworks, please send an email plus description to
+@email{check-devel AT lists.sourceforge.net} and we will add the entry
+to this list.
+
+It is the authors' considered opinion that forking or otherwise
+trapping and reporting signals is indispensable for unit testing (but
+it probably wouldn't be hard to add that to frameworks without that
+feature). Try 'em all out: adapt this tutorial to use all of the
+frameworks above, and use whichever you like. Contribute, spread the
+word, and make one a standard. Languages such as Java and Python are
+fortunate to have standard unit testing frameworks; it would be desirable
+that C have one as well.
+
+@node Tutorial, Advanced Features, Unit Testing in C, Top
+@chapter Tutorial: Basic Unit Testing
+
+This tutorial will use the JUnit
+@uref{http://junit.sourceforge.net/doc/testinfected/testing.htm, Test
+Infected} article as a starting point. We will be creating a library
+to represent money, @code{libmoney}, that allows conversions between
+different currency types. The development style will be ``test a
+little, code a little'', with unit test writing preceding coding.
+This constantly gives us insights into module usage, and also makes
+sure we are constantly thinking about how to test our code.
+
+@menu
+* How to Write a Test::
+* Setting Up the Money Build::
+* Test a Little::
+* Creating a Suite::
+* SRunner Output::
+@end menu
+
+@node How to Write a Test, Setting Up the Money Build, Tutorial, Tutorial
+@section How to Write a Test
+
+Test writing using Check is very simple. The file in which the checks
+are defined must include @file{check.h} like so:
+@example
+@verbatim
+#include <check.h>
+@end verbatim
+@end example
+
+The basic unit test looks as follows:
+@example
+@verbatim
+START_TEST (test_name)
+{
+ /* unit test code */
+}
+END_TEST
+@end verbatim
+@end example
+
+The @code{START_TEST}/@code{END_TEST} pair are macros that set up
+basic structures to permit testing. It is a mistake to leave off the
+@code{END_TEST} marker; doing so produces all sorts of strange errors
+when the check is compiled.
+
+@node Setting Up the Money Build, Test a Little, How to Write a Test, Tutorial
+@section Setting Up the Money Build
+
+Since we are creating a library to handle money, we will first create
+an interface in @file{money.h}, an implementation in @file{money.c},
+and a place to store our unit tests, @file{check_money.c}. We want to
+integrate these core files into our build system, and will need some
+additional structure. To manage everything we'll use Autoconf,
+Automake, and friends (collectively known as Autotools) for this
+example. One could do something similar with ordinary Makefiles, but
+in the authors' opinion, it is generally easier to use Autotools than
+bare Makefiles, and they provide built-in support for running tests.
+
+Note that this is not the place to explain how Autotools works. If
+you need help understanding what's going on beyond the explanations
+here, the best place to start is probably Alexandre Duret-Lutz's
+excellent
+@uref{http://www.lrde.epita.fr/~adl/autotools.html,
+Autotools tutorial}.
+
+The examples in this section are part of the Check distribution; you
+don't need to spend time cutting and pasting or (worse) retyping them.
+Locate the Check documentation on your system and look in the
+@samp{example} directory. The standard directory for GNU/Linux
+distributions should be @samp{/usr/share/doc/check/example}. This
+directory contains the final version reached at the end of the tutorial. If
+you want to follow along, create backups of @file{money.h},
+@file{money.c}, and @file{check_money.c}, and then delete the originals.
+
+We set up a directory structure as follows:
+@example
+@verbatim
+.
+|-- Makefile.am
+|-- README
+|-- configure.ac
+|-- src
+| |-- Makefile.am
+| |-- main.c
+| |-- money.c
+| `-- money.h
+`-- tests
+ |-- Makefile.am
+ `-- check_money.c
+@end verbatim
+@end example
+
+Note that this is the output of @command{tree}, a great directory
+visualization tool. The top-level @file{Makefile.am} is simple; it
+merely tells Automake how to process subdirectories:
+@example
+@verbatim
+SUBDIRS = src . tests
+@end verbatim
+@end example
+
+Note that @code{tests} comes last, because the code should be testing
+an already compiled library. @file{configure.ac} is standard Autoconf
+boilerplate, as specified by the Autotools tutorial and as suggested
+by @command{autoscan}. The @code{AM_PATH_CHECK()} line is the only
+one particular to Check (@pxref{AM_PATH_CHECK}).
+
+@file{src/Makefile.am} builds @samp{libmoney} as a Libtool archive,
+and links it to an application simply called @command{main}. The
+application's behaviour is not important to this tutorial; what's
+important is that none of the functions we want to unit test appear in
+@file{main.c}; this probably means that the only function in
+@file{main.c} should be @code{main()} itself. In order to test the
+whole application, unit testing is not appropriate: you should use a
+system testing tool like Autotest. If you really want to test
+@code{main()} using Check, rename it to something like
+@code{_myproject_main()} and write a wrapper around it.
+
+The primary build instructions for our unit tests are in
+@file{tests/Makefile.am}:
+
+@example
+@verbatiminclude example/tests/Makefile.am
+@end example
+
+@code{TESTS} tells Automake which test programs to run for
+@command{make check}. Similarly, the @code{check_} prefix in
+@code{check_PROGRAMS} actually comes from Automake; it says to build
+these programs only when @command{make check} is run. (Recall that
+Automake's @code{check} target is the origin of Check's name.) The
+@command{check_money} test is a program that we will build from
+@file{tests/check_money.c}, linking it against both
+@file{src/libmoney.la} and the installed @file{libcheck.la} on our
+system. The appropriate compiler and linker flags for using Check are
+found in @code{@@CHECK_CFLAGS@@} and @code{@@CHECK_LIBS@@}, values
+defined by the @code{AM_PATH_CHECK} macro.
+
+Now that all this infrastructure is out of the way, we can get on with
+development. @file{src/money.h} should only contain standard C header
+boilerplate:
+
+@example
+@verbatiminclude example/src/money.1.h
+@end example
+
+@file{src/money.c} should be empty, and @file{tests/check_money.c}
+should only contain an empty @code{main()} function:
+
+@example
+@verbatiminclude example/tests/check_money.1.c
+@end example
+
+Create the GNU Build System for the project and then build @file{main}
+and @file{libmoney.la} as follows:
+@example
+@verbatim
+$ autoreconf --install
+$ ./configure
+$ make
+@end verbatim
+@end example
+
+(@command{autoreconf} determines which commands are needed in order
+for @command{configure} to be created or brought up to date.
+Previously one would use a script called @command{autogen.sh} or
+@command{bootstrap}, but that practice is unnecessary now.)
+
+Now build and run the @command{check_money} test with @command{make
+check}. If all goes well, @command{make} should report that our tests
+passed. No surprise, because there aren't any tests to fail. If you
+have problems, make sure to see @ref{AM_PATH_CHECK}.
+
+This was tested on the i386 ``testing'' distribution of Debian
+GNU/Linux (etch) in March 2006, using Autoconf 2.59, Automake 1.9.6,
+and Libtool 1.5.22. Please report any problems to
+@email{check-devel AT lists.sourceforge.net}.
+
+@node Test a Little, Creating a Suite, Setting Up the Money Build, Tutorial
+@section Test a Little, Code a Little
+
+The @uref{http://junit.sourceforge.net/doc/testinfected/testing.htm,
+Test Infected} article starts out with a @code{Money} class, and so
+will we. Of course, we can't do classes with C, but we don't really
+need to. The Test Infected approach to writing code says that we
+should write the unit test @emph{before} we write the code, and in
+this case, we will be even more dogmatic and doctrinaire than the
+authors of Test Infected (who clearly don't really get this stuff,
+only being some of the originators of the Patterns approach to
+software development and OO design).
+
+Here are the changes to @file{check_money.c} for our first unit test:
+
+@example
+@verbatiminclude check_money.1-2.c.diff
+@end example
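+
+Since the diff is pulled in from a separate file, here is a sketch of
+what the resulting unit test might look like (the exact version ships
+in the @samp{example} directory):
+@example
+@verbatim
+START_TEST (test_money_create)
+{
+  Money *m;
+
+  m = money_create (5, "USD");
+  fail_unless (money_amount (m) == 5,
+               "Amount not set correctly on creation");
+  fail_unless (strcmp (money_currency (m), "USD") == 0,
+               "Currency not set correctly on creation");
+  money_free (m);
+}
+END_TEST
+@end verbatim
+@end example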
+
+@findex fail_unless()
+A unit test should just chug along and complete. If it exits early,
+or is signaled, it will fail with a generic error message. (Note: if
+you actually expect an early exit or a signal, Check has functionality
+to specifically assert that; @pxref{Testing Signal Handling and Exit
+Values}.) If we want to get some information about what failed, we
+need to use the @code{fail_unless()} function. This function
+(actually a macro) takes a Boolean as its first argument, and an error
+message to send if the condition is not true.
+
+@findex fail()
+If the Boolean argument is too complicated to elegantly express within
+@code{fail_unless()}, there is an alternate function @code{fail()}
+that unconditionally fails. The second test inside
+@code{test_money_create} above could be rewritten as follows:
+@example
+@verbatim
+if (strcmp (money_currency (m), "USD") != 0)
+ {
+ fail ("Currency not set correctly on creation");
+ }
+@end verbatim
+@end example
+
+@findex fail_if()
+There is also a @code{fail_if()} function, which is the
+inverse of @code{fail_unless()}. Using it, the above test then
+looks like this:
+@example
+@verbatim
+fail_if (strcmp (money_currency (m), "USD") != 0,
+ "Currency not set correctly on creation");
+@end verbatim
+@end example
+
+For your convenience, all fail functions also accept NULL as the msg
+argument and substitute a suitable message for you. So you could also
+write a test as follows:
+@example
+@verbatim
+fail_unless (money_amount (m) == 5, NULL);
+@end verbatim
+@end example
+
+This is equivalent to:
+@example
+@verbatim
+fail_unless (money_amount (m) == 5,
+ "Assertion 'money_amount (m) == 5' failed");
+@end verbatim
+@end example
+
+All fail functions also support @code{varargs} and accept
+@code{printf}-style format strings and arguments. This is especially
+useful while debugging. With @code{printf}-style formatting the
+message could look like this:
+@example
+@verbatim
+fail_unless (money_amount (m) == 5,
+             "Amount was %d, instead of 5", money_amount (m));
+@end verbatim
+@end example
+
+When we try to compile and run the test suite now using @command{make
+check}, we get a whole host of compilation errors. It may seem a bit
+strange to deliberately write code that won't compile, but notice what
+we are doing: in creating the unit test, we are also defining
+requirements for the money interface. Compilation errors are, in a
+way, unit test failures of their own, telling us that the
+implementation does not match the specification. If all we do is edit
+the sources so that the unit test compiles, we are actually making
+progress, guided by the unit tests, so that's what we will now do.
+
+We will patch our header @file{money.h} as follows:
+
+@example
+@verbatiminclude money.1-2.h.diff
+@end example
+
+Our code compiles now, and again passes all of the tests. However,
+once we try to @emph{use} the functions in @code{libmoney} in the
+@code{main()} of @code{check_money}, we'll run into more problems, as
+they haven't actually been implemented yet.
+
+@node Creating a Suite, SRunner Output, Test a Little, Tutorial
+@section Creating a Suite
+
+To run unit tests with Check, we must create some test cases,
+aggregate them into a suite, and run them with a suite runner. That's
+a bit of overhead, but it is mostly one-off. Here's a diff for the
+new version of @file{check_money.c}. Note that we include
+@file{stdlib.h} to get the definitions of @code{EXIT_SUCCESS} and
+@code{EXIT_FAILURE}.
+
+@example
+@verbatiminclude check_money.2-3.c.diff
+@end example
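+
+If you are not following along with the distributed sources, a sketch
+of the resulting suite setup and @code{main()} might look like this:
+@example
+@verbatim
+Suite *
+money_suite (void)
+{
+  Suite *s = suite_create ("Money");
+  TCase *tc_core = tcase_create ("Core");
+
+  tcase_add_test (tc_core, test_money_create);
+  suite_add_tcase (s, tc_core);
+
+  return s;
+}
+
+int
+main (void)
+{
+  int number_failed;
+  SRunner *sr = srunner_create (money_suite ());
+
+  srunner_run_all (sr, CK_NORMAL);
+  number_failed = srunner_ntests_failed (sr);
+  srunner_free (sr);
+
+  return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
+}
+@end verbatim
+@end example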
+
+Most of the @code{money_suite()} code should be self-explanatory. We are
+creating a suite, creating a test case, adding the test case to the
+suite, and adding the unit test we created above to the test case.
+Why separate this off into a separate function, rather than inline it
+in @code{main()}? Because any new tests will get added in
+@code{money_suite()}, while nothing will need to change in
+@code{main()} for the rest of this example; @code{main()} will stay
+relatively clean and simple.
+
+Unit tests are internally defined as static functions. This means
+that the code to add unit tests to test cases must be in the same
+compilation unit as the unit tests themselves. This provides another
+reason to put the creation of the test suite in a separate function:
+you may later want to keep one source file per suite; defining a
+uniquely named suite creation function allows you later to define a
+header file giving prototypes for all the suite creation functions,
+and encapsulate the details of where and how unit tests are defined
+behind those functions. See the test program defined for Check itself
+for an example of this strategy.
+
+The code in @code{main()} bears some explanation. We are creating a
+suite runner object of type @code{SRunner} from the @code{Suite} we
+created in @code{money_suite()}. We then run the suite, using the
+@code{CK_NORMAL} flag to specify that we should print a summary of the
+run, and list any failures that may have occurred. We capture the
+number of failures that occurred during the run, and use that to
+decide how to return. The @code{check} target created by Automake
+uses the return value to decide whether the tests passed or failed.
+
+Now that the tests are actually being run by @command{check_money}, we
+encounter linker errors when we try out @command{make check} again. Try it
+for yourself and see. The reason is that the @file{money.c}
+implementation of the @file{money.h} interface hasn't been created
+yet. Let's go with the fastest solution possible and implement stubs
+for each of the functions in @code{money.c}:
+
+@example
+@verbatiminclude money.1-3.c.diff
+@end example
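+
+A minimal sketch of such stubs, matching the interface the test uses,
+might be:
+@example
+@verbatim
+#include <stdlib.h>
+#include "money.h"
+
+Money *
+money_create (int amount, char *currency)
+{
+  return NULL;   /* stub */
+}
+
+int
+money_amount (Money * m)
+{
+  return 0;      /* stub */
+}
+
+char *
+money_currency (Money * m)
+{
+  return NULL;   /* stub */
+}
+
+void
+money_free (Money * m)
+{
+}
+@end verbatim
+@end example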
+
+Note that we @code{#include <stdlib.h>} to get the definition of
+@code{NULL}. Now, the code compiles and links when we run
+@command{make check}, but our unit test fails. Still, this is
+progress, and we can focus on making the test pass.
+
+@node SRunner Output, , Creating a Suite, Tutorial
+@section SRunner Output
+
+@findex srunner_run_all()
+The function to run tests in an @code{SRunner} is defined as follows:
+@example
+@verbatim
+void srunner_run_all (SRunner * sr, enum print_output print_mode);
+@end verbatim
+@end example
+
+This function does two things:
+
+@enumerate
+@item
+It runs all of the unit tests for all of the test cases defined for all
+of the suites in the SRunner, and collects the results in the SRunner
+
+@item
+It prints the results according to the @code{print_mode} specified.
+@end enumerate
+
+For SRunners that have already been run, there is also a separate
+printing function defined as follows:
+@example
+@verbatim
+void srunner_print (SRunner *sr, enum print_output print_mode);
+@end verbatim
+@end example
+
+The @code{print_mode} parameter can assume the following
+@code{print_output} enumeration values, defined in Check:
+
+@table @code
+@vindex CK_SILENT
+@item CK_SILENT
+Specifies that no output is to be generated. If you use this flag, you
+either need to programmatically examine the SRunner object, print
+separately, or use test logging (@pxref{Test Logging}).
+
+@vindex CK_MINIMAL
+@item CK_MINIMAL
+Only a summary of the test run will be printed (number run, passed,
+failed, errors).
+
+@vindex CK_NORMAL
+@item CK_NORMAL
+Prints the summary of the run, and prints one message per failed
+test.
+
+@vindex CK_VERBOSE
+@item CK_VERBOSE
+Prints the summary, and one message per test (passed or failed).
+
+@vindex CK_ENV
+@vindex CK_VERBOSITY
+@item CK_ENV
+Gets the print mode from the environment variable @code{CK_VERBOSITY},
+which can have the values "silent", "minimal", "normal", "verbose". If
+the variable is not found or the value is not recognized, the print
+mode is set to @code{CK_NORMAL}.
+
+@vindex CK_SUBUNIT
+@item CK_SUBUNIT
+Prints running progress through the @uref{https://launchpad.net/subunit/,
+subunit} test runner protocol. @xref{Subunit Support}, for more
+information.
+@end table
+
+With the @code{CK_NORMAL} flag specified in our @code{main()}, let's
+rerun @command{make check} now. As before, we get the following satisfying
+output:
+@example
+@verbatim
+Running suite(s): Money
+0%: Checks: 1, Failures: 1, Errors: 0
+check_money.c:10:F:Core:test_money_create: Amount not set correctly on
+creation
+FAIL: check_money
+==================================================
+1 of 1 tests failed
+Please report to check-devel@lists.sourceforge.net
+==================================================
+@end verbatim
+@end example
+
+The first number in the summary line tells us that 0% of our tests
+passed, and the rest of the line tells us that there was one check in
+total, and of those checks, one failure and zero errors. The next
+line tells us exactly where that failure occurred, and what kind of
+failure it was (P for pass, F for failure, E for error).
+
+After that we have some higher level output generated by Automake: the
+@code{check_money} program failed, and the bug-report address given in
+@file{configure.ac} is printed.
+
+Let's implement the @code{money_amount} function, so that it will pass
+its tests. We first have to create a Money structure to hold the
+amount, and then implement the function to return the correct amount:
+
+@example
+@verbatiminclude money.3-4.c.diff
+@end example
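+
+A sketch of that change (the structure layout is illustrative):
+@example
+@verbatim
+struct Money
+{
+  int amount;
+};
+
+int
+money_amount (Money * m)
+{
+  return m->amount;
+}
+@end verbatim
+@end example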
+
+We will now rerun @command{make check} and@dots{} what's this? The output is
+now as follows:
+@example
+@verbatim
+Running suite(s): Money
+0%: Checks: 1, Failures: 0, Errors: 1
+check_money.c:5:E:Core:test_money_create: (after this point) Received
+signal 11 (Segmentation fault)
+@end verbatim
+@end example
+
+@findex mark_point()
+What does this mean? Note that we now have an error, rather than a
+failure. This means that our unit test either exited early, or was
+signaled. Next note that the failure message says ``after this
+point''; this means that somewhere after the point noted
+(@file{check_money.c}, line 5) there was a problem: signal 11 (a.k.a.
+segmentation fault). The last point reached is set on entry to the
+unit test, and after every call to @code{fail_unless()},
+@code{fail()}, or the special function @code{mark_point()}. For
+example, if we wrote some test code as follows:
+@example
+@verbatim
+stuff_that_works ();
+mark_point ();
+stuff_that_dies ();
+@end verbatim
+@end example
+
+then the point returned will be that marked by @code{mark_point()}.
+
+The reason our test failed so horribly is that we haven't implemented
+@code{money_create()} to create any @code{Money}. We'll go ahead and
+implement that, the symmetric @code{money_free()}, and
+@code{money_currency()} too, in order to make our unit test pass again:
+
+@example
+@verbatiminclude money.4-5.c.diff
+@end example
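+
+Again as a sketch, assuming a @code{currency} field has been added to
+@code{struct Money}, the implementations might look like:
+@example
+@verbatim
+Money *
+money_create (int amount, char *currency)
+{
+  Money *m = malloc (sizeof (Money));
+
+  if (m == NULL)
+    return NULL;
+  m->amount = amount;
+  m->currency = currency;
+  return m;
+}
+
+char *
+money_currency (Money * m)
+{
+  return m->currency;
+}
+
+void
+money_free (Money * m)
+{
+  free (m);
+}
+@end verbatim
+@end example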
+
+@node Advanced Features, Conclusion and References, Tutorial, Top
+@chapter Advanced Features
+
+What you've seen so far is all you need for basic unit testing. The
+features described in this section are additions to Check that make it
+easier for the developer to write, run, and analyse tests.
+
+@menu
+* Running Multiple Cases::
+* No Fork Mode::
+* Test Fixtures::
+* Multiple Suites in one SRunner::
+* Testing Signal Handling and Exit Values::
+* Looping Tests::
+* Test Timeouts::
+* Determining Test Coverage::
+* Test Logging::
+* Subunit Support::
+@end menu
+
+@node Running Multiple Cases, No Fork Mode, Advanced Features, Advanced Features
+@section Running Multiple Cases
+
+What happens if we pass @code{-1} as the @code{amount} in
+@code{money_create()}? What should happen? Let's write a unit test.
+Since we are now testing limits, we should also test what happens when
+we create @code{Money} where @code{amount == 0}. Let's put these in a
+separate test case called ``Limits'' so that @code{money_suite} is
+changed like so:
+
+@example
+@verbatiminclude check_money.3-6.c.diff
+@end example
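+
+As a sketch (the test and variable names here are illustrative), the
+new test case might look like:
+@example
+@verbatim
+START_TEST (test_money_create_neg)
+{
+  Money *m = money_create (-1, "USD");
+  fail_unless (m == NULL,
+               "NULL should be returned on attempt to create with "
+               "a negative amount");
+}
+END_TEST
+
+/* in money_suite(): */
+TCase *tc_limits = tcase_create ("Limits");
+tcase_add_test (tc_limits, test_money_create_neg);
+suite_add_tcase (s, tc_limits);
+@end verbatim
+@end example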
+
+Now we can rerun our suite, and fix the problem(s). Note that errors
+in the ``Core'' test case will be reported as ``Core'', and errors in
+the ``Limits'' test case will be reported as ``Limits'', giving you
+additional information about where things broke.
+
+@example
+@verbatiminclude money.5-6.c.diff
+@end example
+
+@node No Fork Mode, Test Fixtures, Running Multiple Cases, Advanced Features
+@section No Fork Mode
+
+Check normally forks to create a separate address space. This allows
+a signal or early exit to be caught and reported, rather than taking
+down the entire test program, and is normally very useful. However,
+when you are trying to debug why the segmentation fault or other
+program error occurred, forking makes it difficult to use debugging
+tools. To define fork mode for an @code{SRunner} object, you can do
+one of the following:
+
+@vindex CK_FORK
+@findex srunner_set_fork_status()
+@enumerate
+@item
+Define the @code{CK_FORK} environment variable to equal ``no''.
+
+@item
+Explicitly define the fork status through the use of the following
+function:
+
+@verbatim
+void srunner_set_fork_status (SRunner * sr, enum fork_status fstat);
+@end verbatim
+@end enumerate
+
+The enum @code{fork_status} allows the @code{fstat} parameter to
+assume the following values: @code{CK_FORK} and @code{CK_NOFORK}. An
+explicit call to @code{srunner_set_fork_status()} overrides the
+@code{CK_FORK} environment variable.
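+
+For example, a minimal sketch that forces no-fork mode
+programmatically:
+@example
+@verbatim
+SRunner *sr = srunner_create (money_suite ());
+
+srunner_set_fork_status (sr, CK_NOFORK);
+srunner_run_all (sr, CK_NORMAL);
+@end verbatim
+@end example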
+
+@node Test Fixtures, Multiple Suites in one SRunner, No Fork Mode, Advanced Features
+@section Test Fixtures
+
+We may want multiple tests that all use the same Money. In such
+cases, rather than setting up and tearing down objects for each unit
+test, it may be convenient to add some setup that is constant across
+all the tests in a test case. Each such setup/teardown pair is called
+a @dfn{test fixture} in test-driven development jargon.
+
+A fixture is created by defining a setup and/or a teardown function,
+and associating it with a test case. There are two kinds of test
+fixtures in Check: checked and unchecked fixtures. These are defined
+as follows:
+
+@table @asis
+@item Checked fixtures
+are run inside the address space created by the fork to create the
+unit test. Before each unit test in a test case, the @code{setup()}
+function is run, if defined. After each unit test, the
+@code{teardown()} function is run, if defined. Since they run inside
+the forked address space, if checked fixtures signal or otherwise
+fail, they will be caught and reported by the @code{SRunner}. A
+checked @code{teardown()} fixture will run even if the unit test
+fails.
+
+@item Unchecked fixtures
+are run in the same address space as the test program. Therefore they
+may not signal or exit, but may use the fail functions. The unchecked
+@code{setup()}, if defined, is run before the test case is
+started. The unchecked @code{teardown()}, if defined, is run after the
+test case is done.
+@end table
+
+So for a test case that contains @code{check_one()} and
+@code{check_two()} unit tests,
+@code{checked_setup()}/@code{checked_teardown()} checked fixtures, and
+@code{unchecked_setup()}/@code{unchecked_teardown()} unchecked
+fixtures, the control flow would be:
+@example
+@verbatim
+unchecked_setup();
+fork();
+checked_setup();
+check_one();
+checked_teardown();
+wait();
+fork();
+checked_setup();
+check_two();
+checked_teardown();
+wait();
+unchecked_teardown();
+@end verbatim
+@end example
+
+@menu
+* Test Fixture Examples::
+* Checked vs Unchecked Fixtures::
+@end menu
+
+@node Test Fixture Examples, Checked vs Unchecked Fixtures, Test Fixtures, Test Fixtures
+@subsection Test Fixture Examples
+
+We create a test fixture in Check as follows:
+
+@enumerate
+@item
+Define global variables, and functions to setup and teardown the
+globals. The functions both take @code{void} and return @code{void}.
+In our example, we'll make @code{five_dollars} be a global created and
+freed by @code{setup()} and @code{teardown()} respectively.
+
+@item
+@findex tcase_add_checked_fixture()
+Add the @code{setup()} and @code{teardown()} functions to the test
+case with @code{tcase_add_checked_fixture()}. In our example, this
+belongs in the suite setup function @code{money_suite}.
+
+@item
+Rewrite tests to use the globals. We'll rewrite our first to use
+@code{five_dollars}.
+@end enumerate
+
+Note that the functions used for setup and teardown do not need to be
+named @code{setup()} and @code{teardown()}, but they must take
+@code{void} and return @code{void}. We'll update @file{check_money.c}
+as follows:
+
+@example
+@verbatiminclude check_money.6-7.c.diff
+@end example
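+
+For reference, a sketch of what the fixture code might look like:
+@example
+@verbatim
+Money *five_dollars;
+
+void
+setup (void)
+{
+  five_dollars = money_create (5, "USD");
+}
+
+void
+teardown (void)
+{
+  money_free (five_dollars);
+}
+
+/* in money_suite(), after creating tc_core: */
+tcase_add_checked_fixture (tc_core, setup, teardown);
+@end verbatim
+@end example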
+
+@node Checked vs Unchecked Fixtures, , Test Fixture Examples, Test Fixtures
+@subsection Checked vs Unchecked Fixtures
+
+Checked fixtures run once for each unit test in a test case, and so
+they should not be used for expensive setup. However, if a checked
+fixture fails and @code{CK_FORK} mode is being used, it will not bring
+down the entire framework.
+
+On the other hand, unchecked fixtures run once for an entire test
+case, as opposed to once per unit test, and so can be used for
+expensive setup. However, since they may take down the entire test
+program, they should only be used if they are known to be safe.
+
+Additionally, the isolation of objects created by unchecked fixtures
+is not guaranteed by @code{CK_NOFORK} mode. Normally, in
+@code{CK_FORK} mode, unit tests may abuse the objects created in an
+unchecked fixture with impunity, without affecting other unit tests in
+the same test case, because the fork creates a separate address space.
+However, in @code{CK_NOFORK} mode, all tests live in the same address
+space, and side effects in one test will affect the unchecked fixture
+for the other tests.
+
+A checked fixture will generally not be affected by unit test side
+effects, since the @code{setup()} is run before each unit test. There
+is an exception for side effects to the total environment in which the
+test program lives: for example, if the @code{setup()} function
+initializes a file that a unit test then changes, the combination of
+the @code{teardown()} function and @code{setup()} function must be able
+to restore the environment for the next unit test.
+
+If the @code{setup()} function in a fixture fails, in either checked
+or unchecked fixtures, the unit tests for the test case, and the
+@code{teardown()} function for the fixture will not be run. A fixture
+error will be created and reported to the @code{SRunner}.
+
+@node Multiple Suites in one SRunner, Testing Signal Handling and Exit Values, Test Fixtures, Advanced Features
+@section Multiple Suites in one SRunner
+
+In a large program, it makes sense to create multiple suites,
+each testing a module of the program. While one can create several
+test programs, each running one @code{Suite}, it may be convenient to
+create one main test program, and use it to run multiple suites. The
+Check test suite provides an example of how to do this. The main
+testing program is called @code{check_check}, and has a header file
+that declares suite creation functions for all the module tests:
+@example
+@verbatim
+Suite *make_sub_suite (void);
+Suite *make_sub2_suite (void);
+Suite *make_master_suite (void);
+Suite *make_list_suite (void);
+Suite *make_msg_suite (void);
+Suite *make_log_suite (void);
+Suite *make_limit_suite (void);
+Suite *make_fork_suite (void);
+Suite *make_fixture_suite (void);
+Suite *make_pack_suite (void);
+@end verbatim
+@end example
+
+@findex srunner_add_suite()
+The function @code{srunner_add_suite()} is used to add additional
+suites to an @code{SRunner}. Here is the code that sets up and runs
+the @code{SRunner} in the @code{main()} function in
+@file{check_check_main.c}:
+@example
+@verbatim
+SRunner *sr;
+sr = srunner_create (make_master_suite ());
+srunner_add_suite (sr, make_list_suite ());
+srunner_add_suite (sr, make_msg_suite ());
+srunner_add_suite (sr, make_log_suite ());
+srunner_add_suite (sr, make_limit_suite ());
+srunner_add_suite (sr, make_fork_suite ());
+srunner_add_suite (sr, make_fixture_suite ());
+srunner_add_suite (sr, make_pack_suite ());
+@end verbatim
+@end example
+
+@node Testing Signal Handling and Exit Values, Looping Tests, Multiple Suites in one SRunner, Advanced Features
+@section Testing Signal Handling and Exit Values
+
+@findex tcase_add_test_raise_signal()
+
+To enable testing of signal handling, there is a function
+@code{tcase_add_test_raise_signal()} which is used instead of
+@code{tcase_add_test()}. This function takes an additional signal
+argument, specifying a signal that the test expects to receive. If no
+signal is received, this is logged as a failure. If a different signal
+is received, this is logged as an error.
+
+The signal handling functionality only works in @code{CK_FORK} mode.
+
+@findex tcase_add_exit_test()
+
+To enable testing of expected exits, there is a function
+@code{tcase_add_exit_test()} which is used instead of @code{tcase_add_test()}.
+This function takes an additional expected exit value argument,
+specifying a value that the test is expected to exit with. If the test
+exits with any other value, this is logged as a failure. If the test
+exits early, this is logged as an error.
+
+The exit handling functionality only works in @code{CK_FORK} mode.
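+
+As a sketch, assuming unit tests @code{test_segv} and
+@code{test_exit_error} exist:
+@example
+@verbatim
+/* expect test_segv to die from signal 11 (SIGSEGV) */
+tcase_add_test_raise_signal (tc_core, test_segv, 11);
+
+/* expect test_exit_error to exit with value 1 */
+tcase_add_exit_test (tc_core, test_exit_error, 1);
+@end verbatim
+@end example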
+
+@node Looping Tests, Test Timeouts, Testing Signal Handling and Exit Values, Advanced Features
+@section Looping Tests
+
+Looping tests are tests that are called with a new context for each
+loop iteration. This makes them ideal for table-based tests. If
+loops are used inside ordinary tests to test multiple values, only the
+first error will be shown before the test exits. However, looping
+tests allow for all errors to be shown at once, which can help out
+with debugging.
+
+@findex tcase_add_loop_test()
+Adding a normal test with @code{tcase_add_loop_test()} instead of
+@code{tcase_add_test()} will make the test function the body of a
+@code{for} loop, with the addition of a fork before each call. The
+loop variable @code{_i} is available for use inside the test function;
+for example, it could serve as an index into a table. For failures,
+the iteration which caused the failure is available in error messages
+and logs.
+
+Start and end values for the loop are supplied when adding the test.
+The values are used as in a normal @code{for} loop. Below is some
+pseudo-code to show the concept:
+@example
+@verbatim
+for (_i = tfun->loop_start; _i < tfun->loop_end; _i++)
+{
+ fork(); /* New context */
+ tfun->f(_i); /* Call test function */
+ wait(); /* Wait for child to terminate */
+}
+@end verbatim
+@end example
+
+An example of looping test usage follows:
+@example
+@verbatim
+static const int primes[5] = {2,3,5,7,11};
+
+START_TEST (check_is_prime)
+{
+ fail_unless (is_prime (primes[_i]));
+}
+END_TEST
+
+...
+
+tcase_add_loop_test (tcase, check_is_prime, 0, 5);
+@end verbatim
+@end example
+
+Looping tests work in @code{CK_NOFORK} mode as well, but without the
+forking. This means that only the first error will be shown.
+
+@node Test Timeouts, Determining Test Coverage, Looping Tests, Advanced Features
+@section Test Timeouts
+
+@findex tcase_set_timeout()
+@vindex CK_DEFAULT_TIMEOUT
+@vindex CK_TIMEOUT_MULTIPLIER
+To be certain that a test won't hang indefinitely, all tests are run
+with a timeout, the default being 4 seconds. If the test is not
+finished within that time, it is killed and logged as an error.
+
+The timeout for a specific test case, which may contain multiple unit
+tests, can be changed with the @code{tcase_set_timeout()} function.
+The default timeout used for all test cases can be changed with the
+environment variable @code{CK_DEFAULT_TIMEOUT}, but this will not
+override an explicitly set timeout. Another way to change the timeout
+length is to use the @code{CK_TIMEOUT_MULTIPLIER} environment variable,
+which multiplies all timeouts, including those set with
+@code{tcase_set_timeout()}, by the supplied integer value. All timeout
+arguments are in seconds and a timeout of 0 seconds turns off the timeout
+functionality.
+
+Test timeouts are only available in @code{CK_FORK} mode.
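+
+For example, to give each unit test in a hypothetical test case
+@code{tc_expensive} up to 30 seconds:
+@example
+@verbatim
+tcase_set_timeout (tc_expensive, 30);
+@end verbatim
+@end example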
+
+@node Determining Test Coverage, Test Logging, Test Timeouts, Advanced Features
+@section Determining Test Coverage
+
+The term @dfn{code coverage} refers to the extent that the statements
+of a program are executed during a run. Thus, @dfn{test coverage}
+refers to code coverage when executing unit tests. This information
+can help you to do two things:
+
+@itemize
+@item
+Write better tests that more fully exercise your code, thereby
+improving confidence in it.
+
+@item
+Detect dead code that could be factored away.
+@end itemize
+
+Check itself does not provide any means to determine this test
+coverage; rather, this is the job of the compiler and its related
+tools. In the case of @command{gcc} this information is easy to
+obtain, and other compilers should provide similar facilities.
+
+Using @command{gcc}, first enable test coverage profiling when
+building your source by specifying the @option{-fprofile-arcs} and
+@option{-ftest-coverage} switches:
+@example
+@verbatim
+$ gcc -g -Wall -fprofile-arcs -ftest-coverage -o foo foo.c foo_check.c
+@end verbatim
+@end example
+
+You will see that an additional @file{.gcno} file is created for each
+@file{.c} input file. After running your tests the normal way, a
+@file{.gcda} file is created for each @file{.gcno} file. These
+contain the coverage data in a raw format. To combine this
+information and a source file into a more readable format you can use
+the @command{gcov} utility:
+@example
+@verbatim
+$ gcov foo.c
+@end verbatim
+@end example
+
+This will produce the file @file{foo.c.gcov} which looks like this:
+@example
+@verbatim
+ -: 41: * object */
+ 18: 42: if (ht->table[p] != NULL) {
+ -: 43: /* replaces the current entry */
+ #####: 44: ht->count--;
+ #####: 45: ht->size -= ht->table[p]->size +
+ #####: 46: sizeof(struct hashtable_entry);
+@end verbatim
+@end example
+
+As you can see this is an annotated source file with three columns:
+usage information, line numbers, and the original source. The usage
+information in the first column can either be '-', which means that
+this line does not contain code that could be executed; '#####', which
+means this line was never executed although it does contain
+code---these are the lines that are probably most interesting for you;
+or a number, which indicates how often that line was executed.
+
+This is of course only a very brief overview, but it should illustrate
+how determining test coverage generally works, and how it can help
+you. For more information or help with other compilers, please refer
+to the relevant manuals.
+
+@node Test Logging, Subunit Support, Determining Test Coverage, Advanced Features
+@section Test Logging
+
+@findex srunner_set_log()
+Check supports an operation to log the results of a test run. To use
+test logging, call the @code{srunner_set_log()} function with the name
+of the log file you wish to create:
+@example
+@verbatim
+SRunner *sr;
+sr = srunner_create (make_s1_suite ());
+srunner_add_suite (sr, make_s2_suite ());
+srunner_set_log (sr, "test.log");
+srunner_run_all (sr, CK_NORMAL);
+@end verbatim
+@end example
+
+In this example, Check will write the results of the run to
+@file{test.log}. The @code{print_mode} argument to
+@code{srunner_run_all()} is ignored during test logging; the log will
+contain a result entry, organized by suite, for every test run. Here
+is an example of test log output:
+@example
+@verbatim
+Running suite S1
+ex_log_output.c:8:P:Core:test_pass: Test passed
+ex_log_output.c:14:F:Core:test_fail: Failure
+ex_log_output.c:18:E:Core:test_exit: (after this point) Early exit
+with return value 1
+Running suite S2
+ex_log_output.c:26:P:Core:test_pass2: Test passed
+Results for all suites run:
+50%: Checks: 4, Failures: 1, Errors: 1
+@end verbatim
+@end example
+
+@menu
+* XML Logging::
+@end menu
+
+@node XML Logging, , Test Logging, Test Logging
+@subsection XML Logging
+
+@findex srunner_set_xml()
+@findex srunner_has_xml()
+@findex srunner_xml_fname()
+The log can also be written in XML. The following functions define
+the interface for XML logs:
+@example
+@verbatim
+void srunner_set_xml (SRunner *sr, const char *fname);
+int srunner_has_xml (SRunner *sr);
+const char *srunner_xml_fname (SRunner *sr);
+@end verbatim
+@end example
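+
+A minimal sketch, mirroring the logging example above:
+@example
+@verbatim
+SRunner *sr;
+sr = srunner_create (make_s1_suite ());
+srunner_add_suite (sr, make_s2_suite ());
+srunner_set_xml (sr, "test.xml");
+srunner_run_all (sr, CK_NORMAL);
+@end verbatim
+@end example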
+
+As that sketch shows, the only thing you need to do to get XML output
+is call @code{srunner_set_xml()} before the tests are run. Here is an
+example of the same log output as before, but in XML:
+@example
+@verbatim
+<?xml version="1.0"?>
+<testsuites xmlns="http://check.sourceforge.net/ns">
+ <datetime>2004-08-20 12:53:32</datetime>
+ <suite>
+ <title>S1</title>
+ <test result="success">
+ <path>.</path>
+ <fn>ex_xml_output.c:8</fn>
+ <id>test_pass</id>
+ <description>Core</description>
+ <message>Passed</message>
+ </test>
+ <test result="failure">
+ <path>.</path>
+ <fn>ex_xml_output.c:14</fn>
+ <id>test_fail</id>
+ <description>Core</description>
+ <message>Failure</message>
+ </test>
+ <test result="error">
+ <path>.</path>
+ <fn>ex_xml_output.c:18</fn>
+ <id>test_exit</id>
+ <description>Core</description>
+ <message>Early exit with return value 1</message>
+ </test>
+ </suite>
+ <suite>
+ <title>S2</title>
+ <test result="success">
+ <path>.</path>
+ <fn>ex_xml_output.c:26</fn>
+ <id>test_pass2</id>
+ <description>Core</description>
+ <message>Passed</message>
+ </test>
+ </suite>
+ <duration>0.304875</duration>
+</testsuites>
+@end verbatim
+@end example
+
+@node Subunit Support, , Test Logging, Advanced Features
+@section Subunit Support
+
+Check supports running test suites with subunit output. This can be
+useful for combining test results from multiple languages, performing
+programmatic analysis on the results of multiple Check test suites, or
+otherwise handling test results programmatically. Using subunit with
+Check is very straightforward. There are two steps:
+
+@enumerate
+@item
+In your Check test suite driver, pass @code{CK_SUBUNIT} as the output
+mode for your @code{SRunner}:
+@example
+@verbatim
+SRunner *sr;
+sr = srunner_create (make_s1_suite ());
+srunner_add_suite (sr, make_s2_suite ());
+srunner_run_all (sr, CK_SUBUNIT);
+@end verbatim
+@end example
+
+@item
+Set up your main language test runner to run your Check-based test
+executable. For instance, using Python:
+@example
+@verbatim
+
+import subunit
+
+class ShellTests(subunit.ExecTestCase):
+ """Run some tests from the C codebase."""
+
+ def test_group_one(self):
+ """./foo/check_driver"""
+
+ def test_group_two(self):
+ """./foo/other_driver"""
+@end verbatim
+@end example
+@end enumerate
+
+In this example, running the test suite @code{ShellTests} in Python
+(using any test runner: unittest.py, tribunal, trial, nose, or others)
+will run @command{./foo/check_driver} and @command{./foo/other_driver}
+and report on their results.
+
+Subunit is hosted on Launchpad; the @uref{https://launchpad.net/subunit/,
+subunit} project there contains the bug tracker, future plans, and
+source code control details.
+
+@node Conclusion and References, AM_PATH_CHECK, Advanced Features, Top
+@chapter Conclusion and References
+The tutorial and description of advanced features have provided an
+introduction to all of the functionality available in Check.
+Hopefully, this is enough to get you started writing unit tests with
+Check. All the rest is simply repeated application of the ``test a
+little, code a little'' strategy to what you have learned so far.
+
+For further reference, see Kent Beck, ``Test-Driven Development: By
+Example'', 1st ed., Addison-Wesley, 2003. ISBN 0-321-14653-0.
+
+If you know of other authoritative references to unit testing and
+test-driven development, please send us a patch to this manual.
+
+@node AM_PATH_CHECK, Copying This Manual, Conclusion and References, Top
+@chapter AM_PATH_CHECK
+@findex AM_PATH_CHECK()
+
+The @code{AM_PATH_CHECK()} macro is defined in the file
+@file{check.m4} which is installed by Check. It has some optional
+parameters that you might find useful in your @file{configure.ac}:
+@verbatim
+AM_PATH_CHECK([MINIMUM-VERSION,
+ [ACTION-IF-FOUND[,ACTION-IF-NOT-FOUND]]])
+@end verbatim
+
+@code{AM_PATH_CHECK} does several things:
+
+@enumerate
+@item
+It ensures @file{check.h} is available.
+
+@item
+It ensures a compatible version of Check is installed.
+
+@item
+It sets @env{CHECK_CFLAGS} and @env{CHECK_LIBS} for use by Automake.
+@end enumerate
+
+If you include @code{AM_PATH_CHECK()} in @file{configure.ac} and
+subsequently see warnings when attempting to create
+@command{configure}, it probably means one of the following things:
+
+@enumerate
+@item
+You forgot to call @command{aclocal}. @command{autoreconf} will do
+this for you.
+
+@item
+@command{aclocal} can't find @file{check.m4}. Here are some possible
+solutions:
+
+@enumerate a
+@item
+Call @command{aclocal} with @option{-I} set to the location of
+@file{check.m4}. This means you have to call both @command{aclocal} and
+@command{autoreconf}.
+
+@item
+Add the location of @file{check.m4} to the @samp{dirlist} used by
+@command{aclocal} and then call @command{autoreconf}. This means you
+need permission to modify the @samp{dirlist}.
+
+@item
+Set @code{ACLOCAL_AMFLAGS} in your top-level @file{Makefile.am} to
+include @option{-I DIR} with @code{DIR} being the location of
+@file{check.m4}. Then call @command{autoreconf}.
+@end enumerate
+@end enumerate
+
+@node Copying This Manual, Index, AM_PATH_CHECK, Top
+@appendix Copying This Manual
+
+@menu
+* GNU Free Documentation License:: License for copying this manual.
+@end menu
+
+@include fdl.texi
+
+@node Index, , Copying This Manual, Top
+@unnumbered Index
+
+@printindex cp
+
+@bye