author    William Deegan <bill@baddogconsulting.com>  2023-03-19 16:37:46 -0700
committer GitHub <noreply@github.com>                 2023-03-19 16:37:46 -0700
commit    8d56b13c07c1e3c054a5d1167d9cfa4d5df7aea3 (patch)
tree      e084c40d041db4ffac9b892979b4229ed2da77f9
parent    e46589fc002e84871e9f95399d739b056866a0ae (diff)
parent    ca8619bb9a1031301245e00d780e13d093618627 (diff)
download  scons-git-8d56b13c07c1e3c054a5d1167d9cfa4d5df7aea3.tar.gz
Merge pull request #4308 from mwichmann/doc/testfw-update
Update test framework tutorial
-rw-r--r--  .editorconfig                          26
-rw-r--r--  testing/framework/test-framework.rst  153
2 files changed, 140 insertions, 39 deletions
diff --git a/.editorconfig b/.editorconfig
new file mode 100644
index 000000000..3d2b699a7
--- /dev/null
+++ b/.editorconfig
@@ -0,0 +1,26 @@
+# EditorConfig file for SCons project
+
+root = true
+
+[*]
+indent_style = space
+indent_size = 4
+insert_final_newline = true
+trim_trailing_whitespace = true
+end_of_line = lf
+charset = utf-8
+
+[*.py]
+max_line_length = 88
+ensure_newline_before_comments = true
+include_trailing_comma = true
+use_parentheses = true
+
+[*.xml]
+indent_size = 2
+
+[*.rst]
+indent_size = 3
+
+[*.bat]
+end_of_line = crlf
diff --git a/testing/framework/test-framework.rst b/testing/framework/test-framework.rst
index 265b82e74..01893ee61 100644
--- a/testing/framework/test-framework.rst
+++ b/testing/framework/test-framework.rst
@@ -61,7 +61,7 @@ scripts as we find them.)
End-to-end tests are by their nature harder to debug.
You can drop straight into the Python debugger on the unit test
scripts by using the ``runtest.py --pdb`` option, but the end-to-end
-tests treat an SCons invocation as a "black box" and just look for
+tests treat an SCons invocation as a *black box* and just look for
external effects; simple methods like inserting ``print`` statements
in the SCons code itself can disrupt those external effects.
See `Debugging end-to-end tests`_ for some more thoughts.
@@ -72,7 +72,7 @@ Naming conventions
The end-to-end tests, more or less, stick to the following naming
conventions:
-#. All tests end with a .py suffix.
+#. All tests end with a ``.py`` suffix.
#. In the *General* form we use
``Feature.py``
@@ -161,7 +161,7 @@ a function which takes a path-component argument and returns the path to
that path-component in the testing directory.
The use of an ephemeral test directory means that you can't simply change
-into a directory to "debug things" after a test has gone wrong.
+into a directory to debug after a test has gone wrong.
For a way around this, check out the ``PRESERVE`` environment variable.
It can be seen in action in `How to convert old tests to use fixures`_ below.
@@ -170,7 +170,7 @@ Not running tests
If you simply want to check which tests would get executed, you can call
the ``runtest.py`` script with the ``-l`` option combined with whichever
-test finding options (see below) you intend to use. Example::
+test selection options (see below) you intend to use. Example::
$ python runtest.py -l test/scons-time
@@ -179,8 +179,8 @@ each test which would have been run, but doesn't actually run them::
$ python runtest.py -n -a
-Finding Tests
-=============
+Selecting tests
+===============
When started in *standard* mode::
@@ -209,11 +209,11 @@ The same rules apply when testing external Tools when using the ``-e``
option.
-Example End-to-End Test Script
+Example End-to-End test script
==============================
To illustrate how the end-to-end test scripts work, let's walk through
-a simple "Hello, world!" example::
+a simple *Hello, world!* example::
#!python
import TestSCons
@@ -241,6 +241,8 @@ a simple "Hello, world!" example::
test.pass_test()
+Explanation
+-----------
``import TestSCons``
Imports the main infrastructure for writing SCons tests. This is
@@ -249,7 +251,7 @@ a simple "Hello, world!" example::
imported before this line.
``test = TestSCons.TestSCons()``
- This initializes an object for testing. A fair amount happens under
+ Initializes an object for testing. A fair amount happens under
the covers when the object is created, including:
* A temporary directory is created for all the in-line files that will
@@ -302,8 +304,8 @@ Working with fixtures
In the simple example above, the files to set up the test are created
on the fly by the test program. We give a filename to the ``TestSCons.write()``
-method, and a string holding its contents, and it gets written to the test
-directory right before starting..
+method, plus a string holding its contents, and it gets written to the test
+directory right before starting.
This simple technique can be seen throughout most of the end-to-end
tests as it was the original technique provided to test developers,
@@ -321,12 +323,12 @@ for code, so the effect is lost on them.
In testing parlance, a fixture is a repeatable test setup. The SCons
test harness allows the use of saved files or directories to be used
-in that sense: "the fixture for this test is foo", instead of writing
+in that sense: *the fixture for this test is foo*, instead of writing
a whole bunch of strings to create files. Since these setups can be
reusable across multiple tests, the *fixture* terminology applies well.
Note: fixtures must not be treated by SCons as runnable tests. To exclude
-them, see instructions in the above section named "Finding Tests".
+them, see instructions in the above section named `Selecting tests`_.
Directory fixtures
------------------
@@ -390,9 +392,11 @@ would have been placed in the top level of the test directory.
Again, a reference example can be found in the current revision
of SCons, see ``test/packaging/sandbox-test/sandbox-test.py``.
-For even more examples you should check out
-one of the external Tools, e.g. the *Qt4* Tool at
-https://bitbucket.org/dirkbaechle/scons_qt4. Also visit the SCons Tools
+For even more examples you should check out one of the external Tools,
+e.g. the *Qt5* Tool at
+https://github.com/SCons/scons-contrib/tree/master/sconscontrib/SCons/Tool/qt5.
+There are many other tools in the contrib repository,
+and you can also visit the SCons Tools
Index at https://github.com/SCons/scons/wiki/ToolsIndex for a complete
list of available Tools, though not all may have tests yet.
@@ -453,23 +457,24 @@ kind of usage that does not lend itself to a fixture::
import TestSCons
_python_ = TestSCons._python_
- test.write('SConstruct', """
+ test.write('SConstruct', f"""
cc = Environment().Dictionary('CC')
env = Environment(
- LINK=r'%(_python_)s mylink.py',
+ LINK=r'{_python_} mylink.py',
LINKFLAGS=[],
- CC=r'%(_python_)s mycc.py',
+ CC=r'{_python_} mycc.py',
CXX=cc,
CXXFLAGS=[],
)
env.Program(target='test1', source='test1.c')
- """ % locals())
+ """
-Here the value of ``_python_`` is picked out of the script's
-``locals`` dictionary - which works because we've set it above -
-and interpolated using a mapping key into the string that will
-be written to ``SConstruct``. A fixture would be hard to use
-here because we don't know the value of ``_python_`` until runtime.
+Here the value of ``_python_`` from the test program is
+pasted in via f-string formatting. A fixture would be hard to use
+here because we don't know the value of ``_python_`` until runtime
+(also note that as it will be a full pathname, it's entered as a
+Python raw string to avoid interpretation problems on Windows,
+where the path separator is a backslash).
The other files created in this test may still be candidates for
use as fixture files, however.
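
Assembled into one piece, the f-string version reads roughly as follows
(a sketch only, restating the fragments above; it assumes the ``mylink.py``
and ``mycc.py`` mock scripts that the full test also creates)::

    import TestSCons

    _python_ = TestSCons._python_
    test = TestSCons.TestSCons()

    # The f-string pastes the test-time value of _python_ into the
    # SConstruct text; the r'...' prefixes survive into the SConstruct,
    # so backslashes in a Windows path are not reinterpreted.
    test.write('SConstruct', f"""
    cc = Environment().Dictionary('CC')
    env = Environment(
        LINK=r'{_python_} mylink.py',
        LINKFLAGS=[],
        CC=r'{_python_} mycc.py',
        CXX=cc,
        CXXFLAGS=[],
    )
    env.Program(target='test1', source='test1.c')
    """)
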
@@ -518,7 +523,7 @@ for debugging purposes. If you have a failing test, try::
You can now go to the save directory reported from this run
and invoke the test manually to see what it is doing, without
the presence of the test infrastructure which would otherwise
-"swallow" output you may be interested in. In this case,
+consume output you may be interested in. In this case,
adding debug prints may be more useful.
@@ -528,17 +533,17 @@ Test infrastructure
The main test API is defined in the ``TestSCons`` class. ``TestSCons``
is a subclass of ``TestCommon``, which is a subclass of ``TestCmd``.
All those classes are defined in Python files of the same name
-in ``testing/framework``.
+in ``testing/framework``.
Start in ``testing/framework/TestCmd.py`` for the base API definitions, like how
to create files (``test.write()``) and run commands (``test.run()``).
Use ``TestSCons`` for the end-to-end tests in ``test``, but use
-``TestCmd`` for the unit tests in the ``src`` directory.
+``TestCmd`` for the unit tests in the ``SCons`` directory.
The match functions work like this:
``TestSCons.match_re``
- match each line with a RE
+ match each line with an RE
* Splits the lines into a list (unless they already are)
* splits the REs at newlines (unless already a list)
@@ -614,14 +619,84 @@ plumbed into the environment. These things can be tested by mocking the
behavior of the executable. Many examples of this can be found in the
``test`` directory. See for example ``test/subdivide.py``.
-This leads to a suggestion for E2E test organization because the framework
-doesn't have a way to indicate a partial skip - if you executed
-200 lines of test, then found a condition which caused you to skip the
-last 20 lines, the whole test is marked as a skip;
-it also doesn't have a way to indicate a partial pass.
-To improve on this, keep tool tests which don't need the
-underlying program in separate files from ones which do -
-that way one can see in the test results that the "plumbing"
-tests worked even if the the ones using the underlying program
-maybe were skipped.
+Testing DOs and DONTs
+=====================
+
+There's no question that having to write tests in order to get a change
+approved - even an apparently trivial change - makes it a little harder
+to contribute to the SCons code base, but the requirement that features
+and bugfixes be testable is a necessary part of ensuring SCons quality.
+Thinking of SCons development in terms of the red/green model from
+Test Driven Development should make things a little easier.
+
+If you are working on an SCons bug, try to come up with a simple
+reproducer first. Bug reports (even your own!) are often like *I tried
+to do this but it surprisingly failed*, and a reproducer is normally an
+``SConstruct`` along with, probably, some supporting files such as source
+files, data files, subsidiary SConscripts, etc. Try to make this example
+as simple and clean as possible. No, this isn't necessarily easy to do,
+but winnowing down what triggers a problem and removing the stuff that
+doesn't actually contribute to triggering it is a step that
+lets you (and later readers) more clearly understand what is going on.
+You don't have to turn this into a formal testcase yet, but keep this
+reproducer around, and document with it what you expect to happen,
+and what actually happens. This material will help produce an E2E
+test later, and this is something you *may* be able to get help with,
+if the way the tests are usually written and the test harness proves
+too confusing. With a clean test in hand (make sure it's failing!)
+you can go ahead and code up a fix and make sure it passes with the fix
+in place. Jumping straight to a fix without working on a testcase like
+this will often lead to a disappointing *how do I come up with a test
+so the maintainer will be willing to merge* phase. Asking questions on
+a public forum can be productive here.
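+
+A reproducer can be quite small. Purely for illustration (the file names
+and the expected/actual notes here are made up), it might look like::
+
+    # SConstruct -- minimal reproducer
+    env = Environment()
+    # Expected: hello is built from hello.c
+    # Actual:   <describe the failure you see here>
+    env.Program('hello', 'hello.c')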
+
+E2E-specific Suggestions:
+
+* Do not require the use of an external tool unless necessary.
+ Usually the SCons behavior is the thing we want to test,
+ not the behavior of the external tool. *Necessary* is not a precise term -
+ sometimes it would be too time-consuming to write a script to mock
+ a compiler with an extensive set of options, and sometimes it's
+ not a good idea to assume you know what all those will do vs what
+ the real tool does; there may be other good reasons for just going
+ ahead and calling the external tool.
+* If using an external tool, be prepared to skip the test if it is
+  unavailable (see the sketch following this list).
+* Do not combine tests that need an external tool with ones that
+ do not - divide these into separate test files. There is no concept
+ of partial skip for e2e tests, so if you successfully complete seven
+ of eight tests, and then come to a conditional "skip if tool missing"
+ or "skip if on Windows", and that branch is taken, then the
+ whole test file ends up skipped, and the seven that ran will
+ never be recorded. Some tests follow the convention of creating a
+ second test file with the ending ``-live`` for the part that requires
+ actually running the external tool.
+* In testing, *fail fast* is not always the best policy - if you can think
+ of many scenarios that could go wrong and they are all run linearly in
+ a single test file, then you only hear about the first one that fails.
+ In some cases it may make sense to split them out a bit more, so you
+ can see several fails at once, which may show a helpful failure pattern
+ you wouldn't spot from a single fail.
+* Use test fixtures where it makes sense, and in particular, try to
+ make use of shareable mocked tools, which, by getting lots of use,
+ will be better debugged (that is, don't have each test produce its
+ own ``myfortran.py`` or ``mylex.py`` etc. unless they need drastically
+ different behaviors).
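+
+A minimal sketch of the skip-if-unavailable pattern mentioned above
+(``gfortran`` is just a hypothetical requirement here)::
+
+    import TestSCons
+
+    test = TestSCons.TestSCons()
+
+    # Look for the real tool; skip the whole test file if it is absent.
+    gfortran = test.where_is('gfortran')
+    if not gfortran:
+        test.skip_test("gfortran not found; skipping test.\n")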
+
+Unittest-specific hints:
+
+- Let the ``unittest`` module help! Lots of the existing tests just
+ use a bare ``assert`` call for checks, which works fine, but then
+ you are responsible for preparing the message if it fails. The base
+ ``TestCase`` class has methods which know how to display many things,
+ for example ``self.assertEqual()`` displays in what way the two arguments
+ differ if they are *not* equal. Checking for an expected exception can
+ be done with ``self.assertRaises()`` rather than crafting a stub of
+ code using a try block for this situation (a short sketch follows
+ this list).
+- The *fail fast* consideration applies here, too: try not to fail a whole
+ testcase on the first problem, if there are more checks to go.
+ Again, existing tests may use elaborate tricks for this, but modern
+ ``unittest`` has a ``subTest`` context manager that can be used to wrap
+ each distinct piece and not abort the testcase for a failing subtest
+ (to be fair, this functionality is a recent addition, after most SCons
+ unit tests were written - but it should be used going forward).
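+
+A short sketch combining both hints (the test case and values are made up
+for illustration)::
+
+    import unittest
+
+    class SquareTestCase(unittest.TestCase):
+        def test_squares(self):
+            cases = [(2, 4), (3, 9), (4, 16)]
+            for value, expected in cases:
+                # A failing subTest is reported but does not abort the loop.
+                with self.subTest(value=value):
+                    self.assertEqual(value * value, expected)
+
+        def test_bad_operand(self):
+            # assertRaises replaces a hand-written try/except check.
+            with self.assertRaises(TypeError):
+                "two" * "three"
+
+    if __name__ == "__main__":
+        unittest.main()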