.. _contributing_testing:

Using the test suite
--------------------
BuildStream uses `tox <https://tox.readthedocs.org/>`_ as a frontend to run the
tests, which are implemented using `pytest <https://pytest.org/>`_. We use
pytest for regression tests and for testing the behavior of newly added
components.

The complete pytest documentation can be found here: https://docs.pytest.org/en/latest/contents.html

Don't get lost in the docs if you don't need to; follow existing examples instead.


.. _contributing_build_deps:

Installing build dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some of BuildStream's dependencies have non-Python build dependencies. When
running tests with ``tox``, you will first need to install these dependencies.
The exact steps to install them will depend on your operating system. Commands
for installing them on some common distributions are listed below.

For Fedora-based systems::

  dnf install gcc python3-devel


For Debian-based systems::

  apt install gcc python3-dev


Installing runtime dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To be able to run BuildStream as part of the test suite, BuildStream's runtime
dependencies must also be installed. Instructions on how to do so can be found
in :ref:`install-dependencies`.

If you are not interested in running the integration tests, you can skip the
installation of ``buildbox-run``.


Running tests
~~~~~~~~~~~~~
To run the tests, simply navigate to the toplevel directory of your BuildStream
checkout and run::

  tox

By default, the test suite will be run against every supported Python version
found on your host. If you have multiple Python versions installed, you may
want to run the tests against only one version; you can do that using the
``-e`` option when running tox::

  tox -e py37

If you would like to test and lint at the same time, or if you do have multiple
Python versions installed and would like to test against multiple versions, then
we recommend using `detox <https://github.com/tox-dev/detox>`_; just run it with
the same arguments you would give ``tox``::

  detox -e lint,py36,py37

The output of all failing tests will always be printed in the summary, but
if you want to observe the stdout and stderr generated by a passing test,
you can pass the ``-s`` option to pytest like so::

  tox -- -s

.. tip::

   The ``-s`` option is `a pytest option <https://docs.pytest.org/en/latest/usage.html>`_.

   Any options specified before the ``--`` separator are consumed by ``tox``,
   and any options after the ``--`` separator will be passed along to pytest.

You can always abort on the first failure by running::

  tox -- -x

Similarly, you may also be interested in the ``--last-failed`` and
``--failed-first`` options as per the
`pytest cache <https://docs.pytest.org/en/latest/cache.html>`_ documentation.
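
For example, to re-run only the tests that failed during the previous run::

  tox -- --last-failed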

If you want to run a specific test or a group of tests, you
can specify a prefix to match. For example, to run all of
the frontend tests::

  tox -- tests/frontend/

Specific tests can be chosen by using the ``::`` delimiter after the test module.
For example, to run the ``test_build_track`` test within ``tests/frontend/buildtrack.py``::

  tox -- tests/frontend/buildtrack.py::test_build_track

When running only a few tests, you may find the coverage and timing output
excessive; there are options to trim them. Note that the coverage step will
fail in this case. Here is an example::

  tox -- --no-cov --durations=1 tests/frontend/buildtrack.py::test_build_track

We also have a set of slow integration tests that are disabled by
default; you will notice most of them marked with SKIP in the pytest
output. To run them, you can use::

  tox -- --integration

In case BuildStream's dependencies were updated since you last ran the
tests, you might see some errors like
``pytest: error: unrecognized arguments: --codestyle``. If this happens, you
will need to force ``tox`` to recreate the test environment(s). To do so, you
can run ``tox`` with the ``-r`` or ``--recreate`` option.
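
For example, to recreate the test environments and re-run the tests::

  tox -r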

.. note::

   By default, we do not allow use of site packages in our ``tox``
   configuration to enable running the tests in an isolated environment.
   If you need to enable use of site packages for whatever reason, you can
   do so by passing the ``--sitepackages`` option to ``tox``. Also, you will
   not need to install any of the build dependencies mentioned above if you
   use this approach.
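
   For example, to run the tests for a single Python version using your
   system's site packages::

     tox --sitepackages -e py37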

.. note::

   While using ``tox`` is practical for developers running tests in
   more predictable execution environments, it is still possible to
   execute the test suite against a specific installation environment
   using pytest directly::

     pytest

   If you want to run coverage, you will need to add ``BST_CYTHON_TRACE=1``
   to your environment if you also want coverage on Cython files. You could then
   get coverage by running::

     BST_CYTHON_TRACE=1 coverage run -m pytest
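
   You can then inspect the collected data with the usual coverage.py
   commands, e.g.::

     coverage report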

   Note that you will have to have all dependencies installed already when
   running tests directly via ``pytest``. This includes the following:

   * Cython and Setuptools, as build dependencies
   * Runtime dependencies and test dependencies are specified in requirements
     files, present in the ``requirements`` subdirectory. Refer to the ``.in``
     files for loose dependencies and the ``.txt`` files for fixed versions of
     all dependencies that are known to work.
   * Additionally, if you are running tests that involve external plugins, you
     will need to have those installed as well.

   You can also have a look at our tox configuration in the ``tox.ini`` file
   if you are unsure about dependencies.

.. tip::

   We also have an environment called ``venv`` which takes any arguments
   you give it and runs them inside the same virtualenv we use for our
   tests::

     tox -e venv -- <your command(s) here>

   Any commands after ``--`` will be run in a virtualenv managed by tox.
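
   For example, to inspect the exact package versions installed in that
   virtualenv::

     tox -e venv -- pip freeze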

Running linters
~~~~~~~~~~~~~~~
Linting is performed separately from testing. To run the linting step, which
consists of running the ``pylint`` tool, run the following::

  tox -e lint

.. tip::

   The project-specific pylint configuration is stored in the ``.pylintrc``
   file in the toplevel BuildStream directory. This configuration can be
   useful with IDEs and other developer tooling.
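
   For example, a standalone ``pylint`` invocation using this configuration
   might look like the following (the source path here is only illustrative)::

     pylint --rcfile=.pylintrc src/buildstream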

.. _contributing_formatting_code:

Formatting code
~~~~~~~~~~~~~~~
Similar to linting, code formatting is also done via a ``tox`` environment. To
format the code using the ``black`` tool, run the following::

  tox -e format

Observing coverage
~~~~~~~~~~~~~~~~~~
Once you have run the tests using ``tox`` (or ``detox``), some coverage reports
will have been left behind.

To view the coverage report of the last test run, simply run::

  tox -e coverage

This will collate any reports from the separate Python environments that may be
under test before displaying the combined coverage.


Adding tests
~~~~~~~~~~~~
Tests are found in the ``tests`` subdirectory, inside of which
there is a separate directory for each *domain* of tests.
All tests are collected as::

  tests/*/*.py

If the new test is not appropriate for the existing test domains,
then simply create a new directory for it under the ``tests`` subdirectory.

Various tests may include data files to test on; there are examples
of this in the existing tests. When adding data for a test, create
a subdirectory beside your test in which to store the data.
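
For example, a new test and its data might be laid out like this (the names
here are purely illustrative)::

  tests/
    mydomain/
      mytest.py
      mytest/
        project.conf
        elements/
          mytest.bst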

When creating a test that needs data, use the ``datafiles`` extension
to decorate your test case (again, examples exist in the existing
tests for this). Documentation on the ``datafiles`` extension can
be found here: https://pypi.org/project/pytest-datafiles/.

Tests that run a sandbox should be decorated with::

  @pytest.mark.integration

and use the integration cli helper.
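
As a rough illustration, a minimal integration test might look something like
the sketch below. The element name, the data directory and the exact helper
import are assumptions here; follow the patterns used by the existing
integration tests::

  import os
  import pytest

  # The integration cli helper, used as the ``cli`` fixture; check the
  # existing integration tests for the exact import to use.
  from buildstream.testing import cli_integration as cli  # pylint: disable=unused-import

  # Hypothetical data directory stored beside this test module
  DATA_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "project")


  @pytest.mark.integration
  @pytest.mark.datafiles(DATA_DIR)
  def test_build_example(cli, datafiles):
      project = str(datafiles)

      # Build a (hypothetical) element through the cli helper and
      # assert that the command succeeded
      result = cli.run(project=project, args=["build", "hello.bst"])
      result.assert_success()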

You must test your changes in an end-to-end fashion. Consider the first end to
be the appropriate user interface, and the other end to be the change you have
made.

The aim for our tests is to make assertions about how you impact and define the
outward user experience. You should be able to exercise all code paths via the
user interface, just as one can test the strength of rivets by sailing dozens
of ocean liners. Keep in mind that your ocean liners could be sailing properly
*because* of a malfunctioning rivet. End-to-end testing will warn you that
fixing the rivet will sink the ships.

The primary user interface is the cli, so that should be the first target 'end'
for testing. Most of the value of BuildStream comes from what you can achieve
with the cli.

We also have what we call a *"Public API Surface"*, as previously mentioned in
:ref:`contributing_documenting_symbols`. You should consider this a secondary
target. This is mainly for advanced users to implement their plugins against.

Note that both of these targets for testing are guaranteed to continue working
in the same way across versions. This means that tests written in terms of them
will be robust to large changes to the code. This important property means that
BuildStream developers can make large refactorings without needing to rewrite
fragile tests.

Another user to consider is the BuildStream developer; therefore, internal API
surfaces are also targets for testing, for example the YAML loading code and
the CasCache. Remember that these surfaces are still just a means to the end of
providing value through the cli and the *"Public API Surface"*.

It may be impractical to sufficiently examine some changes in an end-to-end
fashion. The number of cases to test, and the running time of each test, may be
too high. Such typically low-level things, e.g. parsers, may also be tested
with unit tests, alongside the mandatory end-to-end tests.

It is important to write unit tests that are not fragile, i.e. in such a way
that they do not break due to changes unrelated to what they are meant to test.
For example, if the test relies on a lot of BuildStream internals, a large
refactoring will likely require the test to be rewritten. Pure functions that
only rely on the Python Standard Library are excellent candidates for unit
testing.

Unit tests only make it easier to implement things correctly; end-to-end tests
make it easier to implement the right thing.