*****************************************************************
testscenarios: extensions to python unittest to support scenarios
*****************************************************************

Copyright (C) 2009  Robert Collins <robertc@robertcollins.net>

  This program is free software; you can redistribute it and/or modify
  it under the terms of the GNU General Public License as published by
  the Free Software Foundation; either version 2 of the License, or
  (at your option) any later version.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program; if not, write to the Free Software
  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA


testscenarios provides clean dependency injection for Python unittest-style
tests. This can be used for interface testing (testing many implementations via
a single test suite) or for classic dependency injection (providing tests with
dependencies externally to the test code itself, allowing easy testing in
different situations).

Dependencies
============

* Python 2.4+
* testtools <https://launchpad.net/testtools>

  >>> import testtools


Why TestScenarios
=================

Standard Python unittest.py provides one obvious method for running a single
test_foo method with two (or more) scenarios: creating a mix-in that
provides the functions, objects or settings that make up the scenario. This is
however limited and unsatisfying. Firstly, when two projects are cooperating
on a test suite (for instance, a plugin to a larger project may want to run
the standard tests for a given interface on its implementation), it is
easy for them to get out of sync with each other: when the list of TestCase
classes to mix in with changes, the plugin will either fail to run some tests
or error trying to run deleted tests. Secondly, it's not as easy to work with
runtime-created subclasses (a way of dealing with the aforementioned skew)
because they require more indirection to locate the source of the test, and
will often be ignored by tools such as pyflakes and pylint.

It is the intent of testscenarios to make dynamically running a single test
in multiple scenarios clear, easy to debug, and easy to work with, even when
the list of scenarios is dynamically generated.


Defining Scenarios
==================

A **scenario** is a tuple of a string name for the scenario, and a dict of
parameters describing the scenario.  The name is appended to the test name, and
the parameters are made available to the test instance when it's run.

Scenarios are presented in **scenario lists** which are typically Python lists
but may be any iterable.
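
For example, here is a scenario list with two scenarios, each supplying a
hypothetical ``value`` parameter::

  >>> scenarios = [
  ...     ('short', dict(value='a')),
  ...     ('long', dict(value='a much longer string')),
  ...     ]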


Getting Scenarios applied
=========================

At its heart the concept is simple: for a given test object with a list of
scenarios, we prepare a new test object for each scenario. This involves:

* Clone the test to a new test with a new id that uniquely distinguishes it.
* Apply the scenario to the clone by setting each key/value pair in the
  scenario dict as an attribute on the test object.
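
These two steps can be seen in isolation with the ``apply_scenario`` function
from ``testscenarios.scenarios``, which applies a single scenario to a single
test (a sketch; the test class and scenario here are illustrative)::

  >>> import unittest
  >>> from testscenarios.scenarios import apply_scenario
  >>> class SimpleCase(unittest.TestCase):
  ...     def test_nothing(self):
  ...         pass
  ...
  >>> cloned = apply_scenario(
  ...     ('fast', dict(speed=10)), SimpleCase('test_nothing'))
  >>> cloned.id().endswith('test_nothing(fast)')
  True
  >>> cloned.speed
  10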

There are some complicating factors around making this happen seamlessly. These
factors are in two areas:

* Choosing what scenarios to use. (See Setting Scenarios For A Test).
* Getting the multiplication to happen. 

Subclassing
+++++++++++

If you can subclass TestWithScenarios, then the ``run()`` method in
TestWithScenarios will take care of test multiplication. At test execution
time it acts as a generator, causing multiple tests to execute. For this to
work reliably, TestWithScenarios must be first in the MRO and you cannot
override run() or __call__. This is the most robust method, in the sense
that any test runner or test loader that obeys the Python unittest protocol
will run all your scenarios.
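
A minimal sketch (the class, parameter, and scenario names are illustrative)::

  >>> from testscenarios import TestWithScenarios
  >>> class TestMultiplied(TestWithScenarios):
  ...
  ...     scenarios = [
  ...         ('one', dict(param=1)),
  ...         ('two', dict(param=2)),
  ...         ]
  ...
  ...     def test_param_is_set(self):
  ...         # Each generated test sees the param from its scenario.
  ...         self.assertTrue(self.param in (1, 2))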

Manual generation
+++++++++++++++++

If you cannot subclass TestWithScenarios (e.g. because you are using
TwistedTestCase, or TestCaseWithResources, or any one of a number of other
useful test base classes, or need to override run() or __call__ yourself),
then you can cause scenario application to happen later by calling
``testscenarios.generate_scenarios()``. For instance::

  >>> import unittest
  >>> from testscenarios.scenarios import generate_scenarios

This can work with loaders and runners from the standard library, or possibly other
implementations::

  >>> loader = unittest.TestLoader()
  >>> test_suite = unittest.TestSuite()
  >>> runner = unittest.TextTestRunner()

  >>> mytests = loader.loadTestsFromNames(['example.test_sample'])
  >>> test_suite.addTests(generate_scenarios(mytests))
  >>> runner.run(test_suite)
  <unittest._TextTestResult run=1 errors=0 failures=0>

Testloaders
+++++++++++

Some test loaders support hooks like ``load_tests`` and ``test_suite``.
Ensuring your tests have had scenario application done through these hooks can
be a good idea - it means that external test runners which support these hooks
(such as ``nose``, ``trial``, ``tribunal``) will still run your scenarios. (Of
course, if you are using the subclassing approach this is already guaranteed.)
With ``load_tests``::

  >>> def load_tests(standard_tests, module, loader):
  ...     result = loader.suiteClass()
  ...     result.addTests(generate_scenarios(standard_tests))
  ...     return result

With ``test_suite``::

  >>> def test_suite():
  ...     loader = unittest.TestLoader()
  ...     tests = loader.loadTestsFromName(__name__)
  ...     result = loader.suiteClass()
  ...     result.addTests(generate_scenarios(tests))
  ...     return result


Setting Scenarios for a test
============================

A sample test using scenarios can be found in the doc/ folder.

See `pydoc testscenarios` for details.

On the TestCase
+++++++++++++++

You can set a scenarios attribute on the test case::

  >>> class MyTest(unittest.TestCase):
  ...
  ...     scenarios = [
  ...         ('scenario1', dict(param=1)),
  ...         ('scenario2', dict(param=2)),]

This provides the main interface by which scenarios are found for a given test.
Subclasses will inherit the scenarios (unless they override the attribute).
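
For example, a subclass keeps its parent's scenarios unless it provides its
own (the third scenario here is illustrative)::

  >>> class MyTestVariant(MyTest):
  ...
  ...     scenarios = [('scenario3', dict(param=3))]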

After loading
+++++++++++++

Test scenarios can also be generated arbitrarily later, as long as the test has
not yet run. Simply replace the scenarios attribute (or alter it, but be aware
that many tests may share a single scenarios attribute). For instance, in this
example some third party tests are extended to run with a custom scenario. ::

  >>> class TestTransport:
  ...     """Hypothetical test case for bzrlib transport tests"""
  ...     pass
  ...
  >>> stock_library_tests = unittest.TestLoader().loadTestsFromNames(
  ...     ['example.test_sample'])
  ...
  >>> for test in testtools.iterate_tests(stock_library_tests):
  ...     if isinstance(test, TestTransport):
  ...         test.scenarios = test.scenarios + [my_vfs_scenario]
  ...
  >>> suite = unittest.TestSuite()
  >>> suite.addTests(generate_scenarios(stock_library_tests))

Generated tests don't have a ``scenarios`` list, because they don't normally
require any more expansion.  However, you can add a ``scenarios`` list back on
to them, and then run them through ``generate_scenarios`` again to generate the
cross product of tests. ::

  >>> class CrossProductDemo(unittest.TestCase):
  ...     scenarios = [('scenario_0_0', {}),
  ...                  ('scenario_0_1', {})]
  ...     def test_foo(self):
  ...         return
  ...
  >>> suite = unittest.TestSuite()
  >>> suite.addTests(generate_scenarios(CrossProductDemo("test_foo")))
  >>> for test in testtools.iterate_tests(suite):
  ...     test.scenarios = [
  ...         ('scenario_1_0', {}), 
  ...         ('scenario_1_1', {})]
  ...
  >>> suite2 = unittest.TestSuite()
  >>> suite2.addTests(generate_scenarios(suite))
  >>> print suite2.countTestCases()
  4

Dynamic Scenarios
+++++++++++++++++

A common use case is to have the list of scenarios be dynamic, based on plugins
and available libraries. An easy way to do this is to provide a module-scope
scenario list somewhere relevant to the tests that will use it, which can then
be customised, or to populate your scenarios dynamically from a registry, etc.
For instance::

  >>> hash_scenarios = []
  >>> try:
  ...     from hashlib import md5
  ... except ImportError:
  ...     pass
  ... else:
  ...     hash_scenarios.append(("md5", dict(hash=md5)))
  >>> try:
  ...     from hashlib import sha1
  ... except ImportError:
  ...     pass
  ... else:
  ...     hash_scenarios.append(("sha1", dict(hash=sha1)))
  ...
  >>> class TestHashContract(unittest.TestCase):
  ...
  ...     scenarios = hash_scenarios
  ...
  >>> class TestHashPerformance(unittest.TestCase):
  ...
  ...     scenarios = hash_scenarios


Forcing Scenarios
+++++++++++++++++

The ``apply_scenarios`` function can be used to apply scenarios to a test
that has none applied. ``apply_scenarios`` is the workhorse for
``generate_scenarios``, except that it takes the scenarios passed in rather
than introspecting the test object to determine them. The
``apply_scenarios`` function does not reset the test's scenarios attribute,
allowing it to be used to layer scenarios without affecting existing scenario
selection.
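
For instance, this sketch (the test class and scenario list are illustrative)
applies two scenarios to a test without consulting its ``scenarios``
attribute::

  >>> from testscenarios.scenarios import apply_scenarios
  >>> class ForcedTest(unittest.TestCase):
  ...     def test_something(self):
  ...         pass
  ...
  >>> forced = [('one', dict(param=1)), ('two', dict(param=2))]
  >>> tests = list(apply_scenarios(forced, ForcedTest('test_something')))
  >>> [test.param for test in tests]
  [1, 2]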


Advice on Writing Scenarios
===========================

If a parameterised test is, because of a bug, run without being parameterised,
it should fail rather than run with defaults, because defaults can hide bugs.
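
One way to get that behaviour is to give scenario parameters no class-level
defaults, so an unparameterised run errors immediately. A sketch, reusing the
``hash_scenarios`` list above::

  >>> class TestHashBehaviour(unittest.TestCase):
  ...
  ...     # No class-level default for ``hash``: if scenarios are not
  ...     # applied, the test errors with AttributeError rather than
  ...     # silently exercising a single default implementation.
  ...     scenarios = hash_scenarios
  ...
  ...     def test_digest_not_empty(self):
  ...         self.assertTrue(len(self.hash('').hexdigest()) > 0)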