author     Lars Wirzenius <lars.wirzenius@codethink.co.uk>  2014-01-20 14:24:27 +0000
committer  Lars Wirzenius <lars.wirzenius@codethink.co.uk>  2014-04-15 13:29:27 +0000
commit     4fc162b07b2e9d8489e16ed647e5d96f5c66e10a (patch)
tree       ac2a2a5b86a5d789bd28b383851b28d7f293b928 /yarns.webapp
parent     716ad28c18ac00c52797dc42c843569b1834fb88 (diff)
download   lorry-controller-4fc162b07b2e9d8489e16ed647e5d96f5c66e10a.tar.gz
Add new Lorry Controller
Diffstat (limited to 'yarns.webapp')
-rw-r--r--  yarns.webapp/010-introduction.yarn     |  77
-rw-r--r--  yarns.webapp/020-status.yarn           |  27
-rw-r--r--  yarns.webapp/030-queue-management.yarn | 106
-rw-r--r--  yarns.webapp/040-running-jobs.yarn     | 260
-rw-r--r--  yarns.webapp/050-troves.yarn           |  76
-rw-r--r--  yarns.webapp/060-validation.yarn       | 190
-rw-r--r--  yarns.webapp/900-implementations.yarn  | 484
-rw-r--r--  yarns.webapp/yarn.sh                   |  56
8 files changed, 1276 insertions, 0 deletions
diff --git a/yarns.webapp/010-introduction.yarn b/yarns.webapp/010-introduction.yarn
new file mode 100644
index 0000000..ae3af58
--- /dev/null
+++ b/yarns.webapp/010-introduction.yarn
@@ -0,0 +1,77 @@
+% Lorry Controller WEBAPP integration test suite
+% Codethink Ltd
+
+
+Introduction
+============
+
+This is an integration test suite for the WEBAPP component of Lorry
+Controller. It is implemented using the [yarn] tool and uses a style
+of automated testing called "scenario testing" by the tool authors.
+
+[yarn]: http://liw.fi/cmdtest/README.yarn/
+
+As an example, here is a scenario that verifies that the Lorry
+Controller WEBAPP can be started at all:
+
+ SCENARIO WEBAPP can be started at all
+ WHEN WEBAPP --help is requested
+ THEN WEBAPP --help exited with a zero exit code
+
+A scenario consists of a sequence of steps that can be executed by a
+computer. The steps are then defined using IMPLEMENTS:
+
+ IMPLEMENTS WHEN WEBAPP --help is requested
+ if "$SRCDIR/lorry-controller-webapp" --help
+ then
+ exit=0
+ else
+ exit=$?
+ fi
+ echo "$exit" > "$DATADIR/webapp.exit"
+
+And another:
+
+ IMPLEMENTS THEN WEBAPP --help exited with a zero exit code
+ grep -Fx 0 "$DATADIR/webapp.exit"
+
+Yarn runs the scenarios in the order it finds them. If all steps
+in a scenario succeed, the scenario succeeds.
+
+Scenarios, though not their implementations, are intended to be
+understandable by people who aren't programmers, even if some
+understanding of the technology is required.
+
+For more information, see the documentation for yarn.
+
+
+Test environment and setup
+==========================
+
+In this chapter, we discuss how the environment is set up for tests
+to run in. Yarn provides a temporary directory in which tests can
+create temporary files and directories, and sets the environment
+variable `$DATADIR` to point at that directory. Yarn also deletes the
+directory and all of its contents at the end, so the test suite
+itself does not need to do that.
+
+We put several files into `$DATADIR`.
+
+* The WEBAPP STATEDB database file.
+* Responses from HTTP queries to WEBAPP.
+* PID of the running WEBAPP.
+
+The purpose of each file is documented with the IMPLEMENTS sections
+that use it, typically with the one that creates it.
+
+Since many scenarios will start an instance of WEBAPP, they also need
+to make sure it gets killed. There are steps for this (`GIVEN a
+running WEBAPP` and `FINALLY WEBAPP terminates`), which MUST be used
+as a pair in each scenario: having only one of these steps is always
+a bug in the scenario, whereas having neither is OK.
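+
+As a minimal sketch, using only steps that are implemented later in
+this suite, the pairing looks like this:
+
+    SCENARIO WEBAPP start and stop steps are used as a pair
+    GIVEN a running WEBAPP
+    WHEN admin makes request GET /1.0/status
+    THEN response is application/json
+    FINALLY WEBAPP terminates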
+
+WEBAPP stores its persistent state in STATEDB, which is an SQLite
+database on disk. Our tests do _not_ touch it directly, only via
+WEBAPP, so that the tests do not encode internals of the database,
+such as its schema. We only care that WEBAPP works; the database
+schema of STATEDB is _not_ a public interface.
diff --git a/yarns.webapp/020-status.yarn b/yarns.webapp/020-status.yarn
new file mode 100644
index 0000000..5749920
--- /dev/null
+++ b/yarns.webapp/020-status.yarn
@@ -0,0 +1,27 @@
+WEBAPP status reporting
+=======================
+
+WEBAPP reports its status via an HTTP request. We verify that when it
+starts up, the status is that it is doing nothing: there are no jobs,
+it has no Lorry or Trove specs.
+
+ SCENARIO WEBAPP is idle when it starts
+ GIVEN a running WEBAPP
+ WHEN admin makes request GET /1.0/status
+ THEN response is application/json
+ AND response has running_queue set to true
+ AND response has disk_free set
+ AND response has disk_free_mib set
+ AND response has disk_free_gib set
+ AND static status page got updated
+ FINALLY WEBAPP terminates
+
+As an alternative, we can request the HTML rendering of the status
+directly with `/1.0/status-html`.
+
+    SCENARIO WEBAPP provides HTML status directly
+ GIVEN a running WEBAPP
+ WHEN admin makes request GET /1.0/status-html
+ THEN response is text/html
+ AND static status page got updated
+ FINALLY WEBAPP terminates
diff --git a/yarns.webapp/030-queue-management.yarn b/yarns.webapp/030-queue-management.yarn
new file mode 100644
index 0000000..91a8511
--- /dev/null
+++ b/yarns.webapp/030-queue-management.yarn
@@ -0,0 +1,106 @@
+Run queue management
+====================
+
+This chapter contains tests meant for managing the run-queue in
+WEBAPP.
+
+Start and stop job scheduling
+-----------------------------
+
+The administrator needs to be able to stop WEBAPP from scheduling any
+new jobs, and later to start it again.
+
+ SCENARIO admin can start and stop WEBAPP job scheduling
+ GIVEN a running WEBAPP
+ WHEN admin makes request GET /1.0/status
+ THEN response has running_queue set to true
+
+ WHEN admin makes request POST /1.0/stop-queue with dummy=value
+ AND admin makes request GET /1.0/status
+ THEN response has running_queue set to false
+
+Further, the state change needs to be persistent across WEBAPP
+instances, so we kill the WEBAPP that's currently running, start a
+new one, and verify that the `running_queue` status is still `false`.
+
+ WHEN WEBAPP is terminated
+ THEN WEBAPP isn't running
+
+ GIVEN a running WEBAPP
+ WHEN admin makes request GET /1.0/status
+ THEN response has running_queue set to false
+
+Start the queue again.
+
+ WHEN admin makes request POST /1.0/start-queue with dummy=value
+ AND admin makes request GET /1.0/status
+ THEN response has running_queue set to true
+
+Finally, clean up.
+
+ FINALLY WEBAPP terminates
+
+
+Read CONFGIT
+------------
+
+We need to be able to get Lorry Controller, specifically WEBAPP, to
+update its configuration and run-queue from CONFGIT using the
+`/1.0/read-configuration` HTTP API request.
+
+First, set up WEBAPP.
+
+ SCENARIO WEBAPP updates its configuration from CONFGIT
+ GIVEN a new git repository in CONFGIT
+ AND WEBAPP uses CONFGIT as its configuration directory
+ AND a running WEBAPP
+
+We'll start with an empty configuration. This is the default state
+when WEBAPP has never read its configuration.
+
+ WHEN admin makes request GET /1.0/list-queue
+ THEN response has queue set to []
+
+Make WEBAPP read an empty configuration. Or rather, a configuration
+that does not match any existing `.lorry` files.
+
+ GIVEN an empty lorry-controller.conf in CONFGIT
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ AND admin makes request GET /1.0/list-queue
+ THEN response has queue set to []
+
+Add a `.lorry` file, with one Lorry spec, and make sure reading the
+configuration makes `/list-queue` report it.
+
+ GIVEN Lorry file CONFGIT/foo.lorry with {"foo":{"type":"git","url":"git://foo"}}
+ AND lorry-controller.conf in CONFGIT adds lorries *.lorry using prefix upstream
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ AND admin makes request GET /1.0/list-queue
+ THEN response has queue set to ["upstream/foo"]
+
+If the `.lorry` file is removed, the queue should again become empty.
+
+ GIVEN file CONFGIT/foo.lorry is removed
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ AND admin makes request GET /1.0/list-queue
+ THEN response has queue set to []
+
+Add two Lorries, then make sure they can be reordered at will.
+
+ GIVEN Lorry file CONFGIT/foo.lorry with {"foo":{"type":"git","url":"git://foo"}}
+ AND Lorry file CONFGIT/bar.lorry with {"bar":{"type":"git","url":"git://bar"}}
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ AND admin makes request GET /1.0/list-queue
+ THEN response has queue set to ["upstream/bar", "upstream/foo"]
+
+ WHEN admin makes request POST /1.0/move-to-top with path=upstream/foo
+ AND admin makes request GET /1.0/list-queue
+ THEN response has queue set to ["upstream/foo", "upstream/bar"]
+
+ WHEN admin makes request POST /1.0/move-to-bottom with path=upstream/foo
+ AND admin makes request GET /1.0/list-queue
+ THEN response has queue set to ["upstream/bar", "upstream/foo"]
+
+Finally, clean up.
+
+ FINALLY WEBAPP terminates
diff --git a/yarns.webapp/040-running-jobs.yarn b/yarns.webapp/040-running-jobs.yarn
new file mode 100644
index 0000000..1ffe79d
--- /dev/null
+++ b/yarns.webapp/040-running-jobs.yarn
@@ -0,0 +1,260 @@
+Running jobs
+============
+
+This chapter contains tests that verify that WEBAPP schedules jobs,
+accepts job output, and lets the admin kill running jobs.
+
+Run a job successfully
+----------------------
+
+To start with, when the run-queue is empty, nothing should be scheduled.
+
+ SCENARIO run a job
+ GIVEN a new git repository in CONFGIT
+ AND an empty lorry-controller.conf in CONFGIT
+ AND lorry-controller.conf in CONFGIT adds lorries *.lorry using prefix upstream
+ AND WEBAPP uses CONFGIT as its configuration directory
+ AND a running WEBAPP
+
+We stop the queue first.
+
+ WHEN admin makes request POST /1.0/stop-queue with dummy=value
+
+Then make sure we don't get a job when we request one.
+
+ WHEN admin makes request POST /1.0/give-me-job with host=testhost&pid=123
+ THEN response has job_id set to null
+
+ WHEN admin makes request GET /1.0/list-running-jobs
+ THEN response has running_jobs set to []
+
+Add a Lorry spec to the run-queue, and request a job. We still
+shouldn't get a job, since the queue isn't set to run yet.
+
+ GIVEN Lorry file CONFGIT/foo.lorry with {"foo":{"type":"git","url":"git://foo"}}
+
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ AND admin makes request POST /1.0/give-me-job with host=testhost&pid=123
+ THEN response has job_id set to null
+
+Enable the queue, and off we go.
+
+ WHEN admin makes request POST /1.0/start-queue with dummy=value
+ AND admin makes request POST /1.0/give-me-job with host=testhost&pid=123
+ THEN response has job_id set to 1
+ AND response has path set to "upstream/foo"
+
+ WHEN admin makes request GET /1.0/lorry/upstream/foo
+ THEN response has running_job set to 1
+
+ WHEN admin makes request GET /1.0/list-running-jobs
+ THEN response has running_jobs set to [1]
+
+Requesting another job should now again return null.
+
+ WHEN admin makes request POST /1.0/give-me-job with host=testhost&pid=123
+ THEN response has job_id set to null
+
+Inform WEBAPP the job is finished.
+
+ WHEN MINION makes request POST /1.0/job-update with job_id=1&exit=0
+ THEN response has kill_job set to false
+ WHEN admin makes request GET /1.0/lorry/upstream/foo
+ THEN response has running_job set to null
+ WHEN admin makes request GET /1.0/list-running-jobs
+ THEN response has running_jobs set to []
+
+Cleanup.
+
+ FINALLY WEBAPP terminates
+
+
+Limit number of jobs running at the same time
+---------------------------------------------
+
+WEBAPP can be told to limit the number of jobs running at the same
+time.
+
+Set things up. Note that we have two local Lorry files, so that we
+could, in principle, run two jobs at the same time.
+
+ SCENARIO limit concurrent jobs
+ GIVEN a new git repository in CONFGIT
+ AND an empty lorry-controller.conf in CONFGIT
+ AND lorry-controller.conf in CONFGIT adds lorries *.lorry using prefix upstream
+ AND Lorry file CONFGIT/foo.lorry with {"foo":{"type":"git","url":"git://foo"}}
+ AND Lorry file CONFGIT/bar.lorry with {"bar":{"type":"git","url":"git://bar"}}
+ AND WEBAPP uses CONFGIT as its configuration directory
+ AND a running WEBAPP
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+
+Check the current value of the `max_jobs` setting.
+
+ WHEN admin makes request GET /1.0/get-max-jobs
+ THEN response has max_jobs set to null
+
+Set the limit to 1.
+
+ WHEN admin makes request POST /1.0/set-max-jobs with max_jobs=1
+ THEN response has max_jobs set to 1
+ WHEN admin makes request GET /1.0/get-max-jobs
+ THEN response has max_jobs set to 1
+
+Get a job. This should succeed.
+
+ WHEN MINION makes request POST /1.0/give-me-job with host=testhost&pid=1
+ THEN response has job_id set to 1
+
+Get a second job. This should not succeed.
+
+ WHEN MINION makes request POST /1.0/give-me-job with host=testhost&pid=2
+ THEN response has job_id set to null
+
+Finish the first job, then get a new job. This should succeed.
+
+    WHEN MINION makes request POST /1.0/job-update with job_id=1&exit=0
+    AND MINION makes request POST /1.0/give-me-job with host=testhost&pid=2
+    THEN response has job_id set to 2
+
+Cleanup.
+
+    FINALLY WEBAPP terminates
+
+Stop job in the middle
+----------------------
+
+We need to be able to stop jobs while they're running as well. We
+start by setting up everything so that a job is running, the same way
+we did for the successful job scenario.
+
+ SCENARIO stop a job while it's running
+ GIVEN a new git repository in CONFGIT
+ AND an empty lorry-controller.conf in CONFGIT
+ AND lorry-controller.conf in CONFGIT adds lorries *.lorry using prefix upstream
+ AND WEBAPP uses CONFGIT as its configuration directory
+ AND a running WEBAPP
+ AND Lorry file CONFGIT/foo.lorry with {"foo":{"type":"git","url":"git://foo"}}
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ AND admin makes request POST /1.0/start-queue with dummy=value
+ AND admin makes request POST /1.0/give-me-job with host=testhost&pid=123
+ THEN response has job_id set to 1
+ AND response has path set to "upstream/foo"
+
+Admin will now ask WEBAPP to kill the job. This merely sets a field
+in STATEDB; the job is actually killed later, via MINION.
+
+ WHEN admin makes request POST /1.0/stop-job with job_id=1
+ AND admin makes request GET /1.0/lorry/upstream/foo
+ THEN response has kill_job set to true
+
+Now, when MINION updates the job, WEBAPP will tell it to kill it.
+MINION will do so, and then update the job again.
+
+ WHEN MINION makes request POST /1.0/job-update with job_id=1&exit=no
+ THEN response has kill_job set to true
+ WHEN MINION makes request POST /1.0/job-update with job_id=1&exit=1
+
+Admin will now see that the job has, indeed, been killed.
+
+ WHEN admin makes request GET /1.0/lorry/upstream/foo
+ THEN response has running_job set to null
+
+ WHEN admin makes request GET /1.0/list-running-jobs
+ THEN response has running_jobs set to []
+
+Cleanup.
+
+ FINALLY WEBAPP terminates
+
+Stop a job that runs too long
+-----------------------------
+
+Sometimes a job gets "stuck" and should be killed. The
+`lorry-controller.conf` file has an optional `lorry-timeout` field to
+set the timeout; WEBAPP will tell MINION to kill a job when it has
+been running longer than that.
+
+Some setup. Set the `lorry-timeout` to a known value. It doesn't
+matter what it is, since we'll be telling WEBAPP to fake its sense of
+time so that the test suite is not timing sensitive. We wouldn't want
+the test suite to fail when running on slow devices.
+
+ SCENARIO stop stuck job
+ GIVEN a new git repository in CONFGIT
+ AND an empty lorry-controller.conf in CONFGIT
+ AND lorry-controller.conf in CONFGIT adds lorries *.lorry using prefix upstream
+ AND lorry-controller.conf in CONFGIT has lorry-timeout set to 1 for everything
+ AND Lorry file CONFGIT/foo.lorry with {"foo":{"type":"git","url":"git://foo"}}
+ AND WEBAPP uses CONFGIT as its configuration directory
+ AND a running WEBAPP
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+
+Pretend it is the start of time.
+
+ WHEN admin makes request POST /1.0/pretend-time with now=0
+ WHEN admin makes request GET /1.0/status
+ THEN response has timestamp set to "1970-01-01 00:00:00 UTC"
+
+Start the job.
+
+ WHEN admin makes request POST /1.0/give-me-job with host=testhost&pid=123
+ THEN response has job_id set to 1
+
+Check that the job info contains a start time.
+
+ WHEN admin makes request GET /1.0/job/1
+ THEN response has job_started set
+
+Pretend it is now much later, or at least later than the timeout specified.
+
+ WHEN admin makes request POST /1.0/pretend-time with now=2
+
+Pretend to be a MINION that reports an update on the job. WEBAPP
+should now be telling us to kill the job.
+
+ WHEN MINION makes request POST /1.0/job-update with job_id=1&exit=no
+ THEN response has kill_job set to true
+
+Cleanup.
+
+ FINALLY WEBAPP terminates
+
+Remove a terminated job
+-----------------------
+
+WEBAPP doesn't remove jobs automatically; it needs to be told to
+remove them.
+
+ SCENARIO remove job
+
+Setup.
+
+ GIVEN a new git repository in CONFGIT
+ AND an empty lorry-controller.conf in CONFGIT
+ AND lorry-controller.conf in CONFGIT adds lorries *.lorry using prefix upstream
+ AND WEBAPP uses CONFGIT as its configuration directory
+ AND a running WEBAPP
+ GIVEN Lorry file CONFGIT/foo.lorry with {"foo":{"type":"git","url":"git://foo"}}
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+
+Start job 1.
+
+ WHEN admin makes request POST /1.0/give-me-job with host=testhost&pid=123
+ THEN response has job_id set to 1
+
+Try to remove job 1 while it is running. This should fail.
+
+ WHEN admin makes request POST /1.0/remove-job with job_id=1
+ THEN response has reason set to "still running"
+
+Finish the job.
+
+ WHEN MINION makes request POST /1.0/job-update with job_id=1&exit=0
+ WHEN admin makes request GET /1.0/list-jobs
+ THEN response has job_ids set to [1]
+
+Remove it.
+
+ WHEN admin makes request POST /1.0/remove-job with job_id=1
+ AND admin makes request GET /1.0/list-jobs
+ THEN response has job_ids set to []
+
+Cleanup.
+
+ FINALLY WEBAPP terminates
diff --git a/yarns.webapp/050-troves.yarn b/yarns.webapp/050-troves.yarn
new file mode 100644
index 0000000..8737306
--- /dev/null
+++ b/yarns.webapp/050-troves.yarn
@@ -0,0 +1,76 @@
+Handling of remote Troves
+=========================
+
+This chapter has tests for WEBAPP's handling of remote Troves: getting
+the listing of repositories to mirror from the Trove, and creating
+entries in the run-queue for them.
+
+
+Reading a remote Trove specification from CONFGIT
+-------------------------------------------------
+
+When there's a `troves` section in the Lorry Controller configuration
+file, WEBAPP should include that Trove in the list of Troves it
+reports.
+
+ SCENARIO a Trove is listed in CONFGIT
+ GIVEN a new git repository in CONFGIT
+ AND an empty lorry-controller.conf in CONFGIT
+ AND WEBAPP uses CONFGIT as its configuration directory
+
+Note that we fake the remote Trove, using static files, to keep the
+test setup simple.
+
+ AND WEBAPP fakes Trove example-trove
+ AND a running WEBAPP
+
+Initially WEBAPP should report no known Troves, and have an empty
+run-queue.
+
+ WHEN admin makes request GET /1.0/status
+ THEN response has run_queue set to []
+ AND response has troves set to []
+
+Let's add a `troves` section to the configuration file. After WEBAPP
+reads that, it should list the added Trove in status.
+
+ GIVEN lorry-controller.conf in CONFGIT adds trove example-trove
+ AND lorry-controller.conf in CONFGIT has prefixmap example:example for example-trove
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ AND admin makes request GET /1.0/status
+ THEN response has troves item 0 field trovehost set to "example-trove"
+
+However, this should not have made WEBAPP fetch a new list of
+repositories from the remote Trove.
+
+ AND response has run_queue set to []
+
+If we tell WEBAPP to fetch the list, we should see repositories.
+
+ GIVEN remote Trove example-trove has repository example/foo
+ WHEN admin makes request POST /1.0/ls-troves with dummy=value
+ AND admin makes request GET /1.0/list-queue
+ THEN response has queue set to ["example/foo"]
+
+If we re-read the configuration, without any changes to it or to the
+fake Trove's repository list, the same Troves and Lorry specs should
+remain in STATEDB. (It wasn't always thus, due to a bug.)
+
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ AND admin makes request GET /1.0/status
+ THEN response has troves item 0 field trovehost set to "example-trove"
+ WHEN admin makes request GET /1.0/list-queue
+ THEN response has queue set to ["example/foo"]
+
+If the Trove deletes a repository, we should still keep the local
+mirror, to avoid disasters. However, its entry will be removed from
+STATEDB, and it won't be lorried anymore.
+
+ GIVEN remote Trove example-trove doesn't have repository example/foo
+ WHEN admin makes request POST /1.0/ls-troves with dummy=value
+ AND admin makes request GET /1.0/list-queue
+ THEN response has queue set to []
+
+Cleanup.
+
+ FINALLY WEBAPP terminates
diff --git a/yarns.webapp/060-validation.yarn b/yarns.webapp/060-validation.yarn
new file mode 100644
index 0000000..989c80b
--- /dev/null
+++ b/yarns.webapp/060-validation.yarn
@@ -0,0 +1,190 @@
+Validation of CONFGIT
+=====================
+
+The CONFGIT repository contains two types of files we should validate:
+the `lorry-controller.conf` file, and the local Lorry files (specified
+by the former file in `lorries` sections).
+
+Validate `lorry-controller.conf`
+--------------------------------
+
+We'll start by validating the `lorry-controller.conf` file. There are
+several aspects here that need to be tested:
+
+* JSON syntax correctness: if the file doesn't parse as JSON, the
+ WEBAPP should cope and shouldn't change STATEDB in any way.
+* Semantic correctness: the file should contain a list of dicts, and
+ each dict should have the right fields with the right kind of
+ values. See the `README` for details. Other fields are also allowed,
+ though ignored. Again, if there's an error, WEBAPP should cope, and
+ probably shouldn't update STATEDB if there are any problems.
+
+The approach for testing this is to set up an empty STATEDB, then get
+WEBAPP to read a `lorry-controller.conf` with various kinds of
+brokenness, and after each read verify that STATEDB is still empty.
+This doesn't test that existing data is left alone when STATEDB
+isn't empty, but it seems reasonable to assume that an update happens
+the same way regardless of the previous contents of STATEDB, given
+how SQL transactions work.
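+
+For orientation, a well-formed `lorry-controller.conf` is a JSON list
+of section dicts; a minimal example, mirroring the `lorries` sections
+the IMPLEMENTS steps below construct, would be
+`[{"type": "lorries", "interval": "0s", "prefix": "upstream", "globs": ["*.lorry"]}]`.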
+
+In summary:
+
+* Start WEBAPP without a STATEDB, and have it read its config. Verify
+ STATEDB is empty.
+* Add a `lorry-controller.conf` that is broken in some specific way.
+* Tell WEBAPP to re-read its config.
+* Verify that WEBAPP gives an error message.
+* Verify that STATEDB is still empty.
+
+Repeat this for each type of brokenness we want to ensure WEBAPP
+validates for.
+
+ SCENARIO validate lorry-controller.conf
+ GIVEN a new git repository in CONFGIT
+ AND WEBAPP uses CONFGIT as its configuration directory
+ AND a running WEBAPP
+
+First of all, have WEBAPP read CONFGIT. This should succeed even if
+the `lorry-controller.conf` file doesn't actually exist.
+
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "Configuration has been updated"
+ AND STATEDB is empty
+
+Add an empty configuration file. This is different from a file
+containing an empty JSON list. It should be treated as an error.
+
+ GIVEN a lorry-controller.conf in CONFGIT containing ""
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "ERROR"
+ AND STATEDB is empty
+
+Add a syntactically invalid JSON file.
+
+ GIVEN a lorry-controller.conf in CONFGIT containing "blah blah blah"
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "ERROR"
+ AND STATEDB is empty
+
+Replace the bad JSON file with one that has an unknown section (no
+`type` field). Please excuse the non-escaping of double quotes: it's
+an artifact of how yarn steps are implemented and is OK.
+
+ GIVEN a lorry-controller.conf in CONFGIT containing "[{"foo": "bar"}]"
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "ERROR"
+ AND STATEDB is empty
+
+What about a section that has a `type` field, but one set to a
+nonsensical value?
+
+ GIVEN a lorry-controller.conf in CONFGIT containing "[{"type": "BACKUPS!"}]"
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "ERROR"
+ AND STATEDB is empty
+
+Now we're getting to real sections. A `troves` section must have
+`trovehost`, `interval`, `ls-interval`, and `prefixmap` set, and may
+optionally have `ignore` set. The `trovehost` field can't really be
+checked, and `interval` and `ls-interval` don't need much checking: if
+they don't parse as sensible intervals, Lorry Controller will just use
+a default value.
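+
+For reference, the `troves` sections constructed by the IMPLEMENTS
+steps below look like
+`{"type": "troves", "trovehost": "example-trove", "protocol": "ssh", "interval": "0s", "ls-interval": "0s", "prefixmap": {}, "ignore": []}`,
+with illustrative values.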
+
+`prefixmap`, however, can have a reasonable check: it shouldn't map
+anything to be under the Trove ID of the local Trove, otherwise Lorry
+won't be able to push the repositories. At this time, though, we do
+not have a reasonable way to get the Trove ID of the local Trove, so
+we're skipping that test for now. (FIXME: fix this lack of testing.)
+
+Clean up at the end.
+
+ FINALLY WEBAPP terminates
+
+
+Validate local Lorry files
+--------------------------
+
+Lorry files (`.lorry`) are consumed by the Lorry program itself, but
+also by Lorry Controller. In fact, the ones that are in CONFGIT are
+only consumed by Lorry Controller: it reads them in, parses them,
+extracts the relevant information, puts that into STATEDB, and then
+generates a whole new (temporary) file for each Lorry run.
+
+Lorry Controller doesn't validate the Lorry files much, only enough
+that it can extract each separate Lorry specification and feed the
+specifications to Lorry one by one. In other words:
+
+* The `.lorry` file must be valid JSON.
+* It must be a dict.
+* Each key must map to another dict.
+* Each inner dict must have a key `type`, which maps to a string.
+
+Everything else is left for Lorry itself. Lorry Controller only needs
+to handle Lorry not working, and it already does that.
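+
+By these rules, the smallest acceptable `.lorry` file is something
+like `{"foo": {"type": "git", "url": "git://example.com/foo"}}`; the
+`url` field is there for Lorry itself, since Lorry Controller only
+checks `type`.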
+
+Firstly, some setup.
+
+ SCENARIO validate .lorry files
+ GIVEN a new git repository in CONFGIT
+ AND an empty lorry-controller.conf in CONFGIT
+ AND lorry-controller.conf in CONFGIT adds lorries *.lorry using prefix upstream
+ AND WEBAPP uses CONFGIT as its configuration directory
+ AND a running WEBAPP
+
+Make sure WEBAPP copes when there are no `.lorry` files at all.
+
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "has been updated"
+ AND STATEDB is empty
+ WHEN admin makes request GET /1.0/list-queue
+ THEN response has queue set to []
+
+Add a `.lorry` file that contains broken JSON.
+
+ GIVEN Lorry file CONFGIT/notjson.lorry with THIS IS NOT JSON
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "has been updated"
+ AND STATEDB is empty
+ WHEN admin makes request GET /1.0/list-queue
+ THEN response has queue set to []
+
+Add a `.lorry` file that is valid JSON, but is not a dict.
+
+ GIVEN Lorry file CONFGIT/notadict.lorry with [1,2,3]
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "has been updated"
+ AND STATEDB is empty
+ WHEN admin makes request GET /1.0/list-queue
+ THEN response has queue set to []
+
+Add a `.lorry` that is a dict, but doesn't map keys to dicts.
+
+ GIVEN Lorry file CONFGIT/notadictofdicts.lorry with { "foo": 1 }
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "has been updated"
+ AND STATEDB is empty
+ WHEN admin makes request GET /1.0/list-queue
+ THEN response has queue set to []
+
+Add a `.lorry` whose inner dict does not have a `type` field.
+
+ GIVEN Lorry file CONFGIT/notype.lorry with { "foo": { "bar": "yo" }}
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "has been updated"
+ AND STATEDB is empty
+ WHEN admin makes request GET /1.0/list-queue
+ THEN response has queue set to []
+
+Add a `.lorry` that is A-OK. This should work even when there are some
+broken ones too.
+
+ GIVEN Lorry file CONFGIT/a-ok.lorry with { "foo": { "type": "git", "url": "git://example.com/foo" }}
+ WHEN admin makes request POST /1.0/read-configuration with dummy=value
+ THEN response matches "has been updated"
+ WHEN admin makes request GET /1.0/list-queue
+ THEN response has queue set to ["upstream/foo"]
+
+Clean up at the end.
+
+ FINALLY WEBAPP terminates
diff --git a/yarns.webapp/900-implementations.yarn b/yarns.webapp/900-implementations.yarn
new file mode 100644
index 0000000..4f87be9
--- /dev/null
+++ b/yarns.webapp/900-implementations.yarn
@@ -0,0 +1,484 @@
+Implementations
+===============
+
+This chapter includes IMPLEMENTS sections for the various steps used
+in scenarios.
+
+Managing a WEBAPP instance
+--------------------------
+
+We're testing a web application (conveniently named WEBAPP, though
+the executable is `lorry-controller-webapp`), so we need to be able to
+start it and stop it in scenarios. We start it as a background
+process, and keep its PID in `$DATADIR/webapp.pid`. When it's time to
+kill it, we kill the process with the PID in that file. This is not
+perfect, though it's good enough for our purposes. It doesn't handle
+running multiple instances at the same time, which we don't need, and
+doesn't handle the case of the process dying and the kernel re-using
+the PID for something else, which is quite unlikely.
+
+Start an instance of the WEBAPP, using a random port. Record the PID
+and the port. Listen only on localhost. We use `start-stop-daemon` to
+start the process, so that it can keep running in the background,
+but the shell doesn't wait for it to terminate. This way, WEBAPP will
+be running until it crashes or is explicitly killed.
+
+ IMPLEMENTS GIVEN a running WEBAPP
+ rm -f "$DATADIR/webapp.pid"
+ rm -f "$DATADIR/webapp.port"
+ mkfifo "$DATADIR/webapp.port"
+
+ add_to_config_file "$DATADIR/webapp.conf" \
+ statedb "$DATADIR/webapp.db"
+ add_to_config_file "$DATADIR/webapp.conf" \
+ status-html "$DATADIR/lc-status.html"
+ add_to_config_file "$DATADIR/webapp.conf" \
+ log "$DATADIR/webapp.log"
+ add_to_config_file "$DATADIR/webapp.conf" \
+ log-level debug
+ add_to_config_file "$DATADIR/webapp.conf" \
+ debug-host 127.0.0.1
+ add_to_config_file "$DATADIR/webapp.conf" \
+ debug-port-file "$DATADIR/webapp.port"
+ add_to_config_file "$DATADIR/webapp.conf" \
+ static-files "$SRCDIR/static"
+ add_to_config_file "$DATADIR/webapp.conf" \
+ templates "$SRCDIR/templates"
+ add_to_config_file "$DATADIR/webapp.conf" \
+ debug-real-confgit no
+
+ start-stop-daemon -S -x "$SRCDIR/lorry-controller-webapp" \
+ -b -p "$DATADIR/webapp.pid" -m --verbose \
+ -- \
+ --config "$DATADIR/webapp.conf"
+
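+    # WEBAPP writes its chosen port to the fifo; reading from the
+    # fifo blocks until that happens. Replace the fifo with a
+    # regular file holding the same port, so later steps can read
+    # it without blocking.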
+ port=$(cat "$DATADIR/webapp.port")
+ rm -f "$DATADIR/webapp.port"
+ echo "$port" >"$DATADIR/webapp.port"
+
+ # Wait for the WEBAPP to actually be ready, i.e., that it's
+ # listening on its assigned port.
+ "$SRCDIR/test-wait-for-port" 127.0.0.1 "$port"
+
+Kill the running WEBAPP, using the recorded PID. We need to do this
+both as a WHEN and a FINALLY step.
+
+ IMPLEMENTS WHEN WEBAPP is terminated
+ kill_daemon_using_pid_file "$DATADIR/webapp.pid"
+
+ IMPLEMENTS FINALLY WEBAPP terminates
+ kill_daemon_using_pid_file "$DATADIR/webapp.pid"
+
+Also test that WEBAPP isn't running.
+
+ IMPLEMENTS THEN WEBAPP isn't running
+ pid=$(head -n1 "$DATADIR/webapp.pid")
+ if kill -0 "$pid"
+ then
+ echo "process $pid is still running, but shouldn't be" 1>&2
+ exit 1
+ fi
+
+Managing Lorry Controller configuration
+---------------------------------------
+
+We need to be able to create, and change, the `lorry-controller.conf`
+file, and other files, in CONFGIT. First of all, we need to create
+CONFGIT.
+
+ IMPLEMENTS GIVEN a new git repository in (\S+)
+ git init "$DATADIR/$MATCH_1"
+
+Then we need to create an empty `lorry-controller.conf` file there.
+This is not just an empty file: it must be a JSON file that contains
+an empty list.
+
+ IMPLEMENTS GIVEN an empty lorry-controller.conf in (\S+)
+ printf '[]\n' > "$DATADIR/$MATCH_1/lorry-controller.conf"
+
+Set the contents of `lorry-controller.conf` from a textual form.
+
+ IMPLEMENTS GIVEN a lorry-controller.conf in (\S+) containing "(.*)"$
+ printf '%s\n' "$MATCH_2" > "$DATADIR/$MATCH_1/lorry-controller.conf"
+
+Add a `.lorry` file to be used by a `lorry-controller.conf`.
+
+ IMPLEMENTS GIVEN Lorry file (\S+) with (.*)
+ printf '%s\n' "$MATCH_2" > "$DATADIR/$MATCH_1"
+
+Remove a file. This is actually quite generic, but at the time of
+writing it is only used for `.lorry` files.
+
+ IMPLEMENTS GIVEN file (\S+) is removed
+ rm "$DATADIR/$MATCH_1"
+
+Add a `lorries` section to a `lorry-controller.conf`. This hardcodes
+most of the configuration.
+
+ IMPLEMENTS GIVEN (\S+) in (\S+) adds lorries (\S+) using prefix (\S+)
+ python -c '
+ import os
+ import json
+
+ DATADIR = os.environ["DATADIR"]
+ MATCH_1 = os.environ["MATCH_1"]
+ MATCH_2 = os.environ["MATCH_2"]
+ MATCH_3 = os.environ["MATCH_3"]
+ MATCH_4 = os.environ["MATCH_4"]
+
+ new = {
+ "type": "lorries",
+ "interval": "0s",
+ "prefix": MATCH_4,
+ "globs": [
+ MATCH_3,
+ ],
+ }
+
+ filename = os.path.join(DATADIR, MATCH_2, MATCH_1)
+ with open(filename, "r") as f:
+ obj = json.load(f)
+ obj.append(new)
+ with open(filename, "w") as f:
+ json.dump(obj, f)
+ '
+
+Add a `troves` section to `lorry-controller.conf`. Again, we hardcode
+most of the configuration.
+
+ IMPLEMENTS GIVEN (\S+) in (\S+) adds trove (\S+)
+ python -c '
+ import os
+ import json
+
+ DATADIR = os.environ["DATADIR"]
+ MATCH_1 = os.environ["MATCH_1"]
+ MATCH_2 = os.environ["MATCH_2"]
+ MATCH_3 = os.environ["MATCH_3"]
+
+ new = {
+ "type": "troves",
+ "trovehost": MATCH_3,
+ "protocol": "ssh",
+ "interval": "0s",
+ "ls-interval": "0s",
+ "prefixmap": {},
+ "ignore": [],
+ }
+
+ filename = os.path.join(DATADIR, MATCH_2, MATCH_1)
+ with open(filename, "r") as f:
+ obj = json.load(f)
+ obj.append(new)
+ with open(filename, "w") as f:
+ json.dump(obj, f, indent=4)
+ '
+
+Set a specific field for all sections in a `lorry-controller.conf`
+file.
+
+ IMPLEMENTS GIVEN (\S+) in (\S+) has (\S+) set to (.+) for everything
+ python -c '
+ import os
+ import json
+
+ DATADIR = os.environ["DATADIR"]
+ MATCH_1 = os.environ["MATCH_1"]
+ MATCH_2 = os.environ["MATCH_2"]
+ MATCH_3 = os.environ["MATCH_3"]
+ MATCH_4 = os.environ["MATCH_4"]
+
+ filename = os.path.join(DATADIR, MATCH_2, MATCH_1)
+
+ with open(filename, "r") as f:
+ obj = json.load(f)
+
+ for section in obj:
+ section[MATCH_3] = json.loads(MATCH_4)
+
+ with open(filename, "w") as f:
+ json.dump(obj, f, indent=4)
+ '
+
+Set a specific field for a `troves` section.
+
+ IMPLEMENTS GIVEN (\S+) in (\S+) sets (\S+) to (\S+) for trove (\S+)
+ python -c '
+ import os
+ import json
+
+ DATADIR = os.environ["DATADIR"]
+ MATCH_1 = os.environ["MATCH_1"]
+ MATCH_2 = os.environ["MATCH_2"]
+ MATCH_3 = os.environ["MATCH_3"]
+    MATCH_4 = os.environ["MATCH_4"]
+    MATCH_5 = os.environ["MATCH_5"]
+
+ filename = os.path.join(DATADIR, MATCH_2, MATCH_1)
+
+ with open(filename, "r") as f:
+ obj = json.load(f)
+
+ for section in obj:
+ if section["type"] in ["trove", "troves"]:
+ if section["trovehost"] == MATCH_5:
+ section[MATCH_3] = json.loads(MATCH_4)
+
+ with open(filename, "w") as f:
+ json.dump(obj, f, indent=4)
+ '
+
+Set the prefixmap for a Trove in a Lorry Controller configuration
+file. Note that the Trove must already be in the configuration file.
+
+ IMPLEMENTS GIVEN (\S+) in (\S+) has prefixmap (\S+):(\S+) for (\S+)
+ python -c '
+ import os
+ import json
+
+ DATADIR = os.environ["DATADIR"]
+ MATCH_1 = os.environ["MATCH_1"]
+ MATCH_2 = os.environ["MATCH_2"]
+ MATCH_3 = os.environ["MATCH_3"]
+ MATCH_4 = os.environ["MATCH_4"]
+ MATCH_5 = os.environ["MATCH_5"]
+
+ filename = os.path.join(DATADIR, MATCH_2, MATCH_1)
+ with open(filename, "r") as f:
+ objs = json.load(f)
+
+ for obj in objs:
+ if obj["type"] == "troves" and obj["trovehost"] == MATCH_5:
+ obj["prefixmap"][MATCH_3] = MATCH_4
+
+ with open(filename, "w") as f:
+ json.dump(objs, f, indent=4)
+ '
+
+We need to be able to tell WEBAPP, when it runs, where the
+configuration directory is.
+
+ IMPLEMENTS GIVEN WEBAPP uses (\S+) as its configuration directory
+ add_to_config_file "$DATADIR/webapp.conf" \
+ configuration-directory "$DATADIR/$MATCH_1"
+
+Make WEBAPP fake access to a Trove using a static file.
+
+ IMPLEMENTS GIVEN WEBAPP fakes Trove (\S+)
+ add_to_config_file "$DATADIR/webapp.conf" \
+ debug-fake-trove "$MATCH_1=$DATADIR/$MATCH_1.trove"
+
+Control the ls listing of a remote Trove.
+
+ IMPLEMENTS GIVEN remote Trove (\S+) has repository (\S+)
+ filename="$DATADIR/$MATCH_1.trove"
+ if [ ! -e "$filename" ]
+ then
+ echo "{}" > "$filename"
+ fi
+ cat "$filename"
+ python -c '
+ import json, os, sys
+ MATCH_2 = os.environ["MATCH_2"]
+ filename = sys.argv[1]
+ with open(filename) as f:
+ data = json.load(f)
+ data["ls-output"] = data.get("ls-output", []) + [MATCH_2]
+ with open(filename, "w") as f:
+ json.dump(data, f)
+ ' "$filename"
+
+Remove a repository from the fake remote Trove.
+
+ IMPLEMENTS GIVEN remote Trove (\S+) doesn't have repository (\S+)
+ filename="$DATADIR/$MATCH_1.trove"
+ if [ ! -e "$filename" ]
+ then
+ echo "{}" > "$filename"
+ fi
+ cat "$filename"
+ python -c '
+ import json, os, sys
+ MATCH_2 = os.environ["MATCH_2"]
+ filename = sys.argv[1]
+ with open(filename) as f:
+ data = json.load(f)
+ paths = data.get("ls-output", [])
+ if MATCH_2 in paths:
+ paths.remove(MATCH_2)
+ data["ls-output"] = paths
+ with open(filename, "w") as f:
+ json.dump(data, f)
+ ' "$filename"
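+
+After these steps, the fake Trove's file is a small JSON document;
+for example, after adding repository `example/foo` it contains
+`{"ls-output": ["example/foo"]}`.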
+
+Making and analysing HTTP requests
+---------------------------------
+
+HTTP GET and POST requests are simple: we make the request, sending a
+body if given, and capture the response (HTTP status code, response
+headers, response body).
+
+We make the request using the `curl` command line program, which makes
+capturing the response quite convenient.
+
+HTTP requests can be made by various entities. This does not affect
+test code, but allows for nicer scenario steps.
+
+We check that the HTTP status indicates success, so that every
+scenario doesn't need to check that separately.
+
+A GET request:
+
+ IMPLEMENTS WHEN admin makes request GET (\S+)
+ > "$DATADIR/response.headers"
+ > "$DATADIR/response.body"
+ port=$(cat "$DATADIR/webapp.port")
+
+ # The timestamp is needed by "THEN static status page got updated"
+ touch "$DATADIR/request.timestamp"
+
+ curl \
+ -D "$DATADIR/response.headers" \
+ -o "$DATADIR/response.body" \
+ --silent --show-error \
+ "http://127.0.0.1:$port$MATCH_1"
+ cat "$DATADIR/response.headers"
+ cat "$DATADIR/response.body"
+ head -n1 "$DATADIR/response.headers" | grep '^HTTP/1\.[01] 200 '
+
+A POST request always has a body. The body consists of `foo=bar`
+pairs, separated by `&` signs.
+
+ IMPLEMENTS WHEN (\S+) makes request POST (\S+) with (.*)
+ > "$DATADIR/response.headers"
+ > "$DATADIR/response.body"
+ port=$(cat "$DATADIR/webapp.port")
+
+ # The timestamp is needed by "THEN static status page got updated"
+ touch "$DATADIR/request.timestamp"
+
+ curl \
+ -D "$DATADIR/response.headers" \
+ -o "$DATADIR/response.body" \
+ --silent --show-error \
+ --request POST \
+ --data "$MATCH_3" \
+ "http://127.0.0.1:$port$MATCH_2"
+ cat "$DATADIR/response.headers"
+ cat "$DATADIR/response.body"
+ head -n1 "$DATADIR/response.headers" | grep '^HTTP/1\.[01] 200 '
+
+Check the Content-Type of the response has the desired type.
+
+ IMPLEMENTS THEN response is (\S+)
+ cat "$DATADIR/response.headers"
+ grep -i "^Content-Type: $MATCH_1" "$DATADIR/response.headers"
+
+A JSON response can then be queried further. The JSON is expected to
+be a dict, so that values are accessed by name from the dict. The
+value is expressed as a JSON value in the step.
+
+ IMPLEMENTS THEN response has (\S+) set to (.+)
+ cat "$DATADIR/response.body"
+ python -c '
+ import json, os, sys
+ data = json.load(sys.stdin)
+ key = os.environ["MATCH_1"]
+ expected = json.loads(os.environ["MATCH_2"])
+ value = data[key]
+ if value != expected:
+ sys.stderr.write(
+ "Key {key} has value {value}, but "
+ "{expected} was expected".format (
+ key=key, value=value, expected=expected))
+ sys.exit(1)
+ ' < "$DATADIR/response.body"
+
+A JSON response may need to be analysed in more depth. Specifically,
+we may need to look at a list of dicts, as below.
+
+ IMPLEMENTS THEN response has (\S+) item (\d+) field (\S+) set to (\S+)
+ cat "$DATADIR/response.body"
+ python -c '
+ import json, os, sys
+ data = json.load(sys.stdin)
+ print "data:", repr(data)
+ items = os.environ["MATCH_1"]
+ print "items:", repr(items)
+ item = int(os.environ["MATCH_2"])
+ print "item:", repr(item)
+ field = os.environ["MATCH_3"]
+ print "field:", repr(field)
+ print "match3:", repr(os.environ["MATCH_4"])
+ expected = json.loads(os.environ["MATCH_4"])
+ print "expected:", repr(expected)
+ print "data[items]:", repr(data[items])
+ print "data[items][item]:", repr(data[items][item])
+ print "data[items][item][field]:", repr(data[items][item][field])
+ value = data[items][item][field]
+ if value != expected:
+ sys.stderr.write(
+ "Item {item} in {items} has field {field} with "
+ "value {value}, but {expected} was expected".format (
+ item=item, items=items, field=field, value=value,
+ expected=expected))
+ sys.exit(1)
+ ' < "$DATADIR/response.body"
+
+In some cases, such as free disk space, we don't care about the actual
+value, but we do care that it is there.
+
+ IMPLEMENTS THEN response has (\S+) set
+ cat "$DATADIR/response.body"
+ python -c '
+ import json, os, sys
+ data = json.load(sys.stdin)
+ key = os.environ["MATCH_1"]
+ if key not in data:
+ sys.stderr.write(
+ "Key {key} is not set, but was expected to be set".format (
+ key=key))
+ sys.exit(1)
+ ' < "$DATADIR/response.body"
+
+Some responses are just plain text, so we match them with a regexp.
+
+ IMPLEMENTS THEN response matches "(.*)"$
+ cat "$DATADIR/response.body"
+ grep "$MATCH_1" "$DATADIR/response.body"
+
+
+Status web page
+---------------
+
+WEBAPP is expected to update a static HTML page whenever the
+`/1.0/status` request is made. We configure WEBAPP to write it to
+`$DATADIR/lc-status.html`. We don't test the contents of the page, but
+we do test that it gets updated. We test for updates by comparing the
+modification time of the file with the time of the request; we know
+the time of the request because the "WHEN ... makes request" steps
+update the modification time of a timestamp file for this purpose.
+
+ IMPLEMENTS THEN static status page got updated
+ # test -nt isn't useful: the timestamps might be identical, and
+ # that's OK on filesystems that only store full-second timestamps.
+ # We generate timestamps in (roughly) ISO 8601 format, with stat,
+ # and those can be compared using simple string comparison.
+
+ status=$(stat -c %y "$DATADIR/lc-status.html")
+ request=$(stat -c %y "$DATADIR/request.timestamp")
+ test "$request" = "$status" || test "$request" '<' "$status"
+
+
+STATEDB
+-------
+
+Check that the STATEDB is empty. This means it should exist, and
+should be initialised, but none of the important tables should have
+any rows in them.
+
+ IMPLEMENTS THEN STATEDB is empty
+ test -s "$DATADIR/webapp.db"
+ sqlite3 "$DATADIR/webapp.db" 'SELECT * FROM troves;' | stdin_is_empty
+ sqlite3 "$DATADIR/webapp.db" 'SELECT * FROM lorries;' | stdin_is_empty
diff --git a/yarns.webapp/yarn.sh b/yarns.webapp/yarn.sh
new file mode 100644
index 0000000..3c617e3
--- /dev/null
+++ b/yarns.webapp/yarn.sh
@@ -0,0 +1,56 @@
+# Copyright (C) 2013 Codethink Limited
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; version 2 of the License.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# =*= License: GPL-2 =*=
+
+# This file is a yarn shell library for testing Lorry Controller.
+
+
+# Kill a daemon given its pid file. Report whether it got killed or not.
+
+kill_daemon_using_pid_file()
+{
+ local pid=$(head -n1 "$1")
+ if kill -9 "$pid"
+ then
+ echo "Killed daemon running as $pid"
+ else
+ echo "Error killing daemon running as pid $pid"
+ fi
+}
+
+
+# Add a configuration item to a cliapp-style configuration file.
+
+add_to_config_file()
+{
+ if [ ! -e "$1" ]
+ then
+ printf '[config]\n' > "$1"
+ fi
+ printf '%s = %s\n' "$2" "$3" >> "$1"
+}
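+
+# For example, after
+#
+#     add_to_config_file foo.conf statedb /tmp/state.db
+#
+# foo.conf contains a "[config]" section header followed by the line
+# "statedb = /tmp/state.db".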
+
+
+# Ensure the standard input is empty. If not, exit with an error.
+
+stdin_is_empty()
+{
+ if grep . > /dev/null
+ then
+ echo "ERROR: stdin was NOT empty" 1>&2
+ exit 1
+ fi
+}