author     Gabriel F. T. Gomes <gabriel@inconstante.net.br>  2019-08-07 09:17:13 -0300
committer  Gabriel F. T. Gomes <gabriel@inconstante.net.br>  2019-08-07 09:17:13 -0300
commit     5732da2af736c40cf693354485446ab4867ecb4d (patch)
tree       76d76cdfa16ca62d20fb109da13895ec64fff110 /doc
parent     9cd22d1df8f0f5b554858471c86faa9f37b8fed4 (diff)
download   bash-completion-5732da2af736c40cf693354485446ab4867ecb4d.tar.gz
New upstream version 2.9 (tag: upstream/2.9)
Diffstat (limited to 'doc')

-rw-r--r--  doc/testing.txt | 507
1 file changed, 43 insertions(+), 464 deletions(-)
diff --git a/doc/testing.txt b/doc/testing.txt
index 2ce7f373..c3a1f00a 100644
--- a/doc/testing.txt
+++ b/doc/testing.txt
@@ -7,23 +7,37 @@
 The bash-completion package contains an automated test suite. Running the
 tests should help verifying that bash-completion works as expected. The
 tests are also very helpful in uncovering software regressions at an early
 stage.
 
-The bash-completion test suite is written on top of the
+The original, "legacy" bash-completion test suite is written on top of the
 http://www.gnu.org/software/dejagnu/[DejaGnu] testing framework. DejaGnu is
 written in http://expect.nist.gov[Expect], which in turn uses
 http://tcl.sourceforge.net[Tcl] -- Tool command language.
 
+Most of the test framework has been ported over to use
+https://pytest.org/[pytest] and https://pexpect.readthedocs.io/[pexpect].
+Eventually, all of it should be ported.
+
 Coding Style Guide
 ------------------
 
-The bash-completion test suite tries to adhere to this
+For the Python part, all of it is formatted using
+https://github.com/ambv/black[Black], and we also run
+http://flake8.pycqa.org/[Flake8] on it.
+
+The legacy test suite tries to adhere to this
 http://wiki.tcl.tk/708[Tcl Style Guide].
 
 Installing dependencies
 -----------------------
 
-Installing dependencies should be easy using your local package manager.
+Installing dependencies should be easy using your local package manager or
+`pip`. Python 3.4 or newer is required, and the rest of the Python package
+dependencies are specified in the `test/requirements.txt` file. If using `pip`,
+this file can be fed directly to it, e.g. like:
+------------------------------------
+pip install -r test/requirements.txt
+------------------------------------
 
 Debian/Ubuntu
@@ -31,29 +45,31 @@ Debian/Ubuntu
 
 On Debian/Ubuntu you can use `apt-get`:
 -------------
-sudo apt-get install dejagnu tcllib
+sudo apt-get install python3-pytest python3-pexpect dejagnu tcllib
 -------------
 
-This should also install the necessary `expect` and `tcl` packages.
+This should also install the necessary dependencies. Only Debian testing
+(buster) and Ubuntu 18.10 (cosmic) and later have an appropriate version
+of pytest in the repositories.
 
 Fedora/RHEL/CentOS
 ~~~~~~~~~~~~~~~~~~
 
-On Fedora and RHEL/CentOS (with EPEL) you can use `yum`:
+On Fedora and RHEL/CentOS (with EPEL) you can try `yum` or `dnf`:
 -------------
-sudo yum install dejagnu tcllib
+sudo yum install python3-pytest python3-pexpect dejagnu tcllib
 -------------
 
-This should also install the necessary `expect` and `tcl` packages.
+This should also install the necessary dependencies. At time of writing, only
+Fedora 29 comes with recent enough pytest.
 
 Structure
 ---------
 
+Pytest tests are in the `t/` subdirectory, with `t/test_\*.py` being
+completion tests, and `t/unit/test_unit_\*.py` unit tests.
+
-Main areas (DejaGnu tools)
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The tests are grouped into different areas, called _tool_ in DejaGnu:
+Legacy tests are grouped into different areas, called _tool_ in DejaGnu:
 
 *completion*::
   Functional tests per completion.
@@ -63,60 +79,17 @@ The tests are grouped into different areas, called _tool_ in DejaGnu:
 *unit*::
   Unit tests for bash-completion helper functions.
 
-Each tool has a slightly different way of loading the test fixtures, see
-<<Test_context,Test context>> below.
-
-
-Completion
-~~~~~~~~~~
-
-Completion tests are spread over two directories: `completion/\*.exp` calls
-completions in `lib/completions/\*.exp`. This two-file system stems from
-bash-completion-lib (http://code.google.com/p/bash-completion-lib/, containing
-dynamic loading of completions) where tests are run twice per completion; once
-before dynamic loading and a second time after to confirm that all dynamic
-loading has gone well.
-
-For example:
-
-----
-set test "Completion via comp_load() should be installed"
-set cmd "complete -p awk"
-send "$cmd\r"
-expect {
-    -re "^$cmd\r\ncomplete -o filenames -F comp_load awk\r\n/@$" { pass "$test" }
-    -re /@ { fail "$test at prompt" }
-}
-
-
-source "lib/completions/awk.exp"
-
-
-set test "Completion via _longopt() should be installed"
-set cmd "complete -p awk"
-send "$cmd\r"
-expect {
-    -re "^$cmd\r\ncomplete -o filenames -F _longopt awk\r\n/@$" { pass "$test" }
-    -re /@ { fail "$test at prompt" }
-}
-
-
-source "lib/completions/awk.exp"
-----
-
-Looking to the completion tests from a broader perspective, every test for a
-command has two stages which are now reflected in the two files:
-
-. Tests concerning the command completions' environment (typically in
-`test/completion/foo`)
-. Tests invoking actual command completion (typically in
-`test/lib/completions/foo`)
-
 Running the tests
 -----------------
 
-The tests are run by calling `runtest` command in the test directory:
+Python based tests are run by calling `pytest` on the desired test directories
+or individual files, for example in the project root directory:
+-----------------------
+pytest test/t
+-----------------------
+
+Legacy tests are run by calling `runtest` command in the test directory:
 -----------------------
 runtest --outdir log --tool completion
 runtest --outdir log --tool install
@@ -136,80 +109,15 @@ To run a particular test, specify file name of your test as an argument to
 -----------------------
 
 That will run `test/completion/ssh.exp`.
+See `test/docker/docker-script.sh` for how and what we run and test in CI.
-Running tests via cron
-~~~~~~~~~~~~~~~~~~~~~~
-
-The test suite requires a connected terminal (tty). When invoked via cron, no
-tty is connected and the test suite may respond with this error:
---------------------------------------------
-can't read "multipass_name": no such variable
---------------------------------------------
-
-To run the tests successfully via cron, connect a terminal by redirecting
-stdin from a tty, e.g. /dev/tty40. (In Linux, you can press alt-Fx or
-ctrl-alt-Fx to switch the console from /dev/tty1 to tty7. There are many more
-/dev/tty* which are not accessed via function keys. To be safe, use a tty
-greater than tty7)
-
-----------------------
-./runUnit < /dev/tty40
-----------------------
-
-If the process doesn't run as root (recommended), root will have to change the
-owner and permissions of the tty:
--------------------------
-sudo chmod o+r /dev/tty40
--------------------------
-
-To make this permission permanent (at least on Debian) - and not revert back on
-reboot - create the file `/etc/udev/rules.d/10-mydejagnu.rules`, containing:
-----------------------------
-KERNEL=="tty40", MODE="0666"
-----------------------------
-
-To start the test at 01:00, set the crontab to this:
-----------------------------
-* 1 * * * cd bash-completion/test && ./cron.sh < /dev/tty40
-----------------------------
-
-Here's an example batch file `cron.sh`, to be put in the bash-completion `test`
-directory. This batch file only e-mails the output of each test-run if the
-test-run fails.
-
-[source,bash]
---------------------------------------------------------------------
-#!/bin/sh
-
-set -e  # Exit if simple command fails
-set -u  # Error if variable is undefined
-
-LOG=/tmp/bash-completion.log~
-
-    # Retrieve latest sources
-git pull
-
-    # Run tests on bash-4
-./runUnit --outdir log/bash-4 --tool_exec /opt/bash-4.3/bin/bash > $LOG || cat $LOG
-./runCompletion --outdir log/bash-4 --tool_exec /opt/bash-4.3/bin/bash > $LOG || cat $LOG
-
-    # Clean up log file
-[ -f $LOG ] && rm $LOG
---------------------------------------------------------------------
 
 Specifying bash binary
 ~~~~~~~~~~~~~~~~~~~~~~
 
-The test suite standard uses `bash` as found in the tcl path (/bin/bash).
-Using `--tool_exec` you can specify which bash binary you want to run the test
-suite against, e.g.:
-
-----------------
-./runUnit --tool_exec /opt/bash-4.3/bin/bash
-----------------
-
-
+The test suite standard uses `bash` as found in PATH. Export the
+`bashcomp_bash` environment variable with a path to another bash executable if
+you want to test against something else.
 
 Maintenance
@@ -219,218 +127,10 @@ Maintenance
 
 Adding a completion test
 ~~~~~~~~~~~~~~~~~~~~~~~~
-You can run `cd test && ./generate cmd` to add a test for the `cmd` command.
-This will add two files with a very basic tests:
-----------------------------------
-test/completion/cmd.exp
-test/lib/completions/cmd.exp
-----------------------------------
-Place any additional tests into `test/lib/completions/cmd.exp`.
-
-
-Fixing a completion test
-~~~~~~~~~~~~~~~~~~~~~~~~
-Let's consider this real-life example where an ssh completion bug is fixed.
-First you're triggered by unsuccessful tests:
-
-----------------------------------
-$ ./runCompletion
-...
-        === completion Summary ===
-
-# of expected passes        283
-# of unexpected failures    8
-# of unresolved testcases   2
-# of unsupported tests      47
-----------------------------------
-
-Take a look in `log/completion.log` to find out which specific command is
-failing.
-
-----------------------
-$ vi log/completion.log
-----------------------
-
-Search for `UNRESOLVED` or `FAIL`. From there scroll up to see which `.exp`
-test is failing:
-
---------------------------------------------------------
-/@Running ./completion/ssh.exp ...
-...
-UNRESOLVED: Tab should complete ssh known-hosts at prompt
---------------------------------------------------------
-
-In this case it appears `ssh.exp` is causing the problem. Isolate the `ssh`
-tests by specifying just `ssh.exp` to run. Furthermore add the `--debug` flag,
-so output gets logged in `dbg.log`:
-
-----------------------------------
-$ ./runCompletion ssh.exp --debug
-...
-        === completion Summary ===
-
-# of expected passes        1
-# of unresolved testcases   1
-----------------------------------
-
-Now we can have a detailed look in `dbg.log` to find out what's going wrong.
-Open `dbg.log` and search for `UNRESOLVED` (or `FAIL` if that's what you're
-looking for):
-
---------------------------------------------------------
-UNRESOLVED: Tab should complete ssh known-hosts at prompt
---------------------------------------------------------
-
-From there, search up for the first line saying:
-
--------------------------------------------------
-expect: does "..." match regular expression "..."
--------------------------------------------------
-
-This tells you where the actual output differs from the expected output. In
-this case it looks like the test "ssh -F fixtures/ssh/config <TAB>" is
-expecting just hostnames, whereas the actual completion is containing commands
-- but no hostnames.
-
-So what should be expected after "ssh -F fixtures/ssh/config <TAB>" are *both*
-commands and hostnames. This means both the test and the completion need
-fixing. Let's start with the test.
-
-----------------------------
-$ vi lib/completions/ssh.exp
-----------------------------
-
-Search for the test "Tab should complete ssh known-hosts". Here you could've
-seen that what was expected were hostnames ($hosts):
-
------------------------------------------
-set expected "^$cmd\r\n$hosts\r\n/@$cmd$"
------------------------------------------
-
-Adding *all* commands (which could well be over 2000) to 'expected' seems a
-bit overdone, so we're gonna change things here. Let's expect the unit test for
-`_known_hosts` assures all hosts are returned. Then all we need to do here is
-expect one host and one command, just to be kind of sure that both hosts and
-commands are completed.
-
-Looking in the fixture for ssh:
-
------------------------------
-$ vi fixtures/ssh/known_hosts
------------------------------
-
-it looks like we can add an additional host 'ls_known_host'. Now if we would
-perform the test "ssh -F fixtures/ssh/config ls<TAB>" both the command `ls` and
-the host `ls_known_host` should come up. Let's modify the test so:
-
---------------------------------------------------------
-$ vi lib/completions/ssh.exp
-...
-set expected "^$cmd\r\n.*ls.*ls_known_host.*\r\n/@$cmd$"
---------------------------------------------------------
-
-Running the test reveals we still have an unresolved test:
-
-----------------------------------
-$ ./runCompletion ssh.exp --debug
-...
-        === completion Summary ===
-
-# of expected passes        1
-# of unresolved testcases   1
-----------------------------------
-
-But if we now look into the log file `dbg.log` we can see the completion only
-returns commands starting with 'ls' but fails to match our regular expression,
-which also expects the hostname `ls_known_host`:
-
------------------------
-$ vi dbg.log
-...
-expect: does "ssh -F fixtures/ssh/config ls\r\nls lsattr lsb_release lshal lshw lsmod lsof lspci lspcmcia lspgpot lss16toppm\r\nlsusb\r\n/@ssh -F fixtures/ssh/config ls" (spawn_id exp9) match regular expression "^ssh -F fixtures/ssh/config ls\r\n.*ls.*ls_known_host.*\r\n/@ssh -F fixtures/ssh/config ls$"? no
------------------------
-
-Now let's fix ssh completion:
-
--------------------
-$ vi ../contrib/ssh
-...
--------------------
-
-until the test shows:
-
-----------------------------------
-$ ./runCompletion ssh.exp
-...
-        === completion Summary ===
-
-# of expected passes        2
-----------------------------------
-
-Fixing a unit test
-~~~~~~~~~~~~~~~~~~
-Now let's consider a unit test failure. First you're triggered by unsuccessful
-tests:
-
-----------------------------------
-$ ./runUnit
-...
-        === unit Summary ===
-
-# of expected passes        1
-# of unexpected failures    1
-----------------------------------
-
-Take a look in `log/unit.log` to find out which specific command is failing.
-
------------------
-$ vi log/unit.log
------------------
-
-Search for `UNRESOLVED` or `FAIL`. From there scroll up to see which `.exp`
-test is failing:
-
-------------------------------------------
-/@Running ./unit/_known_hosts_real.exp ...
-...
-FAIL: Environment should stay clean
-------------------------------------------
-
-In this case it appears `_known_hosts_real.exp` is causing the problem.
-Isolate the `_known_hosts_real` test by specifying just `_known_hosts_real.exp`
-to run. Furthermore add the `--debug` flag, so output gets logged in
-`dbg.log`:
-
-----------------------------------
-$ ./runUnit _known_hosts_real.exp --debug
-...
-        === completion Summary ===
-
-# of expected passes        1
-# of unexpected failures    1
-----------------------------------
-
-Now, if we haven't already figured out the problem, we can have a detailed look
-in `dbg.log` to find out what's going wrong. Open `dbg.log` and search for
-`UNRESOLVED` (or `FAIL` if that's what you're looking for):
-
------------------------------------
-FAIL: Environment should stay clean
------------------------------------ 
-
-From there, search up for the first line saying:
-
--------------------------------------------------
-expect: does "..." match regular expression "..."
--------------------------------------------------
-
-This tells you where the actual output differs from the expected output. In
-this case it looks like the function `_known_hosts_real` is unexpectedly
-modifying global variables `cur` and `flag`. In case you need to modify the
-test:
-
------------------------------------
-$ vi lib/unit/_known_hosts_real.exp
------------------------------------
+You can run `cd test && ./generate cmd` to add a test for the `cmd`
+command. Additional arguments will be passed to the first generated test case.
+This will add the `test/t/test_cmd.py` file with a very basic test, and add it
+to `test/t/Makefile.am`. Add additional tests to the generated file.
 
 Rationale
 ---------
@@ -449,124 +149,3 @@ script/generate
 ^^^^^^^^^^^^^^^
 
 The name and location of this code generation script come from Ruby on Rails'
 http://en.wikibooks.org/wiki/Ruby_on_Rails/Tools/Generators[script/generate].
-
-
-
-
-== Reference
-
-Within test scripts the following library functions can be used:
-
-[[Test_context]]
-== Test context
-
-The test environment needs to be put to fixed states when testing. For
-instance the bash prompt (PS1) is set to the current test directory, followed
-by an at sign (@). The default settings for `bash` reside in `config/bashrc`
-and `config/inputrc`.
-
-For each tool (completion, install, unit) a slightly different context is in
-effect.
-
-=== What happens when tests are run?
-
-==== completion
-
-When the completions are tested, invoking DejaGnu will result in a call to
-`completion_start()` which in turn will start `bash --rcfile config/bashrc`.
-
-.What happens when completion tests are run?
-----
-     | runtest --tool completion
-     V
-+----------+-----------+
-|  lib/completion.exp  |
-|    lib/library.exp   |
-|  config/default.exp  |
-+----------+-----------+
-           :
-           V
-+----------+-----------+     +---------------+     +----------------+
-|  completion_start()  +<---+ config/bashrc +<---| config/inputrc |
-| (lib/completion.exp) |     +---------------+     +----------------+
-+----------+-----------+
-           |    ,+----------------------------+
-           | ,--+-+ "Actual completion tests" |
-           V /  +------------------------------+
-+----------+-----------+     +-----------------------+
-|   completion/*.exp   +<---| lib/completions/*.exp |
-+----------+-----------+     +-----------------------+
-           | \  ,+--------------------------------+
-           |  `-+-+ "Completion invocation tests" |
-           V    +----------------------------------+
-+----------+-----------+
-|  completion_exit()   |
-| (lib/completion.exp) |
-+----------------------+
-----
-
-Setting up bash once within `completion_start()` has the speed advantage that
-bash - and bash-completion - need only initialize once when testing multiple
-completions, e.g.:
-----
-    runtest --tool completion alias.exp cd.exp
-----
-
-==== install
-
-.What happens when install tests are run?
-----
-     | runtest --tool install
-     V
-+----+----+
-| DejaGnu |
-+----+----+
-     |
-     V
-+------------+---------------+
-| (file: config/default.exp) |
-+------------+---------------+
-     |
-     V
-+------------+------------+
-| (file: lib/install.exp) |
-+-------------------------+
-----
-
-==== unit
-
-.What happens when unit tests are run?
-----
-     | runtest --tool unit
-     V
-+----+----+
-| DejaGnu |
-+----+----+
-     |
-     V
-+----------+-----------+
-|          -           |
-| (file: lib/unit.exp) |
-+----------------------+
-----
-
-=== bashrc
-
-This is the bash configuration file (bashrc) used for testing:
-
-[source,bash]
---------------------------------------------------------------------
-include::bashrc[]
---------------------------------------------------------------------
-
-
-=== inputrc
-
-This is the readline configuration file (inputrc) used for testing:
-
-[source,bash]
---------------------------------------------------------------------
-include::inputrc[]
---------------------------------------------------------------------
-
-
-Index
-=====
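The diff moves the walkthroughs from DejaGnu/Expect to pytest, but the mechanic both suites exercise is the same: register a completion spec, invoke the completion function the way bash's programmable completion machinery would, and assert on `COMPREPLY`. A minimal sketch of that mechanic, outside either framework — the `greet` command and its word list are invented for illustration and are not part of bash-completion; only the bash builtins `complete` and `compgen` and the `COMP_*` variables are real:

```python
import subprocess

# Register a throwaway completion function, call it the way bash's
# programmable completion machinery would, and print COMPREPLY.
script = r"""
_greet() { COMPREPLY=( $(compgen -W 'hello hola hei' -- "${COMP_WORDS[COMP_CWORD]}") ); }
complete -F _greet greet

COMP_WORDS=(greet h)   # command line so far: 'greet h'
COMP_CWORD=1           # cursor is on the second word
_greet
printf '%s\n' "${COMPREPLY[@]}"
"""

result = subprocess.run(
    ["bash", "--norc", "-c", script],
    capture_output=True, text=True, check=True,
)
print(result.stdout, end="")  # hello, hola and hei all match the prefix 'h'
```

The pexpect-based suite does the equivalent against a live interactive shell, so that readline's own Tab handling is part of what gets tested.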
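For the interactive half, here is a rough sketch of the pexpect pattern the suite was ported to: spawn a real bash on a pseudo-terminal, synchronize on a known prompt, press Tab, and match the text readline writes back. The prompt string and the throwaway `greet` completion below are invented for this sketch, and `pexpect` is assumed to be installed (it is listed in `test/requirements.txt`):

```python
import pexpect

# Interactive bash on a pty; --norc/--noprofile keep the environment clean.
bash = pexpect.spawn("bash --norc --noprofile", encoding="utf-8", timeout=10)
bash.sendline("PS1='==SKETCH=='")
bash.expect_exact("==SKETCH==")  # the echoed command itself
bash.expect_exact("==SKETCH==")  # the new prompt: we are now synchronized
bash.sendline("complete -W 'hamburg helsinki' greet")
bash.expect_exact("==SKETCH==")  # prompt again: the complete call is done
bash.send("greet ham\t")         # exactly one candidate matches, so readline
bash.expect_exact("hamburg")     # completes 'ham' to 'hamburg' in place
bash.close(force=True)
```

The real tests wrap this prompt-synchronization dance in fixtures; the sketch only shows why a fixed, unusual prompt string matters — it is the marker the test waits on between every step.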