Testing NASM
============
We use the [Travis CI](https://travis-ci.org/) service to execute NASM tests:
it prepares the environment and runs our `nasm-t.py` script.

The script scans a testing directory for `*.json` test descriptor files
and runs each test according to its descriptor.

Test engine
-----------
The `nasm-t.py` script is a simple test engine written in Python 3
which can execute either a single test or all of them in sequence.

A typical test case is processed in the following steps:

 - the test descriptor is parsed to figure out which arguments
   are to be passed on the NASM command line;
 - NASM is invoked with those arguments;
 - the generated files are compared with precompiled templates.
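
The three steps can be sketched roughly as follows. This is an illustrative
snippet, not the real `nasm-t.py` code; the `build_command` and `run_case`
helpers are hypothetical, as is the simplifying assumption that the first
target entry is the `output` one:

```python
import filecmp
import json
import subprocess
from pathlib import Path

def build_command(case, nasm="./nasm"):
    """Step 1: turn a parsed descriptor into a NASM command line."""
    out = case["target"][0]["output"]
    cmd = [nasm, "-f", case["format"]]
    if "option" in case:
        cmd += case["option"].split()
    cmd += ["-o", out, case["source"]]
    return cmd

def run_case(descriptor_path):
    """Parse the descriptor, invoke NASM, compare against the *.t template."""
    case = json.loads(Path(descriptor_path).read_text())
    out = case["target"][0]["output"]
    subprocess.run(build_command(case), check=True)      # step 2: invoke NASM
    return filecmp.cmp(out, out + ".t", shallow=False)   # step 3: compare
```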

`nasm-t.py` supports the following commands:

 - `list`: to list all test cases
 - `run`: to run test cases
 - `update`: to update precompiled templates

Use the `nasm-t.py -h` command to get a detailed description of every option.

### Test unit structure
Each test consists of at least three files:

 - a test descriptor with a `*.json` extension;
 - a source file to compile;
 - a target file to compare the result with; it is assumed to have
   the same name as the output file generated during the pass but with
   a `*.t` extension appended: if a test generates a `*.bin` file, the
   corresponding target should be named `*.bin.t`.
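
For instance, an on-disk layout matching the `absolute` test described later
in this document would look like this (the exact paths are illustrative):

```text
travis/test/absolute.asm       the source file to compile
travis/test/absolute.json      the test descriptor
travis/test/absolute.bin.t     the precompiled target to compare with
```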

Running tests
-------------
To run all currently available tests simply type the following

```console
python3 travis/nasm-t.py run
```

By default `nasm-t.py` scans the `test` subdirectory for `*.json` files and
considers each one a test descriptor. Every test is then executed sequentially.
If a descriptor cannot be parsed it is silently ignored.

To run a particular test, provide the test name, for example

```console
python3 travis/nasm-t.py list
...
./travis/test/utf                Test __utf__ helpers
./travis/test/utf                Test errors in __utf__ helpers
...
python3 travis/nasm-t.py run -t ./travis/test/utf
```

Duplicate test names in the listing above mean that the descriptor
carries several tests with the same name but different options.

Test descriptor file
--------------------
A descriptor file should provide enough information on how to run NASM
itself and which output files or streams to compare with the predefined ones.
We use the *JSON* format with the following fields:

 - `description`: a short description of the test, shown to
   the user when tests are being listed;
 - `id`: an internal name of the descriptor, referenced by the `ref` field;
 - `ref`: a reference to an `id` from which settings should be
   copied; it is convenient when, say, only `option` differs
   while the rest of the fields are the same;
 - `format`: the NASM output format to use (`bin`, `elf`, etc.);
 - `source`: the name of the source file to compile; this file must
   be shipped together with the descriptor file itself;
 - `option`: additional options passed on the command line;
 - `update`: a trigger to skip updating targets when running
   the update procedure;
 - `target`: an array of targets which the test engine should
   check once compilation has finished:
    - `stderr`: a file containing the *stderr* stream output to check;
    - `stdout`: a file containing the *stdout* stream output to check;
    - `output`: a file containing the compiled result to check; in other
      words, the name passed with the `-o` option to the compiler;
 - `error`: an error handler; can be either *over*, to ignore any
   error that occurs, or *expected*, to make sure the test fails.

### Examples
A simple test where no additional options are used: simply compile the
`absolute.asm` file with the `bin` output format, then compare the
produced `absolute.bin` file with the precompiled `absolute.bin.t`.

```json
{
	"description": "Check absolute addressing",
	"format": "bin",
	"source": "absolute.asm",
	"target": [
		{ "output": "absolute.bin" }
	]
}
```

Note the `output` target is named *absolute.bin*, for which *absolute.bin.t*
should already be precompiled (we will talk about this in the `update` action)
and present on disk.

A slightly more complex example: compile one source file with different
optimization options, where all results must be the same. To avoid writing
three descriptors we assign an `id` to the first one and use the `ref` field
to copy its settings. Also, it is expected that the `stderr` stream will not
be empty but will carry some warnings to compare.

```json
[
	{
		"description": "Check 64-bit addressing (-Ox)",
		"id": "addr64x",
		"format": "bin",
		"source": "addr64x.asm",
		"option": "-Ox",
		"target": [
			{ "output": "addr64x.bin" },
			{ "stderr": "addr64x.stderr" }
		]
	},
	{
		"description": "Check 64-bit addressing (-O1)",
		"ref": "addr64x",
		"option": "-O1",
		"update": "false"
	},
	{
		"description": "Check 64-bit addressing (-O0)",
		"ref": "addr64x",
		"option": "-O0",
		"update": "false"
	}
]
```
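
The `ref` mechanism used above can be sketched as follows. This is an
illustrative Python snippet, not the actual `nasm-t.py` implementation;
the `resolve_refs` helper and its exact override semantics are assumptions:

```python
def resolve_refs(cases):
    """Copy settings from the descriptor whose `id` matches this
    descriptor's `ref`, letting locally set fields take precedence.
    Illustrative sketch only, not the real nasm-t.py logic."""
    by_id = {c["id"]: c for c in cases if "id" in c}
    resolved = []
    for case in cases:
        if "ref" in case:
            merged = dict(by_id[case["ref"]])   # start from the base settings
            merged.update({k: v for k, v in case.items() if k != "ref"})
            case = merged                       # local fields win
        resolved.append(case)
    return resolved
```

With the descriptor above, the second entry would resolve to the full
`addr64x` settings with only `option`, `description` and `update` replaced
by its own values.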

Updating tests
--------------
If during development some of the targets are expected to change,
the tests will start to fail, so they should be updated. The new
precompiled results will then be treated as templates to compare with.

To update all tests in one pass run

```console
python3 travis/nasm-t.py update
...
=== Updating ./travis/test/xcrypt ===
	Processing ./travis/test/xcrypt
	Executing ./nasm -f bin -o ./travis/test/xcrypt.bin ./travis/test/xcrypt.asm
	Moving ./travis/test/xcrypt.bin to ./travis/test/xcrypt.bin.t
=== Test ./travis/test/xcrypt UPDATED ===
...
```

and commit the results. To update a particular test, provide its name
with the `-t` option.