| Commit message | Author | Age | Files | Lines |
The new Elasticsearch no longer supports custom field types [1].
[1] https://www.elastic.co/guide/en/elasticsearch/reference/7.17/removal-of-types.html#_custom_type_field
Change-Id: I0b154da0a4736c6b7758f9936356d5b7097c35ad
According to the removal-of-types[1] documentation, it is no longer
necessary to specify a document type.
[1] https://www.elastic.co/guide/en/elasticsearch/reference/7.17/removal-of-types.html
Change-Id: I02996ce328a48b5ae6493646abe08ebab31ec962
This is a spec that describes how we could merge the functionality
of Nodepool into Zuul.
Change-Id: I60871b5f895826811888aacffa8dac946e49f333
When a build result arrives for a non-current buildset we should skip
the reporting as we can no longer create the reference to the buildset.
Traceback (most recent call last):
File "/opt/zuul/lib/python3.10/site-packages/zuul/scheduler.py", line 2654, in _doBuildCompletedEvent
self.sql.reportBuildEnd(
File "/opt/zuul/lib/python3.10/site-packages/zuul/driver/sql/sqlreporter.py", line 143, in reportBuildEnd
db_build = self._createBuild(db, build)
File "/opt/zuul/lib/python3.10/site-packages/zuul/driver/sql/sqlreporter.py", line 180, in _createBuild
tenant=buildset.item.pipeline.tenant.name, uuid=buildset.uuid)
AttributeError: 'NoneType' object has no attribute 'item'
Change-Id: Iccbe9ab8212fbbfa21cb29b84a17e03ca221d7bd
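The guard described above can be sketched roughly as follows (the class shapes are simplified stand-ins for illustration, not Zuul's real model classes):

```python
class Buildset:
    def __init__(self, uuid, item):
        self.uuid = uuid
        self.item = item


class Build:
    def __init__(self, build_set):
        self.build_set = build_set


def report_build_end(build):
    buildset = build.build_set
    # A result can arrive for a buildset that is no longer current, in
    # which case the reference chain (buildset.item.pipeline...) is
    # gone; skip reporting instead of raising AttributeError on None.
    if buildset is None or buildset.item is None:
        return None
    return ("reported", buildset.uuid)
```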
When a job already fails during setup we never load the frozen hostvars.
Since the cleanup playbooks depend on those, we can skip the cleanup
runs if the dict is empty.
As we always add "localhost" to the hostlist, the frozen hostvars will
never be empty when loading was successful.
This will get rid of the following exception:
Traceback (most recent call last):
File "/opt/zuul/lib/python3.10/site-packages/zuul/executor/server.py", line 1126, in execute
self._execute()
File "/opt/zuul/lib/python3.10/site-packages/zuul/executor/server.py", line 1493, in _execute
self.runCleanupPlaybooks(success)
File "/opt/zuul/lib/python3.10/site-packages/zuul/executor/server.py", line 1854, in runCleanupPlaybooks
self.runAnsiblePlaybook(
File "/opt/zuul/lib/python3.10/site-packages/zuul/executor/server.py", line 3042, in runAnsiblePlaybook
self.writeInventory(playbook, self.frozen_hostvars)
File "/opt/zuul/lib/python3.10/site-packages/zuul/executor/server.py", line 2551, in writeInventory
inventory = make_inventory_dict(
File "/opt/zuul/lib/python3.10/site-packages/zuul/executor/server.py", line 913, in make_inventory_dict
node_hostvars = hostvars[node['name']].copy()
KeyError: 'node'
Change-Id: I33a6a9ab355482e471e79f3dd5d702589fee04b3
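A minimal sketch of the skip logic described above (function and argument names are illustrative, not Zuul's actual executor API):

```python
def run_cleanup_playbooks(frozen_hostvars, cleanup_playbooks):
    # The frozen hostvars always contain at least "localhost" when
    # loading succeeded, so an empty dict means the job failed during
    # setup and there is no inventory to run cleanup against.
    if not frozen_hostvars:
        return []
    # Otherwise run (here: return) the cleanup playbooks as usual.
    return list(cleanup_playbooks)
```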
Change-Id: I0d450d9385b9aaab22d2d87fb47798bf56525f50
This is a follow-on to Ia78ad9e3ec51bc47bf68c9ff38c0fcd16ba2e728 to
use a different loopback address for the local connection to the
Python 2.7 container. This way, we don't have to override the
existing localhost/127.0.0.1 matches that avoid the executor trying to
talk to a zuul_console daemon. These bits are removed.
The comment around the port settings is updated while we're here.
Change-Id: I33b2198baba13ea348052e998b1a5a362c165479
Change Ief366c092e05fb88351782f6d9cd280bfae96237 introduced a bug in
the streaming daemons because it was using Python 3.6 features. The
streaming console needs to work on all Ansible managed nodes, which
includes back to Python 2.7 nodes (while Ansible supports that).
This introduces a regression test by building about the smallest
Python 2.7 container that can be managed by Ansible.  We start this
container and modify the test inventory to include it, then run the
stream tests against it.
The existing testing runs against the "new" console but also tests
against the console OpenDev's Zuul starts to ensure
backwards-compatibility.  Since this container wasn't started by Zuul
it doesn't have this, so that testing is skipped for this node.
It might be good to abstract all testing of the console daemons into
separate containers for each Ansible supported managed-node Python
version -- it's a bit more work than I want to take on right now.
This should ensure the lower-bound though and prevent regressions for
older platforms.
Change-Id: Ia78ad9e3ec51bc47bf68c9ff38c0fcd16ba2e728
Change-Id: I2576d0dcec7c8f7bbb76bdd469fd992874742edc
I noticed in some of our testing a construct like
debug:
msg: '{{ ansible_version }}'
was actually erroring out; if you look at the console output you'll
see
Ansible output: b'TASK [Print ansible version msg={{ ansible_version }}] *************************'
Ansible output: b'[WARNING]: Failure using method (v2_runner_on_ok) in callback plugin'
Ansible output: b'(<ansible.plugins.callback.zuul_stream.CallbackModule object at'
Ansible output: b"0x7f502760b490>): 'dict' object has no attribute 'startswith'"
and the job-output.txt will be empty for this task (this is detected
by I9f569a411729f8a067de17d99ef6b9d74fc21543).
This is because the msg value here comes in as a dict, and in several
places we assume it is a string.  This changes the places where we
inspect the msg variable to use the standard Ansible way of making a
text string (the to_text function) and ensures that the logging
function converts its input to a string.
We test for this with updated tasks in the remote_zuul_stream tests.
It is slightly refactored to do partial matches so we can use the
version strings, which is where we saw the issue.
Change-Id: I6e6ed8dba2ba1fc74e7fc8361e8439ea6139279e
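The idea can be illustrated with a standalone stand-in for the conversion (Ansible's real helper is to_text() from ansible.module_utils; this sketch just shows why coercing msg first avoids the AttributeError above):

```python
def to_text_like(value, encoding="utf-8"):
    # Coerce bytes and arbitrary objects (dicts, lists, ...) into a
    # text string so later calls like .startswith() cannot fail with
    # AttributeError on a non-string msg.
    if isinstance(value, str):
        return value
    if isinstance(value, (bytes, bytearray)):
        return bytes(value).decode(encoding, errors="replace")
    return str(value)


# The failing case from the log above: msg arrived as a dict.
msg = {"msg": "{{ ansible_version }}"}
line = to_text_like(msg)  # safe: always a str afterwards
```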
Several of our tests which validate Ansible behavior with Zuul are
not versioned so that they test all supported versions of Ansible.
For those cases, add versioned tests and fix any discrepancies that
have been uncovered by the additional tests (fortunately all are
minor test syntax issues and do not affect real-world usage).
One of our largest versioned Ansible tests was not actually testing
multiple Ansible versions -- we just ran it 3 times on the default
version. Correct that and add validation that the version ran was
the expected version.
Change-Id: I26213f69fe844776408fce24322749a197e07551
Currently the task in the test playbook
- hosts: compute1
tasks:
- name: Command Not Found
command: command-not-found
failed_when: false
is failing in the zuul_stream callback with an exception trying to
fill out the "delta" value in the message here. The result dict
(taken from the new output) shows us why:
2022-08-24 07:19:27.079961 | TASK [Command Not Found]
2022-08-24 07:19:28.578380 | compute1 | ok: ERROR (ignored)
2022-08-24 07:19:28.578622 | compute1 | {
2022-08-24 07:19:28.578672 | compute1 | "failed_when_result": false,
2022-08-24 07:19:28.578700 | compute1 | "msg": "[Errno 2] No such file or directory: b'command-not-found'",
2022-08-24 07:19:28.578726 | compute1 | "rc": 2
2022-08-24 07:19:28.578750 | compute1 | }
i.e. it has no start/stop/delta in the result (it did run and fail, so
you'd think it might ... but this is what Ansible gives us).
This change checks for that path; as mentioned, the output in this
case now looks like the above.
This was found by the prior change
I9f569a411729f8a067de17d99ef6b9d74fc21543. This fixes the current
warning, so we invert the test to prevent further regressions.
Change-Id: I106b2bbe626ed5af8ca739d354ba41eca2f08f77
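The defensive handling described can be sketched like this (a simplified illustration, not the actual zuul_stream callback code):

```python
def format_task_result(host, result):
    # Ansible usually includes start/end/delta in a task result, but
    # some failure paths (like the command-not-found case above) omit
    # them entirely, so treat the timing keys as optional.
    delta = result.get("delta")
    suffix = " in %s" % delta if delta is not None else ""
    status = "ok" if result.get("failed_when_result") is False else "failed"
    return "%s | %s%s" % (host, status, suffix)
```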
We have a couple of places that can trigger errors in the zuul_stream
callback that this testing currently misses.
To try and catch this case better we grab the Ansible console output
(where the failure of the callback plugin is noted) and check it for
Ansible failure strings.
Just for review purposes, follow-on changes will correct current
errors so this test can be inverted.
Change-Id: I9f569a411729f8a067de17d99ef6b9d74fc21543
The keycloak tutorial incorrectly instructed users to run
"docker-compose-compose". Correct that.
Also, change the instructions to "stop" rather than "down" the
original containers so that the results of the quick-start tutorial
are still present.
Finally, add a verification that the intended effect of the restart
worked (by checking the available authn methods).
Change-Id: I43a17e27300126e8acdc1919ba2bbe98719ad604
Timer unit test jobs should disable timer triggers before ending,
otherwise we may not shut down cleanly and will fail the test.
Change-Id: I2bbbfcaa7da50cd2daedb8f7dea11eb5725d56e4
The Ansible version is sometimes used for selecting the correct linter
or for implementing feature switches to make roles/playbooks backward
compatible.
With the split of Ansible into an "ansible" and "ansible-core" package,
the `ansible_version` now contains the version of the core package.
There seems to be no other variable that contains the version of the
"Ansible community" package that Zuul is using.
In order to support this use-case for Ansible 5+ we will add the Ansible
version to the job's Zuul vars.
Change-Id: I3f3a3237b8649770a9b7ff488e501a97b646a4c4
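A feature switch using this could look roughly as follows (a hypothetical playbook fragment; it assumes the version is exposed as zuul.ansible_version, and the version test is Ansible's built-in):

```yaml
# Hypothetical feature switch keyed on the Ansible community package
# version exposed via the job's Zuul vars.
- name: Use newer syntax only on Ansible 6 and later
  debug:
    msg: "community package version: {{ zuul.ansible_version }}"
  when: zuul.ansible_version is version('6', '>=')
```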
Update the developer Ansible docs to give some more details on how the
zuul_console daemon streaming happens. In a couple of places where it
is mentioned, rework things and point them at this explanation.
Change-Id: I5bfb61323bf3219168d4d014cbb9703eed230e71
With the default "linear" strategy (and likely others), Ansible will
send the on_task_start callback, and then fork a worker process to
execute that task. Since we spawn a thread in the on_task_start
callback, we can end up emitting a log message in this method while
Ansible is forking. If a forked process inherits a Python file object
(i.e., stdout) that is locked by a thread that doesn't exist in the
fork (i.e., this one), it can deadlock when trying to flush the file
object. To minimize the chances of that happening, we should avoid
using _display outside the main thread.
The Python logging module is supposed to use internal locks which are
automatically acquired and released across a fork.  Assuming this is
(still) true and functioning correctly, we should be okay to issue
our Python logging module calls at any time. If there is a fault
in this system, however, it could have a potential to cause a similar
problem.
If we can convince the Ansible maintainers to lock _display across
forks, we may be able to revert this change in the future.
Change-Id: Ifc6b835c151539e6209284728ccad467bef8be6f
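The pattern described can be sketched as follows (a simplified illustration, not the actual callback plugin; since Python 3.7 the logging module reinitializes its handler locks across fork via os.register_at_fork, which is the property relied on here):

```python
import logging
import threading

# Inside a callback that may run concurrently with Ansible forking
# worker processes, route messages through the logging module (whose
# internal locks are reacquired across fork) rather than a shared
# _display object that locks stdout.
log = logging.getLogger("zuul.ansible.callback")


def on_task_start(task_name):
    messages = []

    def emit():
        # Safe from a helper thread: logging manages its own locks.
        log.info("starting task: %s", task_name)
        messages.append(task_name)

    t = threading.Thread(target=emit)
    t.start()
    t.join()
    return messages
```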
This adds a config-error pipeline reporter configuration option and
now also reports config errors and merge conflicts to the database
as buildset failures.
The driving use case is that if periodic pipelines encounter config
errors (such as being unable to freeze a job graph), they might send
email if configured to send email on merge conflicts, but otherwise
their results are not reported to the database.
To make this more visible, first we need Zuul pipelines to report
buildset ends to the database in more cases -- currently we typically
only report a buildset end if there are jobs (and so a buildset start),
or in some other special cases. This change adds config errors and
merge conflicts to the set of cases where we report a buildset end.
Because of some shortcuts previously taken, that would end up reporting
a merge conflict message to the database instead of the actual error
message. To resolve this, we add a new config-error reporter action
and adjust the config error reporter handling path to use it instead
of the merge-conflicts action.
Tests of this as well as the merge-conflicts code path are added.
Finally, a small debug aid is added to the GerritReporter so that we
can easily see in the logs which reporter action was used.
Change-Id: I805c26a88675bf15ae9d0d6c8999b178185e4f1f
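A pipeline using the new option might look roughly like this (a hypothetical snippet: the config-error and merge-conflict spellings and the reporter layout are assumptions about the syntax, not taken from the change itself):

```yaml
# Hypothetical periodic pipeline reporting config errors (e.g. an
# unfreezable job graph) separately from merge conflicts.
- pipeline:
    name: periodic
    manager: independent
    trigger:
      timer:
        - time: '0 4 * * *'
    config-error:
      smtp:
        to: maintainers@example.com
    merge-conflict:
      smtp:
        to: maintainers@example.com
```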
The report message "This change depends on a change that failed to
merge" (and a similar change for circular dependency bundles) is
famously vague.  To help users identify the actual problem, include
the URLs of the change(s) that caused the problem so that users may
more easily resolve the issue.
Change-Id: Id8b9f8cf2c108703e9209e30bdc9a3933f074652
Increase the visibility of the warning.
Change-Id: I9e21c546e98020e5f6d0ada15bce420dc7e82117
As noted inline, the package: task seems unique in sending back an
array of strings. This completely messes up the output currently,
splitting up every letter of the result into a separate item.
This quick hack just reformats the list of strings into something that
comes out correctly.
Change-Id: I8a7e8172f784fc69aa0abb2e6787c63c33d3f802
The current timer docs say that the configuration is cron-like, which
is true, but apscheduler differs from cron in that '5' in the
day-of-the-week slot is actually Sunday, rather than what cron users
would expect.  Add a few more words to make that more obvious.
Change-Id: Ib81a54f1e2d59ed6e4eb95681172c5ea14c106fc
To avoid issues with outdated GitHub access tokens in the Git config, we
only update the remote URL on the repo object after the config update
was successful.
This also adds a missing repo lock when building the repo state.
Change-Id: I8e1b5b26f03cb75727d2b2e3c9310214a3eac447
This is a small refactor to check the output of each node separately.
This should have no effect, but makes it easier to add more testing in
a follow-on change.
Change-Id: Ic5d490c54da968b23fed068253f5be0249ea953a
Change-Id: I12e8a056a2e5cd1bb18c1f24ecd7db55405f0a8c