| Commit message | Author | Age | Files | Lines |
This change implements event handling for pull-request.tags.added.
Tags can be used as a trigger event filter or as required metadata.
Change-Id: I128bbef34245932e3bbee1f848ad1c484d3ccae3
Change-Id: I94d4a0d2e8630d360ad7c5d07690b6ed33b22f75
Storing autohold requests in ZooKeeper, rather than in-memory,
allows us to remember requests across restarts, and is a necessity
for future work to scale out the scheduler.
Future changes to build on this will allow us to store held node
information with the change for easy node identification, and to
delete any held nodes for a request using the zuul CLI.
A new 'zuul autohold-delete' command is added since hold requests
are no longer automatically deleted.
This makes the autohold API:
zuul autohold: Create a new hold request
zuul autohold-list: List current hold requests
zuul autohold-delete: Delete a hold request
Change-Id: I6130175d1dc7d6c8ce8667f9b14ae9377737d280
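The storage model can be sketched roughly as follows; the path layout and field names here are illustrative assumptions, not Zuul's actual ZooKeeper schema.

```python
import json

def serialize_hold_request(tenant, project, job, reason, count):
    """Serialize an autohold request for storage in ZooKeeper, so it
    survives scheduler restarts (field names are hypothetical)."""
    data = {
        "tenant": tenant,
        "project": project,
        "job": job,
        "reason": reason,
        "count": count,
    }
    # ZooKeeper node values are bytes; JSON keeps the record readable.
    return json.dumps(data, sort_keys=True).encode("utf-8")

def hold_request_path(root, request_id):
    # One znode per request allows listing and deleting individual
    # requests (zuul autohold-list / autohold-delete).
    return "%s/%s" % (root, request_id)
```

Keeping one znode per request is what makes the explicit autohold-delete command necessary: requests persist until removed.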
This is broken with Gerrit due to
https://github.com/urllib3/urllib3/pull/1684
Change-Id: Ie2c817bb91463cecc64e3022e11330898b11062c
When leaving file or line comments, leave them as robot comments
to enable more sophisticated processing/filtering.
Change-Id: Ib9e326d8a87639b06c1bc8b7f85425d98da9c003
When reporting a Gerrit review, add a tag to the ReviewInput
structure so that review messages and approvals carry a tag with
the prefix "autogenerated:". The UI (especially in later versions
of Gerrit) can handle automated messages specially (this is used
by the "Only comments" switch in PolyGerrit). This is available as
far back as 2.13 (possibly older; I haven't checked).
Change-Id: I8c693d7bcd38be4ac0301fcc9a3e97748ead6d4d
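The shape of such a review payload can be sketched as below; the exact tag value is illustrative.

```python
def build_review_input(message, labels, tag="autogenerated:zuul"):
    """Sketch of a Gerrit ReviewInput body carrying an autogenerated tag.

    Messages tagged with the "autogenerated:" prefix can be filtered
    by Gerrit's UI (e.g. the "Only comments" switch in PolyGerrit).
    """
    return {
        "message": message,
        "labels": labels,  # e.g. {"Verified": 1}
        "tag": tag,
    }
```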
This adds support for performing queries and git clones over HTTP
(reporting over HTTP was already supported). This will happen
automatically for any connection with a password configured.
Otherwise, the SSH connection will be used as before.
Change-Id: I11920a13615de103eb3d8fb305eacbbcb30e5e40
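The selection logic amounts to the following sketch; the config key name is an assumption.

```python
def choose_transport(connection_config):
    """Queries and git clones go over HTTP automatically for any
    connection with a password configured; otherwise SSH is used as
    before."""
    if connection_config.get("password"):
        return "http"
    return "ssh"
```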
These fixtures are missing the data returned by the "--current-patch-set"
and "--commit-message" arguments, which we do pass when performing the
queries. A future change alters where some of these data are parsed
and will error because of this omission.
This updates the data to match what is currently returned for these
changes (with the old sortKeys manually added back in where appropriate).
Change-Id: I11e983d22c4da818b3eeab1632ca7102aab087d5
This adds initial support for the Gerrit checks plugin.
Development of that plugin is still in progress, and hopefully it
(and our support for it) will change over time. Because we expect
to change how we interact with it in the near future, this is
documented as experimental support for now. A release note is
intentionally omitted -- that's more appropriate when we remove
the 'experimental' label.
Change-Id: Ida0cdef682ca2ce117617eacfb67f371426a3131
send() requires a bytes-like object in Python 3; ensure the error
message is encoded correctly.
---
Some debugging notes might come in handy for the future here. This
problem appeared in a fairly specific part of the test cases when
setting "ansible_python_interpreter" to /usr/bin/python3. The remote
streaming test has a task that is designed to fail [1]:
- hosts: all
tasks:
- name: Remote shell task with python exception
command: echo foo
args:
chdir: /remote-shelltask/somewhere/that/does/not/exist
failed_when: false
We see that Ansible ships over a payload and tries to run it, but it
raises an exception very early.
<192.168.122.1> SSH: EXEC ssh -C ... '/bin/sh -c '"'"'/usr/bin/python3 && sleep 0'"'"''
<192.168.122.1> Failed to connect to the host via ssh:
Traceback (most recent call last):
File "<stdin>", line 114, in <module>
File "<stdin>", line 106, in _ansiballz_main
...
File "/tmp/ansible_command_payload_tieedyzs/__main__.py", line 263, in main
FileNotFoundError: [Errno 2] No such file or directory: '/remote-shelltask/somewhere/that/does/not/exist'
When this task started, the Ansible task callbacks in the zuul_stream
callback plugin had set up a thread that listens for the console
output being sent by the remote zuul_console daemon started earlier in
the playbook [2]. This listening thread is sitting in a recv()
waiting for some streaming data to log [3].
There will be no remote log file for zuul_console to stream back,
because this task failed before it even got started. What should
happen is the "[Zuul] Log not found" message should be sent back and
logic in [4] will match this and stop this thread.
When this does *not* happen, such as when this send() raises an
exception because of wrong data type, the task ends anyway and Ansible
moves on to make the end-of-task callbacks in zuul_stream (actually
there's a bunch of looping happening, but let's ignore those details).
This ends up in _stop_streamers() [5] which attempts to join(30) the
streaming thread. Under normal circumstances, this thread should be
finished and the join() successful. However, because the target
thread is stuck in a recv(), the 30-second timeout begins. The clue
to this is in the logs you eventually get:
[Zuul] Log Stream did not terminate
So eventually, Zuul would have made progress here and given up on
waiting for the thread to finish properly. However, 30 seconds is a
long time to the unit test and pushes the job over its timeout.
The end result is that, when using Python 3, Zuul aborts the job
and the test rather mysteriously fails!
[1] https://opendev.org/zuul/zuul/src/commit/3f8b36aa0b710cfa0ab74b5c4d4ee1ed6adb806d/tests/fixtures/config/remote-zuul-stream/git/org_project/playbooks/command.yaml#L93
[2] https://opendev.org/zuul/zuul/src/commit/3f8b36aa0b710cfa0ab74b5c4d4ee1ed6adb806d/tests/fixtures/config/remote-zuul-stream/git/org_project/playbooks/command.yaml#L93
[3] https://opendev.org/zuul/zuul/src/commit/3f8b36aa0b710cfa0ab74b5c4d4ee1ed6adb806d/zuul/ansible/base/callback/zuul_stream.py#L14
[4] https://opendev.org/zuul/zuul/src/commit/3f8b36aa0b710cfa0ab74b5c4d4ee1ed6adb806d/zuul/ansible/base/callback/zuul_stream.py#L174
[5] https://opendev.org/zuul/zuul/src/commit/3f8b36aa0b710cfa0ab74b5c4d4ee1ed6adb806d/zuul/ansible/base/callback/zuul_stream.py#L271
This is tested in the follow-on I2b3bc6d4f873b7d653cfaccd1598464583c561e7
Change-Id: I7cdcfc760975871f7fa9949da1015d7cec92ee67
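The underlying fix reduces to a small helper; the function name here is generic, not Zuul's actual code.

```python
def to_wire(message):
    """Ensure data handed to socket.send() is bytes: in Python 3,
    send() raises TypeError when given a str, which is what left the
    streaming thread stuck in recv() waiting for a message that was
    never sent."""
    if isinstance(message, bytes):
        return message
    return message.encode("utf-8")
```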
The "initial comment" is the first comment of a PR on Pagure. It
is used to provide the Depends-on stanza. Recently Pagure added the
capability to send an event when that initial comment is changed:
https://pagure.io/pagure/issue/4398
This change handles the event as a PR update in order to retrigger
the attached jobs.
Change-Id: I62d4e783e94528126cd4a7d85b3e664e84758bf1
The gitweb url template was wrong and this patch fixes it.
Change-Id: Ic4ead74ddfe09b2a4a90cb7ffca746dc1e132430
This method can be used by reporters to generate the best "status"
url to report for an item. It will report the buildset page if
that is available, or the per-change status page if it is not.
Eventually we plan on making the database required, at which point
we can insert buildsets in the db earlier. Then we can report the
buildset page in all cases. By beginning to use this method now,
we can seamlessly upgrade reporters in the future.
Test coverage for this is added in change
Ida0cdef682ca2ce117617eacfb67f371426a3131.
Change-Id: Ib0c2ca84f6c4d30f233382048c8885fb73edfeec
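The URL selection can be sketched as below; the URL paths are illustrative, not Zuul's actual routes.

```python
def best_status_url(web_root, tenant, buildset_uuid, change_ref):
    """Prefer the buildset page when the database has recorded a
    buildset; otherwise fall back to the per-change status page."""
    if buildset_uuid:
        return "%s/t/%s/buildset/%s" % (web_root, tenant, buildset_uuid)
    return "%s/t/%s/status/change/%s" % (web_root, tenant, change_ref)
```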
This allows reporters to include the enqueue/dequeue time of an
item. The item model already has enqueue/dequeue times, which we
use when reporting status; however, a reporter runs right before
the item is dequeued, so we need one more time value that
corresponds to the start of the reporting phase -- thus report_time
in this patch.
Test coverage for this is added in change
Ida0cdef682ca2ce117617eacfb67f371426a3131.
Change-Id: I093626e098b7ce2deea2b0c25265cb48d38712ad
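The ordering of the three timestamps can be sketched as follows; the class and method names are illustrative, not the item model's actual API.

```python
import time

class ItemTimes:
    """Sketch of the timestamps on a queue item: enqueue/dequeue
    already exist on the model; report_time is stamped when the
    reporting phase begins, since the item is not yet dequeued when
    reporters run."""
    def __init__(self):
        self.enqueue_time = time.time()
        self.report_time = None
        self.dequeue_time = None

    def start_reporting(self):
        self.report_time = time.time()

    def dequeue(self):
        self.dequeue_time = time.time()
```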
This facilitates integration with the gerrit checks API (and may
prove useful for other similar APIs). It will allow us to report
that a change has no jobs in a particular pipeline. A Zuul
pipeline will correspond to a Gerrit check, which means we can
update the status for that check from "SCHEDULED" to "NOT_RELEVANT"
if we determine that no jobs should run for the change. This
closes out the status of the check in Gerrit when a project is
configured to participate in a check/pipeline but no jobs are
actually configured.
Test coverage for this will be added in change
Ida0cdef682ca2ce117617eacfb67f371426a3131.
Change-Id: Ide2a332b294d7efe23601d80eeb92b5af1d4c21b
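The state transition for a Gerrit check backed by a Zuul pipeline can be sketched as below; the state names follow the checks plugin's vocabulary, but the decision logic here is illustrative.

```python
def gerrit_check_state(enqueued, job_count):
    """Map an item's situation in a pipeline to a checks-plugin state:
    not yet enqueued -> NOT_STARTED; enqueued with no jobs configured
    -> NOT_RELEVANT (closing out the check); otherwise SCHEDULED."""
    if not enqueued:
        return "NOT_STARTED"
    if job_count == 0:
        return "NOT_RELEVANT"
    return "SCHEDULED"
```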
The current location of the reference pipelines is within the
quick-start repo, which causes all sorts of configuration errors
to be logged (but since Zuul is so tolerant of them, the job
doesn't actually fail).
Move them out of the demo project directories and into their own.
Change-Id: I9cb1ddd803d938fc154a32308cae99dbab9392e2
This facilitates integration with the gerrit checks API (and may
prove useful for other similar APIs). It will allow us to report
that a change has been enqueued in a particular pipeline. A Zuul
pipeline will correspond to a Gerrit check, which means we can
update the status for that check from "NOT_STARTED" to "SCHEDULED"
when it enters the pipeline. This is important for our check
polling loop, and it will cause that check to stop appearing in
the list of pending checks.
Test coverage for this is added in change
Ida0cdef682ca2ce117617eacfb67f371426a3131.
Change-Id: I9ec329b446fa51e0911d4d9ff67eea7ddd55ab5d
This change prevents command execution on the executor host via a
malicious rsh option in a synchronize task's rsync_opts.
Fixes bug #2006526
Change-Id: I3cd17ca91410394f164d8ea7cd91a1ea5890f998
This expands the discussion of executor-only jobs with some additional
notes.
Additionally a unit test is added to explicitly test executor-only
(i.e. blank nodeset) jobs.
Change-Id: I8fd2f932290e49da5a3605737e8940425cd092f4
Unit test playbooks are generally written as
- hosts: all
tasks:
...
However, many of the unit tests don't specify any nodes for their
jobs. With no nodes specified, Ansible gets an empty host list and
thus the only host available is the special "implicit localhost".
Since "all" doesn't match the implicit localhost, under normal
circumstances Ansible does not match anything and doesn't run any of
the playbooks.
To get around this, the extant code in
RecordingAnsibleJob:getHostList() (tests/base.py) overrides the host
list and explicitly adds a host named "localhost". This is put into
the Ansible inventory and now the "all" matcher has something to match
against and the playbooks run. This work-around was initially added
with I5e23f330476f064acf3cb87f746c5d3193cce274.
The situation became a bit more confused with
Iacf670d992bb051560a0c46c313beaa6721489c4 where the "localhost" fake
node is only added if other nodes are *not* specified. Several tests
rely on this now as they specify various forms of nodes explicitly and
don't want this fake node added.
This change removes the automatic addition of "localhost" in
unit tests altogether. I believe this is the correct direction to
move in, because it's a fairly confusing anti-feature if, for example,
you write a unit test that *is* explicitly executor-only (i.e. a blank
node list). Such a test fails because the unit-test framework adds a
host for you; something that does not happen in production. It's also
a bit confusing if you're reading the config files and thinking
"hosts: all shouldn't match anything here" without digging into the
test framework.
There are two ways this could be fixed. The playbooks that are part
of jobs that have no nodes defined could be re-written to "hosts:
localhost" so that they match the "implicit localhost" and always run.
This does not really seem to be their intent, however. The other
option, which is taken here, is to always add nodes to the job. I
believe this is a better approach, as it more closely matches what you
would see in actual jobs.
Change-Id: I6b52b7e4bc591c09034461b534ca5225945f76cf
Log a proper error when the admin API token is invalid or
expired. Also handle API call errors.
Change-Id: Ide7a6ff0266daaac53f40d42a545d76ca76a6ff2
You might want to take action only if a job fails or succeeds.
Change-Id: I45c1d3d22d3c49cd100552f6d4606b0c560fab10
Fix a typo in the hashi_vault lookup file names.
Change-Id: Ie3e1d46dce222d2c0ced50cf3437dfb3ce787e51
Since the subprocess is started before the reference timestamp is
created, it can happen that the check for the expiration field fails.
Traceback (most recent call last):
File "/tmp/zuul/tests/unit/test_client.py", line 151, in test_token_generation
(token['exp'], now))
File "/tmp/zuul/.tox/py36/lib/python3.6/site-packages/unittest2/case.py", line 702, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : (1568016146.9831738, 1568015546.1448617)
Change-Id: I9ef56c12ed1be2a6ec168c4a9363125919be44e9
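The race and its fix come down to timestamp ordering; a reduced, illustrative sketch (not the actual test code):

```python
def exp_within_bounds(exp, reference, validity, slack=30):
    """Race-tolerant check of a token's expiration: the token-issuing
    subprocess and the test take their timestamps at slightly
    different moments, so 'exp' is compared against the reference with
    an explicit slack rather than an exact window."""
    return reference < exp <= reference + validity + slack
```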
There's a separate callback method for handler tasks. This is what
we do in the text stream callback plugin; we should do the same in
the json output.
Change-Id: I48273ec182032f198b8886be95b0cba5c6f4843e
An executor accepts up to twice as many starting builds as defined
by the load_multiplier option. This doubling is limited to systems
with up to 4 CPUs/vCPUs; beyond that, the executor accepts only up
to as many starting builds as defined by load_multiplier (i.e. half
as many).
Change-Id: I8cf395c41191647605ec47d1f5681dc46675546d
An executor accepts up to twice as many starting builds as defined
by the load_multiplier option. On systems with a high CPU/vCPU
count an executor may accept too many starting builds. This can be
overridden using a new max_starting_builds option.
Change-Id: Ic7c121e795e4e3cecec25b2b06dd1a26aa798439
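The resulting limit can be sketched as below; the exact formula is an assumption, not Zuul's actual code.

```python
def starting_build_limit(load_multiplier, cpus, max_starting_builds=None):
    """By default an executor accepts up to twice load_multiplier *
    CPU count starting builds, which can be too many on machines with
    a high CPU count; an explicit max_starting_builds option caps it."""
    default = int(load_multiplier * cpus * 2)
    if max_starting_builds is not None:
        return min(default, max_starting_builds)
    return default
```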