| Commit message | Author | Age | Files | Lines |
Remove data under the `zuul` key from the job returned vars.
These returned values are meant to be used only by Zuul and
shouldn't be included in documents, as they may include large
amounts of data such as file comments.
Change-Id: Ie6de7e3373b21b7c234ffedd5db7d3ca5a0645b6
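A minimal sketch of the filtering this describes (the function name and data shape are illustrative, not Zuul's actual API):

```python
def filter_returned_vars(returned_vars):
    """Drop the reserved 'zuul' key from job-returned data before it
    is included in reported documents; Zuul consumes that key (which
    may hold large data such as file comments) internally."""
    return {k: v for k, v in returned_vars.items() if k != "zuul"}

returned = {"zuul": {"file_comments": {"a.py": []}}, "artifact_url": "https://logs.example/build"}
print(filter_returned_vars(returned))  # {'artifact_url': 'https://logs.example/build'}
```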
In case of a bundle, Zuul should load extra-config-paths not only from
items ahead, but from all items in that bundle. Otherwise it might
throw an "invalid config" error, because the required Zuul items in
extra-config-paths are not found.
Change-Id: I5c14bcb14b7f5c627fd9bd49f887dcd55803c6a1
We distribute tenant events to pipelines based on whether the event
matches the pipeline (ie, patchset-created for a check pipeline) or
if the event is related to a change already in the pipeline. The
latter condition means that pipelines notice quickly when dependencies
are changed and they can take appropriate action (such as ejecting
changes which no longer have correct dependencies).
For git and commit dependencies, an update to a cycle to add a new
change requires an update to at least one existing change (for example,
adding a new change to a cycle usually requires at least two Depends-On
footers: the new change, as well as one of the changes already in the
cycle). This means that updates to add new changes to cycles quickly
come to the attention of pipelines.
However, it is possible to add a new change to a topic dependency cycle
without updating any existing changes. Merely uploading a new change
in the same topic adds it to the cycle. Since that new change does
not appear in any existing pipelines, pipelines won't notice the update
until their next natural processing cycle, at which point they will
refresh dependencies of any changes they contain, and they will notice
the new dependency and eject the cycle.
To align the behavior of topic dependencies with git and commit
dependencies, this change causes the scheduler to refresh the
dependencies of the change it is handling during tenant trigger event
processing, so that it can then compare that change's dependencies
to changes already in pipelines to determine if this event is
potentially relevant.
This moves some work from pipeline processing (which is highly parallel)
to tenant processing (which is only somewhat parallel). This could
slow tenant event processing somewhat. However, the work is
persisted in the change cache, and so it will not need to be repeated
during pipeline processing.
This is necessary because the tenant trigger event processor operates
only with the pipeline change list data; it does not perform a full
pipeline refresh, so it does not have access to the current queue items
and their changes in order to compare the event change's topic with
currently enqueued topics.
There are some alternative ways we could implement this if the additional
cost is an issue:
1) At the beginning of tenant trigger event processing, using the change
list, restore each of the queue's change items from the change cache
and compare topics. For large queues, this could end up generating
quite a bit of ZK traffic.
2) Add the change topic to the change reference data structure so that
it is stored in the change list. This is an abuse of this structure
which otherwise exists only to store the minimum amount of information
about a change in order to uniquely identify it.
3) Implement a PipelineTopicList similar to a PipelineChangeList for storing
pipeline topics and accessing them without a full refresh.
Another alternative would be to accept the delayed event handling of topic
dependencies and elect not to "fix" this behavior.
Change-Id: Ia9d691fa45d4a71a1bc78cc7a4bdec206cc025c8
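The relevance test described above can be sketched roughly as follows (hypothetical names; the real scheduler works against the pipeline change list stored in ZooKeeper):

```python
def is_potentially_relevant(event_change, pipeline_change_keys, resolve_deps):
    """After refreshing the event change's dependencies (resolve_deps
    may query the code review system), the event is relevant if the
    change itself or any of its dependencies is already in a pipeline."""
    dependencies = resolve_deps(event_change)
    return any(c in pipeline_change_keys for c in [event_change, *dependencies])

# A new change whose topic dependency is already enqueued is relevant.
print(is_potentially_relevant("I123", {"I999"}, lambda change: ["I999"]))  # True
```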
In I9628e2770dda120b269612e28bb6217036942b8e we switched zuul.change from
a plain string tagged with !unsafe to base64 encoded and no !unsafe tag.
The idea was to make the inventory file parseable by external tools while
avoiding accidental interpolation of the commit message by Ansible.
That doesn't work in all cases -- it's not hard to construct a scenario
where after base64 decoding the message any further processing by Ansible
causes it to undergo interpolation. Moreover, since then we have made
many changes to how we deal with variables; notably, the inventory.yaml
is no longer actually used by Zuul's Ansible -- it is now there only
for human and downstream processing. We call it the "debug inventory".
The actual inventory is much more complex and in some cases has lots of
!unsafe tags in it.
Given all that, it now seems like the most straightforward way to deal
with this is to tag the message variable as !unsafe when passing it to
Zuul's Ansible, but render it as plain text in the inventory.yaml.
To address backwards compatibility, this is done in a new variable called
zuul.change_message. Since that's a more descriptive variable anyway,
we will just keep that one in the future and drop the current base64-
encoded zuul.message variable.
Change-Id: Iea86de15e722bc271c1bf0540db2c9efb032500c
Per the documentation, include-branches should be able to override
exclude-branches, but this was not the case in the way the code was
written. Rework the code to correct this, and also add a test to ensure
it works as documented.
Change-Id: I2e23b1533c67ccf84b4d6a36f5a003adc7b3e45a
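The documented precedence can be sketched like this (the regex semantics and helper name are illustrative, not Zuul's exact matching code):

```python
import re

def branch_included(branch, include_branches, exclude_branches):
    """include-branches wins over exclude-branches: a branch matching
    both lists is included. With neither list set, everything is in;
    a non-empty include list acts as an allow-list."""
    if any(re.fullmatch(p, branch) for p in include_branches):
        return True
    if any(re.fullmatch(p, branch) for p in exclude_branches):
        return False
    return not include_branches

# include overrides exclude where both match:
print(branch_included("stable/foo", [r"stable/.*"], [r"stable/.*"]))  # True
```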
Independent pipelines ignore requirements for non-live changes
because they are not actually executed. However, a user might
configure an independent pipeline that requires code review and
expect a positive code-review pipeline requirement to be enforced.
To ignore it risks executing unreviewed code via dependencies.
To correct this, we now enforce pipeline requirements in independent
pipelines in the same way as dependent ones.
This also adds a new "allow-other-connections" pipeline configuration
option which permits users to specify exhaustive pipeline requirements.
Change-Id: I6c006f9e63a888f83494e575455395bd534b955f
Story: 2010515
When adding a unit test for change I4fd6c0d4cf2839010ddf7105a7db12da06ef1074
I noticed that we were still querying the dependent change 4 times instead of
the expected 2. This was due to an indentation error which caused all 3
query retry attempts to execute.
This change corrects that and adds a unit test that covers this as well as
the previous optimization.
Change-Id: I798d8d713b8303abcebc32d5f9ccad84bd4a28b0
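The shape of the bug is roughly this (an illustrative sketch, not the actual Zuul code): a query that should stop after the first success was indented so that every retry attempt executed regardless. The fixed shape returns as soon as one attempt succeeds:

```python
def query_with_retries(query, attempts=3):
    """Return the first successful result; stop retrying on success.
    (The indentation bug effectively ran all attempts every time.)"""
    for _ in range(attempts):
        result = query()
        if result is not None:
            return result
    return None

calls = {"count": 0}

def fake_query():
    calls["count"] += 1
    return {"change": 1234}

query_with_retries(fake_query)
print(calls["count"])  # 1 query, not 3
```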
If a build is to be deduplicated and has not started yet and has
a pending node request, we store a dictionary describing the target
deduplicated build in the node_requests dictionary on the buildset.
There were a few places where we directly accessed that dictionary
and assumed the results would be the node request id. Notably, this
could cause an error in pipeline processing (as well as potentially
some other edge cases such as reconfiguring).
Most of the time we can just ignore deduplicated node requests since
the "real" buildset will take care of them. This change enriches
the API to help with that. In other places, we add a check for the
type.
To test this, we enable relative_priority in the config file which
is used in the deduplication tests, and we also add an assertion
which runs at the end of every test that ensures there were no
pipeline exceptions during the test (almost all the existing dedup
tests fail this assertion before this change).
Change-Id: Ia0c3f000426011b59542d8e56b43767fccc89a22
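The added type check can be sketched as follows (field names and shapes are hypothetical):

```python
def iter_node_request_ids(node_requests):
    """Yield only real node request ids from a buildset's node_requests
    mapping, skipping dict entries that describe a build to be
    deduplicated (the 'real' buildset takes care of those requests)."""
    for job, value in node_requests.items():
        if isinstance(value, dict):
            continue  # placeholder describing a deduplicated build
        yield job, value

requests = {"unit-tests": "300-0001", "linters": {"deduplicated_build": "abc123"}}
print(dict(iter_node_request_ids(requests)))  # {'unit-tests': '300-0001'}
```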
This updates the branch cache (and associated connection mixin)
to include information about supported project merge modes. With
this, if a project on github has the "squash" merge mode disabled
and a Zuul user attempts to configure Zuul to use the "squash"
mode, then Zuul will report a configuration syntax error.
This change adds implementation support only to the github driver.
Other drivers may add support in the future.
For all other drivers, the branch cache mixin simply returns a value
indicating that all merge modes are supported, so there will be no
behavior change.
This is also the upgrade strategy: the branch cache uses a
defaultdict that reports all merge modes supported for any project
when it first loads the cache from ZK after an upgrade.
Change-Id: I3ed9a98dfc1ed63ac11025eb792c61c9a6414384
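The upgrade strategy described above amounts to a defaultdict whose default claims every merge mode is supported (mode names here are illustrative):

```python
from collections import defaultdict

ALL_MERGE_MODES = frozenset({"merge", "merge-resolve", "cherry-pick", "squash", "rebase"})

# After an upgrade the cache has no per-project data yet, so any
# project it is asked about reports all modes as supported; projects
# are narrowed down as real data is fetched from the driver.
supported_merge_modes = defaultdict(lambda: ALL_MERGE_MODES)
supported_merge_modes["org/project"] = frozenset({"merge", "rebase"})

print("squash" in supported_merge_modes["org/project"])        # False
print("squash" in supported_merge_modes["org/other-project"])  # True
```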
We've investigated an issue where a job was stuck on the executor
because it wasn't aborted properly. The job was cancelled by the
scheduler, but the cleanup playbook on the executor ran into a timeout.
This caused another abort via the WatchDog.
The problem is that the abort function doesn't do anything if the
cleanup playbook is running [1]. Most probably this covers the case
that we don't want to abort the cleanup playbook after a normal job
cancellation.
However, this doesn't differentiate if the abort was caused by the run
of the cleanup playbook itself, resulting in a build that's hanging
indefinitely on the executor.
To fix this, we now differentiate if the abort was caused by a stop()
call [2] or if it was caused by a timeout. In case of a timeout, we kill
the running process.
Add a test case to validate the changed behaviour. Without the fix, the
test case runs indefinitely because the cleanup playbook won't be
aborted even after the test times out (during the test cleanup).
[1]: https://opendev.org/zuul/zuul/src/commit/4d555ca675d204b1d668a63fab2942a70f159143/zuul/executor/server.py#L2688
[2]: https://opendev.org/zuul/zuul/src/commit/4d555ca675d204b1d668a63fab2942a70f159143/zuul/executor/server.py#L1064
Change-Id: I979f55b52da3b7a237ac826dfa8f3007e8679932
Most of a change's attributes are tenant-independent. This however is
different for topic dependencies, which should only be considered in
tenants where the dependencies-by-topic feature is enabled.
This is mainly a problem when a project is part of multiple tenants as
the dependencies-by-topic setting might be different for each tenant. To
fix this we will only return the topic dependencies for a change in
tenants where the feature has been activated.
Since the `needs_changes` property is now a method called
`getNeedsChanges()`, we also changed `needed_by_changes` to
`getNeededByChanges()` so they match.
Change-Id: I343306db0abbe2fbf98ddb3f81b6d509eaf4a2bf
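A sketch of the tenant-aware behavior of getNeedsChanges() (data shapes and names are illustrative, not Zuul's classes):

```python
def get_needs_changes(change, tenant):
    """Return the change's dependencies; topic dependencies are
    included only in tenants where dependencies-by-topic is enabled."""
    needs = list(change["commit_needs"])
    if tenant["uses_dependencies_by_topic"]:
        needs += change["topic_needs"]
    return needs

change = {"commit_needs": ["I111"], "topic_needs": ["I222"]}
print(get_needs_changes(change, {"uses_dependencies_by_topic": False}))  # ['I111']
print(get_needs_changes(change, {"uses_dependencies_by_topic": True}))   # ['I111', 'I222']
```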
We currently only detect some errors with job parents when freezing
the job graph. This is due to the vagaries of job variants, where
it is possible for a variant on one branch to be okay while one on
another branch is an error. If the erroneous job doesn't match,
then there is no harm.
However, in the typical case where there is only one variant or
multiple variants are identical, it is possible for us to detect
during config loading a situation where we know the job graph
generation will later fail. This change adds that analysis and
raises errors early.
This can save users quite a bit of time, and since variants are
typically added one at a time, may even prevent users from getting
into ambiguous situations which could only be detected when freezing
the job graph.
Change-Id: Ie8b9ee7758c94788ee7bc05947ddd97d9fa8e075
If the file being commented on is the Gerrit special file starting with
"/" (e.g. /COMMIT_MSG), no line mapping transformation should be done,
otherwise strange errors like:
Job: unable to map line for file comments:
stderr: 'fatal: '/COMMIT_MSG' is outside repository at '...'
will show up after the job has run.
Change-Id: Id89041dc7d8bf3f6c956d85b38355053ff0fd707
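The guard described here boils down to a check like this (helper name is illustrative):

```python
def needs_line_mapping(filename):
    """Gerrit pseudo-files such as /COMMIT_MSG start with '/' and do
    not exist in the repository, so the line-mapping transformation
    must be skipped for them."""
    return not filename.startswith("/")

print(needs_line_mapping("/COMMIT_MSG"))    # False
print(needs_line_mapping("zuul/model.py"))  # True
```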
This adds the ability to specify that the Zuul executor should
acquire a semaphore before running an individual playbook. This
is useful for long running jobs which need exclusive access to
a resource for only a small amount of time.
Change-Id: I90f5e0f570ef6c4b0986b0143318a78ddc27bbde
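Conceptually the executor wraps just the playbook run, not the whole job, in the semaphore. A local sketch with threading (Zuul's real semaphores are coordinated through ZooKeeper):

```python
import threading

resource_semaphore = threading.Semaphore(1)

def run_playbook_with_semaphore(run_playbook):
    """Hold the semaphore only for the duration of one playbook so the
    contended resource is blocked for the minimum amount of time."""
    with resource_semaphore:  # acquired just before, released just after
        return run_playbook()

print(run_playbook_with_semaphore(lambda: "SUCCESS"))  # SUCCESS
```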
GitHub supports a "rebase" merge mode where it will rebase the PR
onto the target branch and fast-forward the target branch to the
result of the rebase.
Add support for this process to the merger so that it can prepare
an effective simulated repo, and map the merge-mode to the merge
operation in the reporter so that gating behavior matches.
This change also makes a few tweaks to the merger to improve
consistency (including renaming a variable ref->base), and corrects
some typos in the similar squash merge test methods.
Change-Id: I9db1d163bafda38204360648bb6781800d2a09b4
The default merge mode is 'merge-resolve' because it has been observed
that it more closely matches the behavior of jgit in Gerrit (or, at
least it did the last time we looked into this). The other drivers
are unlikely to use jgit and more likely to use the default git
merge strategy.
This change allows the default to differ based on the driver, and
changes the default for all non-gerrit drivers to 'merge'.
The implementation anticipates that we may want to add more granularity
in the future, so the API accepts a project as an argument, and in
the future, drivers could provide a per-project default (which they
may obtain from the remote code review system). That is not implemented
yet.
This adds some extra data to the /projects endpoint in the REST api.
It is currently not easy (and perhaps not possible) to determine what a
project's merge mode is through the api. This change adds a metadata
field to the output which will show the resulting value computed from
all of the project stanzas. The project stanzas themselves may have
null values for the merge modes now, so the web app now protects against
that.
Change-Id: I9ddb79988ca08aba4662cd82124bd91e49fd053c
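The per-driver dispatch, anticipating future per-project granularity, might look like this (hypothetical signature, not Zuul's actual API):

```python
def get_default_merge_mode(driver_name, project=None):
    """Gerrit keeps 'merge-resolve' (closer to jgit's behavior); all
    other drivers default to plain 'merge'. The project argument is
    accepted so drivers could later return per-project defaults
    obtained from the remote code review system."""
    return "merge-resolve" if driver_name == "gerrit" else "merge"

print(get_default_merge_mode("gerrit"))  # merge-resolve
print(get_default_merge_mode("github"))  # merge
```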
This allows configuration of read-only access rules, and corresponding
documentation. It wraps every API method in an auth check (other than
info endpoints).
It exposes information in the info endpoints that the web UI can use
to decide whether it should send authentication information for all
requests. A later change will update the web UI to use that.
Change-Id: I3985c3d0b9f831fd004b2bb010ab621c00486e05
In order to allow for authenticated read-only access to zuul-web,
we need to be able to control the authz of the API root. Currently,
we can only specify auth info for tenants. But if we want to control
access to the tenant list itself, we need to be able to specify auth
rules.
To that end, add a new "api-root" tenant configuration object which,
like tenants themselves, will allow attaching authz rules to it.
We don't have any admin-level API endpoints at the root, so this change
does not add "admin-rules" to the api-root object, but if we do develop
those in the future, it could be added.
A later change will add "access-rules" to the api-root in order to
allow configuration of authenticated read-only access.
This change does add an "authentication-realm" to the api-root object
since that already exists for tenants and it will make sense to have
that in the future as well. Currently the /info endpoint uses the
system default authentication realm, but this will override it if
set.
In general, the approach here is that the "api-root" object should
mirror the "tenant" object for all attributes that make sense.
Change-Id: I4efc6fbd64f266e7a10e101db3350837adce371f
This is a preparatory step to add access-control for read-level
access to the API and web UI. Because we will likely end up with
tenant config that looks like:
- tenant:
    name: example
    admin-rules: ['my-admin-rule']
    access-rules: ['my-read-only-rule']
It does not make sense for 'my-read-only-rule' to be defined as:
- admin-rule:
    name: read-only-rule
In other words, the current nomenclature conflates (new word:
nomenconflature) the idea of an abstract authorization rule and
what it authorizes. The new name makes it more clear that an
authorization-rule can be used to authorize more than just admin
access.
Change-Id: I44da8060a804bc789720bd207c34d802a52b6975
Change-Id: Icd8c33dfe1c8ffd21a717a1a94f1783c244a6b82
We try to avoid refreshing JobData from ZK when it is not necessary
(because these objects rarely change). However, a bug in the avoidance
was recently discovered and in fact we have been refreshing them more
than necessary.
This adds a test to catch that case, along with fixing an identical
bug (the same process is used in FrozenJobs and Builds).
The fallout from these bugs may not be exceptionally large, however,
since we generally avoid refreshing FrozenJobs once a build has
started, and avoid refreshing Builds once they have completed,
meaning these bugs may have had little opportunity to show themselves.
Change-Id: I41c3451cf2b59ec18a20f49c6daf716de7f6542e
This adds the "draft" PR status as a pipeline requirement to the
GitHub driver. It is already used implicitly in dependent pipelines,
but this will allow it to be added explicitly to other pipelines
(for example, check).
This also fixes some minor copy/pasta errors in debug messages related
to github pipeline requirements.
Change-Id: I05f8f61aee251af24c1479274904b429baedb29d
Ansible 5 is no longer supported and 6 is available and working.
Deprecate Ansible 5.
Change-Id: I8c152f7c0818bccd07f50e85bef9a82ddb863a68
Versions 2.8 and 2.9 are no longer supported by the Ansible project.
Change-Id: I888ddcbecadd56ced83a27ae5a6e70377dc3bf8c
This adds support for configuring tracing in Zuul along with
basic documentation of the configuration.
It also adds test infrastructure that runs a gRPC-based collector
so that we can test tracing end-to-end, and exercises a simple
test span.
Change-Id: I4744dc2416460a2981f2c90eb3e48ac93ec94964
This feature instructs Zuul to attempt a second (or subsequent) node
request with a different node configuration (ie, possibly different
labels) if the first one fails.
It is intended to address the case where a cloud provider is unable
to supply specialized high-performance nodes, and the user would like
the job to proceed anyway on lower-performance nodes.
Change-Id: Idede4244eaa3b21a34c20099214fda6ecdc992df
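The retry-with-alternatives flow might be sketched as follows (names are illustrative):

```python
def request_nodes(alternatives, submit):
    """Try each nodeset alternative in order, falling back to the next
    (e.g. lower-performance labels) when a request fails; give up only
    after every alternative has been tried."""
    for labels in alternatives:
        nodes = submit(labels)
        if nodes is not None:
            return nodes
    return None

# First request (specialized nodes) fails, the fallback succeeds.
def fake_submit(labels):
    return None if "gpu" in labels else {"nodes": labels}

print(request_nodes([["gpu", "large"], ["small"]], fake_submit))  # {'nodes': ['small']}
```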
This was deprecated quite some time ago and we should remove it as
part of the next major release.
Also remove a very old Zuul v1 layout.yaml from the test fixtures.
Change-Id: I40030840b71e95f813f028ff31bc3e9b3eac4d6a
This was previously deprecated and should be removed shortly before
we release Zuul v7.
Change-Id: Idbdfca227d2f7ede5583f031492868f634e1a990
This adds an option to include result data from a job in the MQTT
reporter. It is off by default since it may be quite large for
some jobs.
Change-Id: I802adee834b60256abd054eda2db834f8db82650
Change-Id: I0d450d9385b9aaab22d2d87fb47798bf56525f50