| Commit message | Author | Age | Files | Lines |
This adds the "draft" PR status as a pipeline requirement to the
GitHub driver. It is already used implicitly in dependent pipelines,
but this will allow it to be added explicitly to other pipelines
(for example, check).
This also fixes some minor copy/paste errors in debug messages related
to github pipeline requirements.
Change-Id: I05f8f61aee251af24c1479274904b429baedb29d
We'll create a span when zuul-web receives a Github webhook event, which
is then linked to the span for the event pre-processing step.
The pre-processing span context will be added to the trigger events and,
together with Icd240712b86cc22e55fb67f6787a0974d5308043, completes tracing
of the whole chain from receiving a Github event until a change is enqueued.
Change-Id: I1734a3a9e44f0ae01f5ed3453f8218945c90db58
Some github users may choose not to publish their real name and/or email
address in their profile. Instead of creating an ugly message like [0]
Reviewed-by: None <None>
try to construct something from the data we have available.
[0] https://github.com/osism/ansible-defaults/commit/ed2b2ffa8a32a5b5fdc060a44d17f12655d6cbcf
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: Ife4c325c278860c47b70b37462ebd7b1d6b97755
For heavy users of "dispatch jobs" (where many jobs are declared as
dependencies of a single job which then mutates the child_jobs
return value to indicate which few of those should be run), there
may be large numbers of "SKIPPED" jobs in the status page and in the
final job report, which reduces the usability of both of those.
Yet it is important for users to be able to see skipped jobs since
they may represent an error (they may be inadvertently skipped).
To address this, we remove "SKIPPED" jobs from the status page by
default, but add a button at the bottom of the change box which
can toggle their display.
We remove "SKIPPED" jobs from the report, but add a note at the
bottom which says "Skipped X jobs". Users can follow the buildset
link to see which ones were skipped.
The buildset page will continue to show all the jobs for the buildset.
Change-Id: Ie297168cdf5b39d1d6f219e9b2efc44c01e87f35
When exclude-unprotected-branches is in effect and a project doesn't have
any protected branches (e.g. a wrong branch protection rule in Github or
none at all) the branch cache will contain an empty list.
Since the empty list was so far used to indicate a fetch error, those
projects showed up with a config error in zuul-web ("Will not fetch
project branches as read-only is set").
For the branch cache we need to distinguish three different cases:
1. branch cache miss (no fetch yet)
2. branch cache hit (including empty list of branches)
3. fetch error
Instead of storing an empty list in case of a fetch error we will store
a sentinel value of `None` in those cases. However, with this change we
can't use `None` as an indicator for a cache miss anymore, so we are now
raising a `LookupError` instead.
Change-Id: I5b51805f7d0a5d916a46fe89db7b32f14063d7aa
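The three cache states above can be sketched as follows. This is a minimal illustration with hypothetical names, not Zuul's actual branch cache class: a fetch error stores a `None` sentinel, an empty project stores `[]`, and a project that was never fetched raises `LookupError`.

```python
class BranchCache:
    """Illustrative three-state branch cache (hypothetical names)."""

    _FETCH_ERROR = None  # sentinel stored when a branch fetch failed

    def __init__(self):
        # project -> list of branches, or None on fetch error
        self._cache = {}

    def setBranches(self, project, branches):
        self._cache[project] = branches

    def setFetchError(self, project):
        self._cache[project] = self._FETCH_ERROR

    def getBranches(self, project):
        try:
            branches = self._cache[project]
        except KeyError:
            # Case 1: cache miss -- no fetch has happened yet.
            raise LookupError(f"No branches fetched yet for {project}")
        if branches is self._FETCH_ERROR:
            # Case 3: a previous fetch failed; not the same as "no branches".
            raise LookupError(f"Branch fetch failed for {project}")
        # Case 2: cache hit, which may legitimately be an empty list.
        return branches
```

The key point is that an empty list is now a valid cache hit rather than an error indicator.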
Previously support for Gerrit's submitWholeTopic feature was added
so that when it is enabled, changes that are submitted together are
treated as circular dependencies in Zuul. However, this feature did
not work in a gating pipeline because when that setting is enabled,
Gerrit requires all changes to be mergeable at once so that it can
attempt to atomically merge all of them. That means that every
change would need a Verified+2 vote before any change can be
submitted. Zuul leaves the vote and submits each change one at a
time.
(Note, this does not affect the emulated submitWholeTopic feature
in Zuul, since in that case, Gerrit will merge each change alone.)
To correct this, a two-phase option is added to reporters. In phase1,
reporters will report all information about the change but not submit.
In phase2, they will only submit. In the cases where we are about
to submit a successful bundle, we enable the two-phase option and
report the entire bundle without submitting first, then proceed to
submit each change in the bundle in sequence as normal. In Gerrit,
if submitWholeTopic is enabled, this will cause all changes to be
submitted as soon as the first one is, but that's okay because we
handle the case where we try to submit a change and it is already
submitted.
The fake Gerrit used in the Zuul unit tests is updated to match
the real Gerrit in these cases. If submitWholeTopic is enabled,
it will return a 409 to submit requests if the whole topic is not
able to be submitted.
One unit test of failed bundle reporting is adjusted since we will
now report the buildset result to all changes before potentially
reporting a second time if the bundle later fails to merge.
While this does mean that some changes will have extra report
messages, it actually makes the behavior consistent (before, some
changes would have 2 reports and some would have only 1; now all
changes will have 2 reports: the expected result and then a second
report of the unexpected merge failure).
All reporters are updated to handle the two-phase reporting. Since
all reporters have different API methods to leave comments and merge
changes, this won't actually cause any extra API calls even for
reporters which don't need two-phase reporting.
Non-merging reporters (MQTT, SMTP, etc) simply ignore phase2.
Change-Id: Ibf377ab5b7141fe60ecfd5cbbb296bb4f9c24265
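The two-phase flow described above can be sketched roughly as below. The reporter interface here is a stand-in, not Zuul's real API: phase 1 leaves votes and comments on every change in the bundle without submitting; phase 2 only submits, tolerating changes that Gerrit already merged via submitWholeTopic.

```python
class FakeReporter:
    """Stand-in reporter that records its actions (illustrative only)."""

    def __init__(self):
        self.log = []

    def report(self, change, phase1=True, phase2=True):
        if phase1:
            # Report all information (votes, comments) without submitting.
            self.log.append(("vote", change))
        if phase2 and ("submit", change) not in self.log:
            # Submitting may be a no-op if submitWholeTopic already merged
            # this change when an earlier one in the topic was submitted.
            self.log.append(("submit", change))


def report_bundle(reporter, bundle):
    # Phase 1: vote on every change first, so that Gerrit sees the whole
    # bundle as submittable at once...
    for change in bundle:
        reporter.report(change, phase1=True, phase2=False)
    # Phase 2: ...then submit each change in sequence as normal.
    for change in bundle:
        reporter.report(change, phase1=False, phase2=True)
```

Reporters with no merge capability (MQTT, SMTP, etc.) would simply do nothing in phase 2.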
Change-Id: Iec87857aa58f71875d780da3698047dae01120d7
The fix includes 2 parts:
1. For Github, we use the base_sha instead of the target branch to
be passed as the "tosha" parameter to get the precise changed files
2. In the method getFilesChanges(), use the diff() result to filter out
those files that were changed and reverted between commits.
The reason we do not directly use diff() is that for drivers
other than github, the "base_sha" is not available yet, and
using diff() may include unexpected files when the target branch
has diverged from the feature branch.
This solution works for 99.9% of the cases; it may still get an
incorrect list of changed files in the following corner case:
1. In a non-github connection, whose base_sha is not implemented, and
2. Files changed and reverted between commits in the change, and
3. The same file has also diverged in the target branch.
The above corner case can be fixed by making base_sha available in
other drivers.
Change-Id: Ifae7018a8078c16f2caf759ae675648d8b33c538
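The filtering idea can be illustrated as a set operation (this is a sketch, not the actual getFilesChanges() implementation): union the files touched by each commit, then keep only those that still differ in the base-to-tip diff, so a file changed and then reverted within the change is dropped.

```python
def get_files_changes(per_commit_files, base_to_tip_diff):
    """Illustrative sketch of the filtering described above.

    per_commit_files: iterable of sets of paths touched by each commit.
    base_to_tip_diff: set of paths that differ between base_sha and the
    tip (the diff() result).
    """
    touched = set()
    for files in per_commit_files:
        touched |= set(files)
    # A file changed and later reverted appears in `touched` but not in
    # the base..tip diff, so the intersection filters it out.
    return sorted(touched & set(base_to_tip_diff))
```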
Github now emits a webhook event when a branch protection rule changes.
Add support for that in the Github driver. We will update the branch
protection status in the branch cache immediately, and emit a trigger
event which will cause tenant reconfigurations if the protection status
of a branch has changed in any tenants.
Note: a branch protection rule applies to any number of branches, so
we may generate multiple Zuul trigger events from a single Github
webhook event in this case.
Change-Id: I0a7af786f9c69cf67eaaf4c75f437f8cf64a054a
When a file is moved/renamed Github will only return an entry for the
file with the new name. However, the previous path also needs to be
included in the list of files. This is especially important when a Zuul
config file is renamed but also when `job.[irrelevant-]files` is used.
Change-Id: Ieba250bed57c8a9c2e7811476c202d530f2b30f1
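A sketch of the fix: when the API reports a renamed file, include its previous path in the changed-files list as well. The `filename`/`previous_filename`/`status` field names follow GitHub's pull-request files listing; treat the exact shape as an assumption of this example.

```python
def changed_files(api_files):
    """Collect changed paths, including the old path of renamed files."""
    files = set()
    for f in api_files:
        files.add(f["filename"])
        # For a rename, GitHub only lists the new name; the old path must
        # also count as changed (e.g. a renamed Zuul config file).
        if f.get("status") == "renamed" and f.get("previous_filename"):
            files.add(f["previous_filename"])
    return sorted(files)
```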
User feedback: showing CANCELED jobs with an "x" emoji does not help to
identify the real failed jobs. Instead, show CANCELED jobs with a warning
emoji.
Change-Id: I05e107e6f30ed5d69f74ad0ee85b1a8cc3bb61ce
The drivers are expected to populate the change cache before
passing trigger events to the scheduler so that all the difficult
work is done outside the main loop. Further, the cache cleanup
is designed to accommodate this so that events in-flight don't have
their change cache entries removed early.
However, at several points since moving the change cache into ZK,
programming errors have caused us to encounter enqueued changes
without entries in the cache. This usually causes Zuul to abort
pipeline processing and is unrecoverable.
We should continue to address all incidences of those since they
represent Zuul not working as designed. However, it would be nice
if Zuul was able to recover from this.
To that end, this change allows missing changes to be added to the
change cache.
That is primarily accomplished by adjusting the Source.getChange
method to accept a ChangeKey instead of an Event. Events are only
available when the triggering event happens, whereas a ChangeKey
is available when loading the pipeline state.
A ChangeKey represents the minimal distinguishing characteristics
of a change, and so can be used in all cases. Some drivers obtain
extra information from events, so we still pass it into the getChange
method if available, but it's entirely optional -- we should still
get a workable Change object whether or not it's supplied.
Ref (and derived: Branch, Tag) objects currently only store their
newrev attribute in the ChangeKey, however we need to be able to
create Ref objects with an oldrev as well. Since the old and new
revs of a Ref are not inherent to the ref but rather the generating
event, we can't get that from the source system. So we need to
extend the ChangeKey object to include that. Adding an extra
attribute is troublesome since the ChangeKey is not a ZKObject and
therefore doesn't have access to the model api version. However,
it's not too much of a stretch to say that the "revision" field
(which like all ChangeKey fields is driver-dependent) should include
the old and new revs. Therefore, in these cases the field is
upgraded in a backwards-compatible way to include oldrev and newrev
in the standard "old..new" git encoding format. We also need to
support "None" since that is a valid value in Zuul.
So that we can continue to identify cache errors, any time we encounter
a change key that is not in the cache and we also don't have an
event object, we log an error.
Almost all of this commit is the refactor to accept change keys
instead of events in getChange. The functional change to populate
the cache if it's missing basically consists of just removing
getChangeByKey and replacing it with getChange. A test which deletes
the cache midway through is added.
Change-Id: I4252bea6430cd434dbfaacd583db584cc796dfaa
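The backwards-compatible revision encoding described above can be sketched as a pair of helpers (illustrative names, not Zuul's actual code): old and new revs are packed into the ChangeKey's "revision" field as `old..new`, where either side may be the string `"None"`, and an old-format key carrying only a single rev is still accepted.

```python
def encode_revision(oldrev, newrev):
    # Either rev may be None, which is serialized as the string "None".
    return f"{oldrev}..{newrev}"


def decode_revision(revision):
    if ".." in revision:
        old, new = revision.split("..", 1)
        return (None if old == "None" else old,
                None if new == "None" else new)
    # Backwards compatibility: old-format keys carried only the new rev.
    return (None, revision)
```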
Github won't merge a PR if the main branch has a recent change in the
.github/ directory and the PR is not based on top of it.
Change-Id: I595faf0750e277570965767e22c340740cf5a8d5
This is a framework for making upgrades to the ZooKeeper data model
in a manner that can support a rolling Zuul system upgrade.
Change-Id: Iff09c95878420e19234908c2a937e9444832a6ec
|
|/ / /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The cache maintenance has an inherent data race, as it only considers
changes that are currently in any pipeline as relevant. To
prevent garbage collection of changes for in-flight events, we only
clean up items older than 2h.
Usually the driver will refresh a change when receiving a connection
event. However, this wasn't the case for trigger events created by the
timer driver.
This can lead to a race condition where a cached branch is cleaned up
while a timer triggered item is enqueued.
For consistency all non-change objects (Branch, Tag, Ref) will now be
refreshed in case the refresh flag of `getChange()` is set to True.
2022-01-24 11:31:50,815 ERROR zuul.Scheduler: Exception processing pipeline periodic-xy in tenant foobar
Traceback (most recent call last):
File "/opt/zuul/lib/python3.8/site-packages/zuul/scheduler.py", line 1786, in process_pipelines
pipeline.state.refresh(ctx)
File "/opt/zuul/lib/python3.8/site-packages/zuul/zk/zkobject.py", line 153, in refresh
self._load(context)
File "/opt/zuul/lib/python3.8/site-packages/zuul/zk/zkobject.py", line 205, in _load
self._set(**self.deserialize(data, context))
File "/opt/zuul/lib/python3.8/site-packages/zuul/model.py", line 690, in deserialize
queue = ChangeQueue.fromZK(context, queue_path,
File "/opt/zuul/lib/python3.8/site-packages/zuul/zk/zkobject.py", line 148, in fromZK
obj._load(context, path=path)
File "/opt/zuul/lib/python3.8/site-packages/zuul/zk/zkobject.py", line 205, in _load
self._set(**self.deserialize(data, context))
File "/opt/zuul/lib/python3.8/site-packages/zuul/model.py", line 944, in deserialize
item = QueueItem.fromZK(context, item_path,
File "/opt/zuul/lib/python3.8/site-packages/zuul/zk/zkobject.py", line 148, in fromZK
obj._load(context, path=path)
File "/opt/zuul/lib/python3.8/site-packages/zuul/zk/zkobject.py", line 205, in _load
self._set(**self.deserialize(data, context))
File "/opt/zuul/lib/python3.8/site-packages/zuul/model.py", line 4093, in deserialize
change = self.pipeline.manager.resolveChangeReferences(
File "/opt/zuul/lib/python3.8/site-packages/zuul/manager/__init__.py", line 199, in resolveChangeReferences
return self.resolveChangeKeys(
File "/opt/zuul/lib/python3.8/site-packages/zuul/manager/__init__.py", line 211, in resolveChangeKeys
self._change_cache[change.cache_key] = change
AttributeError: 'NoneType' object has no attribute 'cache_key'
2022-01-24 11:31:50,811 ERROR zuul.Pipeline.foobar.periodic-xy: Unable to resolve change from key <ChangeKey github org/project Branch master None hash=da4e62f669e51a7fbef5db1a9b480b0bd42693a7febffdb47a4eb794faa300a9>
Change-Id: I62f5b816780e244e1426ab8a8871f09379293f3e
Currently, by default, we don't reprime the installation map if we don't
find the installation id. This breaks with multiple schedulers, since
updating the installation map is done via the github events which are
only processed by one scheduler. Enabling repriming by default is a
quickfix for this problem.
Change-Id: I10c619d77cdcbe530813cd64b5545b27931a7888
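The quickfix amounts to a lookup-with-fallback, sketched below with hypothetical names: when an installation id is missing from the local map, refetch the whole map from the API before giving up, since the webhook event that would have updated it may have been processed by another scheduler.

```python
class InstallationMap:
    """Illustrative installation-id map with reprime-on-miss."""

    def __init__(self, fetch_all):
        # fetch_all: callable returning {project: installation_id},
        # standing in for the GitHub API call that primes the map.
        self._fetch_all = fetch_all
        self._map = {}

    def get(self, project, reprime=True):
        if project not in self._map and reprime:
            # Reprime: another scheduler may have seen the event that
            # added this installation.
            self._map = self._fetch_all()
        return self._map.get(project)
```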
This adds a new component state: INITIALIZING and the scheduler and
web components use it when they are creating their initial config.
It is safe for the scheduler to start processing tenant and pipeline
events as soon as it starts because it only processes those for
the tenants that it has already loaded.
However it is not safe for drivers to move events from their
incoming queue into the scheduler since that requires the full
tenant list. The 4 drivers to which this applies are updated
to wait on config priming.
Zuul-web is already structured to wait until config priming
so does not need a corresponding change.
Change-Id: I36dd4927e583328434e66553aa3ff0cd7469b488
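The "wait on config priming" gate can be sketched with a `threading.Event` (the class and method names here are illustrative, not Zuul's): the driver's event forwarder blocks until the scheduler signals that the initial config, and therefore the full tenant list, is loaded.

```python
import threading


class Driver:
    """Illustrative driver that defers event forwarding until primed."""

    def __init__(self):
        self.primed = threading.Event()
        self.forwarded = []

    def primeConfig(self):
        # Called once the scheduler has created its initial config.
        self.primed.set()

    def forwardEvent(self, event):
        # Moving events from the incoming queue into the scheduler needs
        # the full tenant list, so block until config priming is done.
        self.primed.wait()
        self.forwarded.append(event)
```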
This is an early preparation step for removing the RPC calls between
zuul-web and the scheduler.
In order to do so we must initialize the ConfigLoader in zuul-web which
requires all connections to be available. Therefore, this change ensures
that we can load all connections in zuul-web without providing a
scheduler instance.
To avoid unnecessary traffic from a zuul-web instance the onLoad()
method initializes the change cache only if a scheduler instance is
available on the connection.
Change-Id: I3c1d2995e81e17763ae3454076ab2f5ce87ab1fc
In order to save a list of ltimes for each connection we need a source
interface to get the current ltime of a project branch cache.
Change-Id: If01db0698024beeed813d2c9910651c757377865
Change-Id: I373296d2f3b3a4392c98e1226a5e150c48daa2e0
This will allow us to sync the branch cache to other participants
via ZK.
Change-Id: I75b2436008e7bc44e086abe680d8b98cf73102f8
Not all attributes of a GitHub PullRequest were properly
(de-)serialized which makes some test fail when running with
multiple schedulers.
Change-Id: Ic2c97992f8b2b4eaee89ca81bc202433d6715109
When there is a long list of build results, it is difficult to quickly
see which jobs have failed.
This change adds a check mark or cross emoji in front of each job
to solve this problem.
Change-Id: I38177824767d475b0c4881ef0ab24ca1164d3187
The change cache depends on having singleton objects for entries.
If a scheduler ever ends up with 2 objects for the same change, the
cache will refuse to update the cache with new data for the object
which is not in the cache. However, there is a simple series of
events which could lead to this:
1) Event from source populates cache with a change.
2) Change is enqueued into pipeline.
3) Event from source triggers a data refresh of same change.
4) Data refresh fails.
5) Exception handler for data refresh deletes change from cache.
Imagine that the pipeline processor is now attempting to refresh the
change to determine whether it has merged. At this point, updates
to the cache will fail with this error:
2021-09-28 14:25:23,057 ERROR zuul.Scheduler: Exception in pipeline processing:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/zuul/scheduler.py", line 1615, in _process_pipeline
while not self._stopped and pipeline.manager.processQueue():
File "/usr/local/lib/python3.8/site-packages/zuul/manager/__init__.py", line 1418, in processQueue
item_changed, nnfi = self._processOneItem(
File "/usr/local/lib/python3.8/site-packages/zuul/manager/__init__.py", line 1356, in _processOneItem
self.reportItem(item)
File "/usr/local/lib/python3.8/site-packages/zuul/manager/__init__.py", line 1612, in reportItem
merged = source.isMerged(item.change, item.change.branch)
File "/usr/local/lib/python3.8/site-packages/zuul/driver/gerrit/gerritsource.py", line 47, in isMerged
return self.connection.isMerged(change, head)
File "/usr/local/lib/python3.8/site-packages/zuul/driver/gerrit/gerritconnection.py", line 1013, in isMerged
self._change_cache.updateChangeWithRetry(key, change, _update_change)
File "/usr/local/lib/python3.8/site-packages/zuul/zk/change_cache.py", line 330, in updateChangeWithRetry
self.set(key, change, version)
File "/usr/local/lib/python3.8/site-packages/zuul/zk/change_cache.py", line 302, in set
if self._change_cache[key._hash] is not change:
KeyError: 'ef075359268c2f3ee7d52ccbcb6ac51a3a5922c709e634fdb2efcf97c57095b2'
The process may continue:
6) Event from source triggers a data refresh of same change.
7) Refresh succeeds and the cache is populated with a new change object.
Then the pipeline will fail with this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/zuul/scheduler.py", line 1615, in _process_pipeline
while not self._stopped and pipeline.manager.processQueue():
File "/usr/local/lib/python3.8/site-packages/zuul/manager/__init__.py", line 1418, in processQueue
item_changed, nnfi = self._processOneItem(
File "/usr/local/lib/python3.8/site-packages/zuul/manager/__init__.py", line 1356, in _processOneItem
self.reportItem(item)
File "/usr/local/lib/python3.8/site-packages/zuul/manager/__init__.py", line 1612, in reportItem
merged = source.isMerged(item.change, item.change.branch)
File "/usr/local/lib/python3.8/site-packages/zuul/driver/gerrit/gerritsource.py", line 47, in isMerged
return self.connection.isMerged(change, head)
File "/usr/local/lib/python3.8/site-packages/zuul/driver/gerrit/gerritconnection.py", line 1013, in isMerged
self._change_cache.updateChangeWithRetry(key, change, _update_change)
File "/usr/local/lib/python3.8/site-packages/zuul/zk/change_cache.py", line 330, in updateChangeWithRetry
self.set(key, change, version)
File "/usr/local/lib/python3.8/site-packages/zuul/zk/change_cache.py", line 303, in set
raise RuntimeError(
RuntimeError: Conflicting change objects (existing <Change 0x7f1405c188e0 starlingx/nfv 810014,2> vs. new <Change 0x7f148446c370 starlingx/nfv 810014,2> for key '{"connection_name": "gerrit", "project_name": null, "change_type": "GerritChange", "stable_id": "810014", "revision": "2"}'
To avoid this, we should never remove a change from the cache unless it is
completely unused (that is, we should only remove changes from the cache via
the prune method). Even if it means that the change is out of date, it is
still the best information that we have, and a future event may succeed and
eventually update the change.
This removes the exception handling which deleted the change from all drivers.
Change-Id: Idbecdf717b517cce5c25975a40d9f42d57a26c9e
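The resulting policy can be sketched as follows (a simplified stand-in, not the real ZooKeeper-backed cache): a failed refresh leaves the existing, possibly stale, entry in place, and a successful refresh updates the existing object in place so pipelines keep holding the same singleton.

```python
class ChangeCache:
    """Illustrative cache that never drops an entry on refresh failure."""

    def __init__(self):
        self._cache = {}

    def refresh(self, key, fetch):
        try:
            data = fetch(key)
        except Exception:
            # Keep the stale entry: it is still the best information we
            # have, and a future event may refresh it successfully.
            return self._cache.get(key)
        # Update the existing object in place to preserve its identity
        # for anything already referencing it.
        change = self._cache.setdefault(key, {})
        change.update(data)
        return change
```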
This adds a ChangeKey class which is essentially a structured universal
identifier for a change-like object (Ref, Branch, Change, PR, whatever).
We can use this in ZK objects to reference changes, and by doing so, we
can in many cases avoid actually referencing the change objects
themselves.
This also updates the actual keys in ZK to be sha256sums of the structured
key (for brevity and simplicity of encoding).
Change-Id: I6cd62973d48ad3515f6aa8a8172b9e9c19fcda55
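Hashing the structured key down to a fixed-length node name can be sketched like this; the field names mirror the key format quoted in the error message earlier in this log, but the exact set of fields is driver-dependent.

```python
import hashlib
import json


def change_key_hash(connection_name, project_name, change_type,
                    stable_id, revision):
    """Hash a structured change key to a short, encoding-safe node name."""
    structured = json.dumps({
        "connection_name": connection_name,
        "project_name": project_name,
        "change_type": change_type,
        "stable_id": stable_id,
        "revision": revision,
    }, sort_keys=True)  # deterministic serialization before hashing
    return hashlib.sha256(structured.encode("utf8")).hexdigest()
```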
Admins can "suspend" an app [1]; empirically, suspended apps see
"suspended_at" and "suspended_by" populated, and any further calls
return a 400 response.
Skip adding projects if the app has been suspended.
[1] https://docs.github.com/en/developers/apps/managing-github-apps/suspending-a-github-app-installation
Change-Id: I3f508a0bf7d992b5bd2713f01c8e5a1b2a17b917
We only need to call the refreshDeps method if the Depends-On
list has changed. That can only happen with a new patchset
(gerrit) or the PR body has changed (github et al). Add a method
to determine if the PR body has changed so we can reduce the
times where we need to call this method.
Change-Id: Iaa50a274c29347397bc4e10e2c3cefc25e442879
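The check reduces to comparing the fields that can carry Depends-On footers. A minimal sketch, assuming changes are represented as dicts with hypothetical `patchset` and `body` fields:

```python
def needs_dependency_refresh(old_change, new_change):
    """Return True only if the Depends-On list could have changed."""
    # Gerrit: a new patchset implies a possibly changed commit message.
    if old_change.get("patchset") != new_change.get("patchset"):
        return True
    # Github et al: the PR body holds the Depends-On footers.
    return old_change.get("body") != new_change.get("body")
```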
The onChangeUpdated method is called every time a change is updated.
Its purpose is to iterate over every change in the cache and, if the
updated change is a commit dependency (Depends-On) of the other change,
to set a flag telling the queue processor that the change's dependencies
are out of date and need to be refreshed. The idea is that if this
happens, the queue processor should notice the dependencies no longer
match and dequeue it.
This became quite slow with the change cache in ZK.
Instead, we can do the following:
* Make sure that every event regarding a change is forwarded to every
pipeline with that change in it (including non-live changes).
* When processing pipeline trigger events, refresh the dependencies
of any changes that need to be updated.
There are a few cases that need to be covered in that last point
(but remember, all of this only applies to Depends-on footers which
are in the commit_needs variable):
* Changes to depends-on in gerrit necessarily mean a new patchset
of the change with the depends-on header, so in the B->A situation,
if B is updated, it will be re-enqueued.
* Note that normally the original B->A series is dequeued, unless
dequeue-on-new-patchset is false. In that case it will remain
and we will have B->A and B1->A, which is what the user asked for.
To handle this case, we only need to refresh the deps of the change
which is updated.
* In a C->B->A situation in gerrit, if B is updated then regardless
of whether dequeue-on-new-patchset is true or false, the series should
be dequeued because the dependency changed (and so the test setup
is incorrect).
To handle this case, we need to refresh the deps of any change whose
commit_needs point to the updated change.
* Changes to depends-on in github don't create new patchsets, so we
only need to refresh the deps of the change which is updated. Because
the changes it points to are still valid changes, they will have
their own deps refreshed in place.
Change-Id: Ifc40ff5583a14c7f015356b963e1ba354974e1c8
When unlabeling, if the label does not exist on an issue, zuul fails to
continue reporting with other actions like the check run result. This
is caused by not handling the NotFoundError thrown by github3 in this
case [1]. Handle and ignore this exception to fix this.
[1] Trace:
ERROR zuul.GithubReporter: Exception processing report item
Traceback (most recent call last):
File "/opt/zuul/lib/python3.8/site-packages/zuul/reporter/__init__.py", line 76, in doReport
ret = self.report(item)
File "/opt/zuul/lib/python3.8/site-packages/zuul/driver/github/githubreporter.py", line 90, in report
self.setLabels(item)
File "/opt/zuul/lib/python3.8/site-packages/zuul/driver/github/githubreporter.py", line 305, in setLabels
self.connection.unlabelPull(project, pr_number, label,
File "/opt/zuul/lib/python3.8/site-packages/zuul/driver/github/githubconnection.py", line 1974, in unlabelPull
pull_request.remove_label(label)
File "/opt/zuul/lib/python3.8/site-packages/github3/decorators.py", line 31, in auth_wrapper
return func(self, *args, **kwargs)
File "/opt/zuul/lib/python3.8/site-packages/github3/issues/issue.py", line 383, in remove_label
json = self._json(self._delete(url), 200, 404)
File "/opt/zuul/lib/python3.8/site-packages/github3/models.py", line 156, in _json
raise exceptions.error_for(response)
github3.exceptions.NotFoundError: 404 Label does not exist
Change-Id: I29e14d6c5851e3869e0fff14c46113c4d2fde07e
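The shape of the fix is a narrow handle-and-ignore around the unlabel call, so one missing label cannot abort the rest of the report. In the real code the exception is `github3.exceptions.NotFoundError`; this sketch uses a stand-in class to stay self-contained.

```python
class NotFoundError(Exception):
    """Stand-in for github3.exceptions.NotFoundError."""


def unlabel_pull(pull_request, label):
    try:
        pull_request.remove_label(label)
    except NotFoundError:
        # The label is already absent from the issue; nothing to do,
        # and remaining report actions (e.g. check run) must still run.
        pass
```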
Move common methods to AbstractChangeCache so that concrete
implementations only need to define the mapping from change type as
a string to the change class.
Change-Id: I78c1bdfad1c0986aa6aef387424ea35b6c2aa71d
Remove the duplicate methods in the connections that use a
Zookeeper-backed change cache by moving them to a common mixin-class.
Change-Id: I527522e79444dfb69b41c70e7698a281166b2128
We need to periodically remove change data nodes that are not referenced
by any cache entry. This needs to happen more often than the periodic
cache maintenance (every 5min vs. every hour).
For this we need to introduce a new method `cleanupCache()` in the
connection interface.
In each run of the change cache's cleanup method we will identify
candidates of dangling data nodes to be cleaned up on the next run. The
reason for this delayed cleanup is that we don't want to remove data
nodes where a driver is in the process of creating/updating a new cache
entry.
Change-Id: I9abf7ac6fa117fe4a093ae7a0db1df0da5ae3ff3
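The delayed, two-pass cleanup can be sketched as below (a simplified model, not the actual cleanupCache() implementation): a dangling data node is only deleted if it was already dangling on the previous run, so nodes belonging to an in-progress cache update are not removed prematurely.

```python
class CacheCleanup:
    """Illustrative two-pass cleanup of dangling data nodes."""

    def __init__(self):
        self._candidates = set()

    def run(self, data_nodes, referenced_nodes):
        """Return the set of nodes to delete this run."""
        dangling = set(data_nodes) - set(referenced_nodes)
        # Only delete nodes that were already dangling last run; a node
        # dangling for the first time may be mid-creation.
        to_delete = self._candidates & dangling
        # The rest become candidates for the next run.
        self._candidates = dangling - to_delete
        return to_delete
```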
With the persistent change cache in Zookeeper we need to periodically
remove changes that are outdated and no longer required by any active
item in the pipelines.
The cache maintenance will be performed together with the other general
cleanup every hour.
Change-Id: I62e75ab8c5b43f830e01b0e4c08b25ecdc5eed08
Change-Id: I13fe7fc7bceb41c2eab2c35b8a295c4d9dece3ec
In order to cache changes in Zookeeper we need to make change objects
JSON serializable. This means that we can no longer reference other
change objects directly. Instead we will use a cache key consisting of
the connection name and a connection specific cache key.
Those cache keys can be resolved by getting the source instance using
the connection name and then retrieving the change instance via the new
`getChangeByKey()` interface.
The pipeline manager provides a helper method for resolving a list of
cache keys. Cache keys that were resolved once are also cached by the
manager as long as the reference is needed by any change in the
pipeline. The cache will be cleaned up at the end of a run of the queue
processor.
Until we can run multiple schedulers the change cache in the pipeline
manager will avoid hitting Zookeeper every time we resolve a cache key.
Later on when we have the pipeline state in Zookeeper we probably want
to clear the change cache in the pipeline manager at the end of the
queue processor. This way we make sure the change is recent enough when
we start processing a pipeline.
Change-Id: I09845d65864edc0e54af5f24d3c7be8fe2f7a919
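The resolution path described above can be sketched as follows, with simplified stand-in interfaces: a serialized reference is a `(connection_name, cache_key)` pair, resolved through the matching source's `getChangeByKey()`, with a pipeline-local memo so repeated resolutions of the same key avoid a second ZooKeeper lookup.

```python
class PipelineManager:
    """Illustrative resolver for serialized change references."""

    def __init__(self, sources):
        self._sources = sources     # connection_name -> source instance
        self._change_cache = {}     # local memo of resolved changes

    def resolveChangeReferences(self, refs):
        changes = []
        for connection_name, cache_key in refs:
            memo_key = (connection_name, cache_key)
            change = self._change_cache.get(memo_key)
            if change is None:
                # Look up the source by connection name, then resolve
                # the connection-specific cache key.
                source = self._sources[connection_name]
                change = source.getChangeByKey(cache_key)
                self._change_cache[memo_key] = change
            changes.append(change)
        return changes
```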
It seems that the update of the SHA-to-PR cache in `getPull()` was
accidentally removed in Ib14bb387dbd233ff252cf57be7f0517770ade037.
This will lead to stale data in the cache and with that to events not
being processed, e.g. in case of an already resolved issue with multiple
PRs with the same HEAD SHA.
Change-Id: I178b63029d7a60523af4acefb3f05bff3daeb2c5
With the way the global management and trigger event queues are
currently implemented we can't continue processing other events during a
reconfiguration and end up serializing reconfigurations.
The implementation is diverging from the spec here, as we did not
anticipate the need for tenant event queues back then.
We also add the following gauge metrics:
* zuul.tenant.<tenant>.trigger_events
* zuul.tenant.<tenant>.management_events
The `zuul.scheduler.eventqueues.trigger` gauge was removed together with
the global trigger event queue.
Change-Id: I1e0d872daf0fc4dd90e22b70c334e3e8152aa5b2
Github also supports these 2 settings in check runs. In the case of a
PR being dequeued it may be nicer in the UI to use 'skipped', as Github
will render this as grey.
https://docs.github.com/en/rest/reference/checks#update-a-check-run
Change-Id: I9313a4f96655054fd2ba1b4f899c1959283f159e
Signed-off-by: Paul Belanger <pabelanger@redhat.com>