| Commit message | Author | Age | Files | Lines |
This updates the Gerrit driver to match the pattern in the GitHub
driver: instead of specifying individual trigger requirements such
as "require-approvals", a complete ref filter (a la "requirements")
can be embedded in the trigger filter.
The "require-approvals" and "reject-approvals" attributes are
deprecated in favor of the new approach.
Additionally, all require filters in Gerrit are now available as
reject filters.
Finally, the Gerrit filters are updated to return FalseWithReason
so that log messages are more useful, and the GitHub filters are
updated to improve the language, avoid apostrophes for ease of
grepping, and match the new Gerrit filters.
Change-Id: Ia9c749f1c8e318fe01e84e52831a9d0d2c10b203
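The FalseWithReason pattern mentioned above can be sketched as a small
Python class: an object that is falsy in boolean context but carries a
human-readable reason for log messages. Names here are illustrative
assumptions, not Zuul's actual implementation.

```python
class FalseWithReason:
    """Evaluates as False in boolean context but explains why."""

    def __init__(self, reason):
        self.reason = reason

    def __bool__(self):
        return False

    def __str__(self):
        return self.reason


def matches_required_approval(change_approvals, required):
    # Hypothetical filter check: instead of returning a bare False,
    # return an object that a log statement can render as a reason.
    if required not in change_approvals:
        return FalseWithReason(
            "change does not have required approval %s" % required)
    return True
```

A caller can still write `if not result:` while `log.debug("%s", result)`
produces a useful message.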
This mimics a useful feature of the Gerrit driver and allows users
to configure pipelines that trigger on events but only if certain
conditions of the PR are met.
Unlike the Gerrit driver, this embeds the entire require/reject
filter within the trigger filter (the trigger filter has-a require
or reject filter). This makes the code simpler and is easier for
users to configure. If we like this approach, we should migrate the
gerrit driver as well, and perhaps the other drivers.
The "require-status" attribute already existed, but was undocumented.
This documents it, adds backwards-compat handling for it, and
deprecates it.
Some documentation typos are also corrected.
Change-Id: I4b6dd8c970691b1e74ffd5a96c2be4b8075f1a87
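The "trigger filter has-a require filter" composition described above can
be sketched as follows; class and attribute names are assumptions for
illustration, not Zuul's actual API.

```python
class RequireFilter:
    """Matches a change against required conditions (e.g. a status)."""

    def __init__(self, required_status=None):
        self.required_status = required_status

    def matches(self, change):
        if self.required_status is None:
            return True
        return self.required_status in change.get("statuses", [])


class TriggerFilter:
    """Matches an event; optionally embeds a require filter (has-a)."""

    def __init__(self, action, require=None):
        self.action = action
        self.require = require  # embedded RequireFilter, may be None

    def matches(self, event, change):
        if event.get("action") != self.action:
            return False
        # The event only triggers if the change also satisfies the
        # embedded require filter.
        if self.require is not None and not self.require.matches(change):
            return False
        return True
```

Embedding the require filter keeps all the matching logic for a trigger
in one place, which is the simplification the message describes.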
We distribute tenant events to pipelines based on whether the event
matches the pipeline (ie, patchset-created for a check pipeline) or
if the event is related to a change already in the pipeline. The
latter condition means that pipelines notice quickly when dependencies
are changed and they can take appropriate action (such as ejecting
changes which no longer have correct dependencies).
For git and commit dependencies, an update to a cycle to add a new
change requires an update to at least one existing change (for example,
adding a new change to a cycle usually requires at least two Depends-On
footers: the new change, as well as one of the changes already in the
cycle). This means that updates to add new changes to cycles quickly
come to the attention of pipelines.
However, it is possible to add a new change to a topic dependency cycle
without updating any existing changes. Merely uploading a new change
in the same topic adds it to the cycle. Since that new change does
not appear in any existing pipelines, pipelines won't notice the update
until their next natural processing cycle, at which point they will
refresh dependencies of any changes they contain, and they will notice
the new dependency and eject the cycle.
To align the behavior of topic dependencies with git and commit
dependencies, this change causes the scheduler to refresh the
dependencies of the change it is handling during tenant trigger event
processing, so that it can then compare that change's dependencies
to changes already in pipelines to determine if this event is
potentially relevant.
This moves some work from pipeline processing (which is highly parallel)
to tenant processing (which is only somewhat parallel). This could
slow tenant event processing somewhat. However, the work is
persisted in the change cache, and so it will not need to be repeated
during pipeline processing.
This is necessary because the tenant trigger event processor operates
only with the pipeline change list data; it does not perform a full
pipeline refresh, so it does not have access to the current queue items
and their changes in order to compare the event change's topic with
currently enqueued topics.
There are some alternative ways we could implement this if the additional
cost is an issue:
1) At the beginning of tenant trigger event processing, using the change
list, restore each of the queue's change items from the change cache
and compare topics. For large queues, this could end up generating
quite a bit of ZK traffic.
2) Add the change topic to the change reference data structure so that
it is stored in the change list. This is an abuse of this structure
which otherwise exists only to store the minimum amount of information
about a change in order to uniquely identify it.
3) Implement a PipelineTopicList similar to a PipelineChangeList for storing
pipeline topics and accessing them without a full refresh.
Another alternative would be to accept the delayed event handling of topic
dependencies and elect not to "fix" this behavior.
Change-Id: Ia9d691fa45d4a71a1bc78cc7a4bdec206cc025c8
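The relevance check described above reduces to comparing the refreshed
dependencies of the event's change against the keys in the pipeline
change list. A hedged sketch, with illustrative data structures:

```python
def event_is_relevant(event_change_deps, pipeline_change_keys):
    """Return True if any dependency of the event's change is already
    enqueued in the pipeline (so the pipeline should be notified).

    event_change_deps: set of change keys the event's change depends on,
        freshly refreshed during tenant trigger event processing.
    pipeline_change_keys: set of change keys from the pipeline change
        list (no full pipeline refresh required).
    """
    return any(dep in pipeline_change_keys for dep in event_change_deps)
```

Because the refreshed dependencies are persisted in the change cache,
the work is not repeated during pipeline processing, as the message notes.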
When adding a unit test for change I4fd6c0d4cf2839010ddf7105a7db12da06ef1074
I noticed that we were still querying the dependent change 4 times instead of
the expected 2. This was due to an indentation error which caused all 3
query retry attempts to execute.
This change corrects that and adds a unit test that covers this as well as
the previous optimization.
Change-Id: I798d8d713b8303abcebc32d5f9ccad84bd4a28b0
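The class of indentation bug described above can be sketched generically:
the early return after a successful query was misplaced, so every retry
attempt ran even when the first succeeded. Names and structure are
illustrative, not Zuul's actual code.

```python
def query_with_retries_buggy(query, attempts=3):
    calls = []
    for i in range(attempts):
        calls.append(i)
        try:
            result = query()
        except Exception:
            continue
        # BUG (illustrative): no "return result" here, so the loop
        # always runs all attempts even after a success.
    return result, calls


def query_with_retries_fixed(query, attempts=3):
    calls = []
    for i in range(attempts):
        calls.append(i)
        try:
            result = query()
        except Exception:
            continue
        return result, calls  # stop as soon as the query succeeds
    return None, calls  # all attempts failed
```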
This updates the branch cache (and associated connection mixin)
to include information about supported project merge modes. With
this, if a project on github has the "squash" merge mode disabled
and a Zuul user attempts to configure Zuul to use the "squash"
mode, then Zuul will report a configuration syntax error.
This change adds implementation support only to the github driver.
Other drivers may add support in the future.
For all other drivers, the branch cache mixin simply returns a value
indicating that all merge modes are supported, so there will be no
behavior change.
This is also the upgrade strategy: the branch cache uses a
defaultdict that reports all merge modes supported for any project
when it first loads the cache from ZK after an upgrade.
Change-Id: I3ed9a98dfc1ed63ac11025eb792c61c9a6414384
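The upgrade strategy above can be sketched with a defaultdict whose
default value means "all merge modes supported"; the mode names and cache
shape are assumptions for illustration.

```python
from collections import defaultdict

# Assumed set of merge mode names for illustration.
ALL_MERGE_MODES = frozenset(
    {"merge", "merge-resolve", "squash", "rebase", "cherry-pick"})

# Until a driver populates real data (currently only github does), every
# project reports all merge modes as supported, so behavior is unchanged
# after an upgrade.
merge_modes_cache = defaultdict(lambda: ALL_MERGE_MODES)

# A driver with real data can then restrict specific projects:
merge_modes_cache["org/project"] = frozenset({"merge", "rebase"})
```

Configuring a disabled mode (e.g. "squash" for org/project above) can
then be detected and reported as a configuration syntax error.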
We currently only detect some errors with job parents when freezing
the job graph. This is due to the vagaries of job variants, where
it is possible for a variant on one branch to be okay while one on
another branch is an error. If the erroneous job doesn't match,
then there is no harm.
However, in the typical case where there is only one variant or
multiple variants are identical, it is possible for us to detect
during config loading a situation where we know the job graph
generation will later fail. This change adds that analysis and
raises errors early.
This can save users quite a bit of time, and since variants are
typically added one at a time, may even prevent users from getting
into ambiguous situations which could only be detected when freezing
the job graph.
Change-Id: Ie8b9ee7758c94788ee7bc05947ddd97d9fa8e075
GitHub supports a "rebase" merge mode where it will rebase the PR
onto the target branch and fast-forward the target branch to the
result of the rebase.
Add support for this process to the merger so that it can prepare
an effective simulated repo, and map the merge-mode to the merge
operation in the reporter so that gating behavior matches.
This change also makes a few tweaks to the merger to improve
consistency (including renaming a variable ref->base), and corrects
some typos in the similar squash merge test methods.
Change-Id: I9db1d163bafda38204360648bb6781800d2a09b4
The default merge mode is 'merge-resolve' because it has been observed
that it more closely matches the behavior of jgit in Gerrit (or, at
least it did the last time we looked into this). The other drivers
are unlikely to use jgit and more likely to use the default git
merge strategy.
This change allows the default to differ based on the driver, and
changes the default for all non-gerrit drivers to 'merge'.
The implementation anticipates that we may want to add more granularity
in the future, so the API accepts a project as an argument, and in
the future, drivers could provide a per-project default (which they
may obtain from the remote code review system). That is not implemented
yet.
This adds some extra data to the /projects endpoint in the REST api.
It is currently not easy (and perhaps not possible) to determine what a
project's merge mode is through the api. This change adds a metadata
field to the output which will show the resulting value computed from
all of the project stanzas. The project stanzas themselves may have
null values for the merge modes now, so the web app now protects against
that.
Change-Id: I9ddb79988ca08aba4662cd82124bd91e49fd053c
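The per-driver default with the anticipated project hook can be sketched
as below; method and class names are illustrative assumptions, not Zuul's
actual API.

```python
class Driver:
    """Base driver: non-gerrit drivers default to plain 'merge'."""

    def getDefaultMergeMode(self, project=None):
        # The project argument allows a future per-project default
        # (possibly obtained from the remote code review system);
        # it is unused for now.
        return "merge"


class GerritDriver(Driver):
    """Gerrit keeps 'merge-resolve' to more closely match jgit."""

    def getDefaultMergeMode(self, project=None):
        return "merge-resolve"
```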
We try to avoid refreshing JobData from ZK when it is not necessary
(because these objects rarely change). However, a bug in the avoidance
was recently discovered and in fact we have been refreshing them more
than necessary.
This adds a test to catch that case, along with fixing an identical
bug (the same process is used in FrozenJobs and Builds).
The fallout from these bugs may not be exceptionally large, however,
since we generally avoid refreshing FrozenJobs once a build has
started, and avoid refreshing Builds once they have completed,
meaning these bugs may have had little opportunity to show themselves.
Change-Id: I41c3451cf2b59ec18a20f49c6daf716de7f6542e
This adds the "draft" PR status as a pipeline requirement to the
GitHub driver. It is already used implicitly in dependent pipelines,
but this will allow it to be added explicitly to other pipelines
(for example, check).
This also fixes some minor copy/pasta errors in debug messages related
to github pipeline requirements.
Change-Id: I05f8f61aee251af24c1479274904b429baedb29d
This feature instructs Zuul to attempt a second or more node request
with a different node configuration (ie, possibly different labels)
if the first one fails.
It is intended to address the case where a cloud provider is unable
to supply specialized high-performance nodes, and the user would like
the job to proceed anyway on lower-performance nodes.
Change-Id: Idede4244eaa3b21a34c20099214fda6ecdc992df
This was deprecated quite some time ago and we should remove it as
part of the next major release.
Also remove a very old Zuul v1 layout.yaml from the test fixtures.
Change-Id: I40030840b71e95f813f028ff31bc3e9b3eac4d6a
This was previously deprecated and should be removed shortly before
we release Zuul v7.
Change-Id: Idbdfca227d2f7ede5583f031492868f634e1a990
This adds a config-error pipeline reporter configuration option and
now also reports config errors and merge conflicts to the database
as buildset failures.
The driving use case is that if periodic pipelines encounter config
errors (such as being unable to freeze a job graph), they might send
email if configured to send email on merge conflicts, but otherwise
their results are not reported to the database.
To make this more visible, first we need Zuul pipelines to report
buildset ends to the database in more cases -- currently we typically
only report a buildset end if there are jobs (and so a buildset start),
or in some other special cases. This change adds config errors and
merge conflicts to the set of cases where we report a buildset end.
Because of some shortcuts previously taken, that would end up reporting
a merge conflict message to the database instead of the actual error
message. To resolve this, we add a new config-error reporter action
and adjust the config error reporter handling path to use it instead
of the merge-conflicts action.
Tests of this as well as the merge-conflicts code path are added.
Finally, a small debug aid is added to the GerritReporter so that we
can easily see in the logs which reporter action was used.
Change-Id: I805c26a88675bf15ae9d0d6c8999b178185e4f1f
Change-Id: I12e8a056a2e5cd1bb18c1f24ecd7db55405f0a8c
We call item.setResult after a build is complete so that the queue
item can do any internal processing necessary (for example, prepare
data structures for child jobs, or move the build to the retry_builds
list).
In the case of deduplicated builds, we should do that for every queue
item the build participates in since each item may have a different
job graph.
We were not correctly identifying other builds of deduplicated jobs
and so in the case of a dependency cycle we would call setResult on
jobs of the same name in that cycle regardless of whether they were
deduplicated.
This corrects the issue and adds a test to detect that case.
Change-Id: I4c47beb2709a77c21c11c97f1d1a8f743d4bf5eb
There is no good reason to do so (there are no resources consumed
by the job), and it's difficult to disable a behavior for the
noop job globally since it has no definition. Let's never have it
deduplicate so that we keep things simple for folks who want to
avoid deduplication.
Change-Id: Ib3841ce5ef020540edef1cfa479d90c65be97112
In the before times when we only had a single scheduler, it was
naturally the case that reconfiguration events were processed as they
were encountered and no trigger events which arrived after them would
be processed until the reconfiguration was complete. As we added more
event queues to support SOS, it became possible for trigger events
which arrived at the scheduler to be processed before a tenant
reconfiguration caused by a preceding event to be complete. This is
now even possible with a single scheduler.
As a concrete example, imagine a change merges which updates the jobs
which should run on a tag, and then a tag is created. A scheduler
will process both of those events in succession. The first will cause
it to submit a tenant reconfiguration event, and then forward the
trigger event to any matching pipelines. The second event will also
be forwarded to pipeline event queues. The pipeline events will then
be processed, and then only at that point will the scheduler return to
the start of the run loop and process the reconfiguration event.
To correct this, we can take one of two approaches: make the
reconfiguration more synchronous, or make it safer to be
asynchronous. To make reconfiguration more synchronous, we would need
to be able to upgrade a tenant read lock into a tenant write lock
without releasing it. The lock recipes we use from kazoo do not
support this. While it would be possible to extend them to do so, it
would lead us further from parity with the upstream kazoo recipes, so
this approach is not used.
Instead, we will make it safer for reconfiguration to be asynchronous
by annotating every trigger event we forward with the last
reconfiguration event that was seen before it. This means that every
trigger event now specifies the minimum reconfiguration time for that
event. If our local scheduler has not reached that time, we should
stop processing trigger events and wait for it to catch up. This
means that schedulers may continue to process events up to the point
of a reconfiguration, but will then stop. The already existing
short-circuit to abort processing once a scheduler is ready to
reconfigure a tenant (where we check the tenant write lock contenders
for a waiting reconfiguration) helps us get out of the way of pending
reconfigurations as well. In short, once a reconfiguration is ready
to start, we won't start processing tenant events anymore because of
the existing lock check. And up until that happens, we will process
as many events as possible until any further events require the
reconfiguration.
We will use the ltime of the tenant trigger event as our timestamp.
As we forward tenant trigger events to the pipeline trigger event
queues, we decide whether an event should cause a reconfiguration.
Whenever one does, we note the ltime of that event and store it as
metadata on the tenant trigger event queue so that we always know what
the most recent required minimum ltime is (ie, the ltime of the most
recently seen event that should cause a reconfiguration). Every event
that we forward to the pipeline trigger queue will be annotated to
specify that its minimum required reconfiguration ltime is that most
recently seen ltime. And each time we reconfigure a tenant, we store
the ltime of the event that prompted the reconfiguration in the layout
state. If we later process a pipeline trigger event with a minimum
required reconfigure ltime greater than the current one, we know we
need to stop and wait for a reconfiguration, so we abort early.
Because this system involves several event queues and objects each of
which may be serialized at any point during a rolling upgrade, every
involved object needs to have appropriate default value handling, and
a synchronized model api change is not helpful. The remainder of this
commit message is a description of what happens with each object when
handled by either an old or new scheduler component during a rolling
upgrade.
When forwarding a trigger event and submitting a tenant
reconfiguration event:
The tenant trigger event zuul_event_ltime is initialized
from zk, so will always have a value.
The pipeline management event trigger_event_ltime is initialized to the
tenant trigger event zuul_event_ltime, so a new scheduler will write
out the value. If an old scheduler creates the tenant reconfiguration
event, it will be missing the trigger_event_ltime.
The _reconfigureTenant method is called with a
last_reconfigure_event_ltime parameter, which is either the
trigger_event_ltime above in the case of a tenant reconfiguration
event forwarded by a new scheduler, or -1 in all other cases
(including other types of reconfiguration, or a tenant reconfiguration
event forwarded by an old scheduler). If it is -1, it will use the
current ltime so that if we process an event from an old scheduler
which is missing the event ltime, or we are bootstrapping a tenant or
otherwise reconfiguring in a context where we don't have a triggering
event ltime, we will use an ltime which is very new so that we don't
defer processing trigger events. We also ensure we never go backward,
so that if we process an event from an old scheduler (and thus use the
current ltime) then process an event from a new scheduler with an
older (than "now") ltime, we retain the newer ltime.
Each time a tenant reconfiguration event is submitted, the ltime of
that reconfiguration event is stored on the trigger event queue. This
is then used as the min_reconfigure_ltime attribute on the forwarded
trigger events. This is updated by new schedulers, and ignored by old
ones, so if an old scheduler processes a tenant trigger event queue it
won't update the min ltime. That will just mean that any events
processed by a new scheduler may continue to use an older ltime as
their minimum, which should not cause a problem. Any events forwarded
by an old scheduler will omit the min_reconfigure_ltime field; that
field will be initialized to -1 when loaded on a new scheduler.
When processing pipeline trigger events:
In process_pipeline_trigger_queue we compare two values: the
last_reconfigure_event_ltime on the layout state which is either set
to a value as above (by a new scheduler), or will be -1 if it was last
written by an old scheduler (including in the case it was overwritten
by an old scheduler; it will re-initialize to -1 in that case). The
event.min_reconfigure_ltime field will either be the most recent
reconfiguration ltime seen by a new scheduler forwarding trigger
events, or -1 otherwise. If the min_reconfigure_ltime of an event is
-1, we retain the old behavior of processing the event regardless.
Only if we have a min_reconfigure_ltime > -1 and it is greater than
the layout state last_reconfigure_event_ltime (which itself may be -1,
and thus less than the min_reconfigure_ltime) do we abort processing
the event.
(The test_config_update test for the Gerrit checks plugin is updated
to include an extra waitUntilSettled since a potential test race was
observed during development.)
Change-Id: Icb6a7858591ab867e7006c7c80bfffeb582b28ee
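The core comparison in process_pipeline_trigger_queue described above can
be reduced to a small predicate; -1 means "unknown / written by an old
scheduler", preserving the old behavior. Names are illustrative.

```python
def should_process_event(event_min_reconfigure_ltime,
                         layout_last_reconfigure_event_ltime):
    """Decide whether a pipeline trigger event may be processed now.

    event_min_reconfigure_ltime: the minimum reconfiguration ltime
        annotated on the event (-1 if forwarded by an old scheduler).
    layout_last_reconfigure_event_ltime: the ltime stored in the layout
        state (-1 if last written by an old scheduler).
    """
    if event_min_reconfigure_ltime == -1:
        # Event from an old scheduler: retain the old behavior and
        # process it regardless.
        return True
    # Abort (wait for reconfiguration) only when the event requires a
    # newer layout than the one we currently have.
    return (layout_last_reconfigure_event_ltime >=
            event_min_reconfigure_ltime)
```

This matches the rule stated above: only a min_reconfigure_ltime > -1
that exceeds the layout's last_reconfigure_event_ltime aborts processing.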
In a previous change, support for the gitlab merge was added, but the
parameters dict was not properly passed to the invocation method.
Fix this now and add a corresponding test.
Change-Id: I781c02848abc524ca98e03984539507b769d19fe
When a project configuration defined a queue, but did not directly
specify any pipeline configuration (e.g. only referenced templates), the
relative priority queues were not set up correctly.
This could happen in pipelines using the independent and supercedent
manager. Other pipelines using the shared change queue mixin handle this
correctly.
This edge case will be tested in
`test_scheduler.TestScheduler.test_nodepool_relative_priority_check` by
slightly modifying the config to use a template for one of the projects.
Change-Id: I1f682e6593ccdad3cfacf5817fc1a1cf7de8856b
This adds support for deduplicating jobs within dependency cycles.
By default, this will happen automatically if we can determine that the
results of two builds would be expected to be identical. This uses a
heuristic which should almost always be correct; the behavior can be
overridden otherwise.
Change-Id: I890407df822035d52ead3516942fd95e3633094b
Node request failures cause a queue item to fail (naturally). In a normal
queue without cycles, that just means that we would cancel jobs behind and
wait for the current item to finish the remaining jobs. But with cycles,
the items in the bundle detect that items ahead (which are part of the bundle)
are failing and so they cancel their own jobs more aggressively. If they do
this before all the jobs have started (ie, because we are waiting on an
unfulfilled node request), they can end up in a situation where they never
run builds, but yet they don't report because they are still expecting
those builds.
This likely points to a larger problem in that we should probably not be
canceling those jobs so aggressively. However, the more serious and immediate
problem is the race condition that can cause items not to report.
To correct this immediate problem, tell the scheduler to create fake build
objects with a result of "CANCELED" when the pipeline manager cancels builds
and there is no existing build already. This will at least mean that all
expected builds are present regardless of whether the node request has been
fulfilled.
A later change can be made to avoid canceling jobs in the first place without
needing to change this behavior.
Change-Id: I1e1150ef67c03452b9a98f9366434c53a5ad26fb
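The fix described above can be sketched as: when canceling, ensure every
expected job has a build object, creating a placeholder with result
"CANCELED" when none exists yet. Data shapes are illustrative, not
Zuul's actual model.

```python
def cancel_jobs(expected_jobs, builds):
    """Cancel an item's jobs, creating fake CANCELED builds for any
    expected job that has no build yet (e.g. an unfulfilled node
    request), so the item can report."""
    for job in expected_jobs:
        if job not in builds:
            builds[job] = {"job": job, "result": "CANCELED"}
    return builds
```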
Before this fix, a re-enqueue would call setResult for all builds;
skipped builds are considered non-"SUCCESS", so their soft
dependent children would be skipped, too.
The github and gitlab drivers were just updated to support a base_sha
attribute. Pagure doesn't need this because of the way it gets its
file lists from the diffstat endpoint. Update the Pagure driver with
similar tests to codify this behavior.
Change-Id: I87c6fe4ee0da29f3b920a343a0d759214f70b577
This adds the base_sha field to the gitlab driver. This matches
the behavior and tests for the github driver.
Change-Id: I2abe7326b920c9844333972daa5356fc0fed69f7
The fix includes two parts:
1. For GitHub, we use the base_sha instead of the target branch as
the "tosha" parameter to get the precise list of changed files.
2. In the method getFilesChanges(), use the diff() result to filter
out files that were changed and then reverted between commits.
The reason we do not directly use diff() is that for drivers
other than GitHub, the "base_sha" is not available yet;
using diff() may include unexpected files when the target branch
has diverged from the feature branch.
This solution works for 99.9% of cases; it may still produce an
incorrect list of changed files in the following corner case:
1. In a non-GitHub connection, whose base_sha is not implemented, and
2. Files changed and reverted between commits in the change, and
3. The same file has also diverged in the target branch.
The above corner case can be fixed by making base_sha available in
other drivers.
Change-Id: Ifae7018a8078c16f2caf759ae675648d8b33c538
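The filtering step described above amounts to intersecting the files
touched by the change's commits with the files that actually differ
between base and tip: a file touched in a commit but absent from the
diff was reverted, and a file in the diff but untouched by the commits
came from target-branch divergence. A hedged sketch with an assumed
function shape:

```python
def get_files_changes(per_commit_files, diff_files):
    """Compute a change's effective changed files.

    per_commit_files: files touched in any commit of the change.
    diff_files: files that differ between base_sha and the change tip.
    """
    return sorted(set(per_commit_files) & set(diff_files))
```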
The previous fix for erroneously reporting NO_JOB results to the
database did not handle the case of superceding queues. An item
enqueued in check and then superceded into gate would still report
NO_JOBS in gate and DEQUEUED in check, neither of which is desired.
It appears that we were previously relying on not having reported
a buildset start in order to avoid reporting buildset ends when
they are uninteresting. This changed in
bf2eb71f95257e0dfac259fb74e7a97fe4a53eb8 where we now intentionally
create missing buildset entries.
To return to the original behavior, we now only attempt to report
a buildset end under most circumstances if we would have reported
a start as well (by virtue of having a non-empty job graph). There
is one exception to this, where we report an item which otherwise
can't be enqueued in order to report an error. This is maintained.
Change-Id: Ic2322e293a44a2946c6b766cf87d256ed39319ea
A recent change to report additional buildset results to the db
inadvertently also reported NO_JOBS results. We should not report
those as they are frequent and uninteresting. Special case them
and add a test.
Change-Id: Ic7502bd53e2a51d1cc178834344e01cd2a5942db
This adds a pipeline queue setting to emulate the Gerrit behavior
of submitWholeTopic without needing to enable it site-wide in Gerrit.
Change-Id: Icb33a1e87d15229e6fb3aa1e4b1ad14a60623a29
If an attempt is made to enqueue a cycle of dependencies into a pipeline
and at least one of the participating projects has a per-branch change queue
and the changes in the cycle are on different branches, it can be confusing
for users why the changes were not enqueued. This is even more likely to
happen with implicit cyclic dependencies such as those from Gerrit's
submitted-together feature (but can happen with any driver).
To aid users in this situation, report this situation back to the code
review system.
Change-Id: I26174849deab627b2cf91d75029c5a2674cc37d6
A recent change to gitlab event handling regarding file lists for
large pull requests broke timer-triggered events for gitlab by
assuming an attribute on events which may not always be present.
Correct that and add a test case.
Change-Id: I0a2694613b499b9e79e3a133c4bb7b766c74e097
The recent optimization to avoid processing pipelines if no events are
waiting did not account for semaphores which may be held by jobs in
different pipelines. In that case, a job completing in one pipeline
needs to generate an event in another pipeline in order to prompt it
to begin processing.
We have no easy way of knowing which pipelines may have jobs which are
waiting for a semaphore, so this change broadcasts an event to every
pipeline in the tenant when a semaphore is released. Hopefully this
shouldn't generate that much more traffic (how much depends on how
frequently semaphores are released). If desired, we can further
optimize this by storing semaphore pipeline waiters in ZK in a later
change.
Change-Id: Ide381279b0442d11535c00746e4baf19f32f3cd7
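The broadcast described above can be sketched as follows: on semaphore
release, enqueue a wake-up event into every pipeline's event queue in
the tenant, since we cannot cheaply know which pipelines hold waiters.
Names and event shape are illustrative assumptions.

```python
def release_semaphore(semaphore_name, tenant_pipeline_queues):
    """Release a semaphore and broadcast a wake-up event to every
    pipeline event queue in the tenant.

    tenant_pipeline_queues: mapping of pipeline name -> event queue
        (a list here, standing in for the real event queue).
    """
    notified = []
    for pipeline, queue in tenant_pipeline_queues.items():
        queue.append({"type": "semaphore-release",
                      "semaphore": semaphore_name})
        notified.append(pipeline)
    return notified
```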