This further reduces the number of ZK object reads during pipeline
refreshes by tracking when builds and frozen jobs are updated.
During the phases of a build where we know no updates can occur,
we already avoid refreshing the Build and FrozenJob objects.
But, for example, while a build is running we have to continually
refresh it to see if it has completed.
We can avoid this by recording expected version information in ZK
and refreshing those objects only when we know our local copy is
out of date.
We can store the latest ZK object version of FrozenJob and Build
objects on the Buildset. On pipeline refresh, we currently
always refresh the buildset object, which means that when we
prepare to refresh the FrozenJob or Build objects underneath a
Buildset, we will have information about the latest versions of
those objects in ZK and can compare to the versions we currently
have in memory to decide if we need to refresh them. This should
reduce the number of reads in a pipeline refresh by about 50%.
But it will cause more writes, in that we will update the
Buildset object each time we modify one of its children. This
may affect pipeline processing times but the impact should be
very small.
We will use version numbers (rather than transaction ids) because they
are predictable, and updating the buildset first with the predicted
next version before updating the child avoids issues caused by a crash
between those two steps.
Since it is typical for many objects to be created at once, we
optimize the case where objects are initially created: we avoid
updating the BuildSet then, so that we don't repeatedly write the
buildset object.
Change-Id: I3824af6149bf27c41a8d895fc682236bd0d91f6b
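A minimal sketch of the version check described above, assuming
illustrative names (build_versions, local_zk_version, path); this is
not the actual Zuul API:

    # The buildset records the expected ZK version of each child,
    # updated whenever a child Build/FrozenJob is written.
    class BuildSet:
        def __init__(self):
            self.build_versions = {}  # child object path -> version

    def maybeRefreshBuild(context, buildset, build):
        # Skip the ZK read entirely when the in-memory copy is current.
        expected = buildset.build_versions.get(build.path, 0)
        if build.local_zk_version < expected:
            build.refresh(context)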
Per the documentation, include-branches should be able to override
exclude-branches, but this was not the case in the way the code was
written. Rework the code to correct this, and also add a test to ensure
it works as documented.
Change-Id: I2e23b1533c67ccf84b4d6a36f5a003adc7b3e45a
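A minimal sketch of the documented precedence, assuming compiled
regexes; this is illustrative logic, not Zuul's tenant-config code:

    import re

    def branchIncluded(branch, include_res, exclude_res):
        if any(r.fullmatch(branch) for r in include_res):
            return True  # include-branches overrides exclude-branches
        if any(r.fullmatch(branch) for r in exclude_res):
            return False
        # With an include list configured, unmatched branches are
        # excluded; otherwise everything not excluded is included.
        return not include_res

    branchIncluded("stable/x", [re.compile("stable/.*")],
                   [re.compile("stable/.*")])  # -> True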
Independent pipelines ignore requirements for non-live changes
because they are not actually executed. However, a user might
configure an independent pipeline that requires code review and
expect a positive code-review pipeline requirement to be enforced.
To ignore it risks executing unreviewed code via dependencies.
To correct this, we now enforce pipeline requirements in independent
pipelines in the same way as dependent ones.
This also adds a new "allow-other-connections" pipeline configuration
option which permits users to specify exhaustive pipeline requirements.
Change-Id: I6c006f9e63a888f83494e575455395bd534b955f
Story: 2010515
When adding a unit test for change I4fd6c0d4cf2839010ddf7105a7db12da06ef1074
I noticed that we were still querying the dependent change 4 times instead of
the expected 2. This was due to an indentation error which caused all 3
query retry attempts to execute.
This change corrects that and adds a unit test that covers this as well as
the previous optimization.
Change-Id: I798d8d713b8303abcebc32d5f9ccad84bd4a28b0
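An illustrative reconstruction of the bug class (not the actual Zuul
code): with the success check indented under the wrong block, every
retry attempt executed even after the first query succeeded.

    def queryWithRetries(query, attempts=3):
        result = None
        for _ in range(attempts):
            result = query()
            if result is not None:
                break  # correctly indented: stop once the query succeeds
        return result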
We were overlogging because we check for an openssl flag early and warn
if it isn't present. That warning creates a default root streamhandler
that emits to stderr, causing all our logging to be emitted there.
Fix this by creating a specific logger for this warning (avoids
polluting the root logger) and add an assertion that the root logger's
handler list is empty when we modify it for testing.
Note I'm not sure why this warning is happening now and wasn't before.
Maybe our openssl installations changed or cryptography modified the
flag? This is worth investigating in a followup.
Change-Id: I2a82cd6575e86facb80b28c81418ddfee8a32fa5
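A sketch of the fix, assuming an illustrative logger name: warning
through a named logger never implicitly configures the root logger,
whereas logging.warning() calls basicConfig() and attaches a stderr
StreamHandler to the root logger if it has no handlers yet.

    import logging

    logger = logging.getLogger("zuul.lib.encryption")

    def warnIfFlagMissing(flag_present):
        if not flag_present:
            logger.warning("OpenSSL flag not available; using fallback")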
I8e1b5b26f03cb75727d2b2e3c9310214a3eac447 introduced a regression that
prevented us from re-cloning a repo that no longer exists on the file
system (e.g. deleted by an operator) but where we still have the cached
`Repo` object.
The problem was that we only updated the remote URL of the repo object
after we wrote it to the Git config. Unfortunately, if the repo no
longer existed on the file system we would attempt to re-clone it with a
possibly outdated remote URL.
`test_set_remote_url` is a regression test for the issue described
above.
`test_set_remote_url_invalid` verifies that the original issue is
fixed: we updated the remote URL attribute of the repo object but
failed to update the Git config.
Change-Id: I311842ccc7af38664c28177450ea9e80e1371638
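A sketch of the ordering fix, with illustrative method names: persist
the URL to the on-disk git config first, and only update the cached
attribute once that write succeeds, so a re-clone after a crash or
deletion never uses a stale URL.

    def setRemoteUrl(repo, url):
        repo.setRemoteUrlConfig(url)  # writes .git/config; may raise
        repo.remote_url = url         # cache now matches the config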
When a build is paused or resumed, we now store this information on the
build together with the event time. Instead of additional attributes for
each timestamp, we add an "event" list attribute to the build which can
also be used for other events in the future.
The events are stored in the SQL database and added to the MQTT payload
so the information can be used by the zuul-web UI (e.g. in the "build
times" gantt chart) or provided to external services.
Change-Id: I789b4f69faf96e3b8fd090a2e389df3bb9efd602
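An illustrative shape for the new per-build event list; the field
names here are assumptions based on the description, not the exact
schema.

    import time

    def recordBuildEvent(build, event_type):
        build.events.append({
            "event_type": event_type,   # e.g. "paused" or "resumed"
            "event_time": time.time(),  # when the event occurred
        })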
If a build is to be deduplicated, has not started yet, and has
a pending node request, we store a dictionary describing the target
deduplicated build in the node_requests dictionary on the buildset.
There were a few places where we directly accessed that dictionary
and assumed the results would be the node request id. Notably, this
could cause an error in pipeline processing (as well as potentially
some other edge cases such as reconfiguring).
Most of the time we can just ignore deduplicated node requests since
the "real" buildset will take care of them. This change enriches
the API to help with that. In other places, we add a check for the
type.
To test this, we enable relative_priority in the config file which
is used in the deduplication tests, and we also add an assertion
which runs at the end of every test that ensures there were no
pipeline exceptions during the test (almost all the existing dedup
tests fail this assertion before this change).
Change-Id: Ia0c3f000426011b59542d8e56b43767fccc89a22
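A hedged sketch of the type check: a node_requests value is either a
plain request id or a dict describing the build this one is
deduplicated against (names illustrative).

    def iterRealNodeRequests(buildset):
        for job_name, value in buildset.node_requests.items():
            if isinstance(value, dict):
                continue  # deduplicated; the "real" buildset handles it
            yield job_name, value  # an actual node request id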
This updates the branch cache (and associated connection mixin)
to include information about supported project merge modes. With
this, if a project on github has the "squash" merge mode disabled
and a Zuul user attempts to configure Zuul to use the "squash"
mode, then Zuul will report a configuration syntax error.
This change adds implementation support only to the github driver.
Other drivers may add support in the future.
For all other drivers, the branch cache mixin simply returns a value
indicating that all merge modes are supported, so there will be no
behavior change.
This is also the upgrade strategy: the branch cache uses a
defaultdict that reports all merge modes supported for any project
when it first loads the cache from ZK after an upgrade.
Change-Id: I3ed9a98dfc1ed63ac11025eb792c61c9a6414384
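A sketch of the upgrade strategy; the mode names listed here are
illustrative rather than Zuul's exact internal constants.

    from collections import defaultdict

    ALL_MERGE_MODES = ["merge", "merge-resolve", "cherry-pick",
                       "squash-merge", "rebase"]

    # Until a driver populates real data, every project reports all
    # modes, so configurations keep loading unchanged after an upgrade.
    merge_modes = defaultdict(lambda: ALL_MERGE_MODES)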
We've investigated an issue where a job was stuck on the executor
because it wasn't aborted properly. The job was cancelled by the
scheduler, but the cleanup playbook on the executor ran into a timeout.
This caused another abort via the WatchDog.
The problem is that the abort function doesn't do anything if the
cleanup playbook is running [1]. Most probably this is meant to
ensure that we don't abort the cleanup playbook after a normal job
cancellation.
However, this doesn't differentiate if the abort was caused by the run
of the cleanup playbook itself, resulting in a build that's hanging
indefinitely on the executor.
To fix this, we now differentiate if the abort was caused by a stop()
call [2] or if it was caused by a timeout. In case of a timeout, we kill
the running process.
Add a test case to validate the changed behaviour. Without the fix, the
test case runs indefinitely because the cleanup playbook won't be
aborted even after the test times out (during the test cleanup).
[1]: https://opendev.org/zuul/zuul/src/commit/4d555ca675d204b1d668a63fab2942a70f159143/zuul/executor/server.py#L2688
[2]: https://opendev.org/zuul/zuul/src/commit/4d555ca675d204b1d668a63fab2942a70f159143/zuul/executor/server.py#L1064
Change-Id: I979f55b52da3b7a237ac826dfa8f3007e8679932
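A hedged sketch of the differentiation, with illustrative names: a
normal cancellation still spares the cleanup playbook, but a timeout
raised during the cleanup run itself kills the process.

    def abort(job, from_cleanup_timeout=False):
        if job.cleanup_running and not from_cleanup_timeout:
            return  # normal stop(): let the cleanup playbook finish
        job.kill()  # cleanup timed out: terminate the running process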
Most of a change's attributes are tenant-independent. This, however, is
different for topic dependencies, which should only be considered in
tenants where the dependencies-by-topic feature is enabled.
This is mainly a problem when a project is part of multiple tenants as
the dependencies-by-topic setting might be different for each tenant. To
fix this we will only return the topic dependencies for a change in
tenants where the feature has been activated.
Since the `needs_changes` property is now a method called
`getNeedsChanges()`, we also changed `needed_by_changes` to
`getNeededByChanges()` so they match.
Change-Id: I343306db0abbe2fbf98ddb3f81b6d509eaf4a2bf
When the git command crashes or is aborted due to a timeout we might end
up with a leaked index.lock file in the affected repository.
This has the effect that all subsequent git operations that try to
create the lock will fail. Since Zuul maintains a separate lock for
serializing operations on a repository, we can be sure that the lock
file was leaked in a previous operation and can be removed safely.
Unable to checkout 8a87ff7cc0d0c73ac14217b653f9773a7cfce3a7
Traceback (most recent call last):
  File "/opt/zuul/lib/python3.10/site-packages/zuul/merger/merger.py", line 1045, in _mergeChange
    repo.checkout(ref, zuul_event_id=zuul_event_id)
  File "/opt/zuul/lib/python3.10/site-packages/zuul/merger/merger.py", line 561, in checkout
    repo.head.reset(working_tree=True)
  File "/opt/zuul/lib/python3.10/site-packages/git/refs/head.py", line 82, in reset
    self.repo.git.reset(mode, commit, '--', paths, **kwargs)
  File "/opt/zuul/lib/python3.10/site-packages/git/cmd.py", line 542, in <lambda>
    return lambda *args, **kwargs: self._call_process(name, *args, **kwargs)
  File "/opt/zuul/lib/python3.10/site-packages/git/cmd.py", line 1005, in _call_process
    return self.execute(call, **exec_kwargs)
  File "/opt/zuul/lib/python3.10/site-packages/git/cmd.py", line 822, in execute
    raise GitCommandError(command, status, stderr_value, stdout_value)
git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
  cmdline: git reset --hard HEAD --
  stderr: 'fatal: Unable to create '/var/lib/zuul/merger-git/github/foo/foo%2Fbar/.git/index.lock': File exists.
  Another git process seems to be running in this repository, e.g.
  an editor opened by 'git commit'. Please make sure all processes
  are terminated then try again. If it still fails, a git process
  may have crashed in this repository earlier:
  remove the file manually to continue.'
Change-Id: I97334383df476809c39e0d03b1af50cb59ee0cc7
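A minimal sketch of the cleanup, assuming the per-repository lock is
already held: any index.lock present at this point was leaked by an
earlier crashed or timed-out git command and is safe to remove.

    import os

    def removeStaleIndexLock(repo_path):
        lock_path = os.path.join(repo_path, ".git", "index.lock")
        if os.path.exists(lock_path):
            os.unlink(lock_path)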
PyJWT 2.6.0 began performing validation of iat (issued at) claims
in https://github.com/jpadilla/pyjwt/commit/9cb9401cc579f11dbb17181e8713f061f8e40ed4
I believe the intent of RFC7519 is to support any numeric values
(including floating point) for iat, nbf, and exp; however, the
PyJWT library has made the assumption that the values should be
integers, and therefore when we supply an iat with decimal seconds,
PyJWT will round down when validating the value. In our unit tests,
this can cause validation errors.
In order to avoid any issues, we will round down the times that
we supply when generating JWT tokens and supply them as integers
in accordance with the robustness principle.
Change-Id: Ia8341b4d5de827e2df8878f11f2d1f52a1243cd4
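A sketch of the fix: floor the timestamps to integers before signing
so PyJWT's integer-based validation cannot reject a fresh token (the
claim contents here are illustrative).

    import time
    import jwt  # PyJWT

    now = int(time.time())
    token = jwt.encode(
        {"iat": now, "nbf": now, "exp": now + 600, "sub": "zuul-user"},
        "secret", algorithm="HS256")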
If a timer trigger is configured with a large jitter and a
reconfiguration happens within the jitter time, it is possible
to miss an expected scheduled trigger because the act of
reconfiguration removes and re-adds all of a tenant's timer
trigger apscheduler jobs.
To avoid this situation, we will try to preserve any jobs with
identical configurations.
Change-Id: I5d3a4d7be891fcb4b9a3f268ee347f2069aaded3
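A hedged sketch of preserving equivalent timer jobs across a
reconfiguration; the structures and the addTimerJob helper are
hypothetical, not Zuul's timer driver code.

    def reconfigureTimers(old_jobs, new_configs):
        for key, config in new_configs.items():
            old = old_jobs.get(key)
            if old is not None and old.config == config:
                continue  # identical config: keep the scheduled job
            if old is not None:
                old.job.remove()  # apscheduler Job.remove()
            old_jobs[key] = addTimerJob(config)  # hypothetical helper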
Github uses libgit2 to compute merges without requiring a worktree [0].
In some cases this can lead to Github detecting a merge conflict while
for Zuul the PR merges fine.
To prevent such changes from entering dependent pipelines and, e.g.,
causing a gate reset, we'll also check whether Github detected any
merge conflicts.
[0] https://github.blog/2022-10-03-highlights-from-git-2-38/
Change-Id: I22275f24c903a8548bb0ef6c32a2e15ba9eadac8
We currently only detect some errors with job parents when freezing
the job graph. This is due to the vagaries of job variants, where
it is possible for a variant on one branch to be okay while one on
another branch is an error. If the erroneous job doesn't match,
then there is no harm.
However, in the typical case where there is only one variant or
multiple variants are identical, it is possible for us to detect
during config loading a situation where we know the job graph
generation will later fail. This change adds that analysis and
raises errors early.
This can save users quite a bit of time, and since variants are
typically added one at a time, may even prevent users from getting
into ambiguous situations which could only be detected when freezing
the job graph.
Change-Id: Ie8b9ee7758c94788ee7bc05947ddd97d9fa8e075
If the file being commented on is a Gerrit special file starting with
"/" (e.g. /COMMIT_MSG), no line mapping transformation should be done,
otherwise strange errors like:
Job: unable to map line for file comments:
stderr: 'fatal: '/COMMIT_MSG' is outside repository at '...'
will show up after the job has run.
Change-Id: Id89041dc7d8bf3f6c956d85b38355053ff0fd707
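A minimal sketch of the guard: Gerrit's magic files start with "/"
and do not exist in the worktree, so they get no line mapping.

    def needsLineMapping(filename):
        return not filename.startswith("/")  # e.g. /COMMIT_MSG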
We may be able to speed up pipeline refreshes in cases where there
are large numbers of items or jobs/builds by parallelizing ZK reads.
Quick refresher: the ZK protocol is async, and kazoo uses a queue to
send operations to a single thread which manages IO. We typically
call synchronous kazoo client methods which wait for the async result
before returning. Since this is all thread-safe, we can attempt to
fill the kazoo pipe by having multiple threads call the synchronous
kazoo methods. If kazoo is waiting on IO for an earlier call, it
will be able to start a later request simultaneously.
Quick aside: it would be difficult for us to use the async methods
directly since our overall code structure is still ordered and
effectively single threaded (we need to load a QueueItem before we
can load the BuildSet and the Builds, etc).
Thus it makes the most sense for us to retain our ordering by using
a ThreadPoolExecutor to run some operations in parallel.
This change parallelizes loading QueueItems within a ChangeQueue,
and also Builds/Jobs within a BuildSet. These are the points in
a pipeline refresh tree which potentially have the largest number
of children and could benefit the most from the change, especially
if the ZK server has some measurable latency.
Change-Id: I0871cc05a2d13e4ddc4ac284bd67e5e3003200ad
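A hedged sketch of the parallelization (method names illustrative):
each worker issues a synchronous kazoo read, so several ZK requests
can be in flight at once while the caller's ordering is preserved.

    from concurrent.futures import ThreadPoolExecutor

    def refreshChildren(context, children, max_workers=4):
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            # map() yields results in input order, keeping refreshes
            # deterministic even though the reads overlap.
            list(executor.map(lambda c: c.refresh(context), children))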
This adds the ability to specify that the Zuul executor should
acquire a semaphore before running an individual playbook. This
is useful for long-running jobs which need exclusive access to
a resource for only a small amount of time.
Change-Id: I90f5e0f570ef6c4b0986b0143318a78ddc27bbde
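An illustrative sketch of acquiring a semaphore around one playbook;
the handler API and runPlaybook call shown here are assumptions, not
Zuul's executor interface.

    def runPlaybookWithSemaphores(playbook, semaphore_handler):
        acquired = False
        if playbook.semaphores:
            semaphore_handler.acquire(playbook.semaphores)  # blocks
            acquired = True
        try:
            runPlaybook(playbook)  # hypothetical helper
        finally:
            if acquired:
                semaphore_handler.release(playbook.semaphores)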
This adds a temporary debug command to the zuul-scheduler process.
It allows an operator to enable detailed ZK request profiling for
a given tenant-pipeline. This will be used to identify opportunities
for further optimization.
Change-Id: Id6e0ee3ffc78874e91ebcdbfe14269c93af958cd
This model api revision was added after 5.0.0, which means the backwards
compatibility code is safe to remove after 6.0.0. Since we're at 7.0.0+
now, remove this code to simplify it in preparation for upcoming changes.
Change-Id: Icb84e7ea4c0f5a3eef9970fda4810664739c9725
The model tests exercise some tracing code (by creating buildsets,
etc) but do not have the tracing module initialized because they are
not full Zuul system tests and do not inherit from ZuulTestCase.
Without initialization, we get different behavior from opentelemetry
(we have not provided our global tracer object, etc), which makes
these tests perform differently than they would in production. They
also fail when run in isolation because of these differences. They
likely only succeed when we run the whole test suite because of leakage
of the opentelemetry module initialization.
To correct this, initialize the tracing library (but with no
configuration, so we still are not configuring it to emit traces).
Change-Id: Iea5a8890f545c4e4ddf0a796b3a2fcfc7078a50e
When specifying job.override-checkout, we apply job variants
that match the specified branch. The mechanism we use to do that
is to create a synthetic Ref object to pass to the branch matcher
instead of the real branch of the Change (since the real branch
is likely different -- that's why override-checkout was specified).
However, branch matching behavior has gotten slightly more
sophisticated and Ref objects don't match in the same way that
Change objects do.
In particular, implied branch matchers match Ref subclasses that
have a branch attribute iff the match is exact.
This means that if a user constructed two branches:
* testbranch
* testbranch2
and used override-checkout to favor a job definition from testbranch2,
the implied branch matcher for the variant in testbranch would match
since the matcher behaved as if it were matching a Ref not a Change
or Branch.
To correct this, we update the simulated change object used in the
override-checkout variant matching routine to be a Branch (which
unsurprisingly has a branch attribute) instead of a Ref.
The test test_implied_branch_matcher_similar_override_checkout is added
to cover this test case. Additionally, the test
test_implied_branch_matcher_similar is added for good measure (it tests
implied branch matchers in the same way but without specifying
override-checkout), though its behavior is already correct.
A release note is included since this may have an effect on job behavior.
Change-Id: I1104eaf02f752e8a73e9b04939f03a4888763b27
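A sketch of the fix based on the description above; the model class
names match Zuul's (Ref, Branch), but the construction details are
illustrative.

    from zuul.model import Branch  # import path assumed

    def makeOverrideRef(project, override_checkout):
        ref = Branch(project)           # has a branch attribute
        ref.branch = override_checkout  # so implied matchers apply
        return ref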
GitHub supports a "rebase" merge mode where it will rebase the PR
onto the target branch and fast-forward the target branch to the
result of the rebase.
Add support for this process to the merger so that it can prepare
an effective simulated repo, and map the merge-mode to the merge
operation in the reporter so that gating behavior matches.
This change also makes a few tweaks to the merger to improve
consistency (including renaming a variable ref->base), and corrects
some typos in the similar squash merge test methods.
Change-Id: I9db1d163bafda38204360648bb6781800d2a09b4
The default merge mode is 'merge-resolve' because it has been observed
that it more closely matches the behavior of jgit in Gerrit (or, at
least it did the last time we looked into this). The other drivers
are unlikely to use jgit and more likely to use the default git
merge strategy.
This change allows the default to differ based on the driver, and
changes the default for all non-gerrit drivers to 'merge'.
The implementation anticipates that we may want to add more granularity
in the future, so the API accepts a project as an argument, and in
the future, drivers could provide a per-project default (which they
may obtain from the remote code review system). That is not implemented
yet.
This adds some extra data to the /projects endpoint in the REST api.
It is currently not easy (and perhaps not possible) to determine what a
project's merge mode is through the api. This change adds a metadata
field to the output which will show the resulting value computed from
all of the project stanzas. The project stanzas themselves may have
null values for the merge modes now, so the web app now protects against
that.
Change-Id: I9ddb79988ca08aba4662cd82124bd91e49fd053c
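A hedged sketch of the driver-level default; the method and class
names are illustrative. The project argument is unused today but
anticipates per-project defaults from the code review system.

    class GerritDriver:
        def getMergeModeDefault(self, project):
            return "merge-resolve"  # closest to Gerrit's jgit behavior

    class GithubDriver:
        def getMergeModeDefault(self, project):
            return "merge"  # plain git merge for non-Gerrit drivers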
This updates the web UI to support the requirement for authn/z
for read-only access.
If authz is required for read access, we will automatically redirect.
If we return and still aren't authorized, we will display an
"Authorization required" page (rather than continuing and popping up
API error notifications).
The API methods are updated to send an authorization token whenever
one is present.
Change-Id: I31c13c943d05819b4122fcbcf2eaf41515c5b1d9
This allows configuration of read-only access rules, and corresponding
documentation. It wraps every API method in an auth check (other than
info endpoints).
It exposes information in the info endpoints that the web UI can use
to decide whether it should send authentication information for all
requests. A later change will update the web UI to use that.
Change-Id: I3985c3d0b9f831fd004b2bb010ab621c00486e05
In order to allow for authenticated read-only access to zuul-web,
we need to be able to control the authz of the API root. Currently,
we can only specify auth info for tenants. But if we want to control
access to the tenant list itself, we need to be able to specify auth
rules.
To that end, add a new "api-root" tenant configuration object which,
like tenants themselves, will allow attaching authz rules to it.
We don't have any admin-level API endpoints at the root, so this change
does not add "admin-rules" to the api-root object, but if we do develop
those in the future, it could be added.
A later change will add "access-rules" to the api-root in order to
allow configuration of authenticated read-only access.
This change does add an "authentication-realm" to the api-root object
since that already exists for tenants and it will make sense to have
that in the future as well. Currently the /info endpoint uses the
system default authentication realm, but this will override it if
set.
In general, the approach here is that the "api-root" object should
mirror the "tenant" object for all attributes that make sense.
Change-Id: I4efc6fbd64f266e7a10e101db3350837adce371f
Authentication checking in the admin methods of zuul-web is very
duplicative. Consolidate all of the auth checks into a cherrypy
tool that we can use to decorate methods.
This tool also anticipates that we will have read-only checks in
the future, but for now, it is still only used for admin checks.
This tool also populates some additional parameters (like tenant
and auth info) so that we don't need to call "getTenantOrRaise"
multiple times in a request.
Several methods performed HTTP method checks inside the method
which inhibits our ability to wrap an entire method with an
auth_check. To resolve this, we now use method conditions on
the routes dispatcher. As a convention, I have put the
options handling on the "GET" methods since they are most likely
to be universal.
Change-Id: Id815efd9337cbed621509bb0f914bdb552379bc7
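A hedged sketch of the consolidation: a CherryPy before_handler tool
wraps each page handler. The check body and its parameters here are
illustrative, not Zuul's actual implementation.

    import cherrypy

    def check_auth(require_admin=False):
        # Resolve the tenant and validate the token before the handler
        # runs; raise cherrypy.HTTPError(401/403) on failure and stash
        # the results on cherrypy.request for the handler to reuse.
        ...

    cherrypy.tools.check_auth = cherrypy.Tool('before_handler',
                                              check_auth)

    class TenantAPI:
        @cherrypy.expose
        @cherrypy.tools.check_auth(require_admin=True)
        def dequeue(self, tenant_name):
            ...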
This method iterates over all tenants but only needs to return
information about a single tenant. Simplify the calculation for
efficiency.
This includes a change in behavior for unknown tenants. Currently,
a request to /api/tenant/{name}/authorizations will always succeed
even if the tenant does not exist (it will return an authorization
entry indicating the user is not an admin of the unknown tenant).
This is unnecessary and confusing. It will now return a 404 for
the unknown tenant.
In the updated unit test, tenant-two was an unknown tenant; its name
has been updated to 'unknown' to make that clear.
(Since the test asserted that data were returned either way, it is
unclear whether the original author of the unit test expected
tenant-two to be unknown or known.)
Change-Id: I545575fb73ef555b34c207f8a5f2e70935c049aa