| Commit message (Collapse) | Author | Age | Files | Lines |
|
This updates the github driver to cache PRs by sha using cachetools'
LRUCache.
We make this change because we need to cache closed PRs, so we can't
rely on the act of closing a PR to remove it from the cache. Since we
don't have a good method of evicting entries, we fall back to LRU with
a reasonable cache size (2k commits).
Change-Id: I5fb6c8b33f9eed221a8b84e537f02e7dccf2d2df
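The eviction behaviour described above can be sketched with a stdlib
OrderedDict standing in for cachetools' LRUCache (which the driver
actually uses); the class and method names here are illustrative, not
the driver's own:

```python
from collections import OrderedDict

class ShaLRUCache:
    """Bounded sha-keyed cache: closed PRs stay cached until they
    age out by recency, rather than being evicted on close."""

    def __init__(self, maxsize=2000):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def get(self, sha):
        if sha in self._data:
            self._data.move_to_end(sha)  # mark as recently used
            return self._data[sha]
        return None

    def put(self, sha, pr):
        self._data[sha] = pr
        self._data.move_to_end(sha)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used
```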
|
If the Gearman server vanishes (e.g. due to a VM crash), some clients
like the merger may not notice that it is gone; they just wait forever
for data to be received on an inactive connection. In our case the VM
containing the zuul-scheduler crashed, and after the restart of the
scheduler all mergers were waiting for data on the stale connection,
which blocked a successful scheduler restart. Using TCP keepalive we
can detect that situation and let broken inactive connections be
killed by the kernel.
Depends-On: I8589cd45450245a25539c051355b38d16ee9f4b9
Change-Id: I30049d59d873d64f3b69c5587c775827e3545854
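Enabling keepalive on a client socket looks roughly like this; the
TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT constants are Linux-specific,
and the timing values below are illustrative defaults, not the ones
the change uses:

```python
import socket

def enable_tcp_keepalive(sock, idle=60, interval=30, count=5):
    # Ask the kernel to probe an idle connection after `idle` seconds,
    # every `interval` seconds, and tear the connection down once
    # `count` probes go unanswered.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
```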
|
Our custom command.py Ansible module is updated to match the
version from 2.5, plus our additions.
strip_internal_keys() is moved within Ansible yet again.
Change-Id: Iab951c11b23a24757cf5334b36bc8f7d12e19db0
Depends-On: https://review.openstack.org/567007
|
This updates the dependency to Ansible 2.4 and ports in the needed
changes to the command module.
Version 2.4.0 definitely does not work for us because YAML
hosts file parsing is broken, but 2.4.1 and greater should
be fine.
Change-Id: I63f72b45ecb9533eac5ba9eb0eef426beec905e3
|
Change-Id: I480385a8b0e85266fdd77d251d0b748f1be028b0
|
* Aiohttp (and related libraries) have a Python support policy
which is causing us problems.
* CherryPy supports threads, which integrates well with the rest
of Zuul.
Change-Id: Ib611df06035890d3e87fc5ad92fdfc7ac441edce
|
This change adds an MQTT reporter to publish build result messages.
Change-Id: I5a9937a7952beac5c77d83ab791d48ff000b447b
|
Change-Id: Iaae6ca9cc52d2c63821bc4266aef437457e6fe92
Story: 2002104
|
async-timeout 3.0.0 has been released and only supports Python 3.5.3
and newer (newer than the Xenial we test on). Pin it for lower
Pythons. This is similar to I460652a4468bfa76895a5c563612ff6119c0d483,
which pinned yarl; in that case it was stated that 1.1.1 would be the
final release with python<3.5.3 support, whereas here it is not clear
whether there might be more 2.X releases, hence the less-than
qualifier.
Change-Id: Ifa2ddedbd4f431d8cd08059e31814b95dc74e368
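A pin of this shape can be expressed with a PEP 508 environment marker
in a requirements file; the exact marker text used in the change may
differ, this is only an illustration:

```
# Only constrain async-timeout on interpreters older than 3.5.3.
# python_full_version is the PEP 508 marker that compares the full
# X.Y.Z interpreter version (python_version only compares X.Y).
async-timeout<3.0.0;python_full_version<'3.5.3'
```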
|
As described in [1], the latest releases of yarl (brought in via
aiohttp) require python 3.5.3. Pin it for lower versions (which
happens to be 3.5.2 in xenial ... which we test on) to the last
release that works (1.1.1).
[1] https://github.com/aio-libs/yarl/issues/189
Change-Id: I460652a4468bfa76895a5c563612ff6119c0d483
|
The GitHub status requirement matching and trigger filters are
currently plain-text based. This limits sharing of pipeline
definitions between tenants, as Zuul reports the status as
'<tenant>/<pipeline>'. It currently makes it necessary to define a
trigger filter for each tenant [1] and completely blocks pipeline
requirements.
A solution to this is regex matching, which makes it possible to
define the filter once [2].
This also enables a further interesting use case: triggering on any
successful status [3]. That makes it easier to cooperate with other
CI systems or GitHub apps which also set a status.
Directly use re2, as this will be used in the future for regex
matching.
[1] Trigger filter snippet:
    trigger:
      github:
        - event: pull_request
          action: status
          status:
            - zuul:tenant1/check:success
            - zuul:tenant2/check:success
            - zuul:tenant3/check:success
            - zuul:tenant4/check:success
[2] Regex trigger filter snippet:
    trigger:
      github:
        - event: pull_request
          action: status
          status:
            - zuul:.+/check:success
[3] Generic success filter snippet:
    trigger:
      github:
        - event: pull_request
          action: status
          status:
            - .*:success
Change-Id: Id1b9d7334db78d0f13db33d47a80ffdb65f921df
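The regex form of the filter boils down to matching each reported
status string against a compiled pattern. Stdlib `re` stands in here
for the re2 bindings the change introduces; the pattern is the one
from snippet [2] above:

```python
import re

# Patterns compiled once at configuration load time; with regex
# matching, one pattern covers every tenant's check pipeline.
status_filters = [
    re.compile(r"zuul:.+/check:success"),
]

def status_matches(status):
    # A status passes the filter when any configured pattern matches
    # the whole '<context>:<state>' string.
    return any(f.fullmatch(status) for f in status_filters)
```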
|
We need at least GitHub3.py 1.1.0 to pull in a bugfix which fixes
handling large patches [1].
[1] https://github.com/sigmavirus24/github3.py/pull/817
Change-Id: I3e83cae456c4211cfe8b3547fab5ed652c06fe76
|
This release fixes the issues preventing our use of it.
Change-Id: Ife6534360cce313096596226fca123d67b2c5536
|
Update to statsd>=3.0 to be in line with nodepool
Change-Id: Ib84655378bdb7c7c3c66bf6187b462b3be2f908d
|
The latest change in github3.py broke us [1]. As a temporary quick
fix, pin it to a revision prior to that change until we have a proper
way to deal with this.
[1] Trace:
2018-03-12 12:48:36,299 ERROR zuul.Scheduler: Exception in management event:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/github3/models.py", line 47, in __init__
self._update_attributes(json)
File "/usr/lib/python3.6/site-packages/github3/repos/repo.py", line 2720, in _update_attributes
self.original_license = repo['license']
KeyError: 'license'
Change-Id: I22f62ea6b6d621a8b817542e72fe3a762a79a491
|
We don't use it anymore. We use aiohttp.
Change-Id: Iad3aa9421e8a317941a5f252bda9817806caee93
|
Detecting an unknown tenant got tricky when we started returning a
message about tenants not being ready yet. In order to be able to return
a 404 for tenants we legitimately do not know anything about, keep an
UnparsedAbideConfig on the scheduler so that we can check it in case of
a miss. If we know about a tenant but don't have a config, we can return
the 'not ready' message. If we don't know about it at all, we can throw
the 404.
Also, remove the custom handler test. We have tests in other contexts
(like tests of the GitHub webhook) that test the equivalent
functionality.
Change-Id: Icff5d7036b6a237646ad7482103f7b487621bac0
|
This is to work around the following issue, which is breaking builds
on Ubuntu Xenial:
aiohttp requires Python '>=3.5.3' but the running Python is 3.5.2
Change-Id: I656f4e37a12159b60154cd868a5416c6af3e3139
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
|
It seems the recent version fixes a deadlock issue, and there are
security concerns regarding PyCrypto:
http://www.paramiko.org/installing-1.x.html
Change-Id: I601de7319d2ed7746135028b64483110c19becdf
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
|
This will unregister for concurrent jobs whenever available system
memory drops below 5% by default. It does not take into account
buffers or cache which could be reclaimed. Users can tune this up
or down as necessary.
This is a very conservative default and will likely need tuning
once observed in production.
Change-Id: Iab6469c0173d9f5635769d4ab0e8034a41355cd4
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
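The threshold check amounts to comparing free memory against total;
this sketch parses /proc/meminfo-style text using MemFree (which, as
noted above, ignores reclaimable buffers and cache). Function names
and the parsing approach are illustrative, not the executor's actual
code:

```python
def free_memory_percent(meminfo_text):
    # Parse MemTotal and MemFree (values in kB) from /proc/meminfo-
    # style text and return free memory as a percentage of the total.
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("MemTotal", "MemFree"):
            fields[key] = int(rest.split()[0])
    return 100.0 * fields["MemFree"] / fields["MemTotal"]

def should_accept_jobs(meminfo_text, min_free_percent=5.0):
    # Stop registering for new jobs when free memory drops below the
    # threshold (5% by default, as in the change above).
    return free_memory_percent(meminfo_text) >= min_free_percent
```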
|
These tests relied on sleeps which can cause races when running
the full test suite in parallel. Instead, wait for the events
we know will happen to happen.
Also remove the dependency on yarl now that aiohttp has made a
release which works with yarl 1.0 (however, it does not work with
<1.0 which is why this needs to be combined with this change to
fix tests).
Change-Id: Ib1c626cdd3f083dd1d23a3c6547bd7163b66567e
|
Aiohttp had an open requirement specification on yarl which has
released a 1.0 that is backwards incompatible. Pin to <1.0
until https://github.com/aio-libs/aiohttp/issues/2662 is fixed.
Change-Id: I4e750900501ed92bdbb616f5664f7e8ab7fa99c3
|
2.1.8 incorporates the noted fixes.
Story: 2001393
Task: 5982
Change-Id: I828506bd7c1a1f7ce088e958361782b6cbc71f5a
|
We don't need these unreleased requirements to be editable. Doing
so means that they are installed into the user's home directory.
That doesn't work if you install them as root.
Instead, install them in the normal, non-editable manner.
Change-Id: Iab7b946d03db8e80ac296485bb8de3f9c89cb8b5
|
Also clean up some tabs.
Change-Id: If641a164c21dc7b13d48548558ea16e0c0a0b400
|
Timeout remote git operations after 300 seconds.
Because it could be in an invalid state, delete the local repo if a
timeout occurs (subsequent operations will recreate it).
This replaces our use of the clone_from() and fetch() methods from
GitPython with lower-level equivalents. The high-level methods
do not currently permit the hard timeout.
The GitPython requirement is changed to a temporary fork until both
https://github.com/gitpython-developers/GitPython/pull/682
and
https://github.com/gitpython-developers/GitPython/pull/686
end up in a release.
Change-Id: I7f680472a8d67ff2dbe7956a8585fb3714119e65
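The timeout-then-delete behaviour can be sketched with a plain
subprocess call standing in for the lower-level GitPython calls the
change actually uses; the function name and command handling are
illustrative:

```python
import shutil
import subprocess

def run_git_with_timeout(cmd, repo_path, timeout=300):
    # Run a git operation with a hard timeout.  If the timeout
    # expires, the local repo may be left in an invalid state, so
    # delete it; a subsequent operation will recreate it (mirroring
    # the behaviour described above).
    try:
        subprocess.run(cmd, check=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        shutil.rmtree(repo_path, ignore_errors=True)
        raise
```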
|
Running the ipv4 and ipv6 filters [1] requires the netaddr python
library to be installed on the Ansible *control node*, which ends up
being the executor.
These filters are very useful to determine if an IP is ipv4 or ipv6.
[1]: http://docs.ansible.com/ansible/latest/playbooks_filters_ipaddr.html
Change-Id: I800c7512fc60f9a302fb77cb061610430fcf8e49
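The classification those filters perform can be illustrated with the
stdlib ipaddress module (standing in here for netaddr, which is what
the Ansible filters actually require on the executor):

```python
import ipaddress

def ip_version(addr):
    # Return 4 or 6 for a valid IP address, None for anything else --
    # the same decision the ipv4/ipv6 filters make.
    try:
        return ipaddress.ip_address(addr).version
    except ValueError:
        return None
```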
|
Ansible 2.4 is scheduled for release in September [1] and changes a
lot of things we might not be ready for.
Let's make sure we control when we want to upgrade to 2.4.
The latest release of ARA, 0.14.2, does not currently support Ansible 2.4.
The support for Ansible 2.4 will likely come in 1.0.
[1]: https://github.com/ansible/ansible/blob/b3f2d1befe509614febc28d1bf1734cb2cc50ac0/docs/docsite/rst/roadmap/ROADMAP_2_4.rst
Change-Id: Icd9b847d4e75c509aba0c3a060bb6eedfd9be257
|
zuul now provides socket-based console streaming, which is super cool.
In order to have jenkins parity with web streaming, we need to provide a
websocket (javascript in browsers can't really connect to random ports
on servers)
After surveying the existing Python websocket options, basically all
of them are based around twisted, eventlet, gevent or asyncio. That's
not something we can easily deal with from our current webob/paste
structure, because it is a change to the fundamental HTTP handling.
While we could write our own websocket server implementation that was
threaded like the rest of zuul, that's a pretty giant amount of work.
Instead, we can run an async-based server that's just for the
websockets, so that we're not all of a sudden putting async code into
the rest of zuul and winding up frankensteined. Since this is new code,
using asyncio and python3 seems like an excellent starting place.
aiohttp supports running a websocket server in a thread. It also
supports doing other HTTP/REST calls, so by going aiohttp we can set
ourselves up for a single answer for the HTTP tier.
In order to keep us from being an open socket relay, we'll expect two
parameters as the first message on the websocket - what's the zuul build
uuid, and what log file do we want to stream. (the second thing,
multiple log files, isn't supported yet by the rest of zuul, but one can
imagine a future where we'd like to support that too, so it's in the
protocol) The websocket server will then ask zuul over gearman for the
IP and port associated with the build and logfile and will start
streaming it to the socket.
Ultimately we'll want the status page to make links of the form:
/console.html?uuid=<uuid>&logfile=console.log
and we'll want to have apache map the websocket server to something like
/console.
Co-Authored-By: Monty Taylor <mordred@inaugust.com>
Change-Id: Idd0d3f9259e81fa9a60d7540664ce8d5ad2c298f
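The two expected parameters can be read from the first message on the
websocket; this sketch assumes a JSON payload, and the field names are
illustrative rather than the protocol's actual wire format:

```python
import json

def parse_stream_request(message):
    # The first websocket message must identify the build and which
    # log file to stream; 'logfile' defaults to console.log, matching
    # the proposed /console.html?uuid=<uuid>&logfile=console.log links.
    req = json.loads(message)
    return req["uuid"], req.get("logfile", "console.log")
```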
|
The zuul_stream callback needs ansible>=2.3 for the _handle_exception method.
Change-Id: I9130aaacc036c2e67fea31efe54c4cbaf7193f50
|
It exists only for py2/py3 compat. We do not need it any more.
This will explicitly break Zuul v3 for Python 2, which is different
from simply ceasing to test it and no longer declaring that we support
it. Since we're not testing it any longer, it's bound to degrade over
time without us noticing, so hopefully a clean and explicit break will
prevent people from running under Python 2 and having it work for a
minute, then break later.
Change-Id: Ia16bb399a2869ab37a183f3f2197275bb3acafee
|
OrderedDict is included in the python3 collections module.
Change-Id: I63c91e22a8f03a5e2598a224f0b1018c2951d67b
|
The basics of authenticating to github as an app when posting
comments and cloning. This is still a WIP.
Change-Id: I11fab75d635a8bcea7210945df4071bf51d7d3f2
|
GitHub is very strict about quotas per authentication. To prevent
going over these quotas, and to ensure we don't get marked as spam, we
should make sure to respect the ETags present in responses.
Use cachecontrol to give us a caching layer that will fulfil cacheable
requests without going over the network.
A point to consider here is that there does not appear to be a way to
vary caching based on the current authentication, so any per-auth
requests may be mishandled or auth may expire. I don't think there is
any concern here, as it's simply Zuul making the requests.
Change-Id: I04bfc0cfec1ffc8ebdfd2d9181ac3119cc6e14ac
Signed-off-by: Jamie Lennox <jamielennox@gmail.com>
|
Using the git protocol makes it hard to fetch the repo if internet
access is only possible via a proxy. An easy fix is using the https
protocol. That way the http_proxy vars on the host are automatically
obeyed and the clone via proxy works.
Change-Id: I18d75b5a16c809ac2d7834e91d32617017aea7f8
|
Github reviews are a new pipeline requirement that is driver specific.
Reviews can be approved, changes_requested, or comment. They can come
from people with read, write, or admin access. Access is hierarchical,
admin level includes write and read, and write access includes read.
Review requirements model loosely the gerrit approvals, allowing
filtering on username, email, newer-than, older-than, type, and
permission.
Brings in unreleased Github3.py code, and further extends that code
to determine whether a user has push rights to a repository.
Documentation is not included with this change, as the docs need
restructuring for driver specific require / reject.
Change-Id: I3ab2139c2b11b7dc8aa896a03047615bcf42adba
Signed-off-by: Jesse Keating <omgjlk@us.ibm.com>
|
This makes the transition to python3 much smoother.
Change-Id: I9d8638dd98502bdd91cbe6caf3d94ce197f06c6f
Depends-On: If6bfc35d916cfb84d630af59f4fde4ccae5187d4
Depends-On: I93bfe33f898294f30a82c0a24a18a081f9752354
|
The GitHub reporter can be configured to merge pull requests.
When multiple merges are requested at the same time, GitHub can
return a 405 MethodNotAllowed error because it is still checking the
branch's mergeability.
When we encounter this situation, we wait a bit (2 seconds for now)
and try to merge again.
A pre-release version of Github3.py has to be used because the latest
released version, 9.4, has a bug in the merge method. Furthermore, the
newest merge method supports specifying the exact sha to be merged,
which is desirable to ensure that the exact commit that went through
the pipeline gets merged.
Both are already fixed in the stable branch, but not yet released on
PyPi. See:
https://github.com/sigmavirus24/github3.py/commit/90c6b7c2653d65ce686cf4346f9aea9cb9c5c836
https://github.com/sigmavirus24/github3.py/commit/6ef02cb33ff21257eeaf9cab186419ca45ef5806
Change-Id: I0c3abbcce476774a5ba8981c171382eaa4fe0abf
|
Story: 2000774
Change-Id: I2713c5d19326213539689e9d822831a393b2bf19
Co-Authored-By: Wayne Warren <waynr+launchpad@sdf.org>
Co-Authored-By: Jan Hruban <jan.hruban@gooddata.com>
Co-Authored-By: Jesse Keating <omgjlk@us.ibm.com>
|
Reading historical voluptuous docs it appears that the change to dict
handling occurred in the new 0.10.2 release. There are no earlier 0.10
releases to check against so just require voluptuous>=0.10.2.
Change-Id: I5ade4a8c2d03d5519ae1ed95e133717d5c28d0ad
|
Every project should have a public and private key to encrypt secrets.
Zuul expects them to already exist under /var/lib/zuul/keys on the
scheduler host. If an operator manages these keys externally, they
should simply be placed there. If they are not found, Zuul will
create them on startup and store them there so they will be found on
the next run.
The test framework uses a pre-generated keypair most of the time to
save time, however, a test is added to ensure that the auto-generate
code path is run.
Co-Authored-By: James E. Blair <jeblair@redhat.com>
Change-Id: Iedf7ce6ca97fab2a8b800158ed1561e45899bc51
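The lookup-or-create flow can be sketched with the key generation left
pluggable (Zuul actually generates an RSA keypair; the path layout and
names below are illustrative, mirroring the /var/lib/zuul/keys
convention described above):

```python
import os

def ensure_project_key(keydir, project, generate):
    # Use an externally managed key if one already exists under
    # keydir; otherwise generate one on startup and store it so the
    # next run finds it.
    path = os.path.join(keydir, "%s.pem" % project)
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    pem = generate()
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(pem)
    return pem
```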
|
It appears newer versions of GitPython have slowed considerably.
Cap GitPython until https://github.com/gitpython-developers/GitPython/issues/605
is resolved.
Change-Id: Ie6c8722e8b607bb50e77fbad59e18363616f7e0d
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
|
|\
| |
| |
| | |
Change-Id: I37a3c5d4f12917b111b7eb624f8b68689687ebc4
|
This will allow us to enter results from all jobs for
use with the openstack-health dashboard.
Depends-On: I08dbbb64b3daba915a94e455f75eef61ab392852
Change-Id: I28056d84a3f6abcd8d9038a91a6c9a3902142f90
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
|