We had a point-in-time workaround for the importlib issue with
tox-py35; that workaround is no longer needed.
Change-Id: I5df0dc4ad87e5381c6d15926a6aa6ac332eb4b35
The global tox installation on our test nodes is affected by an
upstream issue [1]. The virtualenvs created by tox under those
conditions are affected as well. To work around this, pin the version
of importlib-resources in the global tox install, as well as in
Zuul's own requirements.
[1] https://gitlab.com/python-devs/importlib_resources/issues/83
Change-Id: I31ed50185a71d867a2ad512ef9b526c5b607ed5c
install-docker allows us to install docker-compose, so use that
instead and test our roles at the same time.
Depends-On: https://review.opendev.org/707902
Change-Id: Ie45cbcd6ad05d035e708fed45348e498a35a8eac
This catches the localtest playbook up with the recent moves.
Change-Id: Ib87e7e9fc91e56bcbf6390d1e130dacfdb2302e6
Reorganizing docs as recommended in:
https://www.divio.com/blog/documentation/
This is simply a reorganization of the existing documents and changes
no content EXCEPT to correct the location of sphinx doc references.
Expect followup changes to rename documents (to reflect the new
structure) and to move content from existing guides (e.g., to move the
pipeline/project/job structure definitions out of the "Project
Configuration" reference guide into their own reference documents so
they are easier to locate).
All documents are now located in either the "overview", "tutorials",
"discussions", or "references" subdirectories to reflect the new structure
presented to the user. Code examples and images are moved to "examples" and
"images" root-level directories.
Developer specific documents are located in the "references/developer"
directory.
Change-Id: I538ffd7409941c53bf42fe64b7acbc146023c1e3
The pre.yaml playbook for the zuul-stream-functional tests copies
the Ansible inventory.yaml file from the executor to the test
"controller" node. The controller then runs the specified version
of Ansible 2.x against the other nodes. This fails because the
executor's inventory.yaml sets the Ansible python interpreter to
"auto", which is valid for the Ansible version used on the executor
but is *not* a valid value for the older Ansible versions run on the
controller node.
This change forces the executor to use the version of Ansible being
tested on the controller so that the inventory.yaml will be correct.
Also, Ansible 2.8 now raises a FileNotFoundError exception instead
of OSError when a referenced file is not found.
Change-Id: Ibd31f1161df0076ed7498fd1d7b1ae76c802c6e4
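A side note on the exception change: FileNotFoundError is a subclass
of OSError in Python 3, so code that has to work across both Ansible
behaviors can catch the broader class. A minimal sketch (the helper
name is hypothetical, not part of the change):

    def read_referenced_file(path):
        # Hypothetical helper standing in for the Ansible task that
        # references a file which may not exist.
        with open(path) as f:
            return f.read()

    try:
        read_referenced_file('/nonexistent/file')
    except OSError as e:
        # Ansible 2.8 raises FileNotFoundError, older releases raise
        # OSError; FileNotFoundError derives from OSError, so this
        # handles both cases.
        print('missing file: %s' % e)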
The quick-start test assumes a compressed job output file, but we no
longer compress it by default. Handle the new behavior.
Change-Id: I9db270ae5ceaa98afcd078af0d99460d406295f3
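A minimal sketch of handling output that may or may not be
compressed; the helper name and the magic-byte sniffing are
illustrative assumptions, not the test's actual code:

    import gzip

    def open_job_output(path):
        # Sniff the gzip magic bytes so both compressed and
        # uncompressed job-output files can be read transparently.
        with open(path, 'rb') as f:
            magic = f.read(2)
        if magic == b'\x1f\x8b':
            return gzip.open(path, 'rt')
        return open(path, 'r')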
When the quickstart job fails, make sure that we get stderr from
the docker logs. Also, add timestamps to the waiting-to-start
script so that we can compare when it gave up with when the systems
came online.
Change-Id: I632c794de7fb792fbe7d0b8e095701a5d7fd1af7
We need it installed so that the javascript gets built.
Change-Id: I909ea8af5cc11e6109f6258e2294ef7593d06881
With the removal [*] of refs/publish in Gerrit 3.0.0, git-review
1.27.1 or later is needed to avoid attempting a push there and
ultimately failing. The git-review package in Ubuntu 18.04 LTS is
too old (1.26.0), so use the latest release from PyPI instead.
Adjust the quick-start document to install git-review with pip, and
on Debian/Ubuntu suggest including the python3-pip distro package,
since it is packaged separately from the interpreter packages.
[*] https://gerrit-review.googlesource.com/c/gerrit/+/192494
Change-Id: I247fb761667a99cf9f25478b49c5a1fe5d11a6cf
To use the buildset registry, we need the docker-compose command to
be executed as the zuul user; otherwise the auth configuration is
not set and the test pulls the images from docker.io.
This change also adds the --digests argument to docker image ls.
Change-Id: I62970cae4851b06ff79cdc953f90772c550000bd
Change-Id: I3c1ac5478efed4dee1d525deb036d457287fa136
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:
http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html
Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
The command 'ip addr show' has a different output format.
Change-Id: Ida96686fd22a3c1d5d83b4688ae2ece19948a75d
The job no longer needs to build duplicate copies of the images;
it can fetch them from the buildset registry instead.
Change-Id: Ibcca12c20d29b9b45a67b65934e5a02087c8cdf8
As a first step towards supporting multiple ansible versions, we need
tooling to manage ansible installations. This moves the installation
of ansible from requirements.txt into Zuul itself. The tooling is
called as a setup hook to install the ansible versions into
<prefix>/lib/zuul/ansible. Further, it abstracts the knowledge the
executor needs in order to run the correct version of ansible.
The actual use of multiple ansible versions will come in follow-ups.
For better maintainability, the ansible plugins live in
zuul/ansible/base, where plugins can be kept in different versions if
necessary. For each supported ansible version there is a specific
folder that symlinks the appropriate plugins.
Change-Id: I5ce1385245c76818777aa34230786a9dbaf723e5
Depends-On: https://review.openstack.org/623927
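A minimal sketch of the per-version installation idea, assuming one
virtualenv per supported Ansible series under
<prefix>/lib/zuul/ansible; the version list and pins are illustrative,
not Zuul's actual tooling:

    import subprocess
    import venv
    from pathlib import Path

    # Illustrative mapping of supported Ansible series to pip pins.
    SUPPORTED_VERSIONS = {
        '2.6': 'ansible>=2.6,<2.7',
        '2.7': 'ansible>=2.7,<2.8',
    }

    def install_ansible(prefix):
        root = Path(prefix) / 'lib' / 'zuul' / 'ansible'
        for series, requirement in SUPPORTED_VERSIONS.items():
            env_dir = root / series
            venv.create(env_dir, with_pip=True)
            # Each virtualenv holds exactly one Ansible series, so the
            # executor can pick the matching ansible-playbook.
            subprocess.check_call(
                [str(env_dir / 'bin' / 'pip'), 'install', requirement])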
While pbrx is nice and all, it's quite the divergence from how
the rest of the container ecosystem works. Switch to using
Dockerfile and the python-builder image.
Bind mount ld.so.cache into bwrap context
When using images based on the python:slim base image, python
is installed in /usr/local and the linker needs to know to look
in /usr/local/lib for shared libraries.
Depends-On: https://review.openstack.org/632187
Change-Id: I84f6dd2a8e3222f7807103dcbb61bdadedfdd22d
Replace the plain timeout with retries against the tenant status API.
Change-Id: I741f3cdc3a042af9775998c72ce6fb5d9c552f8b
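A minimal sketch of retrying over a status API instead of relying on
a fixed timeout; the endpoint URL, tenant name, and timing values are
assumptions for illustration:

    import time
    import urllib.request

    def wait_for_tenant(url, attempts=60, delay=5):
        # Poll the tenant status endpoint until it answers
        # successfully, rather than sleeping for a fixed time.
        for _ in range(attempts):
            try:
                with urllib.request.urlopen(url) as resp:
                    if resp.status == 200:
                        return True
            except OSError:
                pass  # connection refused / HTTP error: keep retrying
            time.sleep(delay)
        return False

    # Example (hypothetical URL):
    # wait_for_tenant('http://localhost:9000/api/tenant/example-tenant/status')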
According to the docs, ansible is expected to return 3 if there were
unreachable nodes. Zuul reacts to this and retries the job. However,
ansible actually returns 4 in this case [1], which is the code for a
parse error (and retrying a parse error doesn't make sense).
To work around that, add a simple callback plugin that gets notified
when there are unreachable nodes and writes the information about
which nodes failed to a special file next to the job-output.txt. The
executor can then detect this and react appropriately. Further, this
file, which records which nodes failed, gets uploaded to the log
server too if it exists.
[1] https://github.com/ansible/ansible/issues/19720
Change-Id: I6d609835dba18b50dcbe883d01c6229ce4e38d91
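A minimal sketch of such a callback plugin, assuming a simple JSON
side-channel file; the file name and environment variable are
illustrative, not the executor's actual contract:

    import json
    import os

    from ansible.plugins.callback import CallbackBase


    class CallbackModule(CallbackBase):
        CALLBACK_VERSION = 2.0
        CALLBACK_TYPE = 'aggregate'
        CALLBACK_NAME = 'record_unreachable'

        def v2_runner_on_unreachable(self, result):
            # Append the unreachable host to a file the executor can
            # inspect after the run to decide whether to retry the job.
            path = os.environ.get('UNREACHABLE_FILE',
                                  'job-output.unreachable.json')
            hosts = []
            if os.path.exists(path):
                with open(path) as f:
                    hosts = json.load(f)
            hosts.append(result._host.get_name())
            with open(path, 'w') as f:
                json.dump(hosts, f)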
Change-Id: I492841fea5a3d3c48b38bde30d73ad12353553af
This allows us to show build history in the web UI.
Also add a local test playbook for the quickstart which basically
does what the zuul-quick-start job does, but expects to run locally
on a workstation (i.e., it does not install the docker registry and
docker).
Change-Id: Id62dcf69f48399dab3d1259679bf2fc2ce50460e
We should wait until Zuul has loaded its configuration before
performing the status check. That's slightly complicated right
now, so stabilize the job with a sleep until we work through
the correct solution.
Change-Id: Ia94b14deae16786e5f88507dde637198fceb7707
Tell users that the status page exists and point them to it in
the quick-start documentation.
Also, verify that it is served in the quick-start test job.
Change-Id: I8783ac731112af7752e8a7fc34e3337b52c382d9
This makes the job suitable for gating.
Depends-On: https://review.openstack.org/609844
Change-Id: I4f32a35ddb9f880bb617a4896429e4cb05b0c2f1
This test exercises a user's process on a typical VM/workstation,
so make sure that apt behaves in the default manner and installs
recommended packages (such as docker itself).
Change-Id: I62b1367f9d4311deedf16176fa3f97cf0b3a74f1
This playbook performs approximately the same steps documented in
the quick-start tutorial. Use it to verify that things still function.
Note that it does not yet build the images from the current source
(rather, it downloads them from dockerhub), so it is not suitable
as a gate check (it does not use the change under test in zuul).
It does, however, use the sample config files from the change under
test, so it can be used to verify changes to those.
Incidentally, this is the first-ever live functional test between
Zuul, Nodepool, and Gerrit.
Change-Id: I5b3dc4b8a8d409787d07b4ad155898f97f1e9eb9
This change simplifies the dashboard run playbook to fix sub-URL
serving of the CI build.
Change-Id: I58db8958894f2b51cca03752d9913ca11df5bba4
A previous change broke the homepage update for zuul-build-dashboard
CI results.
Change-Id: Ie2344070425a2d0edab9c3edb15b0e2b577a02ce
This reverts commit 3dba813c643ec8f4b3323c2a09c6aecf8ad4d338.
Change-Id: I233797a9b4e3485491c49675da2c2efbdba59449
Revert "Fix publish-openstack-javascript-content"
This reverts commit ca199eb9dbb64e25490ee5803e4f18c91f34681d.
This reverts commit 1082faae958bffa719ab333c3f5ae9776a8b26d7.
This appears to remove the tarball publishing system that we rely on.
Change-Id: Id746fb826dfc01b157c5b772adc1d2991ddcd93a
This change rewrites the web interface using React:
http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-August/000528.html
Depends-On: https://review.openstack.org/591964
Change-Id: Ic6c33102ac3da69ebd0b8e9c6c8b431d51f3cfd4
Co-Authored-By: Monty Taylor <mordred@inaugust.com>
Co-Authored-By: James E. Blair <jeblair@redhat.com>
The existing code results in:
cp: omitting directory 'src/git.openstack.org/openstack-infra/zuul/zuul/web/static/t'
because it picks up the t subdir when trying to copy everything.
Update it to be smarter about which things to copy.
Also, add a shell variable to make the shell snippet easier to read.
Change-Id: Ib61110cfa10e137c3d780e8529e64655b64c3cce
We currently use symlink loops to allow the multi-tenant dashboard to
work properly. With the move to swift, that's no longer going to work.
Just copy the html/js files instead of symlinking them; there are not
that many of them.
Change-Id: I8a71abe4329cff817beca71e61127967b5b8aeb5
This reverts commit fc1a71f69fc4a09983a8b1018f3cf5a935037451.
This time with better handling for base hrefs.
Change-Id: I530b6ff0a4da0546584d0c93bf6e0bb716a9dbc3
This reverts commit 36aecc1229d8071980210314bb1caa3fd4f9ef90.
This reverts commit 683f50ed5537d7024912867a61870af57bfdbce9.
This caused zuul.openstack.org to attempt to GET "https://api/status".
Change-Id: Ib25356f7ea5bfeec84e91195ac161d497f74d73d
Since we got started in all of this angular business back in the good
old storyboard days of yore, the angular folks cut a major release
(ok, 5 major releases). The old v1 angular is known as angularjs now, and
starting at v2 the new codebase is just 'angular'. While angularjs is
still supported for now, angularjs vs. angular seems to be more like
zuulv2 vs. zuulv3 - the developers really want people to
be on the >=v2 series, and they spent a good deal of time fixing issues
from the original angularjs.
The notable differences are that angular is a bit more explicit/verbose,
and that it uses typescript instead of plain javascript. The increased
verbosity wasn't the most popular with some fans of the original angularjs,
but for those of us who aren't breathing it every day the verbosity is
helpful.
There is a recommended code organization structure which has been used.
For zuul, there are notable changes to how the http client and location
service work, so the code related to those has been reworked.
$http has been reworked to use HttpClient - which defaults to grabbing
the remote json and which can do so in a typesafe way.
$location has been reworked to use the angular-routing module, which allows
us to pull both URL and query string parameters in a structured manner. We
can similarly pass query parameters to our outgoing http requests.
Since routing is the new solution for $location, extract the navigation
bar into a re-usable component.
Add tslint config for the typescript. Keep running eslint on our
remaining plain javascript files, at least until we've got them all
transitioned over. Use the angular tslint config as a base, but also
adopt the rule from standardjs that says to not use semicolons since
they are not actually needed.
The main.ejs file is a webpack template, not an angular template. Move
it to web/config with the other webpack files to make that clear.
Add a job that builds the zuul dashboard with the ZUUL_API_URL set to
point to software factory. This should allow us to see a live test with
a multi-tenant scheme.
Depends-On: https://review.openstack.org/572542
Change-Id: Ida959da05df358994f4d11bb6f40f094d39a9541
Co-Authored-By: Tristan Cacqueray <tdecacqu@redhat.com>
Co-Authored-By: Artem Goncharov <artem.goncharov@gmail.com>
Some deployment methods require an __init__.py file in every directory
containing python files. However, action-general is not a valid package
name, so we need to rename it.
Change-Id: If15b0a6166538debc52df41c06767978ef183b05
The log streaming callback is not being called in the same way
in Ansible 2.5 as it was in 2.3. In particular, in some cases
different Task objects are used for different hosts. This,
combined with the fact that the callback is only called once for
a given task, means that in these cases we are unable to supply
the zuul_log_id to the Task object for the second host on a task.
This can be resolved by injecting the zuul_log_id within the command
action plugin, based directly on the task uuid.
Change-Id: I7ff35263c52d93aeabe915532230964994c30850
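A minimal sketch of the injection approach using Ansible's action
plugin API; the exact zuul_log_id format and the per-host suffix are
assumptions for illustration:

    from ansible.plugins.action import ActionBase


    class ActionModule(ActionBase):
        """Wrap the command module and inject a log id from the task uuid."""

        def run(self, tmp=None, task_vars=None):
            task_vars = task_vars or {}
            host = task_vars.get('inventory_hostname', 'localhost')
            # The task uuid is stable for the task regardless of how
            # many hosts it runs on, so combining it with the host name
            # gives a unique log id per (task, host) pair.
            self._task.args['zuul_log_id'] = '%s-%s' % (self._task._uuid, host)
            return self._execute_module(module_name='command',
                                        module_args=self._task.args,
                                        task_vars=task_vars)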
This is no longer needed.
Depends-On: https://review.openstack.org/560982
Change-Id: I927ab285761bf711d7a223816a44b59bb026faa8
yarn drives package and dependency management. webpack handles
bundling, minification, and transpiling down to browser-acceptable
javascript, while allowing more modern javascript features like
import statements.
There are some really neat things in the webpack dev server. CSS
changes, for instance, get applied immediately without a refresh. Other
things, like the jquery plugin, do need a refresh, but that happens
automatically whenever a file changes.
As a followup, we can also consider turning the majority of the status page
into a webpack library that other people can depend on as a mechanism
for direct use. Things like that haven't been touched, because letting
folks poke at the existing, familiar status page with the new tools and
without too many changes seems like a good way for people to
learn/understand the stack.
Move things so that the built content gets put
into zuul/web/static so that the built-in static serving from zuul-web
can serve the files.
Update MANIFEST.in so that if npm run build:dist is run before the
python setup.py sdist, the built html/javascript content will be
included in the source tarball.
Add a pbr hook so that if yarn is installed, javascript content will be
built before the tarball.
Add a zuul job with a success url that contains a source_url
pointing to the live v3 data.
This adds a framework for verifying that we can serve the web app
urls and their dependencies for all of the various ways we want to
support folks hosting zuul-web.
It includes a very simple reverse proxy server for approximating
what we do in openstack to "white label" the Zuul service -- that
is, hide the multitenancy aspect and present the single tenant
at the site root.
We can run similar tests without the proxy to ensure the default,
multi-tenant view works as well.
Add babel transpiling to enable use of ES6 features
ECMAScript6 has a bunch of nice things, like block scoped variables,
const, template strings and classes. Babel is a javascript transpiler
which webpack can use to allow us to write using modern javascript but
the resulting code to still work on older browsers.
Use the babel-plugin-angularjs-annotate so that angular's dependency
injection doesn't get broken by babel's transpiling (which would
otherwise rename variables in a way that prevents angular from
finding them).
While we're at it, replace our use of var with let (let is the new
block-scoped version of var) and toss in some use of const and template
strings for good measure.
Add StandardJS eslint config for linting
JavaScript Standard Style is a code style similar to pep8/flake8. It's
being added here not because of the pep8 part, but because the pyflakes
equivalent can catch real errors. This uses the babel-eslint parser
since we're using Babel to transpile already.
This auto-formats the existing code with:
npm run format
Rather than using StandardJS directly through the 'standard' package,
use the standardjs eslint plugin so that we can ignore the camelCase
rule (and any other rule that might emerge in the future)
Many of the under_score/camelCase occurrences were fixed in a previous
version of the patch. Since the prevailing zuul style is camelCase
methods anyway, those fixes were kept. That warning has now been
disabled.
Other things, such as == vs. === and ensuring template
strings are in backticks, are fixed.
Ignore indentation errors for now - we'll fix them at the end of this
stack and then remove the exclusion.
Add a 'format' npm run target that will run the eslint command with
--fix for ease of fixing reported issues.
Add a 'lint' npm run target and a 'lint' environment that runs with
linting turned to errors. The next patch makes the lint environment more
broadly useful.
When we run lint, also run the BundleAnalyzerPlugin and set the
success-url to the report.
Add an angular controller for the status and stream pages
Wrap the status and stream page construction with an angular controller
so that all the javascript can be bundled in a single file.
Building the files locally is wonderful and all, but what we really want
is to make a tarball that has the built code so that it can be deployed.
Put it in the root source dir so that it can be used with the zuul
fetch-javascript-tarball role.
Also, replace the custom npm job with the new build-javascript-content
job which naturally grabs the content we want.
Make a 'main.js' file that imports the other three so that we just have
a single bundle. Then, add a 'vendor' entry in the common webpack file
and use the CommonsChunkPlugin to extract dependencies into their own
bundle. A second CommonsChunkPlugin entry pulls out a little bit of
metadata that would otherwise cause the main and vendor chunks to change
even with no source change. Then add chunkhash into the filename. This
way the files themselves can be aggressively cached.
This all follows recommendations from https://webpack.js.org/guides/caching/
https://webpack.js.org/guides/code-splitting/ and
https://webpack.js.org/guides/output-management/
Change-Id: I2e1230783fe57f1bc3b7818460463df1e659936b
Co-Authored-By: Tristan Cacqueray <tdecacqu@redhat.com>
Co-Authored-By: James E. Blair <jeblair@redhat.com>
Squashed changes:
- Use 'inventory' instead of 'hostfile' in ansible.cfg;
  'hostfile' is deprecated.
- Use 'os.environ.copy()' in zuul_return.py, since the previous
  approach now causes Ansible 2.4 to throw an exception deep within
  module.exit_json().
Change-Id: I0a52c9e169a54d24a7b361010045fb10211418b7
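For the second item, a minimal illustration of the pattern (not the
actual zuul_return.py code): copy the process environment into a
plain dict before modifying it, so os.environ itself is never mutated.

    import os
    import subprocess

    # Work on a detached copy; changes never leak back into os.environ,
    # which other code (such as Ansible's module exit path) may inspect.
    env = os.environ.copy()
    env['ZUUL_EXAMPLE'] = 'value'  # hypothetical variable for illustration
    subprocess.run(['env'], env=env, check=True)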
The log stream is read in chunked blocks. When the log stream
contains multi-byte unicode characters, a character can be split
across buffer boundaries. This can break the decode step with an
exception [1]. It can be fixed by treating the buffer as binary and
decoding the resulting lines.
Further, we must expect that the data also contains binary content.
To cope with this, harden the final decoding by adding
'backslashreplace'. This replaces every occurrence of an undecodable
byte with an appropriate escape sequence. This way we retain all the
information (even binary) while still being able to decode the
stream.
[1]: Log output
Ansible output: b'Exception in thread Thread-10:'
Ansible output: b'Traceback (most recent call last):'
Ansible output: b' File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner'
Ansible output: b' self.run()'
Ansible output: b' File "/usr/lib/python3.5/threading.py", line 862, in run'
Ansible output: b' self._target(*self._args, **self._kwargs)'
Ansible output: b' File "/var/lib/zuul/ansible/zuul/ansible/callback/zuul_stream.py", line 140, in _read_log'
Ansible output: b' more = s.recv(4096).decode("utf-8")'
Ansible output: b"UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 4094-4095: unexpected end of data"
Ansible output: b''
Change-Id: I568ede2a2a4a64fd3a98480cebcbc2e86c54a2cf
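A minimal sketch of the buffering and decoding technique described
above; the host, port, and chunk size are placeholders rather than
zuul_stream's actual values:

    import socket

    def read_log_lines(host='127.0.0.1', port=19885):
        # Accumulate raw bytes and split on newlines, decoding only
        # complete lines with errors='backslashreplace'.  A multi-byte
        # character split across recv() chunks, or plain binary data,
        # can then never raise UnicodeDecodeError.
        buf = b''
        with socket.create_connection((host, port)) as s:
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                buf += chunk
                while b'\n' in buf:
                    line, buf = buf.split(b'\n', 1)
                    yield line.decode('utf-8', errors='backslashreplace')
        if buf:
            yield buf.decode('utf-8', errors='backslashreplace')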
These are the things one does after running the script. Update the job
to run the actual script we use to generate the data, in the same way
we run it.
Change-Id: I62d75d561efbb290d2fccbabf4fabfbf705e6288
Turns out 99 isn't a good prefix.
Also, remove the move argument now that we're merging with the existing
projects.yaml.
Also, stop running the zuul unit tests on migration script changes;
they are not relevant there.
Change-Id: I10ed8cae64c82ed5afd01bb03a74ffc4fd2d87ee