121 files changed, 5771 insertions, 1345 deletions
diff --git a/.zuul.yaml b/.zuul.yaml
index 041681a7b..7473ad3db 100644
--- a/.zuul.yaml
+++ b/.zuul.yaml
@@ -36,6 +36,8 @@
           irrelevant-files:
             - zuul/cmd/migrate.py
             - playbooks/zuul-migrate/.*
+          vars:
+            sphinx_python: python3
         - tox-cover:
             irrelevant-files:
               - zuul/cmd/migrate.py
@@ -53,6 +55,8 @@
           irrelevant-files:
             - zuul/cmd/migrate.py
             - playbooks/zuul-migrate/.*
+          vars:
+            sphinx_python: python3
         - tox-pep8
         - tox-py35:
             irrelevant-files:
@@ -61,5 +65,5 @@
             - zuul-stream-functional
     post:
       jobs:
-        - publish-openstack-sphinx-docs-infra
+        - publish-openstack-sphinx-docs-infra-python3
         - publish-openstack-python-branch-tarball
diff --git a/README.rst b/README.rst
index 52b89dfb6..8d0066530 100644
--- a/README.rst
+++ b/README.rst
@@ -10,6 +10,14 @@ preparation for the third major version of Zuul. We call this effort
 The latest documentation for Zuul v3 is published at:
 https://docs.openstack.org/infra/zuul/feature/zuulv3/
 
+If you are looking for the Edge routing service named Zuul that is
+related to Netflix, it can be found here:
+https://github.com/Netflix/zuul
+
+If you are looking for the Javascript testing tool named Zuul, it
+can be found here:
+https://github.com/defunctzombie/zuul
+
 Contributing
 ------------
diff --git a/bindep.txt b/bindep.txt
index 85254b4cc..3dcc3e7cd 100644
--- a/bindep.txt
+++ b/bindep.txt
@@ -8,7 +8,7 @@ openssl [test]
 zookeeperd [platform:dpkg]
 build-essential [platform:dpkg]
 gcc [platform:rpm]
-graphviz [test]
+graphviz [doc]
 libssl-dev [platform:dpkg]
 openssl-devel [platform:rpm]
 libffi-dev [platform:dpkg]
diff --git a/doc/source/admin/components.rst b/doc/source/admin/components.rst
index 86b01efc3..18bbfa3f4 100644
--- a/doc/source/admin/components.rst
+++ b/doc/source/admin/components.rst
@@ -287,7 +287,7 @@ The following section of ``zuul.conf`` is used by the merger:
 
 .. attr:: merger
 
-   ,, attr:: command_socket
+   .. attr:: command_socket
      :default: /var/lib/zuul/merger.socket
 
      Path to command socket file for the merger process.
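As a quick illustration of the options touched in the hunks above, a ``zuul.conf`` fragment for the merger and executor might look like the sketch below. The section and option names come from the documentation in this diff; the socket path value is only a placeholder, not part of this change:

```ini
# Illustrative zuul.conf fragment -- option names are taken from the
# components.rst hunks in this diff; values are placeholders.
[merger]
command_socket=/var/lib/zuul/merger.socket

[executor]
# 7900 is the new default finger_port documented by this change.
finger_port=7900
```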
@@ -408,7 +408,7 @@ The following sections of ``zuul.conf`` are used by the executor:
 
      Path to command socket file for the executor process.
 
   .. attr:: finger_port
-     :default: 79
+     :default: 7900
 
      Port to use for finger log streamer.
 
@@ -451,13 +451,6 @@ The following sections of ``zuul.conf`` are used by the executor:
 
      SSH private key file to be used when logging into worker nodes.
 
-  .. attr:: user
-     :default: zuul
-
-     User ID for the zuul-executor process.  In normal operation as a
-     daemon, the executor should be started as the ``root`` user, but
-     it will drop privileges to this user during startup.
-
 .. _admin_sitewide_variables:
 
 .. attr:: variables
 
@@ -627,3 +620,65 @@ Operation
 
 To start the web server, run ``zuul-web``.  To stop it, kill the PID
 which was saved in the pidfile specified in the configuration.
+
+Finger Gateway
+--------------
+
+The Zuul finger gateway listens on the standard finger port (79) for
+finger requests specifying a build UUID for which it should stream log
+results.  The gateway will determine which executor is currently running that
+build and query that executor for the log stream.
+
+This is intended to be used with the standard finger command line client.
+For example::
+
+    finger UUID@zuul.example.com
+
+The above would stream the logs for the build identified by `UUID`.
+
+Configuration
+~~~~~~~~~~~~~
+
+In addition to the common configuration sections, the following
+sections of ``zuul.conf`` are used by the finger gateway:
+
+.. attr:: fingergw
+
+   .. attr:: command_socket
+      :default: /var/lib/zuul/fingergw.socket
+
+      Path to command socket file for the finger gateway process.
+
+   .. attr:: listen_address
+      :default: all addresses
+
+      IP address or domain name on which to listen.
+
+   .. attr:: log_config
+
+      Path to log config file for the finger gateway process.
+
+   .. attr:: pidfile
+      :default: /var/run/zuul-fingergw/zuul-fingergw.pid
+
+      Path to PID lock file for the finger gateway process.
+
+   .. attr:: port
+      :default: 79
+
+      Port to use for the finger gateway.  Note that since command line
+      finger clients cannot usually specify the port, leaving this set to
+      the default value is highly recommended.
+
+   .. attr:: user
+      :default: zuul
+
+      User ID for the zuul-fingergw process.  In normal operation as a
+      daemon, the finger gateway should be started as the ``root`` user, but
+      it will drop privileges to this user during startup.
+
+Operation
+~~~~~~~~~
+
+To start the finger gateway, run ``zuul-fingergw``.  To stop it, kill the
+PID which was saved in the pidfile specified in the configuration.
diff --git a/doc/source/admin/connections.rst b/doc/source/admin/connections.rst
index 29ca3be7c..55ac629c1 100644
--- a/doc/source/admin/connections.rst
+++ b/doc/source/admin/connections.rst
@@ -55,6 +55,7 @@ Zuul includes the following drivers:
 
    drivers/gerrit
    drivers/github
+   drivers/git
    drivers/smtp
    drivers/sql
    drivers/timer
diff --git a/doc/source/admin/drivers/git.rst b/doc/source/admin/drivers/git.rst
new file mode 100644
index 000000000..e0acec116
--- /dev/null
+++ b/doc/source/admin/drivers/git.rst
@@ -0,0 +1,59 @@
+:title: Git Driver
+
+Git
+===
+
+This driver can be used to load Zuul configuration from public Git repositories,
+for instance from ``openstack-infra/zuul-jobs``, which is suitable for use by
+any Zuul system.  It can also be used to trigger jobs from ``ref-updated`` events
+in a pipeline.
+
+Connection Configuration
+------------------------
+
+The supported options in ``zuul.conf`` connections are:
+
+.. attr:: <git connection>
+
+   .. attr:: driver
+      :required:
+
+      .. value:: git
+
+         The connection must set ``driver=git`` for Git connections.
+
+   .. attr:: baseurl
+
+      Path to the base Git URL.  Git repo names will be appended to it.
+
+   .. attr:: poll_delay
+      :default: 7200
+
+      The delay in seconds of the Git repositories polling loop.
+
+Trigger Configuration
+---------------------
+
+.. attr:: pipeline.trigger.<git source>
+
+   The dictionary passed to the Git pipeline ``trigger`` attribute
+   supports the following attributes:
+
+   .. attr:: event
+      :required:
+
+      Only ``ref-updated`` is supported.
+
+   .. attr:: ref
+
+      On ref-updated events, a ref such as ``refs/heads/master`` or
+      ``^refs/tags/.*$``.  This field is treated as a regular expression,
+      and multiple refs may be listed.
+
+   .. attr:: ignore-deletes
+      :default: true
+
+      When a ref is deleted, a ref-updated event is emitted with a
+      newrev of all zeros specified.  The ``ignore-deletes`` field is a
+      boolean value that describes whether or not these newrevs
+      trigger ref-updated events.
diff --git a/doc/source/admin/drivers/github.rst b/doc/source/admin/drivers/github.rst
index 8dd776414..4f46af694 100644
--- a/doc/source/admin/drivers/github.rst
+++ b/doc/source/admin/drivers/github.rst
@@ -18,9 +18,11 @@ the project's owner needs to know the zuul endpoint and the webhook secrets.
 Web-Hook
 ........
 
-To configure a project's `webhook events <https://developer.github.com/webhooks/creating/>`_:
+To configure a project's `webhook events
+<https://developer.github.com/webhooks/creating/>`_:
 
-* Set *Payload URL* to ``http://<zuul-hostname>/connection/<connection-name>/payload``.
+* Set *Payload URL* to
+  ``http://<zuul-hostname>/connection/<connection-name>/payload``.
 
 * Set *Content Type* to ``application/json``.
 
@@ -30,22 +32,27 @@ You will also need to have a GitHub user created for your zuul:
 
 * Zuul public key needs to be added to the GitHub account
 
-* A api_token needs to be created too, see this `article <See https://help.github.com/articles/creating-an-access-token-for-command-line-use/>`_
+* An api_token needs to be created too; see this `article
+  <https://help.github.com/articles/creating-an-access-token-for-command-line-use/>`_
 
 Then in the zuul.conf, set webhook_token and api_token.
 
 Application
 ...........
-To create a `GitHub application <https://developer.github.com/apps/building-integrations/setting-up-and-registering-github-apps/registering-github-apps/>`_:
+To create a `GitHub application
+<https://developer.github.com/apps/building-integrations/setting-up-and-registering-github-apps/registering-github-apps/>`_:
 
-* Go to your organization settings page to create the application, e.g.: https://github.com/organizations/my-org/settings/apps/new
+* Go to your organization settings page to create the application, e.g.:
+  https://github.com/organizations/my-org/settings/apps/new
 
 * Set GitHub App name to "my-org-zuul"
 
-* Set Setup URL to your setup documentation, when user install the application they are redirected to this url
+* Set Setup URL to your setup documentation; when users install the application
+  they are redirected to this url
 
-* Set Webhook URL to ``http://<zuul-hostname>/connection/<connection-name>/payload``.
+* Set Webhook URL to
+  ``http://<zuul-hostname>/connection/<connection-name>/payload``.
 
 * Create a Webhook secret
 
@@ -93,7 +100,8 @@ Then in the zuul.conf, set webhook_token, app_id and app_key.
 After restarting zuul-scheduler, verify in the 'Advanced' tab that the
 Ping payload works (green tick and 200 response)
 
-Users can now install the application using its public page, e.g.: https://github.com/apps/my-org-zuul
+Users can now install the application using its public page, e.g.:
+https://github.com/apps/my-org-zuul
 
 
 Connection Configuration
diff --git a/doc/source/admin/drivers/sql.rst b/doc/source/admin/drivers/sql.rst
index a269f5d2e..b9ce24bc9 100644
--- a/doc/source/admin/drivers/sql.rst
+++ b/doc/source/admin/drivers/sql.rst
@@ -43,6 +43,14 @@ The connection options for the SQL driver are:
      <http://docs.sqlalchemy.org/en/latest/core/pooling.html#setting-pool-recycle>`_
      for more information.
 
+   .. attr:: table_prefix
+      :default: ''
+
+      The string to prefix the table names.  This makes it possible to run
+      several zuul deployments against the same database.  This can be useful
+      if you rely on external databases which you don't have under control.
+      The default is to have no prefix.
+
 Reporter Configuration
 ----------------------
diff --git a/doc/source/admin/drivers/zuul.rst b/doc/source/admin/drivers/zuul.rst
index d95dffc9e..41535ee06 100644
--- a/doc/source/admin/drivers/zuul.rst
+++ b/doc/source/admin/drivers/zuul.rst
@@ -26,6 +26,12 @@ can simply be used by listing ``zuul`` as the trigger.
      When Zuul merges a change to a project, it generates this
      event for every open change in the project.
 
+      .. warning::
+
+         Triggering on this event can cause poor performance when
+         using the GitHub driver with a large number of
+         installations.
+
    .. value:: parent-change-enqueued
 
      When Zuul enqueues a change into any pipeline, it generates
diff --git a/doc/source/admin/tenants.rst b/doc/source/admin/tenants.rst
index 47227501a..48e7ba8aa 100644
--- a/doc/source/admin/tenants.rst
+++ b/doc/source/admin/tenants.rst
@@ -105,7 +105,7 @@ configuration.  An example tenant definition is:
    changes in response to proposed changes, and Zuul will read
    configuration files in all of their branches.
 
-   .. attr:: <project>:
+   .. attr:: <project>
 
      The items in the list may either be simple string values of
      the project names, or a dictionary with the project name as
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 677e9584c..6e1b52e21 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -12,6 +12,14 @@ are installing or operating a Zuul system, you will also find the
 :doc:`admin/index` useful.  If you want help make Zuul itself better,
 take a look at the :doc:`developer/index`.
 
+If you are looking for the Edge routing service named Zuul that is
+related to Netflix, it can be found here:
+https://github.com/Netflix/zuul
+
+If you are looking for the Javascript testing tool named Zuul, it
+can be found here:
+https://github.com/defunctzombie/zuul
+
 Contents:
 
 .. toctree::
diff --git a/doc/source/user/config.rst b/doc/source/user/config.rst
index d1711078f..525cb3892 100644
--- a/doc/source/user/config.rst
+++ b/doc/source/user/config.rst
@@ -616,92 +616,6 @@ Here is an example of two job definitions:
      tags from all the jobs and variants used in constructing the
      frozen job, with no duplication.
 
-   .. attr:: branches
-
-      A regular expression (or list of regular expressions) which
-      describe on what branches a job should run (or in the case of
-      variants: to alter the behavior of a job for a certain branch).
-
-      If there is no job definition for a given job which matches the
-      branch of an item, then that job is not run for the item.
-      Otherwise, all of the job variants which match that branch (and
-      any other selection criteria) are used when freezing the job.
-
-      This example illustrates a job called *run-tests* which uses a
-      nodeset based on the current release of an operating system to
-      perform its tests, except when testing changes to the stable/2.0
-      branch, in which case it uses an older release:
-
-      .. code-block:: yaml
-
-         - job:
-             name: run-tests
-             nodeset: current-release
-
-         - job:
-             name: run-tests
-             branches: stable/2.0
-             nodeset: old-release
-
-      In some cases, Zuul uses an implied value for the branch
-      specifier if none is supplied:
-
-      * For a job definition in a :term:`config-project`, no implied
-        branch specifier is used.  If no branch specifier appears, the
-        job applies to all branches.
-
-      * In the case of an :term:`untrusted-project`, if the project
-        has only one branch, no implied branch specifier is applied to
-        :ref:`job` definitions.  If the project has more than one
-        branch, the branch containing the job definition is used as an
-        implied branch specifier.
-
-      * In the case of a job variant defined within a :ref:`project`,
-        if the project definition is in a :term:`config-project`, no
-        implied branch specifier is used.  If it appears in an
-        :term:`untrusted-project`, with no branch specifier, the
-        branch containing the project definition is used as an implied
-        branch specifier.
-
-      * In the case of a job variant defined within a
-        :ref:`project-template`, if no branch specifier appears, the
-        implied branch containing the project-template definition is
-        used as an implied branch specifier.  This means that
-        definitions of the same project-template on different branches
-        may run different jobs.
-
-        When that project-template is used by a :ref:`project`
-        definition within a :term:`untrusted-project`, the branch
-        containing that project definition is combined with the branch
-        specifier of the project-template.  This means it is possible
-        for a project to use a template on one branch, but not on
-        another.
-
-      This allows for the very simple and expected workflow where if a
-      project defines a job on the ``master`` branch with no branch
-      specifier, and then creates a new branch based on ``master``,
-      any changes to that job definition within the new branch only
-      affect that branch, and likewise, changes to the master branch
-      only affect it.
-
-      See :attr:`pragma.implied-branch-matchers` for how to override
-      this behavior on a per-file basis.
-
-   .. attr:: files
-
-      This attribute indicates that the job should only run on changes
-      where the specified files are modified.  This is a regular
-      expression or list of regular expressions.
-
-   .. attr:: irrelevant-files
-
-      This is a negative complement of **files**.  It indicates that
-      the job should run unless *all* of the files changed match this
-      list.  In other words, if the regular expression ``docs/.*`` is
-      supplied, then this job will not run if the only files changed
-      are in the docs directory.  A regular expression or list of
-      regular expressions.
-
    .. attr:: secrets
 
      A list of secrets which may be used by the job.  A
@@ -805,13 +719,6 @@ Here is an example of two job definitions:
      are run after the parent's.  See :ref:`job` for more
      information.
 
-      .. warning::
-
-         If the path as specified does not exist, Zuul will try
-         appending the extensions ``.yaml`` and ``.yml``.  This
-         behavior is deprecated and will be removed in the future all
-         playbook paths should include the file extension.
-
    .. attr:: post-run
 
      The name of a playbook or list of playbooks to run after the
@@ -822,13 +729,6 @@ Here is an example of two job definitions:
      playbooks are run before the parent's.  See :ref:`job` for more
      information.
 
-      .. warning::
-
-         If the path as specified does not exist, Zuul will try
-         appending the extensions ``.yaml`` and ``.yml``.  This
-         behavior is deprecated and will be removed in the future all
-         playbook paths should include the file extension.
-
    .. attr:: run
 
      The name of the main playbook for this job.  If it is not
@@ -840,13 +740,6 @@ Here is an example of two job definitions:
 
         run: playbooks/job-playbook.yaml
 
-      .. warning::
-
-         If the path as specified does not exist, Zuul will try
-         appending the extensions ``.yaml`` and ``.yml``.  This
-         behavior is deprecated and will be removed in the future all
-         playbook paths should include the file extension.
-
    .. attr:: roles
 
      A list of Ansible roles to prepare for the job.  Because a job
@@ -985,6 +878,99 @@ Here is an example of two job definitions:
      it will remain set for all child jobs and variants (it can not
      be set to ``false``).
 
+   .. _matchers:
+
+   The following job attributes are considered "matchers".  They are
+   not inherited in the usual manner; instead, these attributes are
+   used to determine whether a specific variant is used when
+   running a job.
+
+   .. attr:: branches
+
+      A regular expression (or list of regular expressions) which
+      describe on what branches a job should run (or in the case of
+      variants: to alter the behavior of a job for a certain branch).
+
+      If there is no job definition for a given job which matches the
+      branch of an item, then that job is not run for the item.
+      Otherwise, all of the job variants which match that branch (and
+      any other selection criteria) are used when freezing the job.
+
+      This example illustrates a job called *run-tests* which uses a
+      nodeset based on the current release of an operating system to
+      perform its tests, except when testing changes to the stable/2.0
+      branch, in which case it uses an older release:
+
+      .. code-block:: yaml
+
+         - job:
+             name: run-tests
+             nodeset: current-release
+
+         - job:
+             name: run-tests
+             branches: stable/2.0
+             nodeset: old-release
+
+      In some cases, Zuul uses an implied value for the branch
+      specifier if none is supplied:
+
+      * For a job definition in a :term:`config-project`, no implied
+        branch specifier is used.  If no branch specifier appears, the
+        job applies to all branches.
+
+      * In the case of an :term:`untrusted-project`, if the project
+        has only one branch, no implied branch specifier is applied to
+        :ref:`job` definitions.  If the project has more than one
+        branch, the branch containing the job definition is used as an
+        implied branch specifier.
+
+      * In the case of a job variant defined within a :ref:`project`,
+        if the project definition is in a :term:`config-project`, no
+        implied branch specifier is used.  If it appears in an
+        :term:`untrusted-project`, with no branch specifier, the
+        branch containing the project definition is used as an implied
+        branch specifier.
+
+      * In the case of a job variant defined within a
+        :ref:`project-template`, if no branch specifier appears, the
+        implied branch containing the project-template definition is
+        used as an implied branch specifier.  This means that
+        definitions of the same project-template on different branches
+        may run different jobs.
+
+        When that project-template is used by a :ref:`project`
+        definition within an :term:`untrusted-project`, the branch
+        containing that project definition is combined with the branch
+        specifier of the project-template.  This means it is possible
+        for a project to use a template on one branch, but not on
+        another.
+
+      This allows for the very simple and expected workflow where if a
+      project defines a job on the ``master`` branch with no branch
+      specifier, and then creates a new branch based on ``master``,
+      any changes to that job definition within the new branch only
+      affect that branch, and likewise, changes to the master branch
+      only affect it.
+
+      See :attr:`pragma.implied-branch-matchers` for how to override
+      this behavior on a per-file basis.
+
+   .. attr:: files
+
+      This matcher indicates that the job should only run on changes
+      where the specified files are modified.  This is a regular
+      expression or list of regular expressions.
+
+   .. attr:: irrelevant-files
+
+      This matcher is a negative complement of **files**.  It
+      indicates that the job should run unless *all* of the files
+      changed match this list.  In other words, if the regular
+      expression ``docs/.*`` is supplied, then this job will not run
+      if the only files changed are in the docs directory.  A regular
+      expression or list of regular expressions.
+
 .. _project:
 
 Project
@@ -1053,11 +1039,12 @@ pipeline.
 
    .. attr:: name
-      :required:
 
      The name of the project.  If Zuul is configured with two or more
      unique projects with the same name, the canonical hostname for
      the project should be included (e.g., `git.example.com/foo`).
+      If not given it is implicitly derived from the project where this
+      is defined.
 
    .. attr:: templates
 
@@ -1118,6 +1105,14 @@ pipeline.
      changes which break the others.  This is a free-form string;
      just set the same value for each group of projects.
 
+   .. attr:: debug
+
+      If this is set to `true`, Zuul will include debugging
+      information in reports it makes about items in the pipeline.
+      This should not normally be set, but in situations where it is
+      difficult to determine why Zuul did or did not run a certain
+      job, the additional information this provides may help.
+
 .. _project-template:
 
 Project Template
@@ -1339,7 +1334,7 @@ pragma directives may not be set and then unset within the same file.
 
 .. attr:: pragma
 
-   The pragma item currently only supports one attribute:
+   The pragma item currently supports the following attributes:
 
    .. attr:: implied-branch-matchers
 
@@ -1354,3 +1349,43 @@ pragma directives may not be set and then unset within the same file.
 
      Note that if a job contains an explicit branch matcher, it will
      be used regardless of the value supplied here.
+
+   .. attr:: implied-branches
+
+      This is a list of regular expressions, just as
+      :attr:`job.branches`, which may be used to supply the value of
+      the implied branch matcher for all jobs in a file.
+
+      This may be useful if two projects share jobs but have
+      dissimilar branch names.  If, for example, two projects have
+      stable maintenance branches with dissimilar names, but both
+      should use the same job variants, this directive may be used to
+      indicate that all of the jobs defined in the stable branch of
+      the first project may also be used for the stable branch of the
+      other.  For example:
+
+      .. code-block:: yaml
+
+         - pragma:
+             implied-branches:
+               - stable/foo
+               - stable/bar
+
+      The above code, when added to the ``stable/foo`` branch of a
+      project would indicate that the job variants described in that
+      file should not only be used for changes to ``stable/foo``, but
+      also on changes to ``stable/bar``, which may be in another
+      project.
+
+      Note that if a job contains an explicit branch matcher, it will
+      be used regardless of the value supplied here.
+ + Note also that the presence of `implied-branches` does not + automatically set `implied-branch-matchers`. Zuul will still + decide if implied branch matchers are warranted at all, using + the heuristics described in :attr:`job.branches`, and only use + the value supplied here if that is the case. If you want to + declare specific implied branches on, for example, a + :term:`config-project` project (which normally would not use + implied branches), you must set `implied-branch-matchers` as + well. diff --git a/doc/source/user/encryption.rst b/doc/source/user/encryption.rst index 7ced58900..d45195ffa 100644 --- a/doc/source/user/encryption.rst +++ b/doc/source/user/encryption.rst @@ -15,9 +15,8 @@ Each project in Zuul has its own automatically generated RSA keypair which can be used by anyone to encrypt a secret and only Zuul is able to decrypt it. Zuul serves each project's public key using its build-in webserver. They can be fetched at the path -``/keys/<source>/<project>.pub`` where ``<project>`` is the name of a -project and ``<source>`` is the name of that project's connection in -the main Zuul configuration file. +``/<tenant>/<project>.pub`` where ``<project>`` is the canonical name +of a project and ``<tenant>`` is the name of a tenant with that project. Zuul currently supports one encryption scheme, PKCS#1 with OAEP, which can not store secrets longer than the 3760 bits (derived from the key diff --git a/doc/source/user/jobs.rst b/doc/source/user/jobs.rst index 278c4f453..4b6255b20 100644 --- a/doc/source/user/jobs.rst +++ b/doc/source/user/jobs.rst @@ -540,7 +540,8 @@ Return Values A job may return some values to Zuul to affect its behavior and for use by other jobs.. To return a value, use the ``zuul_return`` -Ansible module in a job playbook. For example: +Ansible module in a job playbook running on the executor 'localhost' node. +For example: .. 
code-block:: yaml diff --git a/etc/status/public_html/zuul.app.js b/etc/status/public_html/zuul.app.js index 7ceb2dda7..bf90a4db7 100644 --- a/etc/status/public_html/zuul.app.js +++ b/etc/status/public_html/zuul.app.js @@ -28,8 +28,6 @@ function zuul_build_dom($, container) { // Build a default-looking DOM var default_layout = '<div class="container">' - + '<h1>Zuul Status</h1>' - + '<p>Real-time status monitor of Zuul, the pipeline manager between Gerrit and Workers.</p>' + '<div class="zuul-container" id="zuul-container">' + '<div style="display: none;" class="alert" id="zuul_msg"></div>' + '<button class="btn pull-right zuul-spinner">updating <span class="glyphicon glyphicon-refresh"></span></button>' diff --git a/requirements.txt b/requirements.txt index 4b8be3cb2..193c64e71 100644 --- a/requirements.txt +++ b/requirements.txt @@ -7,11 +7,7 @@ PyYAML>=3.1.0 Paste WebOb>=1.2.3 paramiko>=1.8.0,<2.0.0 -# Using a local fork of gitpython until at least these changes are in a -# release. -# https://github.com/gitpython-developers/GitPython/pull/682 -# https://github.com/gitpython-developers/GitPython/pull/686 -git+https://github.com/jeblair/GitPython.git@zuul#egg=GitPython +GitPython>=2.1.8 python-daemon>=2.0.4,<2.1.0 extras statsd>=1.0.0,<3.0 @@ -29,5 +25,6 @@ cryptography>=1.6 cachecontrol pyjwt iso8601 +yarl>=0.11,<1.0 aiohttp uvloop;python_version>='3.5' @@ -28,6 +28,7 @@ console_scripts = zuul-bwrap = zuul.driver.bubblewrap:main zuul-web = zuul.cmd.web:main zuul-migrate = zuul.cmd.migrate:main + zuul-fingergw = zuul.cmd.fingergw:main [build_sphinx] source-dir = doc/source diff --git a/tests/base.py b/tests/base.py index ea01d20a1..c4492426f 100755 --- a/tests/base.py +++ b/tests/base.py @@ -40,7 +40,6 @@ import time import uuid import urllib - import git import gear import fixtures @@ -53,6 +52,7 @@ import testtools.content_type from git.exc import NoSuchPathError import yaml +import tests.fakegithub import zuul.driver.gerrit.gerritsource as gerritsource import 
zuul.driver.gerrit.gerritconnection as gerritconnection import zuul.driver.github.githubconnection as githubconnection @@ -170,7 +170,7 @@ class FakeGerritChange(object): 'status': status, 'subject': subject, 'submitRecords': [], - 'url': 'https://hostname/%s' % number} + 'url': 'https://%s/%s' % (self.gerrit.server, number)} self.upstream_root = upstream_root self.addPatchset(files=files, parent=parent) @@ -559,14 +559,13 @@ class FakeGerritConnection(gerritconnection.GerritConnection): return change.query() return {} - def simpleQuery(self, query): - self.log.debug("simpleQuery: %s" % query) - self.queries.append(query) + def _simpleQuery(self, query): if query.startswith('change:'): # Query a specific changeid changeid = query[len('change:'):] l = [change.query() for change in self.changes.values() - if change.data['id'] == changeid] + if (change.data['id'] == changeid or + change.data['number'] == changeid)] elif query.startswith('message:'): # Query the content of a commit message msg = query[len('message:'):].strip() @@ -577,6 +576,20 @@ class FakeGerritConnection(gerritconnection.GerritConnection): l = [change.query() for change in self.changes.values()] return l + def simpleQuery(self, query): + self.log.debug("simpleQuery: %s" % query) + self.queries.append(query) + results = [] + if query.startswith('(') and 'OR' in query: + query = query[1:-2] + for q in query.split(' OR '): + for r in self._simpleQuery(q): + if r not in results: + results.append(r) + else: + results = self._simpleQuery(query) + return results + def _start_watcher_thread(self, *args, **kw): pass @@ -601,98 +614,6 @@ class GithubChangeReference(git.Reference): _points_to_commits_only = True -class FakeGithub(object): - - class FakeUser(object): - def __init__(self, login): - self.login = login - self.name = "Github User" - self.email = "github.user@example.com" - - class FakeBranch(object): - def __init__(self, branch='master'): - self.name = branch - - class FakeStatus(object): - def 
__init__(self, state, url, description, context, user): - self._state = state - self._url = url - self._description = description - self._context = context - self._user = user - - def as_dict(self): - return { - 'state': self._state, - 'url': self._url, - 'description': self._description, - 'context': self._context, - 'creator': { - 'login': self._user - } - } - - class FakeCommit(object): - def __init__(self): - self._statuses = [] - - def set_status(self, state, url, description, context, user): - status = FakeGithub.FakeStatus( - state, url, description, context, user) - # always insert a status to the front of the list, to represent - # the last status provided for a commit. - self._statuses.insert(0, status) - - def statuses(self): - return self._statuses - - class FakeRepository(object): - def __init__(self): - self._branches = [FakeGithub.FakeBranch()] - self._commits = {} - - def branches(self, protected=False): - if protected: - # simulate there is no protected branch - return [] - return self._branches - - def create_status(self, sha, state, url, description, context, - user='zuul'): - # Since we're bypassing github API, which would require a user, we - # default the user as 'zuul' here. - commit = self._commits.get(sha, None) - if commit is None: - commit = FakeGithub.FakeCommit() - self._commits[sha] = commit - commit.set_status(state, url, description, context, user) - - def commit(self, sha): - commit = self._commits.get(sha, None) - if commit is None: - commit = FakeGithub.FakeCommit() - self._commits[sha] = commit - return commit - - def __init__(self): - self._repos = {} - - def user(self, login): - return self.FakeUser(login) - - def repository(self, owner, proj): - return self._repos.get((owner, proj), None) - - def repo_from_project(self, project): - # This is a convenience method for the tests. 
- owner, proj = project.split('/') - return self.repository(owner, proj) - - def addProject(self, project): - owner, proj = project.name.split('/') - self._repos[(owner, proj)] = self.FakeRepository() - - class FakeGithubPullRequest(object): def __init__(self, github, number, project, branch, @@ -720,6 +641,7 @@ class FakeGithubPullRequest(object): self.is_merged = False self.merge_message = None self.state = 'open' + self.url = 'https://%s/%s/pull/%s' % (github.server, project, number) self._createPRRef() self._addCommitToRepo(files=files) self._updateTimeStamp() @@ -1018,18 +940,18 @@ class FakeGithubConnection(githubconnection.GithubConnection): log = logging.getLogger("zuul.test.FakeGithubConnection") def __init__(self, driver, connection_name, connection_config, - upstream_root=None): + changes_db=None, upstream_root=None): super(FakeGithubConnection, self).__init__(driver, connection_name, connection_config) self.connection_name = connection_name self.pr_number = 0 - self.pull_requests = [] + self.pull_requests = changes_db self.statuses = {} self.upstream_root = upstream_root self.merge_failure = False self.merge_not_allowed_count = 0 self.reports = [] - self.github_client = FakeGithub() + self.github_client = tests.fakegithub.FakeGithub(changes_db) def getGithubClient(self, project=None, @@ -1042,7 +964,7 @@ class FakeGithubConnection(githubconnection.GithubConnection): pull_request = FakeGithubPullRequest( self, self.pr_number, project, branch, subject, self.upstream_root, files=files, body=body) - self.pull_requests.append(pull_request) + self.pull_requests[self.pr_number] = pull_request return pull_request def getPushEvent(self, project, ref, old_rev=None, new_rev=None, @@ -1089,35 +1011,8 @@ class FakeGithubConnection(githubconnection.GithubConnection): super(FakeGithubConnection, self).addProject(project) self.getGithubClient(project).addProject(project) - def getPull(self, project, number): - pr = self.pull_requests[number - 1] - data = { - 'number': 
number, - 'title': pr.subject, - 'updated_at': pr.updated_at, - 'base': { - 'repo': { - 'full_name': pr.project - }, - 'ref': pr.branch, - }, - 'mergeable': True, - 'state': pr.state, - 'head': { - 'sha': pr.head_sha, - 'repo': { - 'full_name': pr.project - } - }, - 'files': pr.files, - 'labels': pr.labels, - 'merged': pr.is_merged, - 'body': pr.body - } - return data - def getPullBySha(self, sha, project): - prs = list(set([p for p in self.pull_requests if + prs = list(set([p for p in self.pull_requests.values() if sha == p.head_sha and project == p.project])) if len(prs) > 1: raise Exception('Multiple pulls found with head sha: %s' % sha) @@ -1125,12 +1020,12 @@ class FakeGithubConnection(githubconnection.GithubConnection): return self.getPull(pr.project, pr.number) def _getPullReviews(self, owner, project, number): - pr = self.pull_requests[number - 1] + pr = self.pull_requests[number] return pr.reviews def getRepoPermission(self, project, login): owner, proj = project.split('/') - for pr in self.pull_requests: + for pr in self.pull_requests.values(): pr_owner, pr_project = pr.project.split('/') if (pr_owner == owner and proj == pr_project): if login in pr.writers: @@ -1147,13 +1042,13 @@ class FakeGithubConnection(githubconnection.GithubConnection): def commentPull(self, project, pr_number, message): # record that this got reported self.reports.append((project, pr_number, 'comment')) - pull_request = self.pull_requests[pr_number - 1] + pull_request = self.pull_requests[pr_number] pull_request.addComment(message) def mergePull(self, project, pr_number, commit_message='', sha=None): # record that this got reported self.reports.append((project, pr_number, 'merge')) - pull_request = self.pull_requests[pr_number - 1] + pull_request = self.pull_requests[pr_number] if self.merge_failure: raise Exception('Pull request was not merged') if self.merge_not_allowed_count > 0: @@ -1173,32 +1068,15 @@ class FakeGithubConnection(githubconnection.GithubConnection): def 
labelPull(self, project, pr_number, label): # record that this got reported self.reports.append((project, pr_number, 'label', label)) - pull_request = self.pull_requests[pr_number - 1] + pull_request = self.pull_requests[pr_number] pull_request.addLabel(label) def unlabelPull(self, project, pr_number, label): # record that this got reported self.reports.append((project, pr_number, 'unlabel', label)) - pull_request = self.pull_requests[pr_number - 1] + pull_request = self.pull_requests[pr_number] pull_request.removeLabel(label) - def _getNeededByFromPR(self, change): - prs = [] - pattern = re.compile(r"Depends-On.*https://%s/%s/pull/%s" % - (self.server, change.project.name, - change.number)) - for pr in self.pull_requests: - if not pr.body: - body = '' - else: - body = pr.body - if pattern.search(body): - # Get our version of a pull so that it's a dict - pull = self.getPull(pr.project, pr.number) - prs.append(pull) - - return prs - class BuildHistory(object): def __init__(self, **kw): @@ -1432,7 +1310,8 @@ class RecordingAnsibleJob(zuul.executor.server.AnsibleJob): self.log.debug("hostlist") hosts = super(RecordingAnsibleJob, self).getHostList(args) for host in hosts: - host['host_vars']['ansible_connection'] = 'local' + if not host['host_vars'].get('ansible_connection'): + host['host_vars']['ansible_connection'] = 'local' hosts.append(dict( name=['localhost'], @@ -1738,6 +1617,11 @@ class FakeNodepool(object): executor='fake-nodepool') if 'fakeuser' in node_type: data['username'] = 'fakeuser' + if 'windows' in node_type: + data['connection_type'] = 'winrm' + if 'network' in node_type: + data['connection_type'] = 'network_cli' + data = json.dumps(data).encode('utf8') path = self.client.create(path, data, makepath=True, @@ -2162,6 +2046,7 @@ class ZuulTestCase(BaseTestCase): # Set a changes database so multiple FakeGerrit's can report back to # a virtual canonical database given by the configured hostname self.gerrit_changes_dbs = {} + self.github_changes_dbs = {} 
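The hunks above replace per-connection pull-request lists with a shared `changes_db` dict keyed by PR number, handed to every fake connection configured for the same server, so that one connection can observe pull requests opened through another. A minimal sketch of that pattern, using simplified stand-in classes rather than the real test fixtures:

```python
class FakePullRequest:
    def __init__(self, number, subject):
        self.number = number
        self.subject = subject


class FakeConnection:
    def __init__(self, changes_db):
        # Shared mutable dict: every connection for the same server
        # receives the same instance, acting as a canonical database.
        self.pull_requests = changes_db
        self.pr_number = 0

    def openFakePullRequest(self, subject):
        self.pr_number += 1
        pr = FakePullRequest(self.pr_number, subject)
        self.pull_requests[self.pr_number] = pr
        return pr


# One db per server name, as with setdefault(server, {}) in the tests.
changes_dbs = {}
db = changes_dbs.setdefault('github.com', {})
conn_a = FakeConnection(db)
conn_b = FakeConnection(db)

conn_a.openFakePullRequest('fix bug')
# conn_b sees the PR opened through conn_a because the dict is shared.
assert conn_b.pull_requests[1].subject == 'fix bug'
```

This is also why the lookups change from `self.pull_requests[pr_number - 1]` (list index) to `self.pull_requests[pr_number]` (dict key): the dict is keyed directly by PR number.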
def getGerritConnection(driver, name, config): db = self.gerrit_changes_dbs.setdefault(config['server'], {}) @@ -2177,7 +2062,10 @@ class ZuulTestCase(BaseTestCase): getGerritConnection)) def getGithubConnection(driver, name, config): + server = config.get('server', 'github.com') + db = self.github_changes_dbs.setdefault(server, {}) con = FakeGithubConnection(driver, name, config, + changes_db=db, upstream_root=self.upstream_root) self.event_queues.append(con.event_queue) setattr(self, 'fake_' + name, con) @@ -2421,7 +2309,7 @@ class ZuulTestCase(BaseTestCase): 'pydevd.CommandThread', 'pydevd.Reader', 'pydevd.Writer', - 'FingerStreamer', + 'socketserver_Thread', ] threads = [t for t in threading.enumerate() if t.name not in whitelist] @@ -2833,6 +2721,16 @@ class ZuulTestCase(BaseTestCase): os.path.join(FIXTURE_DIR, f.name)) self.setupAllProjectKeys() + def addTagToRepo(self, project, name, sha): + path = os.path.join(self.upstream_root, project) + repo = git.Repo(path) + repo.git.tag(name, sha) + + def delTagFromRepo(self, project, name): + path = os.path.join(self.upstream_root, project) + repo = git.Repo(path) + repo.git.tag('-d', name) + def addCommitToRepo(self, project, message, files, branch='master', tag=None): path = os.path.join(self.upstream_root, project) diff --git a/tests/fakegithub.py b/tests/fakegithub.py new file mode 100644 index 000000000..6fb2d6672 --- /dev/null +++ b/tests/fakegithub.py @@ -0,0 +1,214 @@ +#!/usr/bin/env python + +# Copyright 2018 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the +# License for the specific language governing permissions and limitations +# under the License. + +import re + + +class FakeUser(object): + def __init__(self, login): + self.login = login + self.name = "Github User" + self.email = "github.user@example.com" + + +class FakeBranch(object): + def __init__(self, branch='master'): + self.name = branch + + +class FakeStatus(object): + def __init__(self, state, url, description, context, user): + self._state = state + self._url = url + self._description = description + self._context = context + self._user = user + + def as_dict(self): + return { + 'state': self._state, + 'url': self._url, + 'description': self._description, + 'context': self._context, + 'creator': { + 'login': self._user + } + } + + +class FakeCommit(object): + def __init__(self): + self._statuses = [] + + def set_status(self, state, url, description, context, user): + status = FakeStatus( + state, url, description, context, user) + # always insert a status to the front of the list, to represent + # the last status provided for a commit. + self._statuses.insert(0, status) + + def statuses(self): + return self._statuses + + +class FakeRepository(object): + def __init__(self): + self._branches = [FakeBranch()] + self._commits = {} + + def branches(self, protected=False): + if protected: + # simulate there is no protected branch + return [] + return self._branches + + def create_status(self, sha, state, url, description, context, + user='zuul'): + # Since we're bypassing github API, which would require a user, we + # default the user as 'zuul' here. 
+ commit = self._commits.get(sha, None) + if commit is None: + commit = FakeCommit() + self._commits[sha] = commit + commit.set_status(state, url, description, context, user) + + def commit(self, sha): + commit = self._commits.get(sha, None) + if commit is None: + commit = FakeCommit() + self._commits[sha] = commit + return commit + + +class FakeLabel(object): + def __init__(self, name): + self.name = name + + +class FakeIssue(object): + def __init__(self, fake_pull_request): + self._fake_pull_request = fake_pull_request + + def pull_request(self): + return FakePull(self._fake_pull_request) + + def labels(self): + return [FakeLabel(l) + for l in self._fake_pull_request.labels] + + +class FakeFile(object): + def __init__(self, filename): + self.filename = filename + + +class FakePull(object): + def __init__(self, fake_pull_request): + self._fake_pull_request = fake_pull_request + + def issue(self): + return FakeIssue(self._fake_pull_request) + + def files(self): + return [FakeFile(fn) + for fn in self._fake_pull_request.files] + + def as_dict(self): + pr = self._fake_pull_request + connection = pr.github + data = { + 'number': pr.number, + 'title': pr.subject, + 'url': 'https://%s/%s/pull/%s' % ( + connection.server, pr.project, pr.number + ), + 'updated_at': pr.updated_at, + 'base': { + 'repo': { + 'full_name': pr.project + }, + 'ref': pr.branch, + }, + 'mergeable': True, + 'state': pr.state, + 'head': { + 'sha': pr.head_sha, + 'repo': { + 'full_name': pr.project + } + }, + 'merged': pr.is_merged, + 'body': pr.body + } + return data + + +class FakeIssueSearchResult(object): + def __init__(self, issue): + self.issue = issue + + +class FakeGithub(object): + def __init__(self, pull_requests): + self._pull_requests = pull_requests + self._repos = {} + + def user(self, login): + return FakeUser(login) + + def repository(self, owner, proj): + return self._repos.get((owner, proj), None) + + def repo_from_project(self, project): + # This is a convenience method for the 
tests.
+        owner, proj = project.split('/')
+        return self.repository(owner, proj)
+
+    def addProject(self, project):
+        owner, proj = project.name.split('/')
+        self._repos[(owner, proj)] = FakeRepository()
+
+    def pull_request(self, owner, project, number):
+        fake_pr = self._pull_requests[number]
+        return FakePull(fake_pr)
+
+    def search_issues(self, query):
+        def tokenize(s):
+            return re.findall(r'[\w]+', s)
+
+        parts = tokenize(query)
+        terms = set()
+        results = []
+        for part in parts:
+            kv = part.split(':', 1)
+            if len(kv) == 2:
+                if kv[0] in ('type', 'is', 'in'):
+                    # We only perform one search now and these aren't
+                    # important; we can honor these terms later if
+                    # necessary.
+                    continue
+            terms.add(part)
+
+        for pr in self._pull_requests.values():
+            if not pr.body:
+                body = set()
+            else:
+                body = set(tokenize(pr.body))
+            if terms.intersection(body):
+                issue = FakeIssue(pr)
+                results.append(FakeIssueSearchResult(issue))
+
+        return results
diff --git a/tests/fixtures/config/cross-source/git/common-config/playbooks/nonvoting-project-merge.yaml b/tests/fixtures/config/cross-source/git/common-config/playbooks/nonvoting-project-merge.yaml
new file mode 100644
index 000000000..f679dceae
--- /dev/null
+++ b/tests/fixtures/config/cross-source/git/common-config/playbooks/nonvoting-project-merge.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/cross-source/git/common-config/playbooks/nonvoting-project-test1.yaml b/tests/fixtures/config/cross-source/git/common-config/playbooks/nonvoting-project-test1.yaml
new file mode 100644
index 000000000..f679dceae
--- /dev/null
+++ b/tests/fixtures/config/cross-source/git/common-config/playbooks/nonvoting-project-test1.yaml
@@ -0,0 +1,2 @@
+- hosts: all
+  tasks: []
diff --git a/tests/fixtures/config/cross-source/git/common-config/playbooks/nonvoting-project-test2.yaml b/tests/fixtures/config/cross-source/git/common-config/playbooks/nonvoting-project-test2.yaml
new file mode 100644
index
000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/cross-source/git/common-config/playbooks/nonvoting-project-test2.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/cross-source/git/common-config/playbooks/project-merge.yaml b/tests/fixtures/config/cross-source/git/common-config/playbooks/project-merge.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/cross-source/git/common-config/playbooks/project-merge.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/cross-source/git/common-config/playbooks/project-post.yaml b/tests/fixtures/config/cross-source/git/common-config/playbooks/project-post.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/cross-source/git/common-config/playbooks/project-post.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/cross-source/git/common-config/playbooks/project-test1.yaml b/tests/fixtures/config/cross-source/git/common-config/playbooks/project-test1.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/cross-source/git/common-config/playbooks/project-test1.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/cross-source/git/common-config/playbooks/project-test2.yaml b/tests/fixtures/config/cross-source/git/common-config/playbooks/project-test2.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/cross-source/git/common-config/playbooks/project-test2.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/cross-source/git/common-config/playbooks/project-testfile.yaml b/tests/fixtures/config/cross-source/git/common-config/playbooks/project-testfile.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/cross-source/git/common-config/playbooks/project-testfile.yaml @@ 
-0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/cross-source/git/common-config/playbooks/project1-project2-integration.yaml b/tests/fixtures/config/cross-source/git/common-config/playbooks/project1-project2-integration.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/cross-source/git/common-config/playbooks/project1-project2-integration.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/cross-source/git/common-config/zuul.yaml b/tests/fixtures/config/cross-source/git/common-config/zuul.yaml new file mode 100644 index 000000000..abdc34afa --- /dev/null +++ b/tests/fixtures/config/cross-source/git/common-config/zuul.yaml @@ -0,0 +1,168 @@ +- pipeline: + name: check + manager: independent + trigger: + gerrit: + - event: patchset-created + github: + - event: pull_request + action: edited + success: + gerrit: + Verified: 1 + github: {} + failure: + gerrit: + Verified: -1 + github: {} + +- pipeline: + name: gate + manager: dependent + success-message: Build succeeded (gate). 
+ require: + github: + label: approved + gerrit: + approval: + - Approved: 1 + trigger: + gerrit: + - event: comment-added + approval: + - Approved: 1 + github: + - event: pull_request + action: edited + - event: pull_request + action: labeled + label: approved + success: + gerrit: + Verified: 2 + submit: true + github: + merge: true + failure: + gerrit: + Verified: -2 + github: {} + start: + gerrit: + Verified: 0 + github: {} + precedence: high + +- pipeline: + name: post + manager: independent + trigger: + gerrit: + - event: ref-updated + ref: ^(?!refs/).*$ + precedence: low + +- job: + name: base + parent: null + +- job: + name: project-merge + hold-following-changes: true + nodeset: + nodes: + - name: controller + label: label1 + run: playbooks/project-merge.yaml + +- job: + name: project-test1 + attempts: 4 + nodeset: + nodes: + - name: controller + label: label1 + run: playbooks/project-test1.yaml + +- job: + name: project-test1 + branches: stable + nodeset: + nodes: + - name: controller + label: label2 + run: playbooks/project-test1.yaml + +- job: + name: project-post + nodeset: + nodes: + - name: static + label: ubuntu-xenial + run: playbooks/project-post.yaml + +- job: + name: project-test2 + nodeset: + nodes: + - name: controller + label: label1 + run: playbooks/project-test2.yaml + +- job: + name: project1-project2-integration + nodeset: + nodes: + - name: controller + label: label1 + run: playbooks/project1-project2-integration.yaml + +- job: + name: project-testfile + files: + - .*-requires + run: playbooks/project-testfile.yaml + +- project: + name: gerrit/project1 + check: + jobs: + - project-merge + - project-test1: + dependencies: project-merge + - project-test2: + dependencies: project-merge + - project1-project2-integration: + dependencies: project-merge + gate: + queue: integrated + jobs: + - project-merge + - project-test1: + dependencies: project-merge + - project-test2: + dependencies: project-merge + - project1-project2-integration: + 
dependencies: project-merge + +- project: + name: github/project2 + check: + jobs: + - project-merge + - project-test1: + dependencies: project-merge + - project-test2: + dependencies: project-merge + - project1-project2-integration: + dependencies: project-merge + gate: + queue: integrated + jobs: + - project-merge + - project-test1: + dependencies: project-merge + - project-test2: + dependencies: project-merge + - project1-project2-integration: + dependencies: project-merge diff --git a/tests/fixtures/config/cross-source/git/gerrit_project1/README b/tests/fixtures/config/cross-source/git/gerrit_project1/README new file mode 100644 index 000000000..9daeafb98 --- /dev/null +++ b/tests/fixtures/config/cross-source/git/gerrit_project1/README @@ -0,0 +1 @@ +test diff --git a/tests/fixtures/config/cross-source/git/github_project2/README b/tests/fixtures/config/cross-source/git/github_project2/README new file mode 100644 index 000000000..9daeafb98 --- /dev/null +++ b/tests/fixtures/config/cross-source/git/github_project2/README @@ -0,0 +1 @@ +test diff --git a/tests/fixtures/config/cross-source/main.yaml b/tests/fixtures/config/cross-source/main.yaml new file mode 100644 index 000000000..bf85c33b2 --- /dev/null +++ b/tests/fixtures/config/cross-source/main.yaml @@ -0,0 +1,11 @@ +- tenant: + name: tenant-one + source: + gerrit: + config-projects: + - common-config + untrusted-projects: + - gerrit/project1 + github: + untrusted-projects: + - github/project2 diff --git a/tests/fixtures/config/git-driver/git/common-config/playbooks/project-test2.yaml b/tests/fixtures/config/git-driver/git/common-config/playbooks/project-test2.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/git-driver/git/common-config/playbooks/project-test2.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/git-driver/git/common-config/zuul.yaml b/tests/fixtures/config/git-driver/git/common-config/zuul.yaml index 
784b5f2b6..53fc21073 100644 --- a/tests/fixtures/config/git-driver/git/common-config/zuul.yaml +++ b/tests/fixtures/config/git-driver/git/common-config/zuul.yaml @@ -19,6 +19,10 @@ name: project-test1 run: playbooks/project-test1.yaml +- job: + name: project-test2 + run: playbooks/project-test2.yaml + - project: name: org/project check: diff --git a/tests/fixtures/config/implicit-project/git/common-config/playbooks/test-common.yaml b/tests/fixtures/config/implicit-project/git/common-config/playbooks/test-common.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/implicit-project/git/common-config/playbooks/test-common.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/implicit-project/git/common-config/zuul.yaml b/tests/fixtures/config/implicit-project/git/common-config/zuul.yaml new file mode 100644 index 000000000..038c412dd --- /dev/null +++ b/tests/fixtures/config/implicit-project/git/common-config/zuul.yaml @@ -0,0 +1,57 @@ +- pipeline: + name: check + manager: independent + post-review: true + trigger: + gerrit: + - event: patchset-created + success: + gerrit: + Verified: 1 + failure: + gerrit: + Verified: -1 + +- pipeline: + name: gate + manager: dependent + success-message: Build succeeded (gate). 
+ trigger: + gerrit: + - event: comment-added + approval: + - Approved: 1 + success: + gerrit: + Verified: 2 + submit: true + failure: + gerrit: + Verified: -2 + start: + gerrit: + Verified: 0 + precedence: high + + +- job: + name: base + parent: null + +- job: + name: test-common + run: playbooks/test-common.yaml + +- project: + check: + jobs: + - test-common + +- project: + name: org/project + check: + jobs: + - test-common + gate: + jobs: + - test-common diff --git a/tests/fixtures/config/implicit-project/git/org_project/.zuul.yaml b/tests/fixtures/config/implicit-project/git/org_project/.zuul.yaml new file mode 100644 index 000000000..bce195cc6 --- /dev/null +++ b/tests/fixtures/config/implicit-project/git/org_project/.zuul.yaml @@ -0,0 +1,11 @@ +- job: + name: test-project + run: playbooks/test-project.yaml + +- project: + check: + jobs: + - test-project + gate: + jobs: + - test-project diff --git a/tests/fixtures/config/implicit-project/git/org_project/playbooks/test-project.yaml b/tests/fixtures/config/implicit-project/git/org_project/playbooks/test-project.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/implicit-project/git/org_project/playbooks/test-project.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/implicit-project/main.yaml b/tests/fixtures/config/implicit-project/main.yaml new file mode 100644 index 000000000..208e274b1 --- /dev/null +++ b/tests/fixtures/config/implicit-project/main.yaml @@ -0,0 +1,8 @@ +- tenant: + name: tenant-one + source: + gerrit: + config-projects: + - common-config + untrusted-projects: + - org/project diff --git a/tests/fixtures/config/inventory/git/common-config/zuul.yaml b/tests/fixtures/config/inventory/git/common-config/zuul.yaml index ad530a783..f592eb48b 100644 --- a/tests/fixtures/config/inventory/git/common-config/zuul.yaml +++ b/tests/fixtures/config/inventory/git/common-config/zuul.yaml @@ -38,6 +38,10 @@ label: default-label - name: 
fakeuser label: fakeuser-label + - name: windows + label: windows-label + - name: network + label: network-label - job: name: base diff --git a/tests/fixtures/config/pragma-multibranch/git/common-config/zuul.yaml b/tests/fixtures/config/pragma-multibranch/git/common-config/zuul.yaml new file mode 100644 index 000000000..dc83f9ddf --- /dev/null +++ b/tests/fixtures/config/pragma-multibranch/git/common-config/zuul.yaml @@ -0,0 +1,61 @@ +- pipeline: + name: check + manager: independent + trigger: + gerrit: + - event: patchset-created + success: + gerrit: + Verified: 1 + failure: + gerrit: + Verified: -1 + +- pipeline: + name: gate + manager: dependent + post-review: True + trigger: + gerrit: + - event: comment-added + approval: + - Approved: 1 + success: + gerrit: + Verified: 2 + submit: true + failure: + gerrit: + Verified: -2 + start: + gerrit: + Verified: 0 + precedence: high + +- job: + name: base + parent: null + +- project: + name: common-config + check: + jobs: [] + gate: + jobs: + - noop + +- project: + name: org/project1 + check: + jobs: [] + gate: + jobs: + - noop + +- project: + name: org/project2 + check: + jobs: [] + gate: + jobs: + - noop diff --git a/tests/fixtures/config/pragma-multibranch/git/org_project1/README b/tests/fixtures/config/pragma-multibranch/git/org_project1/README new file mode 100644 index 000000000..9daeafb98 --- /dev/null +++ b/tests/fixtures/config/pragma-multibranch/git/org_project1/README @@ -0,0 +1 @@ +test diff --git a/tests/fixtures/config/pragma-multibranch/git/org_project1/playbooks/test-job1.yaml b/tests/fixtures/config/pragma-multibranch/git/org_project1/playbooks/test-job1.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/pragma-multibranch/git/org_project1/playbooks/test-job1.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/pragma-multibranch/git/org_project1/playbooks/test-job2.yaml 
b/tests/fixtures/config/pragma-multibranch/git/org_project1/playbooks/test-job2.yaml new file mode 100644 index 000000000..f679dceae --- /dev/null +++ b/tests/fixtures/config/pragma-multibranch/git/org_project1/playbooks/test-job2.yaml @@ -0,0 +1,2 @@ +- hosts: all + tasks: [] diff --git a/tests/fixtures/config/pragma-multibranch/git/org_project1/zuul.yaml b/tests/fixtures/config/pragma-multibranch/git/org_project1/zuul.yaml new file mode 100644 index 000000000..6c8352a23 --- /dev/null +++ b/tests/fixtures/config/pragma-multibranch/git/org_project1/zuul.yaml @@ -0,0 +1,13 @@ +- job: + name: test-job1 + run: playbooks/test-job1.yaml + +- job: + name: test-job2 + run: playbooks/test-job2.yaml + +- project-template: + name: test-template + check: + jobs: + - test-job1 diff --git a/tests/fixtures/config/pragma-multibranch/git/org_project2/README b/tests/fixtures/config/pragma-multibranch/git/org_project2/README new file mode 100644 index 000000000..9daeafb98 --- /dev/null +++ b/tests/fixtures/config/pragma-multibranch/git/org_project2/README @@ -0,0 +1 @@ +test diff --git a/tests/fixtures/config/pragma-multibranch/git/org_project2/zuul.yaml b/tests/fixtures/config/pragma-multibranch/git/org_project2/zuul.yaml new file mode 100644 index 000000000..748cab264 --- /dev/null +++ b/tests/fixtures/config/pragma-multibranch/git/org_project2/zuul.yaml @@ -0,0 +1,7 @@ +- project: + name: org/project2 + templates: + - test-template + check: + jobs: + - test-job2 diff --git a/tests/fixtures/config/pragma-multibranch/main.yaml b/tests/fixtures/config/pragma-multibranch/main.yaml new file mode 100644 index 000000000..950b1172c --- /dev/null +++ b/tests/fixtures/config/pragma-multibranch/main.yaml @@ -0,0 +1,9 @@ +- tenant: + name: tenant-one + source: + gerrit: + config-projects: + - common-config + untrusted-projects: + - org/project1 + - org/project2 diff --git a/tests/fixtures/layouts/basic-git.yaml b/tests/fixtures/layouts/basic-git.yaml new file mode 100644 index 
000000000..068d0a0ea --- /dev/null +++ b/tests/fixtures/layouts/basic-git.yaml @@ -0,0 +1,37 @@ +- pipeline: + name: post + manager: independent + trigger: + git: + - event: ref-updated + ref: ^refs/heads/.*$ + +- pipeline: + name: tag + manager: independent + trigger: + git: + - event: ref-updated + ref: ^refs/tags/.*$ + +- job: + name: base + parent: null + run: playbooks/base.yaml + +- job: + name: post-job + run: playbooks/post-job.yaml + +- job: + name: tag-job + run: playbooks/post-job.yaml + +- project: + name: org/project + post: + jobs: + - post-job + tag: + jobs: + - tag-job diff --git a/tests/fixtures/zuul-gerrit-github.conf b/tests/fixtures/zuul-gerrit-github.conf new file mode 100644 index 000000000..d3cbf7b25 --- /dev/null +++ b/tests/fixtures/zuul-gerrit-github.conf @@ -0,0 +1,35 @@ +[gearman] +server=127.0.0.1 + +[statsd] +# note, use 127.0.0.1 rather than localhost to avoid getting ipv6 +# see: https://github.com/jsocol/pystatsd/issues/61 +server=127.0.0.1 + +[scheduler] +tenant_config=main.yaml + +[merger] +git_dir=/tmp/zuul-test/merger-git +git_user_email=zuul@example.com +git_user_name=zuul + +[executor] +git_dir=/tmp/zuul-test/executor-git + +[connection gerrit] +driver=gerrit +server=review.example.com +user=jenkins +sshkey=fake_id_rsa_path + +[connection github] +driver=github +webhook_token=0000000000000000000000000000000000000000 + +[connection smtp] +driver=smtp +server=localhost +port=25 +default_from=zuul@example.com +default_to=you@example.com diff --git a/tests/fixtures/zuul-git-driver.conf b/tests/fixtures/zuul-git-driver.conf index b24b0a1b4..23a2a622c 100644 --- a/tests/fixtures/zuul-git-driver.conf +++ b/tests/fixtures/zuul-git-driver.conf @@ -21,6 +21,7 @@ sshkey=none [connection git] driver=git baseurl="" +poll_delay=0.1 [connection outgoing_smtp] driver=smtp diff --git a/tests/fixtures/zuul-sql-driver-prefix.conf b/tests/fixtures/zuul-sql-driver-prefix.conf new file mode 100644 index 000000000..14064745e --- /dev/null +++ 
b/tests/fixtures/zuul-sql-driver-prefix.conf @@ -0,0 +1,28 @@ +[gearman] +server=127.0.0.1 + +[scheduler] +tenant_config=main.yaml + +[merger] +git_dir=/tmp/zuul-test/merger-git +git_user_email=zuul@example.com +git_user_name=zuul + +[executor] +git_dir=/tmp/zuul-test/executor-git + +[connection gerrit] +driver=gerrit +server=review.example.com +user=jenkins +sshkey=fake_id_rsa1 + +[connection resultsdb] +driver=sql +dburi=$MYSQL_FIXTURE_DBURI$ +table_prefix=prefix_ + +[connection resultsdb_failures] +driver=sql +dburi=$MYSQL_FIXTURE_DBURI$ diff --git a/tests/unit/test_connection.py b/tests/unit/test_connection.py index c882d3a0a..c45da94cb 100644 --- a/tests/unit/test_connection.py +++ b/tests/unit/test_connection.py @@ -60,14 +60,19 @@ class TestConnections(ZuulTestCase): class TestSQLConnection(ZuulDBTestCase): config_file = 'zuul-sql-driver.conf' tenant_config_file = 'config/sql-driver/main.yaml' + expected_table_prefix = '' - def test_sql_tables_created(self, metadata_table=None): + def test_sql_tables_created(self): "Test the tables for storing results are created properly" - buildset_table = 'zuul_buildset' - build_table = 'zuul_build' - insp = sa.engine.reflection.Inspector( - self.connections.connections['resultsdb'].engine) + connection = self.connections.connections['resultsdb'] + insp = sa.engine.reflection.Inspector(connection.engine) + + table_prefix = connection.table_prefix + self.assertEqual(self.expected_table_prefix, table_prefix) + + buildset_table = table_prefix + 'zuul_buildset' + build_table = table_prefix + 'zuul_build' self.assertEqual(13, len(insp.get_columns(buildset_table))) self.assertEqual(10, len(insp.get_columns(build_table))) @@ -110,11 +115,11 @@ class TestSQLConnection(ZuulDBTestCase): self.assertEqual('check', buildset0['pipeline']) self.assertEqual('org/project', buildset0['project']) self.assertEqual(1, buildset0['change']) - self.assertEqual(1, buildset0['patchset']) + self.assertEqual('1', buildset0['patchset']) 
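The table-prefix change exercised by this test derives both table names from `connection.table_prefix`, with an empty string as the default. A tiny sketch of that naming scheme (simplified stand-in, not the real SQL connection class):

```python
class FakeSQLConnection:
    def __init__(self, table_prefix=''):
        # table_prefix comes from the connection config; '' if unset.
        self.table_prefix = table_prefix

    def table_names(self):
        # Both tables share the same configured prefix.
        return [self.table_prefix + 'zuul_buildset',
                self.table_prefix + 'zuul_build']


assert FakeSQLConnection().table_names() == [
    'zuul_buildset', 'zuul_build']
assert FakeSQLConnection('prefix_').table_names() == [
    'prefix_zuul_buildset', 'prefix_zuul_build']
```

Subclassing the base test with `config_file = 'zuul-sql-driver-prefix.conf'` and `expected_table_prefix = 'prefix_'` then reruns every assertion against the prefixed tables.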
self.assertEqual('SUCCESS', buildset0['result']) self.assertEqual('Build succeeded.', buildset0['message']) self.assertEqual('tenant-one', buildset0['tenant']) - self.assertEqual('https://hostname/%d' % buildset0['change'], + self.assertEqual('https://review.example.com/%d' % buildset0['change'], buildset0['ref_url']) buildset0_builds = conn.execute( @@ -136,7 +141,7 @@ class TestSQLConnection(ZuulDBTestCase): self.assertEqual('check', buildset1['pipeline']) self.assertEqual('org/project', buildset1['project']) self.assertEqual(2, buildset1['change']) - self.assertEqual(1, buildset1['patchset']) + self.assertEqual('1', buildset1['patchset']) self.assertEqual('FAILURE', buildset1['result']) self.assertEqual('Build failed.', buildset1['message']) @@ -189,7 +194,7 @@ class TestSQLConnection(ZuulDBTestCase): self.assertEqual('check', buildsets_resultsdb[0]['pipeline']) self.assertEqual('org/project', buildsets_resultsdb[0]['project']) self.assertEqual(1, buildsets_resultsdb[0]['change']) - self.assertEqual(1, buildsets_resultsdb[0]['patchset']) + self.assertEqual('1', buildsets_resultsdb[0]['patchset']) self.assertEqual('SUCCESS', buildsets_resultsdb[0]['result']) self.assertEqual('Build succeeded.', buildsets_resultsdb[0]['message']) @@ -210,12 +215,17 @@ class TestSQLConnection(ZuulDBTestCase): self.assertEqual( 'org/project', buildsets_resultsdb_failures[0]['project']) self.assertEqual(2, buildsets_resultsdb_failures[0]['change']) - self.assertEqual(1, buildsets_resultsdb_failures[0]['patchset']) + self.assertEqual('1', buildsets_resultsdb_failures[0]['patchset']) self.assertEqual('FAILURE', buildsets_resultsdb_failures[0]['result']) self.assertEqual( 'Build failed.', buildsets_resultsdb_failures[0]['message']) +class TestSQLConnectionPrefix(TestSQLConnection): + config_file = 'zuul-sql-driver-prefix.conf' + expected_table_prefix = 'prefix_' + + class TestConnectionsBadSQL(ZuulDBTestCase): config_file = 'zuul-sql-driver-bad.conf' tenant_config_file = 
'config/sql-driver/main.yaml' diff --git a/tests/unit/test_cross_crd.py b/tests/unit/test_cross_crd.py new file mode 100644 index 000000000..7d68989ab --- /dev/null +++ b/tests/unit/test_cross_crd.py @@ -0,0 +1,950 @@ +#!/usr/bin/env python + +# Copyright 2012 Hewlett-Packard Development Company, L.P. +# Copyright 2018 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from tests.base import ( + ZuulTestCase, +) + + +class TestGerritToGithubCRD(ZuulTestCase): + config_file = 'zuul-gerrit-github.conf' + tenant_config_file = 'config/cross-source/main.yaml' + + def test_crd_gate(self): + "Test cross-repo dependencies" + A = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'A') + B = self.fake_github.openFakePullRequest('github/project2', 'master', + 'B') + + A.addApproval('Code-Review', 2) + + AM2 = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', + 'AM2') + AM1 = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', + 'AM1') + AM2.setMerged() + AM1.setMerged() + + # A -> AM1 -> AM2 + # A Depends-On: B + # M2 is here to make sure it is never queried. If it is, it + # means zuul is walking down the entire history of merged + # changes. 
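The cross-source tests express dependencies by appending a `Depends-On:` footer containing the other change's URL to the Gerrit commit message. A minimal sketch of composing and extracting such footers (helper names here are illustrative, not Zuul's API):

```python
import re


def add_depends_on(commit_message, url):
    # Append a Depends-On footer pointing at another change's URL,
    # as the tests do with '%s\n\nDepends-On: %s\n'.
    return '%s\n\nDepends-On: %s\n' % (commit_message, url)


def find_depends_on(commit_message):
    # Extract every Depends-On URL, one footer per line.
    return re.findall(r'^Depends-On: (.*)$', commit_message, re.MULTILINE)


msg = add_depends_on('Add feature A',
                     'https://github.com/github/project2/pull/2')
assert find_depends_on(msg) == ['https://github.com/github/project2/pull/2']
```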
+ + A.setDependsOn(AM1, 1) + AM1.setDependsOn(AM2, 1) + + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.url) + + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertFalse(B.is_merged) + + for connection in self.connections.connections.values(): + connection.maintainCache([]) + + self.executor_server.hold_jobs_in_build = True + B.addLabel('approved') + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(AM2.queried, 0) + self.assertEqual(A.data['status'], 'MERGED') + self.assertTrue(B.is_merged) + self.assertEqual(A.reported, 2) + self.assertEqual(len(B.comments), 2) + + changes = self.getJobFromHistory( + 'project-merge', 'gerrit/project1').changes + self.assertEqual(changes, '1,%s 1,1' % B.head_sha) + + def test_crd_branch(self): + "Test cross-repo dependencies in multiple branches" + + self.create_branch('github/project2', 'mp') + A = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'A') + B = self.fake_github.openFakePullRequest('github/project2', 'master', + 'B') + C1 = self.fake_github.openFakePullRequest('github/project2', 'mp', + 'C1') + + A.addApproval('Code-Review', 2) + + # A Depends-On: B+C1 + A.data['commitMessage'] = '%s\n\nDepends-On: %s\nDepends-On: %s\n' % ( + A.subject, B.url, C1.url) + + self.executor_server.hold_jobs_in_build = True + B.addLabel('approved') + C1.addLabel('approved') + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + 
self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertTrue(B.is_merged) + self.assertTrue(C1.is_merged) + self.assertEqual(A.reported, 2) + self.assertEqual(len(B.comments), 2) + self.assertEqual(len(C1.comments), 2) + + changes = self.getJobFromHistory( + 'project-merge', 'gerrit/project1').changes + self.assertEqual(changes, '1,%s 2,%s 1,1' % + (B.head_sha, C1.head_sha)) + + def test_crd_gate_reverse(self): + "Test reverse cross-repo dependencies" + A = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'A') + B = self.fake_github.openFakePullRequest('github/project2', 'master', + 'B') + A.addApproval('Code-Review', 2) + + # A Depends-On: B + + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.url) + + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertFalse(B.is_merged) + + self.executor_server.hold_jobs_in_build = True + A.addApproval('Approved', 1) + self.fake_github.emitEvent(B.addLabel('approved')) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertTrue(B.is_merged) + self.assertEqual(A.reported, 2) + self.assertEqual(len(B.comments), 2) + + changes = self.getJobFromHistory( + 'project-merge', 'gerrit/project1').changes + self.assertEqual(changes, '1,%s 1,1' % + (B.head_sha,)) + + def test_crd_cycle(self): + "Test cross-repo dependency cycles" + A = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'A') + msg = "Depends-On: %s" % (A.data['url'],) + B = 
self.fake_github.openFakePullRequest('github/project2', 'master', + 'B', body=msg) + A.addApproval('Code-Review', 2) + B.addLabel('approved') + + # A -> B -> A (via commit-depends) + + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.url) + + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.reported, 0) + self.assertEqual(len(B.comments), 0) + self.assertEqual(A.data['status'], 'NEW') + self.assertFalse(B.is_merged) + + def test_crd_gate_unknown(self): + "Test unknown projects in dependent pipeline" + self.init_repo("github/unknown", tag='init') + A = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'A') + B = self.fake_github.openFakePullRequest('github/unknown', 'master', + 'B') + A.addApproval('Code-Review', 2) + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.url) + + event = B.addLabel('approved') + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + # Unknown projects cannot share a queue with any other + # since they don't have common jobs with any other (they have no jobs). + # Changes which depend on unknown project changes + # should not be processed in dependent pipeline + self.assertEqual(A.data['status'], 'NEW') + self.assertFalse(B.is_merged) + self.assertEqual(A.reported, 0) + self.assertEqual(len(B.comments), 0) + self.assertEqual(len(self.history), 0) + + # Simulate change B being gated outside this layout. Set the + # change merged before submitting the event so that when the + # event triggers a gerrit query to update the change, we get + # the information that it was merged. + B.setMerged('merged') + self.fake_github.emitEvent(event) + self.waitUntilSettled() + self.assertEqual(len(self.history), 0) + + # Now that B is merged, A should be able to be enqueued and + # merged. 
+ self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertTrue(B.is_merged) + self.assertEqual(len(B.comments), 0) + + def test_crd_check(self): + "Test cross-repo dependencies in independent pipelines" + self.executor_server.hold_jobs_in_build = True + self.gearman_server.hold_jobs_in_queue = True + A = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'A') + B = self.fake_github.openFakePullRequest( + 'github/project2', 'master', 'B') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.url) + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + self.gearman_server.hold_jobs_in_queue = False + self.gearman_server.release() + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + + self.assertTrue(self.builds[0].hasChanges(A, B)) + + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertFalse(B.is_merged) + self.assertEqual(A.reported, 1) + self.assertEqual(len(B.comments), 0) + + changes = self.getJobFromHistory( + 'project-merge', 'gerrit/project1').changes + self.assertEqual(changes, '1,%s 1,1' % + (B.head_sha,)) + + tenant = self.sched.abide.tenants.get('tenant-one') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + def test_crd_check_duplicate(self): + "Test duplicate check in independent pipelines" + self.executor_server.hold_jobs_in_build = True + A = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'A') + B = self.fake_github.openFakePullRequest( + 'github/project2', 'master', 'B') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.url) + tenant = self.sched.abide.tenants.get('tenant-one') + check_pipeline = 
tenant.layout.pipelines['check'] + + # Add two dependent changes... + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 2) + + # ...make sure the live one is not duplicated... + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 2) + + # ...but the non-live one is able to be. + self.fake_github.emitEvent(B.getPullRequestEditedEvent()) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 3) + + # Release jobs in order to avoid races with change A jobs + # finishing before change B jobs. + self.orderedRelease() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertFalse(B.is_merged) + self.assertEqual(A.reported, 1) + self.assertEqual(len(B.comments), 1) + + changes = self.getJobFromHistory( + 'project-merge', 'gerrit/project1').changes + self.assertEqual(changes, '1,%s 1,1' % + (B.head_sha,)) + + changes = self.getJobFromHistory( + 'project-merge', 'github/project2').changes + self.assertEqual(changes, '1,%s' % + (B.head_sha,)) + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + self.assertIn('Build succeeded', A.messages[0]) + + def _test_crd_check_reconfiguration(self, project1, project2): + "Test cross-repo dependencies re-enqueued in independent pipelines" + + self.gearman_server.hold_jobs_in_queue = True + A = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'A') + B = self.fake_github.openFakePullRequest( + 'github/project2', 'master', 'B') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.url) + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + self.sched.reconfigure(self.config) + + # Make sure the items still share a change queue, and the + 
# first one is not live. + tenant = self.sched.abide.tenants.get('tenant-one') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 1) + queue = tenant.layout.pipelines['check'].queues[0] + first_item = queue.queue[0] + for item in queue.queue: + self.assertEqual(item.queue, first_item.queue) + self.assertFalse(first_item.live) + self.assertTrue(queue.queue[1].live) + + self.gearman_server.hold_jobs_in_queue = False + self.gearman_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertFalse(B.is_merged) + self.assertEqual(A.reported, 1) + self.assertEqual(len(B.comments), 0) + + changes = self.getJobFromHistory( + 'project-merge', 'gerrit/project1').changes + self.assertEqual(changes, '1,%s 1,1' % + (B.head_sha,)) + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + def test_crd_check_reconfiguration(self): + self._test_crd_check_reconfiguration('org/project1', 'org/project2') + + def test_crd_undefined_project(self): + """Test that undefined projects in dependencies are handled for + independent pipelines""" + # It's a hack for fake github, + # as it implies repo creation upon the creation of any change + self.init_repo("github/unknown", tag='init') + self._test_crd_check_reconfiguration('gerrit/project1', + 'github/unknown') + + def test_crd_check_transitive(self): + "Test transitive cross-repo dependencies" + # Specifically, if A -> B -> C, and C gets a new patchset and + # A gets a new patchset, ensure the test of A,2 includes B,1 + # and C,2 (not C,1 which would indicate stale data in the + # cache for B). 
+ A = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'A') + C = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'C') + # B Depends-On: C + msg = "Depends-On: %s" % (C.data['url'],) + B = self.fake_github.openFakePullRequest( + 'github/project2', 'master', 'B', body=msg) + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.url) + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '2,1 1,%s 1,1' % + (B.head_sha,)) + + self.fake_github.emitEvent(B.getPullRequestEditedEvent()) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '2,1 1,%s' % + (B.head_sha,)) + + self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '2,1') + + C.addPatchset() + self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(2)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '2,2') + + A.addPatchset() + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '2,2 1,%s 1,2' % + (B.head_sha,)) + + def test_crd_check_unknown(self): + "Test unknown projects in independent pipeline" + self.init_repo("github/unknown", tag='init') + A = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'A') + B = self.fake_github.openFakePullRequest( + 'github/unknown', 'master', 'B') + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.url) + + # Make sure zuul has seen an event on B. 
+ self.fake_github.emitEvent(B.getPullRequestEditedEvent()) + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertFalse(B.is_merged) + self.assertEqual(len(B.comments), 0) + + def test_crd_cycle_join(self): + "Test an updated change creates a cycle" + A = self.fake_github.openFakePullRequest( + 'github/project2', 'master', 'A') + + self.fake_github.emitEvent(A.getPullRequestEditedEvent()) + self.waitUntilSettled() + self.assertEqual(len(A.comments), 1) + + # Create B->A + B = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'B') + B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + B.subject, A.url) + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + # Dep is there so zuul should have reported on B + self.assertEqual(B.reported, 1) + + # Update A to add A->B (a cycle). + A.editBody('Depends-On: %s\n' % (B.data['url'])) + self.fake_github.emitEvent(A.getPullRequestEditedEvent()) + self.waitUntilSettled() + + # Dependency cycle injected so zuul should not have reported again on A + self.assertEqual(len(A.comments), 1) + + # Now if we update B to remove the depends-on, everything + # should be okay. 
B; A->B + + B.addPatchset() + B.data['commitMessage'] = '%s\n' % (B.subject,) + self.fake_github.emitEvent(A.getPullRequestEditedEvent()) + self.waitUntilSettled() + + # Cycle was removed so now zuul should have reported again on A + self.assertEqual(len(A.comments), 2) + + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(2)) + self.waitUntilSettled() + self.assertEqual(B.reported, 2) + + +class TestGithubToGerritCRD(ZuulTestCase): + config_file = 'zuul-gerrit-github.conf' + tenant_config_file = 'config/cross-source/main.yaml' + + def test_crd_gate(self): + "Test cross-repo dependencies" + A = self.fake_github.openFakePullRequest('github/project2', 'master', + 'A') + B = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'B') + + B.addApproval('Code-Review', 2) + + # A Depends-On: B + A.editBody('Depends-On: %s\n' % (B.data['url'])) + + event = A.addLabel('approved') + self.fake_github.emitEvent(event) + self.waitUntilSettled() + + self.assertFalse(A.is_merged) + self.assertEqual(B.data['status'], 'NEW') + + for connection in self.connections.connections.values(): + connection.maintainCache([]) + + self.executor_server.hold_jobs_in_build = True + B.addApproval('Approved', 1) + self.fake_github.emitEvent(event) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertTrue(A.is_merged) + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(len(A.comments), 2) + self.assertEqual(B.reported, 2) + + changes = self.getJobFromHistory( + 'project-merge', 'github/project2').changes + self.assertEqual(changes, '1,1 1,%s' % A.head_sha) + + def test_crd_branch(self): + "Test cross-repo dependencies in multiple branches" + + self.create_branch('gerrit/project1', 'mp') + A = 
self.fake_github.openFakePullRequest('github/project2', 'master', + 'A') + B = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'B') + C1 = self.fake_gerrit.addFakeChange('gerrit/project1', 'mp', 'C1') + + B.addApproval('Code-Review', 2) + C1.addApproval('Code-Review', 2) + + # A Depends-On: B+C1 + A.editBody('Depends-On: %s\nDepends-On: %s\n' % ( + B.data['url'], C1.data['url'])) + + self.executor_server.hold_jobs_in_build = True + B.addApproval('Approved', 1) + C1.addApproval('Approved', 1) + self.fake_github.emitEvent(A.addLabel('approved')) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + self.assertTrue(A.is_merged) + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(C1.data['status'], 'MERGED') + self.assertEqual(len(A.comments), 2) + self.assertEqual(B.reported, 2) + self.assertEqual(C1.reported, 2) + + changes = self.getJobFromHistory( + 'project-merge', 'github/project2').changes + self.assertEqual(changes, '1,1 2,1 1,%s' % + (A.head_sha,)) + + def test_crd_gate_reverse(self): + "Test reverse cross-repo dependencies" + A = self.fake_github.openFakePullRequest('github/project2', 'master', + 'A') + B = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'B') + B.addApproval('Code-Review', 2) + + # A Depends-On: B + A.editBody('Depends-On: %s\n' % (B.data['url'],)) + + self.fake_github.emitEvent(A.addLabel('approved')) + self.waitUntilSettled() + + self.assertFalse(A.is_merged) + self.assertEqual(B.data['status'], 'NEW') + + self.executor_server.hold_jobs_in_build = True + A.addLabel('approved') + self.fake_gerrit.addEvent(B.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + 
self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertTrue(A.is_merged) + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(len(A.comments), 2) + self.assertEqual(B.reported, 2) + + changes = self.getJobFromHistory( + 'project-merge', 'github/project2').changes + self.assertEqual(changes, '1,1 1,%s' % + (A.head_sha,)) + + def test_crd_cycle(self): + "Test cross-repo dependency cycles" + A = self.fake_github.openFakePullRequest('github/project2', 'master', + 'A') + B = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'B') + B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + B.subject, A.url) + + B.addApproval('Code-Review', 2) + B.addApproval('Approved', 1) + + # A -> B -> A (via commit-depends) + A.editBody('Depends-On: %s\n' % (B.data['url'],)) + + self.fake_github.emitEvent(A.addLabel('approved')) + self.waitUntilSettled() + + self.assertEqual(len(A.comments), 0) + self.assertEqual(B.reported, 0) + self.assertFalse(A.is_merged) + self.assertEqual(B.data['status'], 'NEW') + + def test_crd_gate_unknown(self): + "Test unknown projects in dependent pipeline" + self.init_repo("gerrit/unknown", tag='init') + A = self.fake_github.openFakePullRequest('github/project2', 'master', + 'A') + B = self.fake_gerrit.addFakeChange('gerrit/unknown', 'master', 'B') + B.addApproval('Code-Review', 2) + + # A Depends-On: B + A.editBody('Depends-On: %s\n' % (B.data['url'],)) + + B.addApproval('Approved', 1) + event = A.addLabel('approved') + self.fake_github.emitEvent(event) + self.waitUntilSettled() + + # Unknown projects cannot share a queue with any other + # since they don't have common jobs with any other (they have no jobs). 
+ # Changes which depend on unknown project changes + # should not be processed in dependent pipeline + self.assertFalse(A.is_merged) + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(len(A.comments), 0) + self.assertEqual(B.reported, 0) + self.assertEqual(len(self.history), 0) + + # Simulate change B being gated outside this layout. Set the + # change merged before submitting the event so that when the + # event triggers a gerrit query to update the change, we get + # the information that it was merged. + B.setMerged() + self.fake_gerrit.addEvent(B.addApproval('Approved', 1)) + self.waitUntilSettled() + self.assertEqual(len(self.history), 0) + + # Now that B is merged, A should be able to be enqueued and + # merged. + self.fake_github.emitEvent(event) + self.waitUntilSettled() + + self.assertTrue(A.is_merged) + self.assertEqual(len(A.comments), 2) + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(B.reported, 0) + + def test_crd_check(self): + "Test cross-repo dependencies in independent pipelines" + self.executor_server.hold_jobs_in_build = True + self.gearman_server.hold_jobs_in_queue = True + A = self.fake_github.openFakePullRequest('github/project2', 'master', + 'A') + B = self.fake_gerrit.addFakeChange( + 'gerrit/project1', 'master', 'B') + + # A Depends-On: B + A.editBody('Depends-On: %s\n' % (B.data['url'],)) + + self.fake_github.emitEvent(A.getPullRequestEditedEvent()) + self.waitUntilSettled() + + self.gearman_server.hold_jobs_in_queue = False + self.gearman_server.release() + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + + self.assertTrue(self.builds[0].hasChanges(A, B)) + + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertFalse(A.is_merged) + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(len(A.comments), 1) + self.assertEqual(B.reported, 0) + + changes = self.getJobFromHistory( + 
'project-merge', 'github/project2').changes + self.assertEqual(changes, '1,1 1,%s' % + (A.head_sha,)) + + tenant = self.sched.abide.tenants.get('tenant-one') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + def test_crd_check_duplicate(self): + "Test duplicate check in independent pipelines" + self.executor_server.hold_jobs_in_build = True + A = self.fake_github.openFakePullRequest('github/project2', 'master', + 'A') + B = self.fake_gerrit.addFakeChange( + 'gerrit/project1', 'master', 'B') + + # A Depends-On: B + A.editBody('Depends-On: %s\n' % (B.data['url'],)) + tenant = self.sched.abide.tenants.get('tenant-one') + check_pipeline = tenant.layout.pipelines['check'] + + # Add two dependent changes... + self.fake_github.emitEvent(A.getPullRequestEditedEvent()) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 2) + + # ...make sure the live one is not duplicated... + self.fake_github.emitEvent(A.getPullRequestEditedEvent()) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 2) + + # ...but the non-live one is able to be. + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 3) + + # Release jobs in order to avoid races with change A jobs + # finishing before change B jobs. 
+ self.orderedRelease() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertFalse(A.is_merged) + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(len(A.comments), 1) + self.assertEqual(B.reported, 1) + + changes = self.getJobFromHistory( + 'project-merge', 'github/project2').changes + self.assertEqual(changes, '1,1 1,%s' % + (A.head_sha,)) + + changes = self.getJobFromHistory( + 'project-merge', 'gerrit/project1').changes + self.assertEqual(changes, '1,1') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + self.assertIn('Build succeeded', A.comments[0]) + + def _test_crd_check_reconfiguration(self, project1, project2): + "Test cross-repo dependencies re-enqueued in independent pipelines" + + self.gearman_server.hold_jobs_in_queue = True + A = self.fake_github.openFakePullRequest('github/project2', 'master', + 'A') + B = self.fake_gerrit.addFakeChange( + 'gerrit/project1', 'master', 'B') + + # A Depends-On: B + A.editBody('Depends-On: %s\n' % (B.data['url'],)) + + self.fake_github.emitEvent(A.getPullRequestEditedEvent()) + self.waitUntilSettled() + + self.sched.reconfigure(self.config) + + # Make sure the items still share a change queue, and the + # first one is not live. 
+ tenant = self.sched.abide.tenants.get('tenant-one') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 1) + queue = tenant.layout.pipelines['check'].queues[0] + first_item = queue.queue[0] + for item in queue.queue: + self.assertEqual(item.queue, first_item.queue) + self.assertFalse(first_item.live) + self.assertTrue(queue.queue[1].live) + + self.gearman_server.hold_jobs_in_queue = False + self.gearman_server.release() + self.waitUntilSettled() + + self.assertFalse(A.is_merged) + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(len(A.comments), 1) + self.assertEqual(B.reported, 0) + + changes = self.getJobFromHistory( + 'project-merge', 'github/project2').changes + self.assertEqual(changes, '1,1 1,%s' % + (A.head_sha,)) + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + def test_crd_check_reconfiguration(self): + self._test_crd_check_reconfiguration('org/project1', 'org/project2') + + def test_crd_undefined_project(self): + """Test that undefined projects in dependencies are handled for + independent pipelines""" + # It's a hack for fake gerrit, + # as it implies repo creation upon the creation of any change + self.init_repo("gerrit/unknown", tag='init') + self._test_crd_check_reconfiguration('github/project2', + 'gerrit/unknown') + + def test_crd_check_transitive(self): + "Test transitive cross-repo dependencies" + # Specifically, if A -> B -> C, and C gets a new patchset and + # A gets a new patchset, ensure the test of A,2 includes B,1 + # and C,2 (not C,1 which would indicate stale data in the + # cache for B). 
+ A = self.fake_github.openFakePullRequest('github/project2', 'master', + 'A') + B = self.fake_gerrit.addFakeChange('gerrit/project1', 'master', 'B') + C = self.fake_github.openFakePullRequest('github/project2', 'master', + 'C') + + # B Depends-On: C + B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + B.subject, C.url) + + # A Depends-On: B + A.editBody('Depends-On: %s\n' % (B.data['url'],)) + + self.fake_github.emitEvent(A.getPullRequestEditedEvent()) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '2,%s 1,1 1,%s' % + (C.head_sha, A.head_sha)) + + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '2,%s 1,1' % + (C.head_sha,)) + + self.fake_github.emitEvent(C.getPullRequestEditedEvent()) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '2,%s' % + (C.head_sha,)) + + new_c_head = C.head_sha + C.addCommit() + old_c_head = C.head_sha + self.assertNotEqual(old_c_head, new_c_head) + self.fake_github.emitEvent(C.getPullRequestEditedEvent()) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '2,%s' % + (C.head_sha,)) + + new_a_head = A.head_sha + A.addCommit() + old_a_head = A.head_sha + self.assertNotEqual(old_a_head, new_a_head) + self.fake_github.emitEvent(A.getPullRequestEditedEvent()) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '2,%s 1,1 1,%s' % + (C.head_sha, A.head_sha,)) + + def test_crd_check_unknown(self): + "Test unknown projects in independent pipeline" + self.init_repo("gerrit/unknown", tag='init') + A = self.fake_github.openFakePullRequest('github/project2', 'master', + 'A') + B = self.fake_gerrit.addFakeChange( + 'gerrit/unknown', 'master', 'B') + + # A Depends-On: B + A.editBody('Depends-On: %s\n' % (B.data['url'],)) + + # Make sure zuul has seen an event on B. 
+ self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.fake_github.emitEvent(A.getPullRequestEditedEvent()) + self.waitUntilSettled() + + self.assertFalse(A.is_merged) + self.assertEqual(len(A.comments), 1) + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(B.reported, 0) + + def test_crd_cycle_join(self): + "Test an updated change creates a cycle" + A = self.fake_gerrit.addFakeChange( + 'gerrit/project1', 'master', 'A') + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(A.reported, 1) + + # Create B->A + B = self.fake_github.openFakePullRequest('github/project2', 'master', + 'B') + B.editBody('Depends-On: %s\n' % (A.data['url'],)) + self.fake_github.emitEvent(B.getPullRequestEditedEvent()) + self.waitUntilSettled() + + # Dep is there so zuul should have reported on B + self.assertEqual(len(B.comments), 1) + + # Update A to add A->B (a cycle). + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.url) + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + # Dependency cycle injected so zuul should not have reported again on A + self.assertEqual(A.reported, 1) + + # Now if we update B to remove the depends-on, everything + # should be okay. B; A->B + + B.addCommit() + B.editBody('') + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + # Cycle was removed so now zuul should have reported again on A + self.assertEqual(A.reported, 2) + + self.fake_github.emitEvent(B.getPullRequestEditedEvent()) + self.waitUntilSettled() + self.assertEqual(len(B.comments), 2) diff --git a/tests/unit/test_gerrit_crd.py b/tests/unit/test_gerrit_crd.py new file mode 100644 index 000000000..732bc3d60 --- /dev/null +++ b/tests/unit/test_gerrit_crd.py @@ -0,0 +1,626 @@ +#!/usr/bin/env python + +# Copyright 2012 Hewlett-Packard Development Company, L.P. +# Copyright 2018 Red Hat, Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from tests.base import ( + ZuulTestCase, + simple_layout, +) + + +class TestGerritCRD(ZuulTestCase): + tenant_config_file = 'config/single-tenant/main.yaml' + + def test_crd_gate(self): + "Test cross-repo dependencies" + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + + AM2 = self.fake_gerrit.addFakeChange('org/project1', 'master', 'AM2') + AM1 = self.fake_gerrit.addFakeChange('org/project1', 'master', 'AM1') + AM2.setMerged() + AM1.setMerged() + + BM2 = self.fake_gerrit.addFakeChange('org/project2', 'master', 'BM2') + BM1 = self.fake_gerrit.addFakeChange('org/project2', 'master', 'BM1') + BM2.setMerged() + BM1.setMerged() + + # A -> AM1 -> AM2 + # B -> BM1 -> BM2 + # A Depends-On: B + # M2 is here to make sure it is never queried. If it is, it + # means zuul is walking down the entire history of merged + # changes. 
+ + B.setDependsOn(BM1, 1) + BM1.setDependsOn(BM2, 1) + + A.setDependsOn(AM1, 1) + AM1.setDependsOn(AM2, 1) + + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['url']) + + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + + for connection in self.connections.connections.values(): + connection.maintainCache([]) + + self.executor_server.hold_jobs_in_build = True + B.addApproval('Approved', 1) + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(AM2.queried, 0) + self.assertEqual(BM2.queried, 0) + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertEqual(B.reported, 2) + + changes = self.getJobFromHistory( + 'project-merge', 'org/project1').changes + self.assertEqual(changes, '2,1 1,1') + + def test_crd_branch(self): + "Test cross-repo dependencies in multiple branches" + + self.create_branch('org/project2', 'mp') + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + C1 = self.fake_gerrit.addFakeChange('org/project2', 'mp', 'C1') + + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + C1.addApproval('Code-Review', 2) + + # A Depends-On: B+C1 + A.data['commitMessage'] = '%s\n\nDepends-On: %s\nDepends-On: %s\n' % ( + A.subject, B.data['url'], C1.data['url']) + + self.executor_server.hold_jobs_in_build = True + B.addApproval('Approved', 1) + C1.addApproval('Approved', 1) + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + 
self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(C1.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertEqual(B.reported, 2) + self.assertEqual(C1.reported, 2) + + changes = self.getJobFromHistory( + 'project-merge', 'org/project1').changes + self.assertEqual(changes, '2,1 3,1 1,1') + + def test_crd_multiline(self): + "Test multiple depends-on lines in commit" + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + C = self.fake_gerrit.addFakeChange('org/project2', 'master', 'C') + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + C.addApproval('Code-Review', 2) + + # A Depends-On: B+C + A.data['commitMessage'] = '%s\n\nDepends-On: %s\nDepends-On: %s\n' % ( + A.subject, B.data['url'], C.data['url']) + + self.executor_server.hold_jobs_in_build = True + B.addApproval('Approved', 1) + C.addApproval('Approved', 1) + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(C.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertEqual(B.reported, 2) + self.assertEqual(C.reported, 2) + 
+ changes = self.getJobFromHistory( + 'project-merge', 'org/project1').changes + self.assertEqual(changes, '2,1 3,1 1,1') + + def test_crd_unshared_gate(self): + "Test cross-repo dependencies in unshared gate queues" + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B') + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['url']) + + # A and B do not share a queue, make sure that A is unable to + # enqueue B (and therefore, A is unable to be enqueued). + B.addApproval('Approved', 1) + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(A.reported, 0) + self.assertEqual(B.reported, 0) + self.assertEqual(len(self.history), 0) + + # Enqueue and merge B alone. + self.fake_gerrit.addEvent(B.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(B.reported, 2) + + # Now that B is merged, A should be able to be enqueued and + # merged. 
+ self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + + def test_crd_gate_reverse(self): + "Test reverse cross-repo dependencies" + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + + # A Depends-On: B + + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['url']) + + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + + self.executor_server.hold_jobs_in_build = True + A.addApproval('Approved', 1) + self.fake_gerrit.addEvent(B.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertEqual(B.reported, 2) + + changes = self.getJobFromHistory( + 'project-merge', 'org/project1').changes + self.assertEqual(changes, '2,1 1,1') + + def test_crd_cycle(self): + "Test cross-repo dependency cycles" + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + + # A -> B -> A (via commit-depends) + + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['url']) + B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + B.subject, A.data['url']) + + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + 
self.waitUntilSettled()
+
+ self.assertEqual(A.reported, 0)
+ self.assertEqual(B.reported, 0)
+ self.assertEqual(A.data['status'], 'NEW')
+ self.assertEqual(B.data['status'], 'NEW')
+
+ def test_crd_gate_unknown(self):
+ "Test unknown projects in dependent pipeline"
+ self.init_repo("org/unknown", tag='init')
+ A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+ B = self.fake_gerrit.addFakeChange('org/unknown', 'master', 'B')
+ A.addApproval('Code-Review', 2)
+ B.addApproval('Code-Review', 2)
+
+ # A Depends-On: B
+ A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
+ A.subject, B.data['url'])
+
+ B.addApproval('Approved', 1)
+ self.fake_gerrit.addEvent(A.addApproval('Approved', 1))
+ self.waitUntilSettled()
+
+ # Unknown projects cannot share a queue with any other project,
+ # since they have no jobs in common (they have no jobs at all).
+ # Changes which depend on changes in unknown projects should
+ # not be processed in a dependent pipeline.
+ self.assertEqual(A.data['status'], 'NEW')
+ self.assertEqual(B.data['status'], 'NEW')
+ self.assertEqual(A.reported, 0)
+ self.assertEqual(B.reported, 0)
+ self.assertEqual(len(self.history), 0)
+
+ # Simulate change B being gated outside this layout.  Set the
+ # change merged before submitting the event so that when the
+ # event triggers a gerrit query to update the change, we get
+ # the information that it was merged.
+ B.setMerged()
+ self.fake_gerrit.addEvent(B.addApproval('Approved', 1))
+ self.waitUntilSettled()
+ self.assertEqual(len(self.history), 0)
+
+ # Now that B is merged, A should be able to be enqueued and
+ # merged.
+ self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(B.reported, 0) + + def test_crd_check(self): + "Test cross-repo dependencies in independent pipelines" + + self.executor_server.hold_jobs_in_build = True + self.gearman_server.hold_jobs_in_queue = True + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['url']) + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + self.gearman_server.hold_jobs_in_queue = False + self.gearman_server.release() + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + + self.assertTrue(self.builds[0].hasChanges(A, B)) + + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.reported, 0) + + self.assertEqual(self.history[0].changes, '2,1 1,1') + tenant = self.sched.abide.tenants.get('tenant-one') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + def test_crd_check_git_depends(self): + "Test single-repo dependencies in independent pipelines" + self.gearman_server.hold_jobs_in_build = True + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B') + + # Add two git-dependent changes and make sure they both report + # success. 
+ B.setDependsOn(A, 1) + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + self.orderedRelease() + self.gearman_server.hold_jobs_in_build = False + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.reported, 1) + + self.assertEqual(self.history[0].changes, '1,1') + self.assertEqual(self.history[-1].changes, '1,1 2,1') + tenant = self.sched.abide.tenants.get('tenant-one') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + self.assertIn('Build succeeded', A.messages[0]) + self.assertIn('Build succeeded', B.messages[0]) + + def test_crd_check_duplicate(self): + "Test duplicate check in independent pipelines" + self.executor_server.hold_jobs_in_build = True + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B') + tenant = self.sched.abide.tenants.get('tenant-one') + check_pipeline = tenant.layout.pipelines['check'] + + # Add two git-dependent changes... + B.setDependsOn(A, 1) + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 2) + + # ...make sure the live one is not duplicated... + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 2) + + # ...but the non-live one is able to be. + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 3) + + # Release jobs in order to avoid races with change A jobs + # finishing before change B jobs. 
+ self.orderedRelease() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.reported, 1) + + self.assertEqual(self.history[0].changes, '1,1 2,1') + self.assertEqual(self.history[1].changes, '1,1') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + self.assertIn('Build succeeded', A.messages[0]) + self.assertIn('Build succeeded', B.messages[0]) + + def _test_crd_check_reconfiguration(self, project1, project2): + "Test cross-repo dependencies re-enqueued in independent pipelines" + + self.gearman_server.hold_jobs_in_queue = True + A = self.fake_gerrit.addFakeChange(project1, 'master', 'A') + B = self.fake_gerrit.addFakeChange(project2, 'master', 'B') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['url']) + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + self.sched.reconfigure(self.config) + + # Make sure the items still share a change queue, and the + # first one is not live. 
+ tenant = self.sched.abide.tenants.get('tenant-one') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 1) + queue = tenant.layout.pipelines['check'].queues[0] + first_item = queue.queue[0] + for item in queue.queue: + self.assertEqual(item.queue, first_item.queue) + self.assertFalse(first_item.live) + self.assertTrue(queue.queue[1].live) + + self.gearman_server.hold_jobs_in_queue = False + self.gearman_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.reported, 0) + + self.assertEqual(self.history[0].changes, '2,1 1,1') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + def test_crd_check_reconfiguration(self): + self._test_crd_check_reconfiguration('org/project1', 'org/project2') + + def test_crd_undefined_project(self): + """Test that undefined projects in dependencies are handled for + independent pipelines""" + # It's a hack for fake gerrit, + # as it implies repo creation upon the creation of any change + self.init_repo("org/unknown", tag='init') + self._test_crd_check_reconfiguration('org/project1', 'org/unknown') + + @simple_layout('layouts/ignore-dependencies.yaml') + def test_crd_check_ignore_dependencies(self): + "Test cross-repo dependencies can be ignored" + + self.gearman_server.hold_jobs_in_queue = True + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + C = self.fake_gerrit.addFakeChange('org/project2', 'master', 'C') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['url']) + # C git-depends on B + C.setDependsOn(B, 1) + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + # Make sure none of the 
items share a change queue, and all + # are live. + tenant = self.sched.abide.tenants.get('tenant-one') + check_pipeline = tenant.layout.pipelines['check'] + self.assertEqual(len(check_pipeline.queues), 3) + self.assertEqual(len(check_pipeline.getAllItems()), 3) + for item in check_pipeline.getAllItems(): + self.assertTrue(item.live) + + self.gearman_server.hold_jobs_in_queue = False + self.gearman_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(C.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.reported, 1) + self.assertEqual(C.reported, 1) + + # Each job should have tested exactly one change + for job in self.history: + self.assertEqual(len(job.changes.split()), 1) + + @simple_layout('layouts/three-projects.yaml') + def test_crd_check_transitive(self): + "Test transitive cross-repo dependencies" + # Specifically, if A -> B -> C, and C gets a new patchset and + # A gets a new patchset, ensure the test of A,2 includes B,1 + # and C,2 (not C,1 which would indicate stale data in the + # cache for B). 
+ A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + C = self.fake_gerrit.addFakeChange('org/project3', 'master', 'C') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['url']) + + # B Depends-On: C + B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + B.subject, C.data['url']) + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '3,1 2,1 1,1') + + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '3,1 2,1') + + self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '3,1') + + C.addPatchset() + self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(2)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '3,2') + + A.addPatchset() + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '3,2 2,1 1,2') + + def test_crd_check_unknown(self): + "Test unknown projects in independent pipeline" + self.init_repo("org/unknown", tag='init') + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/unknown', 'master', 'D') + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['url']) + + # Make sure zuul has seen an event on B. 
+ self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+ self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+ self.waitUntilSettled()
+
+ self.assertEqual(A.data['status'], 'NEW')
+ self.assertEqual(A.reported, 1)
+ self.assertEqual(B.data['status'], 'NEW')
+ self.assertEqual(B.reported, 0)
+
+ def test_crd_cycle_join(self):
+ "Test that an updated change creates a cycle"
+ A = self.fake_gerrit.addFakeChange('org/project2', 'master', 'A')
+
+ self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+ self.waitUntilSettled()
+ self.assertEqual(A.reported, 1)
+
+ # Create B->A
+ B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
+ B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
+ B.subject, A.data['url'])
+ self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+ self.waitUntilSettled()
+
+ # The dep is there, so zuul should have reported on B.
+ self.assertEqual(B.reported, 1)
+
+ # Update A to add A->B (a cycle).
+ A.addPatchset()
+ A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
+ A.subject, B.data['url'])
+ self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2))
+ self.waitUntilSettled()
+
+ # A dependency cycle was injected, so zuul should not have reported again on A.
+ self.assertEqual(A.reported, 1)
+
+ # Now if we update B to remove the depends-on, everything
+ # should be okay.
B; A->B
+
+ B.addPatchset()
+ B.data['commitMessage'] = '%s\n' % (B.subject,)
+ self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2))
+ self.waitUntilSettled()
+
+ # The cycle was removed, so zuul should now have reported again on A.
+ self.assertEqual(A.reported, 2)
+
+ self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(2))
+ self.waitUntilSettled()
+ self.assertEqual(B.reported, 2)
diff --git a/tests/unit/test_gerrit_legacy_crd.py b/tests/unit/test_gerrit_legacy_crd.py
new file mode 100644
index 000000000..c711e4d95
--- /dev/null
+++ b/tests/unit/test_gerrit_legacy_crd.py
@@ -0,0 +1,629 @@
+#!/usr/bin/env python

+# Copyright 2012 Hewlett-Packard Development Company, L.P.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+ +from tests.base import ( + ZuulTestCase, + simple_layout, +) + + +class TestGerritLegacyCRD(ZuulTestCase): + tenant_config_file = 'config/single-tenant/main.yaml' + + def test_crd_gate(self): + "Test cross-repo dependencies" + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + + AM2 = self.fake_gerrit.addFakeChange('org/project1', 'master', 'AM2') + AM1 = self.fake_gerrit.addFakeChange('org/project1', 'master', 'AM1') + AM2.setMerged() + AM1.setMerged() + + BM2 = self.fake_gerrit.addFakeChange('org/project2', 'master', 'BM2') + BM1 = self.fake_gerrit.addFakeChange('org/project2', 'master', 'BM1') + BM2.setMerged() + BM1.setMerged() + + # A -> AM1 -> AM2 + # B -> BM1 -> BM2 + # A Depends-On: B + # M2 is here to make sure it is never queried. If it is, it + # means zuul is walking down the entire history of merged + # changes. + + B.setDependsOn(BM1, 1) + BM1.setDependsOn(BM2, 1) + + A.setDependsOn(AM1, 1) + AM1.setDependsOn(AM2, 1) + + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['id']) + + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + + for connection in self.connections.connections.values(): + connection.maintainCache([]) + + self.executor_server.hold_jobs_in_build = True + B.addApproval('Approved', 1) + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(AM2.queried, 0) + self.assertEqual(BM2.queried, 0) + self.assertEqual(A.data['status'], 'MERGED') + 
self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertEqual(B.reported, 2) + + changes = self.getJobFromHistory( + 'project-merge', 'org/project1').changes + self.assertEqual(changes, '2,1 1,1') + + def test_crd_branch(self): + "Test cross-repo dependencies in multiple branches" + + self.create_branch('org/project2', 'mp') + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + C1 = self.fake_gerrit.addFakeChange('org/project2', 'mp', 'C1') + C2 = self.fake_gerrit.addFakeChange('org/project2', 'mp', 'C2', + status='ABANDONED') + C1.data['id'] = B.data['id'] + C2.data['id'] = B.data['id'] + + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + C1.addApproval('Code-Review', 2) + + # A Depends-On: B+C1 + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['id']) + + self.executor_server.hold_jobs_in_build = True + B.addApproval('Approved', 1) + C1.addApproval('Approved', 1) + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(C1.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertEqual(B.reported, 2) + self.assertEqual(C1.reported, 2) + + changes = self.getJobFromHistory( + 'project-merge', 'org/project1').changes + self.assertEqual(changes, '2,1 3,1 1,1') + + def test_crd_multiline(self): + "Test multiple depends-on lines in commit" + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = 
self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + C = self.fake_gerrit.addFakeChange('org/project2', 'master', 'C') + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + C.addApproval('Code-Review', 2) + + # A Depends-On: B+C + A.data['commitMessage'] = '%s\n\nDepends-On: %s\nDepends-On: %s\n' % ( + A.subject, B.data['id'], C.data['id']) + + self.executor_server.hold_jobs_in_build = True + B.addApproval('Approved', 1) + C.addApproval('Approved', 1) + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(C.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertEqual(B.reported, 2) + self.assertEqual(C.reported, 2) + + changes = self.getJobFromHistory( + 'project-merge', 'org/project1').changes + self.assertEqual(changes, '2,1 3,1 1,1') + + def test_crd_unshared_gate(self): + "Test cross-repo dependencies in unshared gate queues" + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B') + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['id']) + + # A and B do not share a queue, make sure that A is unable to + # enqueue B (and therefore, A is unable to be enqueued). 
+ B.addApproval('Approved', 1) + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(A.reported, 0) + self.assertEqual(B.reported, 0) + self.assertEqual(len(self.history), 0) + + # Enqueue and merge B alone. + self.fake_gerrit.addEvent(B.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(B.reported, 2) + + # Now that B is merged, A should be able to be enqueued and + # merged. + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + + def test_crd_gate_reverse(self): + "Test reverse cross-repo dependencies" + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + A.addApproval('Code-Review', 2) + B.addApproval('Code-Review', 2) + + # A Depends-On: B + + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['id']) + + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + + self.executor_server.hold_jobs_in_build = True + A.addApproval('Approved', 1) + self.fake_gerrit.addEvent(B.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.release('.*-merge') + self.waitUntilSettled() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertEqual(B.reported, 2) + + changes = self.getJobFromHistory( + 'project-merge', 'org/project1').changes + 
self.assertEqual(changes, '2,1 1,1')
+
+ def test_crd_cycle(self):
+ "Test cross-repo dependency cycles"
+ A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
+ B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
+ A.addApproval('Code-Review', 2)
+ B.addApproval('Code-Review', 2)
+
+ # A -> B -> A (via commit-depends)
+
+ A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
+ A.subject, B.data['id'])
+ B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
+ B.subject, A.data['id'])
+
+ self.fake_gerrit.addEvent(A.addApproval('Approved', 1))
+ self.waitUntilSettled()
+
+ self.assertEqual(A.reported, 0)
+ self.assertEqual(B.reported, 0)
+ self.assertEqual(A.data['status'], 'NEW')
+ self.assertEqual(B.data['status'], 'NEW')
+
+ def test_crd_gate_unknown(self):
+ "Test unknown projects in dependent pipeline"
+ self.init_repo("org/unknown", tag='init')
+ A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+ B = self.fake_gerrit.addFakeChange('org/unknown', 'master', 'B')
+ A.addApproval('Code-Review', 2)
+ B.addApproval('Code-Review', 2)
+
+ # A Depends-On: B
+ A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
+ A.subject, B.data['id'])
+
+ B.addApproval('Approved', 1)
+ self.fake_gerrit.addEvent(A.addApproval('Approved', 1))
+ self.waitUntilSettled()
+
+ # Unknown projects cannot share a queue with any other project,
+ # since they have no jobs in common (they have no jobs at all).
+ # Changes which depend on changes in unknown projects should
+ # not be processed in a dependent pipeline.
+ self.assertEqual(A.data['status'], 'NEW')
+ self.assertEqual(B.data['status'], 'NEW')
+ self.assertEqual(A.reported, 0)
+ self.assertEqual(B.reported, 0)
+ self.assertEqual(len(self.history), 0)
+
+ # Simulate change B being gated outside this layout.  Set the
+ # change merged before submitting the event so that when the
+ # event triggers a gerrit query to update the change, we get
+ # the information that it was merged.
+ B.setMerged() + self.fake_gerrit.addEvent(B.addApproval('Approved', 1)) + self.waitUntilSettled() + self.assertEqual(len(self.history), 0) + + # Now that B is merged, A should be able to be enqueued and + # merged. + self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'MERGED') + self.assertEqual(A.reported, 2) + self.assertEqual(B.data['status'], 'MERGED') + self.assertEqual(B.reported, 0) + + def test_crd_check(self): + "Test cross-repo dependencies in independent pipelines" + + self.executor_server.hold_jobs_in_build = True + self.gearman_server.hold_jobs_in_queue = True + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['id']) + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + self.gearman_server.hold_jobs_in_queue = False + self.gearman_server.release() + self.waitUntilSettled() + + self.executor_server.release('.*-merge') + self.waitUntilSettled() + + self.assertTrue(self.builds[0].hasChanges(A, B)) + + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.reported, 0) + + self.assertEqual(self.history[0].changes, '2,1 1,1') + tenant = self.sched.abide.tenants.get('tenant-one') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + def test_crd_check_git_depends(self): + "Test single-repo dependencies in independent pipelines" + self.gearman_server.hold_jobs_in_build = True + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B') + + # Add two git-dependent changes and make sure 
they both report + # success. + B.setDependsOn(A, 1) + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + self.orderedRelease() + self.gearman_server.hold_jobs_in_build = False + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.reported, 1) + + self.assertEqual(self.history[0].changes, '1,1') + self.assertEqual(self.history[-1].changes, '1,1 2,1') + tenant = self.sched.abide.tenants.get('tenant-one') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + self.assertIn('Build succeeded', A.messages[0]) + self.assertIn('Build succeeded', B.messages[0]) + + def test_crd_check_duplicate(self): + "Test duplicate check in independent pipelines" + self.executor_server.hold_jobs_in_build = True + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B') + tenant = self.sched.abide.tenants.get('tenant-one') + check_pipeline = tenant.layout.pipelines['check'] + + # Add two git-dependent changes... + B.setDependsOn(A, 1) + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 2) + + # ...make sure the live one is not duplicated... + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 2) + + # ...but the non-live one is able to be. + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(len(check_pipeline.getAllItems()), 3) + + # Release jobs in order to avoid races with change A jobs + # finishing before change B jobs. 
+ self.orderedRelease() + self.executor_server.hold_jobs_in_build = False + self.executor_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.reported, 1) + + self.assertEqual(self.history[0].changes, '1,1 2,1') + self.assertEqual(self.history[1].changes, '1,1') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + self.assertIn('Build succeeded', A.messages[0]) + self.assertIn('Build succeeded', B.messages[0]) + + def _test_crd_check_reconfiguration(self, project1, project2): + "Test cross-repo dependencies re-enqueued in independent pipelines" + + self.gearman_server.hold_jobs_in_queue = True + A = self.fake_gerrit.addFakeChange(project1, 'master', 'A') + B = self.fake_gerrit.addFakeChange(project2, 'master', 'B') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['id']) + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + self.sched.reconfigure(self.config) + + # Make sure the items still share a change queue, and the + # first one is not live. 
+ tenant = self.sched.abide.tenants.get('tenant-one') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 1) + queue = tenant.layout.pipelines['check'].queues[0] + first_item = queue.queue[0] + for item in queue.queue: + self.assertEqual(item.queue, first_item.queue) + self.assertFalse(first_item.live) + self.assertTrue(queue.queue[1].live) + + self.gearman_server.hold_jobs_in_queue = False + self.gearman_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.reported, 0) + + self.assertEqual(self.history[0].changes, '2,1 1,1') + self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) + + def test_crd_check_reconfiguration(self): + self._test_crd_check_reconfiguration('org/project1', 'org/project2') + + def test_crd_undefined_project(self): + """Test that undefined projects in dependencies are handled for + independent pipelines""" + # It's a hack for fake gerrit, + # as it implies repo creation upon the creation of any change + self.init_repo("org/unknown", tag='init') + self._test_crd_check_reconfiguration('org/project1', 'org/unknown') + + @simple_layout('layouts/ignore-dependencies.yaml') + def test_crd_check_ignore_dependencies(self): + "Test cross-repo dependencies can be ignored" + + self.gearman_server.hold_jobs_in_queue = True + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + C = self.fake_gerrit.addFakeChange('org/project2', 'master', 'C') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['id']) + # C git-depends on B + C.setDependsOn(B, 1) + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + # Make sure none of the 
items share a change queue, and all + # are live. + tenant = self.sched.abide.tenants.get('tenant-one') + check_pipeline = tenant.layout.pipelines['check'] + self.assertEqual(len(check_pipeline.queues), 3) + self.assertEqual(len(check_pipeline.getAllItems()), 3) + for item in check_pipeline.getAllItems(): + self.assertTrue(item.live) + + self.gearman_server.hold_jobs_in_queue = False + self.gearman_server.release() + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(C.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.reported, 1) + self.assertEqual(C.reported, 1) + + # Each job should have tested exactly one change + for job in self.history: + self.assertEqual(len(job.changes.split()), 1) + + @simple_layout('layouts/three-projects.yaml') + def test_crd_check_transitive(self): + "Test transitive cross-repo dependencies" + # Specifically, if A -> B -> C, and C gets a new patchset and + # A gets a new patchset, ensure the test of A,2 includes B,1 + # and C,2 (not C,1 which would indicate stale data in the + # cache for B). 
+ A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') + C = self.fake_gerrit.addFakeChange('org/project3', 'master', 'C') + + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['id']) + + # B Depends-On: C + B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + B.subject, C.data['id']) + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '3,1 2,1 1,1') + + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '3,1 2,1') + + self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '3,1') + + C.addPatchset() + self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(2)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '3,2') + + A.addPatchset() + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2)) + self.waitUntilSettled() + self.assertEqual(self.history[-1].changes, '3,2 2,1 1,2') + + def test_crd_check_unknown(self): + "Test unknown projects in independent pipeline" + self.init_repo("org/unknown", tag='init') + A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') + B = self.fake_gerrit.addFakeChange('org/unknown', 'master', 'D') + # A Depends-On: B + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['id']) + + # Make sure zuul has seen an event on B. 
+ self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + self.assertEqual(A.data['status'], 'NEW') + self.assertEqual(A.reported, 1) + self.assertEqual(B.data['status'], 'NEW') + self.assertEqual(B.reported, 0) + + def test_crd_cycle_join(self): + "Test an updated change creates a cycle" + A = self.fake_gerrit.addFakeChange('org/project2', 'master', 'A') + + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + self.assertEqual(A.reported, 1) + + # Create B->A + B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B') + B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + B.subject, A.data['id']) + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) + self.waitUntilSettled() + + # Dep is there so zuul should have reported on B + self.assertEqual(B.reported, 1) + + # Update A to add A->B (a cycle). + A.addPatchset() + A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( + A.subject, B.data['id']) + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2)) + self.waitUntilSettled() + + # Dependency cycle injected so zuul should not have reported again on A + self.assertEqual(A.reported, 1) + + # Now if we update B to remove the depends-on, everything + # should be okay. 
B; A->B + + B.addPatchset() + B.data['commitMessage'] = '%s\n' % (B.subject,) + self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2)) + self.waitUntilSettled() + + # Cycle was removed so now zuul should have reported again on A + self.assertEqual(A.reported, 2) + + self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(2)) + self.waitUntilSettled() + self.assertEqual(B.reported, 2) diff --git a/tests/unit/test_git_driver.py b/tests/unit/test_git_driver.py index 1cfadf470..b9e6c6e92 100644 --- a/tests/unit/test_git_driver.py +++ b/tests/unit/test_git_driver.py @@ -12,7 +12,12 @@ # License for the specific language governing permissions and limitations # under the License. -from tests.base import ZuulTestCase + +import os +import time +import yaml + +from tests.base import ZuulTestCase, simple_layout class TestGitDriver(ZuulTestCase): @@ -23,7 +28,7 @@ class TestGitDriver(ZuulTestCase): super(TestGitDriver, self).setup_config() self.config.set('connection git', 'baseurl', self.upstream_root) - def test_git_driver(self): + def test_basic(self): tenant = self.sched.abide.tenants.get('tenant-one') # Check that we have the git source for common-config and the # gerrit source for the project. 
@@ -40,3 +45,127 @@ class TestGitDriver(ZuulTestCase):
         self.waitUntilSettled()
         self.assertEqual(len(self.history), 1)
         self.assertEqual(A.reported, 1)
+
+    def test_config_refreshed(self):
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertEqual(len(self.history), 1)
+        self.assertEqual(A.reported, 1)
+        self.assertEqual(self.history[0].name, 'project-test1')
+
+        # Update zuul.yaml to force a tenant reconfiguration
+        path = os.path.join(self.upstream_root, 'common-config', 'zuul.yaml')
+        config = yaml.load(open(path, 'r').read())
+        change = {
+            'name': 'org/project',
+            'check': {
+                'jobs': [
+                    'project-test2'
+                ]
+            }
+        }
+        config[4]['project'] = change
+        files = {'zuul.yaml': yaml.dump(config)}
+        self.addCommitToRepo(
+            'common-config', 'Change zuul.yaml configuration', files)
+
+        # Allow some time for the tenant reconfiguration to happen
+        time.sleep(2)
+        self.waitUntilSettled()
+
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertEqual(len(self.history), 2)
+        self.assertEqual(A.reported, 1)
+        # We make sure the new job has run
+        self.assertEqual(self.history[1].name, 'project-test2')
+
+        # Let's stop the git watcher so we can merge some commits.
+        # We want to verify that config changes are detected for commits
+        # on the range oldrev..newrev
+        self.sched.connections.getSource('git').connection.w_pause = True
+        # Add a config change
+        change = {
+            'name': 'org/project',
+            'check': {
+                'jobs': [
+                    'project-test1'
+                ]
+            }
+        }
+        config[4]['project'] = change
+        files = {'zuul.yaml': yaml.dump(config)}
+        self.addCommitToRepo(
+            'common-config', 'Change zuul.yaml configuration', files)
+        # Add two other changes
+        self.addCommitToRepo(
+            'common-config', 'Adding f1',
+            {'f1': "Content"})
+        self.addCommitToRepo(
+            'common-config', 'Adding f2',
+            
{'f2': "Content"})
+        # Restart the git watcher
+        self.sched.connections.getSource('git').connection.w_pause = False
+
+        # Allow some time for the tenant reconfiguration to happen
+        time.sleep(2)
+        self.waitUntilSettled()
+
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertEqual(len(self.history), 3)
+        self.assertEqual(A.reported, 1)
+        # We make sure the new job has run
+        self.assertEqual(self.history[2].name, 'project-test1')
+
+    def ensure_watcher_has_context(self):
+        # Make sure the watcher has read the initial ref SHAs
+        cnx = self.sched.connections.getSource('git').connection
+        delay = 0.1
+        max_delay = 1
+        while not cnx.projects_refs:
+            time.sleep(delay)
+            max_delay -= delay
+            if max_delay <= 0:
+                raise Exception("Timeout waiting for initial read")
+
+    @simple_layout('layouts/basic-git.yaml', driver='git')
+    def test_ref_updated_event(self):
+        self.ensure_watcher_has_context()
+        # Add a commit to trigger a ref-updated event
+        self.addCommitToRepo(
+            'org/project', 'A change for ref-updated', {'f1': 'Content'})
+        # Allow some time for the git watcher to detect the ref-update event
+        time.sleep(0.2)
+        self.waitUntilSettled()
+        self.assertEqual(len(self.history), 1)
+        self.assertEqual('SUCCESS',
+                         self.getJobFromHistory('post-job').result)
+
+    @simple_layout('layouts/basic-git.yaml', driver='git')
+    def test_ref_created(self):
+        self.ensure_watcher_has_context()
+        # Tag HEAD to trigger a ref-updated event
+        self.addTagToRepo(
+            'org/project', 'atag', 'HEAD')
+        # Allow some time for the git watcher to detect the ref-update event
+        time.sleep(0.2)
+        self.waitUntilSettled()
+        self.assertEqual(len(self.history), 1)
+        self.assertEqual('SUCCESS',
+                         self.getJobFromHistory('tag-job').result)
+
+    @simple_layout('layouts/basic-git.yaml', driver='git')
+    def test_ref_deleted(self):
+        self.ensure_watcher_has_context()
+        # Delete default tag init to trigger a ref-updated 
event
+        self.delTagFromRepo(
+            'org/project', 'init')
+        # Allow some time for the git watcher to detect the ref-update event
+        time.sleep(0.2)
+        self.waitUntilSettled()
+        # Make sure no job has run, as ignore-delete is True by default
+        self.assertEqual(len(self.history), 0)
diff --git a/tests/unit/test_github_driver.py b/tests/unit/test_github_driver.py
index ebb5e1c85..3942b0be8 100644
--- a/tests/unit/test_github_driver.py
+++ b/tests/unit/test_github_driver.py
@@ -50,6 +50,12 @@ class TestGithubDriver(ZuulTestCase):
         self.assertEqual(str(A.head_sha), zuulvars['patchset'])
         self.assertEqual('master', zuulvars['branch'])
         self.assertEqual(1, len(A.comments))
+        self.assertThat(
+            A.comments[0],
+            MatchesRegex('.*\[project-test1 \]\(.*\).*', re.DOTALL))
+        self.assertThat(
+            A.comments[0],
+            MatchesRegex('.*\[project-test2 \]\(.*\).*', re.DOTALL))
         self.assertEqual(2, len(self.history))
 
     # test_pull_unmatched_branch_event(self):
@@ -243,19 +249,28 @@ class TestGithubDriver(ZuulTestCase):
     @simple_layout('layouts/basic-github.yaml', driver='github')
     def test_git_https_url(self):
         """Test that git_ssh option gives git url with ssh"""
-        url = self.fake_github.real_getGitUrl('org/project')
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        _, project = tenant.getProject('org/project')
+
+        url = self.fake_github.real_getGitUrl(project)
         self.assertEqual('https://github.com/org/project', url)
 
     @simple_layout('layouts/basic-github.yaml', driver='github')
     def test_git_ssh_url(self):
         """Test that git_ssh option gives git url with ssh"""
-        url = self.fake_github_ssh.real_getGitUrl('org/project')
+        tenant = self.sched.abide.tenants.get('tenant-one')
+        _, project = tenant.getProject('org/project')
+
+        url = self.fake_github_ssh.real_getGitUrl(project)
         self.assertEqual('ssh://git@github.com/org/project.git', url)
 
     @simple_layout('layouts/basic-github.yaml', driver='github')
     def test_git_enterprise_url(self):
         """Test that git_url option gives git url with proper host"""
-        url = 
self.fake_github_ent.real_getGitUrl('org/project') + tenant = self.sched.abide.tenants.get('tenant-one') + _, project = tenant.getProject('org/project') + + url = self.fake_github_ent.real_getGitUrl(project) self.assertEqual('ssh://git@github.enterprise.io/org/project.git', url) @simple_layout('layouts/reporting-github.yaml', driver='github') diff --git a/tests/unit/test_inventory.py b/tests/unit/test_inventory.py index 1c41f5fa5..b7e35ebd2 100644 --- a/tests/unit/test_inventory.py +++ b/tests/unit/test_inventory.py @@ -37,6 +37,12 @@ class TestInventory(ZuulTestCase): inv_path = os.path.join(build.jobdir.root, 'ansible', 'inventory.yaml') return yaml.safe_load(open(inv_path, 'r')) + def _get_setup_inventory(self, name): + build = self.getBuildByName(name) + setup_inv_path = os.path.join(build.jobdir.root, 'ansible', + 'setup-inventory.yaml') + return yaml.safe_load(open(setup_inv_path, 'r')) + def test_single_inventory(self): inventory = self._get_build_inventory('single-inventory') @@ -119,5 +125,35 @@ class TestInventory(ZuulTestCase): self.assertEqual( inventory['all']['hosts'][node_name]['ansible_user'], username) + # check if the nodes use the correct or no ansible_connection + if node_name == 'windows': + self.assertEqual( + inventory['all']['hosts'][node_name]['ansible_connection'], + 'winrm') + else: + self.assertEqual( + 'local', + inventory['all']['hosts'][node_name]['ansible_connection']) + + self.executor_server.release() + self.waitUntilSettled() + + def test_setup_inventory(self): + + setup_inventory = self._get_setup_inventory('hostvars-inventory') + inventory = self._get_build_inventory('hostvars-inventory') + + self.assertIn('all', inventory) + self.assertIn('hosts', inventory['all']) + + self.assertIn('default', setup_inventory['all']['hosts']) + self.assertIn('fakeuser', setup_inventory['all']['hosts']) + self.assertIn('windows', setup_inventory['all']['hosts']) + self.assertNotIn('network', setup_inventory['all']['hosts']) + 
self.assertIn('default', inventory['all']['hosts']) + self.assertIn('fakeuser', inventory['all']['hosts']) + self.assertIn('windows', inventory['all']['hosts']) + self.assertIn('network', inventory['all']['hosts']) + self.executor_server.release() self.waitUntilSettled() diff --git a/tests/unit/test_scheduler.py b/tests/unit/test_scheduler.py index aacc81e00..5db20b317 100755 --- a/tests/unit/test_scheduler.py +++ b/tests/unit/test_scheduler.py @@ -4196,7 +4196,7 @@ For CI problems and help debugging, contact ci@example.org""" running_item = running_items[0] self.assertEqual([], running_item['failing_reasons']) self.assertEqual([], running_item['items_behind']) - self.assertEqual('https://hostname/1', running_item['url']) + self.assertEqual('https://review.example.com/1', running_item['url']) self.assertIsNone(running_item['item_ahead']) self.assertEqual('org/project', running_item['project']) self.assertIsNone(running_item['remaining_time']) @@ -4247,611 +4247,6 @@ For CI problems and help debugging, contact ci@example.org""" 'SUCCESS') self.assertEqual(A.reported, 1) - def test_crd_gate(self): - "Test cross-repo dependencies" - A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') - B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') - A.addApproval('Code-Review', 2) - B.addApproval('Code-Review', 2) - - AM2 = self.fake_gerrit.addFakeChange('org/project1', 'master', 'AM2') - AM1 = self.fake_gerrit.addFakeChange('org/project1', 'master', 'AM1') - AM2.setMerged() - AM1.setMerged() - - BM2 = self.fake_gerrit.addFakeChange('org/project2', 'master', 'BM2') - BM1 = self.fake_gerrit.addFakeChange('org/project2', 'master', 'BM1') - BM2.setMerged() - BM1.setMerged() - - # A -> AM1 -> AM2 - # B -> BM1 -> BM2 - # A Depends-On: B - # M2 is here to make sure it is never queried. If it is, it - # means zuul is walking down the entire history of merged - # changes. 
- - B.setDependsOn(BM1, 1) - BM1.setDependsOn(BM2, 1) - - A.setDependsOn(AM1, 1) - AM1.setDependsOn(AM2, 1) - - A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( - A.subject, B.data['id']) - - self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'NEW') - self.assertEqual(B.data['status'], 'NEW') - - for connection in self.connections.connections.values(): - connection.maintainCache([]) - - self.executor_server.hold_jobs_in_build = True - B.addApproval('Approved', 1) - self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) - self.waitUntilSettled() - - self.executor_server.release('.*-merge') - self.waitUntilSettled() - self.executor_server.release('.*-merge') - self.waitUntilSettled() - self.executor_server.hold_jobs_in_build = False - self.executor_server.release() - self.waitUntilSettled() - - self.assertEqual(AM2.queried, 0) - self.assertEqual(BM2.queried, 0) - self.assertEqual(A.data['status'], 'MERGED') - self.assertEqual(B.data['status'], 'MERGED') - self.assertEqual(A.reported, 2) - self.assertEqual(B.reported, 2) - - changes = self.getJobFromHistory( - 'project-merge', 'org/project1').changes - self.assertEqual(changes, '2,1 1,1') - - def test_crd_branch(self): - "Test cross-repo dependencies in multiple branches" - - self.create_branch('org/project2', 'mp') - A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') - B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') - C1 = self.fake_gerrit.addFakeChange('org/project2', 'mp', 'C1') - C2 = self.fake_gerrit.addFakeChange('org/project2', 'mp', 'C2', - status='ABANDONED') - C1.data['id'] = B.data['id'] - C2.data['id'] = B.data['id'] - - A.addApproval('Code-Review', 2) - B.addApproval('Code-Review', 2) - C1.addApproval('Code-Review', 2) - - # A Depends-On: B+C1 - A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( - A.subject, B.data['id']) - - self.executor_server.hold_jobs_in_build = True - 
B.addApproval('Approved', 1) - C1.addApproval('Approved', 1) - self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) - self.waitUntilSettled() - - self.executor_server.release('.*-merge') - self.waitUntilSettled() - self.executor_server.release('.*-merge') - self.waitUntilSettled() - self.executor_server.release('.*-merge') - self.waitUntilSettled() - self.executor_server.hold_jobs_in_build = False - self.executor_server.release() - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'MERGED') - self.assertEqual(B.data['status'], 'MERGED') - self.assertEqual(C1.data['status'], 'MERGED') - self.assertEqual(A.reported, 2) - self.assertEqual(B.reported, 2) - self.assertEqual(C1.reported, 2) - - changes = self.getJobFromHistory( - 'project-merge', 'org/project1').changes - self.assertEqual(changes, '2,1 3,1 1,1') - - def test_crd_multiline(self): - "Test multiple depends-on lines in commit" - A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') - B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') - C = self.fake_gerrit.addFakeChange('org/project2', 'master', 'C') - A.addApproval('Code-Review', 2) - B.addApproval('Code-Review', 2) - C.addApproval('Code-Review', 2) - - # A Depends-On: B+C - A.data['commitMessage'] = '%s\n\nDepends-On: %s\nDepends-On: %s\n' % ( - A.subject, B.data['id'], C.data['id']) - - self.executor_server.hold_jobs_in_build = True - B.addApproval('Approved', 1) - C.addApproval('Approved', 1) - self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) - self.waitUntilSettled() - - self.executor_server.release('.*-merge') - self.waitUntilSettled() - self.executor_server.release('.*-merge') - self.waitUntilSettled() - self.executor_server.release('.*-merge') - self.waitUntilSettled() - self.executor_server.hold_jobs_in_build = False - self.executor_server.release() - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'MERGED') - self.assertEqual(B.data['status'], 'MERGED') - 
self.assertEqual(C.data['status'], 'MERGED') - self.assertEqual(A.reported, 2) - self.assertEqual(B.reported, 2) - self.assertEqual(C.reported, 2) - - changes = self.getJobFromHistory( - 'project-merge', 'org/project1').changes - self.assertEqual(changes, '2,1 3,1 1,1') - - def test_crd_unshared_gate(self): - "Test cross-repo dependencies in unshared gate queues" - A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') - B = self.fake_gerrit.addFakeChange('org/project', 'master', 'B') - A.addApproval('Code-Review', 2) - B.addApproval('Code-Review', 2) - - # A Depends-On: B - A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( - A.subject, B.data['id']) - - # A and B do not share a queue, make sure that A is unable to - # enqueue B (and therefore, A is unable to be enqueued). - B.addApproval('Approved', 1) - self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'NEW') - self.assertEqual(B.data['status'], 'NEW') - self.assertEqual(A.reported, 0) - self.assertEqual(B.reported, 0) - self.assertEqual(len(self.history), 0) - - # Enqueue and merge B alone. - self.fake_gerrit.addEvent(B.addApproval('Approved', 1)) - self.waitUntilSettled() - - self.assertEqual(B.data['status'], 'MERGED') - self.assertEqual(B.reported, 2) - - # Now that B is merged, A should be able to be enqueued and - # merged. 
- self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'MERGED') - self.assertEqual(A.reported, 2) - - def test_crd_gate_reverse(self): - "Test reverse cross-repo dependencies" - A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') - B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') - A.addApproval('Code-Review', 2) - B.addApproval('Code-Review', 2) - - # A Depends-On: B - - A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( - A.subject, B.data['id']) - - self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'NEW') - self.assertEqual(B.data['status'], 'NEW') - - self.executor_server.hold_jobs_in_build = True - A.addApproval('Approved', 1) - self.fake_gerrit.addEvent(B.addApproval('Approved', 1)) - self.waitUntilSettled() - - self.executor_server.release('.*-merge') - self.waitUntilSettled() - self.executor_server.release('.*-merge') - self.waitUntilSettled() - self.executor_server.hold_jobs_in_build = False - self.executor_server.release() - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'MERGED') - self.assertEqual(B.data['status'], 'MERGED') - self.assertEqual(A.reported, 2) - self.assertEqual(B.reported, 2) - - changes = self.getJobFromHistory( - 'project-merge', 'org/project1').changes - self.assertEqual(changes, '2,1 1,1') - - def test_crd_cycle(self): - "Test cross-repo dependency cycles" - A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') - B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') - A.addApproval('Code-Review', 2) - B.addApproval('Code-Review', 2) - - # A -> B -> A (via commit-depends) - - A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( - A.subject, B.data['id']) - B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( - B.subject, A.data['id']) - - self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) - self.waitUntilSettled() 
- - self.assertEqual(A.reported, 0) - self.assertEqual(B.reported, 0) - self.assertEqual(A.data['status'], 'NEW') - self.assertEqual(B.data['status'], 'NEW') - - def test_crd_gate_unknown(self): - "Test unknown projects in dependent pipeline" - self.init_repo("org/unknown", tag='init') - A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A') - B = self.fake_gerrit.addFakeChange('org/unknown', 'master', 'B') - A.addApproval('Code-Review', 2) - B.addApproval('Code-Review', 2) - - # A Depends-On: B - A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( - A.subject, B.data['id']) - - B.addApproval('Approved', 1) - self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) - self.waitUntilSettled() - - # Unknown projects cannot share a queue with any other - # since they don't have common jobs with any other (they have no jobs). - # Changes which depend on unknown project changes - # should not be processed in dependent pipeline - self.assertEqual(A.data['status'], 'NEW') - self.assertEqual(B.data['status'], 'NEW') - self.assertEqual(A.reported, 0) - self.assertEqual(B.reported, 0) - self.assertEqual(len(self.history), 0) - - # Simulate change B being gated outside this layout Set the - # change merged before submitting the event so that when the - # event triggers a gerrit query to update the change, we get - # the information that it was merged. - B.setMerged() - self.fake_gerrit.addEvent(B.addApproval('Approved', 1)) - self.waitUntilSettled() - self.assertEqual(len(self.history), 0) - - # Now that B is merged, A should be able to be enqueued and - # merged. 
- self.fake_gerrit.addEvent(A.addApproval('Approved', 1)) - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'MERGED') - self.assertEqual(A.reported, 2) - self.assertEqual(B.data['status'], 'MERGED') - self.assertEqual(B.reported, 0) - - def test_crd_check(self): - "Test cross-repo dependencies in independent pipelines" - - self.executor_server.hold_jobs_in_build = True - self.gearman_server.hold_jobs_in_queue = True - A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') - B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B') - - # A Depends-On: B - A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( - A.subject, B.data['id']) - - self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) - self.waitUntilSettled() - - self.gearman_server.hold_jobs_in_queue = False - self.gearman_server.release() - self.waitUntilSettled() - - self.executor_server.release('.*-merge') - self.waitUntilSettled() - - self.assertTrue(self.builds[0].hasChanges(A, B)) - - self.executor_server.hold_jobs_in_build = False - self.executor_server.release() - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'NEW') - self.assertEqual(B.data['status'], 'NEW') - self.assertEqual(A.reported, 1) - self.assertEqual(B.reported, 0) - - self.assertEqual(self.history[0].changes, '2,1 1,1') - tenant = self.sched.abide.tenants.get('tenant-one') - self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) - - def test_crd_check_git_depends(self): - "Test single-repo dependencies in independent pipelines" - self.gearman_server.hold_jobs_in_build = True - A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') - B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B') - - # Add two git-dependent changes and make sure they both report - # success. 
- B.setDependsOn(A, 1) - self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) - self.waitUntilSettled() - self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) - self.waitUntilSettled() - - self.orderedRelease() - self.gearman_server.hold_jobs_in_build = False - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'NEW') - self.assertEqual(B.data['status'], 'NEW') - self.assertEqual(A.reported, 1) - self.assertEqual(B.reported, 1) - - self.assertEqual(self.history[0].changes, '1,1') - self.assertEqual(self.history[-1].changes, '1,1 2,1') - tenant = self.sched.abide.tenants.get('tenant-one') - self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) - - self.assertIn('Build succeeded', A.messages[0]) - self.assertIn('Build succeeded', B.messages[0]) - - def test_crd_check_duplicate(self): - "Test duplicate check in independent pipelines" - self.executor_server.hold_jobs_in_build = True - A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A') - B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B') - tenant = self.sched.abide.tenants.get('tenant-one') - check_pipeline = tenant.layout.pipelines['check'] - - # Add two git-dependent changes... - B.setDependsOn(A, 1) - self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) - self.waitUntilSettled() - self.assertEqual(len(check_pipeline.getAllItems()), 2) - - # ...make sure the live one is not duplicated... - self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1)) - self.waitUntilSettled() - self.assertEqual(len(check_pipeline.getAllItems()), 2) - - # ...but the non-live one is able to be. - self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) - self.waitUntilSettled() - self.assertEqual(len(check_pipeline.getAllItems()), 3) - - # Release jobs in order to avoid races with change A jobs - # finishing before change B jobs. 
- self.orderedRelease() - self.executor_server.hold_jobs_in_build = False - self.executor_server.release() - self.waitUntilSettled() - - self.assertEqual(A.data['status'], 'NEW') - self.assertEqual(B.data['status'], 'NEW') - self.assertEqual(A.reported, 1) - self.assertEqual(B.reported, 1) - - self.assertEqual(self.history[0].changes, '1,1 2,1') - self.assertEqual(self.history[1].changes, '1,1') - self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0) - - self.assertIn('Build succeeded', A.messages[0]) - self.assertIn('Build succeeded', B.messages[0]) - - def _test_crd_check_reconfiguration(self, project1, project2): - "Test cross-repo dependencies re-enqueued in independent pipelines" - - self.gearman_server.hold_jobs_in_queue = True - A = self.fake_gerrit.addFakeChange(project1, 'master', 'A') - B = self.fake_gerrit.addFakeChange(project2, 'master', 'B') - - # A Depends-On: B - A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % ( - A.subject, B.data['id']) - - self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1)) - self.waitUntilSettled() - - self.sched.reconfigure(self.config) - - # Make sure the items still share a change queue, and the - # first one is not live. 
-        tenant = self.sched.abide.tenants.get('tenant-one')
-        self.assertEqual(len(tenant.layout.pipelines['check'].queues), 1)
-        queue = tenant.layout.pipelines['check'].queues[0]
-        first_item = queue.queue[0]
-        for item in queue.queue:
-            self.assertEqual(item.queue, first_item.queue)
-        self.assertFalse(first_item.live)
-        self.assertTrue(queue.queue[1].live)
-
-        self.gearman_server.hold_jobs_in_queue = False
-        self.gearman_server.release()
-        self.waitUntilSettled()
-
-        self.assertEqual(A.data['status'], 'NEW')
-        self.assertEqual(B.data['status'], 'NEW')
-        self.assertEqual(A.reported, 1)
-        self.assertEqual(B.reported, 0)
-
-        self.assertEqual(self.history[0].changes, '2,1 1,1')
-        self.assertEqual(len(tenant.layout.pipelines['check'].queues), 0)
-
-    def test_crd_check_reconfiguration(self):
-        self._test_crd_check_reconfiguration('org/project1', 'org/project2')
-
-    def test_crd_undefined_project(self):
-        """Test that undefined projects in dependencies are handled for
-        independent pipelines"""
-        # It's a hack for fake gerrit,
-        # as it implies repo creation upon the creation of any change
-        self.init_repo("org/unknown", tag='init')
-        self._test_crd_check_reconfiguration('org/project1', 'org/unknown')
-
-    @simple_layout('layouts/ignore-dependencies.yaml')
-    def test_crd_check_ignore_dependencies(self):
-        "Test cross-repo dependencies can be ignored"
-
-        self.gearman_server.hold_jobs_in_queue = True
-        A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
-        B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
-        C = self.fake_gerrit.addFakeChange('org/project2', 'master', 'C')
-
-        # A Depends-On: B
-        A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
-            A.subject, B.data['id'])
-        # C git-depends on B
-        C.setDependsOn(B, 1)
-        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
-        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
-        self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1))
-        self.waitUntilSettled()
-
-        # Make sure none of the items share a change queue, and all
-        # are live.
-        tenant = self.sched.abide.tenants.get('tenant-one')
-        check_pipeline = tenant.layout.pipelines['check']
-        self.assertEqual(len(check_pipeline.queues), 3)
-        self.assertEqual(len(check_pipeline.getAllItems()), 3)
-        for item in check_pipeline.getAllItems():
-            self.assertTrue(item.live)
-
-        self.gearman_server.hold_jobs_in_queue = False
-        self.gearman_server.release()
-        self.waitUntilSettled()
-
-        self.assertEqual(A.data['status'], 'NEW')
-        self.assertEqual(B.data['status'], 'NEW')
-        self.assertEqual(C.data['status'], 'NEW')
-        self.assertEqual(A.reported, 1)
-        self.assertEqual(B.reported, 1)
-        self.assertEqual(C.reported, 1)
-
-        # Each job should have tested exactly one change
-        for job in self.history:
-            self.assertEqual(len(job.changes.split()), 1)
-
-    @simple_layout('layouts/three-projects.yaml')
-    def test_crd_check_transitive(self):
-        "Test transitive cross-repo dependencies"
-        # Specifically, if A -> B -> C, and C gets a new patchset and
-        # A gets a new patchset, ensure the test of A,2 includes B,1
-        # and C,2 (not C,1 which would indicate stale data in the
-        # cache for B).
-        A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
-        B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
-        C = self.fake_gerrit.addFakeChange('org/project3', 'master', 'C')
-
-        # A Depends-On: B
-        A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
-            A.subject, B.data['id'])
-
-        # B Depends-On: C
-        B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
-            B.subject, C.data['id'])
-
-        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
-        self.waitUntilSettled()
-        self.assertEqual(self.history[-1].changes, '3,1 2,1 1,1')
-
-        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
-        self.waitUntilSettled()
-        self.assertEqual(self.history[-1].changes, '3,1 2,1')
-
-        self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(1))
-        self.waitUntilSettled()
-        self.assertEqual(self.history[-1].changes, '3,1')
-
-        C.addPatchset()
-        self.fake_gerrit.addEvent(C.getPatchsetCreatedEvent(2))
-        self.waitUntilSettled()
-        self.assertEqual(self.history[-1].changes, '3,2')
-
-        A.addPatchset()
-        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2))
-        self.waitUntilSettled()
-        self.assertEqual(self.history[-1].changes, '3,2 2,1 1,2')
-
-    def test_crd_check_unknown(self):
-        "Test unknown projects in independent pipeline"
-        self.init_repo("org/unknown", tag='init')
-        A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
-        B = self.fake_gerrit.addFakeChange('org/unknown', 'master', 'D')
-        # A Depends-On: B
-        A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
-            A.subject, B.data['id'])
-
-        # Make sure zuul has seen an event on B.
-        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
-        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
-        self.waitUntilSettled()
-
-        self.assertEqual(A.data['status'], 'NEW')
-        self.assertEqual(A.reported, 1)
-        self.assertEqual(B.data['status'], 'NEW')
-        self.assertEqual(B.reported, 0)
-
-    def test_crd_cycle_join(self):
-        "Test an updated change creates a cycle"
-        A = self.fake_gerrit.addFakeChange('org/project2', 'master', 'A')
-
-        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
-        self.waitUntilSettled()
-        self.assertEqual(A.reported, 1)
-
-        # Create B->A
-        B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
-        B.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
-            B.subject, A.data['id'])
-        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
-        self.waitUntilSettled()
-
-        # Dep is there so zuul should have reported on B
-        self.assertEqual(B.reported, 1)
-
-        # Update A to add A->B (a cycle).
-        A.addPatchset()
-        A.data['commitMessage'] = '%s\n\nDepends-On: %s\n' % (
-            A.subject, B.data['id'])
-        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2))
-        self.waitUntilSettled()
-
-        # Dependency cycle injected so zuul should not have reported again on A
-        self.assertEqual(A.reported, 1)
-
-        # Now if we update B to remove the depends-on, everything
-        # should be okay.  B; A->B
-
-        B.addPatchset()
-        B.data['commitMessage'] = '%s\n' % (B.subject,)
-        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(2))
-        self.waitUntilSettled()
-
-        # Cycle was removed so now zuul should have reported again on A
-        self.assertEqual(A.reported, 2)
-
-        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(2))
-        self.waitUntilSettled()
-        self.assertEqual(B.reported, 2)
-
     @simple_layout('layouts/disable_at.yaml')
     def test_disable_at(self):
         "Test a pipeline will only report to the disabled trigger when failing"
@@ -6070,6 +5465,77 @@ class TestSemaphoreMultiTenant(ZuulTestCase):
         self.assertEqual(B.reported, 1)
 
 
+class TestImplicitProject(ZuulTestCase):
+    tenant_config_file = 'config/implicit-project/main.yaml'
+
+    def test_implicit_project(self):
+        # config project should work with implicit project name
+        A = self.fake_gerrit.addFakeChange('common-config', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+
+        # untrusted project should work with implicit project name
+        B = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 1)
+        self.assertEqual(B.data['status'], 'NEW')
+        self.assertEqual(B.reported, 1)
+        self.assertHistory([
+            dict(name='test-common', result='SUCCESS', changes='1,1'),
+            dict(name='test-common', result='SUCCESS', changes='2,1'),
+            dict(name='test-project', result='SUCCESS', changes='2,1'),
+        ], ordered=False)
+
+        # now test adding a further project in repo
+        in_repo_conf = textwrap.dedent(
+            """
+            - job:
+                name: test-project
+                run: playbooks/test-project.yaml
+            - job:
+                name: test2-project
+                run: playbooks/test-project.yaml
+
+            - project:
+                check:
+                  jobs:
+                    - test-project
+                gate:
+                  jobs:
+                    - test-project
+
+            - project:
+                check:
+                  jobs:
+                    - test2-project
+                gate:
+                  jobs:
+                    - test2-project
+
+            """)
+        file_dict = {'.zuul.yaml': in_repo_conf}
+        C = self.fake_gerrit.addFakeChange('org/project', 'master', 'A',
+                                           files=file_dict)
+        C.addApproval('Code-Review', 2)
+        self.fake_gerrit.addEvent(C.addApproval('Approved', 1))
+        self.waitUntilSettled()
+
+        # change C must be merged
+        self.assertEqual(C.data['status'], 'MERGED')
+        self.assertEqual(C.reported, 2)
+        self.assertHistory([
+            dict(name='test-common', result='SUCCESS', changes='1,1'),
+            dict(name='test-common', result='SUCCESS', changes='2,1'),
+            dict(name='test-project', result='SUCCESS', changes='2,1'),
+            dict(name='test-common', result='SUCCESS', changes='3,1'),
+            dict(name='test-project', result='SUCCESS', changes='3,1'),
+            dict(name='test2-project', result='SUCCESS', changes='3,1'),
+        ], ordered=False)
+
+
 class TestSemaphoreInRepo(ZuulTestCase):
     config_file = 'zuul-connections-gerrit-and-github.conf'
     tenant_config_file = 'config/in-repo/main.yaml'
diff --git a/tests/unit/test_log_streamer.py b/tests/unit/test_streaming.py
index 27368e33a..b999106c8 100644
--- a/tests/unit/test_log_streamer.py
+++ b/tests/unit/test_streaming.py
@@ -28,6 +28,7 @@ import time
 
 import zuul.web
 import zuul.lib.log_streamer
+import zuul.lib.fingergw
 
 import tests.base
 
@@ -40,13 +41,13 @@ class TestLogStreamer(tests.base.BaseTestCase):
     def startStreamer(self, port, root=None):
         if not root:
             root = tempfile.gettempdir()
-        return zuul.lib.log_streamer.LogStreamer(None, self.host, port, root)
+        return zuul.lib.log_streamer.LogStreamer(self.host, port, root)
 
     def test_start_stop(self):
-        port = 7900
-        streamer = self.startStreamer(port)
+        streamer = self.startStreamer(0)
         self.addCleanup(streamer.stop)
+        port = streamer.server.socket.getsockname()[1]
 
         s = socket.create_connection((self.host, port))
         s.close()
@@ -60,7 +61,7 @@ class TestLogStreamer(tests.base.BaseTestCase):
 class TestStreaming(tests.base.AnsibleZuulTestCase):
 
     tenant_config_file = 'config/streamer/main.yaml'
-    log = logging.getLogger("zuul.test.test_log_streamer.TestStreaming")
+    log = logging.getLogger("zuul.test_streaming")
 
     def setUp(self):
         super(TestStreaming, self).setUp()
@@ -76,12 +77,13 @@ class TestStreaming(tests.base.AnsibleZuulTestCase):
     def startStreamer(self, port, build_uuid, root=None):
         if not root:
             root = tempfile.gettempdir()
-        self.streamer = zuul.lib.log_streamer.LogStreamer(None, self.host,
+        self.streamer = zuul.lib.log_streamer.LogStreamer(self.host,
                                                           port, root)
+        port = self.streamer.server.socket.getsockname()[1]
 
         s = socket.create_connection((self.host, port))
         self.addCleanup(s.close)
 
-        req = '%s\n' % build_uuid
+        req = '%s\r\n' % build_uuid
         s.sendall(req.encode('utf-8'))
         self.test_streaming_event.set()
 
@@ -128,10 +130,9 @@ class TestStreaming(tests.base.AnsibleZuulTestCase):
 
         # Create a thread to stream the log. We need this to be happening
        # before we create the flag file to tell the job to complete.
-        port = 7901
         streamer_thread = threading.Thread(
             target=self.startStreamer,
-            args=(port, build.uuid, self.executor_server.jobdir_root,)
+            args=(0, build.uuid, self.executor_server.jobdir_root,)
         )
         streamer_thread.start()
         self.addCleanup(self.stopStreamer)
@@ -181,9 +182,38 @@ class TestStreaming(tests.base.AnsibleZuulTestCase):
         loop.run_until_complete(client(loop, build_uuid, event))
         loop.close()
 
+    def runFingerClient(self, build_uuid, gateway_address, event):
+        # Wait until the gateway is started
+        while True:
+            try:
+                # NOTE(Shrews): This causes the gateway to begin to handle
+                # a request for which it never receives data, and thus
+                # causes the getCommand() method to timeout (seen in the
+                # test results, but is harmless).
+                with socket.create_connection(gateway_address) as s:
+                    break
+            except ConnectionRefusedError:
+                time.sleep(0.1)
+
+        with socket.create_connection(gateway_address) as s:
+            msg = "%s\r\n" % build_uuid
+            s.sendall(msg.encode('utf-8'))
+            event.set()  # notify we are connected and req sent
+            while True:
+                data = s.recv(1024)
+                if not data:
+                    break
+                self.streaming_data += data.decode('utf-8')
+            s.shutdown(socket.SHUT_RDWR)
+
     def test_websocket_streaming(self):
+        # Start the finger streamer daemon
+        streamer = zuul.lib.log_streamer.LogStreamer(
+            self.host, 0, self.executor_server.jobdir_root)
+        self.addCleanup(streamer.stop)
+
         # Need to set the streaming port before submitting the job
-        finger_port = 7902
+        finger_port = streamer.server.socket.getsockname()[1]
         self.executor_server.log_streaming_port = finger_port
 
         A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
@@ -216,11 +246,6 @@ class TestStreaming(tests.base.AnsibleZuulTestCase):
         logfile = open(ansible_log, 'r')
         self.addCleanup(logfile.close)
 
-        # Start the finger streamer daemon
-        streamer = zuul.lib.log_streamer.LogStreamer(
-            None, self.host, finger_port, self.executor_server.jobdir_root)
-        self.addCleanup(streamer.stop)
-
         # Start the web server
         web_server = zuul.web.ZuulWeb(
             listen_address='::', listen_port=9000,
@@ -265,3 +290,83 @@ class TestStreaming(tests.base.AnsibleZuulTestCase):
         self.log.debug("\n\nFile contents: %s\n\n", file_contents)
         self.log.debug("\n\nStreamed: %s\n\n", self.ws_client_results)
         self.assertEqual(file_contents, self.ws_client_results)
+
+    def test_finger_gateway(self):
+        # Start the finger streamer daemon
+        streamer = zuul.lib.log_streamer.LogStreamer(
+            self.host, 0, self.executor_server.jobdir_root)
+        self.addCleanup(streamer.stop)
+        finger_port = streamer.server.socket.getsockname()[1]
+
+        # Need to set the streaming port before submitting the job
+        self.executor_server.log_streaming_port = finger_port
+
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+
+        # We don't have any real synchronization for the ansible jobs, so
+        # just wait until we get our running build.
+        while not len(self.builds):
+            time.sleep(0.1)
+        build = self.builds[0]
+        self.assertEqual(build.name, 'python27')
+
+        build_dir = os.path.join(self.executor_server.jobdir_root, build.uuid)
+        while not os.path.exists(build_dir):
+            time.sleep(0.1)
+
+        # Need to wait to make sure that jobdir gets set
+        while build.jobdir is None:
+            time.sleep(0.1)
+            build = self.builds[0]
+
+        # Wait for the job to begin running and create the ansible log file.
+        # The job waits to complete until the flag file exists, so we can
+        # safely access the log here.  We only open it (to force a file handle
+        # to be kept open for it after the job finishes) but wait to read the
+        # contents until the job is done.
+        ansible_log = os.path.join(build.jobdir.log_root, 'job-output.txt')
+        while not os.path.exists(ansible_log):
+            time.sleep(0.1)
+        logfile = open(ansible_log, 'r')
+        self.addCleanup(logfile.close)
+
+        # Start the finger gateway daemon
+        gateway = zuul.lib.fingergw.FingerGateway(
+            ('127.0.0.1', self.gearman_server.port, None, None, None),
+            (self.host, 0),
+            user=None,
+            command_socket=None,
+            pid_file=None
+        )
+        gateway.start()
+        self.addCleanup(gateway.stop)
+
+        gateway_port = gateway.server.socket.getsockname()[1]
+        gateway_address = (self.host, gateway_port)
+
+        # Start a thread with the finger client
+        finger_client_event = threading.Event()
+        self.finger_client_results = ''
+        finger_client_thread = threading.Thread(
+            target=self.runFingerClient,
+            args=(build.uuid, gateway_address, finger_client_event)
+        )
+        finger_client_thread.start()
+        finger_client_event.wait()
+
+        # Allow the job to complete
+        flag_file = os.path.join(build_dir, 'test_wait')
+        open(flag_file, 'w').close()
+
+        # Wait for the finger client to complete, which it should when
+        # it's received the full log.
+        finger_client_thread.join()
+
+        self.waitUntilSettled()
+
+        file_contents = logfile.read()
+        logfile.close()
+        self.log.debug("\n\nFile contents: %s\n\n", file_contents)
+        self.log.debug("\n\nStreamed: %s\n\n", self.streaming_data)
+        self.assertEqual(file_contents, self.streaming_data)
diff --git a/tests/unit/test_v3.py b/tests/unit/test_v3.py
index c5d19ceb4..163a58b90 100755
--- a/tests/unit/test_v3.py
+++ b/tests/unit/test_v3.py
@@ -647,11 +647,23 @@ class TestInRepoConfig(ZuulTestCase):
                 name: project-test2
                 run: playbooks/project-test2.yaml
 
+            - job:
+                name: project-test3
+                run: playbooks/project-test2.yaml
+
+            # add a job by the short project name
             - project:
                 name: org/project
                 tenant-one-gate:
                   jobs:
                     - project-test2
+
+            # add a job by the canonical project name
+            - project:
+                name: review.example.com/org/project
+                tenant-one-gate:
+                  jobs:
+                    - project-test3
             """)
 
         in_repo_playbook = textwrap.dedent(
@@ -673,7 +685,9 @@ class TestInRepoConfig(ZuulTestCase):
         self.assertIn('tenant-one-gate', A.messages[1],
                       "A should transit tenant-one gate")
         self.assertHistory([
-            dict(name='project-test2', result='SUCCESS', changes='1,1')])
+            dict(name='project-test2', result='SUCCESS', changes='1,1'),
+            dict(name='project-test3', result='SUCCESS', changes='1,1'),
+        ], ordered=False)
 
         self.fake_gerrit.addEvent(A.getChangeMergedEvent())
         self.waitUntilSettled()
@@ -688,7 +702,10 @@ class TestInRepoConfig(ZuulTestCase):
                          'SUCCESS')
         self.assertHistory([
             dict(name='project-test2', result='SUCCESS', changes='1,1'),
-            dict(name='project-test2', result='SUCCESS', changes='2,1')])
+            dict(name='project-test3', result='SUCCESS', changes='1,1'),
+            dict(name='project-test2', result='SUCCESS', changes='2,1'),
+            dict(name='project-test3', result='SUCCESS', changes='2,1'),
+        ], ordered=False)
 
     def test_dynamic_template(self):
         # Tests that a project can't update a template in another
@@ -1039,6 +1056,27 @@ class TestInRepoConfig(ZuulTestCase):
         self.assertIn('not a dictionary', A.messages[0],
                       "A should have a syntax error reported")
 
+    def test_yaml_duplicate_key_error(self):
+        in_repo_conf = textwrap.dedent(
+            """
+            - job:
+                name: foo
+                name: bar
+            """)
+
+        file_dict = {'.zuul.yaml': in_repo_conf}
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A',
+                                           files=file_dict)
+        A.addApproval('Code-Review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('Approved', 1))
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 1,
+                         "A should report failure")
+        self.assertIn('appears more than once', A.messages[0],
+                      "A should have a syntax error reported")
+
     def test_yaml_key_error(self):
         in_repo_conf = textwrap.dedent(
             """
@@ -1635,6 +1673,32 @@ class TestInRepoConfig(ZuulTestCase):
                       C.messages[0],
                       "C should have an error reported")
 
+    def test_pipeline_debug(self):
+        in_repo_conf = textwrap.dedent(
+            """
+            - job:
+                name: project-test1
+                run: playbooks/project-test1.yaml
+            - project:
+                name: org/project
+                check:
+                  debug: True
+                  jobs:
+                    - project-test1
+            """)
+
+        file_dict = {'.zuul.yaml': in_repo_conf}
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A',
+                                           files=file_dict)
+        self.fake_gerrit.addEvent(A.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+
+        self.assertEqual(A.data['status'], 'NEW')
+        self.assertEqual(A.reported, 1,
+                         "A should report success")
+        self.assertIn('Debug information:',
+                      A.messages[0], "A should have debug info")
+
 class TestInRepoJoin(ZuulTestCase):
     # In this config, org/project is not a member of any pipelines, so
@@ -2315,6 +2379,115 @@ class TestPragma(ZuulTestCase):
         self.assertIsNone(job.branch_matcher)
 
 
+class TestPragmaMultibranch(ZuulTestCase):
+    tenant_config_file = 'config/pragma-multibranch/main.yaml'
+
+    def test_no_branch_matchers(self):
+        self.create_branch('org/project1', 'stable/pike')
+        self.create_branch('org/project2', 'stable/jewel')
+        self.fake_gerrit.addEvent(
+            self.fake_gerrit.getFakeBranchCreatedEvent(
+                'org/project1', 'stable/pike'))
+        self.fake_gerrit.addEvent(
+            self.fake_gerrit.getFakeBranchCreatedEvent(
+                'org/project2', 'stable/jewel'))
+        self.waitUntilSettled()
+        # We want the jobs defined on the stable/pike branch of
+        # project1 to apply to the stable/jewel branch of project2.
+
+        # First, without the pragma line, the jobs should not run
+        # because in project1 they have branch matchers for pike, so
+        # they will not match a jewel change.
+        B = self.fake_gerrit.addFakeChange('org/project2', 'stable/jewel', 'B')
+        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertHistory([])
+
+        # Add a pragma line to disable implied branch matchers in
+        # project1, so that the jobs and templates apply to both
+        # branches.
+        with open(os.path.join(FIXTURE_DIR,
+                               'config/pragma-multibranch/git/',
+                               'org_project1/zuul.yaml')) as f:
+            config = f.read()
+        extra_conf = textwrap.dedent(
+            """
+            - pragma:
+                implied-branch-matchers: False
+            """)
+        config = extra_conf + config
+        file_dict = {'zuul.yaml': config}
+        A = self.fake_gerrit.addFakeChange('org/project1', 'stable/pike', 'A',
+                                           files=file_dict)
+        A.addApproval('Code-Review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('Approved', 1))
+        self.waitUntilSettled()
+        self.fake_gerrit.addEvent(A.getChangeMergedEvent())
+        self.waitUntilSettled()
+
+        # Now verify that when we propose a change to jewel, we get
+        # the pike/jewel jobs.
+        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertHistory([
+            dict(name='test-job1', result='SUCCESS', changes='1,1'),
+            dict(name='test-job2', result='SUCCESS', changes='1,1'),
+        ], ordered=False)
+
+    def test_supplied_branch_matchers(self):
+        self.create_branch('org/project1', 'stable/pike')
+        self.create_branch('org/project2', 'stable/jewel')
+        self.fake_gerrit.addEvent(
+            self.fake_gerrit.getFakeBranchCreatedEvent(
+                'org/project1', 'stable/pike'))
+        self.fake_gerrit.addEvent(
+            self.fake_gerrit.getFakeBranchCreatedEvent(
+                'org/project2', 'stable/jewel'))
+        self.waitUntilSettled()
+        # We want the jobs defined on the stable/pike branch of
+        # project1 to apply to the stable/jewel branch of project2.
+
+        # First, without the pragma line, the jobs should not run
+        # because in project1 they have branch matchers for pike, so
+        # they will not match a jewel change.
+        B = self.fake_gerrit.addFakeChange('org/project2', 'stable/jewel', 'B')
+        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertHistory([])
+
+        # Add a pragma line to disable implied branch matchers in
+        # project1, so that the jobs and templates apply to both
+        # branches.
+        with open(os.path.join(FIXTURE_DIR,
+                               'config/pragma-multibranch/git/',
+                               'org_project1/zuul.yaml')) as f:
+            config = f.read()
+        extra_conf = textwrap.dedent(
+            """
+            - pragma:
+                implied-branches:
+                  - stable/pike
+                  - stable/jewel
+            """)
+        config = extra_conf + config
+        file_dict = {'zuul.yaml': config}
+        A = self.fake_gerrit.addFakeChange('org/project1', 'stable/pike', 'A',
+                                           files=file_dict)
+        A.addApproval('Code-Review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('Approved', 1))
+        self.waitUntilSettled()
+        self.fake_gerrit.addEvent(A.getChangeMergedEvent())
+        self.waitUntilSettled()
+        # Now verify that when we propose a change to jewel, we get
+        # the pike/jewel jobs.
+        self.fake_gerrit.addEvent(B.getPatchsetCreatedEvent(1))
+        self.waitUntilSettled()
+        self.assertHistory([
+            dict(name='test-job1', result='SUCCESS', changes='1,1'),
+            dict(name='test-job2', result='SUCCESS', changes='1,1'),
+        ], ordered=False)
+
+
 class TestBaseJobs(ZuulTestCase):
     tenant_config_file = 'config/base-jobs/main.yaml'
diff --git a/tests/unit/test_web.py b/tests/unit/test_web.py
new file mode 100644
index 000000000..6881a83ea
--- /dev/null
+++ b/tests/unit/test_web.py
@@ -0,0 +1,145 @@
+#!/usr/bin/env python
+
+# Copyright 2014 Hewlett-Packard Development Company, L.P.
+# Copyright 2014 Rackspace Australia
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import asyncio
+import threading
+import os
+import json
+import urllib
+import time
+import socket
+from unittest import skip
+
+import webob
+
+import zuul.web
+
+from tests.base import ZuulTestCase, FIXTURE_DIR
+
+
+class TestWeb(ZuulTestCase):
+    tenant_config_file = 'config/single-tenant/main.yaml'
+
+    def setUp(self):
+        super(TestWeb, self).setUp()
+        self.executor_server.hold_jobs_in_build = True
+        A = self.fake_gerrit.addFakeChange('org/project', 'master', 'A')
+        A.addApproval('Code-Review', 2)
+        self.fake_gerrit.addEvent(A.addApproval('Approved', 1))
+        B = self.fake_gerrit.addFakeChange('org/project1', 'master', 'B')
+        B.addApproval('Code-Review', 2)
+        self.fake_gerrit.addEvent(B.addApproval('Approved', 1))
+        self.waitUntilSettled()
+
+        # Start the web server
+        self.web = zuul.web.ZuulWeb(
+            listen_address='127.0.0.1', listen_port=0,
+            gear_server='127.0.0.1', gear_port=self.gearman_server.port)
+        loop = asyncio.new_event_loop()
+        loop.set_debug(True)
+        ws_thread = threading.Thread(target=self.web.run, args=(loop,))
+        ws_thread.start()
+        self.addCleanup(loop.close)
+        self.addCleanup(ws_thread.join)
+        self.addCleanup(self.web.stop)
+
+        self.host = 'localhost'
+        # Wait until web server is started
+        while True:
+            time.sleep(0.1)
+            if self.web.server is None:
+                continue
+            self.port = self.web.server.sockets[0].getsockname()[1]
+            print(self.host, self.port)
+            try:
+                with socket.create_connection((self.host, self.port)):
+                    break
+            except ConnectionRefusedError:
+                pass
+
+    def tearDown(self):
+        self.executor_server.hold_jobs_in_build = False
+        self.executor_server.release()
+        self.waitUntilSettled()
+        super(TestWeb, self).tearDown()
+
+    def test_web_status(self):
+        "Test that we can filter to only certain changes in the webapp."
+
+        req = urllib.request.Request(
+            "http://localhost:%s/tenant-one/status.json" % self.port)
+        f = urllib.request.urlopen(req)
+        data = json.loads(f.read().decode('utf8'))
+
+        self.assertIn('pipelines', data)
+
+    def test_web_bad_url(self):
+        # do we 404 correctly
+        req = urllib.request.Request(
+            "http://localhost:%s/status/foo" % self.port)
+        self.assertRaises(urllib.error.HTTPError, urllib.request.urlopen, req)
+
+    @skip("This is not supported by zuul-web")
+    def test_web_find_change(self):
+        # can we filter by change id
+        req = urllib.request.Request(
+            "http://localhost:%s/tenant-one/status/change/1,1" % self.port)
+        f = urllib.request.urlopen(req)
+        data = json.loads(f.read().decode('utf8'))
+
+        self.assertEqual(1, len(data), data)
+        self.assertEqual("org/project", data[0]['project'])
+
+        req = urllib.request.Request(
+            "http://localhost:%s/tenant-one/status/change/2,1" % self.port)
+        f = urllib.request.urlopen(req)
+        data = json.loads(f.read().decode('utf8'))
+
+        self.assertEqual(1, len(data), data)
+        self.assertEqual("org/project1", data[0]['project'], data)
+
+    def test_web_keys(self):
+        with open(os.path.join(FIXTURE_DIR, 'public.pem'), 'rb') as f:
+            public_pem = f.read()
+
+        req = urllib.request.Request(
+            "http://localhost:%s/tenant-one/org/project.pub" %
+            self.port)
+        f = urllib.request.urlopen(req)
+        self.assertEqual(f.read(), public_pem)
+
+    @skip("This may not apply to zuul-web")
+    def test_web_custom_handler(self):
+        def custom_handler(path, tenant_name, request):
+            return webob.Response(body='ok')
+
+        self.webapp.register_path('/custom', custom_handler)
+        req = urllib.request.Request(
+            "http://localhost:%s/custom" % self.port)
+        f = urllib.request.urlopen(req)
+        self.assertEqual(b'ok', f.read())
+
+        self.webapp.unregister_path('/custom')
+        self.assertRaises(urllib.error.HTTPError, urllib.request.urlopen, req)
+
+    @skip("This returns a 500")
+    def test_web_404_on_unknown_tenant(self):
+        req = urllib.request.Request(
+            "http://localhost:{}/non-tenant/status.json".format(self.port))
+        e = self.assertRaises(
+            urllib.error.HTTPError, urllib.request.urlopen, req)
+        self.assertEqual(404, e.code)
diff --git a/tests/unit/test_zuultrigger.py b/tests/unit/test_zuultrigger.py
index 3954a215d..55758537e 100644
--- a/tests/unit/test_zuultrigger.py
+++ b/tests/unit/test_zuultrigger.py
@@ -126,5 +126,5 @@ class TestZuulTriggerProjectChangeMerged(ZuulTestCase):
             "dependencies was unable to be automatically merged with the "
             "current state of its repository. Please rebase the change and "
             "upload a new patchset.")
-        self.assertEqual(self.fake_gerrit.queries[1],
-                         "project:org/project status:open")
+        self.assertIn("project:org/project status:open",
+                      self.fake_gerrit.queries)
diff --git a/tools/encrypt_secret.py b/tools/encrypt_secret.py
index 9b528467d..c0ee9be64 100755
--- a/tools/encrypt_secret.py
+++ b/tools/encrypt_secret.py
@@ -43,10 +43,7 @@ def main():
     parser.add_argument('url',
                         help="The base URL of the zuul server and tenant. "
                         "E.g., https://zuul.example.com/tenant-name")
-    # TODO(jeblair,mordred): When projects have canonical names, use that here.
     # TODO(jeblair): Throw a fit if SSL is not used.
-    parser.add_argument('source',
-                        help="The Zuul source of the project.")
     parser.add_argument('project',
                         help="The name of the project.")
     parser.add_argument('--infile',
@@ -61,8 +58,7 @@
                         "to standard output.")
     args = parser.parse_args()
 
-    req = Request("%s/keys/%s/%s.pub" % (
-        args.url, args.source, args.project))
+    req = Request("%s/%s.pub" % (args.url.rstrip('/'), args.project))
     pubkey = urlopen(req)
 
     if args.infile:
diff --git a/tools/github-debugging.py b/tools/github-debugging.py
new file mode 100644
index 000000000..171627ab9
--- /dev/null
+++ b/tools/github-debugging.py
@@ -0,0 +1,55 @@
+import github3
+import logging
+import time
+
+# This is a template with boilerplate code for debugging github issues
+
+# TODO: for real use override the following variables
+url = 'https://example.com'
+api_token = 'xxxx'
+org = 'org'
+project = 'project'
+pull_nr = 3
+
+
+# Send the logs to stderr as well
+stream_handler = logging.StreamHandler()
+
+
+logger_urllib3 = logging.getLogger('requests.packages.logger_urllib3')
+# logger_urllib3.addHandler(stream_handler)
+logger_urllib3.setLevel(logging.DEBUG)
+
+logger = logging.getLogger('github3')
+# logger.addHandler(stream_handler)
+logger.setLevel(logging.DEBUG)
+
+
+github = github3.GitHubEnterprise(url)
+
+
+# This is the currently broken cache adapter, enable or replace it to debug
+# caching
+
+# import cachecontrol
+# from cachecontrol.cache import DictCache
+# cache_adapter = cachecontrol.CacheControlAdapter(
+#     DictCache(),
+#     cache_etags=True)
+#
+# github.session.mount('http://', cache_adapter)
+# github.session.mount('https://', cache_adapter)
+
+
+github.login(token=api_token)
+
+i = 0
+while True:
+    pr = github.pull_request(org, project, pull_nr)
+    prdict = pr.as_dict()
+    issue = pr.issue()
+    labels = list(issue.labels())
+    print(labels)
+    i += 1
+    print(i)
+    time.sleep(1)
@@ -41,9 +41,6 @@ commands = python setup.py build_sphinx
 
 [testenv:venv]
 commands = {posargs}
 
-[testenv:validate-layout]
-commands = zuul-server -c etc/zuul.conf-sample -t -l {posargs}
-
 [testenv:nodepool]
 setenv = OS_TEST_PATH = ./tests/nodepool
diff --git a/zuul/cmd/__init__.py b/zuul/cmd/__init__.py
index 236fd9f44..07d4a8d08 100755
--- a/zuul/cmd/__init__.py
+++ b/zuul/cmd/__init__.py
@@ -181,8 +181,9 @@ class ZuulDaemonApp(ZuulApp):
         else:
             # Exercise the pidfile before we do anything else (including
             # logging or daemonizing)
-            with daemon.DaemonContext(pidfile=pid):
+            with pid:
                 pass
+            with daemon.DaemonContext(pidfile=pid):
                 self.run()
diff --git a/zuul/cmd/executor.py b/zuul/cmd/executor.py
index ade9715c2..ad7aaa837 100755
--- a/zuul/cmd/executor.py
+++ b/zuul/cmd/executor.py
@@ -14,10 +14,8 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-import grp
 import logging
 import os
-import pwd
 import sys
 import signal
 import tempfile
@@ -64,7 +62,7 @@ class Executor(zuul.cmd.ZuulDaemonApp):
 
         self.log.info("Starting log streamer")
         streamer = zuul.lib.log_streamer.LogStreamer(
-            self.user, '::', self.finger_port, self.job_dir)
+            '::', self.finger_port, self.job_dir)
 
         # Keep running until the parent dies:
         pipe_read = os.fdopen(pipe_read)
@@ -76,22 +74,6 @@ class Executor(zuul.cmd.ZuulDaemonApp):
         os.close(pipe_read)
         self.log_streamer_pid = child_pid
 
-    def change_privs(self):
-        '''
-        Drop our privileges to the zuul user.
-        '''
-        if os.getuid() != 0:
-            return
-        pw = pwd.getpwnam(self.user)
-        # get a list of supplementary groups for the target user, and make sure
-        # we set them when dropping privileges.
- groups = [g.gr_gid for g in grp.getgrall() if self.user in g.gr_mem] - os.setgroups(groups) - os.setgid(pw.pw_gid) - os.setuid(pw.pw_uid) - os.chdir(pw.pw_dir) - os.umask(0o022) - def run(self): if self.args.command in zuul.executor.server.COMMANDS: self.send_command(self.args.command) @@ -99,8 +81,6 @@ class Executor(zuul.cmd.ZuulDaemonApp): self.configure_connections(source_only=True) - self.user = get_default(self.config, 'executor', 'user', 'zuul') - if self.config.has_option('executor', 'job_dir'): self.job_dir = os.path.expanduser( self.config.get('executor', 'job_dir')) @@ -120,7 +100,6 @@ class Executor(zuul.cmd.ZuulDaemonApp): ) self.start_log_streamer() - self.change_privs() ExecutorServer = zuul.executor.server.ExecutorServer self.executor = ExecutorServer(self.config, self.connections, diff --git a/zuul/cmd/fingergw.py b/zuul/cmd/fingergw.py new file mode 100644 index 000000000..920eed8f2 --- /dev/null +++ b/zuul/cmd/fingergw.py @@ -0,0 +1,109 @@ +#!/usr/bin/env python +# Copyright 2017 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import logging +import signal +import sys + +import zuul.cmd +import zuul.lib.fingergw + +from zuul.lib.config import get_default + + +class FingerGatewayApp(zuul.cmd.ZuulDaemonApp): + ''' + Class for the daemon that will distribute any finger requests to the + appropriate Zuul executor handling the specified build UUID. + ''' + app_name = 'fingergw' + app_description = 'The Zuul finger gateway.' 
+ + def __init__(self): + super(FingerGatewayApp, self).__init__() + self.gateway = None + + def createParser(self): + parser = super(FingerGatewayApp, self).createParser() + parser.add_argument('command', + choices=zuul.lib.fingergw.COMMANDS, + nargs='?') + return parser + + def parseArguments(self, args=None): + super(FingerGatewayApp, self).parseArguments() + if self.args.command: + self.args.nodaemon = True + + def run(self): + ''' + Main entry point for the FingerGatewayApp. + + Called by the main() method of the parent class. + ''' + if self.args.command in zuul.lib.fingergw.COMMANDS: + self.send_command(self.args.command) + sys.exit(0) + + self.setup_logging('fingergw', 'log_config') + self.log = logging.getLogger('zuul.fingergw') + + # Get values from configuration file + host = get_default(self.config, 'fingergw', 'listen_address', '::') + port = int(get_default(self.config, 'fingergw', 'port', 79)) + user = get_default(self.config, 'fingergw', 'user', 'zuul') + cmdsock = get_default( + self.config, 'fingergw', 'command_socket', + '/var/lib/zuul/%s.socket' % self.app_name) + gear_server = get_default(self.config, 'gearman', 'server') + gear_port = get_default(self.config, 'gearman', 'port', 4730) + ssl_key = get_default(self.config, 'gearman', 'ssl_key') + ssl_cert = get_default(self.config, 'gearman', 'ssl_cert') + ssl_ca = get_default(self.config, 'gearman', 'ssl_ca') + + self.gateway = zuul.lib.fingergw.FingerGateway( + (gear_server, gear_port, ssl_key, ssl_cert, ssl_ca), + (host, port), + user, + cmdsock, + self.getPidFile(), + ) + + self.log.info('Starting Zuul finger gateway app') + self.gateway.start() + + if self.args.nodaemon: + # NOTE(Shrews): When running in non-daemon mode, although sending + # the 'stop' command via the command socket will shutdown the + # gateway, it's still necessary to Ctrl+C to stop the app. 
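The finger gateway above reads all of its settings through `get_default`. A minimal sketch of that helper's behavior (the real `zuul.lib.config.get_default` also supports user-path expansion; this simplified version is an assumption for illustration):

```python
import configparser

def get_default(config, section, option, default=None):
    # Minimal sketch of zuul.lib.config.get_default: return the
    # configured value if present, otherwise the supplied default.
    if config.has_option(section, option):
        return config.get(section, option)
    return default

config = configparser.ConfigParser()
config.read_string("[fingergw]\nport = 8079\n")
print(get_default(config, 'fingergw', 'port', 79))              # configured
print(get_default(config, 'fingergw', 'listen_address', '::'))  # default
```

Note that `configparser` returns strings, so callers like the gateway cast with `int(...)` as in the diff.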
+ while True: + try: + signal.pause() + except KeyboardInterrupt: + print("Ctrl + C: asking gateway to exit nicely...\n") + self.stop() + break + else: + self.gateway.wait() + + self.log.info('Stopped Zuul finger gateway app') + + def stop(self): + if self.gateway: + self.gateway.stop() + + +def main(): + FingerGatewayApp().main() diff --git a/zuul/cmd/web.py b/zuul/cmd/web.py index ad3062ff0..4687de653 100755 --- a/zuul/cmd/web.py +++ b/zuul/cmd/web.py @@ -106,6 +106,8 @@ class WebServer(zuul.cmd.ZuulDaemonApp): self.configure_connections() + signal.signal(signal.SIGUSR2, zuul.cmd.stack_dump_handler) + try: self._run() except Exception: diff --git a/zuul/configloader.py b/zuul/configloader.py index 669cd8bbd..d62237043 100644 --- a/zuul/configloader.py +++ b/zuul/configloader.py @@ -152,6 +152,40 @@ class ProjectNotPermittedError(Exception): super(ProjectNotPermittedError, self).__init__(message) +class YAMLDuplicateKeyError(ConfigurationSyntaxError): + def __init__(self, key, node, context, start_mark): + intro = textwrap.fill(textwrap.dedent("""\ + Zuul encountered a syntax error while parsing its configuration in the + repo {repo} on branch {branch}. The error was:""".format( + repo=context.project.name, + branch=context.branch, + ))) + + e = textwrap.fill(textwrap.dedent("""\ + The key "{key}" appears more than once; duplicate keys are not + permitted. 
+ """.format( + key=key, + ))) + + m = textwrap.dedent("""\ + {intro} + + {error} + + The error appears in the following stanza: + + {content} + + {start_mark}""") + + m = m.format(intro=intro, + error=indent(str(e)), + content=indent(start_mark.snippet.rstrip()), + start_mark=str(start_mark)) + super(YAMLDuplicateKeyError, self).__init__(m) + + def indent(s): return '\n'.join([' ' + x for x in s.split('\n')]) @@ -249,6 +283,14 @@ class ZuulSafeLoader(yaml.SafeLoader): self.zuul_stream = stream def construct_mapping(self, node, deep=False): + keys = set() + for k, v in node.value: + if k.value in keys: + mark = ZuulMark(node.start_mark, node.end_mark, + self.zuul_stream) + raise YAMLDuplicateKeyError(k.value, node, self.zuul_context, + mark) + keys.add(k.value) r = super(ZuulSafeLoader, self).construct_mapping(node, deep) keys = frozenset(r.keys()) if len(keys) == 1 and keys.intersection(self.zuul_node_types): @@ -316,6 +358,7 @@ class EncryptedPKCS1_OAEP(yaml.YAMLObject): class PragmaParser(object): pragma = { 'implied-branch-matchers': bool, + 'implied-branches': to_list(str), '_source_context': model.SourceContext, '_start_mark': ZuulMark, } @@ -330,11 +373,14 @@ class PragmaParser(object): self.schema(conf) bm = conf.get('implied-branch-matchers') - if bm is None: - return source_context = conf['_source_context'] - source_context.implied_branch_matchers = bm + if bm is not None: + source_context.implied_branch_matchers = bm + + branches = conf.get('implied-branches') + if branches is not None: + source_context.implied_branches = as_list(branches) class NodeSetParser(object): @@ -488,6 +534,8 @@ class JobParser(object): # If the user has set a pragma directive for this, use the # value (if unset, the value is None). 
if job.source_context.implied_branch_matchers is True: + if job.source_context.implied_branches is not None: + return job.source_context.implied_branches return [job.source_context.branch] elif job.source_context.implied_branch_matchers is False: return None @@ -503,6 +551,8 @@ class JobParser(object): if len(branches) == 1: return None + if job.source_context.implied_branches is not None: + return job.source_context.implied_branches return [job.source_context.branch] @staticmethod @@ -741,7 +791,11 @@ class ProjectTemplateParser(object): job = {str: vs.Any(str, JobParser.job_attributes)} job_list = [vs.Any(str, job)] - pipeline_contents = {'queue': str, 'jobs': job_list} + pipeline_contents = { + 'queue': str, + 'debug': bool, + 'jobs': job_list, + } for p in self.layout.pipelines.values(): project_template[p.name] = pipeline_contents @@ -761,6 +815,7 @@ class ProjectTemplateParser(object): project_pipeline = model.ProjectPipelineConfig() project_template.pipelines[pipeline.name] = project_pipeline project_pipeline.queue_name = conf_pipeline.get('queue') + project_pipeline.debug = conf_pipeline.get('debug') self.parseJobList( conf_pipeline.get('jobs', []), source_context, start_mark, project_pipeline.job_list) @@ -799,7 +854,7 @@ class ProjectParser(object): def getSchema(self): project = { - vs.Required('name'): str, + 'name': str, 'description': str, 'templates': [str], 'merge-mode': vs.Any('merge', 'merge-resolve', @@ -811,7 +866,11 @@ class ProjectParser(object): job = {str: vs.Any(str, JobParser.job_attributes)} job_list = [vs.Any(str, job)] - pipeline_contents = {'queue': str, 'jobs': job_list} + pipeline_contents = { + 'queue': str, + 'debug': bool, + 'jobs': job_list + } for p in self.layout.pipelines.values(): project[p.name] = pipeline_contents @@ -872,6 +931,7 @@ class ProjectParser(object): for pipeline in self.layout.pipelines.values(): project_pipeline = model.ProjectPipelineConfig() queue_name = None + debug = False # For every template, iterate 
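The JobParser hunks above let the new `implied-branches` pragma override the branch that would otherwise be implied. A simplified sketch of that resolution order (the function name and flattened parameters are illustrative; the real code also consults trusted-project and other conditions):

```python
def implied_branches_for(pragma_matchers, pragma_branches,
                         job_branch, repo_branches):
    # Pragma-forced matchers: use explicit implied-branches if given,
    # else fall back to the branch the job was defined on.
    if pragma_matchers is True:
        return pragma_branches or [job_branch]
    # Pragma-disabled matchers: no implied branch matcher at all.
    if pragma_matchers is False:
        return None
    # Repos with a single branch get no implied matcher by default.
    if len(repo_branches) == 1:
        return None
    return pragma_branches or [job_branch]
```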
over the job tree and replace or # create the jobs in the final definition as needed. pipeline_defined = False @@ -884,8 +944,12 @@ class ProjectParser(object): implied_branch) if template_pipeline.queue_name: queue_name = template_pipeline.queue_name + if template_pipeline.debug is not None: + debug = template_pipeline.debug if queue_name: project_pipeline.queue_name = queue_name + if debug: + project_pipeline.debug = True if pipeline_defined: project_config.pipelines[pipeline.name] = project_pipeline return project_config @@ -1166,8 +1230,8 @@ class TenantParser(object): tenant.config_projects, tenant.untrusted_projects, cached, tenant) - unparsed_config.extend(tenant.config_projects_config, tenant=tenant) - unparsed_config.extend(tenant.untrusted_projects_config, tenant=tenant) + unparsed_config.extend(tenant.config_projects_config, tenant) + unparsed_config.extend(tenant.untrusted_projects_config, tenant) tenant.layout = TenantParser._parseLayout(base, tenant, unparsed_config, scheduler, @@ -1422,10 +1486,10 @@ class TenantParser(object): (job.project,)) if job.config_project: config_projects_config.extend( - job.project.unparsed_config) + job.project.unparsed_config, tenant) else: untrusted_projects_config.extend( - job.project.unparsed_config) + job.project.unparsed_config, tenant) continue TenantParser.log.debug("Waiting for cat job %s" % (job,)) job.wait() @@ -1456,17 +1520,18 @@ class TenantParser(object): branch = source_context.branch if source_context.trusted: incdata = TenantParser._parseConfigProjectLayout( - job.files[fn], source_context) - config_projects_config.extend(incdata) + job.files[fn], source_context, tenant) + config_projects_config.extend(incdata, tenant) else: incdata = TenantParser._parseUntrustedProjectLayout( - job.files[fn], source_context) - untrusted_projects_config.extend(incdata) - new_project_unparsed_config[project].extend(incdata) + job.files[fn], source_context, tenant) + untrusted_projects_config.extend(incdata, tenant) + 
new_project_unparsed_config[project].extend( + incdata, tenant) if branch in new_project_unparsed_branch_config.get( project, {}): new_project_unparsed_branch_config[project][branch].\ - extend(incdata) + extend(incdata, tenant) # Now that we've successfully loaded all of the configuration, # cache the unparsed data on the project objects. for project, data in new_project_unparsed_config.items(): @@ -1478,18 +1543,18 @@ class TenantParser(object): return config_projects_config, untrusted_projects_config @staticmethod - def _parseConfigProjectLayout(data, source_context): + def _parseConfigProjectLayout(data, source_context, tenant): # This is the top-level configuration for a tenant. config = model.UnparsedTenantConfig() with early_configuration_exceptions(source_context): - config.extend(safe_load_yaml(data, source_context)) + config.extend(safe_load_yaml(data, source_context), tenant) return config @staticmethod - def _parseUntrustedProjectLayout(data, source_context): + def _parseUntrustedProjectLayout(data, source_context, tenant): config = model.UnparsedTenantConfig() with early_configuration_exceptions(source_context): - config.extend(safe_load_yaml(data, source_context)) + config.extend(safe_load_yaml(data, source_context), tenant) if config.pipelines: with configuration_exceptions('pipeline', config.pipelines[0]): raise PipelineNotPermittedError() @@ -1691,7 +1756,7 @@ class ConfigLoader(object): else: incdata = project.unparsed_branch_config.get(branch) if incdata: - config.extend(incdata) + config.extend(incdata, tenant) continue # Otherwise, do not use the cached config (even if the # files are empty as that likely means they were deleted).
@@ -1720,12 +1785,12 @@ class ConfigLoader(object): if trusted: incdata = TenantParser._parseConfigProjectLayout( - data, source_context) + data, source_context, tenant) else: incdata = TenantParser._parseUntrustedProjectLayout( - data, source_context) + data, source_context, tenant) - config.extend(incdata) + config.extend(incdata, tenant) def createDynamicLayout(self, tenant, files, include_config_projects=False, diff --git a/zuul/driver/gerrit/gerritconnection.py b/zuul/driver/gerrit/gerritconnection.py index f4b090d40..d3b3c008b 100644 --- a/zuul/driver/gerrit/gerritconnection.py +++ b/zuul/driver/gerrit/gerritconnection.py @@ -442,8 +442,19 @@ class GerritConnection(BaseConnection): # In case this change is already in the history we have a # cyclic dependency and don't need to update ourselves again # as this gets done in a previous frame of the call stack. - # NOTE(jeblair): I don't think it's possible to hit this case - # anymore as all paths hit the change cache first. + # NOTE(jeblair): The only case where this can still be hit is + # when we get an event for a change with no associated + # patchset; for instance, when the gerrit topic is changed. + # In that case, we will update change 1234,None, which will be + # inserted into the cache as its own entry, but then we will + # resolve the patchset before adding it to the history list, + # then if there are dependencies, we can walk down and then + # back up to the version of this change with a patchset which + # will match the history list but will have bypassed the + # change cache because the previous object had a patchset of + # None. All paths hit the change cache first. To be able to + # drop history, we need to resolve the patchset on events with + # no patchsets before adding the entry to the change cache. 
if (history and change.number and change.patchset and (change.number, change.patchset) in history): self.log.debug("Change %s is in history" % (change,)) @@ -461,6 +472,11 @@ class GerritConnection(BaseConnection): change.project = self.source.getProject(data['project']) change.branch = data['branch'] change.url = data['url'] + change.uris = [ + '%s/%s' % (self.server, change.number), + '%s/#/c/%s' % (self.server, change.number), + ] + max_ps = 0 files = [] for ps in data['patchSets']: @@ -481,6 +497,7 @@ class GerritConnection(BaseConnection): change.open = data['open'] change.status = data['status'] change.owner = data['owner'] + change.message = data['commitMessage'] if change.is_merged: # This change is merged, so we don't need to look any further @@ -494,7 +511,8 @@ class GerritConnection(BaseConnection): history = history[:] history.append((change.number, change.patchset)) - needs_changes = [] + needs_changes = set() + git_needs_changes = [] if 'dependsOn' in data: parts = data['dependsOn'][0]['ref'].split('/') dep_num, dep_ps = parts[3], parts[4] @@ -505,8 +523,11 @@ class GerritConnection(BaseConnection): # already merged. So even if it is "ABANDONED", we should not # ignore it. 
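The gerritconnection.py hunks above replace the single `needs_changes` list with a membership set plus separate ordered lists for git-level and commit-message (compat) dependencies. A sketch of that dedup-while-preserving-order pattern (the function name and plain-value deps are illustrative):

```python
def split_deps(git_deps, commit_deps):
    """A set tracks membership across both sources; separate lists
    preserve each dependency's origin and discovery order."""
    seen = set()
    git_needs, compat_needs = [], []
    for dep in git_deps:
        if dep not in seen:
            seen.add(dep)
            git_needs.append(dep)
    for dep in commit_deps:
        if dep not in seen:
            seen.add(dep)
            compat_needs.append(dep)
    return git_needs, compat_needs

# A change named in both places is only recorded once, under git deps:
print(split_deps(['1234,1', '5678,2'], ['5678,2', '9012,3']))
```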
if (not dep.is_merged) and dep not in needs_changes: - needs_changes.append(dep) + git_needs_changes.append(dep) + needs_changes.add(dep) + change.git_needs_changes = git_needs_changes + compat_needs_changes = [] for record in self._getDependsOnFromCommit(data['commitMessage'], change): dep_num = record['number'] @@ -516,10 +537,12 @@ class GerritConnection(BaseConnection): (change, dep_num, dep_ps)) dep = self._getChange(dep_num, dep_ps, history=history) if dep.open and dep not in needs_changes: - needs_changes.append(dep) - change.needs_changes = needs_changes + compat_needs_changes.append(dep) + needs_changes.add(dep) + change.compat_needs_changes = compat_needs_changes - needed_by_changes = [] + needed_by_changes = set() + git_needed_by_changes = [] if 'neededBy' in data: for needed in data['neededBy']: parts = needed['ref'].split('/') @@ -527,9 +550,13 @@ class GerritConnection(BaseConnection): self.log.debug("Updating %s: Getting git-needed change %s,%s" % (change, dep_num, dep_ps)) dep = self._getChange(dep_num, dep_ps, history=history) - if dep.open and dep.is_current_patchset: - needed_by_changes.append(dep) + if (dep.open and dep.is_current_patchset and + dep not in needed_by_changes): + git_needed_by_changes.append(dep) + needed_by_changes.add(dep) + change.git_needed_by_changes = git_needed_by_changes + compat_needed_by_changes = [] for record in self._getNeededByFromCommit(data['id'], change): dep_num = record['number'] dep_ps = record['currentPatchSet']['number'] @@ -543,9 +570,13 @@ class GerritConnection(BaseConnection): refresh = (dep_num, dep_ps) not in history dep = self._getChange( dep_num, dep_ps, refresh=refresh, history=history) - if dep.open and dep.is_current_patchset: - needed_by_changes.append(dep) - change.needed_by_changes = needed_by_changes + if (dep.open and dep.is_current_patchset + and dep not in needed_by_changes): + compat_needed_by_changes.append(dep) + needed_by_changes.add(dep) + change.compat_needed_by_changes = 
compat_needed_by_changes + + self.sched.onChangeUpdated(change) return change diff --git a/zuul/driver/gerrit/gerritsource.py b/zuul/driver/gerrit/gerritsource.py index 7141080ac..9e327b93a 100644 --- a/zuul/driver/gerrit/gerritsource.py +++ b/zuul/driver/gerrit/gerritsource.py @@ -12,12 +12,15 @@ # License for the specific language governing permissions and limitations # under the License. +import re +import urllib import logging import voluptuous as vs from zuul.source import BaseSource from zuul.model import Project from zuul.driver.gerrit.gerritmodel import GerritRefFilter from zuul.driver.util import scalar_or_list, to_list +from zuul.lib.dependson import find_dependency_headers class GerritSource(BaseSource): @@ -44,6 +47,61 @@ class GerritSource(BaseSource): def getChange(self, event, refresh=False): return self.connection.getChange(event, refresh) + change_re = re.compile(r"/(\#\/c\/)?(\d+)[\w]*") + + def getChangeByURL(self, url): + try: + parsed = urllib.parse.urlparse(url) + except ValueError: + return None + m = self.change_re.match(parsed.path) + if not m: + return None + try: + change_no = int(m.group(2)) + except ValueError: + return None + query = "change:%s" % (change_no,) + results = self.connection.simpleQuery(query) + if not results: + return None + change = self.connection._getChange( + results[0]['number'], results[0]['currentPatchSet']['number']) + return change + + def getChangesDependingOn(self, change, projects): + changes = [] + if not change.uris: + return changes + queries = set() + for uri in change.uris: + queries.add('message:%s' % uri) + query = '(' + ' OR '.join(queries) + ')' + results = self.connection.simpleQuery(query) + seen = set() + for result in results: + for match in find_dependency_headers(result['commitMessage']): + found = False + for uri in change.uris: + if uri in match: + found = True + break + if not found: + continue + key = (result['number'], result['currentPatchSet']['number']) + if key in seen: + continue + 
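`getChangesDependingOn` above queries Gerrit for commit messages mentioning one of the change's URIs, then confirms each hit via `find_dependency_headers`. A hypothetical stand-in for that header scan (the real `zuul.lib.dependson.find_dependency_headers` may differ; this regex is an assumption for illustration):

```python
import re

DEPENDS_ON_RE = re.compile(r'^Depends-On:\s*(.+?)\s*$',
                           re.IGNORECASE | re.MULTILINE)

def find_dependency_headers(message):
    # Hypothetical sketch: collect the value of every Depends-On
    # footer line in a commit message.
    return DEPENDS_ON_RE.findall(message)

msg = "Fix a bug\n\nDepends-On: https://review.example.com/1234\n"
print(find_dependency_headers(msg))
```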
seen.add(key) + change = self.connection._getChange( + result['number'], result['currentPatchSet']['number']) + changes.append(change) + return changes + + def getCachedChanges(self): + for x in self.connection._change_cache.values(): + for y in x.values(): + yield y + def getProject(self, name): p = self.connection.getProject(name) if not p: diff --git a/zuul/driver/gerrit/gerrittrigger.py b/zuul/driver/gerrit/gerrittrigger.py index cfedd4e14..67608ad81 100644 --- a/zuul/driver/gerrit/gerrittrigger.py +++ b/zuul/driver/gerrit/gerrittrigger.py @@ -63,16 +63,6 @@ class GerritTrigger(BaseTrigger): return efilters -def validate_conf(trigger_conf): - """Validates the layout's trigger data.""" - events_with_ref = ('ref-updated', ) - for event in trigger_conf: - if event['event'] not in events_with_ref and event.get('ref', False): - raise v.Invalid( - "The event %s does not include ref information, Zuul cannot " - "use ref filter 'ref: %s'" % (event['event'], event['ref'])) - - def getSchema(): variable_dict = v.Schema(dict) diff --git a/zuul/driver/git/__init__.py b/zuul/driver/git/__init__.py index 0faa0365a..1fe43f643 100644 --- a/zuul/driver/git/__init__.py +++ b/zuul/driver/git/__init__.py @@ -15,6 +15,7 @@ from zuul.driver import Driver, ConnectionInterface, SourceInterface from zuul.driver.git import gitconnection from zuul.driver.git import gitsource +from zuul.driver.git import gittrigger class GitDriver(Driver, ConnectionInterface, SourceInterface): @@ -23,9 +24,15 @@ class GitDriver(Driver, ConnectionInterface, SourceInterface): def getConnection(self, name, config): return gitconnection.GitConnection(self, name, config) + def getTrigger(self, connection, config=None): + return gittrigger.GitTrigger(self, connection, config) + def getSource(self, connection): return gitsource.GitSource(self, connection) + def getTriggerSchema(self): + return gittrigger.getSchema() + def getRequireSchema(self): return {} diff --git a/zuul/driver/git/gitconnection.py 
b/zuul/driver/git/gitconnection.py index f93824d2f..03b24cadc 100644 --- a/zuul/driver/git/gitconnection.py +++ b/zuul/driver/git/gitconnection.py @@ -13,12 +13,119 @@ # License for the specific language governing permissions and limitations # under the License. +import os +import git +import time import logging import urllib +import threading import voluptuous as v from zuul.connection import BaseConnection +from zuul.driver.git.gitmodel import GitTriggerEvent, EMPTY_GIT_REF +from zuul.model import Ref, Branch + + +class GitWatcher(threading.Thread): + log = logging.getLogger("connection.git.GitWatcher") + + def __init__(self, git_connection, baseurl, poll_delay): + threading.Thread.__init__(self) + self.daemon = True + self.git_connection = git_connection + self.baseurl = baseurl + self.poll_delay = poll_delay + self._stopped = False + self.projects_refs = self.git_connection.projects_refs + + def compareRefs(self, project, refs): + partial_events = [] + # Fetch previous refs state + base_refs = self.projects_refs.get(project) + # Create list of created refs + rcreateds = set(refs.keys()) - set(base_refs.keys()) + # Create list of deleted refs + rdeleteds = set(base_refs.keys()) - set(refs.keys()) + # Create the list of updated refs + updateds = {} + for ref, sha in refs.items(): + if ref in base_refs and base_refs[ref] != sha: + updateds[ref] = sha + for ref in rcreateds: + event = { + 'ref': ref, + 'branch_created': True, + 'oldrev': EMPTY_GIT_REF, + 'newrev': refs[ref] + } + partial_events.append(event) + for ref in rdeleteds: + event = { + 'ref': ref, + 'branch_deleted': True, + 'oldrev': base_refs[ref], + 'newrev': EMPTY_GIT_REF + } + partial_events.append(event) + for ref, sha in updateds.items(): + event = { + 'ref': ref, + 'branch_updated': True, + 'oldrev': base_refs[ref], + 'newrev': sha + } + partial_events.append(event) + events = [] + for pevent in partial_events: + event = GitTriggerEvent() + event.type = 'ref-updated' + event.project_hostname = 
self.git_connection.canonical_hostname + event.project_name = project + for attr in ('ref', 'oldrev', 'newrev', 'branch_created', + 'branch_deleted', 'branch_updated'): + if attr in pevent: + setattr(event, attr, pevent[attr]) + events.append(event) + return events + + def _run(self): + self.log.debug("Walk through projects refs for connection: %s" % + self.git_connection.connection_name) + try: + for project in self.git_connection.projects: + refs = self.git_connection.lsRemote(project) + self.log.debug("Read refs %s for project %s" % (refs, project)) + if not self.projects_refs.get(project): + # State for this project does not exist yet so add it. + # No event will be triggered in this loop as + # projects_refs['project'] and refs are equal + self.projects_refs[project] = refs + events = self.compareRefs(project, refs) + self.projects_refs[project] = refs + # Send events to the scheduler + for event in events: + self.log.debug("Handling event: %s" % event) + # Force changes cache update before passing + # the event to the scheduler + self.git_connection.getChange(event) + self.git_connection.logEvent(event) + # Pass the event to the scheduler + self.git_connection.sched.addEvent(event) + except Exception as e: + self.log.debug("Unexpected issue in _run loop: %s" % str(e)) + + def run(self): + while not self._stopped: + if not self.git_connection.w_pause: + self._run() + # Polling wait delay + else: + self.log.debug("Watcher is on pause") + time.sleep(self.poll_delay) + + def stop(self): + self._stopped = True class GitConnection(BaseConnection): @@ -32,6 +139,8 @@ class GitConnection(BaseConnection): raise Exception('baseurl is required for git connections in ' '%s' % self.connection_name) self.baseurl = self.connection_config.get('baseurl') + self.poll_timeout = float( + self.connection_config.get('poll_delay', 3600 * 2)) self.canonical_hostname = self.connection_config.get( 'canonical_hostname') if not self.canonical_hostname: @@ -40,7 +149,10 @@ class 
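`GitWatcher.compareRefs` above classifies refs by comparing the previously seen ref map against a fresh `ls-remote` snapshot. The core set arithmetic can be sketched on its own (function name is illustrative; results are sorted here only to make them deterministic):

```python
EMPTY_GIT_REF = '0' * 40  # all-zero sha used in create/delete events

def diff_refs(base_refs, refs):
    """Classify refs as created, deleted, or updated by comparing two
    ref -> sha maps (previous snapshot vs. current snapshot)."""
    created = sorted(set(refs) - set(base_refs))
    deleted = sorted(set(base_refs) - set(refs))
    updated = sorted(r for r in refs
                     if r in base_refs and base_refs[r] != refs[r])
    return created, deleted, updated
```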
GitConnection(BaseConnection): self.canonical_hostname = r.hostname else: self.canonical_hostname = 'localhost' + self.w_pause = False self.projects = {} + self.projects_refs = {} + self._change_cache = {} def getProject(self, name): return self.projects.get(name) @@ -48,15 +160,97 @@ class GitConnection(BaseConnection): def addProject(self, project): self.projects[project.name] = project + def getChangeFilesUpdated(self, project_name, branch, tosha): + job = self.sched.merger.getFilesChanges( + self.connection_name, project_name, branch, tosha) + self.log.debug("Waiting for fileschanges job %s" % job) + job.wait() + if not job.updated: + raise Exception("Fileschanges job %s failed" % job) + self.log.debug("Fileschanges job %s got changes on files %s" % + (job, job.files)) + return job.files + + def lsRemote(self, project): + refs = {} + client = git.cmd.Git() + output = client.ls_remote( + os.path.join(self.baseurl, project)) + for line in output.splitlines(): + sha, ref = line.split('\t') + if ref.startswith('refs/'): + refs[ref] = sha + return refs + + def maintainCache(self, relevant): + remove = {} + for branch, refschange in self._change_cache.items(): + for ref, change in refschange.items(): + if change not in relevant: + remove.setdefault(branch, []).append(ref) + for branch, refs in remove.items(): + for ref in refs: + del self._change_cache[branch][ref] + if not self._change_cache[branch]: + del self._change_cache[branch] + + def getChange(self, event, refresh=False): + if event.ref and event.ref.startswith('refs/heads/'): + branch = event.ref[len('refs/heads/'):] + change = self._change_cache.get(branch, {}).get(event.newrev) + if change: + return change + project = self.getProject(event.project_name) + change = Branch(project) + change.branch = branch + for attr in ('ref', 'oldrev', 'newrev'): + setattr(change, attr, getattr(event, attr)) + change.url = "" + change.files = self.getChangeFilesUpdated( + event.project_name, change.branch, event.oldrev) + 
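`lsRemote` above shells out via GitPython and parses the tab-separated `git ls-remote` output; `getProjectBranches` then filters that map down to branch names. A self-contained sketch of the parsing step (the sample output is fabricated):

```python
def parse_ls_remote(output):
    """Parse `git ls-remote` style output (sha<TAB>ref per line) into
    a ref -> sha map, keeping only refs/ entries (HEAD is skipped)."""
    refs = {}
    for line in output.splitlines():
        sha, ref = line.split('\t')
        if ref.startswith('refs/'):
            refs[ref] = sha
    return refs

sample = ("deadbeef\tHEAD\n"
          "deadbeef\trefs/heads/master\n"
          "cafef00d\trefs/tags/v1.0\n")
refs = parse_ls_remote(sample)
# Branch extraction as in getProjectBranches:
branches = [r[len('refs/heads/'):] for r in refs
            if r.startswith('refs/heads/')]
print(branches)
```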
self._change_cache.setdefault(branch, {})[event.newrev] = change + elif event.ref: + # catch-all ref (ie, not a branch or head) + project = self.getProject(event.project_name) + change = Ref(project) + for attr in ('ref', 'oldrev', 'newrev'): + setattr(change, attr, getattr(event, attr)) + change.url = "" + else: + self.log.warning("Unable to get change for %s" % (event,)) + change = None + return change + def getProjectBranches(self, project, tenant): - # TODO(jeblair): implement; this will need to handle local or - # remote git urls. - return ['master'] + refs = self.lsRemote(project.name) + branches = [ref[len('refs/heads/'):] for ref in + refs if ref.startswith('refs/heads/')] + return branches def getGitUrl(self, project): url = '%s/%s' % (self.baseurl, project.name) return url + def onLoad(self): + self.log.debug("Starting Git Watcher") + self._start_watcher_thread() + + def onStop(self): + self.log.debug("Stopping Git Watcher") + self._stop_watcher_thread() + + def _stop_watcher_thread(self): + if self.watcher_thread: + self.watcher_thread.stop() + self.watcher_thread.join() + + def _start_watcher_thread(self): + self.watcher_thread = GitWatcher( + self, + self.baseurl, + self.poll_timeout) + self.watcher_thread.start() + def getSchema(): git_connection = v.Any(str, v.Schema(dict)) diff --git a/zuul/driver/git/gitmodel.py b/zuul/driver/git/gitmodel.py new file mode 100644 index 000000000..5d12b36da --- /dev/null +++ b/zuul/driver/git/gitmodel.py @@ -0,0 +1,86 @@ +# Copyright 2017 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the +# License for the specific language governing permissions and limitations +# under the License. + +import re + +from zuul.model import TriggerEvent +from zuul.model import EventFilter + + +EMPTY_GIT_REF = '0' * 40 # git sha of all zeros, used during creates/deletes + + +class GitTriggerEvent(TriggerEvent): + """Incoming event from an external system.""" + + def __repr__(self): + ret = '<GitTriggerEvent %s %s' % (self.type, + self.project_name) + + if self.branch: + ret += " %s" % self.branch + ret += " oldrev:%s" % self.oldrev + ret += " newrev:%s" % self.newrev + ret += '>' + + return ret + + +class GitEventFilter(EventFilter): + def __init__(self, trigger, types=[], refs=[], + ignore_deletes=True): + + super().__init__(trigger) + + self._refs = refs + self.types = types + self.refs = [re.compile(x) for x in refs] + self.ignore_deletes = ignore_deletes + + def __repr__(self): + ret = '<GitEventFilter' + + if self.types: + ret += ' types: %s' % ', '.join(self.types) + if self._refs: + ret += ' refs: %s' % ', '.join(self._refs) + if self.ignore_deletes: + ret += ' ignore_deletes: %s' % self.ignore_deletes + ret += '>' + + return ret + + def matches(self, event, change): + # event types are ORed + matches_type = False + for etype in self.types: + if etype == event.type: + matches_type = True + if self.types and not matches_type: + return False + + # refs are ORed + matches_ref = False + if event.ref is not None: + for ref in self.refs: + if ref.match(event.ref): + matches_ref = True + if self.refs and not matches_ref: + return False + if self.ignore_deletes and event.newrev == EMPTY_GIT_REF: + # If the updated ref has an empty git sha (all 0s), + # then the ref is being deleted + return False + + return True diff --git a/zuul/driver/git/gitsource.py b/zuul/driver/git/gitsource.py index 8d85c082f..a7d42be12 100644 --- a/zuul/driver/git/gitsource.py +++ b/zuul/driver/git/gitsource.py @@ -36,7 +36,16 @@ class GitSource(BaseSource): raise NotImplemented() def 
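`GitEventFilter.matches` above ORs the configured types together, ORs the ref regexes together, and drops deletions (an all-zero `newrev`) when `ignore-deletes` is set. A simplified sketch using a plain dict in place of the event object (names are illustrative):

```python
import re

EMPTY_GIT_REF = '0' * 40

def event_matches(types, ref_patterns, ignore_deletes, event):
    """Simplified GitEventFilter.matches: each criterion list is ORed
    internally, and the criteria are ANDed together."""
    if types and event['type'] not in types:
        return False
    if ref_patterns:
        ref = event.get('ref')
        if ref is None or not any(re.match(p, ref) for p in ref_patterns):
            return False
    # An all-zero newrev means the ref is being deleted.
    if ignore_deletes and event['newrev'] == EMPTY_GIT_REF:
        return False
    return True
```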
getChange(self, event, refresh=False): - raise NotImplemented() + return self.connection.getChange(event, refresh) + + def getChangeByURL(self, url): + return None + + def getChangesDependingOn(self, change, projects): + return [] + + def getCachedChanges(self): + return [] def getProject(self, name): p = self.connection.getProject(name) diff --git a/zuul/driver/git/gittrigger.py b/zuul/driver/git/gittrigger.py new file mode 100644 index 000000000..28852307e --- /dev/null +++ b/zuul/driver/git/gittrigger.py @@ -0,0 +1,49 @@ +# Copyright 2017 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
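The new git trigger below accepts either a scalar or a list for `event` and `ref`, normalizing with `to_list`. A minimal sketch of that helper (the real `zuul.driver.util.to_list` may differ slightly; this is the usual shape of such a normalizer):

```python
def to_list(item):
    """Normalize a scalar-or-list configuration value to a list,
    mapping None (an absent option) to the empty list."""
    if item is None:
        return []
    if isinstance(item, list):
        return item
    return [item]
```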
+ +import logging +import voluptuous as v +from zuul.trigger import BaseTrigger +from zuul.driver.git.gitmodel import GitEventFilter +from zuul.driver.util import scalar_or_list, to_list + + +class GitTrigger(BaseTrigger): + name = 'git' + log = logging.getLogger("zuul.GitTrigger") + + def getEventFilters(self, trigger_conf): + efilters = [] + for trigger in to_list(trigger_conf): + f = GitEventFilter( + trigger=self, + types=to_list(trigger['event']), + refs=to_list(trigger.get('ref')), + ignore_deletes=trigger.get( + 'ignore-deletes', True) + ) + efilters.append(f) + + return efilters + + +def getSchema(): + git_trigger = { + v.Required('event'): + scalar_or_list(v.Any('ref-updated')), + 'ref': scalar_or_list(str), + 'ignore-deletes': bool, + } + + return git_trigger diff --git a/zuul/driver/github/githubconnection.py b/zuul/driver/github/githubconnection.py index f987f4712..a7aefe0cd 100644 --- a/zuul/driver/github/githubconnection.py +++ b/zuul/driver/github/githubconnection.py @@ -24,6 +24,7 @@ import re import cachecontrol from cachecontrol.cache import DictCache +from cachecontrol.heuristics import BaseHeuristic import iso8601 import jwt import requests @@ -34,14 +35,12 @@ import github3 import github3.exceptions from zuul.connection import BaseConnection -from zuul.model import Ref, Branch, Tag +from zuul.model import Ref, Branch, Tag, Project from zuul.exceptions import MergeFailure from zuul.driver.github.githubmodel import PullRequest, GithubTriggerEvent -ACCESS_TOKEN_URL = 'https://api.github.com/installations/%s/access_tokens' +GITHUB_BASE_URL = 'https://api.github.com' PREVIEW_JSON_ACCEPT = 'application/vnd.github.machine-man-preview+json' -INSTALLATIONS_URL = 'https://api.github.com/app/installations' -REPOS_URL = 'https://api.github.com/installation/repositories' def _sign_request(body, secret): @@ -137,7 +136,6 @@ class GithubEventConnector(threading.Thread): """Move events from GitHub into the scheduler""" log = 
logging.getLogger("zuul.GithubEventConnector") - delay = 10.0 def __init__(self, connection): super(GithubEventConnector, self).__init__() @@ -153,14 +151,6 @@ class GithubEventConnector(threading.Thread): ts, json_body, event_type = self.connection.getEvent() if self._stopped: return - # Github can produce inconsistent data immediately after an - # event, So ensure that we do not deliver the event to Zuul - # until at least a certain amount of time has passed. Note - # that if we receive several events in succession, we will - # only need to delay for the first event. In essence, Zuul - # should always be a constant number of seconds behind Github. - now = time.time() - time.sleep(max((ts + self.delay) - now, 0.0)) # If there's any installation mapping information in the body then # update the project mapping before any requests are made. @@ -351,7 +341,9 @@ class GithubEventConnector(threading.Thread): def _get_sender(self, body): login = body.get('sender').get('login') if login: - return self.connection.getUser(login) + # TODO(tobiash): it might be better to plumb in the installation id + project = body.get('repository', {}).get('full_name') + return self.connection.getUser(login, project=project) def run(self): while True: @@ -415,6 +407,11 @@ class GithubConnection(BaseConnection): self.source = driver.getSource(self) self.event_queue = queue.Queue() + if self.server == 'github.com': + self.base_url = GITHUB_BASE_URL + else: + self.base_url = 'https://%s/api/v3' % self.server + # ssl verification must default to true verify_ssl = self.connection_config.get('verify_ssl', 'true') self.verify_ssl = True @@ -431,9 +428,26 @@ class GithubConnection(BaseConnection): # NOTE(jamielennox): Better here would be to cache to memcache or file # or something external - but zuul already sucks at restarting so in # memory probably doesn't make this much worse. 
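The `GithubConnection` constructor above derives the REST base URL from the configured server: github.com uses the dedicated API host, while a GitHub Enterprise server exposes the same API under `/api/v3`. Sketched as a standalone helper (the function name is illustrative; the real code sets `self.base_url` inline):

```python
GITHUB_BASE_URL = 'https://api.github.com'


def get_base_url(server):
    # github.com has a dedicated API host; GitHub Enterprise serves the
    # v3 REST API under /api/v3 on the configured server itself.
    if server == 'github.com':
        return GITHUB_BASE_URL
    return 'https://%s/api/v3' % server
```

All later endpoint strings (`/app/installations`, `/installation/repositories`, `/installations/%s/access_tokens`) are then joined onto this base, which is what lets the driver work against both hosted GitHub and Enterprise installs.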
+ + # NOTE(tobiash): Contrary to its documentation, cachecontrol doesn't + # prioritize the etag caching; it doesn't even re-request until + # max-age has elapsed. + # + # Thus we need to add a custom caching heuristic which simply drops + # the cache-control header containing max-age. This way we force + # cachecontrol to only rely on the etag headers. + # + # http://cachecontrol.readthedocs.io/en/latest/etags.html + # http://cachecontrol.readthedocs.io/en/latest/custom_heuristics.html + class NoAgeHeuristic(BaseHeuristic): + def update_headers(self, response): + if 'cache-control' in response.headers: + del response.headers['cache-control'] + self.cache_adapter = cachecontrol.CacheControlAdapter( DictCache(), - cache_etags=True) + cache_etags=True, + heuristic=NoAgeHeuristic()) # The regex is based on the connection host. We do not yet support # cross-connection dependency gathering @@ -530,12 +544,21 @@ class GithubConnection(BaseConnection): return headers - def _get_installation_key(self, project, user_id=None, inst_id=None): + def _get_installation_key(self, project, user_id=None, inst_id=None, + reprime=False): installation_id = inst_id if project is not None: installation_id = self.installation_map.get(project) if not installation_id: + if reprime: + # prime installation map and try again without refreshing + self._prime_installation_map() + return self._get_installation_key(project, + user_id=user_id, + inst_id=inst_id, + reprime=False) + self.log.error("No installation ID available for project %s", project) return '' @@ -546,7 +569,10 @@ class GithubConnection(BaseConnection): if ((not expiry) or (not token) or (now >= expiry)): headers = self._get_app_auth_headers() - url = ACCESS_TOKEN_URL % installation_id + + url = "%s/installations/%s/access_tokens" % (self.base_url, + installation_id) + json_data = {'user_id': user_id} if user_id else None response = requests.post(url, headers=headers, json=json_data) @@ -568,7 +594,8 @@ class GithubConnection(BaseConnection): if 
not self.app_id: return - url = INSTALLATIONS_URL + url = '%s/app/installations' % self.base_url + headers = self._get_app_auth_headers() self.log.debug("Fetching installations for GitHub app") response = requests.get(url, headers=headers) @@ -581,7 +608,9 @@ class GithubConnection(BaseConnection): token = self._get_installation_key(project=None, inst_id=inst_id) headers = {'Accept': PREVIEW_JSON_ACCEPT, 'Authorization': 'token %s' % token} - url = REPOS_URL + + url = '%s/installation/repositories' % self.base_url + self.log.debug("Fetching repos for install %s" % inst_id) response = requests.get(url, headers=headers) response.raise_for_status() @@ -617,9 +646,12 @@ class GithubConnection(BaseConnection): return self._github def maintainCache(self, relevant): + remove = set() for key, change in self._change_cache.items(): if change not in relevant: - del self._change_cache[key] + remove.add(key) + for key in remove: + del self._change_cache[key] def getChange(self, event, refresh=False): """Get the change representing an event.""" @@ -629,7 +661,9 @@ class GithubConnection(BaseConnection): change = self._getChange(project, event.change_number, event.patch_number, refresh=refresh) change.url = event.change_url - change.updated_at = self._ghTimestampToDate(event.updated_at) + change.uris = [ + '%s/%s/pull/%s' % (self.server, project, change.number), + ] change.source_event = event change.is_current_patchset = (change.pr.get('head').get('sha') == event.patch_number) @@ -670,57 +704,72 @@ class GithubConnection(BaseConnection): raise return change - def _getDependsOnFromPR(self, body): - prs = [] - seen = set() - - for match in self.depends_on_re.findall(body): - if match in seen: - self.log.debug("Ignoring duplicate Depends-On: %s" % (match,)) - continue - seen.add(match) - # Get the github url - url = match.rsplit()[-1] - # break it into the parts we need - _, org, proj, _, num = url.rsplit('/', 4) - # Get a pull object so we can get the head sha - pull = 
self.getPull('%s/%s' % (org, proj), int(num)) - prs.append(pull) - - return prs - - def _getNeededByFromPR(self, change): - prs = [] - seen = set() - # This shouldn't return duplicate issues, but code as if it could - - # This leaves off the protocol, but looks for the specific GitHub - # hostname, the org/project, and the pull request number. - pattern = 'Depends-On %s/%s/pull/%s' % (self.server, - change.project.name, - change.number) + def getChangesDependingOn(self, change, projects): + changes = [] + if not change.uris: + return changes + + # Get a list of projects with unique installation ids + installation_ids = set() + installation_projects = set() + + if projects: + # We only need to find changes in projects in the supplied + # ChangeQueue. Find all of the github installations for + # all of those projects, and search using each of them, so + # that we get the right results based on the + # permissions granted to each of the installations. The + # common case for this is likely to be just one + # installation -- change queues aren't likely to span more + # than one installation. + for project in projects: + installation_id = self.installation_map.get(project) + if installation_id not in installation_ids: + installation_ids.add(installation_id) + installation_projects.add(project) + else: + # We aren't in the context of a change queue and we just + # need to query all installations. This currently only + # happens if certain features of the zuul trigger are + # used; generally it should be avoided.
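The `maintainCache` change earlier in this file stops deleting dict entries while iterating over the dict, which raises `RuntimeError` in Python 3; instead it collects the keys in a first pass and deletes in a second. The pattern, sketched standalone (the helper name is illustrative):

```python
def prune_cache(cache, relevant):
    """Drop cached changes that are no longer relevant, without mutating
    the dict while iterating over it (two-pass delete sketch)."""
    remove = set()
    for key, change in cache.items():
        if change not in relevant:
            remove.add(key)
    for key in remove:
        del cache[key]
```

An equivalent one-liner is `for key in list(cache): ...`, but the two-pass form keeps the membership test and the mutation visibly separate.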
+ for project, installation_id in self.installation_map.items(): + if installation_id not in installation_ids: + installation_ids.add(installation_id) + installation_projects.add(project) + + keys = set() + pattern = ' OR '.join(change.uris) query = '%s type:pr is:open in:body' % pattern - github = self.getGithubClient() - for issue in github.search_issues(query=query): - pr = issue.issue.pull_request().as_dict() - if not pr.get('url'): - continue - if issue in seen: - continue - # the issue provides no good description of the project :\ - org, proj, _, num = pr.get('url').split('/')[-4:] - self.log.debug("Found PR %s/%s/%s needs %s/%s" % - (org, proj, num, change.project.name, - change.number)) - prs.append(pr) - seen.add(issue) - - self.log.debug("Ran search issues: %s", query) - log_rate_limit(self.log, github) - return prs + # Repeat the search for each installation id (project) + for installation_project in installation_projects: + github = self.getGithubClient(installation_project) + for issue in github.search_issues(query=query): + pr = issue.issue.pull_request().as_dict() + if not pr.get('url'): + continue + # the issue provides no good description of the project :\ + org, proj, _, num = pr.get('url').split('/')[-4:] + proj = pr.get('base').get('repo').get('full_name') + sha = pr.get('head').get('sha') + key = (proj, num, sha) + if key in keys: + continue + self.log.debug("Found PR %s/%s needs %s/%s" % + (proj, num, change.project.name, + change.number)) + keys.add(key) + self.log.debug("Ran search issues: %s", query) + log_rate_limit(self.log, github) - def _updateChange(self, change, history=None): + for key in keys: + (proj, num, sha) = key + project = self.source.getProject(proj) + change = self._getChange(project, int(num), patchset=sha) + changes.append(change) + return changes + + def _updateChange(self, change, history=None): # If this change is already in the history, we have a cyclic # dependency loop and we do not need to update again, since it # 
was done in a previous frame. @@ -740,10 +789,10 @@ class GithubConnection(BaseConnection): change.reviews = self.getPullReviews(change.project, change.number) change.labels = change.pr.get('labels') - change.body = change.pr.get('body') - # ensure body is at least an empty string - if not change.body: - change.body = '' + # ensure message is at least an empty string + change.message = change.pr.get('body') or '' + change.updated_at = self._ghTimestampToDate( + change.pr.get('updated_at')) if history is None: history = [] @@ -751,52 +800,32 @@ class GithubConnection(BaseConnection): history = history[:] history.append((change.project.name, change.number)) - needs_changes = [] - - # Get all the PRs this may depend on - for pr in self._getDependsOnFromPR(change.body): - proj = pr.get('base').get('repo').get('full_name') - pull = pr.get('number') - self.log.debug("Updating %s: Getting dependent " - "pull request %s/%s" % - (change, proj, pull)) - project = self.source.getProject(proj) - dep = self._getChange(project, pull, - patchset=pr.get('head').get('sha'), - history=history) - if (not dep.is_merged) and dep not in needs_changes: - needs_changes.append(dep) - - change.needs_changes = needs_changes - - needed_by_changes = [] - for pr in self._getNeededByFromPR(change): - proj = pr.get('base').get('repo').get('full_name') - pull = pr.get('number') - self.log.debug("Updating %s: Getting needed " - "pull request %s/%s" % - (change, proj, pull)) - project = self.source.getProject(proj) - dep = self._getChange(project, pull, - patchset=pr.get('head').get('sha'), - history=history) - if not dep.is_merged: - needed_by_changes.append(dep) - change.needed_by_changes = needed_by_changes + self.sched.onChangeUpdated(change) return change - def getGitUrl(self, project): + def getGitUrl(self, project: Project): if self.git_ssh_key: - return 'ssh://git@%s/%s.git' % (self.server, project) + return 'ssh://git@%s/%s.git' % (self.server, project.name) + + # if app_id is configured 
but self.app_id is empty we are not + # authenticated yet against github as app + if not self.app_id and self.connection_config.get('app_id', None): + self._authenticateGithubAPI() + self._prime_installation_map() if self.app_id: - installation_key = self._get_installation_key(project) + # We may be in the context of a merger or executor here. The + # mergers and executors don't receive webhook events so they miss + # new repository installations. In order to cope with this we need + # to reprime the installation map if we don't find the repo there. + installation_key = self._get_installation_key(project.name, + reprime=True) return 'https://x-access-token:%s@%s/%s' % (installation_key, self.server, - project) + project.name) - return 'https://%s/%s' % (self.server, project) + return 'https://%s/%s' % (self.server, project.name) def getGitwebUrl(self, project, sha=None): url = 'https://%s/%s' % (self.server, project) @@ -956,8 +985,8 @@ class GithubConnection(BaseConnection): log_rate_limit(self.log, github) return reviews - def getUser(self, login): - return GithubUser(self.getGithubClient(), login) + def getUser(self, login, project=None): + return GithubUser(self.getGithubClient(project), login) def getUserUri(self, login): return 'https://%s/%s' % (self.server, login) diff --git a/zuul/driver/github/githubmodel.py b/zuul/driver/github/githubmodel.py index ffd1c3f94..0731dd733 100644 --- a/zuul/driver/github/githubmodel.py +++ b/zuul/driver/github/githubmodel.py @@ -37,7 +37,8 @@ class PullRequest(Change): self.labels = [] def isUpdateOf(self, other): - if (hasattr(other, 'number') and self.number == other.number and + if (self.project == other.project and + hasattr(other, 'number') and self.number == other.number and hasattr(other, 'patchset') and self.patchset != other.patchset and hasattr(other, 'updated_at') and self.updated_at > other.updated_at): diff --git a/zuul/driver/github/githubreporter.py b/zuul/driver/github/githubreporter.py index 
505757fa2..848ae1b3a 100644 --- a/zuul/driver/github/githubreporter.py +++ b/zuul/driver/github/githubreporter.py @@ -75,6 +75,14 @@ class GithubReporter(BaseReporter): msg = self._formatItemReportMergeFailure(item) self.addPullComment(item, msg) + def _formatItemReportJobs(self, item): + # Return the list of jobs portion of the report + ret = '' + jobs_fields = self._getItemReportJobsFields(item) + for job_fields in jobs_fields: + ret += '- [%s](%s) : %s%s%s%s\n' % job_fields + return ret + def addPullComment(self, item, comment=None): message = comment or self._formatItemReport(item) project = item.change.project.name diff --git a/zuul/driver/github/githubsource.py b/zuul/driver/github/githubsource.py index 1e7e07a88..33f8f7cae 100644 --- a/zuul/driver/github/githubsource.py +++ b/zuul/driver/github/githubsource.py @@ -12,6 +12,8 @@ # License for the specific language governing permissions and limitations # under the License. +import re +import urllib import logging import time import voluptuous as v @@ -44,6 +46,8 @@ class GithubSource(BaseSource): if not change.number: # Not a pull request, considering merged. return True + # We don't need to perform another query because the API call + # to perform the merge will ensure this is updated. 
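The new `_formatItemReportJobs` in the GitHub reporter above renders each job's report fields as one Markdown list entry per job. A sketch of that formatting, assuming (illustratively) that each tuple carries the job name, log URL, result, and three trailing decoration strings in the order the format string expects:

```python
def format_report_jobs(jobs_fields):
    """Render job result tuples as a Markdown bullet list, one line per
    job, in the style of the GitHub reporter (field order illustrative)."""
    ret = ''
    for job_fields in jobs_fields:
        # (name, url, result, elapsed-time text, voting text, extra text)
        ret += '- [%s](%s) : %s%s%s%s\n' % job_fields
    return ret
```

GitHub renders `- [name](url)` as a linked bullet, so the PR comment ends up with one clickable log link per job.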
return change.is_merged def canMerge(self, change, allow_needs): @@ -61,6 +65,38 @@ class GithubSource(BaseSource): def getChange(self, event, refresh=False): return self.connection.getChange(event, refresh) + change_re = re.compile(r"/(.*?)/(.*?)/pull/(\d+)[\w]*") + + def getChangeByURL(self, url): + try: + parsed = urllib.parse.urlparse(url) + except ValueError: + return None + m = self.change_re.match(parsed.path) + if not m: + return None + org = m.group(1) + proj = m.group(2) + try: + num = int(m.group(3)) + except ValueError: + return None + pull = self.connection.getPull('%s/%s' % (org, proj), int(num)) + if not pull: + return None + proj = pull.get('base').get('repo').get('full_name') + project = self.getProject(proj) + change = self.connection._getChange( + project, num, + patchset=pull.get('head').get('sha')) + return change + + def getChangesDependingOn(self, change, projects): + return self.connection.getChangesDependingOn(change, projects) + + def getCachedChanges(self): + return self.connection._change_cache.values() + def getProject(self, name): p = self.connection.getProject(name) if not p: diff --git a/zuul/driver/sql/alembic.ini b/zuul/driver/sql/alembic.ini new file mode 100644 index 000000000..e94d496e1 --- /dev/null +++ b/zuul/driver/sql/alembic.ini @@ -0,0 +1,2 @@ +[alembic] +script_location = alembic diff --git a/zuul/driver/sql/alembic/env.py b/zuul/driver/sql/alembic/env.py index 4542a2227..8cf2ecf2b 100644 --- a/zuul/driver/sql/alembic/env.py +++ b/zuul/driver/sql/alembic/env.py @@ -55,6 +55,13 @@ def run_migrations_online(): prefix='sqlalchemy.', poolclass=pool.NullPool) + # we can get the table prefix via the tag object + tag = context.get_tag_argument() + if tag and isinstance(tag, dict): + table_prefix = tag.get('table_prefix', '') + else: + table_prefix = '' + with connectable.connect() as connection: context.configure( connection=connection, @@ -62,7 +69,7 @@ def run_migrations_online(): ) with context.begin_transaction(): - 
context.run_migrations() + context.run_migrations(table_prefix=table_prefix) if context.is_offline_mode(): diff --git a/zuul/driver/sql/alembic/versions/19d3a3ebfe1d_change_patchset_to_string.py b/zuul/driver/sql/alembic/versions/19d3a3ebfe1d_change_patchset_to_string.py new file mode 100644 index 000000000..505a1ed73 --- /dev/null +++ b/zuul/driver/sql/alembic/versions/19d3a3ebfe1d_change_patchset_to_string.py @@ -0,0 +1,29 @@ +"""Change patchset to string + +Revision ID: 19d3a3ebfe1d +Revises: cfc0dc45f341 +Create Date: 2018-01-10 07:42:16.546751 + +""" + +# revision identifiers, used by Alembic. +revision = '19d3a3ebfe1d' +down_revision = 'cfc0dc45f341' +branch_labels = None +depends_on = None + +from alembic import op +import sqlalchemy as sa + +BUILDSET_TABLE = 'zuul_buildset' + + +def upgrade(table_prefix=''): + op.alter_column(table_prefix + BUILDSET_TABLE, + 'patchset', + type_=sa.String(255), + existing_nullable=True) + + +def downgrade(): + raise Exception("Downgrades not supported") diff --git a/zuul/driver/sql/alembic/versions/1dd914d4a482_allow_score_to_be_null.py b/zuul/driver/sql/alembic/versions/1dd914d4a482_allow_score_to_be_null.py index b153cabf7..f42c2f397 100644 --- a/zuul/driver/sql/alembic/versions/1dd914d4a482_allow_score_to_be_null.py +++ b/zuul/driver/sql/alembic/versions/1dd914d4a482_allow_score_to_be_null.py @@ -16,8 +16,8 @@ from alembic import op import sqlalchemy as sa -def upgrade(): - op.alter_column('zuul_buildset', 'score', nullable=True, +def upgrade(table_prefix=''): + op.alter_column(table_prefix + 'zuul_buildset', 'score', nullable=True, existing_type=sa.Integer) diff --git a/zuul/driver/sql/alembic/versions/20126015a87d_add_indexes.py b/zuul/driver/sql/alembic/versions/20126015a87d_add_indexes.py index 12e7c094a..906df2131 100644 --- a/zuul/driver/sql/alembic/versions/20126015a87d_add_indexes.py +++ b/zuul/driver/sql/alembic/versions/20126015a87d_add_indexes.py @@ -32,24 +32,28 @@ BUILDSET_TABLE = 'zuul_buildset' BUILD_TABLE 
= 'zuul_build' -def upgrade(): +def upgrade(table_prefix=''): + prefixed_buildset = table_prefix + BUILDSET_TABLE + prefixed_build = table_prefix + BUILD_TABLE + # To allow a dashboard to show a per-project view, optionally filtered # by pipeline. op.create_index( - 'project_pipeline_idx', BUILDSET_TABLE, ['project', 'pipeline']) + 'project_pipeline_idx', prefixed_buildset, ['project', 'pipeline']) # To allow a dashboard to show a per-project-change view op.create_index( - 'project_change_idx', BUILDSET_TABLE, ['project', 'change']) + 'project_change_idx', prefixed_buildset, ['project', 'change']) # To allow a dashboard to show a per-change view - op.create_index('change_idx', BUILDSET_TABLE, ['change']) + op.create_index('change_idx', prefixed_buildset, ['change']) # To allow a dashboard to show a job lib view. buildset_id is included # so that it's a covering index and can satisfy the join back to buildset # without an additional lookup. op.create_index( - 'job_name_buildset_id_idx', BUILD_TABLE, ['job_name', 'buildset_id']) + 'job_name_buildset_id_idx', prefixed_build, + ['job_name', 'buildset_id']) def downgrade(): diff --git a/zuul/driver/sql/alembic/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py b/zuul/driver/sql/alembic/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py index 783196f06..b78f8305c 100644 --- a/zuul/driver/sql/alembic/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py +++ b/zuul/driver/sql/alembic/versions/4d3ebd7f06b9_set_up_initial_reporter_tables.py @@ -19,9 +19,9 @@ BUILDSET_TABLE = 'zuul_buildset' BUILD_TABLE = 'zuul_build' -def upgrade(): +def upgrade(table_prefix=''): op.create_table( - BUILDSET_TABLE, + table_prefix + BUILDSET_TABLE, sa.Column('id', sa.Integer, primary_key=True), sa.Column('zuul_ref', sa.String(255)), sa.Column('pipeline', sa.String(255)), @@ -34,10 +34,10 @@ def upgrade(): ) op.create_table( - BUILD_TABLE, + table_prefix + BUILD_TABLE, sa.Column('id', sa.Integer, primary_key=True), 
sa.Column('buildset_id', sa.Integer, - sa.ForeignKey(BUILDSET_TABLE + ".id")), + sa.ForeignKey(table_prefix + BUILDSET_TABLE + ".id")), sa.Column('uuid', sa.String(36)), sa.Column('job_name', sa.String(255)), sa.Column('result', sa.String(255)), diff --git a/zuul/driver/sql/alembic/versions/5efb477fa963_add_ref_url_column.py b/zuul/driver/sql/alembic/versions/5efb477fa963_add_ref_url_column.py index f9c353519..5502425a5 100644 --- a/zuul/driver/sql/alembic/versions/5efb477fa963_add_ref_url_column.py +++ b/zuul/driver/sql/alembic/versions/5efb477fa963_add_ref_url_column.py @@ -30,8 +30,9 @@ from alembic import op import sqlalchemy as sa -def upgrade(): - op.add_column('zuul_buildset', sa.Column('ref_url', sa.String(255))) +def upgrade(table_prefix=''): + op.add_column( + table_prefix + 'zuul_buildset', sa.Column('ref_url', sa.String(255))) def downgrade(): diff --git a/zuul/driver/sql/alembic/versions/60c119eb1e3f_use_build_set_results.py b/zuul/driver/sql/alembic/versions/60c119eb1e3f_use_build_set_results.py index 985eb0c39..67581a6f9 100644 --- a/zuul/driver/sql/alembic/versions/60c119eb1e3f_use_build_set_results.py +++ b/zuul/driver/sql/alembic/versions/60c119eb1e3f_use_build_set_results.py @@ -18,8 +18,9 @@ import sqlalchemy as sa BUILDSET_TABLE = 'zuul_buildset' -def upgrade(): - op.add_column(BUILDSET_TABLE, sa.Column('result', sa.String(255))) +def upgrade(table_prefix=''): + op.add_column( + table_prefix + BUILDSET_TABLE, sa.Column('result', sa.String(255))) connection = op.get_bind() connection.execute( @@ -29,9 +30,9 @@ def upgrade(): SELECT CASE score WHEN 1 THEN 'SUCCESS' ELSE 'FAILURE' END) - """.format(buildset_table=BUILDSET_TABLE)) + """.format(buildset_table=table_prefix + BUILDSET_TABLE)) - op.drop_column(BUILDSET_TABLE, 'score') + op.drop_column(table_prefix + BUILDSET_TABLE, 'score') def downgrade(): diff --git a/zuul/driver/sql/alembic/versions/ba4cdce9b18c_add_rev_columns.py b/zuul/driver/sql/alembic/versions/ba4cdce9b18c_add_rev_columns.py 
index dc75983a9..3e60866e0 100644 --- a/zuul/driver/sql/alembic/versions/ba4cdce9b18c_add_rev_columns.py +++ b/zuul/driver/sql/alembic/versions/ba4cdce9b18c_add_rev_columns.py @@ -16,9 +16,11 @@ from alembic import op import sqlalchemy as sa -def upgrade(): - op.add_column('zuul_buildset', sa.Column('oldrev', sa.String(255))) - op.add_column('zuul_buildset', sa.Column('newrev', sa.String(255))) +def upgrade(table_prefix=''): + op.add_column( + table_prefix + 'zuul_buildset', sa.Column('oldrev', sa.String(255))) + op.add_column( + table_prefix + 'zuul_buildset', sa.Column('newrev', sa.String(255))) def downgrade(): diff --git a/zuul/driver/sql/alembic/versions/cfc0dc45f341_change_patchset_to_string.py b/zuul/driver/sql/alembic/versions/cfc0dc45f341_change_patchset_to_string.py new file mode 100644 index 000000000..3fde8e545 --- /dev/null +++ b/zuul/driver/sql/alembic/versions/cfc0dc45f341_change_patchset_to_string.py @@ -0,0 +1,30 @@ +"""Change patchset to string + +Revision ID: cfc0dc45f341 +Revises: ba4cdce9b18c +Create Date: 2018-01-09 16:44:31.506958 + +""" + +# revision identifiers, used by Alembic. 
+revision = 'cfc0dc45f341' +down_revision = 'ba4cdce9b18c' +branch_labels = None +depends_on = None + +from alembic import op +import sqlalchemy as sa + +BUILDSET_TABLE = 'zuul_buildset' + + +def upgrade(table_prefix=''): + op.alter_column(table_prefix + BUILDSET_TABLE, + 'patchset', + sa.String(255), + existing_nullable=True, + existing_type=sa.Integer) + + +def downgrade(): + raise Exception("Downgrades not supported") diff --git a/zuul/driver/sql/alembic/versions/f86c9871ee67_add_tenant_column.py b/zuul/driver/sql/alembic/versions/f86c9871ee67_add_tenant_column.py index 4087af368..84fd0efd4 100644 --- a/zuul/driver/sql/alembic/versions/f86c9871ee67_add_tenant_column.py +++ b/zuul/driver/sql/alembic/versions/f86c9871ee67_add_tenant_column.py @@ -30,8 +30,9 @@ from alembic import op import sqlalchemy as sa -def upgrade(): - op.add_column('zuul_buildset', sa.Column('tenant', sa.String(255))) +def upgrade(table_prefix=''): + op.add_column( + table_prefix + 'zuul_buildset', sa.Column('tenant', sa.String(255))) def downgrade(): diff --git a/zuul/driver/sql/sqlconnection.py b/zuul/driver/sql/sqlconnection.py index b964c0be3..715d72bba 100644 --- a/zuul/driver/sql/sqlconnection.py +++ b/zuul/driver/sql/sqlconnection.py @@ -15,6 +15,7 @@ import logging import alembic +import alembic.command import alembic.config import sqlalchemy as sa import sqlalchemy.pool @@ -39,6 +40,8 @@ class SQLConnection(BaseConnection): self.engine = None self.connection = None self.tables_established = False + self.table_prefix = self.connection_config.get('table_prefix', '') + try: self.dburi = self.connection_config.get('dburi') # Recycle connections if they've been idle for more than 1 second. 
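The migration scripts above all grow a `table_prefix` argument, and the `env.py` change threads the prefix through Alembic's `tag` argument, which may carry arbitrary data. A minimal sketch of the extraction side, assuming the tag is either absent, a plain string, or the dict Zuul passes to `alembic.command.upgrade`:

```python
def get_table_prefix(tag):
    # Alembic's tag argument may carry arbitrary data; Zuul passes a
    # dict like {'table_prefix': 'zuulv3_'} so migrations can prefix
    # their table names, letting one database host several deployments.
    if tag and isinstance(tag, dict):
        return tag.get('table_prefix', '')
    return ''
```

Defaulting to `''` keeps existing unprefixed deployments working: every migration then operates on the original `zuul_buildset` / `zuul_build` names.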
@@ -49,7 +52,6 @@ class SQLConnection(BaseConnection): poolclass=sqlalchemy.pool.QueuePool, pool_recycle=self.connection_config.get('pool_recycle', 1)) self._migrate() - self._setup_tables() self.zuul_buildset_table, self.zuul_build_table \ = self._setup_tables() self.tables_established = True @@ -75,20 +77,22 @@ class SQLConnection(BaseConnection): config.set_main_option("sqlalchemy.url", self.connection_config.get('dburi')) - alembic.command.upgrade(config, 'head') + # Alembic lets us add arbitrary data in the tag argument. We can + # leverage that to tell the upgrade scripts about the table prefix. + tag = {'table_prefix': self.table_prefix} + alembic.command.upgrade(config, 'head', tag=tag) - @staticmethod - def _setup_tables(): + def _setup_tables(self): metadata = sa.MetaData() zuul_buildset_table = sa.Table( - BUILDSET_TABLE, metadata, + self.table_prefix + BUILDSET_TABLE, metadata, sa.Column('id', sa.Integer, primary_key=True), sa.Column('zuul_ref', sa.String(255)), sa.Column('pipeline', sa.String(255)), sa.Column('project', sa.String(255)), sa.Column('change', sa.Integer, nullable=True), - sa.Column('patchset', sa.Integer, nullable=True), + sa.Column('patchset', sa.String(255), nullable=True), sa.Column('ref', sa.String(255)), sa.Column('oldrev', sa.String(255)), sa.Column('newrev', sa.String(255)), @@ -99,10 +103,11 @@ class SQLConnection(BaseConnection): ) zuul_build_table = sa.Table( - BUILD_TABLE, metadata, + self.table_prefix + BUILD_TABLE, metadata, sa.Column('id', sa.Integer, primary_key=True), sa.Column('buildset_id', sa.Integer, - sa.ForeignKey(BUILDSET_TABLE + ".id")), + sa.ForeignKey(self.table_prefix + + BUILDSET_TABLE + ".id")), sa.Column('uuid', sa.String(36)), sa.Column('job_name', sa.String(255)), sa.Column('result', sa.String(255)), diff --git a/zuul/driver/zuul/__init__.py b/zuul/driver/zuul/__init__.py index 0f6ec7da8..e381137a5 100644 --- a/zuul/driver/zuul/__init__.py +++ b/zuul/driver/zuul/__init__.py @@ -90,7 +90,18 @@ class 
ZuulDriver(Driver, TriggerInterface): if not hasattr(change, 'needed_by_changes'): self.log.debug(" %s does not support dependencies" % type(change)) return - for needs in change.needed_by_changes: + + # This is very inefficient, especially on systems with large + # numbers of github installations. This can be improved later + # with persistent storage of dependency information. + needed_by_changes = set(change.needed_by_changes) + for source in self.sched.connections.getSources(): + self.log.debug(" Checking source: %s", source) + needed_by_changes.update( + source.getChangesDependingOn(change, None)) + self.log.debug(" Following changes: %s", needed_by_changes) + + for needs in needed_by_changes: self._createParentChangeEnqueuedEvent(needs, pipeline) def _createParentChangeEnqueuedEvent(self, change, pipeline): diff --git a/zuul/executor/client.py b/zuul/executor/client.py index 06c2087f7..b21a290d5 100644 --- a/zuul/executor/client.py +++ b/zuul/executor/client.py @@ -245,7 +245,7 @@ class ExecutorClient(object): for change in dependent_changes: # We have to find the project this way because it may not # be registered in the tenant (ie, a foreign project). 
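The `ZuulDriver` change above unions a change's locally known reverse dependencies with `getChangesDependingOn` results from every registered source, using a set to deduplicate. Sketched standalone, with a stand-in for the source objects (names are illustrative):

```python
def collect_needed_by(local_needed_by, sources, change):
    """Union locally known reverse dependencies with each source's query
    results; the set deduplicates changes reported more than once."""
    needed_by = set(local_needed_by)
    for source in sources:
        # None means "no change-queue context": query all installations
        needed_by.update(source.getChangesDependingOn(change, None))
    return needed_by
```

As the in-code comment notes, querying every source per change is inefficient on systems with many GitHub installations; persistent dependency storage is flagged as the later fix.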
- source = self.sched.connections.getSourceByHostname( + source = self.sched.connections.getSourceByCanonicalHostname( change['project']['canonical_hostname']) project = source.getProject(change['project']['name']) if project not in projects: diff --git a/zuul/executor/server.py b/zuul/executor/server.py index 3a919953d..a8ab8c45e 100644 --- a/zuul/executor/server.py +++ b/zuul/executor/server.py @@ -44,7 +44,8 @@ from zuul.lib import commandsocket BUFFER_LINES_FOR_SYNTAX = 200 COMMANDS = ['stop', 'pause', 'unpause', 'graceful', 'verbose', 'unverbose', 'keep', 'nokeep'] -DEFAULT_FINGER_PORT = 79 +DEFAULT_FINGER_PORT = 7900 +BLACKLISTED_ANSIBLE_CONNECTION_TYPES = ['network_cli'] class StopException(Exception): @@ -347,6 +348,8 @@ class JobDir(object): pass self.known_hosts = os.path.join(ssh_dir, 'known_hosts') self.inventory = os.path.join(self.ansible_root, 'inventory.yaml') + self.setup_inventory = os.path.join(self.ansible_root, + 'setup-inventory.yaml') self.logging_json = os.path.join(self.ansible_root, 'logging.json') self.playbooks = [] # The list of candidate playbooks self.playbook = None # A pointer to the candidate we have chosen @@ -493,6 +496,26 @@ def _copy_ansible_files(python_module, target_dir): shutil.copy(os.path.join(library_path, fn), target_dir) +def make_setup_inventory_dict(nodes): + + hosts = {} + for node in nodes: + if (node['host_vars']['ansible_connection'] in + BLACKLISTED_ANSIBLE_CONNECTION_TYPES): + continue + + for name in node['name']: + hosts[name] = node['host_vars'] + + inventory = { + 'all': { + 'hosts': hosts, + } + } + + return inventory + + def make_inventory_dict(nodes, groups, all_vars): hosts = {} @@ -931,6 +954,10 @@ class AnsibleJob(object): if username: host_vars['ansible_user'] = username + connection_type = node.get('connection_type') + if connection_type: + host_vars['ansible_connection'] = connection_type + host_keys = [] for key in node.get('host_keys'): if port != 22: @@ -959,13 +986,11 @@ class 
AnsibleJob(object): "non-trusted repo." % (entry, path)) def findPlaybook(self, path, trusted=False): - for ext in ['', '.yaml', '.yml']: - fn = path + ext - if os.path.exists(fn): - if not trusted: - playbook_dir = os.path.dirname(os.path.abspath(fn)) - self._blockPluginDirs(playbook_dir) - return fn + if os.path.exists(path): + if not trusted: + playbook_dir = os.path.dirname(os.path.abspath(path)) + self._blockPluginDirs(playbook_dir) + return path raise ExecutorError("Unable to find playbook %s" % path) def preparePlaybooks(self, args): @@ -1155,8 +1180,13 @@ class AnsibleJob(object): result_data_file=self.jobdir.result_data_file) nodes = self.getHostList(args) + setup_inventory = make_setup_inventory_dict(nodes) inventory = make_inventory_dict(nodes, args['groups'], all_vars) + with open(self.jobdir.setup_inventory, 'w') as setup_inventory_yaml: + setup_inventory_yaml.write( + yaml.safe_dump(setup_inventory, default_flow_style=False)) + with open(self.jobdir.inventory, 'w') as inventory_yaml: inventory_yaml.write( yaml.safe_dump(inventory, default_flow_style=False)) @@ -1421,6 +1451,7 @@ class AnsibleJob(object): verbose = '-v' cmd = ['ansible', '*', verbose, '-m', 'setup', + '-i', self.jobdir.setup_inventory, '-a', 'gather_subset=!all'] result, code = self.runAnsible( @@ -1708,6 +1739,7 @@ class ExecutorServer(object): self.merger_worker.registerFunction("merger:merge") self.merger_worker.registerFunction("merger:cat") self.merger_worker.registerFunction("merger:refstate") + self.merger_worker.registerFunction("merger:fileschanges") def register_work(self): if self._running: @@ -1861,6 +1893,9 @@ class ExecutorServer(object): elif job.name == 'merger:refstate': self.log.debug("Got refstate job: %s" % job.unique) self.refstate(job) + elif job.name == 'merger:fileschanges': + self.log.debug("Got fileschanges job: %s" % job.unique) + self.fileschanges(job) else: self.log.error("Unable to handle job %s" % job.name) job.sendWorkFail() @@ -1972,6 +2007,19 @@ class 
ExecutorServer(object): files=files) job.sendWorkComplete(json.dumps(result)) + def fileschanges(self, job): + args = json.loads(job.arguments) + task = self.update(args['connection'], args['project']) + task.wait() + with self.merger_lock: + files = self.merger.getFilesChanges( + args['connection'], args['project'], + args['branch'], + args['tosha']) + result = dict(updated=True, + files=files) + job.sendWorkComplete(json.dumps(result)) + def refstate(self, job): args = json.loads(job.arguments) with self.merger_lock: diff --git a/zuul/lib/connections.py b/zuul/lib/connections.py index 262490a60..33c66f9a0 100644 --- a/zuul/lib/connections.py +++ b/zuul/lib/connections.py @@ -14,6 +14,7 @@ import logging import re +from collections import OrderedDict import zuul.driver.zuul import zuul.driver.gerrit @@ -38,7 +39,7 @@ class ConnectionRegistry(object): log = logging.getLogger("zuul.ConnectionRegistry") def __init__(self): - self.connections = {} + self.connections = OrderedDict() self.drivers = {} self.registerDriver(zuul.driver.zuul.ZuulDriver()) @@ -85,7 +86,7 @@ class ConnectionRegistry(object): def configure(self, config, source_only=False): # Register connections from the config - connections = {} + connections = OrderedDict() for section_name in config.sections(): con_match = re.match(r'^connection ([\'\"]?)(.*)(\1)$', @@ -154,6 +155,13 @@ class ConnectionRegistry(object): connection = self.connections[connection_name] return connection.driver.getSource(connection) + def getSources(self): + sources = [] + for connection in self.connections.values(): + if hasattr(connection.driver, 'getSource'): + sources.append(connection.driver.getSource(connection)) + return sources + def getReporter(self, connection_name, config=None): connection = self.connections[connection_name] return connection.driver.getReporter(connection, config) @@ -162,7 +170,7 @@ class ConnectionRegistry(object): connection = self.connections[connection_name] return 
connection.driver.getTrigger(connection, config) - def getSourceByHostname(self, canonical_hostname): + def getSourceByCanonicalHostname(self, canonical_hostname): for connection in self.connections.values(): if hasattr(connection, 'canonical_hostname'): if connection.canonical_hostname == canonical_hostname: diff --git a/zuul/lib/dependson.py b/zuul/lib/dependson.py new file mode 100644 index 000000000..cd0f6efa3 --- /dev/null +++ b/zuul/lib/dependson.py @@ -0,0 +1,29 @@ +# Copyright 2018 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import re + + +DEPENDS_ON_RE = re.compile(r"^Depends-On: (.*?)\s*$", + re.MULTILINE | re.IGNORECASE) + + +def find_dependency_headers(message): + # Search for Depends-On headers + dependencies = [] + for match in DEPENDS_ON_RE.findall(message): + if match in dependencies: + continue + dependencies.append(match) + return dependencies diff --git a/zuul/lib/fingergw.py b/zuul/lib/fingergw.py new file mode 100644 index 000000000..b56fe0461 --- /dev/null +++ b/zuul/lib/fingergw.py @@ -0,0 +1,214 @@ +#!/usr/bin/env python +# Copyright 2017 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import functools +import logging +import socket +import threading + +import zuul.rpcclient + +from zuul.lib import commandsocket +from zuul.lib import streamer_utils + + +COMMANDS = ['stop'] + + +class RequestHandler(streamer_utils.BaseFingerRequestHandler): + ''' + Class implementing the logic for handling a single finger request. + ''' + + log = logging.getLogger("zuul.fingergw") + + def __init__(self, *args, **kwargs): + self.rpc = kwargs.pop('rpc') + super(RequestHandler, self).__init__(*args, **kwargs) + + def _fingerClient(self, server, port, build_uuid): + ''' + Open a finger connection and return all streaming results. + + :param server: The remote server. + :param port: The remote port. + :param build_uuid: The build UUID to stream. + + Both IPv4 and IPv6 are supported. + ''' + with socket.create_connection((server, port), timeout=10) as s: + msg = "%s\n" % build_uuid # Must have a trailing newline! + s.sendall(msg.encode('utf-8')) + while True: + data = s.recv(1024) + if data: + self.request.sendall(data) + else: + break + + def handle(self): + ''' + This method is called by the socketserver framework to handle an + incoming request. 
+ ''' + try: + build_uuid = self.getCommand() + port_location = self.rpc.get_job_log_stream_address(build_uuid) + + if not port_location: + msg = 'Invalid build UUID %s' % build_uuid + self.request.sendall(msg.encode('utf-8')) + return + + self._fingerClient( + port_location['server'], + port_location['port'], + build_uuid, + ) + except BrokenPipeError: # Client disconnect + return + except Exception: + self.log.exception('Finger request handling exception:') + msg = 'Internal streaming error' + self.request.sendall(msg.encode('utf-8')) + return + + +class FingerGateway(object): + ''' + Class implementing the finger multiplexing/gateway logic. + + For each incoming finger request, a new thread is started that will + be responsible for finding which Zuul executor is executing the + requested build (by asking Gearman), forwarding the request to that + executor, and streaming the results back to our client. + ''' + + log = logging.getLogger("zuul.fingergw") + + def __init__(self, gearman, address, user, command_socket, pid_file): + ''' + Initialize the finger gateway. + + :param tuple gearman: Gearman connection information. This should + include the server, port, SSL key, SSL cert, and SSL CA. + :param tuple address: The address and port to bind to for our gateway. + :param str user: The user to which we should drop privileges after + binding to our address. + :param str command_socket: Path to the daemon command socket. + :param str pid_file: Path to the daemon PID file. 
+ ''' + self.gear_server = gearman[0] + self.gear_port = gearman[1] + self.gear_ssl_key = gearman[2] + self.gear_ssl_cert = gearman[3] + self.gear_ssl_ca = gearman[4] + self.address = address + self.user = user + self.pid_file = pid_file + + self.rpc = None + self.server = None + self.server_thread = None + + self.command_thread = None + self.command_running = False + self.command_socket = command_socket + + self.command_map = dict( + stop=self.stop, + ) + + def _runCommand(self): + while self.command_running: + try: + command = self.command_socket.get().decode('utf8') + if command != '_stop': + self.command_map[command]() + else: + return + except Exception: + self.log.exception("Exception while processing command") + + def _run(self): + try: + self.server.serve_forever() + except Exception: + self.log.exception('Abnormal termination:') + raise + + def start(self): + self.rpc = zuul.rpcclient.RPCClient( + self.gear_server, + self.gear_port, + self.gear_ssl_key, + self.gear_ssl_cert, + self.gear_ssl_ca) + + self.server = streamer_utils.CustomThreadingTCPServer( + self.address, + functools.partial(RequestHandler, rpc=self.rpc), + user=self.user, + pid_file=self.pid_file) + + # Start the command processor after the server and privilege drop + if self.command_socket: + self.log.debug("Starting command processor") + self.command_socket = commandsocket.CommandSocket( + self.command_socket) + self.command_socket.start() + self.command_running = True + self.command_thread = threading.Thread( + target=self._runCommand, name='command') + self.command_thread.daemon = True + self.command_thread.start() + + # The socketserver shutdown() call will hang unless the call + # to serve_forever() happens in another thread. So let's do that. 
+ self.server_thread = threading.Thread(target=self._run) + self.server_thread.daemon = True + self.server_thread.start() + self.log.info("Finger gateway is started") + + def stop(self): + if self.command_socket: + self.command_running = False + try: + self.command_socket.stop() + except Exception: + self.log.exception("Error stopping command socket:") + + if self.server: + try: + self.server.shutdown() + self.server.server_close() + self.server = None + except Exception: + self.log.exception("Error stopping TCP server:") + + if self.rpc: + try: + self.rpc.shutdown() + self.rpc = None + except Exception: + self.log.exception("Error stopping RPC client:") + + self.log.info("Finger gateway is stopped") + + def wait(self): + ''' + Wait on the gateway to shut down. + ''' + self.server_thread.join() diff --git a/zuul/lib/log_streamer.py b/zuul/lib/log_streamer.py index 1906be734..f96f44279 100644 --- a/zuul/lib/log_streamer.py +++ b/zuul/lib/log_streamer.py @@ -18,14 +18,13 @@ import logging import os import os.path -import pwd import re import select -import socket -import socketserver import threading import time +from zuul.lib import streamer_utils + class Log(object): @@ -38,7 +37,7 @@ class Log(object): self.size = self.stat.st_size -class RequestHandler(socketserver.BaseRequestHandler): +class RequestHandler(streamer_utils.BaseFingerRequestHandler): ''' Class to handle a single log streaming request. @@ -46,53 +45,17 @@ class RequestHandler(socketserver.BaseRequestHandler): the (class/method/attribute) names were changed to protect the innocent. ''' - MAX_REQUEST_LEN = 1024 - REQUEST_TIMEOUT = 10 - - # NOTE(Shrews): We only use this to log exceptions since a new process - # is used per-request (and having multiple processes write to the same - # log file constantly is bad). 
- log = logging.getLogger("zuul.log_streamer.RequestHandler") - - def get_command(self): - poll = select.poll() - bitmask = (select.POLLIN | select.POLLERR | - select.POLLHUP | select.POLLNVAL) - poll.register(self.request, bitmask) - buffer = b'' - ret = None - start = time.time() - while True: - elapsed = time.time() - start - timeout = max(self.REQUEST_TIMEOUT - elapsed, 0) - if not timeout: - raise Exception("Timeout while waiting for input") - for fd, event in poll.poll(timeout): - if event & select.POLLIN: - buffer += self.request.recv(self.MAX_REQUEST_LEN) - else: - raise Exception("Received error event") - if len(buffer) >= self.MAX_REQUEST_LEN: - raise Exception("Request too long") - try: - ret = buffer.decode('utf-8') - x = ret.find('\n') - if x > 0: - return ret[:x] - except UnicodeDecodeError: - pass + log = logging.getLogger("zuul.log_streamer") def handle(self): try: - build_uuid = self.get_command() + build_uuid = self.getCommand() except Exception: - self.log.exception("Failure during get_command:") + self.log.exception("Failure during getCommand:") msg = 'Internal streaming error' self.request.sendall(msg.encode("utf-8")) return - build_uuid = build_uuid.rstrip() - # validate build ID if not re.match("[0-9A-Fa-f]+$", build_uuid): msg = 'Build ID %s is not valid' % build_uuid @@ -182,59 +145,11 @@ class RequestHandler(socketserver.BaseRequestHandler): return False -class CustomThreadingTCPServer(socketserver.ThreadingTCPServer): - ''' - Custom version that allows us to drop privileges after port binding. - ''' - address_family = socket.AF_INET6 +class LogStreamerServer(streamer_utils.CustomThreadingTCPServer): def __init__(self, *args, **kwargs): - self.user = kwargs.pop('user') self.jobdir_root = kwargs.pop('jobdir_root') - # For some reason, setting custom attributes does not work if we - # call the base class __init__ first. Wha?? 
- socketserver.ThreadingTCPServer.__init__(self, *args, **kwargs) - - def change_privs(self): - ''' - Drop our privileges to the zuul user. - ''' - if os.getuid() != 0: - return - pw = pwd.getpwnam(self.user) - os.setgroups([]) - os.setgid(pw.pw_gid) - os.setuid(pw.pw_uid) - os.umask(0o022) - - def server_bind(self): - self.allow_reuse_address = True - socketserver.ThreadingTCPServer.server_bind(self) - if self.user: - self.change_privs() - - def server_close(self): - ''' - Overridden from base class to shutdown the socket immediately. - ''' - try: - self.socket.shutdown(socket.SHUT_RD) - self.socket.close() - except socket.error as e: - # If it's already closed, don't error. - if e.errno == socket.EBADF: - return - raise - - def process_request(self, request, client_address): - ''' - Overridden from the base class to name the thread. - ''' - t = threading.Thread(target=self.process_request_thread, - name='FingerStreamer', - args=(request, client_address)) - t.daemon = self.daemon_threads - t.start() + super(LogStreamerServer, self).__init__(*args, **kwargs) class LogStreamer(object): @@ -242,13 +157,12 @@ class LogStreamer(object): Class implementing log streaming over the finger daemon port. ''' - def __init__(self, user, host, port, jobdir_root): - self.log = logging.getLogger('zuul.lib.LogStreamer') + def __init__(self, host, port, jobdir_root): + self.log = logging.getLogger('zuul.log_streamer') self.log.debug("LogStreamer starting on port %s", port) - self.server = CustomThreadingTCPServer((host, port), - RequestHandler, - user=user, - jobdir_root=jobdir_root) + self.server = LogStreamerServer((host, port), + RequestHandler, + jobdir_root=jobdir_root) # We start the actual serving within a thread so we can return to # the owner. 
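The request framing that the refactored ``getCommand()`` (now shared via ``BaseFingerRequestHandler``) enforces is simple: the finger client sends the build UUID terminated by a newline, the server caps the request at ``MAX_REQUEST_LEN`` bytes, and ``rstrip()`` removes a trailing ``\r`` that some finger clients send. A minimal sketch of that parsing rule, factored into a hypothetical pure function (``parse_finger_request`` is not part of this change; the real ``getCommand()`` additionally polls the socket with a timeout):

```python
# Sketch of the finger request framing used by getCommand():
# the client sends "<build-uuid>\n"; the server reads at most
# MAX_REQUEST_LEN bytes and strips the trailing newline / CR.
MAX_REQUEST_LEN = 1024


def parse_finger_request(buffer: bytes) -> str:
    """Return the build UUID from a raw request buffer, or raise."""
    if len(buffer) >= MAX_REQUEST_LEN:
        raise ValueError("Request too long")
    text = buffer.decode('utf-8')
    newline = text.find('\n')
    if newline <= 0:
        raise ValueError("Incomplete request")
    # rstrip removes any other unnecessary chars (e.g. \r),
    # mirroring the comment in getCommand() above.
    return text[:newline].rstrip()
```

For example, a request of ``b"abc123\r\n"`` parses to ``"abc123"``, which is then validated against the hex-digit build-ID regex before streaming begins.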
diff --git a/zuul/lib/streamer_utils.py b/zuul/lib/streamer_utils.py new file mode 100644 index 000000000..3d2d561b9 --- /dev/null +++ b/zuul/lib/streamer_utils.py @@ -0,0 +1,131 @@ +#!/usr/bin/env python +# Copyright 2017 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +''' +This file contains code common to finger log streaming functionality. +The log streamer process within each executor, the finger gateway service, +and the web interface will all make use of this module. +''' + +import os +import pwd +import select +import socket +import socketserver +import threading +import time + + +class BaseFingerRequestHandler(socketserver.BaseRequestHandler): + ''' + Base class for common methods for handling finger requests. 
+ ''' + + MAX_REQUEST_LEN = 1024 + REQUEST_TIMEOUT = 10 + + def getCommand(self): + poll = select.poll() + bitmask = (select.POLLIN | select.POLLERR | + select.POLLHUP | select.POLLNVAL) + poll.register(self.request, bitmask) + buffer = b'' + ret = None + start = time.time() + while True: + elapsed = time.time() - start + timeout = max(self.REQUEST_TIMEOUT - elapsed, 0) + if not timeout: + raise Exception("Timeout while waiting for input") + for fd, event in poll.poll(timeout): + if event & select.POLLIN: + buffer += self.request.recv(self.MAX_REQUEST_LEN) + else: + raise Exception("Received error event") + if len(buffer) >= self.MAX_REQUEST_LEN: + raise Exception("Request too long") + try: + ret = buffer.decode('utf-8') + x = ret.find('\n') + if x > 0: + # rstrip to remove any other unnecessary chars (e.g. \r) + return ret[:x].rstrip() + except UnicodeDecodeError: + pass + + +class CustomThreadingTCPServer(socketserver.ThreadingTCPServer): + ''' + Custom version that allows us to drop privileges after port binding. + ''' + + address_family = socket.AF_INET6 + + def __init__(self, *args, **kwargs): + self.user = kwargs.pop('user', None) + self.pid_file = kwargs.pop('pid_file', None) + socketserver.ThreadingTCPServer.__init__(self, *args, **kwargs) + + def change_privs(self): + ''' + Drop our privileges to another user. + ''' + if os.getuid() != 0: + return + + pw = pwd.getpwnam(self.user) + + # Change owner on our pid file so it can be removed by us after + # dropping privileges. May not exist if not a daemon. + if self.pid_file and os.path.exists(self.pid_file): + os.chown(self.pid_file, pw.pw_uid, pw.pw_gid) + + os.setgroups([]) + os.setgid(pw.pw_gid) + os.setuid(pw.pw_uid) + os.umask(0o022) + + def server_bind(self): + ''' + Overridden from the base class to allow address reuse and to drop + privileges after binding to the listening socket. 
+ ''' + self.allow_reuse_address = True + socketserver.ThreadingTCPServer.server_bind(self) + if self.user: + self.change_privs() + + def server_close(self): + ''' + Overridden from base class to shutdown the socket immediately. + ''' + try: + self.socket.shutdown(socket.SHUT_RD) + self.socket.close() + except socket.error as e: + # If it's already closed, don't error. + if e.errno == socket.EBADF: + return + raise + + def process_request(self, request, client_address): + ''' + Overridden from the base class to name the thread. + ''' + t = threading.Thread(target=self.process_request_thread, + name='socketserver_Thread', + args=(request, client_address)) + t.daemon = self.daemon_threads + t.start() diff --git a/zuul/manager/__init__.py b/zuul/manager/__init__.py index d205afc23..b8a280fde 100644 --- a/zuul/manager/__init__.py +++ b/zuul/manager/__init__.py @@ -12,9 +12,11 @@ import logging import textwrap +import urllib from zuul import exceptions from zuul import model +from zuul.lib.dependson import find_dependency_headers class DynamicChangeQueueContextManager(object): @@ -343,6 +345,32 @@ class PipelineManager(object): self.dequeueItem(item) self.reportStats(item) + def updateCommitDependencies(self, change, change_queue): + # Search for Depends-On headers and find appropriate changes + self.log.debug(" Updating commit dependencies for %s", change) + change.refresh_deps = False + dependencies = [] + seen = set() + for match in find_dependency_headers(change.message): + self.log.debug(" Found Depends-On header: %s", match) + if match in seen: + continue + seen.add(match) + try: + url = urllib.parse.urlparse(match) + except ValueError: + continue + source = self.sched.connections.getSourceByCanonicalHostname( + url.hostname) + if not source: + continue + self.log.debug(" Found source: %s", source) + dep = source.getChangeByURL(match) + if dep and (not dep.is_merged) and dep not in dependencies: + self.log.debug(" Adding dependency: %s", dep) + 
dependencies.append(dep) + change.commit_needs_changes = dependencies + def provisionNodes(self, item): jobs = item.findJobsToRequest() if not jobs: diff --git a/zuul/manager/dependent.py b/zuul/manager/dependent.py index 5aef45357..20b376d6a 100644 --- a/zuul/manager/dependent.py +++ b/zuul/manager/dependent.py @@ -95,12 +95,29 @@ class DependentPipelineManager(PipelineManager): def enqueueChangesBehind(self, change, quiet, ignore_requirements, change_queue): self.log.debug("Checking for changes needing %s:" % change) - to_enqueue = [] - source = change.project.source if not hasattr(change, 'needed_by_changes'): self.log.debug(" %s does not support dependencies" % type(change)) return - for other_change in change.needed_by_changes: + + # For each project in change_queue, get changes from project.source, then dedup. + sources = set() + for project in change_queue.projects: + sources.add(project.source) + + seen = set(change.needed_by_changes) + needed_by_changes = change.needed_by_changes[:] + for source in sources: + self.log.debug(" Checking source: %s", source) + for c in source.getChangesDependingOn(change, + change_queue.projects): + if c not in seen: + seen.add(c) + needed_by_changes.append(c) + + self.log.debug(" Following changes: %s", needed_by_changes) + + to_enqueue = [] + for other_change in needed_by_changes: with self.getChangeQueue(other_change) as other_change_queue: if other_change_queue != change_queue: self.log.debug(" Change %s in project %s can not be " @@ -108,6 +125,7 @@ (other_change, other_change.project, change_queue)) continue + source = other_change.project.source if source.canMerge(other_change, self.getSubmitAllowNeeds()): self.log.debug(" Change %s needs %s and is ready to merge" % (other_change, change)) @@ -145,10 +163,12 @@ return True def checkForChangesNeededBy(self, change, change_queue): - self.log.debug("Checking for changes needed by %s:" 
% change) - source = change.project.source # Return true if okay to proceed enqueing this change, # false if the change should not be enqueued. + self.log.debug("Checking for changes needed by %s:" % change) + if (hasattr(change, 'commit_needs_changes') and + (change.refresh_deps or change.commit_needs_changes is None)): + self.updateCommitDependencies(change, change_queue) if not hasattr(change, 'needs_changes'): self.log.debug(" %s does not support dependencies" % type(change)) return True @@ -180,7 +200,8 @@ class DependentPipelineManager(PipelineManager): self.log.debug(" Needed change is already ahead " "in the queue") continue - if source.canMerge(needed_change, self.getSubmitAllowNeeds()): + if needed_change.project.source.canMerge( + needed_change, self.getSubmitAllowNeeds()): self.log.debug(" Change %s is needed" % needed_change) if needed_change not in changes_needed: changes_needed.append(needed_change) diff --git a/zuul/manager/independent.py b/zuul/manager/independent.py index 65f5ca070..0c2baf010 100644 --- a/zuul/manager/independent.py +++ b/zuul/manager/independent.py @@ -70,6 +70,9 @@ class IndependentPipelineManager(PipelineManager): self.log.debug("Checking for changes needed by %s:" % change) # Return true if okay to proceed enqueing this change, # false if the change should not be enqueued. 
+ if (hasattr(change, 'commit_needs_changes') and + (change.refresh_deps or change.commit_needs_changes is None)): + self.updateCommitDependencies(change, None) if not hasattr(change, 'needs_changes'): self.log.debug(" %s does not support dependencies" % type(change)) return True diff --git a/zuul/merger/client.py b/zuul/merger/client.py index 2614e5887..c89a6fba8 100644 --- a/zuul/merger/client.py +++ b/zuul/merger/client.py @@ -131,6 +131,15 @@ class MergeClient(object): job = self.submitJob('merger:cat', data, None, precedence) return job + def getFilesChanges(self, connection_name, project_name, branch, + tosha=None, precedence=zuul.model.PRECEDENCE_HIGH): + data = dict(connection=connection_name, + project=project_name, + branch=branch, + tosha=tosha) + job = self.submitJob('merger:fileschanges', data, None, precedence) + return job + def onBuildCompleted(self, job): data = getJobData(job) merged = data.get('merged', False) diff --git a/zuul/merger/merger.py b/zuul/merger/merger.py index 06ec4b2b9..bd4ca58ee 100644 --- a/zuul/merger/merger.py +++ b/zuul/merger/merger.py @@ -314,6 +314,18 @@ class Repo(object): 'utf-8') return ret + def getFilesChanges(self, branch, tosha=None): + repo = self.createRepoObject() + files = set() + head = repo.heads[branch].commit + files.update(set(head.stats.files.keys())) + if tosha: + for cmt in head.iter_parents(): + if cmt.hexsha == tosha: + break + files.update(set(cmt.stats.files.keys())) + return list(files) + def deleteRemote(self, remote): repo = self.createRepoObject() repo.delete_remote(repo.remotes[remote]) @@ -581,3 +593,8 @@ class Merger(object): def getFiles(self, connection_name, project_name, branch, files, dirs=[]): repo = self.getRepo(connection_name, project_name) return repo.getFiles(files, dirs, branch=branch) + + def getFilesChanges(self, connection_name, project_name, branch, + tosha=None): + repo = self.getRepo(connection_name, project_name) + return repo.getFilesChanges(branch, tosha) diff --git 
a/zuul/merger/server.py b/zuul/merger/server.py index 576d41ed5..aa04fc206 100644 --- a/zuul/merger/server.py +++ b/zuul/merger/server.py @@ -81,6 +81,7 @@ class MergeServer(object): self.worker.registerFunction("merger:merge") self.worker.registerFunction("merger:cat") self.worker.registerFunction("merger:refstate") + self.worker.registerFunction("merger:fileschanges") def stop(self): self.log.debug("Stopping") @@ -117,6 +118,9 @@ class MergeServer(object): elif job.name == 'merger:refstate': self.log.debug("Got refstate job: %s" % job.unique) self.refstate(job) + elif job.name == 'merger:fileschanges': + self.log.debug("Got fileschanges job: %s" % job.unique) + self.fileschanges(job) else: self.log.error("Unable to handle job %s" % job.name) job.sendWorkFail() @@ -158,3 +162,12 @@ class MergeServer(object): result = dict(updated=True, files=files) job.sendWorkComplete(json.dumps(result)) + + def fileschanges(self, job): + args = json.loads(job.arguments) + self.merger.updateRepo(args['connection'], args['project']) + files = self.merger.getFilesChanges( + args['connection'], args['project'], args['branch'], args['tosha']) + result = dict(updated=True, + files=files) + job.sendWorkComplete(json.dumps(result)) diff --git a/zuul/model.py b/zuul/model.py index 04df7a80f..29c5a9d7e 100644 --- a/zuul/model.py +++ b/zuul/model.py @@ -384,6 +384,7 @@ class Node(object): self.private_ipv4 = None self.public_ipv6 = None self.connection_port = 22 + self.connection_type = None self._keys = [] self.az = None self.provider = None @@ -641,6 +642,7 @@ class SourceContext(object): self.path = path self.trusted = trusted self.implied_branch_matchers = None + self.implied_branches = None def __str__(self): return '%s/%s@%s' % (self.project, self.path, self.branch) @@ -1358,6 +1360,7 @@ class BuildSet(object): self.unable_to_merge = False self.config_error = None # None or an error message string. 
self.failing_reasons = [] + self.debug_messages = [] self.merge_state = self.NEW self.nodesets = {} # job -> nodeset self.node_requests = {} # job -> reqs @@ -1405,6 +1408,8 @@ class BuildSet(object): build.build_set = self def removeBuild(self, build): + if build.job.name not in self.builds: + return self.tries[build.job.name] += 1 del self.builds[build.job.name] @@ -1522,6 +1527,17 @@ class QueueItem(object): def setReportedResult(self, result): self.current_build_set.result = result + def debug(self, msg, indent=0): + ppc = self.layout.getProjectPipelineConfig(self.change.project, + self.pipeline) + if not ppc.debug: + return + if indent: + indent = ' ' * indent + else: + indent = '' + self.current_build_set.debug_messages.append(indent + msg) + def freezeJobGraph(self): """Find or create actual matching jobs for this item's change and store the resulting job tree.""" @@ -2109,11 +2125,28 @@ class Change(Branch): def __init__(self, project): super(Change, self).__init__(project) self.number = None + # The gitweb url for browsing the change self.url = None + # URIs for this change which may appear in depends-on headers. + # Note this omits the scheme; i.e., is hostname/path. 
+ self.uris = [] self.patchset = None - self.needs_changes = [] - self.needed_by_changes = [] + # Changes that the source determined are needed due to the + # git DAG: + self.git_needs_changes = [] + self.git_needed_by_changes = [] + + # Changes that the source determined are needed by backwards + # compatible processing of Depends-On headers (Gerrit only): + self.compat_needs_changes = [] + self.compat_needed_by_changes = [] + + # Changes that the pipeline manager determined are needed due + # to Depends-On headers (all drivers): + self.commit_needs_changes = None + self.refresh_deps = False + self.is_current_patchset = True self.can_merge = False self.is_merged = False @@ -2122,6 +2155,11 @@ class Change(Branch): self.status = None self.owner = None + # This may be the commit message, or it may be a cover message + # in the case of a PR. Either way, it's the place where we + # look for depends-on headers. + self.message = None + self.source_event = None def _id(self): @@ -2135,8 +2173,18 @@ class Change(Branch): return True return False + @property + def needs_changes(self): + return (self.git_needs_changes + self.compat_needs_changes + + self.commit_needs_changes) + + @property + def needed_by_changes(self): + return (self.git_needed_by_changes + self.compat_needed_by_changes) + def isUpdateOf(self, other): - if ((hasattr(other, 'number') and self.number == other.number) and + if (self.project == other.project and + (hasattr(other, 'number') and self.number == other.number) and (hasattr(other, 'patchset') and self.patchset is not None and other.patchset is not None and @@ -2241,6 +2289,7 @@ class ProjectPipelineConfig(object): def __init__(self): self.job_list = JobList() self.queue_name = None + self.debug = False self.merge_mode = None @@ -2263,7 +2312,7 @@ class TenantProjectConfig(object): class ProjectConfig(object): - # Represents a project cofiguration + # Represents a project configuration def __init__(self, name, source_context=None): self.name = name # 
If this is a template, it will have a source_context, but @@ -2408,7 +2457,7 @@ class UnparsedTenantConfig(object): r.semaphores = copy.deepcopy(self.semaphores) return r - def extend(self, conf, tenant=None): + def extend(self, conf, tenant): if isinstance(conf, UnparsedTenantConfig): self.pragmas.extend(conf.pragmas) self.pipelines.extend(conf.pipelines) @@ -2416,16 +2465,14 @@ self.project_templates.extend(conf.project_templates) for k, v in conf.projects.items(): name = k - # If we have the tenant add the projects to - # the according canonical name instead of the given project - # name. If it is not found, it's ok to add this to the given - # name. We also don't need to throw the + # Add the projects to the corresponding canonical name instead + # of the given project name. If it is not found, it's ok to add + # this to the given name. We also don't need to throw the # ProjectNotFoundException here as semantic validation occurs # later where it will fail then. - if tenant is not None: - trusted, project = tenant.getProject(k) - if project is not None: - name = project.canonical_name + trusted, project = tenant.getProject(k) + if project is not None: + name = project.canonical_name self.projects.setdefault(name, []).extend(v) self.nodesets.extend(conf.nodesets) self.secrets.extend(conf.secrets) @@ -2442,7 +2489,12 @@ raise ConfigItemMultipleKeysError() key, value = list(item.items())[0] if key == 'project': - name = value['name'] + name = value.get('name') + if not name: + # There is no name defined so implicitly add the name + # of the project where it is defined. 
+                    name = value['_source_context'].project.canonical_name
+                    value['name'] = name
                 self.projects.setdefault(name, []).append(value)
             elif key == 'job':
                 self.jobs.append(value)
@@ -2465,6 +2517,8 @@ class UnparsedTenantConfig(object):

 class Layout(object):
     """Holds all of the Pipelines."""

+    log = logging.getLogger("zuul.layout")
+
     def __init__(self, tenant):
         self.uuid = uuid4().hex
         self.tenant = tenant
@@ -2564,7 +2618,8 @@ class Layout(object):
     def addProjectConfig(self, project_config):
         self.project_configs[project_config.name] = project_config

-    def collectJobs(self, jobname, change, path=None, jobs=None, stack=None):
+    def collectJobs(self, item, jobname, change, path=None, jobs=None,
+                    stack=None):
         if stack is None:
             stack = []
         if jobs is None:
@@ -2573,9 +2628,20 @@ class Layout(object):
             path = []
         path.append(jobname)
         matched = False
+        indent = len(path) + 1
+        item.debug("Collecting job variants for {jobname}".format(
+            jobname=jobname), indent=indent)
         for variant in self.getJobs(jobname):
             if not variant.changeMatches(change):
+                self.log.debug("Variant %s did not match %s", repr(variant),
+                               change)
+                item.debug("Variant {variant} did not match".format(
+                    variant=repr(variant)), indent=indent)
                 continue
+            else:
+                self.log.debug("Variant %s matched %s", repr(variant), change)
+                item.debug("Variant {variant} matched".format(
+                    variant=repr(variant)), indent=indent)
             if not variant.isBase():
                 parent = variant.parent
                 if not jobs and parent is None:
@@ -2585,27 +2651,38 @@ class Layout(object):
             if parent and parent not in path:
                 if parent in stack:
                     raise Exception("Dependency cycle in jobs: %s" % stack)
-                self.collectJobs(parent, change, path, jobs, stack + [jobname])
+                self.collectJobs(item, parent, change, path, jobs,
+                                 stack + [jobname])
             matched = True
             jobs.append(variant)
         if not matched:
+            self.log.debug("No matching parents for job %s and change %s",
+                           jobname, change)
+            item.debug("No matching parent for {jobname}".format(
+                jobname=repr(jobname)), indent=indent)
            raise NoMatchingParentError()
        return jobs

    def _createJobGraph(self, item, job_list, job_graph):
        change = item.change
        pipeline = item.pipeline
+        item.debug("Freezing job graph")
        for jobname in job_list.jobs:
            # This is the final job we are constructing
            frozen_job = None
+            self.log.debug("Collecting jobs %s for %s", jobname, change)
+            item.debug("Freezing job {jobname}".format(
+                jobname=jobname), indent=1)
            try:
-                variants = self.collectJobs(jobname, change)
+                variants = self.collectJobs(item, jobname, change)
            except NoMatchingParentError:
                variants = None
            if not variants:
                # A change must match at least one defined job variant
                # (that is to say that it must match more than just
                # the job that is defined in the tree).
+                item.debug("No matching variants for {jobname}".format(
+                    jobname=jobname), indent=2)
                continue
            for variant in variants:
                if frozen_job is None:
@@ -2622,9 +2699,20 @@ class Layout(object):
                if variant.changeMatches(change):
                    frozen_job.applyVariant(variant)
                    matched = True
+                    self.log.debug("Pipeline variant %s matched %s",
+                                   repr(variant), change)
+                    item.debug("Pipeline variant {variant} matched".format(
+                        variant=repr(variant)), indent=2)
+                else:
+                    self.log.debug("Pipeline variant %s did not match %s",
+                                   repr(variant), change)
+                    item.debug("Pipeline variant {variant} did not match".
+                               format(variant=repr(variant)), indent=2)
            if not matched:
                # A change must match at least one project pipeline
                # job variant.
+                item.debug("No matching pipeline variants for {jobname}".
+                           format(jobname=jobname), indent=2)
                 continue
             if (frozen_job.allowed_projects and
                 change.project.name not in frozen_job.allowed_projects):
diff --git a/zuul/reporter/__init__.py b/zuul/reporter/__init__.py
index 49181a77f..1bff5cb94 100644
--- a/zuul/reporter/__init__.py
+++ b/zuul/reporter/__init__.py
@@ -64,6 +64,10 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
            a reporter taking free-form text."""
         ret = self._getFormatter()(item, with_jobs)

+        if item.current_build_set.debug_messages:
+            debug = '\n '.join(item.current_build_set.debug_messages)
+            ret += '\nDebug information:\n ' + debug + '\n'
+
         if item.pipeline.footer_message:
             ret += '\n' + item.pipeline.footer_message

@@ -105,12 +109,10 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
         else:
             return self._formatItemReport(item)

-    def _formatItemReportJobs(self, item):
-        # Return the list of jobs portion of the report
-        ret = ''
-
+    def _getItemReportJobsFields(self, item):
+        # Extract the report elements from an item
         config = self.connection.sched.config
-
+        jobs_fields = []
         for job in item.getJobs():
             build = item.current_build_set.getBuild(job.name)
             (result, url) = item.formatJobResult(job)
@@ -143,6 +145,13 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
             else:
                 error = ''
             name = job.name + ' '
-            ret += '- %s%s : %s%s%s%s\n' % (name, url, result, error,
-                                            elapsed, voting)
+            jobs_fields.append((name, url, result, error, elapsed, voting))
+        return jobs_fields
+
+    def _formatItemReportJobs(self, item):
+        # Return the list of jobs portion of the report
+        ret = ''
+        jobs_fields = self._getItemReportJobsFields(item)
+        for job_fields in jobs_fields:
+            ret += '- %s%s : %s%s%s%s\n' % job_fields
         return ret
diff --git a/zuul/rpclistener.py b/zuul/rpclistener.py
index d40505e00..e5016dfab 100644
--- a/zuul/rpclistener.py
+++ b/zuul/rpclistener.py
@@ -303,8 +303,7 @@ class RPCListener(object):

     def handle_key_get(self, job):
         args = json.loads(job.arguments)
-        source_name, project_name = args.get("source"), args.get("project")
-        source = self.sched.connections.getSource(source_name)
-        project = source.getProject(project_name)
+        tenant = self.sched.abide.tenants.get(args.get("tenant"))
+        (trusted, project) = tenant.getProject(args.get("project"))
         job.sendWorkComplete(
             encryption.serialize_rsa_public_key(project.public_key))
diff --git a/zuul/scheduler.py b/zuul/scheduler.py
index b978979d3..a2e3b6eb1 100644
--- a/zuul/scheduler.py
+++ b/zuul/scheduler.py
@@ -823,8 +823,7 @@ class Scheduler(threading.Thread):
         if self.statsd:
             self.log.debug("Statsd enabled")
         else:
-            self.log.debug("Statsd disabled because python statsd "
-                           "package not found")
+            self.log.debug("Statsd not configured")
         while True:
             self.log.debug("Run handler sleeping")
             self.wake_event.wait()
@@ -1089,3 +1088,25 @@ class Scheduler(threading.Thread):
             for pipeline in tenant.layout.pipelines.values():
                 pipelines.append(pipeline.formatStatusJSON(websocket_url))
         return json.dumps(data)
+
+    def onChangeUpdated(self, change):
+        """Remove stale dependency references on change update.
+
+        When a change is updated with a new patchset, other changes in
+        the system may still have a reference to the old patchset in
+        their dependencies.  Search for those (across all sources) and
+        mark that their dependencies are out of date.  This will cause
+        them to be refreshed the next time the queue processor
+        examines them.
+        """
+
+        self.log.debug("Change %s has been updated, clearing dependent "
+                       "change caches", change)
+        for source in self.connections.getSources():
+            for other_change in source.getCachedChanges():
+                if other_change.commit_needs_changes is None:
+                    continue
+                for dep in other_change.commit_needs_changes:
+                    if change.isUpdateOf(dep):
+                        other_change.refresh_deps = True
+        change.refresh_deps = True
diff --git a/zuul/source/__init__.py b/zuul/source/__init__.py
index 0396aff49..00dfc9c3a 100644
--- a/zuul/source/__init__.py
+++ b/zuul/source/__init__.py
@@ -52,6 +52,29 @@ class BaseSource(object, metaclass=abc.ABCMeta):
         """Get the change representing an event."""

     @abc.abstractmethod
+    def getChangeByURL(self, url):
+        """Get the change corresponding to the supplied URL.
+
+        The URL may or may not correspond to this source; if it doesn't,
+        or there is no change at that URL, return None.
+
+        """
+
+    @abc.abstractmethod
+    def getChangesDependingOn(self, change, projects):
+        """Return changes which depend on changes at the supplied URIs.
+
+        Search this source for changes which depend on the supplied
+        change.  Generally the Change.uris attribute should be used to
+        perform the search, as it contains a list of URLs without the
+        scheme which represent a single change.
+
+        If the projects argument is None, search across all known
+        projects.  If it is supplied, the search may optionally be
+        restricted to only those projects.
+        """
+
+    @abc.abstractmethod
     def getProjectOpenChanges(self, project):
         """Get the open changes for a project."""
diff --git a/zuul/web/__init__.py b/zuul/web/__init__.py
index e4a361205..a98a6c80c 100755
--- a/zuul/web/__init__.py
+++ b/zuul/web/__init__.py
@@ -42,17 +42,6 @@ class LogStreamingHandler(object):
     def setEventLoop(self, event_loop):
         self.event_loop = event_loop

-    def _getPortLocation(self, job_uuid):
-        """
-        Query Gearman for the executor running the given job.
-
-        :param str job_uuid: The job UUID we want to stream.
-        """
-        # TODO: Fetch the entire list of uuid/file/server/ports once and
-        #       share that, and fetch a new list on cache misses perhaps?
-        ret = self.rpc.get_job_log_stream_address(job_uuid)
-        return ret
-
     async def _fingerClient(self, ws, server, port, job_uuid):
         """
         Create a client to connect to the finger streamer and pull results.
@@ -94,7 +83,10 @@ class LogStreamingHandler(object):

             # Schedule the blocking gearman work in an Executor
             gear_task = self.event_loop.run_in_executor(
-                None, self._getPortLocation, request['uuid'])
+                None,
+                self.rpc.get_job_log_stream_address,
+                request['uuid'],
+            )

             try:
                 port_location = await asyncio.wait_for(gear_task, 10)
@@ -190,12 +182,14 @@ class GearmanHandler(object):
     def job_list(self, request):
         tenant = request.match_info["tenant"]
         job = self.rpc.submitJob('zuul:job_list', {'tenant': tenant})
-        return web.json_response(json.loads(job.data[0]))
+        resp = web.json_response(json.loads(job.data[0]))
+        resp.headers['Access-Control-Allow-Origin'] = '*'
+        return resp

     def key_get(self, request):
-        source = request.match_info["source"]
+        tenant = request.match_info["tenant"]
         project = request.match_info["project"]
-        job = self.rpc.submitJob('zuul:key_get', {'source': source,
+        job = self.rpc.submitJob('zuul:key_get', {'tenant': tenant,
                                                   'project': project})
         return web.Response(body=job.data[0])

@@ -290,6 +284,7 @@ class SqlHandler(object):
                     raise ValueError("Unknown parameter %s" % k)
             data = self.get_builds(args)
             resp = web.json_response(data)
+            resp.headers['Access-Control-Allow-Origin'] = '*'
         except Exception as e:
             self.log.exception("Jobs exception:")
             resp = web.json_response({'error_description': 'Internal error'},
@@ -310,6 +305,7 @@ class ZuulWeb(object):
         self.listen_port = listen_port
         self.event_loop = None
         self.term = None
+        self.server = None
         self.static_cache_expiry = static_cache_expiry
         # instanciate handlers
         self.rpc = zuul.rpcclient.RPCClient(gear_server, gear_port,
@@ -375,7 +371,7 @@ class ZuulWeb(object):
             ('GET', '/{tenant}/status.json', self._handleStatusRequest),
             ('GET', '/{tenant}/jobs.json', self._handleJobsRequest),
             ('GET', '/{tenant}/console-stream', self._handleWebsocket),
-            ('GET', '/{source}/{project}.pub', self._handleKeyRequest),
+            ('GET', '/{tenant}/{project:.*}.pub', self._handleKeyRequest),
             ('GET', '/{tenant}/status.html', self._handleStaticRequest),
             ('GET', '/{tenant}/jobs.html', self._handleStaticRequest),
             ('GET', '/{tenant}/stream.html', self._handleStaticRequest),
diff --git a/zuul/web/static/builds.html b/zuul/web/static/builds.html
index 921c9e2a5..ace1e0a8f 100644
--- a/zuul/web/static/builds.html
+++ b/zuul/web/static/builds.html
@@ -17,10 +17,10 @@ under the License.
 <html>
 <head>
     <title>Zuul Builds</title>
-    <link rel="stylesheet" href="/static/bootstrap/css/bootstrap.min.css">
+    <link rel="stylesheet" href="../static/bootstrap/css/bootstrap.min.css">
     <link rel="stylesheet" href="../static/styles/zuul.css" />
-    <script src="/static/js/jquery.min.js"></script>
-    <script src="/static/js/angular.min.js"></script>
+    <script src="../static/js/jquery.min.js"></script>
+    <script src="../static/js/angular.min.js"></script>
     <script src="../static/javascripts/zuul.angular.js"></script>
 </head>
 <body ng-app="zuulBuilds" ng-controller="mainController"><div class="container-fluid">
@@ -50,33 +50,25 @@ under the License.
       <table class="table table-hover table-condensed">
         <thead>
           <tr>
-            <th width="20px">id</th>
             <th>Job</th>
             <th>Project</th>
             <th>Pipeline</th>
             <th>Change</th>
-            <th>Newrev</th>
             <th>Duration</th>
             <th>Log url</th>
-            <th>Node name</th>
             <th>Start time</th>
-            <th>End time</th>
             <th>Result</th>
           </tr>
         </thead>
         <tbody>
           <tr ng-repeat="build in builds" ng-class="rowClass(build)">
-            <td>{{ build.id }}</td>
             <td>{{ build.job_name }}</td>
             <td>{{ build.project }}</td>
             <td>{{ build.pipeline }}</td>
             <td><a href="{{ build.ref_url }}" target="_self">change</a></td>
-            <td>{{ build.newrev }}</td>
             <td>{{ build.duration }} seconds</td>
             <td><a ng-if="build.log_url" href="{{ build.log_url }}" target="_self">logs</a></td>
-            <td>{{ build.node_name }}</td>
             <td>{{ build.start_time }}</td>
-            <td>{{ build.end_time }}</td>
             <td>{{ build.result }}</td>
           </tr>
         </tbody>
diff --git a/zuul/web/static/index.html b/zuul/web/static/index.html
index 6747e66ff..d20a1ea5f 100644
--- a/zuul/web/static/index.html
+++ b/zuul/web/static/index.html
@@ -17,10 +17,10 @@ under the License.
 <html>
 <head>
     <title>Zuul Tenants</title>
-    <link rel="stylesheet" href="/static/bootstrap/css/bootstrap.min.css">
+    <link rel="stylesheet" href="static/bootstrap/css/bootstrap.min.css">
     <link rel="stylesheet" href="static/styles/zuul.css" />
-    <script src="/static/js/jquery.min.js"></script>
-    <script src="/static/js/angular.min.js"></script>
+    <script src="static/js/jquery.min.js"></script>
+    <script src="static/js/angular.min.js"></script>
     <script src="static/javascripts/zuul.angular.js"></script>
 </head>
 <body ng-app="zuulTenants" ng-controller="mainController"><div class="container-fluid">
diff --git a/zuul/web/static/javascripts/zuul.app.js b/zuul/web/static/javascripts/zuul.app.js
index 7ceb2dda7..bf90a4db7 100644
--- a/zuul/web/static/javascripts/zuul.app.js
+++ b/zuul/web/static/javascripts/zuul.app.js
@@ -28,8 +28,6 @@ function zuul_build_dom($, container) {
     // Build a default-looking DOM
     var default_layout = '<div class="container">'
-        + '<h1>Zuul Status</h1>'
-        + '<p>Real-time status monitor of Zuul, the pipeline manager between Gerrit and Workers.</p>'
         + '<div class="zuul-container" id="zuul-container">'
         + '<div style="display: none;" class="alert" id="zuul_msg"></div>'
         + '<button class="btn pull-right zuul-spinner">updating <span class="glyphicon glyphicon-refresh"></span></button>'
diff --git a/zuul/web/static/jobs.html b/zuul/web/static/jobs.html
index 694672372..b27d8827a 100644
--- a/zuul/web/static/jobs.html
+++ b/zuul/web/static/jobs.html
@@ -17,10 +17,10 @@ under the License.
 <html>
 <head>
     <title>Zuul Builds</title>
-    <link rel="stylesheet" href="/static/bootstrap/css/bootstrap.min.css">
+    <link rel="stylesheet" href="../static/bootstrap/css/bootstrap.min.css">
     <link rel="stylesheet" href="../static/styles/zuul.css" />
-    <script src="/static/js/jquery.min.js"></script>
-    <script src="/static/js/angular.min.js"></script>
+    <script src="../static/js/jquery.min.js"></script>
+    <script src="../static/js/angular.min.js"></script>
     <script src="../static/javascripts/zuul.angular.js"></script>
 </head>
 <body ng-app="zuulJobs" ng-controller="mainController"><div class="container-fluid">
diff --git a/zuul/web/static/status.html b/zuul/web/static/status.html
index 7cb9536b3..8471fd171 100644
--- a/zuul/web/static/status.html
+++ b/zuul/web/static/status.html
@@ -19,11 +19,11 @@ under the License.
 <html>
 <head>
     <title>Zuul Status</title>
-    <link rel="stylesheet" href="/static/bootstrap/css/bootstrap.min.css">
+    <link rel="stylesheet" href="../static/bootstrap/css/bootstrap.min.css">
     <link rel="stylesheet" href="../static/styles/zuul.css" />
-    <script src="/static/js/jquery.min.js"></script>
-    <script src="/static/js/jquery-visibility.min.js"></script>
-    <script src="/static/js/jquery.graphite.min.js"></script>
+    <script src="../static/js/jquery.min.js"></script>
+    <script src="../static/js/jquery-visibility.min.js"></script>
+    <script src="../static/js/jquery.graphite.min.js"></script>
     <script src="../static/javascripts/jquery.zuul.js"></script>
     <script src="../static/javascripts/zuul.app.js"></script>
 </head>
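The `zuul/model.py` hunks in this change split a change's dependencies into three per-origin lists (git DAG, backwards-compatible Gerrit `Depends-On`, and pipeline-manager `Depends-On`) and recombine them through read-only properties. The following is a minimal standalone sketch of that shape, not the real `zuul.model.Change` class (which carries many more attributes); the `or []` guard is an addition for this sketch, since `commit_needs_changes` starts as `None` until the pipeline manager populates it.

```python
class Change:
    """Illustrative stand-in for the dependency fields added by this change."""

    def __init__(self):
        # Dependencies the source derived from the git DAG:
        self.git_needs_changes = []
        self.git_needed_by_changes = []
        # Dependencies from backwards-compatible Depends-On
        # processing (Gerrit only):
        self.compat_needs_changes = []
        self.compat_needed_by_changes = []
        # Dependencies the pipeline manager derived from Depends-On
        # headers (all drivers); None until computed:
        self.commit_needs_changes = None
        # Set when a dependency is updated, so the queue processor
        # refreshes this change on its next pass:
        self.refresh_deps = False

    @property
    def needs_changes(self):
        # Callers see one merged list regardless of where each
        # dependency was discovered.
        return (self.git_needs_changes + self.compat_needs_changes +
                (self.commit_needs_changes or []))

    @property
    def needed_by_changes(self):
        return self.git_needed_by_changes + self.compat_needed_by_changes
```

Keeping the lists separate lets `Scheduler.onChangeUpdated` invalidate only the pipeline-manager-derived entries (`commit_needs_changes`) while code that previously read a single `needs_changes` attribute keeps working through the property.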