| Commit message (Collapse) | Author | Age | Files | Lines |
SQLAlchemy 2.0 has gotten a lot more picky about transactions. In
addition to needing to explicitly set up transactions under SQLAlchemy
2.0, it seems alembic's get_current_revision() call cannot be in the same
transaction as the alembic migrate/stamp calls with MySQL 8.0.
In particular the get_current_revision call seems to get a
SHARED_HIGH_PRIO lock on the alembic_version table. This prevents the
migrate/stamp calls from creating the alembic_version table as this
requires an EXCLUSIVE lock. The SHARED_HIGH_PRIO lock appears to be in
place as long as the get_current_revision transaction is active. To fix
this, we simplify our migration tooling and put get_current_revision in a
transaction block of its own. The rest of our migrate function calls
into functions that set up new transactions, so it doesn't need to
be in this block.
Change-Id: Ic71ddf1968610784cef72c4634ccec3a18855a0e
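The pattern can be sketched as follows (a hedged illustration on in-memory SQLite with hypothetical get_current_revision()/migrate() helpers; the real code goes through alembic's API): the revision probe gets a transaction block of its own, so any lock it takes on the version table is released before the migration opens a new transaction for its DDL.

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

def get_current_revision(engine):
    # Analogue of alembic's get_current_revision(): the read runs in a
    # short-lived transaction of its own, so its shared lock on the
    # version table cannot block the DDL issued by the migration.
    with engine.connect() as conn:
        try:
            return conn.execute(
                text("SELECT version_num FROM alembic_version")).scalar()
        except Exception:
            return None  # fresh database: the table does not exist yet

def migrate(engine):
    rev = get_current_revision(engine)  # transaction opened and closed here
    with engine.begin() as conn:        # DDL runs in a new transaction
        if rev is None:
            conn.execute(text(
                "CREATE TABLE alembic_version (version_num VARCHAR(32))"))
            conn.execute(text(
                "INSERT INTO alembic_version VALUES ('head')"))

migrate(engine)
```

SQLite does not exhibit the MySQL 8.0 locking behavior, but the transaction boundaries are the point: each `with` block is its own transaction.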
This addresses SQLAlchemy's "removed in 2.0" warnings. Now that SQLAlchemy
2.0 has been released we can see that we are not yet compatible. A good first
step in adding compatibility is fixing warnings in 1.4.
In particular there are four types of warnings we fix here:
1. Using raw strings in conn.execute() calls. We need to use the text()
construct instead.
2. Passing a list of items to select when doing select queries. Instead
we need to pass things as normal posargs.
3. Accessing row result items as if the row is a dict. This is no
longer possible without first going through the row._mapping system.
Instead, we can access items as normal object attributes.
4. You must now use sqlalchemy.inspect() on a connectable to create an
Inspector object rather than instantiating it directly.
Finally we set up alembic's engine creation to run with future 2.0
behavior now that the warnings are cleared up. This appears to have
already been done for the main zuul application.
Change-Id: I5475e39bd93d71cd1106ec6d3a5423ea2dd51859
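The four fixes can be sketched together (a hedged illustration on an in-memory SQLite database with a toy table, not Zuul's actual schema):

```python
from sqlalchemy import (Column, Integer, MetaData, Table,
                        create_engine, inspect, select, text)

engine = create_engine("sqlite://")
metadata = MetaData()
build = Table("zuul_build", metadata, Column("id", Integer, primary_key=True))
metadata.create_all(engine)

with engine.begin() as conn:
    # 1. Raw SQL strings must be wrapped in the text() construct.
    conn.execute(text("INSERT INTO zuul_build (id) VALUES (1)"))
    # 2. select() takes columns as normal positional args, not a list.
    row = conn.execute(select(build.c.id)).one()
    # 3. Rows are no longer dicts; use attribute access (or row._mapping).
    build_id = row.id

# 4. Inspector objects come from sqlalchemy.inspect(), not the constructor.
inspector = inspect(engine)
tables = inspector.get_table_names()
```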
When a build is paused or resumed, we now store this information on the
build together with the event time. Instead of additional attributes for
each timestamp, we add an "event" list attribute to the build which can
also be used for other events in the future.
The events are stored in the SQL database and added to the MQTT payload
so the information can be used by the zuul-web UI (e.g. in the "build
times" gantt chart) or provided to external services.
Change-Id: I789b4f69faf96e3b8fd090a2e389df3bb9efd602
We have had an on-and-off relationship with skipped builds in the
database. Generally we have attempted to exclude them from the db,
but we have occasionally (accidentally?) included them. The status
quo is that builds with a result of SKIPPED (as well as several
other results which don't come from the executor) are not recorded
in the database.
With a greater interest in being able to determine which jobs ran
or did not run for a change after the fact, this change deliberately
adds all builds (whether they touch an executor or not, whether
real or not) to the database. This means that anything that could
potentially show up on the status page or in a code-review report
will be in the database, and can therefore be seen in the web UI.
It is still the case that we are not actually interested in seeing
a page full of SKIPPED builds when we visit the "Builds" tab in
the web ui (which is the principal reason we have not included them
in the database so far). To address this, we set the default query
in the builds tab to exclude skipped builds (it is easy to add other
types of builds to exclude in the future if we wish). If a user
then specifies a query filter to *include* specific results, we drop
the exclusion from the query string. This allows for the expected
behavior of not showing SKIPPED by default, then as specific results
are added to the filter, we show only those, and if the user selects
that they want to see SKIPPED, they will then be included.
On the buildset page, we add a switch similar to the current "show
retried jobs" switch that selects whether skipped builds in a buildset
should be displayed (again, it hides them by default).
Change-Id: I1835965101299bc7a95c952e99f6b0b095def085
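The default-exclusion rule can be sketched as a small query-building helper (the function and key names here are illustrative, not the actual zuul-web code):

```python
def build_filters(requested_results=None, excluded_by_default=("SKIPPED",)):
    # If the user filters on specific results, show exactly those
    # (including SKIPPED if asked for); otherwise apply the default
    # exclusion so the Builds tab is not a page full of SKIPPED rows.
    if requested_results:
        return {"result": list(requested_results)}
    return {"exclude_result": list(excluded_by_default)}
```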
When a build result arrives for a non-current buildset we should skip
the reporting as we can no longer create the reference to the buildset.
Traceback (most recent call last):
File "/opt/zuul/lib/python3.10/site-packages/zuul/scheduler.py", line 2654, in _doBuildCompletedEvent
self.sql.reportBuildEnd(
File "/opt/zuul/lib/python3.10/site-packages/zuul/driver/sql/sqlreporter.py", line 143, in reportBuildEnd
db_build = self._createBuild(db, build)
File "/opt/zuul/lib/python3.10/site-packages/zuul/driver/sql/sqlreporter.py", line 180, in _createBuild
tenant=buildset.item.pipeline.tenant.name, uuid=buildset.uuid)
AttributeError: 'NoneType' object has no attribute 'item'
Change-Id: Iccbe9ab8212fbbfa21cb29b84a17e03ca221d7bd
This adds a zuul-admin command which allows operators to delete old
database entries.
Change-Id: I4e277a07394aa4852a563f4c9cdc39b5801ab4ba
This corrects two shortcomings in the database handling:
1) If we are unable to create a build or buildset and a later operation
attempts to update that build or buildset, it will likely fail, possibly
aborting the pipeline processing run.
2) If a transient db error occurs, we may miss reporting data to the db.
To correct these, this change does the following:
1) Creates missing builds or buildsets at any point we try to update them.
2) Wraps every write operation in a retry loop which attempts to write to
the database 3 times with a 5 second delay. The retry loop is just
outside the transaction block, so the entire transaction will have been
aborted and we will start again.
3) If the retry loop fails, we log the exception but do not raise it
to the level of the pipeline processor.
Change-Id: I364010fada8cbdb160fc41c5ef5e25576a654b90
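The retry behavior described in point 2 and 3 might look like this sketch (attempt count and delay come from the text above; the function names are illustrative):

```python
import logging
import time

log = logging.getLogger("zuul.sql")

def retry_write(operation, attempts=3, delay=5):
    # The loop sits just outside the transaction: `operation` opens and
    # commits its own transaction, so a failed attempt is fully aborted
    # before we start again.
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception:
            log.exception("DB write failed (attempt %s/%s)", attempt, attempts)
            if attempt < attempts:
                time.sleep(delay)
    # After the final attempt we log but do not raise, so a database
    # outage cannot abort the pipeline processing run.
    return None
```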
This is a prelude to a change which will report a distinct buildset result
to the database if the upstream code review system is unable to merge a change.
Currently it is reported as MERGER_FAILURE which makes it difficult to
distinguish from merge conflicts.
Essentially, the two states we're interested in are when Zuul's merger is
unable to prepare a git checkout of the change (99% of the time, this is
a merge conflict). This will be known as MERGE_CONFLICT now.
The second state is when Zuul asks Gerrit/Github/etc to submit/merge a change
and the remote system is unable (or refuses) to do so. In a future change,
that will be reported as MERGE_FAILURE.
To avoid confusion and use names which better reflect the situation, this change
performs the rename to MERGE_CONFLICT.
Because there are pipeline configuration options tied to the MERGER_FAILURE
status (which start with 'merge-failure') they are also renamed to 'merge-conflict'.
The old names are supported for backwards compatibility.
A SQL migration takes care of updating values in the database.
The upgrade procedure is noted as being special because of the db value updates.
If an operator doesn't follow the recommended procedure, however, the consequences
are minimal (builds which won't be easily queried in the web ui; that can be
manually corrected if desired).
A model API change is not needed since the only place where we receive this value
from ZK can be updated to accept both values.
Change-Id: I3050409ed68805c748efe7a176b9755fa281536f
A recent change added extra columns to the buildset table. The
end time of the last job is guaranteed to be at least the start time
of the first job. However, if there are queue items in-flight
during the upgrade, those buildsets will not have the first job
timestamps initialized. This produces the following traceback:
Exception in pipeline processing:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/zuul/scheduler.py", line 1977, in _process_pipeline
while not self._stopped and pipeline.manager.processQueue():
File "/usr/local/lib/python3.8/site-packages/zuul/manager/__init__.py", line 1563, in processQueue
item_changed, nnfi = self._processOneItem(
File "/usr/local/lib/python3.8/site-packages/zuul/manager/__init__.py", line 1498, in _processOneItem
self.reportItem(item)
File "/usr/local/lib/python3.8/site-packages/zuul/manager/__init__.py", line 1793, in reportItem
reported=not self._reportItem(item))
File "/usr/local/lib/python3.8/site-packages/zuul/manager/__init__.py", line 1925, in _reportItem
self.sql.reportBuildsetEnd(item.current_build_set, action, final=True)
File "/usr/local/lib/python3.8/site-packages/zuul/driver/sql/sqlreporter.py", line 91, in reportBuildsetEnd
if build.end_time and build.end_time > end_time:
TypeError: '>' not supported between instances of 'datetime.datetime' and 'NoneType'
This change protects against that error; if the first build start
time is None, then we won't perform the comparison and the last
build end time will also be None.
Change-Id: I78840dc58cd950ba85b0dcf108fc0a659b051e95
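The guard can be sketched like this (builds simplified to a dataclass; in the real code this logic lives in reportBuildsetEnd):

```python
import datetime
from dataclasses import dataclass
from typing import Optional

@dataclass
class Build:
    end_time: Optional[datetime.datetime]

def last_build_end_time(first_start_time, builds):
    # If the first start time was never recorded (an in-flight buildset
    # from before the upgrade), skip the comparison entirely and leave
    # the last end time as None as well.
    if first_start_time is None:
        return None
    end_time = first_start_time
    for build in builds:
        if build.end_time and build.end_time > end_time:
            end_time = build.end_time
    return end_time
```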
Add two columns to the buildset table in the database: the timestamp
of the start of the first build and the end of the last build. These
are calculated from the builds in the webui buildset page, but they
are not available in the buildset listing without performing
a table join on the server side.
To keep the buildset query simple and fast, this adds the columns to
the buildset table (which is a minor data duplication).
Return the new values in the rest api.
Change-Id: Ie162e414ed5cf09e9dc8f5c92e07b80934592fdf
Currently, NODE_FAILURE results are not reported via SQL in case the
node request failed. This is because those results are directly
evaluated in the pipeline manager before the build is even started.
Thus, there are no build result events sent by the executor and the
"normal" build result event handling is skipped for those builds.
As those build results are not stored in the database they are also not
visible in the UI. Thus, there could be cases where a buildset failed
because of a NODE_FAILURE, but all builds that are shown were
successful.
To fix this, we could directly call the SQL reporter when the
NODE_FAILURE is evaluated in the pipeline manager.
Also adapt the reportBuildEnd() method in the SQL reporter so that the
build entry is created in case it's not present. This could be the case
if the build started event was not processed or did not happen at all
(e.g. for the NODE_FAILURE results or any result that is created via a
"fake build" directly in the pipeline manager).
Change-Id: I2603a7ccf26a41e6747c9276cb37c9b0fd668f75
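The create-on-end-report behavior can be sketched with plain sqlite3 as a stand-in for the real SQLAlchemy code (table and function names are simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE zuul_build (uuid TEXT PRIMARY KEY, result TEXT)")

def report_build_end(conn, uuid, result):
    # Normally the row was created by the build-start event; for "fake
    # builds" such as NODE_FAILURE there was no start event, so create
    # the entry on the end report instead of failing.
    cur = conn.execute(
        "UPDATE zuul_build SET result = ? WHERE uuid = ?", (result, uuid))
    if cur.rowcount == 0:
        conn.execute(
            "INSERT INTO zuul_build (uuid, result) VALUES (?, ?)",
            (uuid, result))
    conn.commit()

report_build_end(conn, "b1", "NODE_FAILURE")
```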
The overall duration is, from a user (developer) point of view, how much
time it takes from the trigger of the build (e.g. a push, a comment,
etc.) until the last build is finished.
It also takes into account the time spent waiting in the queue, launching
nodes, preparing the nodes, etc.
Technically it measures between the event timestamp and the end time of
the last build in the build set.
This duration reflects how much time the user needs to wait.
Change-Id: I253d023146c696d0372197e599e0df3c217ef344
This is a framework for making upgrades to the ZooKeeper data model
in a manner that can support a rolling Zuul system upgrade.
Change-Id: Iff09c95878420e19234908c2a937e9444832a6ec
Ensure that during the startup of multiple schedulers or web instances
in parallel, only one at a time performs the migration for a SQL
connection.
Change-Id: I734bd76dde16c52cd76ea93e44a0fc6e7c7855f1
Allow filtering searches per primary index; i.e. return only
builds or buildsets whose primary index key is greater than idx_min
or lower than idx_max. This is expected to increase query speed
compared to using the offset argument when it is possible to do
so, since "offset" requires the database to sift through all results until
the offset is reached.
Change-Id: I420d71d7c62dad6d118310525e97b4a546f05f99
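The difference can be sketched with plain sqlite3 (toy schema; the real implementation builds the query with SQLAlchemy): filtering on the primary index lets the database seek directly to the requested id range, whereas OFFSET scans and discards every earlier row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE zuul_build (id INTEGER PRIMARY KEY, result TEXT)")
conn.executemany("INSERT INTO zuul_build (id, result) VALUES (?, ?)",
                 [(i, "SUCCESS") for i in range(1, 101)])

def builds(conn, idx_min=None, idx_max=None, limit=5):
    # Keyset-style filtering: bound the primary key range instead of
    # paginating with OFFSET.
    query = "SELECT id FROM zuul_build"
    clauses, args = [], []
    if idx_min is not None:
        clauses.append("id >= ?")
        args.append(idx_min)
    if idx_max is not None:
        clauses.append("id <= ?")
        args.append(idx_max)
    if clauses:
        query += " WHERE " + " AND ".join(clauses)
    query += " ORDER BY id DESC LIMIT ?"
    args.append(limit)
    return [row[0] for row in conn.execute(query, args)]
```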
This is an early preparation step for removing the RPC calls between
zuul-web and the scheduler.
In order to do so we must initialize the ConfigLoader in zuul-web which
requires all connections to be available. Therefore, this change ensures
that we can load all connections in zuul-web without providing a
scheduler instance.
To avoid unnecessary traffic from a zuul-web instance the onLoad()
method initializes the change cache only if a scheduler instance is
available on the connection.
Change-Id: I3c1d2995e81e17763ae3454076ab2f5ce87ab1fc
We can obtain the same information from the SQL database now, so
do that and remove the filesystem-based time database. This will
help support multiple schedulers (as they will all have access to
the same data).
Nothing in the scheduler uses the state directory anymore, so clean
up the docs around that. The executor still has a state dir where
it may install ansible-related files.
The SQL query was rather slow in practice because it created a
temporary table since it was filtering mostly by buildset fields
then sorting by build.id. We can sort by buildset.id and get nearly
the same results (equally valid from our perspective) much faster.
In some configurations under postgres, we may see a performance
variation in the run-time of the query. In order to keep the time
estimation out of the critical path of job launches, we perform
the SQL query asynchronously. We may be able to remove this added
bit of complexity once the scale-out-scheduler work is finished
(and/or we further define/restrict our database requirements).
Change-Id: Id3c64be7a05c9edc849e698200411ad436a1334d
The executor client still holds a list of local builds objects which is
used in various places. One use case is to look up necessary
information of the original build when a build result event is handled.
Using such a local list won't work with multiple schedulers in place. As
a first step we will avoid using this list for handling build result
events and instead provide all necessary information to the build result
itself and look up the remaining information from the pipeline directly.
This change also improves the log output when processing build result
events in the scheduler.
Change-Id: I9c4e573de2ce63259ec6cfb7d69c2f5be48f33ef
If this works, it's apparently needed by alembic 1.7.x
Change-Id: Icbbffeb3b30410c4af33f0cdf74821eb4f6eb676
The alembic documentation mentions that a raw percent sign not part of
an interpolation symbol in the 'value' parameter must be escaped.
Fix this exception, which occurs when, for example, a password contains
a percent sign:
2021-06-11 10:22:14,366 ERROR zuul.Scheduler: Error starting Zuul:
Traceback (most recent call last):
File "zuul/lib/python3.7/site-packages/zuul/cmd/scheduler.py", line 172, in run
self.sched.registerConnections(self.connections)
File "zuul/lib/python3.7/site-packages/zuul/scheduler.py", line 400, in registerConnections
self.connections.registerScheduler(self, load)
File "zuul/lib/python3.7/site-packages/zuul/lib/connections.py", line 73, in registerScheduler
connection.onLoad()
File "zuul/lib/python3.7/site-packages/zuul/driver/sql/sqlconnection.py", line 247, in onLoad
self._migrate()
File "zuul/lib/python3.7/site-packages/zuul/driver/sql/sqlconnection.py", line 238, in _migrate
self.connection_config.get('dburi'))
File "zuul/lib/python3.7/site-packages/alembic/config.py", line 242, in set_main_option
self.set_section_option(self.config_ini_section, name, value)
File "zuul/lib/python3.7/site-packages/alembic/config.py", line 269, in set_section_option
self.file_config.set(section, name, value)
File "/usr/lib/python3.7/configparser.py", line 1198, in set
super().set(section, option, value)
File "/usr/lib/python3.7/configparser.py", line 893, in set
value)
File "/usr/lib/python3.7/configparser.py", line 402, in before_set
"position %d" % (value, tmp_value.find('%')))
ValueError: invalid interpolation syntax in 'postgresql://[...]' at position 18
Change-Id: I96d70f68da2ba58455cbc2ae4d54a3c90f461123
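Both the failure and the fix can be reproduced with the standard library alone, since alembic's set_main_option() ultimately calls configparser.set() (the dburi below is made up):

```python
import configparser

dburi = "postgresql://zuul:pa%ss@db.example.com/zuul"  # note the raw '%'

config = configparser.ConfigParser()
config.add_section("alembic")

# Passing the raw value fails just like the traceback above, because
# configparser treats '%' as the start of an interpolation symbol:
try:
    config.set("alembic", "sqlalchemy.url", dburi)
    raised = False
except ValueError:
    raised = True

# Escaping '%' as '%%' before handing the value to set_main_option()
# avoids the error; reading the option back yields the original value:
config.set("alembic", "sqlalchemy.url", dburi.replace("%", "%%"))
restored = config.get("alembic", "sqlalchemy.url")
```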
SQLAlchemy 2.0 will introduce a number of changes. Thankfully, current
SQLAlchemy has deprecation warnings and can be run with future flags set
to True to enforce 2.0 behavior. We use these tools to prepare Zuul for
the SQLAlchemy 2.0 release.
In tox.ini configure the environment to always emit DeprecationWarnings
for modules that touch sqlalchemy.
Update sqlconnection to use the new location for declarative_base and
set future=True on our Engine and Session objects.
Finally, update the database migration tests to use transaction-based
connections for executing raw SQL statements. We also switch to the
exec_driver_sql method for that. SQLAlchemy 2.0 will not do implicit
autocommitting and doesn't support executing strings directly.
https://docs.sqlalchemy.org/en/14/changelog/migration_20.html has tons
of info on these changes. Reviewers should probably pay attention to the
transaction behavior changes as well as any alembic code that might also
need updating.
Change-Id: I4e7a56d24d0f52b6d5b00a8c12fed52d6fae92ef
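A hedged sketch of the test-side change: an explicit transaction plus exec_driver_sql() for raw strings, on an in-memory SQLite database with a SQLAlchemy 1.4+ "future" engine (the table is illustrative):

```python
from sqlalchemy import create_engine

# future=True opts a 1.4 engine into 2.0 behavior (no implicit autocommit).
engine = create_engine("sqlite://", future=True)

with engine.begin() as conn:  # explicit transaction, committed on exit
    # Plain SQL strings are no longer accepted by execute(); the escape
    # hatch for driver-level statements is exec_driver_sql().
    conn.exec_driver_sql("CREATE TABLE migration_check (x INTEGER)")
    conn.exec_driver_sql("INSERT INTO migration_check (x) VALUES (42)")

with engine.connect() as conn:
    value = conn.exec_driver_sql("SELECT x FROM migration_check").scalar()
```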
The SQL reporter isn't really a reporter any more, so we don't need
these methods. But we do use the reporter formatting helpers, so
let's keep the class hierarchy for now.
Change-Id: Ic6c9c599cb7ef697f0fdb838180f0f6b5fcf0a5a
We missed some cases where builds might be aborted and the results
not reported to the database. This updates the test framework to
assert that tests end with no open builds or buildsets in the
database.
To fix the actual issues, we need to report some build completions
in the scheduler instead of the pipeline manager. So to do that,
we grab a SQL reporter object when initializing the scheduler
(and we therefore no longer need to do so when initializing the
pipeline manager). The SQL reporter isn't like the rest of the
reporters -- it isn't pipeline specific, so a single global instance
is fine.
Finally, initializing the SQL reporter during scheduler init had
some conflicts with a unit test which tested that the merger could
load "source-only" connections. That test actually verified that
the *scheduler* loaded source-only connections. So to correct this,
it now verifies that the executor (which has a merger and is under
the same constraints as the merger for this purpose) can do so. We
no longer need the source_only flag in tests.
Change-Id: I1a983dcc9f4e5282c11af23813a4ca1c0f8e9d9d
We're currently recording a lot of NO_JOBS buildsets in the db,
and it's likely that no one is interested in that info. Instead,
only add a buildset entry if we know we're going to run jobs.
Change-Id: Ib89c3513a23908befaaea4f09933e846c6477aaa
This adds a tri-state parameter to the build and buildset queries,
both in the internal API and via the web API. True means return
builds with results, False means only in-progress builds,
None (or omitted) means both.
Also render "In Progress" builds as such in the web UI.
Change-Id: Ib021e6a2c7338c08deae1aef4dbb5f0d9154daa0
The name of the nodeset used by a job may be of interest to users
as they look at historical build results (did py38 run on focal or
bionic?). Add a column for that purpose.
Meanwhile, the node_name column has not been used in a very long
time (at least v3). It is universally null now. As a singular value,
it doesn't make a lot of sense for a multinode system. Drop it.
The build object "node_labels" field is also unused; drop it as well.
The MQTT schema is updated to match SQL.
Change-Id: Iae8444dfdd52561928c80448bc3e3158744e08e6
This moves some functions of the SQL reporter into the pipeline
manager, so that builds and buildsets are always recorded in the
database when started and when completed. The 'final' flag is
used to indicate whether a build or buildset result is user-visible
or not.
Change-Id: I053e195d120ecbb2fd89cf7e1e9fc7eccc9dcd2f
Rather than running through all of the migrations when starting Zuul
with an empty database, this uses sqlalchemy's create_all method to
create it from the declarative schema.
To make sure that stays in sync with alembic, a test is added to run
DB creation both ways and compare.
The declarative schema had one column with an incorrect type, and
several columns out of order; this adjusts the schema to match the
migrations.
Contrary to expectations, using SQLAlchemy to create the schema actually
adds about 0.05 seconds on average to test runtime.
Change-Id: I594b6980f5efa5fa4b8ca387c5d0ab4373b86394
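The fresh-database path can be sketched like this (a toy declarative model, not Zuul's real schema; the real code would additionally stamp alembic to "head" so future migrations still apply):

```python
from sqlalchemy import Column, Integer, String, create_engine, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Build(Base):
    __tablename__ = "zuul_build"
    id = Column(Integer, primary_key=True)
    result = Column(String(255))

engine = create_engine("sqlite://")
# Empty database: build the whole schema in one create_all() call from
# the declarative models instead of replaying every alembic migration.
Base.metadata.create_all(engine)
tables = inspect(engine).get_table_names()
```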
Now that the SQL database is required, fail to start if the dburi has
an error (like an incorrect module specification), and wait forever
for a connection to the database before proceeding.
This can be especially helpful in container environments where starting
Zuul may race starting a SQL database.
A test which verified that Zuul would start despite problems with the
SQL connection is removed since that is no longer the desired behavior.
Change-Id: Iae8ea420297f6264ae1d265b22b96d81f1df9a12
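The startup behavior might be sketched as follows (names and the ConnectionError type are illustrative; the real code distinguishes a malformed dburi, which fails fast, from an unreachable server, which is retried):

```python
import time

def wait_for_database(connect, delay=5, log=print):
    # A bad dburi (e.g. an incorrect module specification) should have
    # raised before we get here; a merely unreachable database is
    # retried forever so that Zuul can win the race against a database
    # container that is still starting up.
    while True:
        try:
            return connect()
        except ConnectionError as err:
            log(f"Database unavailable, retrying in {delay}s: {err}")
            time.sleep(delay)
```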
We already have the infrastructure in place for adding warnings to the
reporting. Plumb that through to zuul_return so jobs can do that on
purpose as well. An example could be a post playbook that analyzes
performance statistics and emits a warning about inefficient usage of
the build node resources.
Change-Id: I4c3b85dc8f4c69c55cbc6168b8a66afce8b50a97
On the way towards a fully scale out scheduler we need to move the
times database from the local filesystem into the SQL
database. Therefore we need to make at least one SQL connection
mandatory.
SQL reporters are required (an implied SQL reporter is added to
every pipeline; explicit SQL reporters are ignored).
Change-Id: I30723f9b320b9f2937cc1d7ff3267519161bc380
Depends-On: https://review.opendev.org/621479
Story: 2007192
Task: 38329
We set url to the job name if it's missing, but a job name is not a URL,
nor is it additional information. Then, in dashboard displays, we have
code that sets url to null if it matches the job name.
Instead of fixing the data at the display layer, let's set it to null in
the first place. So that we can update the dashboard code to remove the
workarounds, let's also run a migration to update the previously saved
build data.
Change-Id: I80ce26de4abc15720d7e37aee73049423584d1b9
The boolean "held" attribute is set to True if a build triggered
an autohold request, and its nodeset was held.
Allow filtering builds by "held" status.
Change-Id: I6517764c534f3ba8112094177aefbaa144abafae
Currently the MQTT reporter uses the report URL as log_url. This is
fine as long as report-build-page is disabled. As soon as
report-build-page is enabled on a tenant, it reports the URL of the
result page of the build. As MQTT is meant to be consumed by machines,
this breaks e.g. log post-processing.
Fix this by reporting the real log URL as log_url, and add the field
web_url for use cases where the human-facing URL really is required.
This also fixes a wrong indentation in the MQTT driver documentation
which resulted in all buildset.builds.* attributes being listed as
buildset.* attributes.
Change-Id: I91ce93a7000ddd0d70ce504b70742262d8239a8f
Since we added retried builds to the MQTT reporter, we should also store
them in the SQL database. They are stored in the zuul_build table and can be
identified via the new "final" column which is set to False for those
builds (and True for all others).
The final flag can also be used in the API to specifically filter for
those builds or remove them from the result. By default, no builds are
filtered out.
The buildset API includes these retried builds under a dedicated
'retry_builds' key to not mix them up with the final builds. Thus, the
JSON format is equal to the one the MQTT reporter uses.
For the migration of the SQL database, all existing builds will be set
to final.
We could also provide a filter mechanism via the web UI, but that should
be part of a separate change (e.g. add a checkbox next to the search
filter saying "Show retried builds").
Change-Id: I5960df8a997c7ab81a07b9bd8631c14dbe22b8ab
The OIDC Authenticator can be configured to specify scope(s).
By default, use scopes "openid profile", the smallest subset of scopes
supported by all OpenID Connect Identity Providers.
Add a basic capability register for the web service. This is simply
meant to expose configuration details that can be public, so that
other services (namely zuul web-app) can access them through the REST
API.
Fix capability 'job_history' by setting it to True if a SQL driver is
active.
Change-Id: I6ec0338cc0f7c0756c0cb26d6e5b3732c3ca655c
This should be stored in the SQL database so that the build page
can present the reason why a particular build failed, instead of
just the result "ERROR".
Change-Id: I4dd25546e27b8d3f3a4e049f9980082a3622079f
Having the zuul event id available in the database and also in the build
and buildset detail page makes debugging a lot easier.
Change-Id: Ia1e4aaf50fb28bb27cbcfcfc3b5a92bba88fc85e
The build page needs the actual log_url returned by the job (without
any modification from success_url or failure_url) in order to create
links to the log site.
The reported success/failure URL isn't as important in this context,
and I suspect their days are numbered once we require the SQL
reporter and report the link to the build page instead. So we just
won't record those in the DB. If we feel that they are important,
we can add new columns for them.
Also, ensure it has a trailing / so that API users (including the JS
pages) are not annoyed by inconsistent data.
Change-Id: I5ea98158d204ae17280c4bf5921e2edf4483cf0a
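The trailing-slash normalization is simple enough to sketch directly (the function name is illustrative):

```python
def normalize_log_url(log_url):
    # Store the raw log_url returned by the job, but guarantee a
    # trailing "/" so API consumers (including the JS pages) see
    # consistent values.
    if log_url and not log_url.endswith("/"):
        log_url += "/"
    return log_url
```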
The query for builds-joined-with-buildsets is currently optimized
for the case where little additional filtering is performed. E.g.,
the case where a user browses to the builds tab and does not enter
any search terms. In that case, mysql needs a hint supplied in
order to choose the right index.
When search terms are entered which can, due to the presence of
other indexes, greatly reduce the working set, it's better to let
the query planner off the leash and it will make the right choices.
This change stops adding the hint in the cases where a user supplies
a search term that matches one of the indexes on the build or
buildset table (notable exception: job_name because it is difficult
to generalize about that one).
It also adds an additional index for build and buildset uuids,
which should provide excellent performance when searching for
only those terms.
Change-Id: I0277be8cc4ba7555c5e6a9a7eb3eed988a24469c
It's useful to annotate the logs around reporting with the event id
that caused the action.
Change-Id: I282c28fb0156070696f3d231a2a28f8f62deffca
When requiring more than one artifact, Zuul runs into an SQL exception
[1] which bubbles up to the run_handler. This effectively blocks all
operations of zuul until the change that triggers this bug is
dequeued. Fix this by correctly filtering the sql results.
[1] Trace
2019-04-04 17:15:01,158 ERROR zuul.Scheduler: Exception in run handler:
Traceback (most recent call last):
File "/opt/zuul/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1244, in _execute_context
cursor, statement, parameters, context
File "/opt/zuul/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 552, in do_execute
cursor.execute(statement, parameters)
psycopg2.ProgrammingError: operator does not exist: character varying = record
LINE 3: ...bc89ecf79fd84fc7c' AND zuul_provides.name = ('pro...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
Change-Id: I7f95e1376b7a1f46a5b4ef5242c777e16ceca451
Co-Authored-By: Tobias Henkel <tobias.henkel@bmw.de>
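The corrected filtering can be sketched with SQLAlchemy core (this is a guess at the shape of the fix: matching each provided name individually, e.g. with in_(), rather than comparing the column against a multi-element record; table and column names are simplified):

```python
from sqlalchemy import Column, MetaData, String, Table, create_engine, select

engine = create_engine("sqlite://")
metadata = MetaData()
provides = Table("zuul_provides", metadata, Column("name", String(255)))
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(provides.insert(),
                 [{"name": "images"}, {"name": "wheels"}])
    requires = ["images", "docs"]
    # Comparing the column against the whole collection at once renders
    # a row-value ("record") comparison, which PostgreSQL rejects as in
    # the traceback above; in_() generates one comparison per value.
    rows = conn.execute(
        select(provides.c.name).where(provides.c.name.in_(requires))).all()

found = [row[0] for row in rows]
```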
If the scheduler and web processes are started simultaneously,
table creation can error out. Therefore, only create tables
within the scheduler process.
The scheduler is the only process that calls the onLoad method.
Change-Id: Ibb72e5e1af0cdd0db51744767c853318516dc22d