It adds two methods for checking whether the background jobs for a
given class include any dead or retrying jobs.
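
A minimal sketch of what such checks can look like on top of Sidekiq's
public API; the module and method names, the worker class, and the job
layout (migration class name as the first argument) are assumptions,
not necessarily what this commit added:

```ruby
require 'sidekiq/api'

module BackgroundMigrationStatus
  WORKER = 'BackgroundMigrationWorker'.freeze # assumed worker class

  # True if a job for the given migration class exhausted its retries
  # and ended up in Sidekiq's dead set.
  def self.dead_jobs?(migration_class)
    Sidekiq::DeadSet.new.any? do |job|
      job.klass == WORKER && job.args.first == migration_class.to_s
    end
  end

  # True if a job for the given migration class is waiting in the
  # retry set for another attempt.
  def self.retrying_jobs?(migration_class)
    Sidekiq::RetrySet.new.any? do |job|
      job.klass == WORKER && job.args.first == migration_class.to_s
    end
  end
end
```
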
Allow `steal` to handle dead jobs.
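
One way `steal` can take dead jobs into account is to treat the dead
set as one more source to drain, next to the queue and the scheduled
set. In this sketch the queue name is assumed, and `perform` stands in
for whatever runs the given migration class:

```ruby
require 'sidekiq/api'

def steal(steal_class)
  sources = [
    Sidekiq::Queue.new('background_migration'), # assumed queue name
    Sidekiq::ScheduledSet.new,
    Sidekiq::DeadSet.new # newly covered: jobs that exhausted all retries
  ]

  sources.each do |jobs|
    jobs.each do |job|
      migration_class, args = job.args

      next unless job.klass == 'BackgroundMigrationWorker'
      next unless migration_class == steal_class

      # Only run the migration when this call removed the job, so no
      # other process can pick up the same job concurrently.
      perform(migration_class, args) if job.delete
    end
  end
end
```
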
Useful for checking progress.
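
The commit title is collapsed here, but a progress check over pending
jobs could look roughly like the following; the method name `remaining`
and the queue name are made up for illustration:

```ruby
require 'sidekiq/api'

# Number of background migration jobs still waiting to run, counting
# both the regular queue and the scheduled set.
def remaining
  enqueued = Sidekiq::Queue.new('background_migration').size
  scheduled = Sidekiq::ScheduledSet.new.count do |job|
    job.klass == 'BackgroundMigrationWorker'
  end

  enqueued + scheduled
end
```
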
This changes the BackgroundMigration worker so it checks for the health
of the DB before performing a background migration. This in turn allows
us to reduce the minimum interval, without having to worry about blowing
things up if we schedule too many migrations.
In this setup, the BackgroundMigration worker will reschedule jobs as
long as the database is considered to be in an unhealthy state. Once the
database has recovered, the migration can be performed.
To determine if the database is in a healthy state, we look at the
replication lag of any replication slots defined on the primary. If the
lag is deemed too great (100 MB by default) for too many slots, the
migration is rescheduled for a later point in time.
The health checking code is hidden behind a feature flag, allowing us to
disable it if necessary.
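
A condensed sketch of that behaviour is shown below. The SQL uses the
PostgreSQL 10+ function names (older versions spell them
pg_xlog_location_diff / pg_current_xlog_insert_location), and the
method names, retry delay, and slot-count handling are illustrative
rather than the exact implementation:

```ruby
class BackgroundMigrationWorker
  include Sidekiq::Worker

  MAX_LAG_BYTES = 100 * 1024 * 1024 # 100 MB default threshold
  RETRY_DELAY = 5 * 60 # seconds; illustrative

  def perform(migration_class, args = [])
    if healthy_database?
      Gitlab::BackgroundMigration.perform(migration_class, args)
    else
      # Database is lagging: push the job back onto the scheduled set
      # and try again once replication has (hopefully) caught up.
      self.class.perform_in(RETRY_DELAY, migration_class, args)
    end
  end

  private

  def healthy_database?
    # The real check sits behind a feature flag and tolerates a number
    # of lagging slots; this sketch treats any slot lagging past the
    # threshold as unhealthy.
    lags = ActiveRecord::Base.connection.select_values(<<~SQL)
      SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
      FROM pg_replication_slots
    SQL

    lags.none? { |lag| lag.to_i > MAX_LAG_BYTES }
  end
end
```
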
Simply re-raise any exception that occurs, while guaranteeing that no
background migration is lost in the process.
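
Sketched as a fragment of the worker, under the assumption that the
error can strike after the job has already been removed from Sidekiq
(the method name is made up): re-enqueue first, then re-raise, so the
migration survives even if the process dies immediately afterwards:

```ruby
def perform_and_requeue_on_error(migration_class, args)
  # At this point the job is no longer in Sidekiq, so an unhandled
  # error here would lose the migration entirely.
  Gitlab::BackgroundMigration.perform(migration_class, args)
rescue Exception # rubocop:disable Lint/RescueException
  # Put the job back before surfacing the error; even if this process
  # dies right after, the migration is still scheduled somewhere.
  BackgroundMigrationWorker.perform_async(migration_class, args)

  raise
end
```
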
It also makes it possible to gracefully retry a migration when
problems such as deadlocks occur.
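
One way such a graceful retry can look from inside a migration,
sketched with an illustrative error class, helper, and delay (on
Rails < 5.1 the deadlock surfaces as a generic StatementInvalid
instead):

```ruby
module Gitlab
  module BackgroundMigration
    class ExampleMigration
      def perform(start_id, stop_id)
        update_rows(start_id, stop_id) # hypothetical helper doing the work
      rescue ActiveRecord::Deadlocked
        # Hand this slice back to Sidekiq for a later attempt instead
        # of failing the whole job.
        BackgroundMigrationWorker.perform_in(
          5 * 60, self.class.name.demodulize, [start_id, stop_id]
        )
      end
    end
  end
end
```
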
We first pop a job from the Sidekiq queue / scheduled set, and only
once it has been successfully deleted do we process the job. This
minimizes the possibility of a race condition.
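
The pattern reads roughly as follows; the deciding factor is the
return value of delete, which is true only for the caller that
actually removed the job (queue and worker names assumed as in the
`steal` sketch above):

```ruby
require 'sidekiq/api'

sources = [Sidekiq::Queue.new('background_migration'), Sidekiq::ScheduledSet.new]

sources.each do |jobs|
  jobs.each do |job|
    next unless job.klass == 'BackgroundMigrationWorker'

    # At most one process gets past this point for a given job.
    Gitlab::BackgroundMigration.perform(*job.args) if job.delete
  end
end
```
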
Background migrations can be used to perform long-running data
migrations without blocking the deployment procedure.
See MR https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/11854 for
more information.
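
A minimal sketch of how the feature is used: a migration class with a
`perform` method, scheduled asynchronously from a regular migration.
The class name and arguments here are made up for illustration:

```ruby
module Gitlab
  module BackgroundMigration
    class ExampleBackfill
      def perform(start_id, stop_id)
        # Long-running data manipulation for one slice of rows goes
        # here; each slice runs in its own Sidekiq job, so deploys do
        # not have to wait for the whole data set.
      end
    end
  end
end

# From a schema migration, kick off the work asynchronously:
BackgroundMigrationWorker.perform_async('ExampleBackfill', [1, 10_000])
```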