author: GitLab Bot <gitlab-bot@gitlab.com> 2021-09-20 13:18:24 +0000
committer: GitLab Bot <gitlab-bot@gitlab.com> 2021-09-20 13:18:24 +0000
commit: 0653e08efd039a5905f3fa4f6e9cef9f5d2f799c (patch)
tree: 4dcc884cf6d81db44adae4aa99f8ec1233a41f55 /doc/update
parent: 744144d28e3e7fddc117924fef88de5d9674fe4c (diff)
Add latest changes from gitlab-org/gitlab@14-3-stable-ee (v14.3.0-rc42)
Diffstat (limited to 'doc/update')
-rw-r--r--  doc/update/deprecations.md            |  67
-rw-r--r--  doc/update/index.md                   | 144
-rw-r--r--  doc/update/package/convert_to_ee.md   | 118
-rw-r--r--  doc/update/package/downgrade.md       |  83
-rw-r--r--  doc/update/package/index.md           | 278
-rw-r--r--  doc/update/patch_versions.md          |   2
-rw-r--r--  doc/update/plan_your_upgrade.md       | 180
-rw-r--r--  doc/update/upgrading_from_ce_to_ee.md |   2
-rw-r--r--  doc/update/upgrading_from_source.md   |  14
-rw-r--r--  doc/update/zero_downtime.md           | 942
10 files changed, 1730 insertions, 100 deletions
diff --git a/doc/update/deprecations.md b/doc/update/deprecations.md
new file mode 100644
index 00000000000..d453c5d8336
--- /dev/null
+++ b/doc/update/deprecations.md
@@ -0,0 +1,67 @@
+---
+stage: none
+group: none
+info: "See the Technical Writers assigned to Development Guidelines: https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments-to-development-guidelines"
+---
+
+# Deprecated feature removal schedule
+
+<!--
+This page is automatically generated from the YAML files in `/data/deprecations` by the rake task
+located at `lib/tasks/gitlab/docs/compile_deprecations.rake`.
+
+Do not edit this page directly.
+
+To add a deprecation, use the example.yml file in `/data/deprecations/templates` as a template,
+then run `bin/rake gitlab:docs:compile_deprecations`.
+-->
+
+## 15.0
+
+### Legacy database configuration
+
+The syntax of [GitLab's database](https://docs.gitlab.com/omnibus/settings/database.html)
+configuration located in `database.yml` is changing and the legacy format is deprecated. The legacy format
+supported using a single PostgreSQL adapter, whereas the new format supports multiple databases. The `main:` database must be defined as the first configuration item.
+
+This deprecation mainly impacts users compiling GitLab from source because Omnibus will handle this configuration automatically.
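+
+A minimal sketch of the new format for a source installation's `config/database.yml`, assuming a
+single PostgreSQL database; the exact keys depend on your environment:
+
+```shell
+# Hypothetical example only: the new multi-database layout, with `main:`
+# listed as the first configuration item under the environment.
+cat > config/database.yml.example <<'EOF'
+production:
+  main:
+    adapter: postgresql
+    encoding: unicode
+    database: gitlabhq_production
+EOF
+```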
+
+Announced: 2021-09-22
+
+### Audit events for repository push events
+
+Audit events for [repository events](../administration/audit_events.md#repository-push) are now deprecated and will be removed in GitLab 15.0.
+
+These events have always been disabled by default and had to be manually enabled with a
+feature flag. Enabling them can cause too many events to be generated, which can
+dramatically slow down GitLab instances. For this reason, they are being removed.
+
+Announced: 2021-09-02
+
+### OmniAuth Kerberos gem
+
+The `omniauth-kerberos` gem will be removed in our next major release, GitLab 15.0.
+
+This gem has not been maintained and has very little usage. We therefore plan to remove support for this authentication method and recommend using the Kerberos [SPNEGO](https://en.wikipedia.org/wiki/SPNEGO) integration instead. You can follow the [upgrade instructions](../integration/kerberos.md#upgrading-from-password-based-to-ticket-based-kerberos-sign-ins) to upgrade from the `omniauth-kerberos` integration to the supported one.
+
+Note that we are not deprecating the Kerberos SPNEGO integration, only the old password-based Kerberos integration.
+
+Announced: 2021-09-22
+
+### GitLab Serverless
+
+[GitLab Serverless](../user/project/clusters/serverless/index.md) is a feature set to support Knative-based serverless development with automatic deployments and monitoring.
+
+We decided to remove the GitLab Serverless features because they never gained significant adoption. In addition, given the continuous development of Kubernetes and Knative, our current implementation does not work with recent versions.
+
+Announced: 2021-09-22
+
+## 14.4
+
+### Rename Task Runner pod to Toolbox
+
+The Task Runner pod is used to execute periodic housekeeping tasks within the GitLab application and is often confused with the GitLab Runner. Thus, [Task Runner will be renamed to Toolbox](https://gitlab.com/groups/gitlab-org/charts/-/epics/25).
+
+This will result in the rename of the sub-chart: `gitlab/task-runner` to `gitlab/toolbox`. Resulting pods will be named along the lines of `{{ .Release.Name }}-toolbox`, which will often be `gitlab-toolbox`. They will be locatable with the label `app=toolbox`.
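+
+For example, after the rename you should be able to locate the pod by that label. This is a
+sketch; the namespace and release name depend on your Helm deployment:
+
+```shell
+# List toolbox pods (previously task-runner) by their label
+kubectl get pods -l app=toolbox
+```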
+
+Announced: 2021-09-22
diff --git a/doc/update/index.md b/doc/update/index.md
index 4b7e63a8277..fadb55684f8 100644
--- a/doc/update/index.md
+++ b/doc/update/index.md
@@ -32,12 +32,12 @@ official ways to update GitLab:
### Linux packages (Omnibus GitLab)
-The [Omnibus update guide](https://docs.gitlab.com/omnibus/update/)
+The [package upgrade guide](package/index.md)
contains the steps needed to update a package installed by official GitLab
repositories.
There are also instructions when you want to
-[update to a specific version](https://docs.gitlab.com/omnibus/update/#multi-step-upgrade-using-the-official-repositories).
+[update to a specific version](package/index.md#upgrade-to-a-specific-version-using-the-official-repositories).
### Installation from source
@@ -70,6 +70,10 @@ Instructions on how to update a cloud-native deployment are in
Use the [version mapping](https://docs.gitlab.com/charts/installation/version_mappings.html)
from the chart version to GitLab version to determine the [upgrade path](#upgrade-paths).
+## Plan your upgrade
+
+See the guide to [plan your GitLab upgrade](plan_your_upgrade.md).
+
## Checking for background migrations before upgrading
Certain major/minor releases may require different migrations to be
@@ -79,7 +83,7 @@ finished before you update to the newer version.
To check the status of [batched background migrations](../user/admin_area/monitoring/background_migrations.md):
-1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the top bar, select **Menu > Admin**.
1. On the left sidebar, select **Monitoring > Background Migrations**.
![queued batched background migrations table](img/batched_background_migrations_queued_v14_0.png)
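
You can also check from the command line. The following sketch, for Omnibus installations, reports
the number of remaining (non-batched) background migration jobs; verify the exact helper against
the documentation for your GitLab version:

```shell
# Print the number of background migration jobs that have not yet completed
sudo gitlab-rails runner -e production 'puts Gitlab::BackgroundMigration.remaining'
```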
@@ -174,15 +178,15 @@ migration](../integration/elasticsearch.md#retry-a-halted-migration).
## Upgrade paths
-Upgrading across multiple GitLab versions in one go is *only possible with downtime*.
-The following examples assume a downtime upgrade.
-See the section below for [zero downtime upgrades](#upgrading-without-downtime).
+Upgrading across multiple GitLab versions in one go is *only possible by accepting downtime*.
+The following examples assume downtime is acceptable while upgrading.
+If you don't want any downtime, read how to [upgrade with zero downtime](zero_downtime.md).
Find where your version sits in the upgrade path below, and upgrade GitLab
accordingly, while also consulting the
[version-specific upgrade instructions](#version-specific-upgrading-instructions):
-`8.11.Z` -> [`8.12.0`](#upgrades-from-versions-earlier-than-812) -> `8.17.7` -> `9.5.10` -> `10.8.7` -> [`11.11.8`](#1200) -> `12.0.12` -> [`12.1.17`](#1210) -> `12.10.14` -> `13.0.14` -> [`13.1.11`](#1310) -> [latest `13.12.Z`](https://about.gitlab.com/releases/categories/releases/) -> [latest `14.0.Z`](#1400) -> [`14.1.Z`](#1410) -> [latest `14.Y.Z`](https://about.gitlab.com/releases/categories/releases/)
+`8.11.Z` -> `8.12.0` -> `8.17.7` -> `9.5.10` -> `10.8.7` -> [`11.11.8`](#1200) -> `12.0.12` -> [`12.1.17`](#1210) -> `12.10.14` -> `13.0.14` -> [`13.1.11`](#1310) -> [`13.8.8`](#1388) -> [latest `13.12.Z`](https://about.gitlab.com/releases/categories/releases/) -> [latest `14.0.Z`](#1400) -> [latest `14.Y.Z`](https://about.gitlab.com/releases/categories/releases/)
The following table, while not exhaustive, shows some examples of the supported
upgrade paths.
@@ -190,7 +194,7 @@ upgrade paths.
| Target version | Your version | Supported upgrade path | Note |
| -------------- | ------------ | ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
| `14.1.2` | `13.9.2` | `13.9.2` -> `13.12.9` -> `14.0.7` -> `14.1.2` | Two intermediate versions are required: `13.12` and `14.0`, then `14.1`. |
-| `13.5.4` | `12.9.2` | `12.9.2` -> `12.10.14` -> `13.0.14` -> `13.1.11` -> `13.5.4` | Three intermediate versions are required: `12.10`, `13.0` and `13.1`, then `13.5.4`. |
+| `13.12.10` | `12.9.2` | `12.9.2` -> `12.10.14` -> `13.0.14` -> `13.1.11` -> `13.8.8` -> `13.12.10` | Four intermediate versions are required: `12.10`, `13.0`, `13.1` and `13.8.8`, then `13.12.10`. |
| `13.2.10` | `11.5.0` | `11.5.0` -> `11.11.8` -> `12.0.12` -> `12.1.17` -> `12.10.14` -> `13.0.14` -> `13.1.11` -> `13.2.10` | Six intermediate versions are required: `11.11`, `12.0`, `12.1`, `12.10`, `13.0` and `13.1`, then `13.2.10`. |
| `12.10.14` | `11.3.4` | `11.3.4` -> `11.11.8` -> `12.0.12` -> `12.1.17` -> `12.10.14` | Three intermediate versions are required: `11.11`, `12.0` and `12.1`, then `12.10.14`. |
| `12.9.5` | `10.4.5` | `10.4.5` -> `10.8.7` -> `11.11.8` -> `12.0.12` -> `12.1.17` -> `12.9.5` | Four intermediate versions are required: `10.8`, `11.11`, `12.0` and `12.1`, then `12.9.5`. |
@@ -229,76 +233,7 @@ upgraded to. This is to ensure [compatibility with GitLab versions](https://docs
## Upgrading without downtime
-Starting with GitLab 9.1.0 it's possible to upgrade to a newer major, minor, or
-patch version of GitLab without having to take your GitLab instance offline.
-However, for this to work there are the following requirements:
-
-- You can only upgrade 1 minor release at a time. So from 9.1 to 9.2, not to
- 9.3. If you skip releases, database modifications may be run in the wrong
- sequence [and leave the database schema in a broken state](https://gitlab.com/gitlab-org/gitlab/-/issues/321542).
-- You have to use [post-deployment
- migrations](../development/post_deployment_migrations.md) (included in
- [zero downtime update steps below](#steps)).
-- You are using PostgreSQL. Starting from GitLab 12.1, MySQL is not supported.
-- Multi-node GitLab instance. Single-node instances may experience brief interruptions
- [as services restart (Puma in particular)](https://docs.gitlab.com/omnibus/update/README.html#single-node-deployment).
-
-Most of the time you can safely upgrade from a patch release to the next minor
-release if the patch release is not the latest. For example, upgrading from
-9.1.1 to 9.2.0 should be safe even if 9.1.2 has been released. We do recommend
-you check the release posts of any releases between your current and target
-version just in case they include any migrations that may require you to upgrade
-1 release at a time.
-
-Some releases may also include so called "background migrations". These
-migrations are performed in the background by Sidekiq and are often used for
-migrating data. Background migrations are only added in the monthly releases.
-
-Certain major/minor releases may require a set of background migrations to be
-finished. To guarantee this, such a release processes any remaining jobs
-before continuing the upgrading procedure. While this doesn't require downtime
-(if the above conditions are met) we require that you [wait for background
-migrations to complete](#checking-for-background-migrations-before-upgrading)
-between each major/minor release upgrade.
-The time necessary to complete these migrations can be reduced by
-increasing the number of Sidekiq workers that can process jobs in the
-`background_migration` queue. To see the size of this queue,
-[Check for background migrations before upgrading](#checking-for-background-migrations-before-upgrading).
-
-As a rule of thumb, any database smaller than 10 GB doesn't take too much time to
-upgrade; perhaps an hour at most per minor release. Larger databases however may
-require more time, but this is highly dependent on the size of the database and
-the migrations that are being performed.
-
-### Examples
-
-To help explain this, let's look at some examples.
-
-**Example 1:** You are running a large GitLab installation using version 9.4.2,
-which is the latest patch release of 9.4. When GitLab 9.5.0 is released this
-installation can be safely upgraded to 9.5.0 without requiring downtime if the
-requirements mentioned above are met. You can also skip 9.5.0 and upgrade to
-9.5.1 after it's released, but you **can not** upgrade straight to 9.6.0; you
-_have_ to first upgrade to a 9.5.Z release.
-
-**Example 2:** You are running a large GitLab installation using version 9.4.2,
-which is the latest patch release of 9.4. GitLab 9.5 includes some background
-migrations, and 10.0 requires these to be completed (processing any
-remaining jobs for you). Skipping 9.5 is not possible without downtime, and due
-to the background migrations would require potentially hours of downtime
-depending on how long it takes for the background migrations to complete. To
-work around this you have to upgrade to 9.5.Z first, then wait at least a
-week before upgrading to 10.0.
-
-**Example 3:** You use MySQL as the database for GitLab. Any upgrade to a new
-major/minor release requires downtime. If a release includes any background
-migrations this could potentially lead to hours of downtime, depending on the
-size of your database. To work around this you must use PostgreSQL and
-meet the other online upgrade requirements mentioned above.
-
-### Steps
-
-Steps to [upgrade without downtime](https://docs.gitlab.com/omnibus/update/README.html#zero-downtime-updates).
+Read how to [upgrade without downtime](zero_downtime.md).
## Upgrading between editions
@@ -320,7 +255,7 @@ Edition, follow the guides below based on the installation method:
to a version upgrade: stop the server, get the code, update configuration files for
the new functionality, install libraries and do migrations, update the init
script, start the application and check its status.
-- [Omnibus CE to EE](https://docs.gitlab.com/omnibus/update/README.html#update-community-edition-to-enterprise-edition) - Follow this guide to update your Omnibus
+- [Omnibus CE to EE](package/convert_to_ee.md) - Follow this guide to update your Omnibus
GitLab Community Edition to the Enterprise Edition.
### Enterprise to Community Edition
@@ -351,21 +286,43 @@ These include:
Apart from the instructions in this section, you should also check the
installation-specific upgrade instructions, based on how you installed GitLab:
-- [Linux packages (Omnibus GitLab)](https://docs.gitlab.com/omnibus/update/README.html#version-specific-changes)
+- [Linux packages (Omnibus GitLab)](../update/package/index.md#version-specific-changes)
- [Helm charts](https://docs.gitlab.com/charts/installation/upgrade.html)
NOTE:
Specific information that follows related to Ruby and Git versions does not apply to [Omnibus installations](https://docs.gitlab.com/omnibus/)
and [Helm Chart deployments](https://docs.gitlab.com/charts/). They come with appropriate Ruby and Git versions and do not use system binaries for Ruby and Git. There is no need to install Ruby or Git when using these two approaches.
+### 14.3.0
+
+Ruby 2.7.4 is required. Refer to [the Ruby installation instructions](../install/installation.md#2-ruby)
+for how to proceed.
+
+- GitLab 14.3.0 contains post-deployment migrations to [address Primary Key overflow risk for tables with an integer PK](https://gitlab.com/groups/gitlab-org/-/epics/4785) for the tables listed below:
+
+ - [`ci_builds.id`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/70245)
+ - [`ci_builds.stage_id`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66688)
+ - [`ci_builds_metadata`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/65692)
+ - [`taggings`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66625)
+ - [`events`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/64779)
+
+ If the migrations are executed as part of a no-downtime deployment, there's a risk of failure due to lock conflicts with the application logic, resulting in lock timeout or deadlocks. In each case, these migrations are safe to re-run until successful:
+
+ ```shell
+ # For Omnibus GitLab
+ sudo gitlab-rake db:migrate
+
+ # For source installations
+ sudo -u git -H bundle exec rake db:migrate RAILS_ENV=production
+ ```
+
### 14.2.0
- Due to an issue where `BatchedBackgroundMigrationWorkers` were
[not working](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2785#note_614738345)
for self-managed instances, a [fix was created](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/65106)
- and a [14.0.Z](#1400) version was released. If you haven't updated to 14.0.Z, you need
- to update to at least 14.1.0 that contains the same fix before you update to
- to 14.2.
+ and a [14.0.Z](#1400) version was released. If you haven't updated to 14.0.5, you need
+ to update to at least 14.1.0, which contains the same fix, before you update to 14.2.
- GitLab 14.2.0 contains background migrations to [address Primary Key overflow risk for tables with an integer PK](https://gitlab.com/groups/gitlab-org/-/epics/4785) for the tables listed below:
- [`ci_build_needs`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/65216)
@@ -393,7 +350,7 @@ and [Helm Chart deployments](https://docs.gitlab.com/charts/). They come with ap
- Due to an issue where `BatchedBackgroundMigrationWorkers` were
[not working](https://gitlab.com/gitlab-org/charts/gitlab/-/issues/2785#note_614738345)
for self-managed instances, a [fix was created](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/65106)
- and a [14.0.Z](#1400) version was released. If you haven't updated to 14.0.Z, you need
+ and a [14.0.Z](#1400) version was released. If you haven't updated to 14.0.5, you need
to update to at least 14.1.0 that contains the same fix before you update to
a later version.
@@ -480,6 +437,17 @@ DETAIL: trigger trigger_0d588df444c8 on table application_settings depends on co
To work around this bug, follow the previous steps to complete the update.
More details are available [in this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/324160).
+### 13.8.8
+
+GitLab 13.8 includes a background migration to address [an issue with duplicate service records](https://gitlab.com/gitlab-org/gitlab/-/issues/290008). If duplicate services are present, this background migration must complete before a unique index is applied to the services table, which was [introduced in GitLab 13.9](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/52563). Upgrades from GitLab 13.8 and earlier to later versions must include an intermediate upgrade to GitLab 13.8.8 and [must wait until the background migrations complete](#checking-for-background-migrations-before-upgrading) before proceeding.
+
+If duplicate services are still present, an upgrade to 13.9.x or later results in a failed upgrade with the following error:
+
+```console
+PG::UniqueViolation: ERROR: could not create unique index "index_services_on_project_id_and_type_unique"
+DETAIL: Key (project_id, type)=(NNN, ServiceName) is duplicated.
+```
+
### 13.6.0
Ruby 2.7.2 is required. GitLab does not start with Ruby 2.6.6 or older versions.
@@ -528,7 +496,7 @@ The Rails upgrade included a change to CSRF token generation which is
not backwards-compatible - GitLab servers with the new Rails version
generate CSRF tokens that are not recognizable by GitLab servers
with the older Rails version - which could cause non-GET requests to
-fail for [multi-node GitLab installations](https://docs.gitlab.com/omnibus/update/#multi-node--ha-deployment).
+fail for [multi-node GitLab installations](zero_downtime.md#multi-node--ha-deployment).
So, if you are using multiple Rails servers and specifically upgrading from 13.0,
all servers must first be upgraded to 13.1.Z before upgrading to 13.2.0 or later:
After upgrading to 11.11.8, you can safely upgrade to 12.0.Z.
See our [documentation on upgrade paths](../policy/maintenance.md#upgrade-recommendations)
for more information.
-### Upgrades from versions earlier than 8.12
-
-- `8.11.Z` and earlier: you might have to upgrade to `8.12.0` specifically before you can upgrade to `8.17.7`. This was [reported in an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/207259).
-- [CI changes prior to version 8.0](https://docs.gitlab.com/omnibus/update/README.html#updating-gitlab-ci-from-prior-540-to-version-714-via-omnibus-gitlab)
- when it was merged into GitLab.
-
## Miscellaneous
- [MySQL to PostgreSQL](mysql_to_postgresql.md) guides you through migrating
diff --git a/doc/update/package/convert_to_ee.md b/doc/update/package/convert_to_ee.md
new file mode 100644
index 00000000000..2cc54e2c8cf
--- /dev/null
+++ b/doc/update/package/convert_to_ee.md
@@ -0,0 +1,118 @@
+---
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Convert Community Edition to Enterprise Edition **(FREE SELF)**
+
+To convert an existing GitLab Community Edition (CE) server installed using the Omnibus GitLab
+packages to GitLab [Enterprise Edition](https://about.gitlab.com/pricing/) (EE), you install the EE
+package on top of CE.
+
+Converting from the same version of CE to EE is not explicitly necessary, and any standard upgrade
+(for example, CE 12.0 to EE 12.1) should work. However, in the following steps we assume that
+you are upgrading the same version (for example, CE 12.1 to EE 12.1), which is **recommended**.
+
+WARNING:
+When updating to EE from CE, avoid reverting back to CE if you plan on going to EE again in the
+future. Reverting back to CE can cause
+[database issues](index.md#500-error-when-accessing-project--settings--repository)
+that may require Support intervention.
+
+The steps can be summed up to:
+
+1. Find the currently installed GitLab version:
+
+ **For Debian/Ubuntu**
+
+ ```shell
+ sudo apt-cache policy gitlab-ce | grep Installed
+ ```
+
+ The output should be similar to: `Installed: 13.0.4-ce.0`. In that case,
+ the equivalent Enterprise Edition version will be: `13.0.4-ee.0`. Write this
+ value down.
+
+ **For CentOS/RHEL**
+
+ ```shell
+ sudo rpm -q gitlab-ce
+ ```
+
+ The output should be similar to: `gitlab-ce-13.0.4-ce.0.el8.x86_64`. In that
+ case, the equivalent Enterprise Edition version will be:
+ `gitlab-ee-13.0.4-ee.0.el8.x86_64`. Write this value down.
+
+1. Add the `gitlab-ee` [Apt or Yum repository](https://packages.gitlab.com/gitlab/gitlab-ee/install):
+
+ **For Debian/Ubuntu**
+
+ ```shell
+ curl --silent "https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh" | sudo bash
+ ```
+
+ **For CentOS/RHEL**
+
+ ```shell
+ curl --silent "https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.rpm.sh" | sudo bash
+ ```
+
+ The above command will find your OS version and automatically set up the
+ repository. If you are not comfortable installing the repository through a
+ piped script, you can first
+ [check its contents](https://packages.gitlab.com/gitlab/gitlab-ee/install).
+
+1. Next, install the `gitlab-ee` package. Note that this automatically
+   uninstalls the `gitlab-ce` package on your GitLab server. Reconfigure
+   Omnibus right after the `gitlab-ee` package is installed. **Make sure that you
+   install the exact same GitLab version**:
+
+ **For Debian/Ubuntu**
+
+ ```shell
+ ## Make sure the repositories are up-to-date
+ sudo apt-get update
+
+ ## Install the package using the version you wrote down from step 1
+ sudo apt-get install gitlab-ee=13.0.4-ee.0
+
+ ## Reconfigure GitLab
+ sudo gitlab-ctl reconfigure
+ ```
+
+ **For CentOS/RHEL**
+
+ ```shell
+ ## Install the package using the version you wrote down from step 1
+ sudo yum install gitlab-ee-13.0.4-ee.0.el8.x86_64
+
+ ## Reconfigure GitLab
+ sudo gitlab-ctl reconfigure
+ ```
+
+1. Now go to the GitLab admin panel of your server (`/admin/license/new`) and
+ upload your license file.
+
+1. After you confirm that GitLab is working as expected, you may remove the old
+ Community Edition repository:
+
+ **For Debian/Ubuntu**
+
+ ```shell
+ sudo rm /etc/apt/sources.list.d/gitlab_gitlab-ce.list
+ ```
+
+ **For CentOS/RHEL**
+
+ ```shell
+ sudo rm /etc/yum.repos.d/gitlab_gitlab-ce.repo
+ ```
+
+That's it! You can now use GitLab Enterprise Edition! To update to a newer
+version, follow [Upgrade using the official repositories](index.md#upgrade-using-the-official-repositories).
+
+NOTE:
+If you want to use `dpkg`/`rpm` instead of `apt-get`/`yum`, go through the first
+step to find the current GitLab version and then follow
+[Upgrade using a manually-downloaded package](index.md#upgrade-using-a-manually-downloaded-package).
diff --git a/doc/update/package/downgrade.md b/doc/update/package/downgrade.md
new file mode 100644
index 00000000000..9a528f5ee44
--- /dev/null
+++ b/doc/update/package/downgrade.md
@@ -0,0 +1,83 @@
+---
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Downgrade **(FREE SELF)**
+
+This section contains general information on how to revert to an earlier version
+of a package.
+
+WARNING:
+You must at least have a database backup created under the version you are
+downgrading to. Ideally, you should have a
+[full backup archive](../../raketasks/backup_restore.md#back-up-gitlab)
+on hand.
+
+The example below demonstrates the downgrade procedure when downgrading between minor
+and patch versions (for example, from 13.0.6 to 13.0.5).
+
+When downgrading between major versions, take into account the
+[specific version changes](index.md#version-specific-changes) that occurred when you upgraded
+to the major version you are downgrading from.
+
+These steps consist of:
+
+- Stopping GitLab
+- Removing the current package
+- Installing the old package
+- Reconfiguring GitLab
+- Restoring the backup
+- Starting GitLab
+
+Steps:
+
+1. Stop GitLab and remove the current package:
+
+ ```shell
+ # If running Puma
+ sudo gitlab-ctl stop puma
+
+ # Stop sidekiq
+ sudo gitlab-ctl stop sidekiq
+
+ # If on Ubuntu: remove the current package
+ sudo dpkg -r gitlab-ee
+
+ # If on Centos: remove the current package
+ sudo yum remove gitlab-ee
+ ```
+
+1. Identify the GitLab version you want to downgrade to:
+
+ ```shell
+ # (Replace with gitlab-ce if you have GitLab FOSS installed)
+
+ # Ubuntu
+ sudo apt-cache madison gitlab-ee
+
+ # CentOS:
+ sudo yum --showduplicates list gitlab-ee
+ ```
+
+1. Downgrade GitLab to the desired version (for example, to GitLab 13.0.5):
+
+ ```shell
+ # (Replace with gitlab-ce if you have GitLab FOSS installed)
+
+ # Ubuntu
+ sudo apt install gitlab-ee=13.0.5-ee.0
+
+ # CentOS:
+ sudo yum install gitlab-ee-13.0.5-ee.0.el8
+ ```
+
+1. Reconfigure GitLab:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+1. [Restore GitLab](../../raketasks/backup_restore.md#restore-for-omnibus-gitlab-installations)
+ to complete the downgrade.
diff --git a/doc/update/package/index.md b/doc/update/package/index.md
new file mode 100644
index 00000000000..44be79f22fb
--- /dev/null
+++ b/doc/update/package/index.md
@@ -0,0 +1,278 @@
+---
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Upgrade GitLab using the GitLab Package **(FREE SELF)**
+
+This section describes how to upgrade GitLab to a new version using the
+GitLab package.
+
+We recommend performing upgrades between major and minor releases no more than once per
+week, to allow time for background migrations to finish. Decrease the time required to
+complete these migrations by increasing the number of
+[Sidekiq workers](../../administration/operations/extra_sidekiq_processes.md)
+that can process jobs in the `background_migration` queue.
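+
+For example, on an Omnibus installation you might dedicate an extra Sidekiq process to that queue.
+This is a sketch; confirm the setting name and syntax in the linked Sidekiq documentation:
+
+```shell
+# Append a dedicated `background_migration` queue group to /etc/gitlab/gitlab.rb,
+# then reconfigure to apply it.
+sudo tee -a /etc/gitlab/gitlab.rb <<'EOF' > /dev/null
+sidekiq['queue_groups'] = ['background_migration', '*']
+EOF
+sudo gitlab-ctl reconfigure
+```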
+
+If you don't follow the steps in [zero downtime upgrades](../zero_downtime.md),
+your GitLab application will not be available to users while an upgrade is in progress.
+They either see a "Deploy in progress" message or a "502" error in their web browser.
+
+Prerequisites:
+
+- [Supported upgrade paths](../index.md#upgrade-paths)
+ has suggestions on when to upgrade. Upgrade paths are enforced for version upgrades by
+ default. This restricts direct upgrades that skip major versions (for
+ example, 10.3 to 12.7 in one jump), which **can break GitLab
+ installations** for multiple reasons, such as deprecated or removed configuration
+ settings and upgrades of internal tools and libraries.
+- If you are upgrading from a non-Package installation to a GitLab Package installation, see
+ [Upgrading from a non-Package installation to a GitLab Package installation](https://docs.gitlab.com/omnibus/convert_to_omnibus.html).
+- It's important to ensure that any
+ [background migrations](../index.md#checking-for-background-migrations-before-upgrading)
+ have been fully completed before upgrading to a new major version. Upgrading
+ before background migrations have finished may lead to data corruption.
+- Gitaly servers must be upgraded to the newer version prior to upgrading the application server.
+ This prevents the gRPC client on the application server from sending RPCs that the old Gitaly version
+ does not support.
+
+You can upgrade the GitLab Package using one of the following methods:
+
+- [Using the official repositories](#upgrade-using-the-official-repositories).
+- [Using a manually-downloaded package](#upgrade-using-a-manually-downloaded-package).
+
+Both automatically back up the GitLab database before installing a newer
+GitLab version. You may skip this automatic database backup by creating an empty file
+at `/etc/gitlab/skip-auto-backup`:
+
+```shell
+sudo touch /etc/gitlab/skip-auto-backup
+```
+
+For safety reasons, you should maintain an up-to-date backup on your own if you plan to use this flag.
+
+## Version-specific changes
+
+Updating to major versions might need some manual intervention. For more information,
+check the version you are upgrading to:
+
+- [GitLab 14](https://docs.gitlab.com/omnibus/gitlab_14_changes.html)
+- [GitLab 13](https://docs.gitlab.com/omnibus/gitlab_13_changes.html)
+- [GitLab 12](https://docs.gitlab.com/omnibus/gitlab_12_changes.html)
+- [GitLab 11](https://docs.gitlab.com/omnibus/gitlab_11_changes.html)
+
+## Upgrade using the official repositories
+
+All GitLab packages are posted to the GitLab [package server](https://packages.gitlab.com/gitlab/).
+Five repositories are maintained:
+
+- [GitLab EE](https://packages.gitlab.com/gitlab/gitlab-ee): for official
+ [Enterprise Edition](https://about.gitlab.com/pricing/) releases.
+- [GitLab CE](https://packages.gitlab.com/gitlab/gitlab-ce): for official Community Edition releases.
+- [Unstable](https://packages.gitlab.com/gitlab/unstable): for release candidates and other unstable versions.
+- [Nightly Builds](https://packages.gitlab.com/gitlab/nightly-builds): for nightly builds.
+- [Raspberry Pi](https://packages.gitlab.com/gitlab/raspberry-pi2): for official Community Edition releases built for [Raspberry Pi](https://www.raspberrypi.org) packages.
+
+If you have installed Omnibus GitLab [Community Edition](https://about.gitlab.com/install/?version=ce)
+or [Enterprise Edition](https://about.gitlab.com/install/), then the
+official GitLab repository should have already been set up for you.
+
+To upgrade to the newest GitLab version, run:
+
+- For GitLab [Enterprise Edition](https://about.gitlab.com/pricing/):
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update
+ sudo apt-get install gitlab-ee
+
+ # Centos/RHEL
+ sudo yum install gitlab-ee
+ ```
+
+- For GitLab Community Edition:
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update
+ sudo apt-get install gitlab-ce
+
+ # Centos/RHEL
+ sudo yum install gitlab-ce
+ ```
+
+### Upgrade to a specific version using the official repositories
+
+Linux package managers default to installing the latest available version of a
+package for installation and upgrades. Upgrading directly to the latest major
+version can be problematic for older GitLab versions that require a multi-stage
+[upgrade path](../index.md#upgrade-paths). An upgrade path can span multiple
+versions, so you must specify the specific GitLab package with each upgrade.
+
+To specify the intended GitLab version number in your package manager's install
+or upgrade command:
+
+1. First, identify the GitLab version number in your package manager:
+
+ ```shell
+ # Ubuntu/Debian
+ sudo apt-cache madison gitlab-ee
+ # RHEL/CentOS 6 and 7
+ yum --showduplicates list gitlab-ee
+ # RHEL/CentOS 8
+ dnf search gitlab-ee*
+ ```
+
+1. Then install the specific GitLab package:
+
+ ```shell
+ # Ubuntu/Debian
+ sudo apt install gitlab-ee=12.0.12-ee.0
+ # RHEL/CentOS 6 and 7
+ yum install gitlab-ee-12.0.12-ee.0.el7
+ # RHEL/CentOS 8
+ dnf install gitlab-ee-12.0.12-ee.0.el8
+ # SUSE
+ zypper install gitlab-ee=12.0.12-ee.0
+ ```
+
+## Upgrade using a manually-downloaded package
+
+NOTE:
+The [package repository](#upgrade-using-the-official-repositories) is recommended over
+a manual installation.
+
+If for some reason you don't use the official repositories, you can
+download the package and install it manually. This method can be used to either
+install GitLab for the first time or update it.
+
+To download and install GitLab:
+
+1. Visit the [official repository](#upgrade-using-the-official-repositories) of your package.
+1. Browse to the repository for the type of package you would like, to see the
+ list of packages that are available. Multiple packages exist for a
+ single version, one for each supported distribution type. Next to the filename
+ is a label indicating the distribution, as the file names may be the same.
+1. Find the package version you wish to install and click on it.
+1. Click the **Download** button in the upper right corner to download the package.
+1. After the GitLab package is downloaded, install it using the following commands:
+
+ - For GitLab [Enterprise Edition](https://about.gitlab.com/pricing/):
+
+ ```shell
+ # Debian/Ubuntu
+ dpkg -i gitlab-ee-<version>.deb
+
+ # CentOS/RHEL
+ rpm -Uvh gitlab-ee-<version>.rpm
+ ```
+
+ - For GitLab Community Edition:
+
+ ```shell
+ # GitLab Community Edition
+ # Debian/Ubuntu
+ dpkg -i gitlab-ce-<version>.deb
+
+ # CentOS/RHEL
+ rpm -Uvh gitlab-ce-<version>.rpm
+ ```
+
+## Troubleshooting
+
+### GitLab 13.7 and later unavailable on Amazon Linux 2
+
+Amazon Linux 2 is not an [officially supported operating system](../../administration/package_information/deprecated_os.md#supported-operating-systems).
+However, in the past the [official package installation script](https://packages.gitlab.com/gitlab/gitlab-ee/install)
+installed the `el/6` package repository if run on Amazon Linux. From GitLab 13.7, we no longer
+provide `el/6` packages, so administrators must run the [installation script](https://packages.gitlab.com/gitlab/gitlab-ee/install)
+again to update the repository to `el/7`:
+
+```shell
+curl --silent "https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.rpm.sh" | sudo bash
+```
+
+See the [epic on support for GitLab on Amazon Linux 2](https://gitlab.com/groups/gitlab-org/-/epics/2195) for the latest details on official Amazon Linux 2 support.
+
+### Get the status of a GitLab installation
+
+```shell
+sudo gitlab-ctl status
+sudo gitlab-rake gitlab:check SANITIZE=true
+```
+
+- Information on using `gitlab-ctl` to perform [maintenance tasks](https://docs.gitlab.com/omnibus/maintenance/index.html).
+- Information on using `gitlab-rake` to [check the configuration](../../administration/raketasks/maintenance.md#check-gitlab-configuration).
+
+### RPM 'package is already installed' error
+
+If you are using RPM and you are upgrading from GitLab Community Edition to GitLab Enterprise Edition you may get an error like this:
+
+```shell
+package gitlab-7.5.2_omnibus.5.2.1.ci-1.el7.x86_64 (which is newer than gitlab-7.5.2_ee.omnibus.5.2.1.ci-1.el7.x86_64) is already installed
+```
+
+You can override this version check with the `--oldpackage` option:
+
+```shell
+sudo rpm -Uvh --oldpackage gitlab-7.5.2_ee.omnibus.5.2.1.ci-1.el7.x86_64.rpm
+```
+
+### Package obsoleted by installed package
+
+CE and EE packages are marked as obsoleting and replacing each other so that both aren't installed and running at the same time.
+
+If you are using local RPM files to switch from CE to EE or vice versa, use `rpm` for installing the package rather than `yum`. If you try to use yum, then you may get an error like this:
+
+```plaintext
+Cannot install package gitlab-ee-11.8.3-ee.0.el6.x86_64. It is obsoleted by installed package gitlab-ce-11.8.3-ce.0.el6.x86_64
+```
+
+To avoid this issue, either:
+
+- Use the same instructions provided in the
+ [Upgrade using a manually-downloaded package](#upgrade-using-a-manually-downloaded-package) section.
+- Temporarily disable this checking in yum by adding `--setopt=obsoletes=0` to the options given to the command.
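+
+For example, with a locally downloaded package (`<version>` is a placeholder):
+
+```shell
+# Install the local package with rpm directly...
+sudo rpm -Uvh gitlab-ee-<version>.rpm
+
+# ...or temporarily disable the obsoletes check for a single yum command:
+sudo yum install --setopt=obsoletes=0 gitlab-ee-<version>.rpm
+```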
+
+### 500 error when accessing Project > Settings > Repository
+
+When GitLab is migrated from CE > EE > CE, and then back to EE, you
+might get the following error when viewing a project's repository settings:
+
+```shell
+Processing by Projects::Settings::RepositoryController#show as HTML
+ Parameters: {"namespace_id"=>"<namespace_id>", "project_id"=>"<project_id>"}
+Completed 500 Internal Server Error in 62ms (ActiveRecord: 4.7ms | Elasticsearch: 0.0ms | Allocations: 14583)
+
+NoMethodError (undefined method `commit_message_negative_regex' for #<PushRule:0x00007fbddf4229b8>
+Did you mean? commit_message_regex_change):
+```
+
+This error is caused by an EE feature being added to a CE instance on the initial move to EE.
+After the instance is moved back to CE and then is upgraded to EE again, the
+`push_rules` table already exists in the database. Therefore, a migration is
+unable to add the `commit_message_regex_change` column.
+
+This results in the [backport migration of EE tables](https://gitlab.com/gitlab-org/gitlab/-/blob/cf00e431024018ddd82158f8a9210f113d0f4dbc/db/migrate/20190402150158_backport_enterprise_schema.rb#L1619) not working correctly.
+The backport migration assumes that certain tables in the database do not exist when running CE.
+
+To fix this issue, manually add the missing `commit_message_negative_regex` column and restart GitLab:
+
+```shell
+# Access psql
+sudo gitlab-rails dbconsole
+
+# Add the missing column
+ALTER TABLE push_rules ADD COLUMN commit_message_negative_regex VARCHAR;
+
+# Exit psql
+\q
+
+# Restart GitLab
+sudo gitlab-ctl restart
+```
+
+### Error `Failed to connect to the internal GitLab API` on a separate GitLab Pages server
+
+Please see [GitLab Pages troubleshooting](../../administration/pages/index.md#failed-to-connect-to-the-internal-gitlab-api).
diff --git a/doc/update/patch_versions.md b/doc/update/patch_versions.md
index 0a7057ffe97..d09f19d143b 100644
--- a/doc/update/patch_versions.md
+++ b/doc/update/patch_versions.md
@@ -103,7 +103,7 @@ sudo -u git -H make
### 8. Install/Update `gitlab-elasticsearch-indexer` **(PREMIUM SELF)**
-Please follow the [install instruction](../integration/elasticsearch.md#installing-elasticsearch).
+Please follow the [install instruction](../integration/elasticsearch.md#install-elasticsearch).
### 9. Start application
diff --git a/doc/update/plan_your_upgrade.md b/doc/update/plan_your_upgrade.md
new file mode 100644
index 00000000000..406f8322218
--- /dev/null
+++ b/doc/update/plan_your_upgrade.md
@@ -0,0 +1,180 @@
+---
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+---
+
+# Create a GitLab upgrade plan
+
+This document serves as a guide to create a strong plan to upgrade a self-managed
+GitLab instance.
+
+General notes:
+
+- If possible, we recommend you test out the upgrade in a test environment before
+ updating your production instance. Ideally, your test environment should mimic
+ your production environment as closely as possible.
+- If [working with Support](https://about.gitlab.com/support/scheduling-live-upgrade-assistance.html)
+ to create your plan, share details of your architecture, including:
+ - How is GitLab installed?
+ - What is the operating system of the node?
+ (check [OS versions that are no longer supported](../administration/package_information/deprecated_os.md) to confirm that later updates are available).
+ - Is it a single-node or a multi-node setup? If multi-node, share any architectural details about each node with us.
+ - Are you using [GitLab Geo](../administration/geo/index.md)? If so, share any architectural details about each secondary node.
+ - What else might be unique or interesting in your setup that might be important for us to understand?
+ - Are you running into any known issues with your current version of GitLab?
+
+## Pre-upgrade and post-upgrade checks
+
+Immediately before and after the upgrade, perform the pre-upgrade and post-upgrade checks
+to ensure the major components of GitLab are working:
+
+1. [Check the general configuration](../administration/raketasks/maintenance.md#check-gitlab-configuration):
+
+ ```shell
+ sudo gitlab-rake gitlab:check
+ ```
+
+1. Confirm that encrypted database values [can be decrypted](../administration/raketasks/doctor.md#verify-database-values-can-be-decrypted-using-the-current-secrets):
+
+ ```shell
+ sudo gitlab-rake gitlab:doctor:secrets
+ ```
+
+1. In GitLab UI, check that:
+ - Users can log in.
+ - The project list is visible.
+ - Project issues and merge requests are accessible.
+ - Users can clone repositories from GitLab.
+ - Users can push commits to GitLab.
+
+1. For GitLab CI/CD, check that:
+ - Runners pick up jobs.
+ - Docker images can be pushed and pulled from the registry.
+
+1. If using Geo, run the relevant checks on the primary and each secondary:
+
+ ```shell
+ sudo gitlab-rake gitlab:geo:check
+ ```
+
+1. If using Elasticsearch, verify that searches are successful.
+
+If in any case something goes wrong, see [how to troubleshoot](#troubleshooting).
+
+## Rollback plan
+
+It's possible that something may go wrong during an upgrade, so it's critical
+that a rollback plan be present for that scenario. A proper rollback plan
+creates a clear path to bring the instance back to its last working state. It
+consists of a way to back up the instance and a way to restore it.
+
+### Back up GitLab
+
+Create a backup of GitLab and all its data (database, repos, uploads, builds,
+artifacts, LFS objects, registry, pages). This is vital for making it possible
+to roll back GitLab to a working state if there's a problem with the upgrade:
+
+- Create a [GitLab backup](../raketasks/backup_restore.md#back-up-gitlab).
+ Make sure to follow the instructions based on your installation method.
+ Don't forget to back up the [secrets and configuration files](../raketasks/backup_restore.md#storing-configuration-files).
+- Alternatively, create a snapshot of your instance. If this is a multi-node
+ installation, you must snapshot every node.
+ **This process is out of scope for GitLab Support.**
+
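+For example, on an Omnibus installation the backup and the copy of the configuration files might
+look like this (a sketch; other installation methods differ):
+
+```shell
+# Create a full application backup (database, repositories, uploads, and so on)
+sudo gitlab-backup create
+
+# Secrets and configuration files are not included in the backup archive;
+# copy them to a safe location separately.
+sudo cp /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab.rb /path/to/safe/location/
+```
+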
+### Restore GitLab
+
+To restore your GitLab backup:
+
+- Before restoring, make sure to read about the
+ [prerequisites](../raketasks/backup_restore.md#restore-gitlab), most importantly,
+ the versions of the backed up and the new GitLab instance must be the same.
+- [Restore GitLab](../raketasks/backup_restore.md#restore-gitlab).
+ Make sure to follow the instructions based on your installation method.
+ Confirm that the [secrets and configuration files](../raketasks/backup_restore.md#storing-configuration-files) are also restored.
+- If restoring from a snapshot, know the steps to do this.
+ **This process is out of scope for GitLab Support.**
+
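+For example, on an Omnibus installation the restore might look like this (a sketch, assuming the
+backup archive is already in the configured backup directory and GitLab is installed at the same
+version the backup was taken on):
+
+```shell
+# Stop the processes that connect to the database, then restore, reconfigure, and verify
+sudo gitlab-ctl stop puma
+sudo gitlab-ctl stop sidekiq
+sudo gitlab-backup restore BACKUP=<backup_timestamp>
+sudo gitlab-ctl reconfigure
+sudo gitlab-ctl restart
+sudo gitlab-rake gitlab:check SANITIZE=true
+```
+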
+## Upgrade plan
+
+For the upgrade plan, start by creating an outline of a plan that best applies
+to your instance and then expand it to cover any relevant features you're using.
+
+- Generate an upgrade plan by reading and understanding the relevant documentation:
+ - upgrade based on the installation method:
+ - [Linux package (Omnibus)](index.md#linux-packages-omnibus-gitlab)
+ - [Compiled from source](index.md#installation-from-source)
+ - [Docker](index.md#installation-using-docker)
+ - [Helm Charts](index.md#installation-using-helm)
+ - [Zero-downtime upgrades](zero_downtime.md) (if possible and desired)
+ - [Convert from GitLab Community Edition to Enterprise Edition](package/convert_to_ee.md)
+- What version should you upgrade to:
+ - [Determine what upgrade path](index.md#upgrade-paths) to follow.
+ - Account for any [version-specific update instructions](index.md#version-specific-upgrading-instructions).
+ - Account for any [version-specific changes](package/index.md#version-specific-changes).
+ - Check the [OS compatibility with the target GitLab version](../administration/package_information/deprecated_os.md).
+- Due to background migrations, plan to pause any further upgrades after upgrading
+ to a new major version.
+ [All migrations must finish running](index.md#checking-for-background-migrations-before-upgrading)
+ before the next upgrade.
+- If available in your starting version, consider
+ [turning on maintenance mode](../administration/maintenance_mode/) during the
+ upgrade.
+- About PostgreSQL:
+ - On the top bar, select **Menu > Admin**, and look for the version of
+ PostgreSQL you are using.
+ If [a PostgreSQL upgrade is needed](../administration/package_information/postgresql_versions.md),
+ account for the relevant
+ [packaged](https://docs.gitlab.com/omnibus/settings/database.html#upgrade-packaged-postgresql-server)
+ or [non-packaged](https://docs.gitlab.com/omnibus/settings/database.html#upgrade-a-non-packaged-postgresql-database) steps.
+
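+As an alternative to the Admin Area, on an Omnibus installation you can query the bundled
+database directly (a sketch):
+
+```shell
+# Print the running PostgreSQL server version
+sudo gitlab-psql -c 'SELECT version();'
+```
+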
+### Additional features
+
+Apart from all the generic information above, you may have enabled some features
+that require special planning.
+
+Feel free to ignore sections about features that are inapplicable to your setup,
+such as Geo, external Gitaly, or Elasticsearch.
+
+#### External Gitaly
+
+If you're using an external Gitaly server, it must be upgraded to the newer
+version prior to upgrading the application server.
+
+#### Geo
+
+If you're using Geo:
+
+- Review [Geo upgrade documentation](../administration/geo/replication/updating_the_geo_nodes.md).
+- Read about the [Geo version-specific update instructions](../administration/geo/replication/version_specific_updates.md).
+- Review Geo-specific steps when [updating the database](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance).
+- Create an upgrade and rollback plan for _each_ Geo node (primary and each secondary).
+
+#### Runners
+
+After updating GitLab, upgrade your runners to match
+[your new GitLab version](https://docs.gitlab.com/runner/#gitlab-runner-versions).
+
+#### Elasticsearch
+
+After updating GitLab, you may have to upgrade
+[Elasticsearch if the new version breaks compatibility](../integration/elasticsearch.md#version-requirements).
+Updating Elasticsearch is **out of scope for GitLab Support**.
+
+## Troubleshooting
+
+If anything doesn't go as planned:
+
+- If time is of the essence, copy any errors and gather any logs to later analyze,
+ and then [roll back to the last working version](#rollback-plan). You can use
+ the following tools to help you gather data:
+ - [`gitlabsos`](https://gitlab.com/gitlab-com/support/toolbox/gitlabsos) if
+ you installed GitLab using the Linux package or Docker.
+ - [`kubesos`](https://gitlab.com/gitlab-com/support/toolbox/kubesos/) if
+ you installed GitLab using the Helm Charts.
+- For support:
+ - [Contact GitLab Support](https://support.gitlab.com) and,
+ if you have one, your Technical Account Manager.
+ - If [the situation qualifies](https://about.gitlab.com/support/#definitions-of-support-impact)
+ and [your plan includes emergency support](https://about.gitlab.com/support/#priority-support),
+ create an emergency ticket.
diff --git a/doc/update/upgrading_from_ce_to_ee.md b/doc/update/upgrading_from_ce_to_ee.md
index 93c9432f6d3..d91b3de6df1 100644
--- a/doc/update/upgrading_from_ce_to_ee.md
+++ b/doc/update/upgrading_from_ce_to_ee.md
@@ -88,7 +88,7 @@ sudo -u git -H bundle exec rake cache:clear RAILS_ENV=production
### 4. Install `gitlab-elasticsearch-indexer` **(PREMIUM SELF)**
-Please follow the [install instruction](../integration/elasticsearch.md#installing-elasticsearch).
+Please follow the [install instruction](../integration/elasticsearch.md#install-elasticsearch).
### 5. Start application
diff --git a/doc/update/upgrading_from_source.md b/doc/update/upgrading_from_source.md
index dd7ef27feca..9abf993f0fe 100644
--- a/doc/update/upgrading_from_source.md
+++ b/doc/update/upgrading_from_source.md
@@ -69,9 +69,9 @@ Download Ruby and compile it:
```shell
mkdir /tmp/ruby && cd /tmp/ruby
-curl --remote-name --progress "https://cache.ruby-lang.org/pub/ruby/2.7/ruby-2.7.2.tar.gz"
-echo 'cb9731a17487e0ad84037490a6baf8bfa31a09e8 ruby-2.7.2.tar.gz' | shasum -c - && tar xzf ruby-2.7.2.tar.gz
-cd ruby-2.7.2
+curl --remote-name --progress-bar "https://cache.ruby-lang.org/pub/ruby/2.7/ruby-2.7.4.tar.gz"
+echo '3043099089608859fc8cce7f9fdccaa1f53a462457e3838ec3b25a7d609fbc5b ruby-2.7.4.tar.gz' | sha256sum -c - && tar xzf ruby-2.7.4.tar.gz
+cd ruby-2.7.4
./configure --disable-install-rdoc --enable-shared
make
@@ -107,11 +107,11 @@ Download and install Go (for Linux, 64-bit):
# Remove former Go installation folder
sudo rm -rf /usr/local/go
-curl --remote-name --progress "https://dl.google.com/go/go1.13.5.linux-amd64.tar.gz"
-echo '512103d7ad296467814a6e3f635631bd35574cab3369a97a323c9a585ccaa569 go1.13.5.linux-amd64.tar.gz' | shasum -a256 -c - && \
- sudo tar -C /usr/local -xzf go1.13.5.linux-amd64.tar.gz
+curl --remote-name --progress-bar "https://dl.google.com/go/go1.15.12.linux-amd64.tar.gz"
+echo 'bbdb935699e0b24d90e2451346da76121b2412d30930eabcd80907c230d098b7 go1.15.12.linux-amd64.tar.gz' | shasum -a256 -c - && \
+ sudo tar -C /usr/local -xzf go1.15.12.linux-amd64.tar.gz
sudo ln -sf /usr/local/go/bin/{go,godoc,gofmt} /usr/local/bin/
-rm go1.13.5.linux-amd64.tar.gz
+rm go1.15.12.linux-amd64.tar.gz
```
diff --git a/doc/update/zero_downtime.md b/doc/update/zero_downtime.md
new file mode 100644
index 00000000000..f0e6377f355
--- /dev/null
+++ b/doc/update/zero_downtime.md
@@ -0,0 +1,942 @@
+---
+stage: Enablement
+group: Distribution
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+---
+
+# Zero downtime upgrades
+
+Starting with GitLab 9.1.0 it's possible to upgrade to a newer major, minor, or
+patch version of GitLab without having to take your GitLab instance offline.
+However, for this to work there are the following requirements:
+
+- You can only upgrade 1 minor release at a time. So from 9.1 to 9.2, not to
+ 9.3. If you skip releases, database modifications may be run in the wrong
+ sequence [and leave the database schema in a broken state](https://gitlab.com/gitlab-org/gitlab/-/issues/321542).
+- You have to use [post-deployment migrations](../development/post_deployment_migrations.md).
+- You are using PostgreSQL. Starting from GitLab 12.1, MySQL is not supported.
+- Multi-node GitLab instance. Single-node instances may experience brief interruptions
+ [as services restart (Puma in particular)](#single-node-deployment).
+
+If you meet all the requirements above, follow these instructions in order. There are three sets of steps, depending on your deployment type:
+
+| Deployment type | Description |
+| --------------------------------------------------------------- | ------------------------------------------------ |
+| [Single-node](#single-node-deployment) | GitLab CE/EE on a single node |
+| [Gitaly Cluster](#gitaly-cluster) | GitLab CE/EE using HA architecture for Gitaly Cluster |
+| [Multi-node / PostgreSQL HA](#use-postgresql-ha) | GitLab CE/EE using HA architecture for PostgreSQL |
+| [Multi-node / Redis HA](#use-redis-ha-using-sentinel) | GitLab CE/EE using HA architecture for Redis |
+| [Geo](#geo-deployment) | GitLab EE with Geo enabled |
+| [Multi-node / HA with Geo](#multi-node--ha-deployment-with-geo) | GitLab CE/EE on multiple nodes |
+
+Each type of deployment will require that you hot reload the `puma` and `sidekiq` processes on all nodes running these
+services after you've upgraded. The reason for this is that those processes each load the GitLab Rails application which reads and loads
+the database schema into memory when starting up. Each of these processes will need to be reloaded (or restarted in the case of `sidekiq`)
+to re-read any database changes that have been made by post-deployment migrations.
+
+Most of the time you can safely upgrade from a patch release to the next minor
+release if the patch release is not the latest. For example, upgrading from
+9.1.1 to 9.2.0 should be safe even if 9.1.2 has been released. We do recommend
+you check the release posts of any releases between your current and target
+version just in case they include any migrations that may require you to upgrade
+1 release at a time.
+
+Some releases may also include so called "background migrations". These
+migrations are performed in the background by Sidekiq and are often used for
+migrating data. Background migrations are only added in the monthly releases.
+
+Certain major/minor releases may require a set of background migrations to be
+finished. To guarantee this, such a release processes any remaining jobs
+before continuing the upgrading procedure. While this doesn't require downtime
+(if the above conditions are met) we require that you [wait for background
+migrations to complete](index.md#checking-for-background-migrations-before-upgrading)
+between each major/minor release upgrade.
+The time necessary to complete these migrations can be reduced by
+increasing the number of Sidekiq workers that can process jobs in the
+`background_migration` queue. To see the size of this queue,
+[Check for background migrations before upgrading](index.md#checking-for-background-migrations-before-upgrading).
+
+As a rule of thumb, any database smaller than 10 GB doesn't take too much time to
+upgrade; perhaps an hour at most per minor release. Larger databases however may
+require more time, but this is highly dependent on the size of the database and
+the migrations that are being performed.
+
+To help explain this, let's look at some examples:
+
+**Example 1:** You are running a large GitLab installation using version 9.4.2,
+which is the latest patch release of 9.4. When GitLab 9.5.0 is released this
+installation can be safely upgraded to 9.5.0 without requiring downtime if the
+requirements mentioned above are met. You can also skip 9.5.0 and upgrade to
+9.5.1 after it's released, but you **can not** upgrade straight to 9.6.0; you
+_have_ to first upgrade to a 9.5.Z release.
+
+**Example 2:** You are running a large GitLab installation using version 9.4.2,
+which is the latest patch release of 9.4. GitLab 9.5 includes some background
+migrations, and 10.0 requires these to be completed (processing any
+remaining jobs for you). Skipping 9.5 is not possible without downtime, and due
+to the background migrations would require potentially hours of downtime
+depending on how long it takes for the background migrations to complete. To
+work around this you have to upgrade to 9.5.Z first, then wait at least a
+week before upgrading to 10.0.
+
+**Example 3:** You use MySQL as the database for GitLab. Any upgrade to a new
+major/minor release requires downtime. If a release includes any background
+migrations this could potentially lead to hours of downtime, depending on the
+size of your database. To work around this you must use PostgreSQL and
+meet the other online upgrade requirements mentioned above.
+
+## Single-node deployment
+
+Before following these instructions, note the following **important** information:
+
+- You can only upgrade one minor release at a time. So from 13.6 to 13.7, not to 13.8.
+  If you attempt to skip minor releases, the upgrade may fail. See the example after
+  this list for how to target a specific package version.
+- On single-node Omnibus deployments, updates with no downtime are not possible when
+ using Puma because Puma always requires a complete restart. This is because the
+ [phased restart](https://github.com/puma/puma/blob/master/README.md#clustered-mode)
+ feature of Puma does not work with the way it is configured in GitLab all-in-one
+ packages (cluster-mode with app preloading).
+- While it is possible to minimize downtime on a single-node instance by following
+  these instructions, **it is not always possible to achieve true zero-downtime
+  updates**. Users may see some connections time out or be refused for a few minutes,
+  depending on which services need to restart.
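+
+For example, on Debian/Ubuntu you can list the available package versions and install
+a specific one, so that you only move up one minor release at a time (`<version>` is a
+placeholder; replace `gitlab-ce` with `gitlab-ee` for Enterprise Edition):
+
+```shell
+# List the GitLab package versions available in the configured repository
+sudo apt-cache madison gitlab-ce
+
+# Install a specific version so you only move one minor release forward
+sudo apt-get install gitlab-ce=<version>
+```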
+
+1. Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. This prevents upgrades from running `gitlab-ctl reconfigure`, which by default automatically stops GitLab, runs all database migrations, and restarts GitLab.
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+1. Update the GitLab package:
+
+ - For GitLab Community Edition:
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update
+ sudo apt-get install gitlab-ce
+
+ # Centos/RHEL
+ sudo yum install gitlab-ce
+ ```
+
+ - For GitLab [Enterprise Edition](https://about.gitlab.com/pricing/):
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update
+ sudo apt-get install gitlab-ee
+
+ # Centos/RHEL
+ sudo yum install gitlab-ee
+ ```
+
+1. To get the regular migrations and latest code in place, run
+
+ ```shell
+ sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-ctl reconfigure
+ ```
+
+1. Once the node is updated and `reconfigure` has finished successfully, run post-deployment migrations with
+
+ ```shell
+ sudo gitlab-rake db:migrate
+ ```
+
+1. Hot reload `puma` and `sidekiq` services
+
+ ```shell
+ sudo gitlab-ctl hup puma
+ sudo gitlab-ctl restart sidekiq
+ ```
+
+If you do not want to run zero downtime upgrades in the future, make
+sure you remove `/etc/gitlab/skip-auto-reconfigure` after
+you've completed these steps.
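+
+For example:
+
+```shell
+sudo rm /etc/gitlab/skip-auto-reconfigure
+```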
+
+## Multi-node / HA deployment
+
+You can only upgrade 1 minor release at a time. So from 13.6 to 13.7, not to 13.8.
+If you attempt more than one minor release, the upgrade may fail.
+
+### Use a load balancer in front of web (Puma) nodes
+
+With Puma, zero-downtime updates are no longer possible on a single node. To achieve
+zero-downtime updates in an HA configuration, at least two nodes are required,
+fronted by a load balancer that distributes connections across them.
+
+The load balancer in front of the application nodes must be configured to use the
+appropriate health check endpoints to determine whether the service is accepting
+traffic. For Puma, use the `/-/readiness` endpoint; the `/readiness` endpoint can be
+used for Sidekiq and other services.
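+
+For example, you can check the readiness endpoint from an application node itself;
+this sketch assumes the default Omnibus NGINX configuration and the default
+monitoring IP whitelist, which includes localhost:
+
+```shell
+# Returns HTTP 200 and a JSON body while the node is ready to accept traffic
+curl --fail "http://localhost/-/readiness"
+```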
+
+Upgrades on web (Puma) nodes must be done in a rolling manner, one after
+another, ensuring at least one node is always up to serve traffic. This is
+required to ensure zero downtime.
+
+Puma nodes enter a blackout period as part of the upgrade, during which they
+continue to accept connections but mark their health check endpoints as
+unhealthy. On seeing this, the load balancer should drain connections from
+them gracefully.
+
+Puma restarts only after completing all the requests it is currently processing.
+This ensures data and service integrity. Once the nodes have restarted, their
+health check endpoints are marked healthy again.
+
+To update an HA instance behind a load balancer to the latest GitLab version,
+the nodes must be updated in the following order.
+
+1. Select one application node as a deploy node and complete the following steps
+ on it:
+
+ 1. Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. This prevents upgrades from running `gitlab-ctl reconfigure`, which by default automatically stops GitLab, runs all database migrations, and restarts GitLab:
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+ 1. Update the GitLab package:
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ce
+
+ # Centos/RHEL
+ sudo yum install gitlab-ce
+ ```
+
+ If you are an Enterprise Edition user, replace `gitlab-ce` with
+ `gitlab-ee` in the above command.
+
+ 1. Get the regular migrations and latest code in place:
+
+ ```shell
+ sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-ctl reconfigure
+ ```
+
+ 1. Ensure services use the latest code:
+
+ ```shell
+ sudo gitlab-ctl hup puma
+ sudo gitlab-ctl restart sidekiq
+ ```
+
+1. Complete the following steps on the other Puma/Sidekiq nodes, one
+   after another. Always ensure at least one of these nodes is up and running,
+   and connected to the load balancer, before proceeding to the next node.
+
+ 1. Update the GitLab package and ensure a `reconfigure` is run as part of
+ it. If not (due to `/etc/gitlab/skip-auto-reconfigure` file being
+ present), run `sudo gitlab-ctl reconfigure` manually.
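+
+      For example (replace `gitlab-ce` with `gitlab-ee` for Enterprise Edition):
+
+      ```shell
+      # Debian/Ubuntu
+      sudo apt-get update && sudo apt-get install gitlab-ce
+
+      # CentOS/RHEL
+      sudo yum install gitlab-ce
+
+      # Only needed if reconfigure did not run as part of the package update
+      sudo gitlab-ctl reconfigure
+      ```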
+
+ 1. Ensure services use latest code:
+
+ ```shell
+ sudo gitlab-ctl hup puma
+ sudo gitlab-ctl restart sidekiq
+ ```
+
+1. On the deploy node, run the post-deployment migrations:
+
+ ```shell
+ sudo gitlab-rake db:migrate
+ ```
+
+### Gitaly Cluster
+
+[Gitaly Cluster](../administration/gitaly/praefect.md) is built using
+Gitaly and the Praefect component. It has its own PostgreSQL database, independent of the rest of
+the application.
+
+Before you update the main application, you need to update Praefect.
+Out of your Praefect nodes, pick one to be your Praefect deploy node.
+This is where you install the new Omnibus package first and run the Praefect
+database migrations.
+
+**Praefect deploy node**
+
+- Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. This prevents upgrades from running `gitlab-ctl reconfigure`, which by default automatically stops GitLab, runs all database migrations, and restarts GitLab:
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+- Ensure that `praefect['auto_migrate'] = true` is set in `/etc/gitlab/gitlab.rb`, so that the Praefect database migrations run during `reconfigure`.
+
+**All Praefect nodes _excluding_ the Praefect deploy node**
+
+- To prevent `reconfigure` from automatically running database migrations, ensure that `praefect['auto_migrate'] = false` is set in `/etc/gitlab/gitlab.rb`.
+
+**Praefect deploy node**
+
+- Update the GitLab package:
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ce
+
+ # Centos/RHEL
+ sudo yum install gitlab-ce
+ ```
+
+ If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.
+
+- To apply the Praefect database migrations and restart Praefect, run:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+**All Praefect nodes _excluding_ the Praefect deploy node**
+
+- Update the GitLab package:
+
+ ```shell
+ sudo apt-get update && sudo apt-get install gitlab-ce
+ ```
+
+ If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.
+
+- Ensure nodes are running the latest code:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+### Use PostgreSQL HA
+
+Pick a node to be the `Deploy Node`. It can be any application node, but it must be the same
+node throughout the process.
+
+**Deploy node**
+
+- Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. This prevents upgrades from running `gitlab-ctl reconfigure`, which by default automatically stops GitLab, runs all database migrations, and restarts GitLab.
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+**All nodes _including_ the Deploy node**
+
+- To prevent `reconfigure` from automatically running database migrations, ensure that `gitlab_rails['auto_migrate'] = false` is set in `/etc/gitlab/gitlab.rb`.
+
+**Gitaly-only nodes**
+
+- Update the GitLab package
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ce
+
+ # Centos/RHEL
+ sudo yum install gitlab-ce
+ ```
+
+ If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.
+
+- Ensure nodes are running the latest code
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+**Deploy node**
+
+- Update the GitLab package
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ce
+
+ # Centos/RHEL
+ sudo yum install gitlab-ce
+ ```
+
+ If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.
+
+- If you're using PgBouncer:
+
+ You'll need to bypass PgBouncer and connect directly to the database master
+ before running migrations.
+
+ Rails uses an advisory lock when attempting to run a migration to prevent
+ concurrent migrations from running on the same database. These locks are
+ not shared across transactions, resulting in `ActiveRecord::ConcurrentMigrationError`
+ and other issues when running database migrations using PgBouncer in transaction
+ pooling mode.
+
+ To find the master node, run the following on a database node:
+
+ ```shell
+ sudo gitlab-ctl patroni members
+ ```
+
+ Then, in your `gitlab.rb` file on the deploy node, update
+ `gitlab_rails['db_host']` and `gitlab_rails['db_port']` with the database
+ master's host and port.
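+
+  For example, in `/etc/gitlab/gitlab.rb` on the deploy node; the host below is a
+  placeholder, so use the values reported by Patroni:
+
+  ```ruby
+  # Temporarily point Rails directly at the database master, bypassing PgBouncer
+  gitlab_rails['db_host'] = '<database master host>'
+  gitlab_rails['db_port'] = 5432 # adjust if the master listens on a different port
+  ```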
+
+- To get the regular database migrations and latest code in place, run
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-rake db:migrate
+ ```
+
+**All nodes _excluding_ the Deploy node**
+
+- Update the GitLab package
+
+ ```shell
+ sudo apt-get update && sudo apt-get install gitlab-ce
+ ```
+
+ If you are an Enterprise Edition user, replace `gitlab-ce` with `gitlab-ee` in the above command.
+
+- Ensure nodes are running the latest code
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+**Deploy node**
+
+- Run post-deployment database migrations on the deploy node to complete the migrations with
+
+ ```shell
+ sudo gitlab-rake db:migrate
+ ```
+
+**For nodes that run Puma or Sidekiq**
+
+- Hot reload `puma` and `sidekiq` services
+
+ ```shell
+ sudo gitlab-ctl hup puma
+ sudo gitlab-ctl restart sidekiq
+ ```
+
+- If you're using PgBouncer:
+
+ Change your `gitlab.rb` to point back to PgBouncer and run:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+If you do not want to run zero downtime upgrades in the future, make
+sure you remove `/etc/gitlab/skip-auto-reconfigure` and revert
+setting `gitlab_rails['auto_migrate'] = false` in
+`/etc/gitlab/gitlab.rb` after you've completed these steps.
+
+### Use Redis HA (using Sentinel) **(PREMIUM ONLY)**
+
+Package upgrades may involve version updates to the bundled Redis service. On
+instances using [Redis for scaling](../administration/redis/index.md),
+upgrades must follow a proper order to minimize downtime, as specified
+below. This section assumes that the official guides were followed to set up Redis
+HA.
+
+#### On the application node
+
+According to the [official Redis docs](https://redis.io/topics/admin#upgrading-or-restarting-a-redis-instance-without-downtime),
+the easiest way to update an HA instance using Sentinel is to upgrade the
+secondaries one after the other, perform a manual failover from the current
+primary (running the old version) to a recently upgraded secondary (running the
+new version), and then upgrade the original primary. For this, we need to know
+the address of the current Redis primary.
+
+- If your application node is running GitLab 12.7.0 or later, you can use the
+  following command to get the address of the current Redis primary:
+
+ ```shell
+ sudo gitlab-ctl get-redis-master
+ ```
+
+- If your application node is running a version older than GitLab 12.7.0, you
+  have to run the underlying `redis-cli` command (which the `get-redis-master`
+  command uses) to fetch information about the primary:
+
+ 1. Get the address of one of the sentinel nodes specified as
+ `gitlab_rails['redis_sentinels']` in `/etc/gitlab/gitlab.rb`
+
+ 1. Get the Redis master name specified as `redis['master_name']` in
+ `/etc/gitlab/gitlab.rb`
+
+ 1. Run the following command
+
+ ```shell
+ sudo /opt/gitlab/embedded/bin/redis-cli -h <sentinel host> -p <sentinel port> SENTINEL get-master-addr-by-name <redis master name>
+ ```
+
+#### On the Redis secondary nodes
+
+1. Install the package for the new version.
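+
+   For example:
+
+   ```shell
+   # Debian/Ubuntu
+   sudo apt-get update && sudo apt-get install gitlab-ee
+
+   # CentOS/RHEL
+   sudo yum install gitlab-ee
+   ```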
+
+1. Run `sudo gitlab-ctl reconfigure` if a reconfigure was not run as part of the
+   installation (because the `/etc/gitlab/skip-auto-reconfigure` file is present).
+
+1. If reconfigure warns about a pending Redis/Sentinel restart, restart the
+ corresponding service
+
+ ```shell
+ sudo gitlab-ctl restart redis
+ sudo gitlab-ctl restart sentinel
+ ```
+
+#### On the Redis primary node
+
+Before upgrading the Redis primary node, we need to perform a failover so that
+one of the recently upgraded secondary nodes becomes the new primary. Once the
+failover is complete, we can go ahead and upgrade the original primary node.
+
+1. Stop the Redis service on the Redis primary node so that it fails over to a
+   secondary node:
+
+ ```shell
+ sudo gitlab-ctl stop redis
+ ```
+
+1. Wait for the failover to complete. You can verify it by periodically checking
+   the details of the current Redis primary node (as described above). If it starts
+   reporting a new IP, the failover is complete.
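+
+   For example, on an application node running GitLab 12.7.0 or later:
+
+   ```shell
+   sudo gitlab-ctl get-redis-master
+   ```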
+
+1. Start Redis again on that node, so that it starts following the current
+   primary node.
+
+ ```shell
+ sudo gitlab-ctl start redis
+ ```
+
+1. Install the package for the new version.
+
+1. Run `sudo gitlab-ctl reconfigure` if a reconfigure was not run as part of the
+   installation (because the `/etc/gitlab/skip-auto-reconfigure` file is present).
+
+1. If reconfigure warns about a pending Redis/Sentinel restart, restart the
+ corresponding service
+
+ ```shell
+ sudo gitlab-ctl restart redis
+ sudo gitlab-ctl restart sentinel
+ ```
+
+#### Update the application node
+
+Install the package for the new version and follow the regular package upgrade
+procedure.
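+
+For example:
+
+```shell
+# Debian/Ubuntu
+sudo apt-get update && sudo apt-get install gitlab-ee
+
+# CentOS/RHEL
+sudo yum install gitlab-ee
+
+# Only needed if reconfigure did not run as part of the package update
+sudo gitlab-ctl reconfigure
+```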
+
+## Geo deployment **(PREMIUM ONLY)**
+
+The order of steps is important. Make sure you follow them in the order given,
+on the correct node.
+
+Log in to your **primary** node and execute the following:
+
+1. Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. This prevents upgrades from running `gitlab-ctl reconfigure`, which by default automatically stops GitLab, runs all database migrations, and restarts GitLab.
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and ensure the following is present:
+
+ ```ruby
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Reconfigure GitLab:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+1. Update the GitLab package
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ee
+
+ # Centos/RHEL
+ sudo yum install gitlab-ee
+ ```
+
+1. To get the database migrations and latest code in place, run
+
+ ```shell
+ sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-ctl reconfigure
+ ```
+
+1. Hot reload `puma` and `sidekiq` services
+
+ ```shell
+ sudo gitlab-ctl hup puma
+ sudo gitlab-ctl restart sidekiq
+ ```
+
+On each **secondary** node, execute the following:
+
+1. Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. This prevents upgrades from running `gitlab-ctl reconfigure`, which by default automatically stops GitLab, runs all database migrations, and restarts GitLab.
+
+ ```shell
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and ensure the following is present:
+
+ ```ruby
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Reconfigure GitLab:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+1. Update the GitLab package
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ee
+
+ # Centos/RHEL
+ sudo yum install gitlab-ee
+ ```
+
+1. To get the database migrations and latest code in place, run
+
+ ```shell
+ sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-ctl reconfigure
+ ```
+
+1. Hot reload the `puma` and `sidekiq` services, and restart the `geo-logcursor` service
+
+ ```shell
+ sudo gitlab-ctl hup puma
+ sudo gitlab-ctl restart sidekiq
+ sudo gitlab-ctl restart geo-logcursor
+ ```
+
+1. Run post-deployment database migrations, specific to the Geo database
+
+ ```shell
+ sudo gitlab-rake geo:db:migrate
+ ```
+
+After all **secondary** nodes are updated, finalize
+the update on the **primary** node:
+
+- Run post-deployment database migrations
+
+ ```shell
+ sudo gitlab-rake db:migrate
+ ```
+
+After updating all nodes (both **primary** and all **secondaries**), check their status:
+
+- Verify Geo configuration and dependencies
+
+ ```shell
+ sudo gitlab-rake gitlab:geo:check
+ ```
+
+If you do not want to run zero downtime upgrades in the future, make
+sure you remove `/etc/gitlab/skip-auto-reconfigure` and revert
+setting `gitlab_rails['auto_migrate'] = false` in
+`/etc/gitlab/gitlab.rb` after you've completed these steps.
+
+## Multi-node / HA deployment with Geo **(PREMIUM ONLY)**
+
+This section describes the steps required to upgrade a multi-node / HA
+deployment with Geo. Some steps must be performed on a particular node. This
+node is referred to as the "deploy node" throughout the following
+instructions.
+
+Updates must be performed in the following order:
+
+1. Update Geo **primary** multi-node deployment.
+1. Update Geo **secondary** multi-node deployments.
+1. Post-deployment migrations and checks.
+
+### Step 1: Choose a "deploy node" for each deployment
+
+You now need to choose:
+
+- One instance for use as the **primary** "deploy node" on the Geo **primary** multi-node deployment.
+- One instance for use as the **secondary** "deploy node" on each Geo **secondary** multi-node deployment.
+
+Deploy nodes must be configured to run Puma, Sidekiq, or the `geo-logcursor` daemon.
+To avoid any downtime, they must not be in use during the update:
+
+- If running Puma, remove the deploy node from the load balancer.
+- If running Sidekiq, ensure the deploy node is not processing jobs:
+
+ ```shell
+ sudo gitlab-ctl stop sidekiq
+ ```
+
+- If running `geo-logcursor` daemon, ensure the deploy node is not processing events:
+
+ ```shell
+ sudo gitlab-ctl stop geo-logcursor
+ ```
+
+For zero-downtime, Puma, Sidekiq, and `geo-logcursor` must be running on other nodes during the update.
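+
+To confirm which services are running on a given node, you can check the service status:
+
+```shell
+sudo gitlab-ctl status
+```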
+
+### Step 2: Update the Geo primary multi-node deployment
+
+**On all primary nodes _including_ the primary "deploy node"**
+
+1. Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. This prevents upgrades from running `gitlab-ctl reconfigure`, which by default automatically stops GitLab, runs all database migrations, and restarts GitLab.
+
+   ```shell
+   sudo touch /etc/gitlab/skip-auto-reconfigure
+   ```
+
+1. To prevent `reconfigure` from automatically running database migrations, ensure that `gitlab_rails['auto_migrate'] = false` is set in `/etc/gitlab/gitlab.rb`.
+
+1. Ensure nodes are running the latest code
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+**On primary Gitaly-only nodes**
+
+1. Update the GitLab package
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ee
+
+ # Centos/RHEL
+ sudo yum install gitlab-ee
+ ```
+
+1. Ensure nodes are running the latest code
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+**On the primary "deploy node"**
+
+1. Update the GitLab package
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ee
+
+ # Centos/RHEL
+ sudo yum install gitlab-ee
+ ```
+
+1. If you're using PgBouncer:
+
+ You'll need to bypass PgBouncer and connect directly to the database master
+ before running migrations.
+
+ Rails uses an advisory lock when attempting to run a migration to prevent
+ concurrent migrations from running on the same database. These locks are
+ not shared across transactions, resulting in `ActiveRecord::ConcurrentMigrationError`
+ and other issues when running database migrations using PgBouncer in transaction
+ pooling mode.
+
+ To find the master node, run the following on a database node:
+
+ ```shell
+ sudo gitlab-ctl patroni members
+ ```
+
+ Then, in your `gitlab.rb` file on the deploy node, update
+ `gitlab_rails['db_host']` and `gitlab_rails['db_port']` with the database
+ master's host and port.
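+
+   For example, in `/etc/gitlab/gitlab.rb` on the deploy node; the host below is a
+   placeholder, so use the values reported by Patroni:
+
+   ```ruby
+   # Temporarily point Rails directly at the database master, bypassing PgBouncer
+   gitlab_rails['db_host'] = '<database master host>'
+   gitlab_rails['db_port'] = 5432 # adjust if the master listens on a different port
+   ```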
+
+1. To get the regular database migrations and latest code in place, run
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-rake db:migrate
+ ```
+
+1. If this deploy node is normally used to serve requests or process jobs,
+ then you may return it to service at this point.
+
+ - To serve requests, add the deploy node to the load balancer.
+ - To process Sidekiq jobs again, start Sidekiq:
+
+ ```shell
+ sudo gitlab-ctl start sidekiq
+ ```
+
+**On all primary nodes _excluding_ the primary "deploy node"**
+
+1. Update the GitLab package
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ee
+
+ # Centos/RHEL
+ sudo yum install gitlab-ee
+ ```
+
+1. Ensure nodes are running the latest code
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+**For all primary nodes that run Puma or Sidekiq _including_ the primary "deploy node"**
+
+Hot reload `puma` and `sidekiq` services:
+
+```shell
+sudo gitlab-ctl hup puma
+sudo gitlab-ctl restart sidekiq
+```
+
+### Step 3: Update each Geo secondary multi-node deployment
+
+Only proceed if you have successfully completed all steps on the Geo **primary** multi-node deployment.
+
+**On all secondary nodes _including_ the secondary "deploy node"**
+
+1. Create an empty file at `/etc/gitlab/skip-auto-reconfigure`. This prevents upgrades from running `gitlab-ctl reconfigure`, which by default automatically stops GitLab, runs all database migrations, and restarts GitLab.
+
+   ```shell
+   sudo touch /etc/gitlab/skip-auto-reconfigure
+   ```
+
+1. To prevent `reconfigure` from automatically running database migrations, ensure that `geo_secondary['auto_migrate'] = false` is set in `/etc/gitlab/gitlab.rb`.
+
+1. Ensure nodes are running the latest code
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+**On secondary Gitaly-only nodes**
+
+1. Update the GitLab package
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ee
+
+ # Centos/RHEL
+ sudo yum install gitlab-ee
+ ```
+
+1. Ensure nodes are running the latest code
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+**On the secondary "deploy node"**
+
+1. Update the GitLab package
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ee
+
+ # Centos/RHEL
+ sudo yum install gitlab-ee
+ ```
+
+1. To get the regular database migrations and latest code in place, run
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-rake geo:db:migrate
+ ```
+
+1. If this deploy node is normally used to serve requests or perform
+ background processing, then you may return it to service at this point.
+
+ - To serve requests, add the deploy node to the load balancer.
+ - To process Sidekiq jobs again, start Sidekiq:
+
+ ```shell
+ sudo gitlab-ctl start sidekiq
+ ```
+
+ - To process Geo events again, start the `geo-logcursor` daemon:
+
+ ```shell
+ sudo gitlab-ctl start geo-logcursor
+ ```
+
+**On all secondary nodes _excluding_ the secondary "deploy node"**
+
+1. Update the GitLab package
+
+ ```shell
+ # Debian/Ubuntu
+ sudo apt-get update && sudo apt-get install gitlab-ee
+
+ # Centos/RHEL
+ sudo yum install gitlab-ee
+ ```
+
+1. Ensure nodes are running the latest code
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+**For all secondary nodes that run Puma, Sidekiq, or the `geo-logcursor` daemon _including_ the secondary "deploy node"**
+
+Hot reload the `puma`, `sidekiq`, and `geo-logcursor` services:
+
+```shell
+sudo gitlab-ctl hup puma
+sudo gitlab-ctl restart sidekiq
+sudo gitlab-ctl restart geo-logcursor
+```
+
+### Step 4: Run post-deployment migrations and checks
+
+**On the primary "deploy node"**
+
+1. Run post-deployment database migrations:
+
+ ```shell
+ sudo gitlab-rake db:migrate
+ ```
+
+1. Verify Geo configuration and dependencies
+
+ ```shell
+ sudo gitlab-rake gitlab:geo:check
+ ```
+
+1. If you're using PgBouncer:
+
+ Change your `gitlab.rb` to point back to PgBouncer and run:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+**On all secondary "deploy nodes"**
+
+1. Run post-deployment database migrations, specific to the Geo database:
+
+ ```shell
+ sudo gitlab-rake geo:db:migrate
+ ```
+
+1. Verify Geo configuration and dependencies
+
+ ```shell
+ sudo gitlab-rake gitlab:geo:check
+ ```
+
+1. Verify Geo status
+
+ ```shell
+ sudo gitlab-rake geo:status
+ ```