author     Achilleas Pipinellis <axil@gitlab.com>  2019-05-05 16:08:22 +0000
committer  Achilleas Pipinellis <axil@gitlab.com>  2019-05-05 16:08:22 +0000
commit     fe74abfb0dfcd83edc0e46f423573fa5ee6d273e (patch)
tree       d34a9da4dc67f93bce1ffadb2494b62762775cf8
parent     2c1a9367bc0cd5fdc1736517943fb0c67dc1081f (diff)
parent     e2fe9c8ee59e703ca354a5cdd77847727129fd5a (diff)
download   gitlab-ce-fe74abfb0dfcd83edc0e46f423573fa5ee6d273e.tar.gz
Merge branch 'docs-repo-merge-7-admin-geo' into 'master'
Docs: Merge EE doc/administration/geo to CE

See merge request gitlab-org/gitlab-ce!27660
-rw-r--r--  doc/administration/geo/disaster_recovery/background_verification.md            172
-rw-r--r--  doc/administration/geo/disaster_recovery/bring_primary_back.md                   61
-rw-r--r--  doc/administration/geo/disaster_recovery/img/replication-status.png             bin 0 -> 7716 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/img/reverification-interval.png        bin 0 -> 33620 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/img/verification-status-primary.png    bin 0 -> 13329 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/img/verification-status-secondary.png  bin 0 -> 12186 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/index.md                               322
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md                    227
-rw-r--r--  doc/administration/geo/replication/configuration.md                             314
-rw-r--r--  doc/administration/geo/replication/configuration_source.md                      172
-rw-r--r--  doc/administration/geo/replication/database.md                                  507
-rw-r--r--  doc/administration/geo/replication/database_source.md                           439
-rw-r--r--  doc/administration/geo/replication/docker_registry.md                            23
-rw-r--r--  doc/administration/geo/replication/external_database.md                         219
-rw-r--r--  doc/administration/geo/replication/faq.md                                        62
-rw-r--r--  doc/administration/geo/replication/high_availability.md                         228
-rw-r--r--  doc/administration/geo/replication/img/geo_architecture.png                     bin 0 -> 116506 bytes
-rw-r--r--  doc/administration/geo/replication/img/geo_node_dashboard.png                   bin 0 -> 26266 bytes
-rw-r--r--  doc/administration/geo/replication/img/geo_node_healthcheck.png                 bin 0 -> 28285 bytes
-rw-r--r--  doc/administration/geo/replication/img/geo_overview.png                         bin 0 -> 37246 bytes
-rw-r--r--  doc/administration/geo/replication/index.md                                     309
-rw-r--r--  doc/administration/geo/replication/object_storage.md                             43
-rw-r--r--  doc/administration/geo/replication/remove_geo_node.md                            50
-rw-r--r--  doc/administration/geo/replication/security_review.md                           286
-rw-r--r--  doc/administration/geo/replication/troubleshooting.md                           457
-rw-r--r--  doc/administration/geo/replication/tuning.md                                     17
-rw-r--r--  doc/administration/geo/replication/updating_the_geo_nodes.md                    403
-rw-r--r--  doc/administration/geo/replication/using_a_geo_server.md                         18
28 files changed, 4329 insertions, 0 deletions
diff --git a/doc/administration/geo/disaster_recovery/background_verification.md b/doc/administration/geo/disaster_recovery/background_verification.md
new file mode 100644
index 00000000000..7d2fd51f834
--- /dev/null
+++ b/doc/administration/geo/disaster_recovery/background_verification.md
@@ -0,0 +1,172 @@
+# Automatic background verification **[PREMIUM ONLY]**
+
+NOTE: **Note:**
+Automatic background verification of repositories and wikis was added in
+GitLab EE 10.6 but is enabled by default only on GitLab EE 11.1. You can
+disable or enable this feature manually by following
+[these instructions](#disabling-or-enabling-the-automatic-background-verification).
+
+Automatic background verification ensures that the transferred data matches a
+calculated checksum. If the checksum of the data on the **primary** node matches the checksum of the
+data on the **secondary** node, the data transferred successfully. Following a planned failover,
+any corrupted data may be **lost**, depending on the extent of the corruption.
+
+If verification fails on the **primary** node, this indicates that Geo is
+successfully replicating a corrupted object; restore it from backup or remove it
+from the **primary** node to resolve the issue.
+
+If verification succeeds on the **primary** node but fails on the **secondary** node,
+this indicates that the object was corrupted during the replication process.
+Geo actively tries to correct verification failures by marking the repository to
+be resynced with a backoff period. If you want to reset the verification for
+these failures, follow [these instructions][reset-verification].
+
+If verification is lagging significantly behind replication, consider giving
+the node more time before scheduling a planned failover.
+
+## Disabling or enabling the automatic background verification
+
+Run the following commands in a Rails console on the **primary** node:
+
+```sh
+# Omnibus GitLab
+gitlab-rails console
+
+# Installation from source
+cd /home/git/gitlab
+sudo -u git -H bin/rails console RAILS_ENV=production
+```
+
+To check if automatic background verification is enabled:
+
+```ruby
+Gitlab::Geo.repository_verification_enabled?
+```
+
+To disable automatic background verification:
+
+```ruby
+Feature.disable('geo_repository_verification')
+```
+
+To enable automatic background verification:
+
+```ruby
+Feature.enable('geo_repository_verification')
+```
+
+## Repository verification
+
+Navigate to the **Admin Area > Geo** dashboard on the **primary** node and expand
+the **Verification information** tab for that node to view automatic checksumming
+status for repositories and wikis. Successes are shown in green, pending work
+in grey, and failures in red.
+
+![Verification status](img/verification-status-primary.png)
+
+Navigate to the **Admin Area > Geo** dashboard on the **secondary** node and expand
+the **Verification information** tab for that node to view automatic verification
+status for repositories and wikis. As with checksumming, successes are shown in
+green, pending work in grey, and failures in red.
+
+![Verification status](img/verification-status-secondary.png)
+
+## Using checksums to compare Geo nodes
+
+To check the health of Geo **secondary** nodes, we use a checksum over the list of
+Git references and their values. The checksum includes `HEAD`, `heads`, `tags`,
+`notes`, and GitLab-specific references to ensure true consistency. If two nodes
+have the same checksum, then they definitely hold the same references. We compute
+the checksum for every node after every update to make sure that they are all
+in sync.
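+
+To get a feel for what such a checksum captures, you can hash a repository's
+full reference list yourself. This is only an illustration of the idea, not the
+exact algorithm Geo uses, and the repository path below is a placeholder:
+
+```sh
+# Hash every reference name together with the object it points to; two nodes
+# that are fully in sync should produce identical output for the same project.
+git -C /var/opt/gitlab/git-data/repositories/<group>/<project>.git \
+  for-each-ref --format='%(refname) %(objectname)' | sha256sum
+```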
+
+## Repository re-verification
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/8550) in GitLab Enterprise Edition 11.6. Available in [GitLab Premium](https://about.gitlab.com/pricing/).
+
+Due to bugs or transient infrastructure failures, it is possible for Git
+repositories to change unexpectedly without being marked for verification.
+Geo constantly reverifies the repositories to ensure the integrity of the
+data. The default and recommended re-verification interval is 7 days, though
+an interval as short as 1 day can be set. Shorter intervals reduce risk but
+increase load and vice versa.
+
+Navigate to the **Admin Area > Geo** dashboard on the **primary** node, and
+click the **Edit** button for the **primary** node to customize the minimum
+re-verification interval:
+
+![Re-verification interval](img/reverification-interval.png)
+
+The automatic background re-verification is enabled by default, but you can
+disable it if you need to. Run the following commands in a Rails console on the
+**primary** node:
+
+```sh
+# Omnibus GitLab
+gitlab-rails console
+
+# Installation from source
+cd /home/git/gitlab
+sudo -u git -H bin/rails console RAILS_ENV=production
+```
+
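+To check if automatic background re-verification is enabled, you can query the
+same feature flag used by the commands below (a quick check in the Rails
+console):
+
+```ruby
+Feature.enabled?('geo_repository_reverification')
+```
+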
+To disable automatic background re-verification:
+
+```ruby
+Feature.disable('geo_repository_reverification')
+```
+
+To enable automatic background re-verification:
+
+```ruby
+Feature.enable('geo_repository_reverification')
+```
+
+## Reset verification for projects where verification has failed
+
+Geo actively tries to correct verification failures by marking the repository to
+be resynced with a backoff period. If you want to reset them manually, this
+Rake task marks projects where verification has failed or where the checksum does
+not match to be resynced without the backoff period:
+
+For repositories:
+
+- Omnibus Installation
+
+ ```sh
+ sudo gitlab-rake geo:verification:repository:reset
+ ```
+
+- Source Installation
+
+ ```sh
+ sudo -u git -H bundle exec rake geo:verification:repository:reset RAILS_ENV=production
+ ```
+
+For wikis:
+
+- Omnibus Installation
+
+ ```sh
+ sudo gitlab-rake geo:verification:wiki:reset
+ ```
+
+- Source Installation
+
+ ```sh
+ sudo -u git -H bundle exec rake geo:verification:wiki:reset RAILS_ENV=production
+ ```
+
+## Current limitations
+
+Until [issue #5064][ee-5064] is completed, background verification doesn't cover
+CI job artifacts and traces, LFS objects, or user uploads in file storage.
+Verify their integrity manually by following [these instructions][foreground-verification]
+on both nodes, and comparing the output between them.
+
+Data in object storage is **not verified**, as the object store is responsible
+for ensuring the integrity of the data.
+
+[reset-verification]: background_verification.md#reset-verification-for-projects-where-verification-has-failed
+[foreground-verification]: ../../raketasks/check.md
+[ee-5064]: https://gitlab.com/gitlab-org/gitlab-ee/issues/5064
diff --git a/doc/administration/geo/disaster_recovery/bring_primary_back.md b/doc/administration/geo/disaster_recovery/bring_primary_back.md
new file mode 100644
index 00000000000..f4d31a98080
--- /dev/null
+++ b/doc/administration/geo/disaster_recovery/bring_primary_back.md
@@ -0,0 +1,61 @@
+# Bring a demoted primary node back online **[PREMIUM ONLY]**
+
+After a failover, it is possible to fail back to the demoted **primary** node to
+restore your original configuration. This process consists of two steps:
+
+1. Making the old **primary** node a **secondary** node.
+1. Promoting a **secondary** node to a **primary** node.
+
+CAUTION: **Caution:**
+If you have any doubts about the consistency of the data on this node, we recommend setting it up from scratch.
+
+## Configure the former **primary** node to be a **secondary** node
+
+Since the former **primary** node will be out of sync with the current **primary** node, the first step is to bring the former **primary** node up to date. Note that the deletion of data stored on disk, such as
+repositories and uploads, will not be replayed when bringing the former **primary** node back
+into sync, which may result in increased disk usage.
+Alternatively, you can [set up a new **secondary** GitLab instance][setup-geo] to avoid this.
+
+To bring the former **primary** node up to date:
+
+1. SSH into the former **primary** node that has fallen behind.
+1. Make sure all the services are up:
+
+ ```sh
+ sudo gitlab-ctl start
+ ```
+
+ > **Note 1:** If you [disabled the **primary** node permanently][disaster-recovery-disable-primary],
+ > you need to undo those steps now. For Debian/Ubuntu you just need to run
+ > `sudo systemctl enable gitlab-runsvdir`. For CentOS 6, you need to install
+ > the GitLab instance from scratch and set it up as a **secondary** node by
+ > following [Setup instructions][setup-geo]. In this case, you don't need to follow the next step.
+ >
+ > **Note 2:** If you [changed the DNS records](index.md#step-4-optional-updating-the-primary-domain-dns-record)
+   > for this node during the disaster recovery procedure, you may need to [block
+ > all the writes to this node](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/doc/gitlab-geo/planned-failover.md#block-primary-traffic)
+ > during this procedure.
+
+1. [Set up database replication][database-replication]. Note that in this
+ case, **primary** node refers to the current **primary** node, and **secondary** node refers to the
+ former **primary** node.
+
+If you have lost your original **primary** node, follow the
+[setup instructions][setup-geo] to set up a new **secondary** node.
+
+## Promote the **secondary** node to **primary** node
+
+When the initial replication is complete and the **primary** node and **secondary** node are
+closely in sync, you can do a [planned failover].
+
+## Restore the **secondary** node
+
+If your objective is to have two nodes again, you need to bring your **secondary**
+node back online as well by repeating the first step
+([configure the former **primary** node to be a **secondary** node](#configure-the-former-primary-node-to-be-a-secondary-node))
+for the **secondary** node.
+
+[setup-geo]: ../replication/index.md#setup-instructions
+[database-replication]: ../replication/database.md
+[disaster-recovery-disable-primary]: index.md#step-2-permanently-disable-the-primary-node
+[planned failover]: planned_failover.md
diff --git a/doc/administration/geo/disaster_recovery/img/replication-status.png b/doc/administration/geo/disaster_recovery/img/replication-status.png
new file mode 100644
index 00000000000..d7085927c75
--- /dev/null
+++ b/doc/administration/geo/disaster_recovery/img/replication-status.png
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/img/reverification-interval.png b/doc/administration/geo/disaster_recovery/img/reverification-interval.png
new file mode 100644
index 00000000000..ad4597a4f49
--- /dev/null
+++ b/doc/administration/geo/disaster_recovery/img/reverification-interval.png
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/img/verification-status-primary.png b/doc/administration/geo/disaster_recovery/img/verification-status-primary.png
new file mode 100644
index 00000000000..2503408ec5d
--- /dev/null
+++ b/doc/administration/geo/disaster_recovery/img/verification-status-primary.png
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/img/verification-status-secondary.png b/doc/administration/geo/disaster_recovery/img/verification-status-secondary.png
new file mode 100644
index 00000000000..462274d8b14
--- /dev/null
+++ b/doc/administration/geo/disaster_recovery/img/verification-status-secondary.png
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/index.md b/doc/administration/geo/disaster_recovery/index.md
new file mode 100644
index 00000000000..71dc797f281
--- /dev/null
+++ b/doc/administration/geo/disaster_recovery/index.md
@@ -0,0 +1,322 @@
+# Disaster Recovery **[PREMIUM ONLY]**
+
+Geo replicates your database, your Git repositories, and a few other assets.
+We will support and replicate more data in the future, which will enable you to
+fail over with minimal effort in a disaster situation.
+
+See [Geo current limitations][geo-limitations] for more information.
+
+CAUTION: **Warning:**
+Disaster recovery for multi-secondary configurations is in **Alpha**.
+For the latest updates, check the multi-secondary [Disaster Recovery epic][gitlab-org&65].
+
+## Promoting a **secondary** Geo node in single-secondary configurations
+
+We don't currently provide an automated way to promote a Geo replica and do a
+failover, but you can do it manually if you have `root` access to the machine.
+
+This process promotes a **secondary** Geo node to a **primary** node. To regain
+geographic redundancy as quickly as possible, you should add a new **secondary** node
+immediately after following these instructions.
+
+### Step 1. Allow replication to finish if possible
+
+If the **secondary** node is still replicating data from the **primary** node, follow
+[the planned failover docs][planned-failover] as closely as possible in
+order to avoid unnecessary data loss.
+
+### Step 2. Permanently disable the **primary** node
+
+CAUTION: **Warning:**
+If the **primary** node goes offline, there may be data saved on the **primary** node
+that has not been replicated to the **secondary** node. This data should be treated
+as lost if you proceed.
+
+If an outage on the **primary** node happens, you should do everything possible to
+avoid a split-brain situation where writes can occur in two different GitLab
+instances, complicating recovery efforts. So to prepare for the failover, we
+must disable the **primary** node.
+
+1. SSH into the **primary** node to stop and disable GitLab, if possible:
+
+ ```sh
+ sudo gitlab-ctl stop
+ ```
+
+ Prevent GitLab from starting up again if the server unexpectedly reboots:
+
+ ```sh
+ sudo systemctl disable gitlab-runsvdir
+ ```
+
+   > **CentOS only**: In CentOS 6 or older, there is no easy way to prevent GitLab from being
+   > started if the machine reboots (see [gitlab-org/omnibus-gitlab#3058]).
+ > It may be safest to uninstall the GitLab package completely:
+
+ ```sh
+ yum remove gitlab-ee
+ ```
+
+ > **Ubuntu 14.04 LTS**: If you are using an older version of Ubuntu
+ > or any other distro based on the Upstart init system, you can prevent GitLab
+ > from starting if the machine reboots by doing the following:
+
+ ```sh
+   initctl stop gitlab-runsvdir
+ echo 'manual' > /etc/init/gitlab-runsvdir.override
+ initctl reload-configuration
+ ```
+
+1. If you do not have SSH access to the **primary** node, take the machine offline and
+ prevent it from rebooting by any means at your disposal.
+ Since there are many ways you may prefer to accomplish this, we will avoid a
+ single recommendation. You may need to:
+ - Reconfigure the load balancers.
+ - Change DNS records (e.g., point the primary DNS record to the **secondary**
+ node in order to stop usage of the **primary** node).
+ - Stop the virtual servers.
+ - Block traffic through a firewall.
+ - Revoke object storage permissions from the **primary** node.
+ - Physically disconnect a machine.
+
+1. If you plan to
+ [update the primary domain DNS record](#step-4-optional-updating-the-primary-domain-dns-record),
+ you may wish to lower the TTL now to speed up propagation.
+
+### Step 3. Promoting a **secondary** node
+
+NOTE: **Note:**
+A new **secondary** should not be added at this time. If you want to add a new
+**secondary**, do this after you have completed the entire process of promoting
+the **secondary** to the **primary**.
+
+#### Promoting a **secondary** node running on a single machine
+
+1. SSH in to your **secondary** node and login as root:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` to reflect its new status as **primary** by
+ removing any lines that enabled the `geo_secondary_role`:
+
+ ```ruby
+ ## In pre-11.5 documentation, the role was enabled as follows. Remove this line.
+ geo_secondary_role['enable'] = true
+
+ ## In 11.5+ documentation, the role was enabled as follows. Remove this line.
+ roles ['geo_secondary_role']
+ ```
+
+1. Promote the **secondary** node to the **primary** node. Execute:
+
+ ```sh
+ gitlab-ctl promote-to-primary-node
+ ```
+
+1. Verify you can connect to the newly promoted **primary** node using the URL used
+ previously for the **secondary** node.
+1. If successful, the **secondary** node has now been promoted to the **primary** node.
+
+#### Promoting a **secondary** node with HA
+
+The `gitlab-ctl promote-to-primary-node` command cannot be used yet in
+conjunction with High Availability or with multiple machines, as it can only
+perform changes on a **secondary** with only a single machine. Instead, you must
+do this manually.
+
+1. SSH in to the database node in the **secondary** and trigger PostgreSQL to
+ promote to read-write:
+
+ ```bash
+ sudo gitlab-pg-ctl promote
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` on every machine in the **secondary** to
+ reflect its new status as **primary** by removing any lines that enabled the
+ `geo_secondary_role`:
+
+ ```ruby
+ ## In pre-11.5 documentation, the role was enabled as follows. Remove this line.
+ geo_secondary_role['enable'] = true
+
+ ## In 11.5+ documentation, the role was enabled as follows. Remove this line.
+ roles ['geo_secondary_role']
+ ```
+
+   After making these changes, [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) on each
+   machine so that the changes take effect.
+
+1. Promote the **secondary** to **primary**. SSH into a single application
+ server and execute:
+
+ ```bash
+ sudo gitlab-rake geo:set_secondary_as_primary
+ ```
+
+1. Verify you can connect to the newly promoted **primary** using the URL used
+ previously for the **secondary**.
+1. Success! The **secondary** has now been promoted to **primary**.
+
+### Step 4. (Optional) Updating the primary domain DNS record
+
+Updating the DNS records for the primary domain to point to the **secondary** node
+will prevent the need to update all references to the primary domain to the
+secondary domain, like changing Git remotes and API URLs.
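+
+If you choose not to update the DNS records, each user would instead have to
+re-point their clones manually, along the lines of the following (the URL is a
+placeholder):
+
+```sh
+# Only needed when the primary domain is NOT re-pointed to the secondary node.
+git remote set-url origin https://<new_external_url>/<group>/<project>.git
+```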
+
+1. SSH into the **secondary** node and login as root:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Update the primary domain's DNS record. After updating the primary domain's
+ DNS records to point to the **secondary** node, edit `/etc/gitlab/gitlab.rb` on the
+ **secondary** node to reflect the new URL:
+
+ ```ruby
+ # Change the existing external_url configuration
+ external_url 'https://<new_external_url>'
+ ```
+
+   NOTE: **Note:**
+ Changing `external_url` won't prevent access via the old secondary URL, as
+ long as the secondary DNS records are still intact.
+
+1. Reconfigure the **secondary** node for the change to take effect:
+
+ ```sh
+ gitlab-ctl reconfigure
+ ```
+
+1. Execute the command below to update the newly promoted **primary** node URL:
+
+ ```sh
+ gitlab-rake geo:update_primary_node_url
+ ```
+
+ This command will use the changed `external_url` configuration defined
+ in `/etc/gitlab/gitlab.rb`.
+
+1. Verify you can connect to the newly promoted **primary** using its URL.
+ If you updated the DNS records for the primary domain, these changes may
+   not have propagated yet, depending on the TTL of the previous DNS records.
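+
+   To check whether the record has propagated, you can compare what a public
+   resolver returns with your **secondary** node's IP address (the domain and
+   resolver below are examples):
+
+   ```sh
+   dig +short <primary_domain> @8.8.8.8
+   ```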
+
+### Step 5. (Optional) Add **secondary** Geo node to a promoted **primary** node
+
+Promoting a **secondary** node to **primary** node using the process above does not enable
+Geo on the new **primary** node.
+
+To bring a new **secondary** node online, follow the [Geo setup instructions][setup-geo].
+
+### Step 6. (Optional) Removing the secondary's tracking database
+
+Every **secondary** has a special tracking database that is used to save the status of the synchronization of all the items from the **primary**.
+Because the **secondary** is already promoted, that data in the tracking database is no longer required.
+
+The data can be removed with the following command:
+
+```sh
+sudo rm -rf /var/opt/gitlab/geo-postgresql
+```
+
+## Promoting secondary Geo replica in multi-secondary configurations
+
+If you have more than one **secondary** node and you need to promote one of them, we suggest you follow
+[Promoting a **secondary** Geo node in single-secondary configurations](#promoting-a-secondary-geo-node-in-single-secondary-configurations)
+and then perform the two extra steps below.
+
+### Step 1. Prepare the new **primary** node to serve one or more **secondary** nodes
+
+1. SSH into the new **primary** node and login as root:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ ## Enable a Geo Primary role (if you haven't yet)
+ roles ['geo_primary_role']
+
+ ##
+ # Allow PostgreSQL client authentication from the primary and secondary IPs. These IPs may be
+ # public or VPC addresses in CIDR format, for example ['198.51.100.1/32', '198.51.100.2/32']
+ ##
+ postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32']
+
+ # Every secondary server needs to have its own slot so specify the number of secondary nodes you're going to have
+ postgresql['max_replication_slots'] = 1
+
+ ##
+ ## Disable automatic database migrations temporarily
+ ## (until PostgreSQL is restarted and listening on the private address).
+ ##
+ gitlab_rails['auto_migrate'] = false
+
+ ```
+
+   For more details about these settings, read [Configure the primary server][configure-the-primary-server].
+
+1. Save the file and reconfigure GitLab for the database listen changes and
+ the replication slot changes to be applied.
+
+ ```sh
+ gitlab-ctl reconfigure
+ ```
+
+ Restart PostgreSQL for its changes to take effect:
+
+ ```sh
+ gitlab-ctl restart postgresql
+ ```
+
+1. Re-enable migrations now that PostgreSQL is restarted and listening on the
+ private address.
+
+ Edit `/etc/gitlab/gitlab.rb` and **change** the configuration to `true`:
+
+ ```ruby
+ gitlab_rails['auto_migrate'] = true
+ ```
+
+ Save the file and reconfigure GitLab:
+
+ ```sh
+ gitlab-ctl reconfigure
+ ```
+
+### Step 2. Initiate the replication process
+
+Now we need to make each **secondary** node listen to changes on the new **primary** node. To do that you need
+to [initiate the replication process][initiate-the-replication-process] again but this time
+for another **primary** node. All the old replication settings will be overwritten.
+
+## Troubleshooting
+
+### I followed the disaster recovery instructions and now two-factor auth is broken!
+
+The setup instructions for Geo prior to 10.5 failed to replicate the
+`otp_key_base` secret, which is used to encrypt the two-factor authentication
+secrets stored in the database. If it differs between **primary** and **secondary**
+nodes, users with two-factor authentication enabled won't be able to log in
+after a failover.
+
+If you still have access to the old **primary** node, you can follow the
+instructions in the
+[Upgrading to GitLab 10.5][updating-geo]
+section to resolve the error. Otherwise, the secret is lost and you'll need to
+[reset two-factor authentication for all users][sec-tfa].
+
+[gitlab-org&65]: https://gitlab.com/groups/gitlab-org/-/epics/65
+[geo-limitations]: ../replication/index.md#current-limitations
+[planned-failover]: planned_failover.md
+[setup-geo]: ../replication/index.md#setup-instructions
+[updating-geo]: ../replication/updating_the_geo_nodes.md#upgrading-to-gitlab-105
+[sec-tfa]: ../../../security/two_factor_authentication.md#disabling-2fa-for-everyone
+[gitlab-org/omnibus-gitlab#3058]: https://gitlab.com/gitlab-org/omnibus-gitlab/issues/3058
+[gitlab-org/gitlab-ee#4284]: https://gitlab.com/gitlab-org/gitlab-ee/issues/4284
+[initiate-the-replication-process]: ../replication/database.md#step-3-initiate-the-replication-process
+[configure-the-primary-server]: ../replication/database.md#step-1-configure-the-primary-server
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
new file mode 100644
index 00000000000..88ab12d910a
--- /dev/null
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -0,0 +1,227 @@
+# Disaster recovery for planned failover **[PREMIUM ONLY]**
+
+The primary use-case of Disaster Recovery is to ensure business continuity in
+the event of unplanned outage, but it can be used in conjunction with a planned
+failover to migrate your GitLab instance between regions without extended
+downtime.
+
+As replication between Geo nodes is asynchronous, a planned failover requires
+a maintenance window in which updates to the **primary** node are blocked. The
+length of this window is determined by your replication capacity - once the
+**secondary** node is completely synchronized with the **primary** node, the failover can occur without
+data loss.
+
+This document assumes you already have a fully configured, working Geo setup.
+Please read it and the [Disaster Recovery][disaster-recovery] failover
+documentation in full before proceeding. Planned failover is a major operation,
+and if performed incorrectly, there is a high risk of data loss. Consider
+rehearsing the procedure until you are comfortable with the necessary steps and
+have a high degree of confidence in being able to perform them accurately.
+
+## Not all data is automatically replicated
+
+If you are using any GitLab features that Geo [doesn't support][limitations],
+you must make separate provisions to ensure that the **secondary** node has an
+up-to-date copy of any data associated with that feature. This may extend the
+required scheduled maintenance period significantly.
+
+A common strategy for keeping this period as short as possible for data stored
+in files is to use `rsync` to transfer the data. An initial `rsync` can be
+performed ahead of the maintenance window; subsequent `rsync`s (including a
+final transfer inside the maintenance window) will then transfer only the
+*changes* between the **primary** node and the **secondary** nodes.
+
+Repository-centric strategies for using `rsync` effectively can be found in the
+[moving repositories][moving-repositories] documentation; these strategies can
+be adapted for use with any other file-based data, such as GitLab Pages (to
+be found in `/var/opt/gitlab/gitlab-rails/shared/pages` if using Omnibus).
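+
+For example, a pre-seeding pass for Pages data might look something like the
+following. This is only a sketch and assumes SSH access from the **primary**
+node to the **secondary** node; the host and paths are placeholders:
+
+```sh
+# Run ahead of the maintenance window, then repeat (much faster) inside it.
+rsync -aHAX --delete /var/opt/gitlab/gitlab-rails/shared/pages/ \
+  <user>@<secondary_node_fqdn>:/var/opt/gitlab/gitlab-rails/shared/pages/
+```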
+
+## Pre-flight checks
+
+Follow these steps before scheduling a planned failover to ensure the process
+will go smoothly.
+
+### Object storage
+
+Some classes of non-repository data can use object storage in preference to
+file storage. Geo [does not replicate data in object storage](../replication/object_storage.md),
+leaving that task up to the object store itself. For a planned failover, this
+means you can decouple the replication of this data from the failover of the
+GitLab service.
+
+If you're already using object storage, simply verify that your **secondary**
+node has access to the same data as the **primary** node - they must either share the
+same object storage configuration, or the **secondary** node should be configured to
+access a [geographically-replicated][os-repl] copy provided by the object store
+itself.
+
+If you have a large GitLab installation or cannot tolerate downtime, consider
+[migrating to Object Storage][os-conf] **before** scheduling a planned failover.
+Doing so reduces both the length of the maintenance window, and the risk of data
+loss as a result of a poorly executed planned failover.
+
+### Review the configuration of each **secondary** node
+
+Database settings are automatically replicated to the **secondary** node, but the
+`/etc/gitlab/gitlab.rb` file must be set up manually, and differs between
+nodes. If features such as Mattermost, OAuth or LDAP integration are enabled
+on the **primary** node but not the **secondary** node, they will be lost during failover.
+
+Review the `/etc/gitlab/gitlab.rb` file for both nodes and ensure the **secondary** node
+supports everything the **primary** node does **before** scheduling a planned failover.
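+
+One rough way to spot configuration drift is to diff the two files from the
+**secondary** node. This assumes SSH access to the **primary** node and
+passwordless `sudo` there; the user and host are placeholders:
+
+```sh
+# Lines present on only one node are worth reviewing before the failover.
+diff <(ssh <user>@<primary_node_fqdn> sudo cat /etc/gitlab/gitlab.rb) \
+  <(sudo cat /etc/gitlab/gitlab.rb)
+```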
+
+### Run system checks
+
+Run the following on both **primary** and **secondary** nodes:
+
+```sh
+gitlab-rake gitlab:check
+gitlab-rake gitlab:geo:check
+```
+
+If any failures are reported on either node, they should be resolved **before**
+scheduling a planned failover.
+
+### Check that secrets match between nodes
+
+The SSH host keys and `/etc/gitlab/gitlab-secrets.json` files should be
+identical on all nodes. Check this by running the following on all nodes and
+comparing the output:
+
+```sh
+sudo sha256sum /etc/ssh/ssh_host* /etc/gitlab/gitlab-secrets.json
+```
+
+If any files differ, replace the content on the **secondary** node with the
+content from the **primary** node.
+
+### Ensure Geo replication is up-to-date
+
+The maintenance window won't end until Geo replication and verification are
+completely finished. To keep the window as short as possible, you should
+ensure these processes are as close to 100% as possible during active use.
+
+Navigate to the **Admin Area > Geo** dashboard on the **secondary** node to
+review status. Replicated objects (shown in green) should be close to 100%,
+and there should be no failures (shown in red). If a large proportion of
+objects aren't yet replicated (shown in grey), consider giving the node more
+time to complete.
+
+![Replication status](img/replication-status.png)
+
+If any objects are failing to replicate, this should be investigated before
+scheduling the maintenance window. Following a planned failover, anything that
+failed to replicate will be **lost**.
+
+You can use the [Geo status API](https://docs.gitlab.com/ee/api/geo_nodes.html#retrieve-project-sync-or-verification-failures-that-occurred-on-the-current-node) to review failed objects and
+the reasons for failure.
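+
+From the command line, the `geo:status` Rake task prints a similar summary of
+replication and verification counters (shown here for an Omnibus install; the
+exact counters vary by version):
+
+```sh
+sudo gitlab-rake geo:status
+```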
+
+A common cause of replication failures is the data being missing on the
+**primary** node - you can resolve these failures by restoring the data from backup,
+or removing references to the missing data.
+
+### Verify the integrity of replicated data
+
+This [content was moved to another location][background-verification].
+
+### Notify users of scheduled maintenance
+
+On the **primary** node, navigate to **Admin Area > Messages** and add a broadcast
+message. You can check under **Admin Area > Geo** to estimate how long it
+will take to finish syncing. An example message would be:
+
+> A scheduled maintenance will take place at XX:XX UTC. We expect it to take
+> less than 1 hour.
+
+## Prevent updates to the **primary** node
+
+Until a [read-only mode][ce-19739] is implemented, you must prevent updates
+manually. Note that your **secondary** node still needs read-only
+access to the **primary** node during the maintenance window.
+
+1. At the scheduled time, using your cloud provider or your node's firewall, block
+ all HTTP, HTTPS and SSH traffic to/from the **primary** node, **except** for your IP and
+ the **secondary** node's IP.
+
+ For instance, you might run the following commands on the server(s) making up your **primary** node:
+
+ ```sh
+ sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 22 -j ACCEPT
+ sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 22 -j ACCEPT
+   sudo iptables -A INPUT -p tcp --destination-port 22 -j REJECT
+
+ sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 80 -j ACCEPT
+ sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 80 -j ACCEPT
+   sudo iptables -A INPUT -p tcp --destination-port 80 -j REJECT
+
+ sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 443 -j ACCEPT
+ sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 443 -j ACCEPT
+   sudo iptables -A INPUT -p tcp --destination-port 443 -j REJECT
+ ```
+
+ From this point, users will be unable to view their data or make changes on the
+ **primary** node. They will also be unable to log in to the **secondary** node.
+ However, existing sessions will work for the remainder of the maintenance period, and
+ public data will be accessible throughout.
+
+1. Verify the **primary** node is blocked to HTTP traffic by visiting it in a browser via
+   another IP. The server should refuse the connection (see the example after this list).
+
+1. Verify the **primary** node is blocked to Git over SSH traffic by attempting to pull an
+ existing Git repository with an SSH remote URL. The server should refuse
+ connection.
+
+1. Disable non-Geo periodic background jobs on the primary node by navigating
+ to **Admin Area > Monitoring > Background Jobs > Cron** , pressing `Disable All`,
+ and then pressing `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
+ This job will re-enable several other cron jobs that are essential for planned
+ failover to complete successfully.
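+
+As a quick connection check for the verification steps above, something like the
+following should now fail from any machine whose IP is not whitelisted (the
+hostname is a placeholder):
+
+```sh
+# Both commands should be refused once the firewall rules are in place.
+curl --connect-timeout 10 -I https://<primary_node_fqdn>
+ssh -o ConnectTimeout=10 git@<primary_node_fqdn>
+```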
+
+## Finish replicating and verifying all data
+
+1. If you are manually replicating any data not managed by Geo, trigger the
+ final replication process now.
+1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
+ and wait for all queues except those with `geo` in the name to drop to 0.
+ These queues contain work that has been submitted by your users; failing over
+ before it is completed will cause the work to be lost.
+1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
+ following conditions to be true of the **secondary** node you are failing over to:
+   - All replication meters reach 100% replicated, 0% failures.
+ - All verification meters reach 100% verified, 0% failures.
+ - Database replication lag is 0ms.
+ - The Geo log cursor is up to date (0 events behind).
+
+1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
+ and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
+1. On the **secondary** node, use [these instructions][foreground-verification]
+ to verify the integrity of CI artifacts, LFS objects and uploads in file
+ storage.
+
+At this point, your **secondary** node will contain an up-to-date copy of everything the
+**primary** node has, meaning nothing will be lost when you fail over.
+
+## Promote the **secondary** node
+
+Finally, follow the [Disaster Recovery docs][disaster-recovery] to promote the
+**secondary** node to a **primary** node. This process will cause a brief outage on the **secondary** node, and users may need to log in again.
+
+Once it is completed, the maintenance window is over! Your new **primary** node will now
+begin to diverge from the old one. If problems do arise at this point, failing
+back to the old **primary** node [is possible][bring-primary-back], but likely to result
+in the loss of any data uploaded to the new primary in the meantime.
+
+Don't forget to remove the broadcast message after failover is complete.
+
+[bring-primary-back]: bring_primary_back.md
+[ce-19739]: https://gitlab.com/gitlab-org/gitlab-ce/issues/19739
+[container-registry]: ../replication/container_registry.md
+[disaster-recovery]: index.md
+[ee-4930]: https://gitlab.com/gitlab-org/gitlab-ee/issues/4930
+[ee-5064]: https://gitlab.com/gitlab-org/gitlab-ee/issues/5064
+[foreground-verification]: ../../raketasks/check.md
+[background-verification]: background_verification.md
+[limitations]: ../replication/index.md#current-limitations
+[moving-repositories]: ../../operations/moving_repositories.md
+[os-conf]: ../replication/object_storage.md#configuration
+[os-repl]: ../replication/object_storage.md#replication
diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
new file mode 100644
index 00000000000..57735b21cda
--- /dev/null
+++ b/doc/administration/geo/replication/configuration.md
@@ -0,0 +1,314 @@
+# Geo configuration (GitLab Omnibus) **[PREMIUM ONLY]**
+
+NOTE: **Note:**
+This is the documentation for the Omnibus GitLab packages. For installations
+from source, follow the [**Geo nodes configuration for installations
+from source**][configuration-source] guide.
+
+## Configuring a new **secondary** node
+
+NOTE: **Note:**
+This is the final step in setting up a **secondary** Geo node. Stages of the
+setup process must be completed in the documented order.
+Before attempting the steps in this stage, [complete all prior stages][setup-geo-omnibus].
+
+The basic steps of configuring a **secondary** node are to:
+
+- Replicate required configurations between the **primary** node and the **secondary** nodes.
+- Configure a tracking database on each **secondary** node.
+- Start GitLab on each **secondary** node.
+
+You are encouraged to first read through all the steps before executing them
+in your testing/production environment.
+
+> **Notes:**
+> - **Do not** set up any custom authentication for the **secondary** nodes. This will be
+ handled by the **primary** node.
+> - Any change that requires access to the **Admin Area** needs to be done in the
+ **primary** node because the **secondary** node is a read-only replica.
+
+### Step 1. Manually replicate secret GitLab values
+
+GitLab stores a number of secret values in the `/etc/gitlab/gitlab-secrets.json`
+file which *must* be the same on all nodes. Until there is
+a means of automatically replicating these between nodes (see issue [gitlab-org/gitlab-ee#3789]),
+they must be manually replicated to the **secondary** node.
+
+1. SSH into the **primary** node, and execute the command below:
+
+ ```sh
+ sudo cat /etc/gitlab/gitlab-secrets.json
+ ```
+
+ This will display the secrets that need to be replicated, in JSON format.
+
+1. SSH into the **secondary** node and login as the `root` user:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Make a backup of any existing secrets:
+
+ ```sh
+ mv /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json.`date +%F`
+ ```
+
+1. Copy `/etc/gitlab/gitlab-secrets.json` from the **primary** node to the **secondary** node, or
+ copy-and-paste the file contents between nodes:
+
+ ```sh
+ sudo editor /etc/gitlab/gitlab-secrets.json
+
+ # paste the output of the `cat` command you ran on the primary
+ # save and exit
+ ```
+
+1. Ensure the file permissions are correct:
+
+ ```sh
+ chown root:root /etc/gitlab/gitlab-secrets.json
+ chmod 0600 /etc/gitlab/gitlab-secrets.json
+ ```
+
+1. Reconfigure the **secondary** node for the change to take effect:
+
+ ```sh
+ gitlab-ctl reconfigure
+ gitlab-ctl restart
+ ```
+
+### Step 2. Manually replicate the **primary** node's SSH host keys
+
+GitLab integrates with the system-installed SSH daemon, designating a user
+(typically named git) through which all access requests are handled.
+
+In a [Disaster Recovery] situation, GitLab system
+administrators will promote a **secondary** node to the **primary** node. DNS records for the
+**primary** domain should also be updated to point to the new **primary** node
+(previously a **secondary** node). Doing so will avoid the need to update Git remotes and API URLs.
+
+This will cause all SSH requests to the newly promoted **primary** node to
+fail due to SSH host key mismatch. To prevent this, the primary SSH host
+keys must be manually replicated to the **secondary** node.
+
+1. SSH into the **secondary** node and login as the `root` user:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Make a backup of any existing SSH host keys:
+
+ ```sh
+ find /etc/ssh -iname ssh_host_* -exec cp {} {}.backup.`date +%F` \;
+ ```
+
+1. Copy OpenSSH host keys from the **primary** node:
+
+ If you can access your **primary** node using the **root** user:
+
+ ```sh
+ # Run this from the secondary node, change `<primary_node_fqdn>` for the IP or FQDN of the server
+ scp root@<primary_node_fqdn>:/etc/ssh/ssh_host_*_key* /etc/ssh
+ ```
+
+ If you only have access through a user with **sudo** privileges:
+
+ ```sh
+ # Run this from your primary node:
+ sudo tar --transform 's/.*\///g' -zcvf ~/geo-host-key.tar.gz /etc/ssh/ssh_host_*_key*
+
+ # Run this from your secondary node:
+ scp <user_with_sudo>@<primary_node_fqdn>:geo-host-key.tar.gz .
+ tar zxvf ~/geo-host-key.tar.gz -C /etc/ssh
+ ```
+
+1. On your **secondary** node, ensure the file permissions are correct:
+
+ ```sh
+ chown root:root /etc/ssh/ssh_host_*_key*
+ chmod 0600 /etc/ssh/ssh_host_*_key*
+ ```
+
+1. To verify key fingerprint matches, execute the following command on both nodes:
+
+ ```sh
+ for file in /etc/ssh/ssh_host_*_key; do ssh-keygen -lf $file; done
+ ```
+
+ You should get an output similar to this one and they should be identical on both nodes:
+
+ ```sh
+ 1024 SHA256:FEZX2jQa2bcsd/fn/uxBzxhKdx4Imc4raXrHwsbtP0M root@serverhostname (DSA)
+ 256 SHA256:uw98R35Uf+fYEQ/UnJD9Br4NXUFPv7JAUln5uHlgSeY root@serverhostname (ECDSA)
+ 256 SHA256:sqOUWcraZQKd89y/QQv/iynPTOGQxcOTIXU/LsoPmnM root@serverhostname (ED25519)
+ 2048 SHA256:qwa+rgir2Oy86QI+PZi/QVR+MSmrdrpsuH7YyKknC+s root@serverhostname (RSA)
+ ```
+
+1. Verify that you have the correct public keys for the existing private keys:
+
+ ```sh
+ # This will print the fingerprint for private keys:
+ for file in /etc/ssh/ssh_host_*_key; do ssh-keygen -lf $file; done
+
+ # This will print the fingerprint for public keys:
+ for file in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf $file; done
+ ```
+
+   NOTE: **Note:**
+   The commands for the private keys and for the public keys should output the same fingerprints.
+
+1. Restart sshd on your **secondary** node:
+
+ ```sh
+ # Debian or Ubuntu installations
+ sudo service ssh reload
+
+ # CentOS installations
+ sudo service sshd reload
+ ```
+
+### Step 3. Add the **secondary** node
+
+1. Visit the **primary** node's **Admin Area > Geo**
+ (`/admin/geo/nodes`) in your browser.
+1. Add the **secondary** node by providing its full URL. **Do NOT** check the
+ **This is a primary node** checkbox.
+1. Optionally, choose which groups or storage shards should be replicated by the
+ **secondary** node. Leave blank to replicate all. Read more in
+ [selective synchronization](#selective-synchronization).
+1. Click the **Add node** button.
+1. SSH into your GitLab **secondary** server and restart the services:
+
+ ```sh
+ gitlab-ctl restart
+ ```
+
+   Check if there are any common issues with your Geo setup by running:
+
+ ```sh
+ gitlab-rake gitlab:geo:check
+ ```
+
+1. SSH into your **primary** server and login as root to verify that the
+   **secondary** node is reachable and that there are no common issues with your Geo setup:
+
+ ```sh
+ gitlab-rake gitlab:geo:check
+ ```
+
+Once added to the admin panel and restarted, the **secondary** node will automatically start
+replicating missing data from the **primary** node in a process known as **backfill**.
+Meanwhile, the **primary** node will start to notify each **secondary** node of any changes, so
+that the **secondary** node can act on those notifications immediately.
+
+Make sure the **secondary** node is running and accessible.
+You can login to the **secondary** node with the same credentials as used for the **primary** node.
+
+### Step 4. Enabling Hashed Storage
+
+Using Hashed Storage significantly improves Geo replication. Project and group
+renames no longer require synchronization between nodes.
+
+1. Visit the **primary** node's **Admin Area > Settings > Repository**
+ (`/admin/application_settings/repository`) in your browser.
+1. In the **Repository storage** section, check **Use hashed storage paths for newly created and renamed projects**.
+
+### Step 5. (Optional) Configuring the **secondary** node to trust the **primary** node
+
+You can safely skip this step if your **primary** node uses a CA-issued HTTPS certificate.
+
+If your **primary** node is using a self-signed certificate for *HTTPS* support, you will
+need to add that certificate to the **secondary** node's trust store. Retrieve the
+certificate from the **primary** node and follow
+[these instructions][omnibus-ssl]
+on the **secondary** node.
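+
+For an Omnibus installation this usually amounts to copying the certificate into
+the trusted certificates directory and reconfiguring; the certificate filename
+below is an example, and the linked instructions remain the authoritative steps:
+
+```sh
+sudo cp primary.crt /etc/gitlab/trusted-certs/
+sudo gitlab-ctl reconfigure
+```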
+
+### Step 6. Enable Git access over HTTP/HTTPS
+
+Geo synchronizes repositories over HTTP/HTTPS, and therefore requires this clone
+method to be enabled. Navigate to **Admin Area > Settings**
+(`/admin/application_settings`) on the **primary** node, and set
+`Enabled Git access protocols` to `Both SSH and HTTP(S)` or `Only HTTP(S)`.
+
+### Step 7. Verify proper functioning of the **secondary** node
+
+Your **secondary** node is now configured!
+
+You can login to the **secondary** node with the same credentials you used for the
+**primary** node. Visit the **secondary** node's **Admin Area > Geo**
+(`/admin/geo/nodes`) in your browser to check if it's correctly identified as a
+**secondary** Geo node and if Geo is enabled.
+
+The initial replication, or 'backfill', will probably still be in progress. You
+can monitor the synchronization process on each Geo node from the **primary**
+node's Geo Nodes dashboard in your browser.
+
+![Geo dashboard](img/geo_node_dashboard.png)
+
+If your installation isn't working properly, check the
+[troubleshooting document].
+
+The two most obvious issues that can become apparent in the dashboard are:
+
+1. Database replication not working well.
+1. Instance to instance notification not working. In that case, it can be
+   one of the following:
+ - You are using a custom certificate or custom CA (see the
+ [troubleshooting document]).
+ - The instance is firewalled (check your firewall rules).
+
+Please note that disabling a **secondary** node will stop the synchronization process.
+
+Please note that if `git_data_dirs` is customized on the **primary** node for multiple
+repository shards you must duplicate the same configuration on each **secondary** node.
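+
+For example, a sharded configuration such as the following (the paths are
+illustrative) would need to appear verbatim in `/etc/gitlab/gitlab.rb` on every
+**secondary** node as well:
+
+```ruby
+git_data_dirs({
+  "default"  => { "path" => "/var/opt/gitlab/git-data" },
+  "storage1" => { "path" => "/mnt/storage1/git-data" },
+})
+```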
+
+Point your users to the ["Using a Geo Server" guide][using-geo].
+
+Currently, this is what is synced:
+
+- Git repositories.
+- Wikis.
+- LFS objects.
+- Issues, merge requests, snippets, and comment attachments.
+- Users, groups, and project avatars.
+
+## Selective synchronization
+
+Geo supports selective synchronization, which allows admins to choose
+which projects should be synchronized by **secondary** nodes.
+A subset of projects can be chosen, either by group or by storage shard. The
+former is ideal for replicating data belonging to a subset of users, while the
+latter is more suited to progressively rolling out Geo to a large GitLab
+instance.
+
+It is important to note that selective synchronization:
+
+1. Does not restrict permissions from **secondary** nodes.
+1. Does not hide project metadata from **secondary** nodes.
+ - Since Geo currently relies on PostgreSQL replication, all project metadata
+ gets replicated to **secondary** nodes, but repositories that have not been
+ selected will be empty.
+1. Does not reduce the number of events generated for the Geo event log.
+ - The **primary** node generates events as long as any **secondary** nodes are present.
+ Selective synchronization restrictions are implemented on the **secondary** nodes,
+ not the **primary** node.
+
+## Upgrading Geo
+
+See the [updating the Geo nodes document](updating_the_geo_nodes.md).
+
+## Troubleshooting
+
+See the [troubleshooting document](troubleshooting.md).
+
+[configuration-source]: configuration_source.md
+[setup-geo-omnibus]: index.md#using-omnibus-gitlab
+[Hashed Storage]: ../../repository_storage_types.md
+[Disaster Recovery]: ../disaster_recovery/index.md
+[gitlab-org/gitlab-ee#3789]: https://gitlab.com/gitlab-org/gitlab-ee/issues/3789
+[gitlab-com/infrastructure#2821]: https://gitlab.com/gitlab-com/infrastructure/issues/2821
+[omnibus-ssl]: https://docs.gitlab.com/omnibus/settings/ssl.html
+[troubleshooting document]: troubleshooting.md
+[using-geo]: using_a_geo_server.md
diff --git a/doc/administration/geo/replication/configuration_source.md b/doc/administration/geo/replication/configuration_source.md
new file mode 100644
index 00000000000..10dd9405b4b
--- /dev/null
+++ b/doc/administration/geo/replication/configuration_source.md
@@ -0,0 +1,172 @@
+# Geo configuration (source) **[PREMIUM ONLY]**
+
+NOTE: **Note:**
+This documentation applies to GitLab source installations. In GitLab 11.5, this documentation was deprecated and will be removed in a future release.
+Please consider [migrating to GitLab Omnibus install](https://docs.gitlab.com/omnibus/update/convert_to_omnibus.html). For installations
+using the Omnibus GitLab packages, follow the
+[**Omnibus Geo nodes configuration**][configuration] guide.
+
+## Configuring a new **secondary** node
+
+NOTE: **Note:**
+This is the final step in setting up a **secondary** node. Stages of the setup
+process must be completed in the documented order. Before attempting the steps
+in this stage, [complete all prior stages](index.md#using-gitlab-installed-from-source-deprecated).
+
+The basic steps of configuring a **secondary** node are to:
+
+- Replicate required configurations between the **primary** and **secondary** nodes.
+- Configure a tracking database on each **secondary** node.
+- Start GitLab on the **secondary** node.
+
+You are encouraged to first read through all the steps before executing them
+in your testing/production environment.
+
+NOTE: **Note:**
+**Do not** set up any custom authentication on **secondary** nodes; this will be handled by the **primary** node.
+
+NOTE: **Note:**
+**Do not** add anything in the **secondary** node's admin area (**Admin Area > Geo**). This is handled solely by the **primary** node.
+
+### Step 1. Manually replicate secret GitLab values
+
+GitLab stores a number of secret values in the `/home/git/gitlab/config/secrets.yml`
+file which *must* match between the **primary** and **secondary** nodes. Until there is
+a means of automatically replicating these between nodes (see [gitlab-org/gitlab-ee#3789]), they must
+be manually replicated to **secondary** nodes.
+
+1. SSH into the **primary** node, and execute the command below:
+
+ ```sh
+ sudo cat /home/git/gitlab/config/secrets.yml
+ ```
+
+ This will display the secrets that need to be replicated, in YAML format.
+
+1. SSH into the **secondary** node and login as the `git` user:
+
+ ```sh
+ sudo -i -u git
+ ```
+
+1. Make a backup of any existing secrets:
+
+ ```sh
+ mv /home/git/gitlab/config/secrets.yml /home/git/gitlab/config/secrets.yml.`date +%F`
+ ```
+
+1. Copy `/home/git/gitlab/config/secrets.yml` from the **primary** node to the **secondary** node, or
+ copy-and-paste the file contents between nodes:
+
+ ```sh
+ sudo editor /home/git/gitlab/config/secrets.yml
+
+ # paste the output of the `cat` command you ran on the primary
+ # save and exit
+ ```
+
+1. Ensure the file permissions are correct:
+
+ ```sh
+ chown git:git /home/git/gitlab/config/secrets.yml
+ chmod 0600 /home/git/gitlab/config/secrets.yml
+ ```
+
+1. Restart GitLab:
+
+ ```sh
+ service gitlab restart
+ ```
+
+Once restarted, the **secondary** node will automatically start replicating missing data
+from the **primary** node in a process known as backfill. Meanwhile, the **primary** node
+will start to notify the **secondary** node of any changes, so that the **secondary** node can
+act on those notifications immediately.
+
+Make sure the **secondary** node is running and accessible. You can login to
+the **secondary** node with the same credentials as used for the **primary** node.
+
+### Step 2. Manually replicate the **primary** node's SSH host keys
+
+Read [Manually replicate the **primary** node's SSH host keys](configuration.md#step-2-manually-replicate-the-primary-nodes-ssh-host-keys)
+
+### Step 3. Add the **secondary** GitLab node
+
+1. Navigate to the **primary** node's **Admin Area > Geo**
+ (`/admin/geo/nodes`) in your browser.
+1. Add the **secondary** node by providing its full URL. **Do NOT** check the
+ **This is a primary node** checkbox.
+1. Optionally, choose which namespaces should be replicated by the
+ **secondary** node. Leave blank to replicate all. Read more in
+ [selective synchronization](#selective-synchronization).
+1. Click the **Add node** button.
+1. SSH into your GitLab **secondary** server and restart the services:
+
+ ```sh
+ service gitlab restart
+ ```
+
+   Check if there are any common issues with your Geo setup by running:
+
+ ```sh
+ bundle exec rake gitlab:geo:check
+ ```
+
+1. SSH into your GitLab **primary** server and login as root to verify that the
+   **secondary** node is reachable and that there are no common issues with your Geo setup:
+
+ ```sh
+ bundle exec rake gitlab:geo:check
+ ```
+
+Once reconfigured, the **secondary** node will automatically start
+replicating missing data from the **primary** node in a process known as backfill.
+Meanwhile, the **primary** node will start to notify the **secondary** node of any changes, so
+that the **secondary** node can act on those notifications immediately.
+
+Make sure the **secondary** node is running and accessible.
+You can log in to the **secondary** node with the same credentials as used for the **primary** node.
+
+### Step 4. Enabling Hashed Storage
+
+Read [Enabling Hashed Storage](configuration.md#step-4-enabling-hashed-storage).
+
+### Step 5. (Optional) Configuring the secondary to trust the primary
+
+You can safely skip this step if your **primary** node uses a CA-issued HTTPS certificate.
+
+If your **primary** node is using a self-signed certificate for *HTTPS* support, you will
+need to add that certificate to the **secondary** node's trust store. Retrieve the
+certificate from the **primary** node and follow your distribution's instructions for
+adding it to the **secondary** node's trust store. In Debian/Ubuntu, you would follow these steps:
+
+```sh
+sudo -i
+cp <primary_node_certification_file> /usr/local/share/ca-certificates
+update-ca-certificates
+```
+
+### Step 6. Enable Git access over HTTP/HTTPS
+
+Geo synchronizes repositories over HTTP/HTTPS, and therefore requires this clone
+method to be enabled. Navigate to **Admin Area > Settings**
+(`/admin/application_settings`) on the **primary** node, and set
+`Enabled Git access protocols` to `Both SSH and HTTP(S)` or `Only HTTP(S)`.
+
+### Step 7. Verify proper functioning of the secondary node
+
+Read [Verify proper functioning of the secondary node][configuration-verify-node].
+
+## Selective synchronization
+
+Read [Selective synchronization][configuration-selective-replication].
+
+## Troubleshooting
+
+Read the [troubleshooting document][troubleshooting].
+
+[gitlab-org/gitlab-ee#3789]: https://gitlab.com/gitlab-org/gitlab-ee/issues/3789
+[configuration]: configuration.md
+[configuration-selective-replication]: configuration.md#selective-synchronization
+[configuration-verify-node]: configuration.md#step-7-verify-proper-functioning-of-the-secondary-node
+[troubleshooting]: troubleshooting.md
diff --git a/doc/administration/geo/replication/database.md b/doc/administration/geo/replication/database.md
new file mode 100644
index 00000000000..e57583a3bf9
--- /dev/null
+++ b/doc/administration/geo/replication/database.md
@@ -0,0 +1,507 @@
+# Geo database replication (GitLab Omnibus) **[PREMIUM ONLY]**
+
+NOTE: **Note:**
+This is the documentation for the Omnibus GitLab packages. For installations
+from source, follow the
+[Geo database replication (source)](database_source.md) guide.
+
+NOTE: **Note:**
+If your GitLab installation uses external (not managed by Omnibus) PostgreSQL
+instances, the Omnibus roles will not be able to perform all necessary
+configuration steps. In this case,
+[follow the Geo with external PostgreSQL instances document instead](external_database.md).
+
+NOTE: **Note:**
+The stages of the setup process must be completed in the documented order.
+Before attempting the steps in this stage, [complete all prior stages][toc].
+
+This document describes the minimal steps you have to take in order to
+replicate your **primary** GitLab database to a **secondary** node's database. You may
+have to change some values according to your database setup, how big it is, etc.
+
+You are encouraged to first read through all the steps before executing them
+in your testing/production environment.
+
+## PostgreSQL replication
+
+The GitLab **primary** node where the write operations happen will connect to
+the **primary** database server, and **secondary** nodes will
+connect to their own database servers (which are read-only).
+
+NOTE: **Note:**
+In database documentation, you may see "**primary**" being referenced as "master"
+and "**secondary**" as either "slave" or "standby" server (read-only).
+
+We recommend using [PostgreSQL replication slots][replication-slots-article]
+to ensure that the **primary** node retains all the data necessary for the **secondary** nodes to
+recover. See below for more details.
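+
+Once replication is running, a quick way to see whether a **secondary** node is
+attached to its slot is to query `pg_replication_slots` on the **primary** node.
+This is only a sketch, assuming the Omnibus `gitlab-psql` wrapper and the
+default `gitlabhq_production` database name:
+
+```sh
+# List replication slots and whether a secondary is currently connected to each of them
+gitlab-psql -d gitlabhq_production -c 'SELECT slot_name, active FROM pg_replication_slots;'
+```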
+
+The following guide assumes that:
+
+- You are using Omnibus and therefore you are using PostgreSQL 9.6 or later
+ which includes the [`pg_basebackup` tool][pgback] and improved
+ [Foreign Data Wrapper][FDW] support.
+- You have a **primary** node already set up (the GitLab server you are
+ replicating from), running Omnibus' PostgreSQL (or equivalent version), and
+ you have a new **secondary** server set up with the same versions of the OS,
+ PostgreSQL, and GitLab on all nodes.
+
+CAUTION: **Warning:**
+Geo works with streaming replication. Logical replication is not supported at this time.
+There is an [issue where support is being discussed](https://gitlab.com/gitlab-org/gitlab-ee/issues/7420).
+
+### Step 1. Configure the **primary** server
+
+1. SSH into your GitLab **primary** server and log in as root:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Execute the command below to define the node as **primary** node:
+
+ ```sh
+ gitlab-ctl set-geo-primary-node
+ ```
+
+ This command will use your defined `external_url` in `/etc/gitlab/gitlab.rb`.
+
+1. GitLab 10.4 and up only: Do the following to make sure the `gitlab` database user has a password defined:
+
+   Generate an MD5 hash of the desired password:
+
+ ```sh
+ gitlab-ctl pg-password-md5 gitlab
+ # Enter password: <your_password_here>
+ # Confirm password: <your_password_here>
+ # fca0b89a972d69f00eb3ec98a5838484
+ ```
+
+ Edit `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
+ postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
+
+ # Every node that runs Unicorn or Sidekiq needs to have the database
+ # password specified as below. If you have a high-availability setup, this
+ # must be present in all application nodes.
+ gitlab_rails['db_password'] = '<your_password_here>'
+ ```
+
+1. Omnibus GitLab already has a [replication user]
+ called `gitlab_replicator`. You must set the password for this user manually.
+ You will be prompted to enter a password:
+
+ ```sh
+ gitlab-ctl set-replication-password
+ ```
+
+ This command will also read the `postgresql['sql_replication_user']` Omnibus
+ setting in case you have changed `gitlab_replicator` username to something
+ else.
+
+ If you are using an external database not managed by Omnibus GitLab, you need
+   to create the replicator user and define a password for it manually.
+ For information on how to create a replication user, refer to the
+ [appropriate step](database_source.md#step-1-configure-the-primary-server)
+ in [Geo database replication (source)](database_source.md).
+
+1. Configure PostgreSQL to listen on network interfaces:
+
+ For security reasons, PostgreSQL does not listen on any network interfaces
+ by default. However, Geo requires the **secondary** node to be able to
+ connect to the **primary** node's database. For this reason, we need the address of
+ each node. Note: For external PostgreSQL instances, see [additional instructions](external_database.md).
+
+   If you are using a cloud provider, you can look up the addresses for each
+   Geo node through your cloud provider's management console.
+
+   To look up the address of a Geo node, SSH into the Geo node and execute:
+
+ ```sh
+ ##
+ ## Private address
+ ##
+ ip route get 255.255.255.255 | awk '{print "Private address:", $NF; exit}'
+
+ ##
+ ## Public address
+ ##
+ echo "External address: $(curl --silent ipinfo.io/ip)"
+ ```
+
+ In most cases, the following addresses will be used to configure GitLab
+ Geo:
+
+ | Configuration | Address |
+ |:----------------------------------------|:------------------------------------------------------|
+ | `postgresql['listen_address']` | **Primary** node's public or VPC private address. |
+ | `postgresql['md5_auth_cidr_addresses']` | **Secondary** node's public or VPC private addresses. |
+
+ If you are using Google Cloud Platform, SoftLayer, or any other vendor that
+ provides a virtual private cloud (VPC) you can use the **secondary** node's private
+ address (corresponds to "internal address" for Google Cloud Platform) for
+ `postgresql['md5_auth_cidr_addresses']` and `postgresql['listen_address']`.
+
+ The `listen_address` option opens PostgreSQL up to network connections
+ with the interface corresponding to the given address. See [the PostgreSQL
+ documentation][pg-docs-runtime-conn] for more details.
+
+ Depending on your network configuration, the suggested addresses may not
+ be correct. If your **primary** node and **secondary** nodes connect over a local
+ area network, or a virtual network connecting availability zones like
+   [Amazon's VPC](https://aws.amazon.com/vpc/) or [Google's VPC](https://cloud.google.com/vpc/),
+ you should use the **secondary** node's private address for `postgresql['md5_auth_cidr_addresses']`.
+
+ Edit `/etc/gitlab/gitlab.rb` and add the following, replacing the IP
+ addresses with addresses appropriate to your network configuration:
+
+ ```ruby
+ ##
+ ## Geo Primary role
+ ## - configure dependent flags automatically to enable Geo
+ ##
+ roles ['geo_primary_role']
+
+ ##
+ ## Primary address
+ ## - replace '<primary_node_ip>' with the public or VPC address of your Geo primary node
+ ##
+ postgresql['listen_address'] = '<primary_node_ip>'
+
+ ##
+ # Allow PostgreSQL client authentication from the primary and secondary IPs. These IPs may be
+ # public or VPC addresses in CIDR format, for example ['198.51.100.1/32', '198.51.100.2/32']
+ ##
+ postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32']
+
+ ##
+ ## Replication settings
+ ## - set this to be the number of Geo secondary nodes you have
+ ##
+ postgresql['max_replication_slots'] = 1
+ # postgresql['max_wal_senders'] = 10
+ # postgresql['wal_keep_segments'] = 10
+
+ ##
+ ## Disable automatic database migrations temporarily
+ ## (until PostgreSQL is restarted and listening on the private address).
+ ##
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+1. Optional: If you want to add another **secondary** node, the relevant setting would look like:
+
+ ```ruby
+ postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32', '<another_secondary_node_ip>/32']
+ ```
+
+ You may also want to edit the `wal_keep_segments` and `max_wal_senders` to
+ match your database replication requirements. Consult the [PostgreSQL -
+ Replication documentation][pg-docs-runtime-replication]
+ for more information.
+
+1. Save the file and reconfigure GitLab for the database listen changes and
+ the replication slot changes to be applied:
+
+ ```sh
+ gitlab-ctl reconfigure
+ ```
+
+ Restart PostgreSQL for its changes to take effect:
+
+ ```sh
+ gitlab-ctl restart postgresql
+ ```
+
+1. Re-enable migrations now that PostgreSQL is restarted and listening on the
+ private address.
+
+ Edit `/etc/gitlab/gitlab.rb` and **change** the configuration to `true`:
+
+ ```ruby
+ gitlab_rails['auto_migrate'] = true
+ ```
+
+ Save the file and reconfigure GitLab:
+
+ ```sh
+ gitlab-ctl reconfigure
+ ```
+
+1. Now that the PostgreSQL server is set up to accept remote connections, run
+   `netstat -plnt | grep 5432` to make sure that PostgreSQL is listening on port
+   `5432` on the **primary** server's private address.
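+
+   For example, on a node whose private address is `10.1.0.5` (a placeholder
+   used here for illustration), the output might look similar to the following;
+   the exact process name and PID will differ:
+
+   ```sh
+   netstat -plnt | grep 5432
+   # tcp  0  0  10.1.0.5:5432  0.0.0.0:*  LISTEN  8412/postgres
+   ```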
+
+1. A certificate was automatically generated when GitLab was reconfigured. This
+ will be used automatically to protect your PostgreSQL traffic from
+ eavesdroppers, but to protect against active ("man-in-the-middle") attackers,
+ the **secondary** node needs a copy of the certificate. Make a copy of the PostgreSQL
+ `server.crt` file on the **primary** node by running this command:
+
+ ```sh
+ cat ~gitlab-psql/data/server.crt
+ ```
+
+   Copy the output to your clipboard or into a local file. You
+ will need it when setting up the **secondary** node! The certificate is not sensitive
+ data.
+
+### Step 2. Configure the **secondary** server
+
+1. SSH into your GitLab **secondary** server and log in as root:
+
+   ```sh
+   sudo -i
+   ```
+
+1. Stop the application server and Sidekiq:
+
+   ```sh
+   gitlab-ctl stop unicorn
+   gitlab-ctl stop sidekiq
+   ```
+
+ NOTE: **Note**:
+ This step is important so we don't try to execute anything before the node is fully configured.
+
+1. [Check TCP connectivity][rake-maintenance] to the **primary** node's PostgreSQL server:
+
+ ```sh
+ gitlab-rake gitlab:tcp_check[<primary_node_ip>,5432]
+ ```
+
+ NOTE: **Note**:
+ If this step fails, you may be using the wrong IP address, or a firewall may
+ be preventing access to the server. Check the IP address, paying close
+ attention to the difference between public and private addresses and ensure
+ that, if a firewall is present, the **secondary** node is permitted to connect to the
+ **primary** node on port 5432.
+
+1. Create a file `server.crt` on the **secondary** server, with the content you got in the last step of the **primary** node's setup:
+
+   ```sh
+   editor server.crt
+   ```
+
+1. Set up PostgreSQL TLS verification on the **secondary** node:
+
+ Install the `server.crt` file:
+
+ ```sh
+ install \
+ -D \
+ -o gitlab-psql \
+ -g gitlab-psql \
+ -m 0400 \
+ -T server.crt ~gitlab-psql/.postgresql/root.crt
+ ```
+
+ PostgreSQL will now only recognize that exact certificate when verifying TLS
+ connections. The certificate can only be replicated by someone with access
+ to the private key, which is **only** present on the **primary** node.
+
+1. Test that the `gitlab-psql` user can connect to the **primary** node's database
+   (the default Omnibus database name is `gitlabhq_production`):
+
+ ```sh
+ sudo \
+ -u gitlab-psql /opt/gitlab/embedded/bin/psql \
+ --list \
+ -U gitlab_replicator \
+ -d "dbname=gitlabhq_production sslmode=verify-ca" \
+ -W \
+ -h <primary_node_ip>
+ ```
+
+   When prompted, enter the password you set in the first step for the
+   `gitlab_replicator` user. If all worked correctly, you should see
+   the list of the **primary** node's databases.
+
+ A failure to connect here indicates that the TLS configuration is incorrect.
+ Ensure that the contents of `~gitlab-psql/data/server.crt` on the **primary** node
+ match the contents of `~gitlab-psql/.postgresql/root.crt` on the **secondary** node.
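+
+   One way to compare them, for example, is to print the SHA-256 fingerprint of
+   each file on its respective node; the two fingerprints should be identical:
+
+   ```sh
+   # On the primary node
+   openssl x509 -noout -fingerprint -sha256 -in ~gitlab-psql/data/server.crt
+
+   # On the secondary node
+   openssl x509 -noout -fingerprint -sha256 -in ~gitlab-psql/.postgresql/root.crt
+   ```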
+
+1. Configure PostgreSQL to enable FDW support:
+
+   This step is similar to how we configured the **primary** instance.
+   You must enable FDW support here, even if using a single node.
+
+ Edit `/etc/gitlab/gitlab.rb` and add the following, replacing the IP
+ addresses with addresses appropriate to your network configuration:
+
+ ```ruby
+ ##
+ ## Geo Secondary role
+ ## - configure dependent flags automatically to enable Geo
+ ##
+ roles ['geo_secondary_role']
+
+ ##
+ ## Secondary address
+ ## - replace '<secondary_node_ip>' with the public or VPC address of your Geo secondary node
+ ##
+ postgresql['listen_address'] = '<secondary_node_ip>'
+ postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32']
+
+ ##
+ ## Database credentials password (defined previously in primary node)
+ ## - replicate same values here as defined in primary node
+ ##
+ postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
+ gitlab_rails['db_password'] = '<your_password_here>'
+
+ ##
+ ## Enable FDW support for the Geo Tracking Database (improves performance)
+ ##
+ geo_secondary['db_fdw'] = true
+ ```
+
+ For external PostgreSQL instances, see [additional instructions](external_database.md).
+ If you bring a former **primary** node back online to serve as a **secondary** node, then you also need to remove `roles ['geo_primary_role']` or `geo_primary_role['enable'] = true`.
+
+1. Reconfigure GitLab for the changes to take effect:
+
+ ```sh
+ gitlab-ctl reconfigure
+ ```
+
+1. Restart PostgreSQL for the IP change to take effect and reconfigure again:
+
+ ```sh
+ gitlab-ctl restart postgresql
+ gitlab-ctl reconfigure
+ ```
+
+ This last reconfigure will provision the FDW configuration and enable it.
+
+### Step 3. Initiate the replication process
+
+Below we provide a script that connects the database on the **secondary** node to
+the database on the **primary** node, replicates the database, and creates the
+needed files for streaming replication.
+
+The directories used are the defaults that are set up in Omnibus. If you have
+changed any defaults or are using a source installation, configure it as you
+see fit, replacing the directories and paths.
+
+CAUTION: **Warning:**
+Make sure to run this on the **secondary** server as it removes all PostgreSQL's
+data before running `pg_basebackup`.
+
+1. SSH into your GitLab **secondary** server and log in as root:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Choose a database-friendly name for your **secondary** node to
+ use as the replication slot name. For example, if your domain is
+ `secondary.geo.example.com`, you may use `secondary_example` as the slot
+ name as shown in the commands below.
+
+1. Execute the command below to start a backup/restore and begin the replication.
+
+   CAUTION: **Warning:** Each Geo **secondary** node must have its own unique replication slot name.
+   Using the same slot name between two secondaries will break PostgreSQL replication.
+
+ ```sh
+ gitlab-ctl replicate-geo-database \
+ --slot-name=<secondary_node_name> \
+ --host=<primary_node_ip>
+ ```
+
+ When prompted, enter the _plaintext_ password you set up for the `gitlab_replicator`
+ user in the first step.
+
+ This command also takes a number of additional options. You can use `--help`
+ to list them all, but here are a couple of tips:
+ - If PostgreSQL is listening on a non-standard port, add `--port=` as well.
+ - If your database is too large to be transferred in 30 minutes, you will need
+ to increase the timeout, e.g., `--backup-timeout=3600` if you expect the
+ initial replication to take under an hour.
+ - Pass `--sslmode=disable` to skip PostgreSQL TLS authentication altogether
+ (e.g., you know the network path is secure, or you are using a site-to-site
+ VPN). This is **not** safe over the public Internet!
+ - You can read more details about each `sslmode` in the
+ [PostgreSQL documentation][pg-docs-ssl];
+ the instructions above are carefully written to ensure protection against
+ both passive eavesdroppers and active "man-in-the-middle" attackers.
+ - Change the `--slot-name` to the name of the replication slot
+ to be used on the **primary** database. The script will attempt to create the
+ replication slot automatically if it does not exist.
+ - If you're repurposing an old server into a Geo **secondary** node, you'll need to
+ add `--force` to the command line.
+   - When not on a production machine, you can disable the backup step (if you
+     are really sure this is what you want) by adding `--skip-backup`.
+
+The replication process is now complete.
+
+## PGBouncer support (optional)
+
+[PGBouncer](http://pgbouncer.github.io/) may be used with GitLab Geo to pool
+PostgreSQL connections. We recommend using PGBouncer if you use GitLab in a
+high-availability configuration with a cluster of nodes supporting a Geo
+**primary** node and another cluster of nodes supporting a Geo **secondary** node. For more
+information, see the [Omnibus HA](https://docs.gitlab.com/ee/administration/high_availability/database.html#configure-using-omnibus-for-high-availability)
+documentation.
+
+For a Geo **secondary** node to work properly with PGBouncer in front of the database,
+it will need a separate read-only user to make [PostgreSQL FDW queries][FDW]
+work:
+
+1. On the **primary** Geo database, enter the PostgreSQL console as an
+   admin user. If you are using an Omnibus-managed database, log on to the **primary**
+   node that is running the PostgreSQL database (the default Omnibus database name is `gitlabhq_production`):
+
+ ```sh
+ sudo \
+ -u gitlab-psql /opt/gitlab/embedded/bin/psql \
+ -h /var/opt/gitlab/postgresql gitlabhq_production
+ ```
+
+1. Then create the read-only user:
+
+ ```sql
+ -- NOTE: Use the password defined earlier
+ CREATE USER gitlab_geo_fdw WITH password 'mypassword';
+ GRANT CONNECT ON DATABASE gitlabhq_production to gitlab_geo_fdw;
+ GRANT USAGE ON SCHEMA public TO gitlab_geo_fdw;
+ GRANT SELECT ON ALL TABLES IN SCHEMA public TO gitlab_geo_fdw;
+ GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO gitlab_geo_fdw;
+
+ -- Tables created by "gitlab" should be made read-only for "gitlab_geo_fdw"
+ -- automatically.
+ ALTER DEFAULT PRIVILEGES FOR USER gitlab IN SCHEMA public GRANT SELECT ON TABLES TO gitlab_geo_fdw;
+ ALTER DEFAULT PRIVILEGES FOR USER gitlab IN SCHEMA public GRANT SELECT ON SEQUENCES TO gitlab_geo_fdw;
+ ```
+
+1. On the **secondary** nodes, change `/etc/gitlab/gitlab.rb`:
+
+   ```ruby
+   geo_postgresql['fdw_external_user'] = 'gitlab_geo_fdw'
+   ```
+
+1. Save the file and reconfigure GitLab for the changes to be applied:
+
+ ```sh
+ gitlab-ctl reconfigure
+ ```
+
+## MySQL replication
+
+MySQL replication is not supported for Geo.
+
+## Troubleshooting
+
+Read the [troubleshooting document](troubleshooting.md).
+
+[replication-slots-article]: https://medium.com/@tk512/replication-slots-in-postgresql-b4b03d277c75
+[pgback]: http://www.postgresql.org/docs/9.6/static/app-pgbasebackup.html
+[replication user]:https://wiki.postgresql.org/wiki/Streaming_Replication
+[FDW]: https://www.postgresql.org/docs/9.6/static/postgres-fdw.html
+[toc]: index.md#using-omnibus-gitlab
+[rake-maintenance]: ../../raketasks/maintenance.md
+[pg-docs-ssl]: https://www.postgresql.org/docs/9.6/static/libpq-ssl.html#LIBPQ-SSL-PROTECTION
+[pg-docs-runtime-conn]: https://www.postgresql.org/docs/9.6/static/runtime-config-connection.html
+[pg-docs-runtime-replication]: https://www.postgresql.org/docs/9.6/static/runtime-config-replication.html
diff --git a/doc/administration/geo/replication/database_source.md b/doc/administration/geo/replication/database_source.md
new file mode 100644
index 00000000000..67cf8b6535f
--- /dev/null
+++ b/doc/administration/geo/replication/database_source.md
@@ -0,0 +1,439 @@
+# Geo database replication (source) **[PREMIUM ONLY]**
+
+NOTE: **Note:**
+This documentation applies to GitLab source installations. In GitLab 11.5, this documentation was deprecated and will be removed in a future release.
+Please consider [migrating to GitLab Omnibus install](https://docs.gitlab.com/omnibus/update/convert_to_omnibus.html). For installations
+using the Omnibus GitLab packages, follow the
+[**database replication for Omnibus GitLab**][database] guide.
+
+NOTE: **Note:**
+The stages of the setup process must be completed in the documented order.
+Before attempting the steps in this stage, [complete all prior stages](index.md#using-gitlab-installed-from-source-deprecated).
+
+This document describes the minimal steps you have to take in order to
+replicate your **primary** GitLab database to a **secondary** node's database. You may
+have to change some values according to your database setup, how big it is, etc.
+
+You are encouraged to first read through all the steps before executing them
+in your testing/production environment.
+
+## PostgreSQL replication
+
+The GitLab **primary** node where the write operations happen will connect to
+the **primary** database server, and the **secondary** nodes, which are read-only, will
+connect to their own **secondary** database servers (which are also read-only).
+
+NOTE: **Note:**
+In many databases' documentation, you will see "**primary**" being referenced as "master"
+and "**secondary**" as either "slave" or "standby" server (read-only).
+
+We recommend using [PostgreSQL replication slots][replication-slots-article]
+to ensure the **primary** node retains all the data necessary for the secondaries to
+recover. See below for more details.
+
+The following guide assumes that:
+
+- You are using PostgreSQL 9.6 or later which includes the
+ [`pg_basebackup` tool][pgback] and improved [Foreign Data Wrapper][FDW] support.
+- You have a **primary** node already set up (the GitLab server you are
+ replicating from), running PostgreSQL 9.6 or later, and
+ you have a new **secondary** server set up with the same versions of the OS,
+ PostgreSQL, and GitLab on all nodes.
+- The IP of the **primary** server for our examples is `198.51.100.1`, whereas the
+ **secondary** node's IP is `198.51.100.2`. Note that the **primary** and **secondary** servers
+ **must** be able to communicate over these addresses. These IP addresses can either
+ be public or private.
+
+CAUTION: **Warning:**
+Geo works with streaming replication. Logical replication is not supported at this time.
+There is an [issue where support is being discussed](https://gitlab.com/gitlab-org/gitlab-ee/issues/7420).
+
+### Step 1. Configure the **primary** server
+
+1. SSH into your GitLab **primary** server and log in as root:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Add this node as the Geo **primary** by running:
+
+ ```sh
+ bundle exec rake geo:set_primary_node
+ ```
+
+1. Create a [replication user] named `gitlab_replicator`:
+
+ ```sql
+   -- Create a new user 'gitlab_replicator'
+   CREATE USER gitlab_replicator;
+
+   -- Set/change a password and grant the replication privilege
+ ALTER USER gitlab_replicator WITH REPLICATION ENCRYPTED PASSWORD '<replication_password>';
+ ```
+
+1. Make sure the `gitlab` database user has a password defined:
+
+ ```sh
+ sudo \
+ -u postgres psql \
+ -d template1 \
+ -c "ALTER USER gitlab WITH ENCRYPTED PASSWORD '<database_password>';"
+ ```
+
+1. Edit the content of `database.yml` in `production:` and add the password like the example below:
+
+ ```yaml
+ #
+ # PRODUCTION
+ #
+ production:
+ adapter: postgresql
+ encoding: unicode
+ database: gitlabhq_production
+ pool: 10
+ username: gitlab
+ password: <database_password>
+ host: /var/opt/gitlab/geo-postgresql
+ ```
+
+1. Set up TLS support for the PostgreSQL **primary** server:
+
+ CAUTION: **Warning**:
+ Only skip this step if you **know** that PostgreSQL traffic
+ between the **primary** and **secondary** nodes will be secured through some other
+ means, e.g., a known-safe physical network path or a site-to-site VPN that
+ you have configured.
+
+ If you are replicating your database across the open Internet, it is
+ **essential** that the connection is TLS-secured. Correctly configured, this
+ provides protection against both passive eavesdroppers and active
+ "man-in-the-middle" attackers.
+
+ To generate a self-signed certificate and key, run this command:
+
+ ```sh
+ openssl req \
+ -nodes \
+ -batch \
+ -x509 \
+ -newkey rsa:4096 \
+ -keyout server.key \
+ -out server.crt \
+ -days 3650
+ ```
+
+ This will create two files - `server.key` and `server.crt` - that you can
+ use for authentication.
+
+ Copy them to the correct location for your PostgreSQL installation:
+
+ ```sh
+ # Copying a self-signed certificate and key
+ install -o postgres -g postgres -m 0400 -T server.crt ~postgres/9.x/main/data/server.crt
+ install -o postgres -g postgres -m 0400 -T server.key ~postgres/9.x/main/data/server.key
+ ```
+
+ Add this configuration to `postgresql.conf`, removing any existing
+ configuration for `ssl_cert_file` or `ssl_key_file`:
+
+ ```
+ ssl = on
+ ssl_cert_file='server.crt'
+ ssl_key_file='server.key'
+ ```
+
+1. Edit `postgresql.conf` to configure the **primary** server for streaming replication
+ (for Debian/Ubuntu that would be `/etc/postgresql/9.x/main/postgresql.conf`):
+
+ ```
+   listen_addresses = '<primary_node_ip>'
+ wal_level = hot_standby
+ max_wal_senders = 5
+ min_wal_size = 80MB
+ max_wal_size = 1GB
+   max_replication_slots = 1 # Number of Geo secondary nodes
+ wal_keep_segments = 10
+ hot_standby = on
+ ```
+
+ NOTE: **Note**:
+ Be sure to set `max_replication_slots` to the number of Geo **secondary**
+ nodes that you may potentially have (at least 1).
+
+ For security reasons, PostgreSQL by default only listens on the local
+ interface (e.g. 127.0.0.1). However, Geo needs to communicate
+ between the **primary** and **secondary** nodes over a common network, such as a
+ corporate LAN or the public Internet. For this reason, we need to
+ configure PostgreSQL to listen on more interfaces.
+
+   The `listen_addresses` option opens PostgreSQL up to external connections
+ with the interface corresponding to the given IP. See [the PostgreSQL
+ documentation][pg-docs-runtime-conn] for more details.
+
+ You may also want to edit the `wal_keep_segments` and `max_wal_senders` to
+ match your database replication requirements. Consult the
+ [PostgreSQL - Replication documentation][pg-docs-runtime-replication] for more information.
+
+1. Set the access control on the **primary** node to allow TCP connections using the
+ server's public IP and set the connection from the **secondary** node to require a
+ password. Edit `pg_hba.conf` (for Debian/Ubuntu that would be
+ `/etc/postgresql/9.x/main/pg_hba.conf`):
+
+ ```sh
+ host all all <primary_node_ip>/32 md5
+ host replication gitlab_replicator <secondary_node_ip>/32 md5
+ ```
+
+ If you want to add another secondary, add one more row like the replication
+ one and change the IP address:
+
+ ```sh
+ host all all <primary_node_ip>/32 md5
+ host replication gitlab_replicator <secondary_node_ip>/32 md5
+ host replication gitlab_replicator <another_secondary_node_ip>/32 md5
+ ```
+
+1. Restart PostgreSQL for the changes to take effect.
+
+1. Choose a database-friendly name for your secondary node to use as the
+ replication slot name. For example, if your domain is
+ `secondary.geo.example.com`, you may use `secondary_example` as the slot
+ name.
+
+1. Create the replication slot on the **primary** node:
+
+ ```sh
+ $ sudo -u postgres psql -c "SELECT * FROM pg_create_physical_replication_slot('secondary_example');"
+ slot_name | xlog_position
+ ------------------+---------------
+ secondary_example |
+ (1 row)
+ ```
+
+1. Now that the PostgreSQL server is set up to accept remote connections, run
+   `netstat -plnt` to make sure that PostgreSQL is listening on the server's
+   public IP.
+
+### Step 2. Configure the secondary server
+
+Follow the first steps in ["configure the secondary server"][database-replication] and note that since you are installing from source, the username and
+group listed as `gitlab-psql` in those steps should be replaced by `postgres`
+instead. After completing the "Test that the `gitlab-psql` user can connect to
+the **primary** node's database" step, continue here:
+
+1. Edit `postgresql.conf` to configure the secondary for streaming replication
+ (for Debian/Ubuntu that would be `/etc/postgresql/9.*/main/postgresql.conf`):
+
+ ```sh
+ wal_level = hot_standby
+ max_wal_senders = 5
+ checkpoint_segments = 10
+ wal_keep_segments = 10
+ hot_standby = on
+ ```
+
+1. Restart PostgreSQL for the changes to take effect.
+
+#### Enable tracking database on the secondary server
+
+Geo secondary nodes use a tracking database to keep track of replication status
+and recover automatically from some replication issues. Follow the steps below to create
+the tracking database.
+
+1. On the secondary node, run the following command to create `database_geo.yml` with the
+ information of your secondary PostgreSQL instance:
+
+ ```sh
+ sudo cp /home/git/gitlab/config/database_geo.yml.postgresql /home/git/gitlab/config/database_geo.yml
+ ```
+
+1. Edit the content of `database_geo.yml` in `production:` as in the example below:
+
+ ```yaml
+ #
+ # PRODUCTION
+ #
+ production:
+ adapter: postgresql
+ encoding: unicode
+ database: gitlabhq_geo_production
+ pool: 10
+ username: gitlab_geo
+ # password:
+ host: /var/opt/gitlab/geo-postgresql
+ ```
+
+1. Create the database `gitlabhq_geo_production` on the PostgreSQL instance of the **secondary** node.
+
+1. Set up the Geo tracking database:
+
+ ```sh
+ bundle exec rake geo:db:migrate
+ ```
+
+1. Configure the [PostgreSQL FDW][FDW] connection and credentials:
+
+   Save the script below in a file, e.g. `/tmp/geo_fdw.sh`, and modify the connection
+   parameters to match your environment. Execute it to set up the FDW connection.
+
+ ```sh
+ #!/bin/bash
+
+ # Secondary Database connection params:
+   DB_HOST="/var/opt/gitlab/postgresql" # change to the public IP or VPC private IP if it's an external server
+ DB_NAME="gitlabhq_production"
+ DB_USER="gitlab"
+ DB_PORT="5432"
+
+ # Tracking Database connection params:
+   GEO_DB_HOST="/var/opt/gitlab/geo-postgresql" # change to the public IP or VPC private IP if it's an external server
+ GEO_DB_NAME="gitlabhq_geo_production"
+ GEO_DB_USER="gitlab_geo"
+ GEO_DB_PORT="5432"
+
+ query_exec () {
+ gitlab-psql -h $GEO_DB_HOST -d $GEO_DB_NAME -p $GEO_DB_PORT -c "${1}"
+ }
+
+ query_exec "CREATE EXTENSION postgres_fdw;"
+ query_exec "CREATE SERVER gitlab_secondary FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '${DB_HOST}', dbname '${DB_NAME}', port '${DB_PORT}');"
+ query_exec "CREATE USER MAPPING FOR ${GEO_DB_USER} SERVER gitlab_secondary OPTIONS (user '${DB_USER}');"
+ query_exec "CREATE SCHEMA gitlab_secondary;"
+ query_exec "GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO ${GEO_DB_USER};"
+ ```
+
+   Then edit the content of `database_geo.yml` to add `fdw: true` to
+   the `production:` block.
+
+### Step 3. Initiate the replication process
+
+Below we provide a script that connects the database on the **secondary** node to
+the database on the **primary** node, replicates the database, and creates the
+needed files for streaming replication.
+
+The directories used are the defaults for Debian/Ubuntu. If you have changed
+any defaults, configure it as you see fit replacing the directories and paths.
+
+CAUTION: **Warning:**
+Make sure to run this on the **secondary** server as it removes all PostgreSQL's
+data before running `pg_basebackup`.
+
+1. SSH into your GitLab **secondary** server and log in as root:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Save the snippet below in a file, let's say `/tmp/replica.sh`. Modify the
+ embedded paths if necessary:
+
+   ```sh
+ #!/bin/bash
+
+ PORT="5432"
+ USER="gitlab_replicator"
+ echo ---------------------------------------------------------------
+ echo WARNING: Make sure this script is run from the secondary server
+ echo ---------------------------------------------------------------
+ echo
+ echo Enter the IP or FQDN of the primary PostgreSQL server
+ read HOST
+ echo Enter the password for $USER@$HOST
+ read -s PASSWORD
+ echo Enter the required sslmode
+ read SSLMODE
+
+ echo Stopping PostgreSQL and all GitLab services
+ sudo service gitlab stop
+ sudo service postgresql stop
+
+ echo Backing up postgresql.conf
+ sudo -u postgres mv /var/opt/gitlab/postgresql/data/postgresql.conf /var/opt/gitlab/postgresql/
+
+ echo Cleaning up old cluster directory
+ sudo -u postgres rm -rf /var/opt/gitlab/postgresql/data
+
+ echo Starting base backup as the replicator user
+ echo Enter the password for $USER@$HOST
+ sudo -u postgres /opt/gitlab/embedded/bin/pg_basebackup -h $HOST -D /var/opt/gitlab/postgresql/data -U gitlab_replicator -v -x -P
+
+ echo Writing recovery.conf file
+ sudo -u postgres bash -c "cat > /var/opt/gitlab/postgresql/data/recovery.conf <<- _EOF1_
+ standby_mode = 'on'
+ primary_conninfo = 'host=$HOST port=$PORT user=$USER password=$PASSWORD sslmode=$SSLMODE'
+ _EOF1_
+ "
+
+ echo Restoring postgresql.conf
+ sudo -u postgres mv /var/opt/gitlab/postgresql/postgresql.conf /var/opt/gitlab/postgresql/data/
+
+ echo Starting PostgreSQL
+ sudo service postgresql start
+ ```
+
+1. Run it with:
+
+ ```sh
+ bash /tmp/replica.sh
+ ```
+
+ When prompted, enter the IP/FQDN of the **primary** node, and the password you set up
+ for the `gitlab_replicator` user in the first step.
+
+ You should use `verify-ca` for the `sslmode`. You can use `disable` if you
+ are happy to skip PostgreSQL TLS authentication altogether (e.g., you know
+ the network path is secure, or you are using a site-to-site VPN). This is
+ **not** safe over the public Internet!
+
+ You can read more details about each `sslmode` in the
+ [PostgreSQL documentation][pg-docs-ssl];
+ the instructions above are carefully written to ensure protection against
+ both passive eavesdroppers and active "man-in-the-middle" attackers.
+
+The replication process is now complete.
+
+## PGBouncer support (optional)
+
+1. First, enter the PostgreSQL console as an admin user.
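+
+   For example, on a source installation this might look like the following
+   (adjust the user and database name if yours differ):
+
+   ```sh
+   sudo -u postgres psql -d gitlabhq_production
+   ```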
+
+1. Then create the read-only user:
+
+ ```sql
+ -- NOTE: Use the password defined earlier
+ CREATE USER gitlab_geo_fdw WITH password '<your_password_here>';
+ GRANT CONNECT ON DATABASE gitlabhq_production to gitlab_geo_fdw;
+ GRANT USAGE ON SCHEMA public TO gitlab_geo_fdw;
+ GRANT SELECT ON ALL TABLES IN SCHEMA public TO gitlab_geo_fdw;
+ GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO gitlab_geo_fdw;
+
+ -- Tables created by "gitlab" should be made read-only for "gitlab_geo_fdw"
+ -- automatically.
+ ALTER DEFAULT PRIVILEGES FOR USER gitlab IN SCHEMA public GRANT SELECT ON TABLES TO gitlab_geo_fdw;
+ ALTER DEFAULT PRIVILEGES FOR USER gitlab IN SCHEMA public GRANT SELECT ON SEQUENCES TO gitlab_geo_fdw;
+ ```
+
+1. Enter the PostgreSQL console on the **secondary** tracking database and change the user mapping to this new user:
+
+   ```sql
+   ALTER USER MAPPING FOR gitlab_geo SERVER gitlab_secondary OPTIONS (SET user 'gitlab_geo_fdw');
+   ```
+
+## MySQL replication
+
+MySQL replication is not supported for Geo.
+
+## Troubleshooting
+
+Read the [troubleshooting document](troubleshooting.md).
+
+[replication-slots-article]: https://medium.com/@tk512/replication-slots-in-postgresql-b4b03d277c75
+[pgback]: http://www.postgresql.org/docs/9.6/static/app-pgbasebackup.html
+[replication user]:https://wiki.postgresql.org/wiki/Streaming_Replication
+[FDW]: https://www.postgresql.org/docs/9.6/static/postgres-fdw.html
+[database]: database.md
+[add-geo-node]: configuration.md#step-3-add-the-secondary-gitlab-node
+[database-replication]: database.md#step-2-configure-the-secondary-server
+[pg-docs-ssl]: https://www.postgresql.org/docs/9.6/static/libpq-ssl.html#LIBPQ-SSL-PROTECTION
+[pg-docs-runtime-conn]: https://www.postgresql.org/docs/9.6/static/runtime-config-connection.html
+[pg-docs-runtime-replication]: https://www.postgresql.org/docs/9.6/static/runtime-config-replication.html
diff --git a/doc/administration/geo/replication/docker_registry.md b/doc/administration/geo/replication/docker_registry.md
new file mode 100644
index 00000000000..5b02b861c61
--- /dev/null
+++ b/doc/administration/geo/replication/docker_registry.md
@@ -0,0 +1,23 @@
+# Docker Registry for a secondary node **[PREMIUM ONLY]**
+
+You can set up a [Docker Registry] on your
+**secondary** Geo node that mirrors the one on the **primary** Geo node.
+
+## Storage support
+
+CAUTION: **Warning:**
+If you use [local storage][registry-storage]
+for the Container Registry you **cannot** replicate it to a **secondary** node.
+
+Docker Registry currently supports a few types of storage. If you choose a
+distributed storage (`azure`, `gcs`, `s3`, `swift`, or `oss`) for your Docker
+Registry on the **primary** node, you can use the same storage for a **secondary**
+Docker Registry as well. For more information, read the
+[Load balancing considerations][registry-load-balancing]
+when deploying the Registry, and how to set up the storage driver for GitLab's
+integrated [Container Registry][registry-storage].
+
+[ee]: https://about.gitlab.com/pricing/
+[Docker Registry]: https://docs.docker.com/registry/
+[registry-storage]: ../../container_registry.md#container-registry-storage-driver
+[registry-load-balancing]: https://docs.docker.com/registry/deploying/#load-balancing-considerations
diff --git a/doc/administration/geo/replication/external_database.md b/doc/administration/geo/replication/external_database.md
new file mode 100644
index 00000000000..dae5ed911b0
--- /dev/null
+++ b/doc/administration/geo/replication/external_database.md
@@ -0,0 +1,219 @@
+# Geo with external PostgreSQL instances **[PREMIUM ONLY]**
+
+This document is relevant if you are using a PostgreSQL instance that is *not
+managed by Omnibus*. This includes cloud-managed instances like AWS RDS, or
+manually installed and configured PostgreSQL instances.
+
+NOTE: **Note**:
+We strongly recommend running Omnibus-managed instances as they are actively
+developed and tested. We aim to be compatible with most external
+(not managed by Omnibus) databases but we do not guarantee compatibility.
+
+## **Primary** node
+
+1. SSH into a GitLab **primary** application server and log in as root:
+
+ ```sh
+ sudo -i
+ ```
+
+1. Execute the command below to define the node as **primary** node:
+
+ ```sh
+ gitlab-ctl set-geo-primary-node
+ ```
+
+ This command will use your defined `external_url` in `/etc/gitlab/gitlab.rb`.
+
+
+### Configure the external database to be replicated
+
+To set up an external database, you can either:
+
+- Set up streaming replication yourself (for example, in AWS RDS).
+- Perform the Omnibus configuration manually as follows.
+
+#### Leverage your cloud provider's tools to replicate the primary database
+
+Given a primary node set up on AWS EC2 that uses RDS, you can create a
+read-only replica in a different region and the replication process will be
+managed by AWS (see the sketch below for an AWS CLI example). Make sure you've
+set the Network ACL, Subnet, and Security Group according to your needs, so the
+secondary application node can access the database.
+Skip to the [Configure secondary application node](#configure-secondary-application-nodes-to-use-the-external-read-replica) section below.
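+
+The following is only a sketch of how such a replica could be created with the
+AWS CLI; the regions, instance identifiers, and account number are placeholders,
+and the exact options depend on your RDS setup:
+
+```sh
+# Create a cross-region read replica of the primary RDS instance (all values are placeholders)
+aws rds create-db-instance-read-replica \
+  --region eu-west-1 \
+  --db-instance-identifier gitlab-geo-replica \
+  --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:gitlab-primary
+```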
+
+#### Manually configure the primary database for replication
+
+The [geo_primary_role](https://docs.gitlab.com/omnibus/roles/#gitlab-geo-roles)
+configures the **primary** node's database to be replicated by making changes to
+`pg_hba.conf` and `postgresql.conf`. Make the following configuration changes
+manually to your external database configuration:
+
+```
+##
+## Geo Primary Role
+## - pg_hba.conf
+##
+host replication gitlab_replicator <trusted secondary IP>/32 md5
+```
+
+```
+##
+## Geo Primary Role
+## - postgresql.conf
+##
+sql_replication_user = gitlab_replicator
+wal_level = hot_standby
+max_wal_senders = 10
+wal_keep_segments = 50
+max_replication_slots = 1 # number of secondary instances
+hot_standby = on
+```
+
+## **Secondary** nodes
+
+### Manually configure the replica database
+
+Make the following configuration changes manually to the `postgresql.conf`
+of your external replica database:
+
+```
+##
+## Geo Secondary Role
+## - postgresql.conf
+##
+wal_level = hot_standby
+max_wal_senders = 10
+wal_keep_segments = 10
+hot_standby = on
+```
+
+### Configure **secondary** application nodes to use the external read-replica
+
+With Omnibus, the
+[geo_secondary_role](https://docs.gitlab.com/omnibus/roles/#gitlab-geo-roles)
+has three main functions:
+
+1. Configure the replica database.
+1. Configure the tracking database.
+1. Enable the [Geo Log Cursor](index.md#geo-log-cursor) (not covered in this section).
+
+To configure the connection to the external read-replica database and enable Log Cursor:
+
+1. SSH into a GitLab **secondary** application server and log in as root:
+
+ ```bash
+ sudo -i
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the following:
+
+ ```ruby
+ ##
+ ## Geo Secondary role
+ ## - configure dependent flags automatically to enable Geo
+ ##
+ roles ['geo_secondary_role']
+
+ # note this is shared between both databases,
+ # make sure you define the same password in both
+ gitlab_rails['db_password'] = '<your_password_here>'
+
+ gitlab_rails['db_username'] = 'gitlab'
+ gitlab_rails['db_host'] = '<database_read_replica_host>'
+ ```
+
+1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure)
+
+### Configure the tracking database
+
+**Secondary** nodes use a separate PostgreSQL installation as a tracking
+database to keep track of replication status and automatically recover from
+potential replication issues. Omnibus automatically configures a tracking database
+when `roles ['geo_secondary_role']` is set. For high availability,
+refer to [Geo High Availability](https://docs.gitlab.com/ee/administration/high_availability).
+If you want to run this database external to Omnibus, please follow the instructions below.
+
+The tracking database requires an [FDW](https://www.postgresql.org/docs/9.6/static/postgres-fdw.html)
+connection with the **secondary** replica database for improved performance.
+
+If you have an external database ready to be used as the tracking database,
+follow the instructions below to use it:
+
+NOTE: **Note:**
+If you want to use AWS RDS as a tracking database, make sure it has access to
+the secondary database. Unfortunately, just assigning the same security group is not enough as
+outbound rules do not apply to RDS PostgreSQL databases. Therefore, you need to explicitly add an inbound
+rule to the read-replica's security group allowing any TCP traffic from
+the tracking database on port 5432.
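+
+For example, with the AWS CLI the inbound rule could be added as follows; the
+security group ID and CIDR are placeholders and should match your read-replica's
+security group and the tracking database's address:
+
+```sh
+# Allow the tracking database to reach the read replica on port 5432 (placeholder values)
+aws ec2 authorize-security-group-ingress \
+  --group-id sg-0123456789abcdef0 \
+  --protocol tcp \
+  --port 5432 \
+  --cidr 10.0.1.0/24
+```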
+
+1. SSH into a GitLab **secondary** server and log in as root:
+
+ ```bash
+ sudo -i
+ ```
+
+1. Edit `/etc/gitlab/gitlab.rb` with the connection params and credentials for
+ the machine with the PostgreSQL instance:
+
+ ```ruby
+ geo_secondary['db_username'] = 'gitlab_geo'
+ geo_secondary['db_password'] = '<your_password_here>'
+
+ geo_secondary['db_host'] = '<tracking_database_host>'
+ geo_secondary['db_port'] = <tracking_database_port> # change to the correct port
+ geo_secondary['db_fdw'] = true # enable FDW
+ geo_postgresql['enable'] = false # don't use internal managed instance
+ ```
+
+1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure)
+
+1. Run the tracking database migrations:
+
+ ```bash
+ gitlab-rake geo:db:create
+ gitlab-rake geo:db:migrate
+ ```
+
+1. Configure the
+ [PostgreSQL FDW](https://www.postgresql.org/docs/9.6/static/postgres-fdw.html)
+ connection and credentials:
+
+   Save the script below in a file, e.g. `/tmp/geo_fdw.sh`, and modify the connection
+   parameters to match your environment. Execute it to set up the FDW connection.
+
+ ```bash
+ #!/bin/bash
+
+ # Secondary Database connection params:
+ DB_HOST="<public_ip_or_vpc_private_ip>"
+ DB_NAME="gitlabhq_production"
+ DB_USER="gitlab"
+ DB_PASS="<your_password_here>"
+ DB_PORT="5432"
+
+ # Tracking Database connection params:
+ GEO_DB_HOST="<public_ip_or_vpc_private_ip>"
+ GEO_DB_NAME="gitlabhq_geo_production"
+ GEO_DB_USER="gitlab_geo"
+ GEO_DB_PORT="5432"
+
+ query_exec () {
+ gitlab-psql -h $GEO_DB_HOST -d $GEO_DB_NAME -p $GEO_DB_PORT -c "${1}"
+ }
+
+ query_exec "CREATE EXTENSION postgres_fdw;"
+ query_exec "CREATE SERVER gitlab_secondary FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '${DB_HOST}', dbname '${DB_NAME}', port '${DB_PORT}');"
+ query_exec "CREATE USER MAPPING FOR ${GEO_DB_USER} SERVER gitlab_secondary OPTIONS (user '${DB_USER}', password '${DB_PASS}');"
+ query_exec "CREATE SCHEMA gitlab_secondary;"
+ query_exec "GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO ${GEO_DB_USER};"
+ ```
+
+ NOTE: **Note:** The script template above uses `gitlab-psql` as it's intended to be executed from the Geo machine,
+ but you can change it to `psql` and run it from any machine that has access to the database. We also recommend using
+ `psql` for AWS RDS.
+
+
+1. Save the file and [restart GitLab](../../restart_gitlab.md#omnibus-gitlab-restart)
+1. Populate the FDW tables:
+
+ ```bash
+ gitlab-rake geo:db:refresh_foreign_tables
+ ```
diff --git a/doc/administration/geo/replication/faq.md b/doc/administration/geo/replication/faq.md
new file mode 100644
index 00000000000..dd1af0dbf9c
--- /dev/null
+++ b/doc/administration/geo/replication/faq.md
@@ -0,0 +1,62 @@
+# Geo Frequently Asked Questions **[PREMIUM ONLY]**
+
+## What are the minimum requirements to run Geo?
+
+The requirements are listed [on the index page](index.md#requirements-for-running-geo)
+
+## How does Geo know which projects to sync?
+
+On each **secondary** node, there is a read-only replicated copy of the GitLab database.
+A **secondary** node also has a tracking database where it stores which projects have been synced.
+Geo compares the two databases to find projects that are not yet tracked.
+
+At the start, this tracking database is empty, so Geo will start trying to update from every project that it can see in the GitLab database.
+
+For each project to sync:
+
+1. Geo will issue a `git fetch geo --mirror` to get the latest information from the **primary** node.
+If there are no changes, the sync will be fast and end quickly. Otherwise, it will pull the latest commits.
+1. The **secondary** node will update the tracking database to store the fact that it has synced projects A, B, C, etc.
+1. Repeat until all projects are synced.
+
+When someone pushes a commit to the **primary** node, it generates an event in the GitLab database that the repository has changed.
+The **secondary** node sees this event, marks the project in question as dirty, and schedules the project to be resynced.
+
+To ensure that problems with pipelines (for example, syncs failing too many times or jobs being lost) don't permanently stop projects syncing, Geo also periodically checks the tracking database for projects that are marked as dirty. This check happens when
+the number of concurrent syncs falls below `repos_max_capacity` and there are no new projects waiting to be synced.
+
+Geo also has a checksum feature, which computes a SHA256 checksum across all the Git references and the SHA values they point to.
+If the refs don't match between the **primary** node and the **secondary** node, then the **secondary** node will mark that project as dirty and try to resync it.
+So even if we have an outdated tracking database, the validation should activate and find discrepancies in the repository state and resync.
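+
+As a rough illustration only (this is not the exact command Geo runs internally),
+a checksum over a repository's refs can be computed like this:
+
+```sh
+# Hash the full list of refs and the commits they point to
+git show-ref --head | sha256sum
+```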
+
+## Can I use Geo in a disaster recovery situation?
+
+Yes, but there are limitations to what we replicate (see
+[What data is replicated to a **secondary** node?](#what-data-is-replicated-to-a-secondary-node)).
+
+Read the documentation for [Disaster Recovery](../disaster_recovery/index.md).
+
+## What data is replicated to a **secondary** node?
+
+We currently replicate project repositories, LFS objects, generated
+attachments / avatars and the whole database. This means user accounts,
+issues, merge requests, groups, project data, etc., will be available for
+query.
+
+## Can I git push to a **secondary** node?
+
+Yes! Pushing directly to a **secondary** node (for both HTTP and SSH, including git-lfs) was [introduced](https://about.gitlab.com/2018/09/22/gitlab-11-3-released/) in [GitLab Premium](https://about.gitlab.com/pricing/#self-managed) 11.3.
+
+## How long does it take to have a commit replicated to a **secondary** node?
+
+All replication operations are asynchronous and are queued to be dispatched. Therefore, it depends on many
+factors, including the amount of traffic, how big your commit is, the
+connectivity between your nodes, your hardware, etc.
+
+## What if the SSH server runs at a different port?
+
+That's totally fine. We use HTTP(S) to fetch repository changes from the **primary** node to all **secondary** nodes.
+
+## Is it possible to set up a Docker Registry for a **secondary** node that mirrors the one on the **primary** node?
+
+Yes. See [Docker Registry for a **secondary** node](docker_registry.md).
diff --git a/doc/administration/geo/replication/high_availability.md b/doc/administration/geo/replication/high_availability.md
new file mode 100644
index 00000000000..c87c50ce1be
--- /dev/null
+++ b/doc/administration/geo/replication/high_availability.md
@@ -0,0 +1,228 @@
+# Geo High Availability **[PREMIUM ONLY]**
+
+This document describes a minimal reference architecture for running Geo
+in a high availability configuration. If your HA setup differs from the one
+described, it is possible to adapt these instructions to your needs.
+
+## Architecture overview
+
+![Geo HA Diagram](https://docs.gitlab.com/ee/administration/img/high_availability/geo-ha-diagram.png)
+
+_[diagram source - gitlab employees only][diagram-source]_
+
+The topology above assumes that the **primary** and **secondary** Geo clusters
+are located in two separate locations, on their own virtual network
+with private IP addresses. The network is configured such that all machines within
+one geographic location can communicate with each other using their private IP addresses.
+The IP addresses given are examples and may be different depending on the
+network topology of your deployment.
+
+The only external way to access the two Geo deployments is by HTTPS at
+`gitlab.us.example.com` and `gitlab.eu.example.com` in the example above.
+
+NOTE: **Note:**
+The **primary** and **secondary** Geo deployments must be able to communicate to each other over HTTPS.
+
+## Redis and PostgreSQL High Availability
+
+The **primary** and **secondary** Redis and PostgreSQL should be configured
+for high availability. Because of the additional complexity involved
+in setting up this configuration for PostgreSQL and Redis,
+it is not covered by this Geo HA documentation.
+
+For more information about setting up a highly available PostgreSQL cluster and Redis cluster using the omnibus package see the high availability documentation for
+[PostgreSQL](../../high_availability/database.md) and
+[Redis](../../high_availability/redis.md), respectively.
+
+NOTE: **Note:**
+It is possible to use cloud hosted services for PostgreSQL and Redis, but this is beyond the scope of this document.
+
+## Prerequisites: A working GitLab HA cluster
+
+This cluster will serve as the **primary** node. Use the
+[GitLab HA documentation](../../high_availability/README.md) to set this up.
+
+## Configure the GitLab cluster to be the **primary** node
+
+The following steps enable a GitLab cluster to serve as the **primary** node.
+
+### Step 1: Configure the **primary** frontend servers
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the following:
+
+ ```ruby
+ ##
+ ## Enable the Geo primary role
+ ##
+ roles ['geo_primary_role']
+
+ ##
+ ## Disable automatic migrations
+ ##
+ gitlab_rails['auto_migrate'] = false
+ ```
+
+After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
+
+NOTE: **Note:** PostgreSQL and Redis should have already been disabled on the
+application servers, and connections from the application servers to those
+services on the backend servers configured, during normal GitLab HA setup. See
+high availability configuration documentation for
+[PostgreSQL](https://docs.gitlab.com/ee/administration/high_availability/database.html#configuring-the-application-nodes)
+and [Redis](../../high_availability/redis.md#example-configuration-for-the-gitlab-application).
+
+The **primary** database will require modification later, as part of
+[step 2](#step-2-configure-the-main-read-only-replica-postgresql-database-on-the-secondary-node).
+
+## Configure a **secondary** node
+
+A **secondary** cluster is similar to any other GitLab HA cluster, with two
+major differences:
+
+* The main PostgreSQL database is a read-only replica of the **primary** node's
+ PostgreSQL database.
+* There is also a single PostgreSQL database for the **secondary** cluster,
+ called the "tracking database", which tracks the synchronization state of
+ various resources.
+
+Therefore, we will set up the HA components one-by-one, and include deviations
+from the normal HA setup.
+
+### Step 1: Configure the Redis and NFS services on the **secondary** node
+
+Configure the following services, again using the non-Geo high availability
+documentation:
+
+* [Configuring Redis for GitLab HA](../../high_availability/redis.md) for high
+ availability.
+* [NFS](../../high_availability/nfs.md) which will store data that is
+ synchronized from the **primary** node.
+
+### Step 2: Configure the main read-only replica PostgreSQL database on the **secondary** node
+
+NOTE: **Note:** The following documentation assumes the database will be run on
+only a single machine, rather than as a PostgreSQL cluster.
+
+Configure the [**secondary** database](database.md) as a read-only replica of
+the **primary** database.
+
+If using an external PostgreSQL instance, refer also to
+[Geo with external PostgreSQL instances](external_database.md).
+
+### Step 3: Configure the tracking database on the **secondary** node
+
+NOTE: **Note:** This documentation assumes the tracking database will be run on
+only a single machine, rather than as a PostgreSQL cluster.
+
+Configure the tracking database.
+
+1. Edit `/etc/gitlab/gitlab.rb` in the tracking database machine, and add the
+ following:
+
+ ```ruby
+ ##
+ ## Enable the Geo secondary tracking database
+ ##
+ geo_postgresql['enable'] = true
+ geo_postgresql['ha'] = true
+ ```
+
+After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
+
+If using an external PostgreSQL instance, refer also to
+[Geo with external PostgreSQL instances](external_database.md).
+
+### Step 4: Configure the frontend application servers on the **secondary** node
+
+In the architecture overview, there are two machines running the GitLab
+application services. These services are enabled selectively in the
+configuration.
+
+Configure the application servers following
+[Configuring GitLab for HA](../../high_availability/gitlab.md), then make the
+following modifications:
+
+1. Edit `/etc/gitlab/gitlab.rb` on each application server in the **secondary**
+ cluster, and add the following:
+
+ ```ruby
+ ##
+ ## Enable the Geo secondary role
+ ##
+ roles ['geo_secondary_role', 'application_role']
+
+ ##
+ ## Disable automatic migrations
+ ##
+ gitlab_rails['auto_migrate'] = false
+
+ ##
+ ## Configure the connection to the tracking DB. And disable application
+ ## servers from running tracking databases.
+ ##
+ geo_secondary['db_host'] = '<geo_tracking_db_host>'
+ geo_secondary['db_password'] = '<geo_tracking_db_password>'
+ geo_postgresql['enable'] = false
+
+ ##
+ ## Configure connection to the streaming replica database, if you haven't
+ ## already
+ ##
+ gitlab_rails['db_host'] = '<replica_database_host>'
+ gitlab_rails['db_password'] = '<replica_database_password>'
+
+ ##
+ ## Configure connection to Redis, if you haven't already
+ ##
+ gitlab_rails['redis_host'] = '<redis_host>'
+ gitlab_rails['redis_password'] = '<redis_password>'
+
+ ##
+ ## If you are using custom users not managed by Omnibus, you need to specify
+ ## UIDs and GIDs like below, and ensure they match between servers in a
+ ## cluster to avoid permissions issues
+ ##
+ user['uid'] = 9000
+ user['gid'] = 9000
+ web_server['uid'] = 9001
+ web_server['gid'] = 9001
+ registry['uid'] = 9002
+ registry['gid'] = 9002
+ ```
+
+NOTE: **Note:**
+If you have set up a PostgreSQL cluster using the omnibus package and have set
+the `postgresql['sql_user_password'] = 'md5 digest of secret'` setting, keep in
+mind that `gitlab_rails['db_password']` and `geo_secondary['db_password']`
+mentioned above contain the plaintext passwords. They are used to let the Rails
+servers connect to the databases.
+
+NOTE: **Note:**
+Make sure that the current node's IP is listed in the `postgresql['md5_auth_cidr_addresses']` setting of your remote database.
+
+After making these changes, [Reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
+
+On the **secondary** node, the following GitLab frontend services will be enabled:
+
+* geo-logcursor
+* gitlab-pages
+* gitlab-workhorse
+* logrotate
+* nginx
+* registry
+* remote-syslog
+* sidekiq
+* unicorn
+
+Verify these services by running `sudo gitlab-ctl status` on the frontend
+application servers.
+
+### Step 5: Set up the LoadBalancer for the **secondary** node
+
+In this topology, a load balancer is required at each geographic location to
+route traffic to the application servers.
+
+See [Load Balancer for GitLab HA](../../high_availability/load_balancer.md) for
+more information.
+
+[diagram-source]: https://docs.google.com/drawings/d/1z0VlizKiLNXVVVaERFwgsIOuEgjcUqDTWPdQYsE7Z4c/edit
+[gitlab-reconfigure]: ../../restart_gitlab.md#omnibus-gitlab-reconfigure
diff --git a/doc/administration/geo/replication/img/geo_architecture.png b/doc/administration/geo/replication/img/geo_architecture.png
new file mode 100644
index 00000000000..d318cd5d0f4
--- /dev/null
+++ b/doc/administration/geo/replication/img/geo_architecture.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/geo_node_dashboard.png b/doc/administration/geo/replication/img/geo_node_dashboard.png
new file mode 100644
index 00000000000..99792d0770d
--- /dev/null
+++ b/doc/administration/geo/replication/img/geo_node_dashboard.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/geo_node_healthcheck.png b/doc/administration/geo/replication/img/geo_node_healthcheck.png
new file mode 100644
index 00000000000..33a31f7ab49
--- /dev/null
+++ b/doc/administration/geo/replication/img/geo_node_healthcheck.png
Binary files differ
diff --git a/doc/administration/geo/replication/img/geo_overview.png b/doc/administration/geo/replication/img/geo_overview.png
new file mode 100644
index 00000000000..01c1615212c
--- /dev/null
+++ b/doc/administration/geo/replication/img/geo_overview.png
Binary files differ
diff --git a/doc/administration/geo/replication/index.md b/doc/administration/geo/replication/index.md
new file mode 100644
index 00000000000..6abea2cf271
--- /dev/null
+++ b/doc/administration/geo/replication/index.md
@@ -0,0 +1,309 @@
+# Geo Replication **[PREMIUM ONLY]**
+
+Geo is the solution for widely distributed development teams.
+
+## Overview
+
+Fetching large repositories can take a long time for teams located far from a single GitLab instance.
+
+Geo provides local, read-only instances of your GitLab instances, reducing the time it takes to clone and fetch large repositories and speeding up development.
+
+> - Geo is part of [GitLab Premium](https://about.gitlab.com/pricing/#self-managed).
+> - Introduced in GitLab Enterprise Edition 8.9.
+> - We recommend you use:
+> - At least GitLab Enterprise Edition 10.0 for basic Geo features.
+> - The latest version for a better experience.
+> - Make sure that all nodes run the same GitLab version.
+> - Geo requires PostgreSQL 9.6 and Git 2.9, in addition to GitLab's usual [minimum requirements](../../../install/requirements.md).
+> - Using Geo in combination with [High Availability](../../high_availability/README.md) is considered **Generally Available** (GA) in [GitLab Premium](https://about.gitlab.com/pricing/) 10.4.
+
+For a video introduction to Geo, see [Introduction to GitLab Geo - GitLab Features](https://www.youtube.com/watch?v=-HDLxSjEh6w).
+
+CAUTION: **Caution:**
+Geo undergoes significant changes from release to release. Upgrades **are** supported and [documented](#updating-geo), but you should ensure that you're using the right version of the documentation for your installation.
+
+To make sure you're using the right version of the documentation, navigate to [the source version of this page on GitLab.com](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/doc/administration/geo/replication/index.md) and choose the appropriate release from the **Switch branch/tag** dropdown. For example, [`v11.2.3-ee`](https://gitlab.com/gitlab-org/gitlab-ee/blob/v11.2.3-ee/doc/administration/geo/replication/index.md).
+
+## Use cases
+
+Implementing Geo provides the following benefits:
+
+- Reduce from minutes to seconds the time taken for your distributed developers to clone and fetch large repositories and projects.
+- Enable all of your developers to contribute ideas and work in parallel, no matter where they are.
+- Balance the load between your **primary** and **secondary** nodes, or offload your automated tests to a **secondary** node.
+
+In addition, it:
+
+- Can be used for cloning and fetching projects, in addition to reading any data available in the GitLab web interface (see [current limitations](#current-limitations)).
+- Overcomes slow connections between distant offices, saving time by improving speed for distributed teams.
+- Helps reduce the loading time for automated tasks, custom integrations, and internal workflows.
+- Can quickly fail over to a **secondary** node in a [disaster recovery](../disaster_recovery/index.md) scenario.
+- Allows [planned failover](../disaster_recovery/planned_failover.md) to a **secondary** node.
+
+Geo provides:
+
+- Read-only **secondary** nodes: Maintain one **primary** GitLab node while still enabling read-only **secondary** nodes for each of your distributed teams.
+- Authentication system hooks: **Secondary** nodes receive all authentication data (like user accounts and logins) from the **primary** instance.
+- An intuitive UI: **Secondary** nodes utilize the same web interface your team has grown accustomed to. In addition, there are visual notifications that block write operations and make it clear that a user is on a **secondary** node.
+
+## How it works
+
+Your Geo instance can be used for cloning and fetching projects, in addition to reading any data. This will make working with large repositories over large distances much faster.
+
+![Geo overview](img/geo_overview.png)
+
+When Geo is enabled, the:
+
+- Original instance is known as the **primary** node.
+- Replicated read-only nodes are known as **secondary** nodes.
+
+Keep in mind that:
+
+- **Secondary** nodes talk to the **primary** node to:
+ - Get user data for logins (API).
+ - Replicate repositories, LFS Objects, and Attachments (HTTPS + JWT).
+- Since GitLab Premium 10.0, the **primary** node no longer talks to **secondary** nodes to notify them of changes (API).
+- Pushing directly to a **secondary** node (for both HTTP and SSH, including git-lfs) was [introduced](https://about.gitlab.com/2018/09/22/gitlab-11-3-released/) in [GitLab Premium](https://about.gitlab.com/pricing/#self-managed) 11.3.
+- There are [limitations](#current-limitations) in the current implementation.
+
+### Architecture
+
+The following diagram illustrates the underlying architecture of Geo.
+
+![Geo architecture](img/geo_architecture.png)
+
+In this diagram:
+
+- There is the **primary** node and the details of one **secondary** node.
+- Writes to the database can only be performed on the **primary** node. A **secondary** node receives database
+ updates via PostgreSQL streaming replication.
+- If present, the [LDAP server](#ldap) should be configured to replicate for [Disaster Recovery](../disaster_recovery/index.md) scenarios.
+- A **secondary** node performs different types of synchronization against the **primary** node, using a special
+ authorization protected by JWT:
+ - Repositories are cloned/updated via Git over HTTPS.
+ - Attachments, LFS objects, and other files are downloaded via HTTPS using a private API endpoint.
+
+From the perspective of a user performing Git operations:
+
+- The **primary** node behaves as a full read-write GitLab instance.
+- **Secondary** nodes are read-only but proxy Git push operations to the **primary** node. This makes **secondary** nodes appear to support push operations themselves.
+
+To simplify the diagram, some necessary components are omitted. Note that:
+
+- Git over SSH requires [`gitlab-shell`](https://gitlab.com/gitlab-org/gitlab-shell) and OpenSSH.
+- Git over HTTPS requires [`gitlab-workhorse`](https://gitlab.com/gitlab-org/gitlab-workhorse).
+
+Note that a **secondary** node needs two different PostgreSQL databases:
+
+- A read-only database instance that streams data from the main GitLab database.
+- [Another database instance](#geo-tracking-database) used internally by the **secondary** node to record what data has been replicated.
+
+In **secondary** nodes, there is an additional daemon: [Geo Log Cursor](#geo-log-cursor).
+
+## Requirements for running Geo
+
+The following are required to run Geo:
+
+- An operating system that supports OpenSSH 6.9+ (needed for
+ [fast lookup of authorized SSH keys in the database](../../operations/fast_ssh_key_lookup.md))
+ The following operating systems are known to ship with a current version of OpenSSH:
+ - [CentOS](https://www.centos.org) 7.4+
+ - [Ubuntu](https://www.ubuntu.com) 16.04+
+- PostgreSQL 9.6+ with [FDW](https://www.postgresql.org/docs/9.6/postgres-fdw.html) support and [Streaming Replication](https://wiki.postgresql.org/wiki/Streaming_Replication)
+- Git 2.9+
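+
+To quickly confirm the versions installed on a node, you can run checks such as
+the following (paths assume an Omnibus install):
+
+```sh
+ssh -V
+sudo /opt/gitlab/embedded/bin/psql --version
+sudo /opt/gitlab/embedded/bin/git --version
+```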
+
+### Firewall rules
+
+The following table lists basic ports that must be open between the **primary** and **secondary** nodes for Geo.
+
+| **Primary** node | **Secondary** node | Protocol |
+|:-----------------|:-------------------|:-------------|
+| 80 | 80 | HTTP |
+| 443 | 443 | TCP or HTTPS |
+| 22 | 22 | TCP |
+| 5432 | | PostgreSQL |
+
+See the full list of ports used by GitLab in [Package defaults](https://docs.gitlab.com/omnibus/package-information/defaults.html).
+
+NOTE: **Note:**
+[Web terminal](../../../ci/environments.md#web-terminals) support requires your load balancer to correctly handle WebSocket connections.
+When using HTTP or HTTPS proxying, your load balancer must be configured to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the [web terminal](../../integration/terminal.md) integration guide for more details.
+
+NOTE: **Note:**
+When using HTTPS protocol for port 443, you will need to add an SSL certificate to the load balancers.
+If you wish to terminate SSL at the GitLab application server instead, use TCP protocol.
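+
+As an illustrative sketch, the rules for a **primary** node running `ufw` on
+Ubuntu might look like the following (`<secondary_node_ip>` is a placeholder;
+adapt this to your own firewall tooling and topology):
+
+```sh
+sudo ufw allow 80/tcp
+sudo ufw allow 443/tcp
+sudo ufw allow 22/tcp
+# Allow PostgreSQL streaming replication from the secondary node only
+sudo ufw allow from <secondary_node_ip> to any port 5432 proto tcp
+```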
+
+### LDAP
+
+We recommend that if you use LDAP on your **primary** node, you also set up secondary LDAP servers on each **secondary** node. Otherwise, users will not be able to perform Git operations over HTTP(s) on the **secondary** node using HTTP Basic Authentication. However, Git via SSH and personal access tokens will still work.
+
+NOTE: **Note:**
+It is possible for all **secondary** nodes to share an LDAP server, but additional latency can be an issue. Also, consider what LDAP server will be available in a [disaster recovery](../disaster_recovery/index.md) scenario if a **secondary** node is promoted to be a **primary** node.
+
+Check for instructions on how to set up replication in your LDAP service. Instructions will be different depending on the software or service used. For example, OpenLDAP provides [these instructions](https://www.openldap.org/doc/admin24/replication.html).
+
+### Geo Tracking Database
+
+The tracking database instance stores metadata that controls what needs to be updated on the disk of the local instance. For example:
+
+- Download new assets.
+- Fetch new LFS Objects.
+- Fetch changes from a repository that has recently been updated.
+
+Because the replicated database instance is read-only, we need this additional database instance for each **secondary** node.
+The tracking database requires the `postgres_fdw` extension.
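+
+To confirm the extension is available, you can run a query such as the following
+against the tracking database (for example, via `sudo gitlab-geo-psql` on an
+Omnibus install):
+
+```sql
+SELECT * FROM pg_extension WHERE extname = 'postgres_fdw';
+```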
+
+### Geo Log Cursor
+
+This daemon:
+
+- Reads a log of events replicated by the **primary** node to the **secondary** database instance.
+- Updates the Geo Tracking Database instance with changes that need to be executed.
+
+When something is marked to be updated in the tracking database instance, asynchronous jobs running on the **secondary** node will execute the required operations and update the state.
+
+This new architecture allows GitLab to be resilient to connectivity issues between the nodes. It doesn't matter how long the **secondary** node is disconnected from the **primary** node as it will be able to replay all the events in the correct order and become synchronized with the **primary** node again.
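+
+On an Omnibus **secondary** node you can check that this daemon is running, and
+follow its output, with for example:
+
+```sh
+sudo gitlab-ctl status geo-logcursor
+sudo gitlab-ctl tail geo-logcursor
+```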
+
+## Setup instructions
+
+These instructions assume you have a working instance of GitLab. They guide you through:
+
+1. Making your existing instance the **primary** node.
+1. Adding **secondary** nodes.
+
+CAUTION: **Caution:**
+The steps below should be followed in the order they appear. **Make sure the GitLab version is the same on all nodes.**
+
+### Using Omnibus GitLab
+
+If you installed GitLab using the Omnibus packages (highly recommended):
+
+1. [Install GitLab Enterprise Edition](https://about.gitlab.com/installation/) on the server that will serve as the **secondary** node. Do not create an account or log in to the new **secondary** node.
+1. [Upload the GitLab License](https://docs.gitlab.com/ee/user/admin_area/license.html) on the **primary** node to unlock Geo. The license must be for [GitLab Premium](https://about.gitlab.com/pricing/) or higher.
+1. [Set up the database replication](database.md) (`primary (read-write) <-> secondary (read-only)` topology).
+1. [Configure fast lookup of authorized SSH keys in the database](../../operations/fast_ssh_key_lookup.md). This step is required and needs to be done on **both** the **primary** and **secondary** nodes.
+1. [Configure GitLab](configuration.md) to set the **primary** and **secondary** nodes.
+1. Optional: [Configure a secondary LDAP server](../../auth/ldap.md) for the **secondary** node. See [notes on LDAP](#ldap).
+1. [Follow the "Using a Geo Server" guide](using_a_geo_server.md).
+
+### Using GitLab installed from source (Deprecated)
+
+NOTE: **Note:**
+In GitLab 11.5, support for using Geo in GitLab source installations was deprecated and will be removed in a future release. Please consider [migrating to GitLab Omnibus install](https://docs.gitlab.com/omnibus/update/convert_to_omnibus.html).
+
+If you installed GitLab from source:
+
+1. [Install GitLab Enterprise Edition](../../../install/installation.md) on the server that will serve as the **secondary** node. Do not create an account or log in to the new **secondary** node.
+1. [Upload the GitLab License](https://docs.gitlab.com/ee/user/admin_area/license.html) on the **primary** node to unlock Geo. The license must be for [GitLab Premium](https://about.gitlab.com/pricing/) or higher.
+1. [Set up the database replication](database_source.md) (`primary (read-write) <-> secondary (read-only)` topology).
+1. [Configure fast lookup of authorized SSH keys in the database](../../operations/fast_ssh_key_lookup.md). Do this step for **both** **primary** and **secondary** nodes.
+1. [Configure GitLab](configuration_source.md) to set the **primary** and **secondary** nodes.
+1. [Follow the "Using a Geo Server" guide](using_a_geo_server.md).
+
+## Post-installation documentation
+
+After installing GitLab on the **secondary** nodes and performing the initial configuration, see the following documentation for post-installation information.
+
+### Configuring Geo
+
+For information on configuring Geo, see:
+
+- [Geo configuration (GitLab Omnibus)](configuration.md).
+- [Geo configuration (source)](configuration_source.md). Configuring Geo in GitLab source installations was **deprecated** in GitLab 11.5.
+
+### Updating Geo
+
+For information on how to update your Geo nodes to the latest GitLab version, see [Updating the Geo nodes](updating_the_geo_nodes.md).
+
+### Configuring Geo high availability
+
+For information on configuring Geo for high availability, see [Geo High Availability](high_availability.md).
+
+### Configuring Geo with Object Storage
+
+For information on configuring Geo with object storage, see [Geo with Object storage](object_storage.md).
+
+### Disaster Recovery
+
+For information on using Geo in disaster recovery situations to mitigate data-loss and restore services, see [Disaster Recovery](../disaster_recovery/index.md).
+
+### Replicating the Container Registry
+
+For more information on how to replicate the Container Registry, see [Docker Registry for a **secondary** node](docker_registry.md).
+
+### Security Review
+
+For more information on Geo security, see [Geo security review](security_review.md).
+
+### Tuning Geo
+
+For more information on tuning Geo, see [Tuning Geo](tuning.md).
+
+## Remove Geo node
+
+For more information on removing a Geo node, see [Removing **secondary** Geo nodes](remove_geo_node.md).
+
+## Current limitations
+
+CAUTION: **Caution:**
+This list of limitations only reflects the latest version of GitLab. If you are using an older version, extra limitations may be in place.
+
+- Pushing directly to a **secondary** node redirects (for HTTP) or proxies (for SSH) the request to the **primary** node instead of [handling it directly](https://gitlab.com/gitlab-org/gitlab-ee/issues/1381), except when using Git over HTTP with credentials embedded within the URI. For example, `https://user:password@secondary.tld`.
+- The **primary** node has to be online for OAuth login to happen. Existing sessions and Git are not affected.
+- The installation takes multiple manual steps that together can take about an hour depending on circumstances. We are working on improving this experience. See [gitlab-org/omnibus-gitlab#2978](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/2978) for details.
+- Real-time updates of issues/merge requests (for example, via long polling) don't work on the **secondary** node.
+- [Selective synchronization](configuration.md#selective-synchronization) applies only to files and repositories. Other datasets are replicated to the **secondary** node in full, making it inappropriate for use as an access control mechanism.
+- Object pools for forked project deduplication work only on the **primary** node, and are duplicated on the **secondary** node.
+- [External merge request diffs](../../merge_request_diffs.md) will not be replicated if they are on-disk, and viewing merge requests will fail. However, external MR diffs in object storage **are** supported. The default configuration (in-database) does work.
+
+### Limitations on replication
+
+Only the following items are replicated to the **secondary** node:
+
+- All database content. For example, snippets, epics, issues, merge requests, groups, and project metadata.
+- Project repositories.
+- Project wiki repositories.
+- User uploads. For example, attachments to issues, merge requests, epics, and avatars.
+- CI job artifacts and traces.
+
+DANGER: **DANGER**
+Data not on this list is unavailable on the **secondary** node. Failing over without manually replicating data not on this list will cause the data to be **lost**.
+
+### Examples of data not replicated
+
+Take special note that these examples of GitLab features are both:
+
+- Commonly used.
+- **Not** replicated by Geo at present.
+
+Examples include:
+
+- [Elasticsearch integration](https://docs.gitlab.com/ee/integration/elasticsearch.html).
+- [Container Registry](../../container_registry.md). [Object Storage](object_storage.md) can mitigate this.
+- [GitLab Pages](../../pages/index.md).
+- [Mattermost integration](https://docs.gitlab.com/omnibus/gitlab-mattermost/).
+
+CAUTION: **Caution:**
+If you wish to use them on a **secondary** node, or to execute a failover successfully, you will need to replicate their data using some other means.
+
+## Frequently Asked Questions
+
+For answers to common questions, see the [Geo FAQ](faq.md).
+
+## Log files
+
+Since GitLab 9.5, Geo stores structured log messages in a `geo.log` file. For Omnibus installations, this file is at `/var/log/gitlab/gitlab-rails/geo.log`.
+
+This file contains information about when Geo attempts to sync repositories and files. Each line in the file contains a separate JSON entry that can be ingested into Elasticsearch, Splunk, etc.
+
+For example:
+
+```json
+{"severity":"INFO","time":"2017-08-06T05:40:16.104Z","message":"Repository update","project_id":1,"source":"repository","resync_repository":true,"resync_wiki":true,"class":"Gitlab::Geo::LogCursor::Daemon","cursor_delay_s":0.038}
+```
+
+This message shows that Geo detected that a repository update was needed for project `1`.
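+
+Assuming `jq` is installed, you can, for example, extract only the error entries
+from this log with:
+
+```sh
+sudo jq 'select(.severity == "ERROR")' /var/log/gitlab/gitlab-rails/geo.log
+```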
+
+## Troubleshooting
+
+For troubleshooting steps, see [Geo Troubleshooting](troubleshooting.md).
diff --git a/doc/administration/geo/replication/object_storage.md b/doc/administration/geo/replication/object_storage.md
new file mode 100644
index 00000000000..c3c11dbaf1e
--- /dev/null
+++ b/doc/administration/geo/replication/object_storage.md
@@ -0,0 +1,43 @@
+# Geo with Object storage **[PREMIUM ONLY]**
+
+Geo can be used in combination with Object Storage (AWS S3, or
+other compatible object storage).
+
+## Configuration
+
+At this time it is required that if object storage is enabled on the
+**primary** node, it must also be enabled on each **secondary** node.
+
+**Secondary** nodes can use the same storage bucket as the **primary** node, or
+they can use a replicated storage bucket. At this time GitLab does not
+take care of content replication in object storage.
+
+For LFS, follow the documentation to
+[set up LFS object storage](../../../workflow/lfs/lfs_administration.md#storing-lfs-objects-in-remote-object-storage).
+
+For CI job artifacts, there is similar documentation to configure
+[jobs artifact object storage](../../job_artifacts.md#using-object-storage)
+
+For user uploads, there is similar documentation to configure [upload object storage](../../uploads.md#using-object-storage-core-only)
+
+You should enable and configure object storage on both **primary** and **secondary**
+nodes. Migrating existing data to object storage should be performed on the
+**primary** node only. **Secondary** nodes will automatically notice that the migrated
+files are now in object storage.
+
+## Replication
+
+When using Amazon S3, you can use
+[CRR](https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html) to
+have automatic replication between the bucket used by the **primary** node and
+the bucket used by **secondary** nodes.
+
+If you are using Google Cloud Storage, consider using
+[Multi-Regional Storage](https://cloud.google.com/storage/docs/storage-classes#multi-regional).
+Or you can use the [Storage Transfer Service](https://cloud.google.com/storage/transfer/),
+although this only supports daily synchronization.
+
+For manual synchronization, or synchronization scheduled by `cron`, have a look at the following tools (a sketch follows this list):
+
+- [`s3cmd sync`](http://s3tools.org/s3cmd-sync)
+- [`gsutil rsync`](https://cloud.google.com/storage/docs/gsutil/commands/rsync)
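+
+As an illustrative sketch only, a nightly `cron` entry using `gsutil` might look
+like this (the bucket names are placeholders):
+
+```sh
+# /etc/cron.d/geo-bucket-sync - mirror the primary bucket into the secondary bucket at 02:00
+0 2 * * * root gsutil -m rsync -r gs://primary-bucket gs://secondary-bucket
+```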
diff --git a/doc/administration/geo/replication/remove_geo_node.md b/doc/administration/geo/replication/remove_geo_node.md
new file mode 100644
index 00000000000..b190fe7d42d
--- /dev/null
+++ b/doc/administration/geo/replication/remove_geo_node.md
@@ -0,0 +1,50 @@
+# Removing secondary Geo nodes **[PREMIUM ONLY]**
+
+**Secondary** nodes can be removed from the Geo cluster using the Geo admin page of the **primary** node. To remove a **secondary** node:
+
+1. Navigate to **Admin Area > Geo** (`/admin/geo/nodes`).
+1. Click the **Remove** button for the **secondary** node you want to remove.
+1. Confirm by clicking **Remove** when the prompt appears.
+
+Once removed from the Geo admin page, you must stop and uninstall the **secondary** node:
+
+1. On the **secondary** node, stop GitLab:
+
+ ```bash
+ sudo gitlab-ctl stop
+ ```
+1. On the **secondary** node, uninstall GitLab:
+
+ ```bash
+ # Stop gitlab and remove its supervision process
+ sudo gitlab-ctl uninstall
+
+ # Debian/Ubuntu
+ sudo dpkg --remove gitlab-ee
+
+ # Redhat/Centos
+ sudo rpm --erase gitlab-ee
+ ```
+
+Once GitLab has been uninstalled from the **secondary** node, the replication slot must be dropped from the **primary** node's database as follows:
+
+1. On the **primary** node, start a PostgreSQL console session:
+
+ ```bash
+ sudo gitlab-psql
+ ```
+
+ NOTE: **Note:**
+ Using `gitlab-rails dbconsole` will not work, because managing replication slots requires superuser permissions.
+
+1. Find the name of the relevant replication slot. This is the slot that is specified with `--slot-name` when running the replicate command: `gitlab-ctl replicate-geo-database`.
+
+ ```sql
+ SELECT * FROM pg_replication_slots;
+ ```
+
+1. Remove the replication slot for the **secondary** node:
+
+ ```sql
+ SELECT pg_drop_replication_slot('<name_of_slot>');
+ ```
diff --git a/doc/administration/geo/replication/security_review.md b/doc/administration/geo/replication/security_review.md
new file mode 100644
index 00000000000..46d3e68ab63
--- /dev/null
+++ b/doc/administration/geo/replication/security_review.md
@@ -0,0 +1,286 @@
+# Geo security review (Q&A) **[PREMIUM ONLY]**
+
+The following security review of the Geo feature set focuses on security
+aspects of the feature as they apply to customers running their own GitLab
+instances. The review questions are based in part on the [application security architecture](https://www.owasp.org/index.php/Application_Security_Architecture_Cheat_Sheet)
+questions from [owasp.org](https://www.owasp.org).
+
+## Business Model
+
+### What geographic areas does the application service?
+
+- This varies by customer. Geo allows customers to deploy to multiple areas,
+ and they get to choose where they are.
+- Region and node selection is entirely manual.
+
+## Data Essentials
+
+### What data does the application receive, produce, and process?
+
+- Geo streams almost all data held by a GitLab instance between sites. This
+ includes full database replication, most files (user-uploaded attachments,
+ etc) and repository + wiki data. In a typical configuration, this will
+ happen across the public Internet, and be TLS-encrypted.
+- PostgreSQL replication is TLS-encrypted.
+- See also: [only TLSv1.2 should be supported](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/2948)
+
+### How can the data be classified into categories according to its sensitivity?
+
+- GitLab’s model of sensitivity is centered around public vs. internal vs.
+ private projects. Geo replicates them all indiscriminately. “Selective sync”
+ exists for files and repositories (but not database content), which would permit
+ only less-sensitive projects to be replicated to a **secondary** node if desired.
+- See also: [developing a data classification policy](https://gitlab.com/gitlab-com/security/issues/4).
+
+### What data backup and retention requirements have been defined for the application?
+
+- Geo is designed to provide replication of a certain subset of the application
+ data. It is part of the solution, rather than part of the problem.
+
+## End-Users
+
+### Who are the application's end‐users?
+
+- **Secondary** nodes are created in regions that are distant (in terms of
+ Internet latency) from the main GitLab installation (the **primary** node). They are
+ intended to be used by anyone who would ordinarily use the **primary** node, who finds
+ that the **secondary** node is closer to them (in terms of Internet latency).
+
+### How do the end‐users interact with the application?
+
+- **Secondary** nodes provide all the interfaces a **primary** node does
+  (notably an HTTP/HTTPS web application, and HTTP/HTTPS or SSH Git repository
+  access), but are constrained to read-only activities. The principal use case is
+  envisioned to be cloning Git repositories from the **secondary** node instead of the
+  **primary** node, but end-users may use the GitLab web interface to view projects,
+  issues, merge requests, snippets, etc.
+
+### What security expectations do the end‐users have?
+
+- The replication process must be secure. It would typically be unacceptable to
+ transmit the entire database contents or all files and repositories across the
+ public Internet in plaintext, for instance.
+- **Secondary** nodes must have the same access controls over their content as the
+ **primary** node - unauthenticated users must not be able to gain access to privileged
+ information on the **primary** node by querying the **secondary** node.
+- Attackers must not be able to impersonate the **secondary** node to the **primary** node, and
+ thus gain access to privileged information.
+
+## Administrators
+
+### Who has administrative capabilities in the application?
+
+- Nothing Geo-specific. Any user where `admin: true` is set in the database is
+ considered an admin with super-user privileges.
+- See also: [more granular access control](https://gitlab.com/gitlab-org/gitlab-ce/issues/32730)
+ (not geo-specific)
+- Much of Geo’s integration (database replication, for instance) must be
+ configured with the application, typically by system administrators.
+
+### What administrative capabilities does the application offer?
+
+- **Secondary** nodes may be added, modified, or removed by users with
+ administrative access.
+- The replication process may be controlled (start/stop) via the Sidekiq
+ administrative controls.
+
+## Network
+
+### What details regarding routing, switching, firewalling, and load‐balancing have been defined?
+
+- Geo requires the **primary** node and **secondary** node to be able to communicate with each
+ other across a TCP/IP network. In particular, the **secondary** nodes must be able to
+ access HTTP/HTTPS and PostgreSQL services on the **primary** node.
+
+### What core network devices support the application?
+
+- Varies from customer to customer.
+
+### What network performance requirements exist?
+
+- Maximum replication speeds between **primary** node and **secondary** node is limited by the
+ available bandwidth between sites. No hard requirements exist - time to complete
+ replication (and ability to keep up with changes on the **primary** node) is a function
+ of the size of the data set, tolerance for latency, and available network
+ capacity.
+
+### What private and public network links support the application?
+
+- Customers choose their own networks. As sites are intended to be
+ geographically separated, it is envisioned that replication traffic will pass
+ over the public Internet in a typical deployment, but this is not a requirement.
+
+## Systems
+
+### What operating systems support the application?
+
+- Geo imposes no additional restrictions on operating system (see the
+ [GitLab installation](https://about.gitlab.com/installation/) page for more
+ details), however we recommend using the operating systems listed in the [Geo documentation](index.md#requirements-for-running-geo).
+
+### What details regarding required OS components and lock‐down needs have been defined?
+
+- The recommended installation method (Omnibus) packages most components itself.
+ A from-source installation method exists. Both are documented at
+ <https://docs.gitlab.com/ee/administration/geo/replication/index.html>
+- There are significant dependencies on the system-installed OpenSSH daemon (Geo
+ requires users to set up custom authentication methods) and the omnibus or
+ system-provided PostgreSQL daemon (it must be configured to listen on TCP,
+ additional users and replication slots must be added, etc).
+- The process for dealing with security updates (for example, if there is a
+ significant vulnerability in OpenSSH or other services, and the customer
+ wants to patch those services on the OS) is identical to the non-Geo
+ situation: security updates to OpenSSH would be provided to the user via the
+ usual distribution channels. Geo introduces no delay there.
+
+## Infrastructure Monitoring
+
+### What network and system performance monitoring requirements have been defined?
+
+- None specific to Geo.
+
+### What mechanisms exist to detect malicious code or compromised application components?
+
+- None specific to Geo.
+
+### What network and system security monitoring requirements have been defined?
+
+- None specific to Geo.
+
+## Virtualization and Externalization
+
+### What aspects of the application lend themselves to virtualization?
+
+- All.
+
+### What virtualization requirements have been defined for the application?
+
+- Nothing Geo-specific, but everything in GitLab needs to have full
+ functionality in such an environment.
+
+### What aspects of the product may or may not be hosted via the cloud computing model?
+
+- GitLab is “cloud native” and this applies to Geo as much as to the rest of the
+ product. Deployment in clouds is a common and supported scenario.
+
+### If applicable, what approach(es) to cloud computing will be taken (Managed Hosting versus "Pure" Cloud, a "full machine" approach such as AWS-EC2 versus a "hosted database" approach such as AWS-RDS and Azure, etc)?
+
+- To be decided by our customers, according to their operational needs.
+
+## Environment
+
+### What frameworks and programming languages have been used to create the application?
+
+- Ruby on Rails, Ruby.
+
+### What process, code, or infrastructure dependencies have been defined for the application?
+
+- Nothing specific to Geo.
+
+### What databases and application servers support the application?
+
+- PostgreSQL >= 9.6, Redis, Sidekiq, Unicorn.
+
+### How will database connection strings, encryption keys, and other sensitive components be stored, accessed, and protected from unauthorized detection?
+
+- There are some Geo-specific values. Some are shared secrets which must be
+ securely transmitted from the **primary** node to the **secondary** node at setup time. Our
+ documentation recommends transmitting them from the **primary** node to the system
+ administrator via SSH, and then back out to the **secondary** node in the same manner.
+ In particular, this includes the PostgreSQL replication credentials and a secret
+ key (`db_key_base`) which is used to decrypt certain columns in the database.
+ The `db_key_base` secret is stored unencrypted on the filesystem, in
+ `/etc/gitlab/gitlab-secrets.json`, along with a number of other secrets. There is
+ no at-rest protection for them.
+
+## Data Processing
+
+### What data entry paths does the application support?
+
+- Data is entered via the web application exposed by GitLab itself. Some data is
+ also entered using system administration commands on the GitLab servers (e.g.,
+ `gitlab-ctl set-primary-node`).
+- **Secondary** nodes also receive inputs via PostgreSQL streaming replication from the **primary** node.
+
+### What data output paths does the application support?
+
+- **Primary** nodes output via PostgreSQL streaming replication to the **secondary** node.
+ Otherwise, principally via the web application exposed by GitLab itself, and via
+ SSH `git clone` operations initiated by the end-user.
+
+### How does data flow across the application's internal components?
+
+- **Secondary** nodes and **primary** nodes interact via HTTP/HTTPS (secured with JSON web
+ tokens) and via PostgreSQL streaming replication.
+- Within a **primary** node or **secondary** node, the SSOT is the filesystem and the database
+ (including Geo tracking database on **secondary** node). The various internal components
+ are orchestrated to make alterations to these stores.
+
+### What data input validation requirements have been defined?
+
+- **Secondary** nodes must have a faithful replication of the **primary** node’s data.
+
+### What data does the application store and how?
+
+- Git repositories and files, tracking information related to them, and the GitLab database contents.
+
+### What data is or may need to be encrypted and what key management requirements have been defined?
+
+- Neither **primary** nor **secondary** nodes encrypt Git repository or filesystem data at
+  rest. A subset of database columns are encrypted at rest using the `db_otp_key`,
+  a static secret shared across all hosts in a GitLab deployment.
+- In transit, data should be encrypted, although the application does permit
+ communication to proceed unencrypted. The two main transits are the **secondary** node’s
+ replication process for PostgreSQL, and for git repositories/files. Both should
+ be protected using TLS, with the keys for that managed via Omnibus per existing
+ configuration for end-user access to GitLab.
+
+### What capabilities exist to detect the leakage of sensitive data?
+
+- Comprehensive system logs exist, tracking every connection to GitLab and PostgreSQL.
+
+### What encryption requirements have been defined for data in transit - including transmission over WAN, LAN, SecureFTP, or publicly accessible protocols such as http: and https:?
+
+- Data must have the option to be encrypted in transit, and be secure against
+ both passive and active attack (e.g., MITM attacks should not be possible).
+
+## Access
+
+### What user privilege levels does the application support?
+
+- Geo adds one type of privilege: **secondary** nodes can access a special Geo API to
+ download files over HTTP/HTTPS, and to clone repositories using HTTP/HTTPS.
+
+### What user identification and authentication requirements have been defined?
+
+- **Secondary** nodes identify to Geo **primary** nodes via OAuth or JWT authentication
+ based on the shared database (HTTP access) or a PostgreSQL replication user (for
+ database replication). The database replication also requires IP-based access
+ controls to be defined.
+
+### What user authorization requirements have been defined?
+
+- **Secondary** nodes must only be able to *read* data. They are not currently able to mutate data on the **primary** node.
+
+### What session management requirements have been defined?
+
+- Geo JWTs are defined to last for only two minutes before needing to be regenerated.
+- Geo JWTs are generated for one of the following specific scopes:
+ - Geo API access.
+ - Git access.
+ - LFS and File ID.
+ - Upload and File ID.
+ - Job Artifact and File ID.
+
+### What access requirements have been defined for URI and Service calls?
+
+- **Secondary** nodes make many calls to the **primary** node's API. This is how file
+ replication proceeds, for instance. This endpoint is only accessible with a JWT token.
+- The **primary** node also makes calls to the **secondary** node to get status information.
+
+## Application Monitoring
+
+### What application auditing requirements have been defined? How are audit and debug logs accessed, stored, and secured?
+
+- Structured JSON log is written to the filesystem, and can also be ingested
+ into a Kibana installation for further analysis.
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
new file mode 100644
index 00000000000..9c95720487d
--- /dev/null
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -0,0 +1,457 @@
+# Geo Troubleshooting **[PREMIUM ONLY]**
+
+NOTE: **Note:**
+This list is an attempt to document all the moving parts that can go wrong.
+We are working on getting all these steps verified automatically in a
+Rake task in the future.
+
+Setting up Geo requires careful attention to detail, and it's easy to
+miss a step. Here is a list of questions you should ask to try to detect
+what you need to fix (all commands and path locations are for Omnibus installs):
+
+## First check the health of the **secondary** node
+
+Visit the **primary** node's **Admin Area > Geo** (`/admin/geo/nodes`) in
+your browser. We perform the following health checks on each **secondary** node
+to help identify if something is wrong:
+
+- Is the node running?
+- Is the node's secondary database configured for streaming replication?
+- Is the node's secondary tracking database configured?
+- Is the node's secondary tracking database connected?
+- Is the node's secondary tracking database up-to-date?
+
+![Geo health check](img/geo_node_healthcheck.png)
+
+If the UI is not working, or you are unable to log in, you can run the Geo
+health check manually to get this information as well as a few more details.
+This rake task can be run on an app node in the **primary** or **secondary**
+Geo nodes:
+
+```sh
+sudo gitlab-rake gitlab:geo:check
+```
+
+Example output:
+
+```
+Checking Geo ...
+
+GitLab Geo is available ... yes
+GitLab Geo is enabled ... yes
+GitLab Geo secondary database is correctly configured ... yes
+Using database streaming replication? ... yes
+GitLab Geo tracking database is configured to use Foreign Data Wrapper? ... yes
+GitLab Geo tracking database Foreign Data Wrapper schema is up-to-date? ... yes
+GitLab Geo HTTP(S) connectivity ...
+* Can connect to the primary node ... yes
+HTTP/HTTPS repository cloning is enabled ... yes
+Machine clock is synchronized ... yes
+Git user has default SSH configuration? ... yes
+OpenSSH configured to use AuthorizedKeysCommand ... yes
+GitLab configured to disable writing to authorized_keys file ... yes
+GitLab configured to store new projects in hashed storage? ... yes
+All projects are in hashed storage? ... yes
+
+Checking Geo ... Finished
+```
+
+Current sync information can be found manually by running this rake task on any
+**secondary** app node:
+
+```sh
+sudo gitlab-rake geo:status
+```
+
+Example output:
+
+```
+http://secondary.example.com/
+-----------------------------------------------------
+ GitLab Version: 11.8.1-ee
+ Geo Role: Secondary
+ Health Status: Healthy
+ Repositories: 190/190 (100%)
+ Verified Repositories: 190/190 (100%)
+ Wikis: 190/190 (100%)
+ Verified Wikis: 190/190 (100%)
+ LFS Objects: 35/35 (100%)
+ Attachments: 528/528 (100%)
+ CI job artifacts: 477/477 (100%)
+ Repositories Checked: 0/190 (0%)
+ Sync Settings: Full
+ Database replication lag: 0 seconds
+ Last event ID seen from primary: 2158 (about 2 minute ago)
+ Last event ID processed by cursor: 2158 (about 2 minute ago)
+ Last status report was: 4 minutes ago
+```
+
+## Is Postgres replication working?
+
+### Are my nodes pointing to the correct database instance?
+
+You should make sure your **primary** Geo node points to the instance with
+writing permissions.
+
+Any **secondary** nodes should point only to read-only instances.
+
+### Can Geo detect my current node correctly?
+
+Geo uses the defined node from the **Admin Area > Geo** screen, and tries to match
+it with the value defined in the `/etc/gitlab/gitlab.rb` configuration file.
+The relevant line looks like: `external_url "http://gitlab.example.com"`.
+
+To check if the node on the current machine is correctly detected type:
+
+```sh
+sudo gitlab-rails runner "puts Gitlab::Geo.current_node.inspect"
+```
+
+and expect something like:
+
+```
+#<GeoNode id: 2, schema: "https", host: "gitlab.example.com", port: 443, relative_url_root: "", primary: false, ...>
+```
+
+When the command above is run on the **primary** node, `primary` should be
+`true`; on any **secondary** node, it should be `false`.
+
+## How do I fix the message, "ERROR: replication slots can only be used if max_replication_slots > 0"?
+
+This means that the `max_replication_slots` PostgreSQL variable needs to
+be set on the **primary** database. In GitLab 9.4, we have made this setting
+default to 1. You may need to increase this value if you have more
+**secondary** nodes. Be sure to restart PostgreSQL for this to take
+effect. See the [PostgreSQL replication
+setup][database-pg-replication] guide for more details.
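+
+On an Omnibus install, this value is typically set in `/etc/gitlab/gitlab.rb` on
+the **primary** database node, for example (a sketch; one slot per **secondary**
+node is a reasonable starting point):
+
+```ruby
+postgresql['max_replication_slots'] = 1
+```
+
+Run `sudo gitlab-ctl reconfigure` and then `sudo gitlab-ctl restart postgresql`
+for the change to take effect.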
+
+## How do I fix the message, "FATAL: could not start WAL streaming: ERROR: replication slot "geo_secondary_my_domain_com" does not exist"?
+
+This occurs when PostgreSQL does not have a replication slot for the
+**secondary** node by that name. You may want to rerun the [replication
+process](database.md) on the **secondary** node.
+
+## How do I fix the message, "Command exceeded allowed execution time" when setting up replication?
+
+This may happen while [initiating the replication process][database-start-replication] on the **secondary** node,
+and indicates that your initial dataset is too large to be replicated in the default timeout (30 minutes).
+
+Re-run `gitlab-ctl replicate-geo-database`, but include a larger value for
+`--backup-timeout`:
+
+```sh
+sudo gitlab-ctl \
+ replicate-geo-database \
+ --host=<primary_node_hostname> \
+ --slot-name=<secondary_slot_name> \
+ --backup-timeout=21600
+```
+
+This will give the initial replication up to six hours to complete, rather than
+the default thirty minutes. Adjust as required for your installation.
+
+## How do I fix the message, "PANIC: could not write to file 'pg_xlog/xlogtemp.123': No space left on device"
+
+Determine if you have any unused replication slots in the **primary** database. This can cause large amounts of
+log data to build up in `pg_xlog`. Removing the unused slots can reduce the amount of space used in the `pg_xlog`.
+
+1. Start a PostgreSQL console session:
+
+ ```sh
+ sudo gitlab-psql gitlabhq_production
+ ```
+
+ > Note that using `gitlab-rails dbconsole` will not work, because managing replication slots requires superuser permissions.
+
+1. View your replication slots with:
+
+ ```sql
+ SELECT * FROM pg_replication_slots;
+ ```
+
+Slots where `active` is `f` are not active.
+
+- When this slot should be active, because you have a **secondary** node configured using that slot,
+ log in to that **secondary** node and check the PostgreSQL logs why the replication is not running.
+
+- If you are no longer using the slot (e.g. you no longer have Geo enabled), you can remove it in the
+ PostgreSQL console session:
+
+ ```sql
+ SELECT pg_drop_replication_slot('<name_of_extra_slot>');
+ ```
+
+## Very large repositories never successfully synchronize on the **secondary** node
+
+GitLab places a timeout on all repository clones, including project imports
+and Geo synchronization operations. If a fresh `git clone` of a repository
+on the primary takes more than a few minutes, you may be affected by this.
+To increase the timeout, add the following line to `/etc/gitlab/gitlab.rb`
+on the **secondary** node:
+
+```ruby
+gitlab_rails['gitlab_shell_git_timeout'] = 10800
+```
+
+Then reconfigure GitLab:
+
+```sh
+sudo gitlab-ctl reconfigure
+```
+
+This will increase the timeout to three hours (10800 seconds). Choose a time
+long enough to accommodate a full clone of your largest repositories.
+
+## How to reset Geo **secondary** node replication
+
+If you get a **secondary** node in a broken state and want to reset the replication state,
+to start again from scratch, there are a few steps that can help you:
+
+1. Stop Sidekiq and the Geo LogCursor
+
+   It's possible to make Sidekiq stop gracefully by making it stop accepting new jobs and
+   waiting until the current jobs have finished processing.
+
+   You need to send a **SIGTSTP** kill signal for the first phase and then a **SIGTERM**
+   when all jobs have finished. Otherwise, just use the `gitlab-ctl stop` commands.
+
+ ```sh
+ gitlab-ctl status sidekiq
+ # run: sidekiq: (pid 10180) <- this is the PID you will use
+ kill -TSTP 10180 # change to the correct PID
+
+ gitlab-ctl stop sidekiq
+ gitlab-ctl stop geo-logcursor
+ ```
+
+   You can watch the Sidekiq logs to know when Sidekiq has finished processing its jobs:
+
+ ```sh
+ gitlab-ctl tail sidekiq
+ ```
+
+1. Rename repository storage folders and create new ones
+
+ ```sh
+ mv /var/opt/gitlab/git-data/repositories /var/opt/gitlab/git-data/repositories.old
+ mkdir -p /var/opt/gitlab/git-data/repositories
+ chown git:git /var/opt/gitlab/git-data/repositories
+ ```
+
+ TIP: **Tip**
+   You may want to remove the `/var/opt/gitlab/git-data/repositories.old` directory in the
+   future, as soon as you have confirmed that you no longer need it, to save disk space.
+
+1. _(Optional)_ Rename other data folders and create new ones
+
+ CAUTION: **Caution**:
+   You may still have files on the **secondary** node that have been removed from the **primary** node
+   but whose removal has not yet been reflected. If you skip this step, they will never be removed
+   from this Geo node.
+
+ Any uploaded content like file attachments, avatars or LFS objects are stored in a
+ subfolder in one of the two paths below:
+
+   - `/var/opt/gitlab/gitlab-rails/shared`
+   - `/var/opt/gitlab/gitlab-rails/uploads`
+
+ To rename all of them:
+
+ ```sh
+ gitlab-ctl stop
+
+ mv /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-rails/shared.old
+ mkdir -p /var/opt/gitlab/gitlab-rails/shared
+
+ mv /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/uploads.old
+ mkdir -p /var/opt/gitlab/gitlab-rails/uploads
+ ```
+
+   Reconfigure GitLab in order to recreate the folders and make sure permissions and ownership
+   are correct:
+
+ ```sh
+ gitlab-ctl reconfigure
+ ```
+
+1. Reset the Tracking Database
+
+ ```sh
+ gitlab-rake geo:db:reset
+ ```
+
+1. Restart previously stopped services
+
+ ```sh
+ gitlab-ctl start
+ ```
+
+## How do I fix a "Foreign Data Wrapper (FDW) is not configured" error?
+
+When setting up Geo, you might see this warning in the `gitlab-rake
+gitlab:geo:check` output:
+
+```
+GitLab Geo tracking database Foreign Data Wrapper schema is up-to-date? ... foreign data wrapper is not configured
+```
+
+There are a few key points to remember:
+
+1. The FDW settings are configured on the Geo **tracking** database.
+1. The configured foreign server enables a login to the Geo
+   **secondary**, read-only database.
+
+By default, the Geo secondary and tracking databases run on the
+same host on different ports: 5432 and 5431, respectively.
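+
+You can quickly confirm that both database services are up with, for example:
+
+```sh
+sudo gitlab-ctl status postgresql
+sudo gitlab-ctl status geo-postgresql
+```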
+
+### Checking configuration
+
+NOTE: **Note:**
+The following steps are for Omnibus installs only. Using Geo with source-based installs [is deprecated](index.md#using-gitlab-installed-from-source-deprecated).
+
+To check the configuration:
+
+1. Enter the database console:
+
+ ```sh
+ gitlab-geo-psql
+ ```
+
+1. Check whether any tables are present. If everything is working, you
+   should see something like this:
+
+ ```sql
+ gitlabhq_geo_production=# SELECT * from information_schema.foreign_tables;
+ foreign_table_catalog | foreign_table_schema | foreign_table_name | foreign_server_catalog | foreign_server_n
+ ame
+ -------------------------+----------------------+-------------------------------------------------+-------------------------+-----------------
+ ----
+ gitlabhq_geo_production | gitlab_secondary | abuse_reports | gitlabhq_geo_production | gitlab_secondary
+ gitlabhq_geo_production | gitlab_secondary | appearances | gitlabhq_geo_production | gitlab_secondary
+ gitlabhq_geo_production | gitlab_secondary | application_setting_terms | gitlabhq_geo_production | gitlab_secondary
+ gitlabhq_geo_production | gitlab_secondary | application_settings | gitlabhq_geo_production | gitlab_secondary
+ <snip>
+ ```
+
+ However, if the query returns with `0 rows`, then continue onto the next steps.
+
+1. Check that the foreign server mapping is correct via `\des+`. The
+ results should look something like this:
+
+ ```sql
+ gitlabhq_geo_production=# \des+
+ List of foreign servers
+ -[ RECORD 1 ]--------+------------------------------------------------------------
+ Name | gitlab_secondary
+ Owner | gitlab-psql
+ Foreign-data wrapper | postgres_fdw
+ Access privileges | "gitlab-psql"=U/"gitlab-psql" +
+ | gitlab_geo=U/"gitlab-psql"
+ Type |
+ Version |
+ FDW Options | (host '0.0.0.0', port '5432', dbname 'gitlabhq_production')
+ Description |
+ ```
+
+ NOTE: **Note:** Pay particular attention to the host and port under
+ FDW options. That configuration should point to the Geo secondary
+ database.
+
+ If you need to experiment with changing the host or password, the
+ following queries demonstrate how:
+
+ ```sql
+ ALTER SERVER gitlab_secondary OPTIONS (SET host '<my_new_host>');
+ ALTER SERVER gitlab_secondary OPTIONS (SET port 5432);
+ ```
+
+ If you change the host and/or port, you will also have to adjust the
+ following settings in `/etc/gitlab/gitlab.rb` and run `gitlab-ctl
+ reconfigure`:
+
+ - `gitlab_rails['db_host']`
+ - `gitlab_rails['db_port']`
+
+1. Check that the user mapping is configured properly via `\deu+`:
+
+ ```sql
+ gitlabhq_geo_production=# \deu+
+ List of user mappings
+ Server | User name | FDW Options
+ ------------------+------------+--------------------------------------------------------------------------------
+ gitlab_secondary | gitlab_geo | ("user" 'gitlab', password 'YOUR-PASSWORD-HERE')
+ (1 row)
+ ```
+
+ Make sure the password is correct. You can test that logins work by running `psql`:
+
+ ```sh
+ # Connect to the tracking database as the `gitlab_geo` user
+ sudo \
+ -u git /opt/gitlab/embedded/bin/psql \
+ -h /var/opt/gitlab/geo-postgresql \
+ -p 5431 \
+ -U gitlab_geo \
+ -W \
+ -d gitlabhq_geo_production
+ ```
+
+ If you need to correct the password, the following query shows how:
+
+ ```sql
+ ALTER USER MAPPING FOR gitlab_geo SERVER gitlab_secondary OPTIONS (SET password '<my_new_password>');
+ ```
+
+ If you change the user or password, you will also have to adjust the
+ following settings in `/etc/gitlab/gitlab.rb` and run `gitlab-ctl
+ reconfigure`:
+
+ - `gitlab_rails['db_username']`
+ - `gitlab_rails['db_password']`
+
+ If you are using [PgBouncer in front of the secondary
+ database](database.md#pgbouncer-support-optional), be sure to update
+ the following settings:
+
+ - `geo_postgresql['fdw_external_user']`
+ - `geo_postgresql['fdw_external_password']`
+
+### Manual reload of FDW schema
+
+If you're still unable to get FDW working, you may want to try a manual
+reload of the FDW schema. To manually reload the FDW schema:
+
+1. On the node running the Geo tracking database, enter the PostgreSQL console via
+ the `gitlab_geo` user:
+
+ ```sh
+ sudo \
+ -u git /opt/gitlab/embedded/bin/psql \
+ -h /var/opt/gitlab/geo-postgresql \
+ -p 5431 \
+ -U gitlab_geo \
+ -W \
+ -d gitlabhq_geo_production
+ ```
+
+ Be sure to adjust the port and hostname for your configuration. You
+ may be asked to enter a password.
+
+1. Reload the schema via:
+
+ ```sql
+ DROP SCHEMA IF EXISTS gitlab_secondary CASCADE;
+ CREATE SCHEMA gitlab_secondary;
+ GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO gitlab_geo;
+ IMPORT FOREIGN SCHEMA public FROM SERVER gitlab_secondary INTO gitlab_secondary;
+ ```
+
+1. Test that queries work:
+
+ ```sql
+ SELECT * from information_schema.foreign_tables;
+ SELECT * FROM gitlab_secondary.projects limit 1;
+ ```
+
+[database-start-replication]: database.md#step-3-initiate-the-replication-process
+[database-pg-replication]: database.md#postgresql-replication
diff --git a/doc/administration/geo/replication/tuning.md b/doc/administration/geo/replication/tuning.md
new file mode 100644
index 00000000000..b9921b2e69f
--- /dev/null
+++ b/doc/administration/geo/replication/tuning.md
@@ -0,0 +1,17 @@
+# Tuning Geo **[PREMIUM ONLY]**
+
+## Changing the sync capacity values
+
+In the Geo admin page (`/admin/geo/nodes`), there are several variables that
+can be tuned to improve performance of Geo:
+
+- Repository sync capacity.
+- File sync capacity.
+
+Increasing these values will increase the number of jobs that are scheduled.
+However, this may not lead to more downloads in parallel unless the number of
+available Sidekiq threads is also increased. For example, if repository sync
+capacity is increased from 25 to 50, you may also want to increase the number
+of Sidekiq threads from 25 to 50. See the
+[Sidekiq concurrency documentation](https://docs.gitlab.com/ee/administration/operations/extra_sidekiq_processes.html#number-of-threads)
+for more details.
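+
+On an Omnibus install, the Sidekiq thread count can usually be raised in
+`/etc/gitlab/gitlab.rb` (a sketch; check the linked Sidekiq documentation for
+the approach recommended for your version), followed by a reconfigure:
+
+```ruby
+# Match the Sidekiq concurrency to the configured sync capacities
+sidekiq['concurrency'] = 50
+```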
diff --git a/doc/administration/geo/replication/updating_the_geo_nodes.md b/doc/administration/geo/replication/updating_the_geo_nodes.md
new file mode 100644
index 00000000000..66728366e24
--- /dev/null
+++ b/doc/administration/geo/replication/updating_the_geo_nodes.md
@@ -0,0 +1,403 @@
+# Updating the Geo nodes **[PREMIUM ONLY]**
+
+Depending on which version of Geo you are updating to/from, there may be
+different steps.
+
+## General update steps
+
+In order to update the Geo nodes when a new GitLab version is released,
+all you need to do is update GitLab itself:
+
+1. Log into each node (**primary** and **secondary** nodes).
+1. [Update GitLab][update].
+1. [Update tracking database on **secondary** node](#update-tracking-database-on-secondary-node) when
+ the tracking database is enabled.
+1. [Test](#check-status-after-updating) **primary** and **secondary** nodes, and check version in each.
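+
+For an Omnibus package installation, the "Update GitLab" step typically means
+upgrading the package on each node, for example on Debian/Ubuntu:
+
+```sh
+sudo apt-get update
+sudo apt-get install gitlab-ee
+```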
+
+## Upgrading to GitLab 10.8
+
+Before 10.8, broadcast messages would not propagate without flushing the cache on the **secondary** nodes. This has been fixed in 10.8, but requires one last cache flush on each **secondary** node:
+
+```sh
+sudo gitlab-rake cache:clear
+```
+
+## Upgrading to GitLab 10.6
+
+In 10.4, we started to recommend that you define a password for the database user (`gitlab`).
+
+We now require this change because we use this password to enable the Foreign Data Wrapper, as a way to optimize
+the Geo Tracking Database. We are also improving security by disabling the use of the **trust**
+authentication method.
+
+1. **[primary]** Login to your **primary** node and run:
+
+ ```sh
+ gitlab-ctl pg-password-md5 gitlab
+ # Enter password: <your_password_here>
+ # Confirm password: <your_password_here>
+ # fca0b89a972d69f00eb3ec98a5838484
+ ```
+
+ Copy the generated hash and edit `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
+ postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
+
+ # Every node that runs Unicorn or Sidekiq needs to have the database
+ # password specified as below. If you have a high-availability setup, this
+ # must be present in all application nodes.
+ gitlab_rails['db_password'] = '<your_password_here>'
+ ```
+
+   Still in the configuration file, locate and remove the `trust_auth_cidr_addresses` setting:
+
+ ```ruby
+ postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','1.2.3.4/32'] # <- Remove this
+ ```
+
+1. **[primary]** Reconfigure and restart:
+
+ ```sh
+ sudo gitlab-ctl reconfigure
+ sudo gitlab-ctl restart
+ ```
+
+1. **[secondary]** Login to all **secondary** nodes and edit `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
+ postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
+
+ # Every node that runs Unicorn or Sidekiq needs to have the database
+ # password specified as below. If you have a high-availability setup, this
+ # must be present in all application nodes.
+ gitlab_rails['db_password'] = '<your_password_here>'
+
+ # Enable Foreign Data Wrapper
+ geo_secondary['db_fdw'] = true
+
+ # Secondary address in CIDR format, for example '5.6.7.8/32'
+ postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32']
+ ```
+
+   Still in the configuration file, locate and remove the `trust_auth_cidr_addresses` setting:
+
+ ```ruby
+ postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','5.6.7.8/32'] # <- Remove this
+ ```
+
+1. **[secondary]** Reconfigure and restart:
+
+ ```sh
+ sudo gitlab-ctl reconfigure
+ sudo gitlab-ctl restart
+ ```
+
+## Upgrading to GitLab 10.5
+
+For Geo Disaster Recovery to work with minimum downtime, your **secondary** node
+should use the same set of secrets as the **primary** node. However, setup instructions
+prior to the 10.5 release only synchronized the `db_key_base` secret.
+
+To rectify this error on existing installations, you should **overwrite** the
+contents of `/etc/gitlab/gitlab-secrets.json` on each **secondary** node with the
+contents of `/etc/gitlab/gitlab-secrets.json` on the **primary** node, then run the
+following command on each **secondary** node:
+
+```sh
+sudo gitlab-ctl reconfigure
+```
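+
+For example, a minimal sketch of the whole procedure run from a **secondary**
+node, assuming root SSH access to the **primary** node (the hostname
+`primary.example.com` is a placeholder):
+
+```sh
+# Keep a backup of the current secrets, then overwrite them with the primary's copy
+sudo cp /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json.bak
+sudo scp root@primary.example.com:/etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json
+sudo gitlab-ctl reconfigure
+```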
+
+If you do not perform this step, you may find that two-factor authentication
+[is broken following DR](../disaster_recovery/index.md#i-followed-the-disaster-recovery-instructions-and-now-two-factor-auth-is-broken).
+
+To prevent SSH requests to the newly promoted **primary** node from failing
+due to an SSH host key mismatch when updating the **primary** node domain's DNS record,
+you should perform the step to [manually replicate the **primary** node's SSH host keys](configuration.md#step-2-manually-replicate-the-primary-nodes-ssh-host-keys) on each
+**secondary** node.
+
+## Upgrading to GitLab 10.4
+
+There are no Geo-specific steps to take!
+
+## Upgrading to GitLab 10.3
+
+### Support for SSH repository synchronization removed
+
+In GitLab 10.2, synchronizing secondaries over SSH was deprecated. In 10.3,
+support is removed entirely. All installations will switch to the HTTP/HTTPS
+cloning method instead. Before upgrading, ensure that all your Geo nodes are
+configured to use this method and that it works for your installation. In
+particular, ensure that [Git access over HTTP/HTTPS is enabled](configuration.md#step-6-enable-git-access-over-httphttps).
+
+Synchronizing repositories over the public Internet using HTTP is insecure, so
+you should ensure that you have HTTPS configured before upgrading. Note that
+file synchronization is **also** insecure in these cases!
+
+## Upgrading to GitLab 10.2
+
+### Secure PostgreSQL replication
+
+Support for TLS-secured PostgreSQL replication has been added. If you are
+currently using PostgreSQL replication across the open internet without an
+external means of securing the connection (e.g., a site-to-site VPN), then you
+should immediately reconfigure your **primary** and **secondary** PostgreSQL instances
+according to the [updated instructions][database].
+
+If you *are* securing the connections externally and wish to continue doing so,
+ensure you include the new option `--sslmode=prefer` in future invocations of
+`gitlab-ctl replicate-geo-database`.
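+
+For example, a later re-initialization of the replication might look like the
+following sketch; the host and slot name are placeholders for your own values:
+
+```sh
+sudo gitlab-ctl replicate-geo-database \
+  --host=primary.geo.example.com \
+  --slot-name=secondary_example \
+  --sslmode=prefer
+```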
+
+### HTTPS repository sync
+
+Support for replicating repositories and wikis over HTTP/HTTPS has been added.
+Replicating over SSH has been deprecated, and support for this option will be
+removed in a future release.
+
+To switch to HTTP/HTTPS replication, log into the **primary** node as an admin and visit
+**Admin Area > Geo** (`/admin/geo/nodes`). For each **secondary** node listed,
+press the "Edit" button, change the "Repository cloning" setting from
+"SSH (deprecated)" to "HTTP/HTTPS", and press "Save changes". This should take
+effect immediately.
+
+Any new secondaries should be created using HTTP/HTTPS replication, which is the
+default setting.
+
+After you've verified that HTTP/HTTPS replication is working, you should remove
+the now-unused SSH keys from your secondaries, as they may cause problems if the
+**secondary** node is ever promoted to a **primary** node:
+
+1. **[secondary]** Login to **all** your **secondary** nodes and run:
+
+   ```sh
+ sudo -u git -H rm ~git/.ssh/id_rsa ~git/.ssh/id_rsa.pub
+ ```
+
+### Hashed Storage
+
+CAUTION: **Warning:**
+Hashed storage is in **Alpha**. It is considered experimental and not
+production-ready. See [Hashed Storage] for more detail.
+
+If you previously enabled Hashed Storage and migrated all your existing
+projects to Hashed Storage, disabling Hashed Storage will not migrate projects
+back to their previous project-based storage paths. As such, once enabled and
+migrated, we recommend leaving Hashed Storage enabled.
+
+## Upgrading to GitLab 10.1
+
+CAUTION: **Warning:**
+Hashed storage is in **Alpha**. It is considered experimental and not
+production-ready. See [Hashed Storage] for more detail.
+
+[Hashed storage] was introduced in GitLab 10.0, and a [migration path][hashed-migration]
+for existing repositories was added in GitLab 10.1.
+
+## Upgrading to GitLab 10.0
+
+Since GitLab 10.0, we require all **Geo** systems to [use SSH key lookups via
+the database][ssh-fast-lookup] to avoid having to maintain consistency of the
+`authorized_keys` file for SSH access. Failing to do this will prevent users
+from being able to clone via SSH.
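+
+For reference, a sketch of the Omnibus setup described in the linked
+documentation: add the following to `/etc/ssh/sshd_config` on each node and
+reload `sshd`. The paths assume a default Omnibus installation:
+
+```
+Match User git    # Apply the AuthorizedKeysCommand to the git user only
+  AuthorizedKeysCommand /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-keys-check git %u %k
+  AuthorizedKeysCommandUser git
+Match all    # End match, settings apply to all users again
+```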
+
+Note that in older versions of Geo, attachments downloaded on the **secondary**
+nodes would be saved to the wrong directory. We recommend that you do the
+following to clean this up.
+
+On the **secondary** Geo nodes, run as root:
+
+```sh
+mv /var/opt/gitlab/gitlab-rails/working /var/opt/gitlab/gitlab-rails/working.old
+mkdir /var/opt/gitlab/gitlab-rails/working
+chmod 700 /var/opt/gitlab/gitlab-rails/working
+chown git:git /var/opt/gitlab/gitlab-rails/working
+```
+
+You may delete `/var/opt/gitlab/gitlab-rails/working.old` at any time.
+
+Once this is done, we advise restarting GitLab on the **secondary** nodes for the
+new working directory to be used:
+
+```sh
+sudo gitlab-ctl restart
+```
+
+## Upgrading from GitLab 9.3 or older
+
+If you started running Geo on GitLab 9.3 or older, we recommend that you
+resync your **secondary** PostgreSQL databases to use replication slots. If you
+started using Geo with GitLab 9.4 or 10.x, no further action should be
+required because replication slots are used by default. However, if you
+started with GitLab 9.3 and upgraded later, you should still follow the
+instructions below.
+
+When in doubt, it does not hurt to do a resync. The easiest way to do this in
+Omnibus is the following:
+
+1. Make sure you have Omnibus GitLab on the **primary** server.
+1. Run `gitlab-ctl reconfigure` and `gitlab-ctl restart postgresql`. This will enable replication slots on the **primary** database.
+1. Check the steps about defining `postgresql['sql_user_password']` and `gitlab_rails['db_password']` (see the sketch after this list).
+1. Make sure `postgresql['max_replication_slots']` matches the number of **secondary** Geo node locations.
+1. Install GitLab on the **secondary** server.
+1. Re-run the [database replication process][database-replication].
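+
+A minimal sketch of the relevant `/etc/gitlab/gitlab.rb` entries on the
+**primary** node for steps 3 and 4 above; all values are placeholders:
+
+```ruby
+# Hash generated by `gitlab-ctl pg-password-md5 gitlab`
+postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
+
+# Plaintext password used by Unicorn and Sidekiq to reach the database
+gitlab_rails['db_password'] = '<your_password_here>'
+
+# Allow at least one replication slot per secondary node location
+postgresql['max_replication_slots'] = 1
+```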
+
+## Special update notes for 9.0.x
+
+> **IMPORTANT**:
+With GitLab 9.0, the PostgreSQL version is upgraded to 9.6 and manual steps are
+required in order to update the **secondary** nodes and keep the Streaming
+Replication working. Downtime is required, so plan ahead.
+
+The following steps apply only if you upgrade from GitLab 8.17 to
+9.0+. For older versions, update to GitLab 8.17 first before attempting to
+upgrade to 9.0+.
+
+---
+
+Make sure to follow the steps in the exact order in which they appear below and pay
+extra attention to which node (**primary** or **secondary**) you execute them on! Each step
+is prefixed with the relevant node for clarity:
+
+1. **[secondary]** Login to **all** your **secondary** nodes and stop all services:
+
+   ```sh
+ sudo gitlab-ctl stop
+ ```
+
+1. **[secondary]** Make a backup of the `recovery.conf` file on **all**
+ **secondary** nodes to preserve PostgreSQL's credentials:
+
+ ```sh
+ sudo cp /var/opt/gitlab/postgresql/data/recovery.conf /var/opt/gitlab/
+ ```
+
+1. **[primary]** Update the **primary** node to GitLab 9.0 following the
+ [regular update docs][update]. At the end of the update, the **primary** node
+ will be running with PostgreSQL 9.6.
+
+1. **[primary]** To prevent a de-synchronization of the repository replication,
+ stop all services except `postgresql` as we will use it to re-initialize the
+ **secondary** node's database:
+
+ ```sh
+ sudo gitlab-ctl stop
+ sudo gitlab-ctl start postgresql
+ ```
+
+1. **[secondary]** Run the following steps on each of the **secondary** nodes:
+
+ 1. **[secondary]** Stop all services:
+
+ ```sh
+ sudo gitlab-ctl stop
+ ```
+
+ 1. **[secondary]** Prevent running database migrations:
+
+ ```sh
+ sudo touch /etc/gitlab/skip-auto-migrations
+ ```
+
+ 1. **[secondary]** Move the old database to another directory:
+
+ ```sh
+ sudo mv /var/opt/gitlab/postgresql{,.bak}
+ ```
+
+ 1. **[secondary]** Update to GitLab 9.0 following the [regular update docs][update].
+ At the end of the update, the node will be running with PostgreSQL 9.6.
+
+ 1. **[secondary]** Make sure all services are up:
+
+ ```sh
+ sudo gitlab-ctl start
+ ```
+
+ 1. **[secondary]** Reconfigure GitLab:
+
+ ```sh
+ sudo gitlab-ctl reconfigure
+ ```
+
+ 1. **[secondary]** Run the PostgreSQL upgrade command:
+
+ ```sh
+ sudo gitlab-ctl pg-upgrade
+ ```
+
+ 1. **[secondary]** See the stored credentials for the database that you will
+ need to re-initialize the replication:
+
+ ```sh
+ sudo grep -s primary_conninfo /var/opt/gitlab/recovery.conf
+ ```
+
+ 1. **[secondary]** Create the `replica.sh` script as described in the
+ [database configuration document][database-source-replication].
+
+ 1. **[secondary]** Run the recovery script using the credentials from the
+ previous step:
+
+ ```sh
+ sudo bash /tmp/replica.sh
+ ```
+
+ 1. **[secondary]** Reconfigure GitLab:
+
+ ```sh
+ sudo gitlab-ctl reconfigure
+ ```
+
+ 1. **[secondary]** Start all services:
+
+ ```sh
+ sudo gitlab-ctl start
+ ```
+
+ 1. **[secondary]** Repeat the steps for the remaining **secondary** nodes.
+
+1. **[primary]** After all **secondary** nodes are updated, start all services in
+ **primary** node:
+
+ ```sh
+ sudo gitlab-ctl start
+ ```
+
+## Check status after updating
+
+Now that the update process is complete, you may want to check whether
+everything is working correctly:
+
+1. Run the Geo Rake task on all nodes; everything should be green:
+
+ ```sh
+ sudo gitlab-rake gitlab:geo:check
+ ```
+
+1. Check the **primary** node's Geo dashboard for any errors.
+1. Test data replication by pushing code to the **primary** node and confirming that it
+   is received by the **secondary** nodes.
+
+## Update tracking database on **secondary** node
+
+After updating a **secondary** node, you might need to run migrations on
+the tracking database. The tracking database was added in GitLab 9.1,
+and it is required since 10.0.
+
+1. Run database migrations on tracking database:
+
+ ```sh
+ sudo gitlab-rake geo:db:migrate
+ ```
+
+1. Repeat this step for each **secondary** node.
+
+[update]: ../../../update/README.md
+[database]: database.md
+[database-replication]: database.md#step-3-initiate-the-replication-process
+[database-source-replication]: database_source.md#step-3-initiate-the-replication-process
+[Hashed Storage]: ../../repository_storage_types.md
+[hashed-migration]: ../../raketasks/storage.md
+[ssh-fast-lookup]: ../../operations/fast_ssh_key_lookup.md
diff --git a/doc/administration/geo/replication/using_a_geo_server.md b/doc/administration/geo/replication/using_a_geo_server.md
new file mode 100644
index 00000000000..f1f1fe48748
--- /dev/null
+++ b/doc/administration/geo/replication/using_a_geo_server.md
@@ -0,0 +1,18 @@
+[//]: # (Please update EE::GitLab::GeoGitAccess::GEO_SERVER_DOCS_URL if this file is moved)
+
+# Using a Geo Server **[PREMIUM ONLY]**
+
+After you set up the [database replication and configure the Geo nodes][req], use your closest GitLab node as you would a normal standalone GitLab instance.
+
+Pushing directly to a **secondary** node (over both HTTP and SSH, including Git LFS) was [introduced](https://about.gitlab.com/2018/09/22/gitlab-11-3-released/) in [GitLab Premium](https://about.gitlab.com/pricing/#self-managed) 11.3.
+
+Example of the output you will see when pushing to a **secondary** node:
+
+```bash
+$ git push
+> GitLab: You're pushing to a Geo secondary.
+> GitLab: We'll help you by proxying this request to the primary: ssh://git@primary.geo/user/repo.git
+Everything up-to-date
+```
+
+[req]: index.md#setup-instructions