author    Marcel Amirault <mamirault@gitlab.com>    2019-07-03 01:37:27 +0000
committer Evan Read <eread@gitlab.com>              2019-07-03 01:37:27 +0000
commit    f3ed4eccd5444e9130634a61a116c3aaf4872fee (patch)
tree      b2aff35f5537f6ebefbae6d4b909d4d2527e6eba /doc
parent    4771ad2c9d01a6b1b0d5e35809100ab86ca31c32 (diff)
Fix spacing in geo docs
Indentation of content following list markers should be 2 spaces for unordered lists, and 3 spaces for numbered lists
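As a minimal sketch of the convention this commit applies (the list content below is illustrative, reusing commands that appear in the diff), continuation text and fenced code blocks are indented 3 spaces under a numbered list item and 2 spaces under a bulleted item so they render as part of the item:

````markdown
1. Make sure all the services are up:

   ```sh
   sudo gitlab-ctl start
   ```

- Reconfigure the load balancers.
  Continuation text under a bulleted item is indented 2 spaces.
````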
Diffstat (limited to 'doc')
-rw-r--r--  doc/administration/geo/disaster_recovery/background_verification.md | 10
-rw-r--r--  doc/administration/geo/disaster_recovery/bring_primary_back.md | 28
-rw-r--r--  doc/administration/geo/disaster_recovery/index.md | 229
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md | 41
-rw-r--r--  doc/administration/geo/replication/configuration.md | 175
-rw-r--r--  doc/administration/geo/replication/database.md | 600
-rw-r--r--  doc/administration/geo/replication/external_database.md | 160
-rw-r--r--  doc/administration/geo/replication/high_availability.md | 252
-rw-r--r--  doc/administration/geo/replication/index.md | 8
-rw-r--r--  doc/administration/geo/replication/remove_geo_node.md | 45
-rw-r--r--  doc/administration/geo/replication/troubleshooting.md | 326
-rw-r--r--  doc/administration/geo/replication/updating_the_geo_nodes.md | 309
12 files changed, 1092 insertions, 1091 deletions
diff --git a/doc/administration/geo/disaster_recovery/background_verification.md b/doc/administration/geo/disaster_recovery/background_verification.md
index d4c8c2d3624..e19cd9bbfec 100644
--- a/doc/administration/geo/disaster_recovery/background_verification.md
+++ b/doc/administration/geo/disaster_recovery/background_verification.md
@@ -143,11 +143,13 @@ If the **primary** and **secondary** nodes have a checksum verification mismatch
1. On the project admin page get the **Gitaly storage name**, and **Gitaly relative path**:
![Project admin page](img/checksum-differences-admin-project-page.png)
-1. Navigate to the project's repository directory on both **primary** and **secondary** nodes (the path is usually `/var/opt/gitlab/git-data/repositories`). Note that if `git_data_dirs` is customized, check the directory layout on your server to be sure.
+1. Navigate to the project's repository directory on both **primary** and **secondary** nodes
+ (the path is usually `/var/opt/gitlab/git-data/repositories`). Note that if `git_data_dirs`
+ is customized, check the directory layout on your server to be sure.
- ```sh
- cd /var/opt/gitlab/git-data/repositories
- ```
+ ```sh
+ cd /var/opt/gitlab/git-data/repositories
+ ```
1. Run the following command on the **primary** node, redirecting the output to a file:
diff --git a/doc/administration/geo/disaster_recovery/bring_primary_back.md b/doc/administration/geo/disaster_recovery/bring_primary_back.md
index f4d31a98080..9a981b49349 100644
--- a/doc/administration/geo/disaster_recovery/bring_primary_back.md
+++ b/doc/administration/geo/disaster_recovery/bring_primary_back.md
@@ -21,20 +21,20 @@ To bring the former **primary** node up to date:
1. SSH into the former **primary** node that has fallen behind.
1. Make sure all the services are up:
- ```sh
- sudo gitlab-ctl start
- ```
-
- > **Note 1:** If you [disabled the **primary** node permanently][disaster-recovery-disable-primary],
- > you need to undo those steps now. For Debian/Ubuntu you just need to run
- > `sudo systemctl enable gitlab-runsvdir`. For CentOS 6, you need to install
- > the GitLab instance from scratch and set it up as a **secondary** node by
- > following [Setup instructions][setup-geo]. In this case, you don't need to follow the next step.
- >
- > **Note 2:** If you [changed the DNS records](index.md#step-4-optional-updating-the-primary-domain-dns-record)
- > for this node during disaster recovery procedure you may need to [block
- > all the writes to this node](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/doc/gitlab-geo/planned-failover.md#block-primary-traffic)
- > during this procedure.
+ ```sh
+ sudo gitlab-ctl start
+ ```
+
+ NOTE: **Note:** If you [disabled the **primary** node permanently][disaster-recovery-disable-primary],
+ you need to undo those steps now. For Debian/Ubuntu you just need to run
+ `sudo systemctl enable gitlab-runsvdir`. For CentOS 6, you need to install
+ the GitLab instance from scratch and set it up as a **secondary** node by
+ following [Setup instructions][setup-geo]. In this case, you don't need to follow the next step.
+
+ NOTE: **Note:** If you [changed the DNS records](index.md#step-4-optional-updating-the-primary-domain-dns-record)
+ for this node during the disaster recovery procedure, you may need to [block
+ all the writes to this node](planned_failover.md#prevent-updates-to-the-primary-node)
+ during this procedure.
1. [Setup database replication][database-replication]. Note that in this
case, **primary** node refers to the current **primary** node, and **secondary** node refers to the
diff --git a/doc/administration/geo/disaster_recovery/index.md b/doc/administration/geo/disaster_recovery/index.md
index 71dc797f281..86182b84062 100644
--- a/doc/administration/geo/disaster_recovery/index.md
+++ b/doc/administration/geo/disaster_recovery/index.md
@@ -39,48 +39,50 @@ must disable the **primary** node.
1. SSH into the **primary** node to stop and disable GitLab, if possible:
- ```sh
- sudo gitlab-ctl stop
- ```
+ ```sh
+ sudo gitlab-ctl stop
+ ```
- Prevent GitLab from starting up again if the server unexpectedly reboots:
+ Prevent GitLab from starting up again if the server unexpectedly reboots:
- ```sh
- sudo systemctl disable gitlab-runsvdir
- ```
+ ```sh
+ sudo systemctl disable gitlab-runsvdir
+ ```
- > **CentOS only**: In CentOS 6 or older, there is no easy way to prevent GitLab from being
- > started if the machine reboots isn't available (see [gitlab-org/omnibus-gitlab#3058]).
- > It may be safest to uninstall the GitLab package completely:
+ NOTE: **Note:**
+ (**CentOS only**) In CentOS 6 or older, there is no easy way to prevent GitLab from
+ starting if the machine reboots (see [gitlab-org/omnibus-gitlab#3058]).
+ It may be safest to uninstall the GitLab package completely:
- ```sh
- yum remove gitlab-ee
- ```
+ ```sh
+ yum remove gitlab-ee
+ ```
- > **Ubuntu 14.04 LTS**: If you are using an older version of Ubuntu
- > or any other distro based on the Upstart init system, you can prevent GitLab
- > from starting if the machine reboots by doing the following:
+ NOTE: **Note:**
+ (**Ubuntu 14.04 LTS**) If you are using an older version of Ubuntu
+ or any other distro based on the Upstart init system, you can prevent GitLab
+ from starting if the machine reboots by doing the following:
- ```sh
- initctl stop gitlab-runsvvdir
- echo 'manual' > /etc/init/gitlab-runsvdir.override
- initctl reload-configuration
- ```
+ ```sh
+ initctl stop gitlab-runsvdir
+ echo 'manual' > /etc/init/gitlab-runsvdir.override
+ initctl reload-configuration
+ ```
1. If you do not have SSH access to the **primary** node, take the machine offline and
- prevent it from rebooting by any means at your disposal.
- Since there are many ways you may prefer to accomplish this, we will avoid a
- single recommendation. You may need to:
- - Reconfigure the load balancers.
- - Change DNS records (e.g., point the primary DNS record to the **secondary**
- node in order to stop usage of the **primary** node).
- - Stop the virtual servers.
- - Block traffic through a firewall.
- - Revoke object storage permissions from the **primary** node.
- - Physically disconnect a machine.
-
-1. If you plan to
- [update the primary domain DNS record](#step-4-optional-updating-the-primary-domain-dns-record),
+ prevent it from rebooting by any means at your disposal.
+ Since there are many ways you may prefer to accomplish this, we will avoid a
+ single recommendation. You may need to:
+
+ - Reconfigure the load balancers.
+ - Change DNS records (e.g., point the primary DNS record to the **secondary**
+ node in order to stop usage of the **primary** node).
+ - Stop the virtual servers.
+ - Block traffic through a firewall.
+ - Revoke object storage permissions from the **primary** node.
+ - Physically disconnect a machine.
+
+1. If you plan to [update the primary domain DNS record](#step-4-optional-updating-the-primary-domain-dns-record),
you may wish to lower the TTL now to speed up propagation.
### Step 3. Promoting a **secondary** node
@@ -94,26 +96,26 @@ the **secondary** to the **primary**.
1. SSH in to your **secondary** node and login as root:
- ```sh
- sudo -i
- ```
+ ```sh
+ sudo -i
+ ```
1. Edit `/etc/gitlab/gitlab.rb` to reflect its new status as **primary** by
removing any lines that enabled the `geo_secondary_role`:
- ```ruby
- ## In pre-11.5 documentation, the role was enabled as follows. Remove this line.
- geo_secondary_role['enable'] = true
+ ```ruby
+ ## In pre-11.5 documentation, the role was enabled as follows. Remove this line.
+ geo_secondary_role['enable'] = true
- ## In 11.5+ documentation, the role was enabled as follows. Remove this line.
- roles ['geo_secondary_role']
- ```
+ ## In 11.5+ documentation, the role was enabled as follows. Remove this line.
+ roles ['geo_secondary_role']
+ ```
1. Promote the **secondary** node to the **primary** node. Execute:
- ```sh
- gitlab-ctl promote-to-primary-node
- ```
+ ```sh
+ gitlab-ctl promote-to-primary-node
+ ```
1. Verify you can connect to the newly promoted **primary** node using the URL used
previously for the **secondary** node.
@@ -129,31 +131,31 @@ do this manually.
1. SSH in to the database node in the **secondary** and trigger PostgreSQL to
promote to read-write:
- ```bash
- sudo gitlab-pg-ctl promote
- ```
+ ```bash
+ sudo gitlab-pg-ctl promote
+ ```
1. Edit `/etc/gitlab/gitlab.rb` on every machine in the **secondary** to
reflect its new status as **primary** by removing any lines that enabled the
`geo_secondary_role`:
- ```ruby
- ## In pre-11.5 documentation, the role was enabled as follows. Remove this line.
- geo_secondary_role['enable'] = true
+ ```ruby
+ ## In pre-11.5 documentation, the role was enabled as follows. Remove this line.
+ geo_secondary_role['enable'] = true
- ## In 11.5+ documentation, the role was enabled as follows. Remove this line.
- roles ['geo_secondary_role']
- ```
+ ## In 11.5+ documentation, the role was enabled as follows. Remove this line.
+ roles ['geo_secondary_role']
+ ```
- After making these changes [Reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) each
- machine so the changes take effect.
+ After making these changes [Reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) each
+ machine so the changes take effect.
1. Promote the **secondary** to **primary**. SSH into a single application
server and execute:
- ```bash
- sudo gitlab-rake geo:set_secondary_as_primary
- ```
+ ```bash
+ sudo gitlab-rake geo:set_secondary_as_primary
+ ```
1. Verify you can connect to the newly promoted **primary** using the URL used
previously for the **secondary**.
@@ -167,37 +169,37 @@ secondary domain, like changing Git remotes and API URLs.
1. SSH into the **secondary** node and login as root:
- ```sh
- sudo -i
- ```
+ ```sh
+ sudo -i
+ ```
1. Update the primary domain's DNS record. After updating the primary domain's
DNS records to point to the **secondary** node, edit `/etc/gitlab/gitlab.rb` on the
**secondary** node to reflect the new URL:
- ```ruby
- # Change the existing external_url configuration
- external_url 'https://<new_external_url>'
- ```
+ ```ruby
+ # Change the existing external_url configuration
+ external_url 'https://<new_external_url>'
+ ```
- NOTE: **Note**
- Changing `external_url` won't prevent access via the old secondary URL, as
- long as the secondary DNS records are still intact.
+ NOTE: **Note**
+ Changing `external_url` won't prevent access via the old secondary URL, as
+ long as the secondary DNS records are still intact.
1. Reconfigure the **secondary** node for the change to take effect:
- ```sh
- gitlab-ctl reconfigure
- ```
+ ```sh
+ gitlab-ctl reconfigure
+ ```
1. Execute the command below to update the newly promoted **primary** node URL:
- ```sh
- gitlab-rake geo:update_primary_node_url
- ```
+ ```sh
+ gitlab-rake geo:update_primary_node_url
+ ```
- This command will use the changed `external_url` configuration defined
- in `/etc/gitlab/gitlab.rb`.
+ This command will use the changed `external_url` configuration defined
+ in `/etc/gitlab/gitlab.rb`.
1. Verify you can connect to the newly promoted **primary** using its URL.
If you updated the DNS records for the primary domain, these changes may
@@ -231,62 +233,61 @@ and after that you also need two extra steps.
1. SSH into the new **primary** node and login as root:
- ```sh
- sudo -i
- ```
+ ```sh
+ sudo -i
+ ```
1. Edit `/etc/gitlab/gitlab.rb`
- ```ruby
- ## Enable a Geo Primary role (if you haven't yet)
- roles ['geo_primary_role']
+ ```ruby
+ ## Enable a Geo Primary role (if you haven't yet)
+ roles ['geo_primary_role']
- ##
- # Allow PostgreSQL client authentication from the primary and secondary IPs. These IPs may be
- # public or VPC addresses in CIDR format, for example ['198.51.100.1/32', '198.51.100.2/32']
- ##
- postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32']
+ ##
+ # Allow PostgreSQL client authentication from the primary and secondary IPs. These IPs may be
+ # public or VPC addresses in CIDR format, for example ['198.51.100.1/32', '198.51.100.2/32']
+ ##
+ postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32']
- # Every secondary server needs to have its own slot so specify the number of secondary nodes you're going to have
- postgresql['max_replication_slots'] = 1
+ # Every secondary server needs to have its own slot so specify the number of secondary nodes you're going to have
+ postgresql['max_replication_slots'] = 1
- ##
- ## Disable automatic database migrations temporarily
- ## (until PostgreSQL is restarted and listening on the private address).
- ##
- gitlab_rails['auto_migrate'] = false
+ ##
+ ## Disable automatic database migrations temporarily
+ ## (until PostgreSQL is restarted and listening on the private address).
+ ##
+ gitlab_rails['auto_migrate'] = false
+ ```
- ```
-
- (For more details about these settings you can read [Configure the primary server][configure-the-primary-server])
+ (For more details about these settings you can read [Configure the primary server][configure-the-primary-server])
1. Save the file and reconfigure GitLab for the database listen changes and
the replication slot changes to be applied.
- ```sh
- gitlab-ctl reconfigure
- ```
+ ```sh
+ gitlab-ctl reconfigure
+ ```
- Restart PostgreSQL for its changes to take effect:
+ Restart PostgreSQL for its changes to take effect:
- ```sh
- gitlab-ctl restart postgresql
- ```
+ ```sh
+ gitlab-ctl restart postgresql
+ ```
1. Re-enable migrations now that PostgreSQL is restarted and listening on the
private address.
- Edit `/etc/gitlab/gitlab.rb` and **change** the configuration to `true`:
+ Edit `/etc/gitlab/gitlab.rb` and **change** the configuration to `true`:
- ```ruby
- gitlab_rails['auto_migrate'] = true
- ```
+ ```ruby
+ gitlab_rails['auto_migrate'] = true
+ ```
- Save the file and reconfigure GitLab:
+ Save the file and reconfigure GitLab:
- ```sh
- gitlab-ctl reconfigure
- ```
+ ```sh
+ gitlab-ctl reconfigure
+ ```
### Step 2. Initiate the replication process
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index b8071b5993f..c1a95157f8d 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -143,26 +143,26 @@ access to the **primary** node during the maintenance window.
all HTTP, HTTPS and SSH traffic to/from the **primary** node, **except** for your IP and
the **secondary** node's IP.
- For instance, you might run the following commands on the server(s) making up your **primary** node:
+ For instance, you might run the following commands on the server(s) making up your **primary** node:
- ```sh
- sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 22 -j ACCEPT
- sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 22 -j ACCEPT
- sudo iptables -A INPUT --destination-port 22 -j REJECT
+ ```sh
+ sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 22 -j ACCEPT
+ sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 22 -j ACCEPT
+ sudo iptables -A INPUT --destination-port 22 -j REJECT
- sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 80 -j ACCEPT
- sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 80 -j ACCEPT
- sudo iptables -A INPUT --tcp-dport 80 -j REJECT
+ sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 80 -j ACCEPT
+ sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 80 -j ACCEPT
+ sudo iptables -A INPUT --tcp-dport 80 -j REJECT
- sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 443 -j ACCEPT
- sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 443 -j ACCEPT
- sudo iptables -A INPUT --tcp-dport 443 -j REJECT
- ```
+ sudo iptables -A INPUT -p tcp -s <secondary_node_ip> --destination-port 443 -j ACCEPT
+ sudo iptables -A INPUT -p tcp -s <your_ip> --destination-port 443 -j ACCEPT
+ sudo iptables -A INPUT --tcp-dport 443 -j REJECT
+ ```
- From this point, users will be unable to view their data or make changes on the
- **primary** node. They will also be unable to log in to the **secondary** node.
- However, existing sessions will work for the remainder of the maintenance period, and
- public data will be accessible throughout.
+ From this point, users will be unable to view their data or make changes on the
+ **primary** node. They will also be unable to log in to the **secondary** node.
+ However, existing sessions will work for the remainder of the maintenance period, and
+ public data will be accessible throughout.
1. Verify the **primary** node is blocked to HTTP traffic by visiting it in browser via
another IP. The server should refuse connection.
@@ -187,10 +187,11 @@ access to the **primary** node during the maintenance window.
before it is completed will cause the work to be lost.
1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
following conditions to be true of the **secondary** node you are failing over to:
- - All replication meters to each 100% replicated, 0% failures.
- - All verification meters reach 100% verified, 0% failures.
- - Database replication lag is 0ms.
- - The Geo log cursor is up to date (0 events behind).
+
+ - All replication meters reach 100% replicated, 0% failures.
+ - All verification meters reach 100% verified, 0% failures.
+ - Database replication lag is 0ms.
+ - The Geo log cursor is up to date (0 events behind).
1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
index 3d4f69d3abe..dd5e09c0dd7 100644
--- a/doc/administration/geo/replication/configuration.md
+++ b/doc/administration/geo/replication/configuration.md
@@ -16,11 +16,10 @@ The basic steps of configuring a **secondary** node are to:
You are encouraged to first read through all the steps before executing them
in your testing/production environment.
-> **Notes:**
-> - **Do not** setup any custom authentication for the **secondary** nodes. This will be
- handled by the **primary** node.
-> - Any change that requires access to the **Admin Area** needs to be done in the
- **primary** node because the **secondary** node is a read-only replica.
+NOTE: **Note:**
+**Do not** set up any custom authentication for the **secondary** nodes. This will be handled by the **primary** node.
+Any change that requires access to the **Admin Area** needs to be done in the
+**primary** node because the **secondary** node is a read-only replica.
### Step 1. Manually replicate secret GitLab values
@@ -31,47 +30,47 @@ they must be manually replicated to the **secondary** node.
1. SSH into the **primary** node, and execute the command below:
- ```sh
- sudo cat /etc/gitlab/gitlab-secrets.json
- ```
+ ```sh
+ sudo cat /etc/gitlab/gitlab-secrets.json
+ ```
- This will display the secrets that need to be replicated, in JSON format.
+ This will display the secrets that need to be replicated, in JSON format.
1. SSH into the **secondary** node and login as the `root` user:
- ```sh
- sudo -i
- ```
+ ```sh
+ sudo -i
+ ```
1. Make a backup of any existing secrets:
- ```sh
- mv /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json.`date +%F`
- ```
+ ```sh
+ mv /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json.`date +%F`
+ ```
1. Copy `/etc/gitlab/gitlab-secrets.json` from the **primary** node to the **secondary** node, or
copy-and-paste the file contents between nodes:
- ```sh
- sudo editor /etc/gitlab/gitlab-secrets.json
+ ```sh
+ sudo editor /etc/gitlab/gitlab-secrets.json
- # paste the output of the `cat` command you ran on the primary
- # save and exit
- ```
+ # paste the output of the `cat` command you ran on the primary
+ # save and exit
+ ```
1. Ensure the file permissions are correct:
- ```sh
- chown root:root /etc/gitlab/gitlab-secrets.json
- chmod 0600 /etc/gitlab/gitlab-secrets.json
- ```
+ ```sh
+ chown root:root /etc/gitlab/gitlab-secrets.json
+ chmod 0600 /etc/gitlab/gitlab-secrets.json
+ ```
1. Reconfigure the **secondary** node for the change to take effect:
- ```sh
- gitlab-ctl reconfigure
- gitlab-ctl restart
- ```
+ ```sh
+ gitlab-ctl reconfigure
+ gitlab-ctl restart
+ ```
### Step 2. Manually replicate the **primary** node's SSH host keys
@@ -89,80 +88,80 @@ keys must be manually replicated to the **secondary** node.
1. SSH into the **secondary** node and login as the `root` user:
- ```sh
- sudo -i
- ```
+ ```sh
+ sudo -i
+ ```
1. Make a backup of any existing SSH host keys:
- ```sh
- find /etc/ssh -iname ssh_host_* -exec cp {} {}.backup.`date +%F` \;
- ```
+ ```sh
+ find /etc/ssh -iname ssh_host_* -exec cp {} {}.backup.`date +%F` \;
+ ```
1. Copy OpenSSH host keys from the **primary** node:
- If you can access your **primary** node using the **root** user:
+ If you can access your **primary** node using the **root** user:
- ```sh
- # Run this from the secondary node, change `<primary_node_fqdn>` for the IP or FQDN of the server
- scp root@<primary_node_fqdn>:/etc/ssh/ssh_host_*_key* /etc/ssh
- ```
+ ```sh
+ # Run this from the secondary node, change `<primary_node_fqdn>` for the IP or FQDN of the server
+ scp root@<primary_node_fqdn>:/etc/ssh/ssh_host_*_key* /etc/ssh
+ ```
- If you only have access through a user with **sudo** privileges:
+ If you only have access through a user with **sudo** privileges:
- ```sh
- # Run this from your primary node:
- sudo tar --transform 's/.*\///g' -zcvf ~/geo-host-key.tar.gz /etc/ssh/ssh_host_*_key*
+ ```sh
+ # Run this from your primary node:
+ sudo tar --transform 's/.*\///g' -zcvf ~/geo-host-key.tar.gz /etc/ssh/ssh_host_*_key*
- # Run this from your secondary node:
- scp <user_with_sudo>@<primary_node_fqdn>:geo-host-key.tar.gz .
- tar zxvf ~/geo-host-key.tar.gz -C /etc/ssh
- ```
+ # Run this from your secondary node:
+ scp <user_with_sudo>@<primary_node_fqdn>:geo-host-key.tar.gz .
+ tar zxvf ~/geo-host-key.tar.gz -C /etc/ssh
+ ```
1. On your **secondary** node, ensure the file permissions are correct:
- ```sh
- chown root:root /etc/ssh/ssh_host_*_key*
- chmod 0600 /etc/ssh/ssh_host_*_key*
- ```
+ ```sh
+ chown root:root /etc/ssh/ssh_host_*_key*
+ chmod 0600 /etc/ssh/ssh_host_*_key*
+ ```
1. To verify key fingerprint matches, execute the following command on both nodes:
- ```sh
- for file in /etc/ssh/ssh_host_*_key; do ssh-keygen -lf $file; done
- ```
+ ```sh
+ for file in /etc/ssh/ssh_host_*_key; do ssh-keygen -lf $file; done
+ ```
- You should get an output similar to this one and they should be identical on both nodes:
+ You should get an output similar to this one and they should be identical on both nodes:
- ```sh
- 1024 SHA256:FEZX2jQa2bcsd/fn/uxBzxhKdx4Imc4raXrHwsbtP0M root@serverhostname (DSA)
- 256 SHA256:uw98R35Uf+fYEQ/UnJD9Br4NXUFPv7JAUln5uHlgSeY root@serverhostname (ECDSA)
- 256 SHA256:sqOUWcraZQKd89y/QQv/iynPTOGQxcOTIXU/LsoPmnM root@serverhostname (ED25519)
- 2048 SHA256:qwa+rgir2Oy86QI+PZi/QVR+MSmrdrpsuH7YyKknC+s root@serverhostname (RSA)
- ```
+ ```sh
+ 1024 SHA256:FEZX2jQa2bcsd/fn/uxBzxhKdx4Imc4raXrHwsbtP0M root@serverhostname (DSA)
+ 256 SHA256:uw98R35Uf+fYEQ/UnJD9Br4NXUFPv7JAUln5uHlgSeY root@serverhostname (ECDSA)
+ 256 SHA256:sqOUWcraZQKd89y/QQv/iynPTOGQxcOTIXU/LsoPmnM root@serverhostname (ED25519)
+ 2048 SHA256:qwa+rgir2Oy86QI+PZi/QVR+MSmrdrpsuH7YyKknC+s root@serverhostname (RSA)
+ ```
1. Verify that you have the correct public keys for the existing private keys:
- ```sh
- # This will print the fingerprint for private keys:
- for file in /etc/ssh/ssh_host_*_key; do ssh-keygen -lf $file; done
+ ```sh
+ # This will print the fingerprint for private keys:
+ for file in /etc/ssh/ssh_host_*_key; do ssh-keygen -lf $file; done
- # This will print the fingerprint for public keys:
- for file in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf $file; done
- ```
+ # This will print the fingerprint for public keys:
+ for file in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf $file; done
+ ```
- NOTE: **Note**:
- The output for private keys and public keys command should generate the same fingerprint.
+ NOTE: **Note:**
+ The output of the private key and public key commands should show the same fingerprints.
1. Restart sshd on your **secondary** node:
- ```sh
- # Debian or Ubuntu installations
- sudo service ssh reload
+ ```sh
+ # Debian or Ubuntu installations
+ sudo service ssh reload
- # CentOS installations
- sudo service sshd reload
- ```
+ # CentOS installations
+ sudo service sshd reload
+ ```
### Step 3. Add the **secondary** node
@@ -176,22 +175,22 @@ keys must be manually replicated to the **secondary** node.
1. Click the **Add node** button.
1. SSH into your GitLab **secondary** server and restart the services:
- ```sh
- gitlab-ctl restart
- ```
+ ```sh
+ gitlab-ctl restart
+ ```
- Check if there are any common issue with your Geo setup by running:
+ Check if there are any common issues with your Geo setup by running:
- ```sh
- gitlab-rake gitlab:geo:check
- ```
+ ```sh
+ gitlab-rake gitlab:geo:check
+ ```
1. SSH into your **primary** server and login as root to verify the
**secondary** node is reachable or there are any common issues with your Geo setup:
- ```sh
- gitlab-rake gitlab:geo:check
- ```
+ ```sh
+ gitlab-rake gitlab:geo:check
+ ```
Once added to the admin panel and restarted, the **secondary** node will automatically start
replicating missing data from the **primary** node in a process known as **backfill**.
@@ -250,9 +249,8 @@ The two most obvious issues that can become apparent in the dashboard are:
1. Database replication not working well.
1. Instance to instance notification not working. In that case, it can be
something of the following:
- - You are using a custom certificate or custom CA (see the
- [troubleshooting document]).
- - The instance is firewalled (check your firewall rules).
+ - You are using a custom certificate or custom CA (see the [troubleshooting document](troubleshooting.md)).
+ - The instance is firewalled (check your firewall rules).
Please note that disabling a **secondary** node will stop the synchronization process.
@@ -304,5 +302,4 @@ See the [troubleshooting document](troubleshooting.md).
[gitlab-org/gitlab-ee#3789]: https://gitlab.com/gitlab-org/gitlab-ee/issues/3789
[gitlab-com/infrastructure#2821]: https://gitlab.com/gitlab-com/infrastructure/issues/2821
[omnibus-ssl]: https://docs.gitlab.com/omnibus/settings/ssl.html
-[troubleshooting document]: troubleshooting.md
[using-geo]: using_a_geo_server.md
diff --git a/doc/administration/geo/replication/database.md b/doc/administration/geo/replication/database.md
index 6658c37752a..021ed2d9f3c 100644
--- a/doc/administration/geo/replication/database.md
+++ b/doc/administration/geo/replication/database.md
@@ -52,186 +52,188 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
1. SSH into your GitLab **primary** server and login as root:
- ```sh
- sudo -i
- ```
+ ```sh
+ sudo -i
+ ```
1. Execute the command below to define the node as **primary** node:
- ```sh
- gitlab-ctl set-geo-primary-node
- ```
+ ```sh
+ gitlab-ctl set-geo-primary-node
+ ```
- This command will use your defined `external_url` in `/etc/gitlab/gitlab.rb`.
+ This command will use your defined `external_url` in `/etc/gitlab/gitlab.rb`.
1. GitLab 10.4 and up only: Do the following to make sure the `gitlab` database user has a password defined:
- Generate a MD5 hash of the desired password:
+ Generate an MD5 hash of the desired password:
- ```sh
- gitlab-ctl pg-password-md5 gitlab
- # Enter password: <your_password_here>
- # Confirm password: <your_password_here>
- # fca0b89a972d69f00eb3ec98a5838484
- ```
+ ```sh
+ gitlab-ctl pg-password-md5 gitlab
+ # Enter password: <your_password_here>
+ # Confirm password: <your_password_here>
+ # fca0b89a972d69f00eb3ec98a5838484
+ ```
- Edit `/etc/gitlab/gitlab.rb`:
+ Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
- postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
+ ```ruby
+ # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
+ postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
- # Every node that runs Unicorn or Sidekiq needs to have the database
- # password specified as below. If you have a high-availability setup, this
- # must be present in all application nodes.
- gitlab_rails['db_password'] = '<your_password_here>'
- ```
+ # Every node that runs Unicorn or Sidekiq needs to have the database
+ # password specified as below. If you have a high-availability setup, this
+ # must be present in all application nodes.
+ gitlab_rails['db_password'] = '<your_password_here>'
+ ```
1. Omnibus GitLab already has a [replication user]
called `gitlab_replicator`. You must set the password for this user manually.
You will be prompted to enter a password:
- ```sh
- gitlab-ctl set-replication-password
- ```
+ ```sh
+ gitlab-ctl set-replication-password
+ ```
- This command will also read the `postgresql['sql_replication_user']` Omnibus
- setting in case you have changed `gitlab_replicator` username to something
- else.
+ This command will also read the `postgresql['sql_replication_user']` Omnibus
+ setting in case you have changed `gitlab_replicator` username to something
+ else.
- If you are using an external database not managed by Omnibus GitLab, you need
- to create the replicator user and define a password to it manually:
+ If you are using an external database not managed by Omnibus GitLab, you need
+ to create the replicator user and define a password for it manually:
- ```sql
- --- Create a new user 'replicator'
- CREATE USER gitlab_replicator;
+ ```sql
+ --- Create a new user 'replicator'
+ CREATE USER gitlab_replicator;
- --- Set/change a password and grants replication privilege
- ALTER USER gitlab_replicator WITH REPLICATION ENCRYPTED PASSWORD '<replication_password>';
- ```
+ --- Set/change a password and grants replication privilege
+ ALTER USER gitlab_replicator WITH REPLICATION ENCRYPTED PASSWORD '<replication_password>';
+ ```
1. Configure PostgreSQL to listen on network interfaces:
- For security reasons, PostgreSQL does not listen on any network interfaces
- by default. However, Geo requires the **secondary** node to be able to
- connect to the **primary** node's database. For this reason, we need the address of
- each node. Note: For external PostgreSQL instances, see [additional instructions](external_database.md).
-
- If you are using a cloud provider, you can lookup the addresses for each
- Geo node through your cloud provider's management console.
-
- To lookup the address of a Geo node, SSH in to the Geo node and execute:
-
- ```sh
- ##
- ## Private address
- ##
- ip route get 255.255.255.255 | awk '{print "Private address:", $NF; exit}'
-
- ##
- ## Public address
- ##
- echo "External address: $(curl --silent ipinfo.io/ip)"
- ```
-
- In most cases, the following addresses will be used to configure GitLab
- Geo:
-
- | Configuration | Address |
- |:----------------------------------------|:------------------------------------------------------|
- | `postgresql['listen_address']` | **Primary** node's public or VPC private address. |
- | `postgresql['md5_auth_cidr_addresses']` | **Secondary** node's public or VPC private addresses. |
-
- If you are using Google Cloud Platform, SoftLayer, or any other vendor that
- provides a virtual private cloud (VPC) you can use the **secondary** node's private
- address (corresponds to "internal address" for Google Cloud Platform) for
- `postgresql['md5_auth_cidr_addresses']` and `postgresql['listen_address']`.
-
- The `listen_address` option opens PostgreSQL up to network connections
- with the interface corresponding to the given address. See [the PostgreSQL
- documentation][pg-docs-runtime-conn] for more details.
-
- Depending on your network configuration, the suggested addresses may not
- be correct. If your **primary** node and **secondary** nodes connect over a local
- area network, or a virtual network connecting availability zones like
- [Amazon's VPC](https://aws.amazon.com/vpc/) or [Google's VPC](https://cloud.google.com/vpc/)
- you should use the **secondary** node's private address for `postgresql['md5_auth_cidr_addresses']`.
-
- Edit `/etc/gitlab/gitlab.rb` and add the following, replacing the IP
- addresses with addresses appropriate to your network configuration:
-
- ```ruby
- ##
- ## Geo Primary role
- ## - configure dependent flags automatically to enable Geo
- ##
- roles ['geo_primary_role']
-
- ##
- ## Primary address
- ## - replace '<primary_node_ip>' with the public or VPC address of your Geo primary node
- ##
- postgresql['listen_address'] = '<primary_node_ip>'
-
- ##
- # Allow PostgreSQL client authentication from the primary and secondary IPs. These IPs may be
- # public or VPC addresses in CIDR format, for example ['198.51.100.1/32', '198.51.100.2/32']
- ##
- postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32']
-
- ##
- ## Replication settings
- ## - set this to be the number of Geo secondary nodes you have
- ##
- postgresql['max_replication_slots'] = 1
- # postgresql['max_wal_senders'] = 10
- # postgresql['wal_keep_segments'] = 10
-
- ##
- ## Disable automatic database migrations temporarily
- ## (until PostgreSQL is restarted and listening on the private address).
- ##
- gitlab_rails['auto_migrate'] = false
- ```
+ For security reasons, PostgreSQL does not listen on any network interfaces
+ by default. However, Geo requires the **secondary** node to be able to
+ connect to the **primary** node's database. For this reason, we need the address of
+ each node.
+
+ NOTE: **Note:** For external PostgreSQL instances, see [additional instructions](external_database.md).
+
+ If you are using a cloud provider, you can lookup the addresses for each
+ Geo node through your cloud provider's management console.
+
+ To lookup the address of a Geo node, SSH in to the Geo node and execute:
+
+ ```sh
+ ##
+ ## Private address
+ ##
+ ip route get 255.255.255.255 | awk '{print "Private address:", $NF; exit}'
+
+ ##
+ ## Public address
+ ##
+ echo "External address: $(curl --silent ipinfo.io/ip)"
+ ```
+
+ In most cases, the following addresses will be used to configure GitLab
+ Geo:
+
+ | Configuration | Address |
+ |:----------------------------------------|:------------------------------------------------------|
+ | `postgresql['listen_address']` | **Primary** node's public or VPC private address. |
+ | `postgresql['md5_auth_cidr_addresses']` | **Secondary** node's public or VPC private addresses. |
+
+ If you are using Google Cloud Platform, SoftLayer, or any other vendor that
+ provides a virtual private cloud (VPC) you can use the **secondary** node's private
+ address (corresponds to "internal address" for Google Cloud Platform) for
+ `postgresql['md5_auth_cidr_addresses']` and `postgresql['listen_address']`.
+
+ The `listen_address` option opens PostgreSQL up to network connections
+ with the interface corresponding to the given address. See [the PostgreSQL
+ documentation][pg-docs-runtime-conn] for more details.
+
+ Depending on your network configuration, the suggested addresses may not
+ be correct. If your **primary** node and **secondary** nodes connect over a local
+ area network, or a virtual network connecting availability zones like
+ [Amazon's VPC](https://aws.amazon.com/vpc/) or [Google's VPC](https://cloud.google.com/vpc/)
+ you should use the **secondary** node's private address for `postgresql['md5_auth_cidr_addresses']`.
+
+ Edit `/etc/gitlab/gitlab.rb` and add the following, replacing the IP
+ addresses with addresses appropriate to your network configuration:
+
+ ```ruby
+ ##
+ ## Geo Primary role
+ ## - configure dependent flags automatically to enable Geo
+ ##
+ roles ['geo_primary_role']
+
+ ##
+ ## Primary address
+ ## - replace '<primary_node_ip>' with the public or VPC address of your Geo primary node
+ ##
+ postgresql['listen_address'] = '<primary_node_ip>'
+
+ ##
+ # Allow PostgreSQL client authentication from the primary and secondary IPs. These IPs may be
+ # public or VPC addresses in CIDR format, for example ['198.51.100.1/32', '198.51.100.2/32']
+ ##
+ postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32']
+
+ ##
+ ## Replication settings
+ ## - set this to be the number of Geo secondary nodes you have
+ ##
+ postgresql['max_replication_slots'] = 1
+ # postgresql['max_wal_senders'] = 10
+ # postgresql['wal_keep_segments'] = 10
+
+ ##
+ ## Disable automatic database migrations temporarily
+ ## (until PostgreSQL is restarted and listening on the private address).
+ ##
+ gitlab_rails['auto_migrate'] = false
+ ```
1. Optional: If you want to add another **secondary** node, the relevant setting would look like:
- ```ruby
- postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32', '<another_secondary_node_ip>/32']
- ```
+ ```ruby
+ postgresql['md5_auth_cidr_addresses'] = ['<primary_node_ip>/32', '<secondary_node_ip>/32', '<another_secondary_node_ip>/32']
+ ```
- You may also want to edit the `wal_keep_segments` and `max_wal_senders` to
- match your database replication requirements. Consult the [PostgreSQL -
- Replication documentation][pg-docs-runtime-replication]
- for more information.
+ You may also want to edit the `wal_keep_segments` and `max_wal_senders` to
+ match your database replication requirements. Consult the [PostgreSQL -
+ Replication documentation][pg-docs-runtime-replication]
+ for more information.
1. Save the file and reconfigure GitLab for the database listen changes and
the replication slot changes to be applied:
- ```sh
- gitlab-ctl reconfigure
- ```
+ ```sh
+ gitlab-ctl reconfigure
+ ```
- Restart PostgreSQL for its changes to take effect:
+ Restart PostgreSQL for its changes to take effect:
- ```sh
- gitlab-ctl restart postgresql
- ```
+ ```sh
+ gitlab-ctl restart postgresql
+ ```
1. Re-enable migrations now that PostgreSQL is restarted and listening on the
private address.
- Edit `/etc/gitlab/gitlab.rb` and **change** the configuration to `true`:
+ Edit `/etc/gitlab/gitlab.rb` and **change** the configuration to `true`:
- ```ruby
- gitlab_rails['auto_migrate'] = true
- ```
+ ```ruby
+ gitlab_rails['auto_migrate'] = true
+ ```
- Save the file and reconfigure GitLab:
+ Save the file and reconfigure GitLab:
- ```sh
- gitlab-ctl reconfigure
- ```
+ ```sh
+ gitlab-ctl reconfigure
+ ```
1. Now that the PostgreSQL server is set up to accept remote connections, run
`netstat -plnt | grep 5432` to make sure that PostgreSQL is listening on port
@@ -241,143 +243,143 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
will be used automatically to protect your PostgreSQL traffic from
eavesdroppers, but to protect against active ("man-in-the-middle") attackers,
the **secondary** node needs a copy of the certificate. Make a copy of the PostgreSQL
- `server.crt` file on the **primary** node by running this command:
+ `server.crt` file on the **primary** node by running this command:
- ```sh
- cat ~gitlab-psql/data/server.crt
- ```
+ ```sh
+ cat ~gitlab-psql/data/server.crt
+ ```
- Copy the output into a clipboard or into a local file. You
- will need it when setting up the **secondary** node! The certificate is not sensitive
- data.
+ Copy the output into a clipboard or into a local file. You
+ will need it when setting up the **secondary** node! The certificate is not sensitive
+ data.
### Step 2. Configure the **secondary** server
1. SSH into your GitLab **secondary** server and login as root:
- ```
- sudo -i
- ```
+ ```
+ sudo -i
+ ```
1. Stop application server and Sidekiq
- ```
- gitlab-ctl stop unicorn
- gitlab-ctl stop sidekiq
- ```
+ ```
+ gitlab-ctl stop unicorn
+ gitlab-ctl stop sidekiq
+ ```
- NOTE: **Note**:
- This step is important so we don't try to execute anything before the node is fully configured.
+ NOTE: **Note:**
+ This step is important so we don't try to execute anything before the node is fully configured.
1. [Check TCP connectivity][rake-maintenance] to the **primary** node's PostgreSQL server:
- ```sh
- gitlab-rake gitlab:tcp_check[<primary_node_ip>,5432]
- ```
+ ```sh
+ gitlab-rake gitlab:tcp_check[<primary_node_ip>,5432]
+ ```
- NOTE: **Note**:
- If this step fails, you may be using the wrong IP address, or a firewall may
- be preventing access to the server. Check the IP address, paying close
- attention to the difference between public and private addresses and ensure
- that, if a firewall is present, the **secondary** node is permitted to connect to the
- **primary** node on port 5432.
+ NOTE: **Note:**
+ If this step fails, you may be using the wrong IP address, or a firewall may
+ be preventing access to the server. Check the IP address, paying close
+ attention to the difference between public and private addresses and ensure
+ that, if a firewall is present, the **secondary** node is permitted to connect to the
+ **primary** node on port 5432.
1. Create a file `server.crt` in the **secondary** server, with the content you got on the last step of the **primary** node's setup:
- ```
- editor server.crt
- ```
+ ```
+ editor server.crt
+ ```
1. Set up PostgreSQL TLS verification on the **secondary** node:
- Install the `server.crt` file:
+ Install the `server.crt` file:
- ```sh
- install \
- -D \
- -o gitlab-psql \
- -g gitlab-psql \
- -m 0400 \
- -T server.crt ~gitlab-psql/.postgresql/root.crt
- ```
+ ```sh
+ install \
+ -D \
+ -o gitlab-psql \
+ -g gitlab-psql \
+ -m 0400 \
+ -T server.crt ~gitlab-psql/.postgresql/root.crt
+ ```
- PostgreSQL will now only recognize that exact certificate when verifying TLS
- connections. The certificate can only be replicated by someone with access
- to the private key, which is **only** present on the **primary** node.
+ PostgreSQL will now only recognize that exact certificate when verifying TLS
+ connections. The certificate can only be replicated by someone with access
+ to the private key, which is **only** present on the **primary** node.
1. Test that the `gitlab-psql` user can connect to the **primary** node's database
(the default Omnibus database name is gitlabhq_production):
- ```sh
- sudo \
- -u gitlab-psql /opt/gitlab/embedded/bin/psql \
- --list \
- -U gitlab_replicator \
- -d "dbname=gitlabhq_production sslmode=verify-ca" \
- -W \
- -h <primary_node_ip>
- ```
+ ```sh
+ sudo \
+ -u gitlab-psql /opt/gitlab/embedded/bin/psql \
+ --list \
+ -U gitlab_replicator \
+ -d "dbname=gitlabhq_production sslmode=verify-ca" \
+ -W \
+ -h <primary_node_ip>
+ ```
- When prompted enter the password you set in the first step for the
- `gitlab_replicator` user. If all worked correctly, you should see
- the list of **primary** node's databases.
+ When prompted, enter the password you set in the first step for the
+ `gitlab_replicator` user. If all worked correctly, you should see
+ the list of **primary** node's databases.
- A failure to connect here indicates that the TLS configuration is incorrect.
- Ensure that the contents of `~gitlab-psql/data/server.crt` on the **primary** node
- match the contents of `~gitlab-psql/.postgresql/root.crt` on the **secondary** node.
+ A failure to connect here indicates that the TLS configuration is incorrect.
+ Ensure that the contents of `~gitlab-psql/data/server.crt` on the **primary** node
+ match the contents of `~gitlab-psql/.postgresql/root.crt` on the **secondary** node.
1. Configure PostgreSQL to enable FDW support:
- This step is similar to how we configured the **primary** instance.
- We need to enable this, to enable FDW support, even if using a single node.
-
- Edit `/etc/gitlab/gitlab.rb` and add the following, replacing the IP
- addresses with addresses appropriate to your network configuration:
-
- ```ruby
- ##
- ## Geo Secondary role
- ## - configure dependent flags automatically to enable Geo
- ##
- roles ['geo_secondary_role']
-
- ##
- ## Secondary address
- ## - replace '<secondary_node_ip>' with the public or VPC address of your Geo secondary node
- ##
- postgresql['listen_address'] = '<secondary_node_ip>'
- postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32']
-
- ##
- ## Database credentials password (defined previously in primary node)
- ## - replicate same values here as defined in primary node
- ##
- postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
- gitlab_rails['db_password'] = '<your_password_here>'
-
- ##
- ## Enable FDW support for the Geo Tracking Database (improves performance)
- ##
- geo_secondary['db_fdw'] = true
- ```
-
- For external PostgreSQL instances, see [additional instructions](external_database.md).
- If you bring a former **primary** node back online to serve as a **secondary** node, then you also need to remove `roles ['geo_primary_role']` or `geo_primary_role['enable'] = true`.
+ This step is similar to how we configured the **primary** instance.
+ We need to enable FDW support here as well, even if using a single node.
+
+ Edit `/etc/gitlab/gitlab.rb` and add the following, replacing the IP
+ addresses with addresses appropriate to your network configuration:
+
+ ```ruby
+ ##
+ ## Geo Secondary role
+ ## - configure dependent flags automatically to enable Geo
+ ##
+ roles ['geo_secondary_role']
+
+ ##
+ ## Secondary address
+ ## - replace '<secondary_node_ip>' with the public or VPC address of your Geo secondary node
+ ##
+ postgresql['listen_address'] = '<secondary_node_ip>'
+ postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32']
+
+ ##
+ ## Database credentials password (defined previously in primary node)
+ ## - replicate same values here as defined in primary node
+ ##
+ postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
+ gitlab_rails['db_password'] = '<your_password_here>'
+
+ ##
+ ## Enable FDW support for the Geo Tracking Database (improves performance)
+ ##
+ geo_secondary['db_fdw'] = true
+ ```
+
+ For external PostgreSQL instances, see [additional instructions](external_database.md).
+ If you bring a former **primary** node back online to serve as a **secondary** node, then you also need to remove `roles ['geo_primary_role']` or `geo_primary_role['enable'] = true`.
1. Reconfigure GitLab for the changes to take effect:
- ```sh
- gitlab-ctl reconfigure
- ```
+ ```sh
+ gitlab-ctl reconfigure
+ ```
1. Restart PostgreSQL for the IP change to take effect and reconfigure again:
- ```sh
- gitlab-ctl restart postgresql
- gitlab-ctl reconfigure
- ```
+ ```sh
+ gitlab-ctl restart postgresql
+ gitlab-ctl reconfigure
+ ```
- This last reconfigure will provision the FDW configuration and enable it.
+ This last reconfigure will provision the FDW configuration and enable it.
### Step 3. Initiate the replication process
@@ -394,9 +396,9 @@ data before running `pg_basebackup`.
1. SSH into your GitLab **secondary** server and login as root:
- ```sh
- sudo -i
- ```
+ ```sh
+ sudo -i
+ ```
1. Choose a database-friendly name to use for your **secondary** node to
use as the replication slot name. For example, if your domain is
@@ -404,38 +406,40 @@ data before running `pg_basebackup`.
name as shown in the commands below.
1. Execute the command below to start a backup/restore and begin the replication
- CAUTION: **Warning:** Each Geo **secondary** node must have its own unique replication slot name.
- Using the same slot name between two secondaries will break PostgreSQL replication.
-
- ```sh
- gitlab-ctl replicate-geo-database \
- --slot-name=<secondary_node_name> \
- --host=<primary_node_ip>
- ```
-
- When prompted, enter the _plaintext_ password you set up for the `gitlab_replicator`
- user in the first step.
-
- This command also takes a number of additional options. You can use `--help`
- to list them all, but here are a couple of tips:
- - If PostgreSQL is listening on a non-standard port, add `--port=` as well.
- - If your database is too large to be transferred in 30 minutes, you will need
- to increase the timeout, e.g., `--backup-timeout=3600` if you expect the
- initial replication to take under an hour.
- - Pass `--sslmode=disable` to skip PostgreSQL TLS authentication altogether
- (e.g., you know the network path is secure, or you are using a site-to-site
- VPN). This is **not** safe over the public Internet!
- - You can read more details about each `sslmode` in the
- [PostgreSQL documentation][pg-docs-ssl];
- the instructions above are carefully written to ensure protection against
- both passive eavesdroppers and active "man-in-the-middle" attackers.
- - Change the `--slot-name` to the name of the replication slot
- to be used on the **primary** database. The script will attempt to create the
- replication slot automatically if it does not exist.
- - If you're repurposing an old server into a Geo **secondary** node, you'll need to
- add `--force` to the command line.
- - When not in a production machine you can disable backup step if you
- really sure this is what you want by adding `--skip-backup`
+
+ CAUTION: **Warning:** Each Geo **secondary** node must have its own unique replication slot name.
+ Using the same slot name between two secondaries will break PostgreSQL replication.
+
+ ```sh
+ gitlab-ctl replicate-geo-database \
+ --slot-name=<secondary_node_name> \
+ --host=<primary_node_ip>
+ ```
+
+ When prompted, enter the _plaintext_ password you set up for the `gitlab_replicator`
+ user in the first step.
+
+ This command also takes a number of additional options. You can use `--help`
+ to list them all, but here are a couple of tips:
+
+ - If PostgreSQL is listening on a non-standard port, add `--port=` as well.
+ - If your database is too large to be transferred in 30 minutes, you will need
+ to increase the timeout, e.g., `--backup-timeout=3600` if you expect the
+ initial replication to take under an hour.
+ - Pass `--sslmode=disable` to skip PostgreSQL TLS authentication altogether
+ (e.g., you know the network path is secure, or you are using a site-to-site
+ VPN). This is **not** safe over the public Internet!
+ - You can read more details about each `sslmode` in the
+ [PostgreSQL documentation][pg-docs-ssl];
+ the instructions above are carefully written to ensure protection against
+ both passive eavesdroppers and active "man-in-the-middle" attackers.
+ - Change the `--slot-name` to the name of the replication slot
+ to be used on the **primary** database. The script will attempt to create the
+ replication slot automatically if it does not exist.
+ - If you're repurposing an old server into a Geo **secondary** node, you'll need to
+ add `--force` to the command line.
+ - When not on a production machine, you can disable the backup step if you
+ are really sure this is what you want by adding `--skip-backup`.
The replication process is now complete.
@@ -452,42 +456,42 @@ it will need a separate read-only user to make [PostgreSQL FDW queries][FDW]
work:
1. On the **primary** Geo database, enter the PostgreSQL on the console as an
- admin user. If you are using an Omnibus-managed database, log onto the **primary**
- node that is running the PostgreSQL database (the default Omnibus database name is gitlabhq_production):
+ admin user. If you are using an Omnibus-managed database, log onto the **primary**
+ node that is running the PostgreSQL database (the default Omnibus database name is gitlabhq_production):
- ```sh
- sudo \
- -u gitlab-psql /opt/gitlab/embedded/bin/psql \
- -h /var/opt/gitlab/postgresql gitlabhq_production
- ```
+ ```sh
+ sudo \
+ -u gitlab-psql /opt/gitlab/embedded/bin/psql \
+ -h /var/opt/gitlab/postgresql gitlabhq_production
+ ```
1. Then create the read-only user:
- ```sql
- -- NOTE: Use the password defined earlier
- CREATE USER gitlab_geo_fdw WITH password 'mypassword';
- GRANT CONNECT ON DATABASE gitlabhq_production to gitlab_geo_fdw;
- GRANT USAGE ON SCHEMA public TO gitlab_geo_fdw;
- GRANT SELECT ON ALL TABLES IN SCHEMA public TO gitlab_geo_fdw;
- GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO gitlab_geo_fdw;
+ ```sql
+ -- NOTE: Use the password defined earlier
+ CREATE USER gitlab_geo_fdw WITH password 'mypassword';
+ GRANT CONNECT ON DATABASE gitlabhq_production to gitlab_geo_fdw;
+ GRANT USAGE ON SCHEMA public TO gitlab_geo_fdw;
+ GRANT SELECT ON ALL TABLES IN SCHEMA public TO gitlab_geo_fdw;
+ GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO gitlab_geo_fdw;
- -- Tables created by "gitlab" should be made read-only for "gitlab_geo_fdw"
- -- automatically.
- ALTER DEFAULT PRIVILEGES FOR USER gitlab IN SCHEMA public GRANT SELECT ON TABLES TO gitlab_geo_fdw;
- ALTER DEFAULT PRIVILEGES FOR USER gitlab IN SCHEMA public GRANT SELECT ON SEQUENCES TO gitlab_geo_fdw;
- ```
+ -- Tables created by "gitlab" should be made read-only for "gitlab_geo_fdw"
+ -- automatically.
+ ALTER DEFAULT PRIVILEGES FOR USER gitlab IN SCHEMA public GRANT SELECT ON TABLES TO gitlab_geo_fdw;
+ ALTER DEFAULT PRIVILEGES FOR USER gitlab IN SCHEMA public GRANT SELECT ON SEQUENCES TO gitlab_geo_fdw;
+ ```
1. On the **secondary** nodes, change `/etc/gitlab/gitlab.rb`:
- ```
- geo_postgresql['fdw_external_user'] = 'gitlab_geo_fdw'
- ```
+ ```
+ geo_postgresql['fdw_external_user'] = 'gitlab_geo_fdw'
+ ```
1. Save the file and reconfigure GitLab for the changes to be applied:
- ```sh
- gitlab-ctl reconfigure
- ```
+ ```sh
+ gitlab-ctl reconfigure
+ ```
## Troubleshooting
diff --git a/doc/administration/geo/replication/external_database.md b/doc/administration/geo/replication/external_database.md
index 177ca68613e..452e4f490a6 100644
--- a/doc/administration/geo/replication/external_database.md
+++ b/doc/administration/geo/replication/external_database.md
@@ -4,7 +4,7 @@ This document is relevant if you are using a PostgreSQL instance that is *not
managed by Omnibus*. This includes cloud-managed instances like AWS RDS, or
manually installed and configured PostgreSQL instances.
-NOTE: **Note**:
+NOTE: **Note:**
We strongly recommend running Omnibus-managed instances as they are actively
developed and tested. We aim to be compatible with most external
(not managed by Omnibus) databases but we do not guarantee compatibility.
@@ -13,17 +13,17 @@ developed and tested. We aim to be compatible with most external
1. SSH into a GitLab **primary** application server and login as root:
- ```sh
- sudo -i
- ```
+ ```sh
+ sudo -i
+ ```
1. Execute the command below to define the node as **primary** node:
- ```sh
- gitlab-ctl set-geo-primary-node
- ```
+ ```sh
+ gitlab-ctl set-geo-primary-node
+ ```
- This command will use your defined `external_url` in `/etc/gitlab/gitlab.rb`.
+ This command will use your defined `external_url` in `/etc/gitlab/gitlab.rb`.
### Configure the external database to be replicated
@@ -101,26 +101,27 @@ To configure the connection to the external read-replica database and enable Log
1. SSH into a GitLab **secondary** application server and login as root:
- ```bash
- sudo -i
- ```
+ ```bash
+ sudo -i
+ ```
1. Edit `/etc/gitlab/gitlab.rb` and add the following
- ```ruby
- ##
- ## Geo Secondary role
- ## - configure dependent flags automatically to enable Geo
- ##
- roles ['geo_secondary_role']
+ ```ruby
+ ##
+ ## Geo Secondary role
+ ## - configure dependent flags automatically to enable Geo
+ ##
+ roles ['geo_secondary_role']
+
+ # note this is shared between both databases,
+ # make sure you define the same password in both
+ gitlab_rails['db_password'] = '<your_password_here>'
- # note this is shared between both databases,
- # make sure you define the same password in both
- gitlab_rails['db_password'] = '<your_password_here>'
+ gitlab_rails['db_username'] = 'gitlab'
+ gitlab_rails['db_host'] = '<database_read_replica_host>'
+ ```
- gitlab_rails['db_username'] = 'gitlab'
- gitlab_rails['db_host'] = '<database_read_replica_host>'
- ```
1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure)
### Configure the tracking database
@@ -147,73 +148,72 @@ the tracking database on port 5432.
1. SSH into a GitLab **secondary** server and login as root:
- ```bash
- sudo -i
- ```
+ ```bash
+ sudo -i
+ ```
1. Edit `/etc/gitlab/gitlab.rb` with the connection params and credentials for
- the machine with the PostgreSQL instance:
+ the machine with the PostgreSQL instance:
- ```ruby
- geo_secondary['db_username'] = 'gitlab_geo'
- geo_secondary['db_password'] = '<your_password_here>'
+ ```ruby
+ geo_secondary['db_username'] = 'gitlab_geo'
+ geo_secondary['db_password'] = '<your_password_here>'
- geo_secondary['db_host'] = '<tracking_database_host>'
- geo_secondary['db_port'] = <tracking_database_port> # change to the correct port
- geo_secondary['db_fdw'] = true # enable FDW
- geo_postgresql['enable'] = false # don't use internal managed instance
- ```
+ geo_secondary['db_host'] = '<tracking_database_host>'
+ geo_secondary['db_port'] = <tracking_database_port> # change to the correct port
+ geo_secondary['db_fdw'] = true # enable FDW
+ geo_postgresql['enable'] = false # don't use internal managed instance
+ ```
1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure)
1. Run the tracking database migrations:
- ```bash
- gitlab-rake geo:db:create
- gitlab-rake geo:db:migrate
- ```
-
-1. Configure the
- [PostgreSQL FDW](https://www.postgresql.org/docs/9.6/static/postgres-fdw.html)
- connection and credentials:
-
- Save the script below in a file, ex. `/tmp/geo_fdw.sh` and modify the connection
- params to match your environment. Execute it to set up the FDW connection.
-
- ```bash
- #!/bin/bash
-
- # Secondary Database connection params:
- DB_HOST="<public_ip_or_vpc_private_ip>"
- DB_NAME="gitlabhq_production"
- DB_USER="gitlab"
- DB_PASS="<your_password_here>"
- DB_PORT="5432"
-
- # Tracking Database connection params:
- GEO_DB_HOST="<public_ip_or_vpc_private_ip>"
- GEO_DB_NAME="gitlabhq_geo_production"
- GEO_DB_USER="gitlab_geo"
- GEO_DB_PORT="5432"
-
- query_exec () {
- gitlab-psql -h $GEO_DB_HOST -d $GEO_DB_NAME -p $GEO_DB_PORT -c "${1}"
- }
-
- query_exec "CREATE EXTENSION postgres_fdw;"
- query_exec "CREATE SERVER gitlab_secondary FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '${DB_HOST}', dbname '${DB_NAME}', port '${DB_PORT}');"
- query_exec "CREATE USER MAPPING FOR ${GEO_DB_USER} SERVER gitlab_secondary OPTIONS (user '${DB_USER}', password '${DB_PASS}');"
- query_exec "CREATE SCHEMA gitlab_secondary;"
- query_exec "GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO ${GEO_DB_USER};"
- ```
-
- NOTE: **Note:** The script template above uses `gitlab-psql` as it's intended to be executed from the Geo machine,
- but you can change it to `psql` and run it from any machine that has access to the database. We also recommend using
- `psql` for AWS RDS.
+ ```bash
+ gitlab-rake geo:db:create
+ gitlab-rake geo:db:migrate
+ ```
+
+1. Configure the [PostgreSQL FDW](https://www.postgresql.org/docs/9.6/static/postgres-fdw.html)
+ connection and credentials:
+
+ Save the script below in a file, for example `/tmp/geo_fdw.sh`, and modify the connection
+ params to match your environment. Execute it to set up the FDW connection.
+
+ ```bash
+ #!/bin/bash
+
+ # Secondary Database connection params:
+ DB_HOST="<public_ip_or_vpc_private_ip>"
+ DB_NAME="gitlabhq_production"
+ DB_USER="gitlab"
+ DB_PASS="<your_password_here>"
+ DB_PORT="5432"
+
+ # Tracking Database connection params:
+ GEO_DB_HOST="<public_ip_or_vpc_private_ip>"
+ GEO_DB_NAME="gitlabhq_geo_production"
+ GEO_DB_USER="gitlab_geo"
+ GEO_DB_PORT="5432"
+
+ query_exec () {
+ gitlab-psql -h $GEO_DB_HOST -d $GEO_DB_NAME -p $GEO_DB_PORT -c "${1}"
+ }
+
+ query_exec "CREATE EXTENSION postgres_fdw;"
+ query_exec "CREATE SERVER gitlab_secondary FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '${DB_HOST}', dbname '${DB_NAME}', port '${DB_PORT}');"
+ query_exec "CREATE USER MAPPING FOR ${GEO_DB_USER} SERVER gitlab_secondary OPTIONS (user '${DB_USER}', password '${DB_PASS}');"
+ query_exec "CREATE SCHEMA gitlab_secondary;"
+ query_exec "GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO ${GEO_DB_USER};"
+ ```
+
+ NOTE: **Note:** The script template above uses `gitlab-psql` as it's intended to be executed from the Geo machine,
+ but you can change it to `psql` and run it from any machine that has access to the database. We also recommend using
+ `psql` for AWS RDS.
1. Save the file and [restart GitLab](../../restart_gitlab.md#omnibus-gitlab-restart)
1. Populate the FDW tables:
- ```bash
- gitlab-rake geo:db:refresh_foreign_tables
- ```
+ ```bash
+ gitlab-rake geo:db:refresh_foreign_tables
+ ```
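
As an optional follow-up, you can confirm that the foreign tables were imported into the tracking database. This is a sketch that reuses the placeholder connection details from the script above; substitute your real host and port:

```sh
# A healthy setup returns a non-zero count of foreign tables.
gitlab-psql -h <tracking_database_host> -d gitlabhq_geo_production -p <tracking_database_port> -c \
  "SELECT COUNT(*) FROM information_schema.foreign_tables WHERE foreign_table_schema = 'gitlab_secondary';"
```
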
diff --git a/doc/administration/geo/replication/high_availability.md b/doc/administration/geo/replication/high_availability.md
index 28ad89c4446..61e18df2480 100644
--- a/doc/administration/geo/replication/high_availability.md
+++ b/doc/administration/geo/replication/high_availability.md
@@ -50,17 +50,17 @@ The following steps enable a GitLab cluster to serve as the **primary** node.
1. Edit `/etc/gitlab/gitlab.rb` and add the following:
- ```ruby
- ##
- ## Enable the Geo primary role
- ##
- roles ['geo_primary_role']
-
- ##
- ## Disable automatic migrations
- ##
- gitlab_rails['auto_migrate'] = false
- ```
+ ```ruby
+ ##
+ ## Enable the Geo primary role
+ ##
+ roles ['geo_primary_role']
+
+ ##
+ ## Disable automatic migrations
+ ##
+ gitlab_rails['auto_migrate'] = false
+ ```
After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
@@ -107,36 +107,36 @@ Configure the [**secondary** database](database.md) as a read-only replica of
the **primary** database. Use the following as a guide.
1. Edit `/etc/gitlab/gitlab.rb` in the replica database machine, and add the
- following:
-
- ```ruby
- ##
- ## Configure the PostgreSQL role
- ##
- roles ['postgres_role']
-
- ##
- ## Secondary address
- ## - replace '<secondary_node_ip>' with the public or VPC address of your Geo secondary node
- ## - replace '<tracking_database_ip>' with the public or VPC address of your Geo tracking database node
- ##
- postgresql['listen_address'] = '<secondary_node_ip>'
- postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32', '<tracking_database_ip>/32']
-
- ##
- ## Database credentials password (defined previously in primary node)
- ## - replicate same values here as defined in primary node
- ##
- postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
- gitlab_rails['db_password'] = '<your_password_here>'
-
- ##
- ## When running the Geo tracking database on a separate machine, disable it
- ## here and allow connections from the tracking database host. And ensure
- ## the tracking database IP is in postgresql['md5_auth_cidr_addresses'] above.
- ##
- geo_postgresql['enable'] = false
- ```
+ following:
+
+ ```ruby
+ ##
+ ## Configure the PostgreSQL role
+ ##
+ roles ['postgres_role']
+
+ ##
+ ## Secondary address
+ ## - replace '<secondary_node_ip>' with the public or VPC address of your Geo secondary node
+ ## - replace '<tracking_database_ip>' with the public or VPC address of your Geo tracking database node
+ ##
+ postgresql['listen_address'] = '<secondary_node_ip>'
+ postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32', '<tracking_database_ip>/32']
+
+ ##
+ ## Database credentials password (defined previously in primary node)
+ ## - replicate same values here as defined in primary node
+ ##
+ postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
+ gitlab_rails['db_password'] = '<your_password_here>'
+
+ ##
+ ## When running the Geo tracking database on a separate machine, disable it
+ ## here and allow connections from the tracking database host. And ensure
+ ## the tracking database IP is in postgresql['md5_auth_cidr_addresses'] above.
+ ##
+ geo_postgresql['enable'] = false
+ ```
After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
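
Optionally, a quick connectivity test from a host covered by `postgresql['md5_auth_cidr_addresses']` can catch listen-address or firewall mistakes early. This is only a sketch: it reuses the `<secondary_node_ip>` placeholder from the example above and prompts for the `gitlab` database password:

```sh
# Should print a single row containing `1` if the replica accepts connections.
/opt/gitlab/embedded/bin/psql -h <secondary_node_ip> -U gitlab -d gitlabhq_production -W -c 'SELECT 1;'
```
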
@@ -151,47 +151,47 @@ only a single machine, rather than as a PostgreSQL cluster.
Configure the tracking database.
1. Edit `/etc/gitlab/gitlab.rb` in the tracking database machine, and add the
- following:
-
- ```ruby
- ##
- ## Enable the Geo secondary tracking database
- ##
- geo_postgresql['enable'] = true
- geo_postgresql['listen_address'] = '<ip_address_of_this_host>'
- geo_postgresql['sql_user_password'] = '<tracking_database_password_md5_hash>'
-
- ##
- ## Configure FDW connection to the replica database
- ##
- geo_secondary['db_fdw'] = true
- geo_postgresql['fdw_external_password'] = '<replica_database_password_plaintext>'
- geo_postgresql['md5_auth_cidr_addresses'] = ['<replica_database_ip>/32']
- gitlab_rails['db_host'] = '<replica_database_ip>'
-
- # Prevent reconfigure from attempting to run migrations on the replica DB
- gitlab_rails['auto_migrate'] = false
-
- ##
- ## Disable all other services that aren't needed, since we don't have a role
- ## that does this.
- ##
- alertmanager['enable'] = false
- consul['enable'] = false
- gitaly['enable'] = false
- gitlab_monitor['enable'] = false
- gitlab_workhorse['enable'] = false
- nginx['enable'] = false
- node_exporter['enable'] = false
- pgbouncer_exporter['enable'] = false
- postgresql['enable'] = false
- prometheus['enable'] = false
- redis['enable'] = false
- redis_exporter['enable'] = false
- repmgr['enable'] = false
- sidekiq['enable'] = false
- unicorn['enable'] = false
- ```
+ following:
+
+ ```ruby
+ ##
+ ## Enable the Geo secondary tracking database
+ ##
+ geo_postgresql['enable'] = true
+ geo_postgresql['listen_address'] = '<ip_address_of_this_host>'
+ geo_postgresql['sql_user_password'] = '<tracking_database_password_md5_hash>'
+
+ ##
+ ## Configure FDW connection to the replica database
+ ##
+ geo_secondary['db_fdw'] = true
+ geo_postgresql['fdw_external_password'] = '<replica_database_password_plaintext>'
+ geo_postgresql['md5_auth_cidr_addresses'] = ['<replica_database_ip>/32']
+ gitlab_rails['db_host'] = '<replica_database_ip>'
+
+ # Prevent reconfigure from attempting to run migrations on the replica DB
+ gitlab_rails['auto_migrate'] = false
+
+ ##
+ ## Disable all other services that aren't needed, since we don't have a role
+ ## that does this.
+ ##
+ alertmanager['enable'] = false
+ consul['enable'] = false
+ gitaly['enable'] = false
+ gitlab_monitor['enable'] = false
+ gitlab_workhorse['enable'] = false
+ nginx['enable'] = false
+ node_exporter['enable'] = false
+ pgbouncer_exporter['enable'] = false
+ postgresql['enable'] = false
+ prometheus['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ repmgr['enable'] = false
+ sidekiq['enable'] = false
+ unicorn['enable'] = false
+ ```
After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
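
Optionally, before moving on to the application servers, you can check that the tracking database is listening. This is a sketch: it assumes the Omnibus-embedded `pg_isready` binary and port `5431`, the port used for the Geo tracking database elsewhere in these docs:

```sh
/opt/gitlab/embedded/bin/pg_isready -h <ip_address_of_this_host> -p 5431
```
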
@@ -211,50 +211,50 @@ following modifications:
1. Edit `/etc/gitlab/gitlab.rb` on each application server in the **secondary**
cluster, and add the following:
- ```ruby
- ##
- ## Enable the Geo secondary role
- ##
- roles ['geo_secondary_role', 'application_role']
-
- ##
- ## Disable automatic migrations
- ##
- gitlab_rails['auto_migrate'] = false
-
- ##
- ## Configure the connection to the tracking DB. And disable application
- ## servers from running tracking databases.
- ##
- geo_secondary['db_host'] = '<geo_tracking_db_host>'
- geo_secondary['db_password'] = '<geo_tracking_db_password>'
- geo_postgresql['enable'] = false
-
- ##
- ## Configure connection to the streaming replica database, if you haven't
- ## already
- ##
- gitlab_rails['db_host'] = '<replica_database_host>'
- gitlab_rails['db_password'] = '<replica_database_password>'
-
- ##
- ## Configure connection to Redis, if you haven't already
- ##
- gitlab_rails['redis_host'] = '<redis_host>'
- gitlab_rails['redis_password'] = '<redis_password>'
-
- ##
- ## If you are using custom users not managed by Omnibus, you need to specify
- ## UIDs and GIDs like below, and ensure they match between servers in a
- ## cluster to avoid permissions issues
- ##
- user['uid'] = 9000
- user['gid'] = 9000
- web_server['uid'] = 9001
- web_server['gid'] = 9001
- registry['uid'] = 9002
- registry['gid'] = 9002
- ```
+ ```ruby
+ ##
+ ## Enable the Geo secondary role
+ ##
+ roles ['geo_secondary_role', 'application_role']
+
+ ##
+ ## Disable automatic migrations
+ ##
+ gitlab_rails['auto_migrate'] = false
+
+ ##
+ ## Configure the connection to the tracking DB. And disable application
+ ## servers from running tracking databases.
+ ##
+ geo_secondary['db_host'] = '<geo_tracking_db_host>'
+ geo_secondary['db_password'] = '<geo_tracking_db_password>'
+ geo_postgresql['enable'] = false
+
+ ##
+ ## Configure connection to the streaming replica database, if you haven't
+ ## already
+ ##
+ gitlab_rails['db_host'] = '<replica_database_host>'
+ gitlab_rails['db_password'] = '<replica_database_password>'
+
+ ##
+ ## Configure connection to Redis, if you haven't already
+ ##
+ gitlab_rails['redis_host'] = '<redis_host>'
+ gitlab_rails['redis_password'] = '<redis_password>'
+
+ ##
+ ## If you are using custom users not managed by Omnibus, you need to specify
+ ## UIDs and GIDs like below, and ensure they match between servers in a
+ ## cluster to avoid permissions issues
+ ##
+ user['uid'] = 9000
+ user['gid'] = 9000
+ web_server['uid'] = 9001
+ web_server['gid'] = 9001
+ registry['uid'] = 9002
+ registry['gid'] = 9002
+ ```
NOTE: **Note:**
If you had set up PostgreSQL cluster using the omnibus package and you had set
diff --git a/doc/administration/geo/replication/index.md b/doc/administration/geo/replication/index.md
index 54377f7ae4e..8e1d1cb46ba 100644
--- a/doc/administration/geo/replication/index.md
+++ b/doc/administration/geo/replication/index.md
@@ -80,8 +80,8 @@ In this diagram:
- If present, the [LDAP server](#ldap) should be configured to replicate for [Disaster Recovery](../disaster_recovery/index.md) scenarios.
- A **secondary** node performs different type of synchronizations against the **primary** node, using a special
authorization protected by JWT:
- - Repositories are cloned/updated via Git over HTTPS.
- - Attachments, LFS objects, and other files are downloaded via HTTPS using a private API endpoint.
+ - Repositories are cloned/updated via Git over HTTPS.
+ - Attachments, LFS objects, and other files are downloaded via HTTPS using a private API endpoint.
From the perspective of a user performing Git operations:
@@ -107,8 +107,8 @@ The following are required to run Geo:
- An operating system that supports OpenSSH 6.9+ (needed for
[fast lookup of authorized SSH keys in the database](../../operations/fast_ssh_key_lookup.md))
The following operating systems are known to ship with a current version of OpenSSH:
- - [CentOS](https://www.centos.org) 7.4+
- - [Ubuntu](https://www.ubuntu.com) 16.04+
+ - [CentOS](https://www.centos.org) 7.4+
+ - [Ubuntu](https://www.ubuntu.com) 16.04+
- PostgreSQL 9.6+ with [FDW](https://www.postgresql.org/docs/9.6/postgres-fdw.html) support and [Streaming Replication](https://wiki.postgresql.org/wiki/Streaming_Replication)
- Git 2.9+
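
If you want to confirm these requirements on an existing host, a rough check could look like the following. This is a sketch; the embedded `psql` path assumes an Omnibus installation:

```sh
ssh -V                                   # OpenSSH 6.9 or newer
/opt/gitlab/embedded/bin/psql --version  # PostgreSQL 9.6 or newer
git --version                            # Git 2.9 or newer
```
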
diff --git a/doc/administration/geo/replication/remove_geo_node.md b/doc/administration/geo/replication/remove_geo_node.md
index b190fe7d42d..6bdaad8f783 100644
--- a/doc/administration/geo/replication/remove_geo_node.md
+++ b/doc/administration/geo/replication/remove_geo_node.md
@@ -10,41 +10,42 @@ Once removed from the Geo admin page, you must stop and uninstall the **secondar
1. On the **secondary** node, stop GitLab:
- ```bash
- sudo gitlab-ctl stop
- ```
+ ```bash
+ sudo gitlab-ctl stop
+ ```
+
1. On the **secondary** node, uninstall GitLab:
- ```bash
- # Stop gitlab and remove its supervision process
- sudo gitlab-ctl uninstall
+ ```bash
+ # Stop gitlab and remove its supervision process
+ sudo gitlab-ctl uninstall
- # Debian/Ubuntu
- sudo dpkg --remove gitlab-ee
+ # Debian/Ubuntu
+ sudo dpkg --remove gitlab-ee
- # Redhat/Centos
- sudo rpm --erase gitlab-ee
- ```
+ # Redhat/Centos
+ sudo rpm --erase gitlab-ee
+ ```
Once GitLab has been uninstalled from the **secondary** node, the replication slot must be dropped from the **primary** node's database as follows:
1. On the **primary** node, start a PostgreSQL console session:
- ```bash
- sudo gitlab-psql
- ```
+ ```bash
+ sudo gitlab-psql
+ ```
- NOTE: **Note:**
- Using `gitlab-rails dbconsole` will not work, because managing replication slots requires superuser permissions.
+ NOTE: **Note:**
+ Using `gitlab-rails dbconsole` will not work, because managing replication slots requires superuser permissions.
1. Find the name of the relevant replication slot. This is the slot that is specified with `--slot-name` when running the replicate command: `gitlab-ctl replicate-geo-database`.
- ```sql
- SELECT * FROM pg_replication_slots;
- ```
+ ```sql
+ SELECT * FROM pg_replication_slots;
+ ```
1. Remove the replication slot for the **secondary** node:
- ```sql
- SELECT pg_drop_replication_slot('<name_of_slot>');
- ```
+ ```sql
+ SELECT pg_drop_replication_slot('<name_of_slot>');
+ ```
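
It does not hurt to confirm the slot is gone afterwards. Re-running the earlier query on the **primary** node should no longer list it:

```sh
sudo gitlab-psql -c "SELECT slot_name, active FROM pg_replication_slots;"
```
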
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index 5bd6cc81362..c7c78407084 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -184,17 +184,17 @@ log data to build up in `pg_xlog`. Removing the unused slots can reduce the amou
1. Start a PostgreSQL console session:
- ```sh
- sudo gitlab-psql gitlabhq_production
- ```
+ ```sh
+ sudo gitlab-psql gitlabhq_production
+ ```
- > Note that using `gitlab-rails dbconsole` will not work, because managing replication slots requires superuser permissions.
+ NOTE: **Note:** Using `gitlab-rails dbconsole` will not work, because managing replication slots requires superuser permissions.
1. View your replication slots with:
- ```sql
- SELECT * FROM pg_replication_slots;
- ```
+ ```sql
+ SELECT * FROM pg_replication_slots;
+ ```
Slots where `active` is `f` are not active.
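
If you prefer a non-interactive one-liner, the same check can be run from the shell, following the `gitlab-psql` usage above, to list only the inactive slots:

```sh
sudo gitlab-psql gitlabhq_production -c "SELECT slot_name FROM pg_replication_slots WHERE NOT active;"
```
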
@@ -204,9 +204,9 @@ Slots where `active` is `f` are not active.
- If you are no longer using the slot (e.g. you no longer have Geo enabled), you can remove it in the
PostgreSQL console session:
- ```sql
- SELECT pg_drop_replication_slot('<name_of_extra_slot>');
- ```
+ ```sql
+ SELECT pg_drop_replication_slot('<name_of_extra_slot>');
+ ```
### Very large repositories never successfully synchronize on the **secondary** node
@@ -237,82 +237,82 @@ to start again from scratch, there are a few steps that can help you:
1. Stop Sidekiq and the Geo LogCursor
- It's possible to make Sidekiq stop gracefully, but making it stop getting new jobs and
- wait until the current jobs to finish processing.
+ It's possible to make Sidekiq stop gracefully: it stops accepting new jobs and
+ waits until the current jobs finish processing.
- You need to send a **SIGTSTP** kill signal for the first phase and them a **SIGTERM**
- when all jobs have finished. Otherwise just use the `gitlab-ctl stop` commands.
+ You need to send a **SIGTSTP** kill signal for the first phase and then a **SIGTERM**
+ when all jobs have finished. Otherwise, just use the `gitlab-ctl stop` commands.
- ```sh
- gitlab-ctl status sidekiq
- # run: sidekiq: (pid 10180) <- this is the PID you will use
- kill -TSTP 10180 # change to the correct PID
+ ```sh
+ gitlab-ctl status sidekiq
+ # run: sidekiq: (pid 10180) <- this is the PID you will use
+ kill -TSTP 10180 # change to the correct PID
- gitlab-ctl stop sidekiq
- gitlab-ctl stop geo-logcursor
- ```
+ gitlab-ctl stop sidekiq
+ gitlab-ctl stop geo-logcursor
+ ```
- You can watch sidekiq logs to know when sidekiq jobs processing have finished:
+ You can watch the Sidekiq logs to know when Sidekiq has finished processing jobs:
- ```sh
- gitlab-ctl tail sidekiq
- ```
+ ```sh
+ gitlab-ctl tail sidekiq
+ ```
1. Rename repository storage folders and create new ones
- ```sh
- mv /var/opt/gitlab/git-data/repositories /var/opt/gitlab/git-data/repositories.old
- mkdir -p /var/opt/gitlab/git-data/repositories
- chown git:git /var/opt/gitlab/git-data/repositories
- ```
+ ```sh
+ mv /var/opt/gitlab/git-data/repositories /var/opt/gitlab/git-data/repositories.old
+ mkdir -p /var/opt/gitlab/git-data/repositories
+ chown git:git /var/opt/gitlab/git-data/repositories
+ ```
- TIP: **Tip**
- You may want to remove the `/var/opt/gitlab/git-data/repositories.old` in the future
- as soon as you confirmed that you don't need it anymore, to save disk space.
+ TIP: **Tip:**
+ To save disk space, you may want to remove the `/var/opt/gitlab/git-data/repositories.old`
+ directory as soon as you confirm that you no longer need it.
1. _(Optional)_ Rename other data folders and create new ones
- CAUTION: **Caution**:
- You may still have files on the **secondary** node that have been removed from **primary** node but
- removal have not been reflected. If you skip this step, they will never be removed
- from this Geo node.
+ CAUTION: **Caution:**
+ You may still have files on the **secondary** node that have been removed from the **primary** node,
+ but whose removal has not yet been reflected. If you skip this step, they will never be removed
+ from this Geo node.
- Any uploaded content like file attachments, avatars or LFS objects are stored in a
- subfolder in one of the two paths below:
+ Any uploaded content, such as file attachments, avatars, or LFS objects, is stored in a
+ subfolder in one of the two paths below:
- 1. /var/opt/gitlab/gitlab-rails/shared
- 1. /var/opt/gitlab/gitlab-rails/uploads
+ - /var/opt/gitlab/gitlab-rails/shared
+ - /var/opt/gitlab/gitlab-rails/uploads
- To rename all of them:
+ To rename all of them:
- ```sh
- gitlab-ctl stop
+ ```sh
+ gitlab-ctl stop
- mv /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-rails/shared.old
- mkdir -p /var/opt/gitlab/gitlab-rails/shared
+ mv /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-rails/shared.old
+ mkdir -p /var/opt/gitlab/gitlab-rails/shared
- mv /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/uploads.old
- mkdir -p /var/opt/gitlab/gitlab-rails/uploads
- ```
+ mv /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/uploads.old
+ mkdir -p /var/opt/gitlab/gitlab-rails/uploads
+ ```
- Reconfigure in order to recreate the folders and make sure permissions and ownership
- are correctly
+ Reconfigure to recreate the folders and make sure permissions and ownership
+ are correct:
- ```sh
- gitlab-ctl reconfigure
- ```
+ ```sh
+ gitlab-ctl reconfigure
+ ```
1. Reset the Tracking Database
- ```sh
- gitlab-rake geo:db:reset
- ```
+ ```sh
+ gitlab-rake geo:db:reset
+ ```
1. Restart previously stopped services
- ```sh
- gitlab-ctl start
- ```
+ ```sh
+ gitlab-ctl start
+ ```
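
Once the services are back up, re-running the Geo check task (also used in the update docs below) is a quick way to confirm the **secondary** node is healthy again:

```sh
sudo gitlab-rake gitlab:geo:check
```
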
## Fixing Foreign Data Wrapper errors
@@ -345,108 +345,106 @@ To check the configuration:
1. Enter the database console:
- ```sh
- gitlab-geo-psql
- ```
+ ```sh
+ gitlab-geo-psql
+ ```
1. Check whether any tables are present. If everything is working, you
should see something like this:
- ```sql
- gitlabhq_geo_production=# SELECT * from information_schema.foreign_tables;
- foreign_table_catalog | foreign_table_schema | foreign_table_name | foreign_server_catalog | foreign_server_n
- ame
- -------------------------+----------------------+-------------------------------------------------+-------------------------+-----------------
- ----
- gitlabhq_geo_production | gitlab_secondary | abuse_reports | gitlabhq_geo_production | gitlab_secondary
- gitlabhq_geo_production | gitlab_secondary | appearances | gitlabhq_geo_production | gitlab_secondary
- gitlabhq_geo_production | gitlab_secondary | application_setting_terms | gitlabhq_geo_production | gitlab_secondary
- gitlabhq_geo_production | gitlab_secondary | application_settings | gitlabhq_geo_production | gitlab_secondary
- <snip>
- ```
-
- However, if the query returns with `0 rows`, then continue onto the next steps.
+ ```sql
+ gitlabhq_geo_production=# SELECT * from information_schema.foreign_tables;
+ foreign_table_catalog | foreign_table_schema | foreign_table_name | foreign_server_catalog | foreign_server_name
+ -------------------------+----------------------+-------------------------------------------------+-------------------------+---------------------
+ gitlabhq_geo_production | gitlab_secondary | abuse_reports | gitlabhq_geo_production | gitlab_secondary
+ gitlabhq_geo_production | gitlab_secondary | appearances | gitlabhq_geo_production | gitlab_secondary
+ gitlabhq_geo_production | gitlab_secondary | application_setting_terms | gitlabhq_geo_production | gitlab_secondary
+ gitlabhq_geo_production | gitlab_secondary | application_settings | gitlabhq_geo_production | gitlab_secondary
+ <snip>
+ ```
+
+ However, if the query returns `0 rows`, continue on to the next steps.
1. Check that the foreign server mapping is correct via `\des+`. The
results should look something like this:
- ```sql
- gitlabhq_geo_production=# \des+
- List of foreign servers
- -[ RECORD 1 ]--------+------------------------------------------------------------
- Name | gitlab_secondary
- Owner | gitlab-psql
- Foreign-data wrapper | postgres_fdw
- Access privileges | "gitlab-psql"=U/"gitlab-psql" +
- | gitlab_geo=U/"gitlab-psql"
- Type |
- Version |
- FDW Options | (host '0.0.0.0', port '5432', dbname 'gitlabhq_production')
- Description |
- ```
-
- NOTE: **Note:** Pay particular attention to the host and port under
- FDW options. That configuration should point to the Geo secondary
- database.
-
- If you need to experiment with changing the host or password, the
- following queries demonstrate how:
-
- ```sql
- ALTER SERVER gitlab_secondary OPTIONS (SET host '<my_new_host>');
- ALTER SERVER gitlab_secondary OPTIONS (SET port 5432);
- ```
-
- If you change the host and/or port, you will also have to adjust the
- following settings in `/etc/gitlab/gitlab.rb` and run `gitlab-ctl
- reconfigure`:
-
- - `gitlab_rails['db_host']`
- - `gitlab_rails['db_port']`
+ ```sql
+ gitlabhq_geo_production=# \des+
+ List of foreign servers
+ -[ RECORD 1 ]--------+------------------------------------------------------------
+ Name | gitlab_secondary
+ Owner | gitlab-psql
+ Foreign-data wrapper | postgres_fdw
+ Access privileges | "gitlab-psql"=U/"gitlab-psql" +
+ | gitlab_geo=U/"gitlab-psql"
+ Type |
+ Version |
+ FDW Options | (host '0.0.0.0', port '5432', dbname 'gitlabhq_production')
+ Description |
+ ```
+
+ NOTE: **Note:** Pay particular attention to the host and port under
+ FDW options. That configuration should point to the Geo secondary
+ database.
+
+ If you need to experiment with changing the host or password, the
+ following queries demonstrate how:
+
+ ```sql
+ ALTER SERVER gitlab_secondary OPTIONS (SET host '<my_new_host>');
+ ALTER SERVER gitlab_secondary OPTIONS (SET port 5432);
+ ```
+
+ If you change the host and/or port, you will also have to adjust the
+ following settings in `/etc/gitlab/gitlab.rb` and run `gitlab-ctl
+ reconfigure`:
+
+ - `gitlab_rails['db_host']`
+ - `gitlab_rails['db_port']`
1. Check that the user mapping is configured properly via `\deu+`:
- ```sql
- gitlabhq_geo_production=# \deu+
- List of user mappings
- Server | User name | FDW Options
- ------------------+------------+--------------------------------------------------------------------------------
- gitlab_secondary | gitlab_geo | ("user" 'gitlab', password 'YOUR-PASSWORD-HERE')
- (1 row)
- ```
+ ```sql
+ gitlabhq_geo_production=# \deu+
+ List of user mappings
+ Server | User name | FDW Options
+ ------------------+------------+--------------------------------------------------------------------------------
+ gitlab_secondary | gitlab_geo | ("user" 'gitlab', password 'YOUR-PASSWORD-HERE')
+ (1 row)
+ ```
- Make sure the password is correct. You can test that logins work by running `psql`:
+ Make sure the password is correct. You can test that logins work by running `psql`:
- ```sh
- # Connect to the tracking database as the `gitlab_geo` user
- sudo \
- -u git /opt/gitlab/embedded/bin/psql \
- -h /var/opt/gitlab/geo-postgresql \
- -p 5431 \
- -U gitlab_geo \
- -W \
- -d gitlabhq_geo_production
- ```
+ ```sh
+ # Connect to the tracking database as the `gitlab_geo` user
+ sudo \
+ -u git /opt/gitlab/embedded/bin/psql \
+ -h /var/opt/gitlab/geo-postgresql \
+ -p 5431 \
+ -U gitlab_geo \
+ -W \
+ -d gitlabhq_geo_production
+ ```
- If you need to correct the password, the following query shows how:
+ If you need to correct the password, the following query shows how:
- ```sql
- ALTER USER MAPPING FOR gitlab_geo SERVER gitlab_secondary OPTIONS (SET password '<my_new_password>');
- ```
+ ```sql
+ ALTER USER MAPPING FOR gitlab_geo SERVER gitlab_secondary OPTIONS (SET password '<my_new_password>');
+ ```
- If you change the user or password, you will also have to adjust the
- following settings in `/etc/gitlab/gitlab.rb` and run `gitlab-ctl
- reconfigure`:
+ If you change the user or password, you will also have to adjust the
+ following settings in `/etc/gitlab/gitlab.rb` and run `gitlab-ctl
+ reconfigure`:
- - `gitlab_rails['db_username']`
- - `gitlab_rails['db_password']`
+ - `gitlab_rails['db_username']`
+ - `gitlab_rails['db_password']`
- If you are using [PgBouncer in front of the secondary
- database](database.md#pgbouncer-support-optional), be sure to update
- the following settings:
+ If you are using [PgBouncer in front of the secondary
+ database](database.md#pgbouncer-support-optional), be sure to update
+ the following settings:
- - `geo_postgresql['fdw_external_user']`
- - `geo_postgresql['fdw_external_password']`
+ - `geo_postgresql['fdw_external_user']`
+ - `geo_postgresql['fdw_external_password']`
#### Manual reload of FDW schema
@@ -456,34 +454,34 @@ reload of the FDW schema. To manually reload the FDW schema:
1. On the node running the Geo tracking database, enter the PostgreSQL console via
the `gitlab_geo` user:
- ```sh
- sudo \
- -u git /opt/gitlab/embedded/bin/psql \
- -h /var/opt/gitlab/geo-postgresql \
- -p 5431 \
- -U gitlab_geo \
- -W \
- -d gitlabhq_geo_production
- ```
+ ```sh
+ sudo \
+ -u git /opt/gitlab/embedded/bin/psql \
+ -h /var/opt/gitlab/geo-postgresql \
+ -p 5431 \
+ -U gitlab_geo \
+ -W \
+ -d gitlabhq_geo_production
+ ```
- Be sure to adjust the port and hostname for your configuration. You
- may be asked to enter a password.
+ Be sure to adjust the port and hostname for your configuration. You
+ may be asked to enter a password.
1. Reload the schema via:
- ```sql
- DROP SCHEMA IF EXISTS gitlab_secondary CASCADE;
- CREATE SCHEMA gitlab_secondary;
- GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO gitlab_geo;
- IMPORT FOREIGN SCHEMA public FROM SERVER gitlab_secondary INTO gitlab_secondary;
- ```
+ ```sql
+ DROP SCHEMA IF EXISTS gitlab_secondary CASCADE;
+ CREATE SCHEMA gitlab_secondary;
+ GRANT USAGE ON FOREIGN SERVER gitlab_secondary TO gitlab_geo;
+ IMPORT FOREIGN SCHEMA public FROM SERVER gitlab_secondary INTO gitlab_secondary;
+ ```
1. Test that queries work:
- ```sql
- SELECT * from information_schema.foreign_tables;
- SELECT * FROM gitlab_secondary.projects limit 1;
- ```
+ ```sql
+ SELECT * from information_schema.foreign_tables;
+ SELECT * FROM gitlab_secondary.projects limit 1;
+ ```
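
If you would rather not run the SQL by hand, the Rake task shown earlier for external databases usually performs the same refresh; treat this as a suggestion rather than a required step:

```sh
gitlab-rake geo:db:refresh_foreign_tables
```
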
[database-start-replication]: database.md#step-3-initiate-the-replication-process
[database-pg-replication]: database.md#postgresql-replication
diff --git a/doc/administration/geo/replication/updating_the_geo_nodes.md b/doc/administration/geo/replication/updating_the_geo_nodes.md
index 933a75c47d8..348b0e9e7e5 100644
--- a/doc/administration/geo/replication/updating_the_geo_nodes.md
+++ b/doc/administration/geo/replication/updating_the_geo_nodes.md
@@ -32,68 +32,68 @@ authentication method.
1. **[primary]** Login to your **primary** node and run:
- ```sh
- gitlab-ctl pg-password-md5 gitlab
- # Enter password: <your_password_here>
- # Confirm password: <your_password_here>
- # fca0b89a972d69f00eb3ec98a5838484
- ```
+ ```sh
+ gitlab-ctl pg-password-md5 gitlab
+ # Enter password: <your_password_here>
+ # Confirm password: <your_password_here>
+ # fca0b89a972d69f00eb3ec98a5838484
+ ```
- Copy the generated hash and edit `/etc/gitlab/gitlab.rb`:
+ Copy the generated hash and edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
- postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
+ ```ruby
+ # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
+ postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
- # Every node that runs Unicorn or Sidekiq needs to have the database
- # password specified as below. If you have a high-availability setup, this
- # must be present in all application nodes.
- gitlab_rails['db_password'] = '<your_password_here>'
- ```
+ # Every node that runs Unicorn or Sidekiq needs to have the database
+ # password specified as below. If you have a high-availability setup, this
+ # must be present in all application nodes.
+ gitlab_rails['db_password'] = '<your_password_here>'
+ ```
- Still in the configuration file, locate and remove the `trust_auth_cidr_address`:
+ Still in the configuration file, locate and remove the `trust_auth_cidr_addresses` setting:
- ```ruby
- postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','1.2.3.4/32'] # <- Remove this
- ```
+ ```ruby
+ postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','1.2.3.4/32'] # <- Remove this
+ ```
1. **[primary]** Reconfigure and restart:
- ```sh
- sudo gitlab-ctl reconfigure
- sudo gitlab-ctl restart
- ```
+ ```sh
+ sudo gitlab-ctl reconfigure
+ sudo gitlab-ctl restart
+ ```
1. **[secondary]** Login to all **secondary** nodes and edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
- postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
+ ```ruby
+ # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab`
+ postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
- # Every node that runs Unicorn or Sidekiq needs to have the database
- # password specified as below. If you have a high-availability setup, this
- # must be present in all application nodes.
- gitlab_rails['db_password'] = '<your_password_here>'
+ # Every node that runs Unicorn or Sidekiq needs to have the database
+ # password specified as below. If you have a high-availability setup, this
+ # must be present in all application nodes.
+ gitlab_rails['db_password'] = '<your_password_here>'
- # Enable Foreign Data Wrapper
- geo_secondary['db_fdw'] = true
+ # Enable Foreign Data Wrapper
+ geo_secondary['db_fdw'] = true
- # Secondary address in CIDR format, for example '5.6.7.8/32'
- postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32']
- ```
+ # Secondary address in CIDR format, for example '5.6.7.8/32'
+ postgresql['md5_auth_cidr_addresses'] = ['<secondary_node_ip>/32']
+ ```
- Still in the configuration file, locate and remove the `trust_auth_cidr_address`:
+ Still in the configuration file, locate and remove the `trust_auth_cidr_addresses` setting:
- ```ruby
- postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','5.6.7.8/32'] # <- Remove this
- ```
+ ```ruby
+ postgresql['trust_auth_cidr_addresses'] = ['127.0.0.1/32','5.6.7.8/32'] # <- Remove this
+ ```
1. **[secondary]** Reconfigure and restart:
- ```sh
- sudo gitlab-ctl reconfigure
- sudo gitlab-ctl restart
- ```
+ ```sh
+ sudo gitlab-ctl reconfigure
+ sudo gitlab-ctl restart
+ ```
## Upgrading to GitLab 10.5
@@ -171,9 +171,9 @@ the now-unused SSH keys from your secondaries, as they may cause problems if the
1. **[secondary]** Login to **all** your **secondary** nodes and run:
- ```ruby
- sudo -u git -H rm ~git/.ssh/id_rsa ~git/.ssh/id_rsa.pub
- ```
+ ```sh
+ sudo -u git -H rm ~git/.ssh/id_rsa ~git/.ssh/id_rsa.pub
+ ```
### Hashed Storage
@@ -236,12 +236,12 @@ instructions below.
When in doubt, it does not hurt to do a resync. The easiest way to do this in
Omnibus is the following:
- 1. Make sure you have Omnibus GitLab on the **primary** server.
- 1. Run `gitlab-ctl reconfigure` and `gitlab-ctl restart postgresql`. This will enable replication slots on the **primary** database.
- 1. Check the steps about defining `postgresql['sql_user_password']`, `gitlab_rails['db_password']`.
- 1. Make sure `postgresql['max_replication_slots']` matches the number of **secondary** Geo nodes locations.
- 1. Install GitLab on the **secondary** server.
- 1. Re-run the [database replication process][database-replication].
+1. Make sure you have Omnibus GitLab on the **primary** server.
+1. Run `gitlab-ctl reconfigure` and `gitlab-ctl restart postgresql`. This will enable replication slots on the **primary** database.
+1. Check the steps about defining `postgresql['sql_user_password']`, `gitlab_rails['db_password']`.
+1. Make sure `postgresql['max_replication_slots']` matches the number of **secondary** Geo node locations (see the check after this list).
+1. Install GitLab on the **secondary** server.
+1. Re-run the [database replication process][database-replication].
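
One way to confirm that the slot allowance on the **primary** database matches the `gitlab.rb` setting above is to query it directly (a sketch using the console wrapper from earlier sections):

```sh
sudo gitlab-psql -c "SHOW max_replication_slots;"
```
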
## Special update notes for 9.0.x
@@ -262,16 +262,16 @@ is prepended with the relevant node for better clarity:
1. **[secondary]** Login to **all** your **secondary** nodes and stop all services:
- ```ruby
- sudo gitlab-ctl stop
- ```
+ ```sh
+ sudo gitlab-ctl stop
+ ```
1. **[secondary]** Make a backup of the `recovery.conf` file on **all**
**secondary** nodes to preserve PostgreSQL's credentials:
- ```sh
- sudo cp /var/opt/gitlab/postgresql/data/recovery.conf /var/opt/gitlab/
- ```
+ ```sh
+ sudo cp /var/opt/gitlab/postgresql/data/recovery.conf /var/opt/gitlab/
+ ```
1. **[primary]** Update the **primary** node to GitLab 9.0 following the
[regular update docs][update]. At the end of the update, the **primary** node
@@ -281,136 +281,133 @@ is prepended with the relevant node for better clarity:
stop all services except `postgresql` as we will use it to re-initialize the
**secondary** node's database:
- ```sh
- sudo gitlab-ctl stop
- sudo gitlab-ctl start postgresql
- ```
+ ```sh
+ sudo gitlab-ctl stop
+ sudo gitlab-ctl start postgresql
+ ```
1. **[secondary]** Run the following steps on each of the **secondary** nodes:
- 1. **[secondary]** Stop all services:
+ 1. **[secondary]** Stop all services:
- ```sh
- sudo gitlab-ctl stop
- ```
+ ```sh
+ sudo gitlab-ctl stop
+ ```
- 1. **[secondary]** Prevent running database migrations:
+ 1. **[secondary]** Prevent running database migrations:
- ```sh
- sudo touch /etc/gitlab/skip-auto-migrations
- ```
+ ```sh
+ sudo touch /etc/gitlab/skip-auto-migrations
+ ```
- 1. **[secondary]** Move the old database to another directory:
+ 1. **[secondary]** Move the old database to another directory:
- ```sh
- sudo mv /var/opt/gitlab/postgresql{,.bak}
- ```
+ ```sh
+ sudo mv /var/opt/gitlab/postgresql{,.bak}
+ ```
- 1. **[secondary]** Update to GitLab 9.0 following the [regular update docs][update].
- At the end of the update, the node will be running with PostgreSQL 9.6.
+ 1. **[secondary]** Update to GitLab 9.0 following the [regular update docs][update].
+ At the end of the update, the node will be running with PostgreSQL 9.6.
- 1. **[secondary]** Make sure all services are up:
+ 1. **[secondary]** Make sure all services are up:
- ```sh
- sudo gitlab-ctl start
- ```
+ ```sh
+ sudo gitlab-ctl start
+ ```
- 1. **[secondary]** Reconfigure GitLab:
+ 1. **[secondary]** Reconfigure GitLab:
- ```sh
- sudo gitlab-ctl reconfigure
- ```
+ ```sh
+ sudo gitlab-ctl reconfigure
+ ```
- 1. **[secondary]** Run the PostgreSQL upgrade command:
+ 1. **[secondary]** Run the PostgreSQL upgrade command:
- ```sh
- sudo gitlab-ctl pg-upgrade
- ```
+ ```sh
+ sudo gitlab-ctl pg-upgrade
+ ```
- 1. **[secondary]** See the stored credentials for the database that you will
- need to re-initialize the replication:
+ 1. **[secondary]** See the stored credentials for the database that you will
+ need to re-initialize the replication:
- ```sh
- sudo grep -s primary_conninfo /var/opt/gitlab/recovery.conf
- ```
+ ```sh
+ sudo grep -s primary_conninfo /var/opt/gitlab/recovery.conf
+ ```
- 1. **[secondary]** Create the `replica.sh` script as described in the
- [database configuration document][database-source-replication].
+ 1. **[secondary]** Save the snippet below in a file, for example `/tmp/replica.sh`. Modify the
+ embedded paths if necessary:
- 1. 1. **[secondary]** Save the snippet below in a file, let's say `/tmp/replica.sh`. Modify the
- embedded paths if necessary:
+ ```bash
+ #!/bin/bash
- ```
- #!/bin/bash
+ PORT="5432"
+ USER="gitlab_replicator"
+ echo ---------------------------------------------------------------
+ echo WARNING: Make sure this script is run from the secondary server
+ echo ---------------------------------------------------------------
+ echo
+ echo Enter the IP or FQDN of the primary PostgreSQL server
+ read HOST
+ echo Enter the password for $USER@$HOST
+ read -s PASSWORD
+ echo Enter the required sslmode
+ read SSLMODE
- PORT="5432"
- USER="gitlab_replicator"
- echo ---------------------------------------------------------------
- echo WARNING: Make sure this script is run from the secondary server
- echo ---------------------------------------------------------------
- echo
- echo Enter the IP or FQDN of the primary PostgreSQL server
- read HOST
- echo Enter the password for $USER@$HOST
- read -s PASSWORD
- echo Enter the required sslmode
- read SSLMODE
+ echo Stopping PostgreSQL and all GitLab services
+ sudo service gitlab stop
+ sudo service postgresql stop
- echo Stopping PostgreSQL and all GitLab services
- sudo service gitlab stop
- sudo service postgresql stop
+ echo Backing up postgresql.conf
+ sudo -u postgres mv /var/opt/gitlab/postgresql/data/postgresql.conf /var/opt/gitlab/postgresql/
- echo Backing up postgresql.conf
- sudo -u postgres mv /var/opt/gitlab/postgresql/data/postgresql.conf /var/opt/gitlab/postgresql/
+ echo Cleaning up old cluster directory
+ sudo -u postgres rm -rf /var/opt/gitlab/postgresql/data
- echo Cleaning up old cluster directory
- sudo -u postgres rm -rf /var/opt/gitlab/postgresql/data
+ echo Starting base backup as the replicator user
+ echo Enter the password for $USER@$HOST
+ sudo -u postgres /opt/gitlab/embedded/bin/pg_basebackup -h $HOST -D /var/opt/gitlab/postgresql/data -U gitlab_replicator -v -x -P
- echo Starting base backup as the replicator user
- echo Enter the password for $USER@$HOST
- sudo -u postgres /opt/gitlab/embedded/bin/pg_basebackup -h $HOST -D /var/opt/gitlab/postgresql/data -U gitlab_replicator -v -x -P
+ echo Writing recovery.conf file
+ sudo -u postgres bash -c "cat > /var/opt/gitlab/postgresql/data/recovery.conf <<- _EOF1_
+ standby_mode = 'on'
+ primary_conninfo = 'host=$HOST port=$PORT user=$USER password=$PASSWORD sslmode=$SSLMODE'
+ _EOF1_
+ "
- echo Writing recovery.conf file
- sudo -u postgres bash -c "cat > /var/opt/gitlab/postgresql/data/recovery.conf <<- _EOF1_
- standby_mode = 'on'
- primary_conninfo = 'host=$HOST port=$PORT user=$USER password=$PASSWORD sslmode=$SSLMODE'
- _EOF1_
- "
+ echo Restoring postgresql.conf
+ sudo -u postgres mv /var/opt/gitlab/postgresql/postgresql.conf /var/opt/gitlab/postgresql/data/
- echo Restoring postgresql.conf
- sudo -u postgres mv /var/opt/gitlab/postgresql/postgresql.conf /var/opt/gitlab/postgresql/data/
+ echo Starting PostgreSQL
+ sudo service postgresql start
+ ```
- echo Starting PostgreSQL
- sudo service postgresql start
- ```
+ 1. **[secondary]** Run the recovery script using the credentials from the
+ previous step:
- 1. **[secondary]** Run the recovery script using the credentials from the
- previous step:
+ ```sh
+ sudo bash /tmp/replica.sh
+ ```
- ```sh
- sudo bash /tmp/replica.sh
- ```
+ 1. **[secondary]** Reconfigure GitLab:
- 1. **[secondary]** Reconfigure GitLab:
+ ```sh
+ sudo gitlab-ctl reconfigure
+ ```
- ```sh
- sudo gitlab-ctl reconfigure
- ```
+ 1. **[secondary]** Start all services:
- 1. **[secondary]** Start all services:
+ ```sh
+ sudo gitlab-ctl start
+ ```
- ```sh
- sudo gitlab-ctl start
- ```
-
- 1. **[secondary]** Repeat the steps for the remaining **secondary** nodes.
+ 1. **[secondary]** Repeat the steps for the remaining **secondary** nodes.
1. **[primary]** After all **secondary** nodes are updated, start all services in
**primary** node:
- ```sh
- sudo gitlab-ctl start
- ```
+ ```sh
+ sudo gitlab-ctl start
+ ```
## Check status after updating
@@ -419,9 +416,9 @@ everything is working correctly:
1. Run the Geo raketask on all nodes, everything should be green:
- ```sh
- sudo gitlab-rake gitlab:geo:check
- ```
+ ```sh
+ sudo gitlab-rake gitlab:geo:check
+ ```
1. Check the **primary** node's Geo dashboard for any errors.
1. Test the data replication by pushing code to the **primary** node and see if it
@@ -435,9 +432,9 @@ and it is required since 10.0.
1. Run database migrations on tracking database:
- ```sh
- sudo gitlab-rake geo:db:migrate
- ```
+ ```sh
+ sudo gitlab-rake geo:db:migrate
+ ```
1. Repeat this step for each **secondary** node.