Diffstat (limited to 'doc/administration/geo/setup/database.md')
-rw-r--r--  doc/administration/geo/setup/database.md  232
1 files changed, 122 insertions, 110 deletions
diff --git a/doc/administration/geo/setup/database.md b/doc/administration/geo/setup/database.md
index b87a606e349..f6e72092a5f 100644
--- a/doc/administration/geo/setup/database.md
+++ b/doc/administration/geo/setup/database.md
@@ -9,7 +9,7 @@ type: howto
NOTE:
If your GitLab installation uses external (not managed by Omnibus) PostgreSQL
-instances, the Omnibus roles will not be able to perform all necessary
+instances, the Omnibus roles are unable to perform all necessary
configuration steps. In this case,
[follow the Geo with external PostgreSQL instances document instead](external_database.md).
@@ -25,10 +25,23 @@ size.
You are encouraged to first read through all the steps before executing them
in your testing/production environment.
-## PostgreSQL replication
+## Single instance database replication
-The GitLab **primary** node where the write operations happen will connect to
-the **primary** database server, and **secondary** nodes will
+Single-instance database replication is easier to set up and still provides the same Geo capabilities
+as a clustered alternative. It's useful for setups running on a single machine
+or for evaluating Geo before moving to a clustered installation.
+
+A single instance can be expanded to a clustered version using Patroni, which is recommended for a
+highly available architecture.
+
+Follow the instructions below to set up PostgreSQL replication as a single-instance database.
+Alternatively, see the [Multi-node database replication](#multi-node-database-replication)
+instructions to set up replication with a Patroni cluster.
+
+### PostgreSQL replication
+
+The GitLab **primary** node where the write operations happen connects to
+the **primary** database server, and **secondary** nodes
connect to their own database servers (which are also read-only).
We recommend using [PostgreSQL replication slots](https://medium.com/@tk512/replication-slots-in-postgresql-b4b03d277c75)
@@ -37,8 +50,8 @@ recover. See below for more details.
The following guide assumes that:
-- You are using Omnibus and therefore you are using PostgreSQL 11 or later
- which includes the [`pg_basebackup` tool](https://www.postgresql.org/docs/11/app-pgbasebackup.html).
+- You are using Omnibus and therefore you are using PostgreSQL 12 or later
+ which includes the [`pg_basebackup` tool](https://www.postgresql.org/docs/12/app-pgbasebackup.html).
- You have a **primary** node already set up (the GitLab server you are
replicating from), running Omnibus' PostgreSQL (or equivalent version), and
you have a new **secondary** server set up with the same versions of the OS,
@@ -48,7 +61,7 @@ WARNING:
Geo works with streaming replication. Logical replication is not supported at this time.
There is an [issue where support is being discussed](https://gitlab.com/gitlab-org/gitlab/-/issues/7420).
-### Step 1. Configure the **primary** server
+#### Step 1. Configure the **primary** server
1. SSH into your GitLab **primary** server and log in as root:
@@ -75,13 +88,9 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
gitlab-ctl set-geo-primary-node
```
- This command will use your defined `external_url` in `/etc/gitlab/gitlab.rb`.
-
-1. GitLab 10.4 and up only: Do the following to make sure the `gitlab` database user has a password defined:
+ This command uses your defined `external_url` in `/etc/gitlab/gitlab.rb`.
- NOTE:
- Until FDW settings are removed in GitLab version 14.0, avoid using single or double quotes in the
- password for PostgreSQL as that will lead to errors when reconfiguring.
+1. Define a password for the `gitlab` database user:
Generate an MD5 hash of the desired password:
@@ -103,18 +112,28 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
# must be present in all application nodes.
gitlab_rails['db_password'] = '<your_password_here>'
```
+
+1. Define a password for the database [replication user](https://wiki.postgresql.org/wiki/Streaming_Replication).
-1. Omnibus GitLab already has a [replication user](https://wiki.postgresql.org/wiki/Streaming_Replication)
- called `gitlab_replicator`. You must set the password for this user manually.
- You will be prompted to enter a password:
+ We will use the username defined in `/etc/gitlab/gitlab.rb` under the `postgresql['sql_replication_user']`
+ setting. The default value is `gitlab_replicator`, but if you changed it to something else, adapt
+ the instructions below.
+
+ Generate an MD5 hash of the desired password:
```shell
- gitlab-ctl set-replication-password
+ gitlab-ctl pg-password-md5 gitlab_replicator
+ # Enter password: <your_password_here>
+ # Confirm password: <your_password_here>
+ # 950233c0dfc2f39c64cf30457c3b7f1e
```
- This command will also read the `postgresql['sql_replication_user']` Omnibus
- setting in case you have changed `gitlab_replicator` username to something
- else.
+ Edit `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ # Fill with the hash generated by `gitlab-ctl pg-password-md5 gitlab_replicator`
+ postgresql['sql_replication_password'] = '<md5_hash_of_your_password>'
+ ```
If you are using an external database not managed by Omnibus GitLab, you need
to create the replicator user and define a password for it manually:
@@ -154,7 +173,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
echo "External address: $(curl --silent "ipinfo.io/ip")"
```
- In most cases, the following addresses will be used to configure GitLab
+ In most cases, the following addresses are used to configure GitLab
Geo:
| Configuration | Address |
@@ -168,11 +187,11 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
`postgresql['md5_auth_cidr_addresses']` and `postgresql['listen_address']`.
The `listen_address` option opens PostgreSQL up to network connections with the interface
- corresponding to the given address. See [the PostgreSQL documentation](https://www.postgresql.org/docs/11/runtime-config-connection.html)
+ corresponding to the given address. See [the PostgreSQL documentation](https://www.postgresql.org/docs/12/runtime-config-connection.html)
for more details.
NOTE:
- If you need to use `0.0.0.0` or `*` as the listen_address, you will also need to add
+ If you need to use `0.0.0.0` or `*` as the listen_address, you also need to add
`127.0.0.1/32` to the `postgresql['md5_auth_cidr_addresses']` setting, to allow Rails to connect through
`127.0.0.1`. For more information, see [omnibus-5258](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5258).
@@ -190,7 +209,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
## Geo Primary role
## - configure dependent flags automatically to enable Geo
##
- roles ['geo_primary_role']
+ roles(['geo_primary_role'])
##
## Primary address
@@ -226,7 +245,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
```
You may also want to edit the `wal_keep_segments` and `max_wal_senders` to match your
- database replication requirements. Consult the [PostgreSQL - Replication documentation](https://www.postgresql.org/docs/11/runtime-config-replication.html)
+ database replication requirements. Consult the [PostgreSQL - Replication documentation](https://www.postgresql.org/docs/12/runtime-config-replication.html)
for more information.
1. Save the file and reconfigure GitLab for the database listen changes and
@@ -262,7 +281,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
`5432` to the **primary** server's private address.
1. A certificate was automatically generated when GitLab was reconfigured. This
- will be used automatically to protect your PostgreSQL traffic from
+ is used automatically to protect your PostgreSQL traffic from
eavesdroppers, but to protect against active ("man-in-the-middle") attackers,
the **secondary** node needs a copy of the certificate. Make a copy of the PostgreSQL
`server.crt` file on the **primary** node by running this command:
@@ -272,10 +291,10 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
```
Copy the output into a clipboard or into a local file. You
- will need it when setting up the **secondary** node! The certificate is not sensitive
+ need it when setting up the **secondary** node! The certificate is not sensitive
data.
-### Step 2. Configure the **secondary** server
+#### Step 2. Configure the **secondary** server
1. SSH into your GitLab **secondary** server and log in as root:
@@ -325,7 +344,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
-T server.crt ~gitlab-psql/.postgresql/root.crt
```
- PostgreSQL will now only recognize that exact certificate when verifying TLS
+ PostgreSQL now only recognizes that exact certificate when verifying TLS
connections. The certificate can only be replicated by someone with access
to the private key, which is **only** present on the **primary** node.
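
As an optional sanity check (not part of the official steps), you can confirm that the installed certificate is identical to the copy taken from the **primary** node by comparing fingerprints. This assumes the `server.crt` copy is still in your working directory:

```shell
# Both commands should print the same fingerprint
sudo openssl x509 -in server.crt -noout -fingerprint -sha256
sudo openssl x509 -in ~gitlab-psql/.postgresql/root.crt -noout -fingerprint -sha256
```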
@@ -363,7 +382,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
## Geo Secondary role
## - configure dependent flags automatically to enable Geo
##
- roles ['geo_secondary_role']
+ roles(['geo_secondary_role'])
##
## Secondary address
@@ -376,12 +395,13 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
## Database credentials password (defined previously in primary node)
## - replicate same values here as defined in primary node
##
+ postgresql['sql_replication_password'] = '<md5_hash_of_your_password>'
postgresql['sql_user_password'] = '<md5_hash_of_your_password>'
gitlab_rails['db_password'] = '<your_password_here>'
```
For external PostgreSQL instances, see [additional instructions](external_database.md).
- If you bring a former **primary** node back online to serve as a **secondary** node, then you also need to remove `roles ['geo_primary_role']` or `geo_primary_role['enable'] = true`.
+ If you bring a former **primary** node back online to serve as a **secondary** node, then you also need to remove `roles(['geo_primary_role'])` or `geo_primary_role['enable'] = true`.
1. Reconfigure GitLab for the changes to take effect:
@@ -395,7 +415,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
gitlab-ctl restart postgresql
```
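
Optionally, before moving on, you can confirm that PostgreSQL restarted cleanly on the **secondary** node:

```shell
# Not part of the official steps: check that the postgresql service is up
sudo gitlab-ctl status postgresql
```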
-### Step 3. Initiate the replication process
+#### Step 3. Initiate the replication process
Below we provide a script that connects the database on the **secondary** node to
the database on the **primary** node, replicates the database, and creates the
@@ -423,7 +443,7 @@ data before running `pg_basebackup`.
WARNING:
Each Geo **secondary** node must have its own unique replication slot name.
- Using the same slot name between two secondaries will break PostgreSQL replication.
+ Using the same slot name between two secondaries breaks PostgreSQL replication.
```shell
gitlab-ctl replicate-geo-database \
@@ -441,57 +461,57 @@ data before running `pg_basebackup`.
to list them all, but here are a couple of tips:
- If PostgreSQL is listening on a non-standard port, add `--port=` as well.
- - If your database is too large to be transferred in 30 minutes, you will need
+ - If your database is too large to be transferred in 30 minutes, you need
to increase the timeout, e.g., `--backup-timeout=3600` if you expect the
initial replication to take under an hour.
- Pass `--sslmode=disable` to skip PostgreSQL TLS authentication altogether
(e.g., you know the network path is secure, or you are using a site-to-site
VPN). This is **not** safe over the public Internet!
- You can read more details about each `sslmode` in the
- [PostgreSQL documentation](https://www.postgresql.org/docs/11/libpq-ssl.html#LIBPQ-SSL-PROTECTION);
+ [PostgreSQL documentation](https://www.postgresql.org/docs/12/libpq-ssl.html#LIBPQ-SSL-PROTECTION);
the instructions above are carefully written to ensure protection against
both passive eavesdroppers and active "man-in-the-middle" attackers.
- Change the `--slot-name` to the name of the replication slot
- to be used on the **primary** database. The script will attempt to create the
+ to be used on the **primary** database. The script attempts to create the
replication slot automatically if it does not exist.
- - If you're repurposing an old server into a Geo **secondary** node, you'll need to
+ - If you're repurposing an old server into a Geo **secondary** node, you need to
add `--force` to the command line.
- If you are not on a production machine, you can disable the backup step (if you are
really sure this is what you want) by adding `--skip-backup`
The replication process is now complete.
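
As an optional verification (not part of the official steps), you can check from the **primary** node that the secondary is streaming and that its replication slot is in use. This sketch assumes the Omnibus defaults used above:

```shell
# Run on the primary node. Expect one row per secondary with state 'streaming',
# and the slot name you passed to --slot-name reported as active.
sudo gitlab-psql -c 'SELECT client_addr, state, sync_state FROM pg_stat_replication;'
sudo gitlab-psql -c 'SELECT slot_name, slot_type, active FROM pg_replication_slots;'
```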
-## PgBouncer support (optional)
+### PgBouncer support (optional)
[PgBouncer](https://www.pgbouncer.org/) may be used with GitLab Geo to pool
-PostgreSQL connections. We recommend using PgBouncer if you use GitLab in a
-high-availability configuration with a cluster of nodes supporting a Geo
-**primary** site and two other clusters of nodes supporting a Geo **secondary** site.
-One for the main database and the other for the tracking database. For more information,
+PostgreSQL connections, which can improve performance even in a
+single-instance installation.
+
+We recommend using PgBouncer if you use GitLab in a highly available
+configuration with a cluster of nodes supporting a Geo **primary** site and
+two other clusters of nodes supporting a Geo **secondary** site: one for the
+main database and the other for the tracking database. For more information,
see [High Availability with Omnibus GitLab](../../postgresql/replication_and_failover.md).
-## Patroni support
+## Multi-node database replication
-Support for Patroni is intended to replace `repmgr` as a
-[highly available PostgreSQL solution](../../postgresql/replication_and_failover.md)
-on the primary node, but it can also be used for PostgreSQL HA on a secondary
-site. Similar to `repmgr`, using Patroni on a secondary node is optional.
+In GitLab 14.0, Patroni replaced `repmgr` as the supported
+[highly available PostgreSQL solution](../../postgresql/replication_and_failover.md).
-Starting with GitLab 13.5, Patroni is available for _experimental_ use with Geo
-primary and secondary sites. Due to its experimental nature, Patroni support is
-subject to change without notice.
+NOTE:
+If you have not yet [migrated from repmgr to Patroni](#migrating-from-repmgr-to-patroni), you are strongly advised to do so.
-This experimental implementation has the following limitations:
+### Patroni support
-- Whenever `gitlab-ctl reconfigure` runs on a Patroni Leader instance, there's a
- chance the node will be demoted due to the required short-time restart. To
- avoid this, you can pause auto-failover by running `gitlab-ctl patroni pause`.
- After a reconfigure, it resumes on its own.
+Patroni is the official replication management solution for Geo. It
+can be used to build a highly available cluster on the **primary** and a **secondary** Geo site.
+Using Patroni on a **secondary** site is optional, and you don't have to use the same number of
+nodes on each Geo site.
For instructions about how to set up Patroni on the primary site, see the
[PostgreSQL replication and failover with Omnibus GitLab](../../postgresql/replication_and_failover.md#patroni) page.
-### Configuring Patroni cluster for a Geo secondary site
+#### Configuring Patroni cluster for a Geo secondary site
In a Geo secondary site, the main PostgreSQL database is a read-only replica of the primary site’s PostgreSQL database.
@@ -503,7 +523,7 @@ configuration for the secondary site. The internal load balancer provides a sing
endpoint for connecting to the Patroni cluster's leader whenever a new leader is
elected. Be sure to use [password credentials](../../postgresql/replication_and_failover.md#database-authorization-for-patroni) and other database best practices.
-#### Step 1. Configure Patroni permanent replication slot on the primary site
+##### Step 1. Configure Patroni permanent replication slot on the primary site
To set up database replication with Patroni on a secondary node, we need to
configure a _permanent replication slot_ on the primary node's Patroni cluster,
@@ -521,16 +541,16 @@ Leader instance**:
1. Edit `/etc/gitlab/gitlab.rb` and add the following:
```ruby
- consul['enable'] = true
+ roles(['patroni_role'])
+
+ consul['services'] = %w(postgresql)
consul['configuration'] = {
retry_join: %w[CONSUL_PRIMARY1_IP CONSUL_PRIMARY2_IP CONSUL_PRIMARY3_IP]
}
-
- repmgr['enable'] = false
-
+
# You need one entry for each secondary, with a unique name following PostgreSQL slot_name constraints:
#
- # Configuration syntax will be: 'unique_slotname' => { 'type' => 'physical' },
+ # Configuration syntax is: 'unique_slotname' => { 'type' => 'physical' },
# We don't support setting a permanent replication slot for logical replication type
patroni['replication_slots'] = {
'geo_secondary' => { 'type' => 'physical' }
@@ -539,15 +559,18 @@ Leader instance**:
patroni['use_pg_rewind'] = true
patroni['postgresql']['max_wal_senders'] = 8 # Use double the number of Patroni/reserved slots (3 Patroni nodes + 1 reserved slot for a Geo secondary).
patroni['postgresql']['max_replication_slots'] = 8 # Use double the number of Patroni/reserved slots (3 Patroni nodes + 1 reserved slot for a Geo secondary).
+ patroni['replication_password'] = 'PLAIN_TEXT_POSTGRESQL_REPLICATION_PASSWORD'
- postgresql['md5_auth_cidr_addresses'] = [
- 'PATRONI_PRIMARY1_IP/32', 'PATRONI_PRIMARY2_IP/32', 'PATRONI_PRIMARY3_IP/32', 'PATRONI_PRIMARY_PGBOUNCER/32',
- 'PATRONI_SECONDARY1_IP/32', 'PATRONI_SECONDARY2_IP/32', 'PATRONI_SECONDARY3_IP/32', 'PATRONI_SECONDARY_PGBOUNCER/32' # We list all secondary instances as they can all become a Standby Leader
+ # We list all secondary instances as they can all become a Standby Leader
+ postgresql['md5_auth_cidr_addresses'] = %w[
+ PATRONI_PRIMARY1_IP/32 PATRONI_PRIMARY2_IP/32 PATRONI_PRIMARY3_IP/32 PATRONI_PRIMARY_PGBOUNCER/32
+ PATRONI_SECONDARY1_IP/32 PATRONI_SECONDARY2_IP/32 PATRONI_SECONDARY3_IP/32 PATRONI_SECONDARY_PGBOUNCER/32
]
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
postgresql['sql_replication_password'] = 'POSTGRESQL_REPLICATION_PASSWORD_HASH'
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
+ postgresql['listen_address'] = '0.0.0.0' # You can use a public or VPC address here instead
```
1. Reconfigure GitLab for the changes to take effect:
@@ -556,17 +579,17 @@ Leader instance**:
gitlab-ctl reconfigure
```
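
To double-check the result (an optional step, not part of the official instructions), you can list the cluster members and confirm that the permanent replication slot defined above exists; this assumes `gitlab-ctl patroni members` is available in your Omnibus version:

```shell
# Run on a Patroni node on the primary site
sudo gitlab-ctl patroni members

# The permanent slot defined above ('geo_secondary') should be listed
sudo gitlab-psql -c 'SELECT slot_name, slot_type FROM pg_replication_slots;'
```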
-#### Step 2. Configure the internal load balancer on the primary site
+##### Step 2. Configure the internal load balancer on the primary site
To avoid reconfiguring the Standby Leader on the secondary site whenever a new
-Leader is elected on the primary site, we'll need to set up a TCP internal load
-balancer which will give a single endpoint for connecting to the Patroni
+Leader is elected on the primary site, we need to set up a TCP internal load
+balancer which gives a single endpoint for connecting to the Patroni
cluster's Leader.
The Omnibus GitLab packages do not include a Load Balancer. Here's how you
could do it with [HAProxy](https://www.haproxy.org/).
-The following IPs and names will be used as an example:
+The following IPs and names are used as an example:
- `10.6.0.21`: Patroni 1 (`patroni1.internal`)
- `10.6.0.21`: Patroni 2 (`patroni2.internal`)
@@ -600,7 +623,7 @@ backend postgresql
Refer to your preferred Load Balancer's documentation for further guidance.
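
As a rough connectivity check (an illustration only; adjust the address and port to your own frontend), you can confirm that the load balancer endpoint answers PostgreSQL connections. `pg_isready` ships with the Omnibus-embedded PostgreSQL client tools, but any PostgreSQL client works:

```shell
# Replace the placeholders with your internal load balancer address and port
/opt/gitlab/embedded/bin/pg_isready -h INTERNAL_LOAD_BALANCER_PRIMARY_IP -p INTERNAL_LOAD_BALANCER_PRIMARY_PORT
```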
-#### Step 3. Configure a PgBouncer node on the secondary site
+##### Step 3. Configure a PgBouncer node on the secondary site
A production-ready and highly available configuration requires at least
three Consul nodes and a minimum of one PgBouncer node, but it's recommended to have
@@ -621,22 +644,26 @@ Follow the minimal configuration for the PgBouncer node:
```ruby
# Disable all components except Pgbouncer and Consul agent
- roles ['pgbouncer_role']
+ roles(['pgbouncer_role'])
# PgBouncer configuration
+ pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
pgbouncer['users'] = {
+ 'gitlab-consul': {
+ # Generate it with: `gitlab-ctl pg-password-md5 gitlab-consul`
+ password: 'GITLAB_CONSUL_PASSWORD_HASH'
+ },
'pgbouncer': {
+ # Generate it with: `gitlab-ctl pg-password-md5 pgbouncer`
password: 'PGBOUNCER_PASSWORD_HASH'
}
}
# Consul configuration
consul['watchers'] = %w(postgresql)
-
consul['configuration'] = {
retry_join: %w[CONSUL_SECONDARY1_IP CONSUL_SECONDARY2_IP CONSUL_SECONDARY3_IP]
}
-
consul['monitoring_service_discovery'] = true
```
@@ -652,17 +679,17 @@ Follow the minimal configuration for the PgBouncer node:
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
-1. Restart the PgBouncer service:
+1. Reload the PgBouncer service:
```shell
- gitlab-ctl restart pgbouncer
+ gitlab-ctl hup pgbouncer
```
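
Optionally, you can confirm PgBouncer is accepting connections through its admin console. This sketch assumes the default Omnibus PgBouncer port (`6432`) and the `pgbouncer` admin user configured above; it prompts for the plain-text password:

```shell
# SHOW DATABASES lists the databases PgBouncer is routing
sudo /opt/gitlab/embedded/bin/psql -h 127.0.0.1 -p 6432 -U pgbouncer -d pgbouncer -c 'SHOW DATABASES;'
```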
-#### Step 4. Configure a Standby cluster on the secondary site
+##### Step 4. Configure a Standby cluster on the secondary site
NOTE:
If you are converting a secondary site to a Patroni Cluster, you must start
-on the PostgreSQL instance. It will become the Patroni Standby Leader instance,
+on the PostgreSQL instance. It becomes the Patroni Standby Leader instance,
and then you can switch over to another replica if you need to.
For each Patroni instance on the secondary site:
@@ -676,21 +703,18 @@ For each Patroni instance on the secondary site:
1. Edit `/etc/gitlab/gitlab.rb` and add the following:
```ruby
- roles ['consul_role', 'postgres_role']
+ roles(['consul_role', 'patroni_role'])
consul['enable'] = true
consul['configuration'] = {
retry_join: %w[CONSUL_SECONDARY1_IP CONSUL_SECONDARY2_IP CONSUL_SECONDARY3_IP]
}
- repmgr['enable'] = false
-
postgresql['md5_auth_cidr_addresses'] = [
'PATRONI_SECONDARY1_IP/32', 'PATRONI_SECONDARY2_IP/32', 'PATRONI_SECONDARY3_IP/32', 'PATRONI_SECONDARY_PGBOUNCER/32',
# Any other instance that needs access to the database as per documentation
]
- patroni['enable'] = false
patroni['standby_cluster']['enable'] = true
patroni['standby_cluster']['host'] = 'INTERNAL_LOAD_BALANCER_PRIMARY_IP'
patroni['standby_cluster']['port'] = INTERNAL_LOAD_BALANCER_PRIMARY_PORT
@@ -699,6 +723,15 @@ For each Patroni instance on the secondary site:
patroni['use_pg_rewind'] = true
patroni['postgresql']['max_wal_senders'] = 5 # A minimum of three for one replica, plus two for each additional replica
patroni['postgresql']['max_replication_slots'] = 5 # A minimum of three for one replica, plus two for each additional replica
+
+ postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
+ postgresql['sql_replication_password'] = 'POSTGRESQL_REPLICATION_PASSWORD_HASH'
+ postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
+ postgresql['listen_address'] = '0.0.0.0' # You can use a public or VPC address here instead
+
+ gitlab_rails['db_password'] = 'POSTGRESQL_PASSWORD'
+ gitlab_rails['enable'] = true
+ gitlab_rails['auto_migrate'] = false
```
1. Reconfigure GitLab for the changes to take effect.
@@ -708,33 +741,11 @@ For each Patroni instance on the secondary site:
gitlab-ctl reconfigure
```
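
Once every Patroni instance on the secondary site has been reconfigured, an optional way to confirm that the standby cluster formed is to list its members; Patroni reports the leader of a standby cluster as a standby leader:

```shell
# Run on any Patroni node on the secondary site
sudo gitlab-ctl patroni members
```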
-1. Remove the PostgreSQL data directory:
-
- WARNING:
- If you are converting a secondary site to a Patroni Cluster, you must skip
- this step on the PostgreSQL instance.
-
- ```shell
- rm -rf /var/opt/gitlab/postgresql/data
- ```
-
-1. Edit `/etc/gitlab/gitlab.rb` to enable Patroni:
-
- ```ruby
- patroni['enable'] = true
- ```
-
-1. Reconfigure GitLab for the changes to take effect:
-
- ```shell
- gitlab-ctl reconfigure
- ```
-
### Migrating from repmgr to Patroni
1. Before migrating, it is recommended that there is no replication lag between the primary and secondary sites and that replication is paused. In GitLab 13.2 and later, you can pause and resume replication with `gitlab-ctl geo-replication-pause` and `gitlab-ctl geo-replication-resume` on a Geo secondary database node.
1. Follow the [instructions to migrate repmgr to Patroni](../../postgresql/replication_and_failover.md#switching-from-repmgr-to-patroni). When configuring Patroni on each primary site database node, add `patroni['replication_slots'] = { '<slot_name>' => 'physical' }`
-to `gitlab.rb` where `<slot_name>` is the name of the replication slot for your Geo secondary. This will ensure that Patroni recognizes the replication slot as permanent and will not drop it upon restarting.
+to `gitlab.rb` where `<slot_name>` is the name of the replication slot for your Geo secondary. This ensures that Patroni recognizes the replication slot as permanent and does not drop it upon restarting.
1. If database replication to the secondary was paused before migration, resume replication once Patroni is confirmed working on the primary.
### Migrating a single PostgreSQL node to Patroni
@@ -750,14 +761,14 @@ With Patroni it's now possible to support that. In order to migrate the existing
1. [Configure a Standby Cluster](#step-4-configure-a-standby-cluster-on-the-secondary-site)
on that single node machine.
-You will end up with a "Standby Cluster" with a single node. That allows you to later on add additional Patroni nodes
+You end up with a "Standby Cluster" with a single node. That allows you to add more Patroni nodes later
by following the same instructions above.
### Configuring Patroni cluster for the tracking PostgreSQL database
Secondary sites use a separate PostgreSQL installation as a tracking database to
keep track of replication status and automatically recover from potential replication issues.
-Omnibus automatically configures a tracking database when `roles ['geo_secondary_role']` is set.
+Omnibus automatically configures a tracking database when `roles(['geo_secondary_role'])` is set.
If you want to run this database in a highly available configuration, follow the instructions below.
A production-ready and secure setup requires at least three Consul nodes, three
@@ -782,7 +793,7 @@ Follow the minimal configuration for the PgBouncer node for the tracking databas
```ruby
# Disable all components except Pgbouncer and Consul agent
- roles ['pgbouncer_role']
+ roles(['pgbouncer_role'])
# PgBouncer configuration
pgbouncer['users'] = {
@@ -844,7 +855,7 @@ For each Patroni instance on the secondary site for the tracking database:
```ruby
# Disable all components except PostgreSQL, Patroni, and Consul
- roles ['patroni_role']
+ roles(['patroni_role'])
# Consul configuration
consul['services'] = %w(postgresql)
@@ -875,6 +886,7 @@ For each Patroni instance on the secondary site for the tracking database:
# GitLab database settings
gitlab_rails['db_database'] = 'gitlabhq_geo_production'
gitlab_rails['db_username'] = 'gitlab_geo'
+ gitlab_rails['enable'] = true
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
@@ -934,8 +946,8 @@ Patroni implementation on Omnibus that do not allow us to manage two different
clusters on the same machine, we recommend setting up a new Patroni cluster for
the tracking database by following the same instructions above.
-The secondary nodes will backfill the new tracking database, and no data
-synchronization will be required.
+The secondary nodes backfill the new tracking database, and no data
+synchronization is required.
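
After completing any of the procedures above, you can run GitLab's built-in Geo health check as an optional final verification. Run it on a node running GitLab Rails on each site, not on the database-only nodes:

```shell
sudo gitlab-rake gitlab:geo:check
```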
## Troubleshooting