Diffstat (limited to 'doc/administration/geo/setup/database.md')
-rw-r--r--  doc/administration/geo/setup/database.md  100
1 file changed, 63 insertions, 37 deletions
diff --git a/doc/administration/geo/setup/database.md b/doc/administration/geo/setup/database.md
index f6e72092a5f..bc4128deb4a 100644
--- a/doc/administration/geo/setup/database.md
+++ b/doc/administration/geo/setup/database.md
@@ -31,17 +31,17 @@ A single instance database replication is easier to set up and still provides th
as a clusterized alternative. It's useful for setups running on a single machine
or trying to evaluate Geo for a future clusterized installation.
-A single instance can be expanded to a clusterized version using Patroni, which is recommended for a
+A single instance can be expanded to a clusterized version using Patroni, which is recommended for a
highly available architecture.
-Follow below the instructions on how to set up PostgreSQL replication as a single instance database.
-Alternatively, you can look at the [Multi-node database replication](#multi-node-database-replication)
+Follow the instructions below to set up PostgreSQL replication as a single instance database.
+Alternatively, you can look at the [Multi-node database replication](#multi-node-database-replication)
instructions on setting up replication with a Patroni cluster.
### PostgreSQL replication
The GitLab **primary** node where the write operations happen connects to
-the **primary** database server, and **secondary** nodes
+the **primary** database server, and **secondary** nodes
connect to their own database servers (which are also read-only).
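Once replication is configured later in these instructions, you can verify it from either side with PostgreSQL's built-in statistics views. This is a minimal sketch (it assumes the Omnibus `gitlab-psql` wrapper; run the first command on the **primary** database server and the second on the **secondary**):

```shell
# On the primary: one row per connected standby, with its replication state.
gitlab-psql -c 'SELECT client_addr, state, sync_state FROM pg_stat_replication;'

# On the secondary: the WAL receiver's connection back to the primary.
gitlab-psql -c 'SELECT status, sender_host FROM pg_stat_wal_receiver;'
```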
We recommend using [PostgreSQL replication slots](https://medium.com/@tk512/replication-slots-in-postgresql-b4b03d277c75)
@@ -112,13 +112,13 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
# must be present in all application nodes.
gitlab_rails['db_password'] = '<your_password_here>'
```
-
+
1. Define a password for the database [replication user](https://wiki.postgresql.org/wiki/Streaming_Replication).
We will use the username defined in `/etc/gitlab/gitlab.rb` under the `postgresql['sql_replication_user']`
- setting. The default value is `gitlab_replicator`, but if you changed it to something else, adapt
+ setting. The default value is `gitlab_replicator`, but if you changed it to something else, adapt
the instructions below.
-
+
Generate an MD5 hash of the desired password:
```shell
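# A minimal sketch, assuming the Omnibus `gitlab-ctl pg-password-md5` helper and
# the default `gitlab_replicator` username; the command prompts for the password
# and prints the MD5 hash:
gitlab-ctl pg-password-md5 gitlab_replicator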
@@ -462,10 +462,10 @@ data before running `pg_basebackup`.
- If PostgreSQL is listening on a non-standard port, add `--port=` as well.
- If your database is too large to be transferred in 30 minutes, you need
- to increase the timeout, e.g., `--backup-timeout=3600` if you expect the
+ to increase the timeout, for example, `--backup-timeout=3600` if you expect the
initial replication to take under an hour.
- Pass `--sslmode=disable` to skip PostgreSQL TLS authentication altogether
- (e.g., you know the network path is secure, or you are using a site-to-site
+ (for example, you know the network path is secure, or you are using a site-to-site
VPN). This is **not** safe over the public Internet!
- You can read more details about each `sslmode` in the
[PostgreSQL documentation](https://www.postgresql.org/docs/12/libpq-ssl.html#LIBPQ-SSL-PROTECTION);
@@ -484,12 +484,12 @@ The replication process is now complete.
### PgBouncer support (optional)
[PgBouncer](https://www.pgbouncer.org/) may be used with GitLab Geo to pool
-PostgreSQL connections, which can improve performance even when using in a
-single instance installation.
+PostgreSQL connections, which can improve performance even when used in a
+single instance installation.
-We recommend using PgBouncer if you use GitLab in a highly available
-configuration with a cluster of nodes supporting a Geo **primary** site and
-two other clusters of nodes supporting a Geo **secondary** site. One for the
+We recommend using PgBouncer if you use GitLab in a highly available
+configuration with a cluster of nodes supporting a Geo **primary** site and
+two other clusters of nodes supporting a Geo **secondary** site: one for the
main database and the other for the tracking database. For more information,
see [High Availability with Omnibus GitLab](../../postgresql/replication_and_failover.md).
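For orientation, a dedicated PgBouncer node is configured through `/etc/gitlab/gitlab.rb`. The following is a rough, assumption-laden sketch (the role name, Consul watcher, and `gitlabhq_production` entry follow the linked replication and failover documentation; hosts, Consul IPs, and the password hash are placeholders):

```ruby
# Hypothetical minimal PgBouncer node configuration; adapt to your environment.
roles(['pgbouncer_role'])

# Database that PgBouncer pools connections for. With the Consul watcher below,
# the target is kept pointed at the current PostgreSQL leader.
pgbouncer['databases'] = {
  gitlabhq_production: {
    host: 'POSTGRESQL_LEADER_OR_LOAD_BALANCER_IP',
    user: 'pgbouncer',
    password: 'PGBOUNCER_PASSWORD_HASH'
  }
}

consul['watchers'] = %w(postgresql)
consul['configuration'] = {
  retry_join: %w[CONSUL1_IP CONSUL2_IP CONSUL3_IP]
}
```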
@@ -505,7 +505,7 @@ If you still haven't [migrated from repmgr to Patroni](#migrating-from-repmgr-to
Patroni is the official replication management solution for Geo. It
can be used to build a highly available cluster on the **primary** and a **secondary** Geo site.
-Using Patroni on a **secondary** site is optional and you don't have to use the same amount of
+Using Patroni on a **secondary** site is optional and you don't have to use the same number of
nodes on each Geo site.
For instructions about how to set up Patroni on the primary site, see the
@@ -515,13 +515,21 @@ For instructions about how to set up Patroni on the primary site, see the
In a Geo secondary site, the main PostgreSQL database is a read-only replica of the primary site’s PostgreSQL database.
-If you are currently using `repmgr` on your Geo primary site, see [these instructions](#migrating-from-repmgr-to-patroni) for migrating from `repmgr` to Patroni.
+If you are currently using `repmgr` on your Geo primary site, see [these instructions](#migrating-from-repmgr-to-patroni)
+for migrating from `repmgr` to Patroni.
+
+A production-ready and secure setup requires at least:
+
+- 3 Consul nodes _(primary and secondary sites)_
+- 2 Patroni nodes _(primary and secondary sites)_
+- 1 PgBouncer node _(primary and secondary sites)_
+- 1 internal load-balancer _(primary site only)_
+
+The internal load balancer provides a single endpoint for connecting to the Patroni cluster's leader whenever a new leader is
+elected, and it is required for enabling cascading replication from the secondary sites.
-A production-ready and secure setup requires at least three Consul nodes, three
-Patroni nodes, one internal load-balancing node on the primary site, and a similar
-configuration for the secondary site. The internal load balancer provides a single
-endpoint for connecting to the Patroni cluster's leader whenever a new leader is
-elected. Be sure to use [password credentials](../../postgresql/replication_and_failover.md#database-authorization-for-patroni) and other database best practices.
+Be sure to use [password credentials](../../postgresql/replication_and_failover.md#database-authorization-for-patroni)
+and other database best practices.
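For reference, the internal load balancer usually identifies the current Patroni leader by probing Patroni's REST API. The check below is only a sketch (it assumes Patroni's default REST API port `8008`; `PATRONI_NODE_IP` is a placeholder):

```shell
# Prints 200 only on the node currently holding the leader lock; replicas return
# a non-200 status, which is what a load balancer health check keys on.
curl -s -o /dev/null -w '%{http_code}\n' http://PATRONI_NODE_IP:8008/leader
```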
##### Step 1. Configure Patroni permanent replication slot on the primary site
@@ -542,12 +550,12 @@ Leader instance**:
```ruby
roles(['patroni_role'])
-
+
consul['services'] = %w(postgresql)
consul['configuration'] = {
retry_join: %w[CONSUL_PRIMARY1_IP CONSUL_PRIMARY2_IP CONSUL_PRIMARY3_IP]
}
-
+
# You need one entry for each secondary, with a unique name following PostgreSQL slot_name constraints:
#
# Configuration syntax is: 'unique_slotname' => { 'type' => 'physical' },
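#
# For example (hypothetical, but matching the 'geo_secondary' slot name referenced later in these instructions):
# patroni['replication_slots'] = {
#   'geo_secondary' => { 'type' => 'physical' },
# }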
@@ -559,6 +567,8 @@ Leader instance**:
patroni['use_pg_rewind'] = true
patroni['postgresql']['max_wal_senders'] = 8 # Use twice the number of Patroni/reserved slots (3 Patroni nodes + 1 reserved slot for a Geo secondary).
patroni['postgresql']['max_replication_slots'] = 8 # Use twice the number of Patroni/reserved slots (3 Patroni nodes + 1 reserved slot for a Geo secondary).
+ patroni['username'] = 'PATRONI_API_USERNAME'
+ patroni['password'] = 'PATRONI_API_PASSWORD'
patroni['replication_password'] = 'PLAIN_TEXT_POSTGRESQL_REPLICATION_PASSWORD'
# We list all secondary instances as they can all become a Standby Leader
@@ -719,27 +729,41 @@ For each Patroni instance on the secondary site:
patroni['standby_cluster']['host'] = 'INTERNAL_LOAD_BALANCER_PRIMARY_IP'
patroni['standby_cluster']['port'] = INTERNAL_LOAD_BALANCER_PRIMARY_PORT
patroni['standby_cluster']['primary_slot_name'] = 'geo_secondary' # Or the unique replication slot name you set up before
+ patroni['username'] = 'PATRONI_API_USERNAME'
+ patroni['password'] = 'PATRONI_API_PASSWORD'
patroni['replication_password'] = 'PLAIN_TEXT_POSTGRESQL_REPLICATION_PASSWORD'
patroni['use_pg_rewind'] = true
patroni['postgresql']['max_wal_senders'] = 5 # A minimum of three for one replica, plus two for each additional replica
patroni['postgresql']['max_replication_slots'] = 5 # A minimum of three for one replica, plus two for each additional replica
-
+
postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
postgresql['sql_replication_password'] = 'POSTGRESQL_REPLICATION_PASSWORD_HASH'
postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
postgresql['listen_address'] = '0.0.0.0' # You can use a public or VPC address here instead
-
+
gitlab_rails['dbpassword'] = 'POSTGRESQL_PASSWORD'
gitlab_rails['enable'] = true
gitlab_rails['auto_migrate'] = false
```
1. Reconfigure GitLab for the changes to take effect.
- This is required to bootstrap PostgreSQL users and settings:
+ This is required to bootstrap PostgreSQL users and settings.
- ```shell
- gitlab-ctl reconfigure
- ```
+ - If this is a fresh installation of Patroni:
+
+ ```shell
+ gitlab-ctl reconfigure
+ ```
+
+ - If you are configuring a Patroni standby cluster on a site that previously had a working Patroni cluster:
+
+ ```shell
+ gitlab-ctl stop patroni
+ rm -rf /var/opt/gitlab/postgresql/data
+ /opt/gitlab/embedded/bin/patronictl -c /var/opt/gitlab/patroni/patroni.yaml remove postgresql-ha
+ gitlab-ctl reconfigure
+ gitlab-ctl start patroni
+ ```
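To confirm that the standby cluster initialized and is replicating, you can list the cluster members from any Patroni node on the secondary site. This is a sketch of a sanity check (output columns vary by Patroni version); you should see a `Standby Leader` plus running replicas:

```shell
/opt/gitlab/embedded/bin/patronictl -c /var/opt/gitlab/patroni/patroni.yaml list
```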
### Migrating from repmgr to Patroni
@@ -769,17 +793,17 @@ by following the same instructions above.
Secondary sites use a separate PostgreSQL installation as a tracking database to
keep track of replication status and automatically recover from potential replication issues.
Omnibus automatically configures a tracking database when `roles(['geo_secondary_role'])` is set.
-If you want to run this database in a highly available configuration, follow the instructions below.
-A production-ready and secure setup requires at least three Consul nodes, three
-Patroni nodes on the secondary site secondary site. Be sure to use [password credentials](../../postgresql/replication_and_failover.md#database-authorization-for-patroni) and other database best practices.
+If you want to run this database in a highly available configuration, don't use the `geo_secondary_role` above.
+Instead, follow the instructions below.
-#### Step 1. Configure a PgBouncer node on the secondary site
+A production-ready and secure setup requires at least three Consul nodes, two
+Patroni nodes, and one PgBouncer node on the secondary site.
-A production-ready and highly available configuration requires at least
-three Consul nodes, three PgBouncer nodes, and one internal load-balancing node.
-The internal load balancer provides a single endpoint for connecting to the
-PgBouncer cluster. For more information, see [High Availability with Omnibus GitLab](../../postgresql/replication_and_failover.md).
+Be sure to use [password credentials](../../postgresql/replication_and_failover.md#database-authorization-for-patroni)
+and other database best practices.
+
+#### Step 1. Configure a PgBouncer node on the secondary site
Follow the minimal configuration for the PgBouncer node for the tracking database:
@@ -880,6 +904,8 @@ For each Patroni instance on the secondary site for the tracking database:
]
# Patroni configuration
+ patroni['username'] = 'PATRONI_API_USERNAME'
+ patroni['password'] = 'PATRONI_API_PASSWORD'
patroni['replication_password'] = 'PLAIN_TEXT_POSTGRESQL_REPLICATION_PASSWORD'
patroni['postgresql']['max_wal_senders'] = 5 # A minimum of three for one replica, plus two for each additional replica