Diffstat (limited to 'doc/administration/postgresql'):
 -rw-r--r--  doc/administration/postgresql/external.md                 |   2
 -rw-r--r--  doc/administration/postgresql/index.md                    |   2
 -rw-r--r--  doc/administration/postgresql/pgbouncer.md                | 168
 -rw-r--r--  doc/administration/postgresql/replication_and_failover.md |  51
 4 files changed, 201 insertions, 22 deletions
diff --git a/doc/administration/postgresql/external.md b/doc/administration/postgresql/external.md
index 6e2bbc0aae1..e2cfb95ec48 100644
--- a/doc/administration/postgresql/external.md
+++ b/doc/administration/postgresql/external.md
@@ -32,7 +32,7 @@ If you use a cloud-managed service, or provide your own PostgreSQL instance:
gitlab_rails['db_password'] = 'DB password'
```
- For more information on GitLab HA setups, refer to [configuring GitLab for HA](../high_availability/gitlab.md).
+ For more information on GitLab multi-node setups, refer to the [reference architectures](../reference_architectures/index.md).
1. Reconfigure for the changes to take effect:
diff --git a/doc/administration/postgresql/index.md b/doc/administration/postgresql/index.md
index 7e0a2f3cae1..2720d8e696b 100644
--- a/doc/administration/postgresql/index.md
+++ b/doc/administration/postgresql/index.md
@@ -13,7 +13,7 @@ There are essentially three setups to choose from.
This setup is for when you have installed GitLab using the
[Omnibus GitLab **Enterprise Edition** (EE) package](https://about.gitlab.com/install/?version=ee).
-All the tools that are needed like PostgreSQL, PgBouncer, Repmgr are bundled in
+All the tools that are needed like PostgreSQL, PgBouncer, Patroni, and repmgr are bundled in
the package, so you can use it to set up the whole PostgreSQL infrastructure (primary, replica).
[> Read how to set up PostgreSQL replication and failover using Omnibus GitLab](replication_and_failover.md)
diff --git a/doc/administration/postgresql/pgbouncer.md b/doc/administration/postgresql/pgbouncer.md
new file mode 100644
index 00000000000..9db3e017359
--- /dev/null
+++ b/doc/administration/postgresql/pgbouncer.md
@@ -0,0 +1,168 @@
+---
+type: reference
+---
+
+# Working with the bundled PgBouncer service **(PREMIUM ONLY)**
+
+[PgBouncer](http://www.pgbouncer.org/) is used to seamlessly migrate database
+connections between servers in a failover scenario. Additionally, it can be used
+in a non-fault-tolerant setup to pool connections, speeding up response time
+while reducing resource usage.
+
+GitLab Premium includes a bundled version of PgBouncer that can be managed
+through `/etc/gitlab/gitlab.rb`.
+
+## PgBouncer as part of a fault-tolerant GitLab installation
+
+This content has been moved to a [new location](replication_and_failover.md#configuring-the-pgbouncer-node).
+
+## PgBouncer as part of a non-fault-tolerant GitLab installation
+
+1. Generate `PGBOUNCER_USER_PASSWORD_HASH` with the command `gitlab-ctl pg-password-md5 pgbouncer`.
+
+1. Generate `SQL_USER_PASSWORD_HASH` with the command `gitlab-ctl pg-password-md5 gitlab`. You will also need to enter the plaintext `SQL_USER_PASSWORD` later.
+
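+ For example, a minimal sketch of generating both hashes (the prompts and output shown as comments are illustrative):
+
+ ```shell
+ sudo gitlab-ctl pg-password-md5 pgbouncer
+ # Enter password: <PGBOUNCER_PASSWORD>
+ # Confirm password: <PGBOUNCER_PASSWORD>
+ # Prints PGBOUNCER_USER_PASSWORD_HASH
+
+ sudo gitlab-ctl pg-password-md5 gitlab
+ # Prints SQL_USER_PASSWORD_HASH
+ ```
+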
+1. On your database node, ensure the following is set in your `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ postgresql['pgbouncer_user_password'] = 'PGBOUNCER_USER_PASSWORD_HASH'
+ postgresql['sql_user_password'] = 'SQL_USER_PASSWORD_HASH'
+ postgresql['listen_address'] = 'XX.XX.XX.Y' # Where XX.XX.XX.Y is the IP address on the node that PostgreSQL should listen on
+ postgresql['md5_auth_cidr_addresses'] = %w(AA.AA.AA.B/32) # Where AA.AA.AA.B is the IP address of the PgBouncer node
+ ```
+
+1. Run `gitlab-ctl reconfigure`
+
+ NOTE: **Note:**
+ If the database was already running, it will need to be restarted after reconfigure by running `gitlab-ctl restart postgresql`.
+
+1. On the node you are running PgBouncer on, make sure the following is set in `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ pgbouncer['enable'] = true
+ pgbouncer['databases'] = {
+ gitlabhq_production: {
+ host: 'DATABASE_HOST',
+ user: 'pgbouncer',
+ password: 'PGBOUNCER_USER_PASSWORD_HASH'
+ }
+ }
+ ```
+
+1. Run `gitlab-ctl reconfigure`
+
+1. On the node running Puma, make sure the following is set in `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ gitlab_rails['db_host'] = 'PGBOUNCER_HOST'
+ gitlab_rails['db_port'] = '6432'
+ gitlab_rails['db_password'] = 'SQL_USER_PASSWORD'
+ ```
+
+1. Run `gitlab-ctl reconfigure`
+
+1. At this point, your instance should connect to the database through PgBouncer. If you are having issues, see the [Troubleshooting](#troubleshooting) section.
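+
+ One way to verify that connections go through PgBouncer is to open a Rails database console, which connects with the `db_host` and `db_port` values configured above (it may prompt for the database password):
+
+ ```shell
+ sudo gitlab-rails dbconsole
+ ```
+
+ A new client connection should then appear in the output of `show clients;` in the [administrative console](#administrative-console).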
+
+## Enable Monitoring
+
+> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
+
+If you enable Monitoring, it must be enabled on **all** PgBouncer servers.
+
+1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
+
+ ```ruby
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Replace placeholders
+ # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
+ # with the addresses of the Consul server nodes
+ consul['configuration'] = {
+ retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
+ }
+
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ pgbouncer_exporter['listen_address'] = '0.0.0.0:9188'
+ ```
+
+1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
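+
+To confirm the exporters are responding after the reconfigure, you can query their metrics endpoints directly (a quick check, assuming the listen addresses configured above):
+
+```shell
+curl "http://localhost:9188/metrics" | head
+curl "http://localhost:9100/metrics" | head
+```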
+
+## Administrative console
+
+As part of Omnibus GitLab, a command is provided to automatically connect to the
+PgBouncer administrative console. See the
+[PgBouncer documentation](https://www.pgbouncer.org/usage.html#admin-console)
+for detailed instructions on how to interact with the console.
+
+To start a session, run the following and provide the password for the `pgbouncer`
+user:
+
+```shell
+sudo gitlab-ctl pgb-console
+```
+
+To get some basic information about the instance:
+
+```shell
+pgbouncer=# show databases; show clients; show servers;
+ name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
+---------------------+-----------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
+ gitlabhq_production | 127.0.0.1 | 5432 | gitlabhq_production | | 100 | 5 | | 0 | 1
+ pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
+(2 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link
+| remote_pid | tls
+------+-----------+---------------------+--------+-----------+-------+------------+------------+---------------------+---------------------+-----------+------
++------------+-----
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44590 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x12444c0 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44592 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x12447c0 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44594 | 127.0.0.1 | 6432 | 2018-04-24 22:13:10 | 2018-04-24 22:17:10 | 0x1244940 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44706 | 127.0.0.1 | 6432 | 2018-04-24 22:14:22 | 2018-04-24 22:16:31 | 0x1244ac0 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44708 | 127.0.0.1 | 6432 | 2018-04-24 22:14:22 | 2018-04-24 22:15:15 | 0x1244c40 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44794 | 127.0.0.1 | 6432 | 2018-04-24 22:15:15 | 2018-04-24 22:15:15 | 0x1244dc0 |
+| 0 |
+ C | gitlab | gitlabhq_production | active | 127.0.0.1 | 44798 | 127.0.0.1 | 6432 | 2018-04-24 22:15:15 | 2018-04-24 22:16:31 | 0x1244f40 |
+| 0 |
+ C | pgbouncer | pgbouncer | active | 127.0.0.1 | 44660 | 127.0.0.1 | 6432 | 2018-04-24 22:13:51 | 2018-04-24 22:17:12 | 0x1244640 |
+| 0 |
+(8 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | rem
+ote_pid | tls
+------+--------+---------------------+-------+-----------+------+------------+------------+---------------------+---------------------+-----------+------+----
+--------+-----
+ S | gitlab | gitlabhq_production | idle | 127.0.0.1 | 5432 | 127.0.0.1 | 35646 | 2018-04-24 22:15:15 | 2018-04-24 22:17:10 | 0x124dca0 | |
+ 19980 |
+(1 row)
+```
+
+## Troubleshooting
+
+If you are experiencing issues connecting through PgBouncer, the first
+place to check is always the logs:
+
+```shell
+sudo gitlab-ctl tail pgbouncer
+```
+
+You can also check the output from `show databases` in the
+[administrative console](#administrative-console). In the output, you would expect
+to see values in the `host` field for the `gitlabhq_production` database,
+and `current_connections` should be greater than 1.
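+
+For example, a quick way to run that check from the PgBouncer node (a sketch; the values shown depend on your configuration):
+
+```shell
+sudo gitlab-ctl pgb-console
+pgbouncer=# show databases;
+```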
+
+### Message: `LOG: invalid CIDR mask in address`
+
+See the suggested fix [in Geo documentation](../geo/replication/troubleshooting.md#message-log--invalid-cidr-mask-in-address).
+
+### Message: `LOG: invalid IP mask "md5": Name or service not known`
+
+See the suggested fix [in Geo documentation](../geo/replication/troubleshooting.md#message-log--invalid-ip-mask-md5-name-or-service-not-known).
diff --git a/doc/administration/postgresql/replication_and_failover.md b/doc/administration/postgresql/replication_and_failover.md
index 5f550f09e5b..bc2af167e6c 100644
--- a/doc/administration/postgresql/replication_and_failover.md
+++ b/doc/administration/postgresql/replication_and_failover.md
@@ -29,6 +29,11 @@ You also need to take into consideration the underlying network topology, making
sure you have redundant connectivity between all Database and GitLab instances
to avoid the network becoming a single point of failure.
+NOTE: **Note:**
+As of GitLab 13.3, PostgreSQL 12 is shipped with Omnibus GitLab. Clustering for PostgreSQL 12 is only supported with
+Patroni. See the [Patroni](#patroni) section for further details. The support for repmgr will not be extended beyond
+PostgreSQL 11.
+
### Database node
Each database node runs three services:
@@ -97,7 +102,7 @@ This is why you will need:
When using the default setup, the minimum configuration requires:
-- `CONSUL_USERNAME`. Defaults to `gitlab-consul`
+- `CONSUL_USERNAME`. The default user for Omnibus GitLab is `gitlab-consul`
- `CONSUL_DATABASE_PASSWORD`. Password for the database user.
- `CONSUL_PASSWORD_HASH`. This is a hash generated out of Consul username/password pair.
Can be generated with:
@@ -140,7 +145,7 @@ server nodes.
We will need the following password information for the application's database user:
-- `POSTGRESQL_USERNAME`. Defaults to `gitlab`
+- `POSTGRESQL_USERNAME`. The default user for Omnibus GitLab is `gitlab`
- `POSTGRESQL_USER_PASSWORD`. The password for the database user
- `POSTGRESQL_PASSWORD_HASH`. This is a hash generated out of the username/password pair.
Can be generated with:
@@ -153,7 +158,7 @@ We will need the following password information for the application's database u
When using the default setup, the minimum configuration requires:
-- `PGBOUNCER_USERNAME`. Defaults to `pgbouncer`
+- `PGBOUNCER_USERNAME`. The default user for Omnibus GitLab is `pgbouncer`
- `PGBOUNCER_PASSWORD`. This is a password for PgBouncer service.
- `PGBOUNCER_PASSWORD_HASH`. This is a hash generated out of PgBouncer username/password pair.
Can be generated with:
@@ -198,7 +203,7 @@ When installing the GitLab package, do not supply `EXTERNAL_URL` value.
### Configuring the Database nodes
-1. Make sure to [configure the Consul nodes](../high_availability/consul.md).
+1. Make sure to [configure the Consul nodes](../consul.md).
1. Make sure you collect [`CONSUL_SERVER_NODES`](#consul-information), [`PGBOUNCER_PASSWORD_HASH`](#pgbouncer-information), [`POSTGRESQL_PASSWORD_HASH`](#postgresql-information), the [number of db nodes](#postgresql-information), and the [network address](#network-information) before executing the next step.
1. On the master database node, edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
@@ -305,6 +310,12 @@ Select one node as a primary node.
CREATE EXTENSION pg_trgm;
```
+1. Enable the `btree_gist` extension:
+
+ ```shell
+ CREATE EXTENSION btree_gist;
+ ```
+
1. Exit the database prompt by typing `\q` and Enter.
1. Verify the cluster is initialized with one node:
@@ -455,7 +466,7 @@ Check the [Troubleshooting section](#troubleshooting) before proceeding.
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
-1. [Enable monitoring](../high_availability/pgbouncer.md#enable-monitoring)
+1. [Enable monitoring](../postgresql/pgbouncer.md#enable-monitoring)
#### PgBouncer Checkpoint
@@ -736,9 +747,9 @@ consul['configuration'] = {
After deploying the configuration, follow these steps:
-1. On `10.6.0.31`, our primary database
+1. On `10.6.0.31`, our primary database:
- Enable the `pg_trgm` extension
+ Enable the `pg_trgm` and `btree_gist` extensions:
```shell
gitlab-psql -d gitlabhq_production
@@ -746,33 +757,34 @@ After deploying the configuration follow these steps:
```shell
CREATE EXTENSION pg_trgm;
+ CREATE EXTENSION btree_gist;
```
-1. On `10.6.0.32`, our first standby database
+1. On `10.6.0.32`, our first standby database:
- Make this node a standby of the primary
+ Make this node a standby of the primary:
```shell
gitlab-ctl repmgr standby setup 10.6.0.21
```
-1. On `10.6.0.33`, our second standby database
+1. On `10.6.0.33`, our second standby database:
- Make this node a standby of the primary
+ Make this node a standby of the primary:
```shell
gitlab-ctl repmgr standby setup 10.6.0.21
```
-1. On `10.6.0.41`, our application server
+1. On `10.6.0.41`, our application server:
- Set `gitlab-consul` user's PgBouncer password to `toomanysecrets`
+ Set `gitlab-consul` user's PgBouncer password to `toomanysecrets`:
```shell
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
- Run database migrations
+ Run database migrations:
```shell
gitlab-rake gitlab:db:configure
@@ -783,7 +795,7 @@ After deploying the configuration follow these steps:
This example uses 3 PostgreSQL servers and 1 application node (with PgBouncer set up alongside).
It differs from the [recommended setup](#example-recommended-setup) by moving the Consul servers onto the same servers we use for PostgreSQL.
-The trade-off is between reducing server counts, against the increased operational complexity of needing to deal with PostgreSQL [failover](#failover-procedure) and [restore](#restore-procedure) procedures in addition to [Consul outage recovery](../high_availability/consul.md#outage-recovery) on the same set of machines.
+The trade-off is a reduced server count, at the cost of the increased operational complexity of dealing with PostgreSQL [failover](#failover-procedure) and [restore](#restore-procedure) procedures, in addition to [Consul outage recovery](../consul.md#outage-recovery), on the same set of machines.
In this example we start with all servers on the same 10.6.0.0/16 private network range; they can connect to each other freely on those addresses.
@@ -1075,7 +1087,7 @@ To restart either service, run `gitlab-ctl restart SERVICE`
For PostgreSQL, it is usually safe to restart the master node by default. Automatic failover defaults to a 1-minute timeout. Provided the database returns before then, nothing else needs to be done. To be safe, you can stop `repmgrd` on the standby nodes first with `gitlab-ctl stop repmgrd`, then start it again afterwards with `gitlab-ctl start repmgrd`.
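+
+A sketch of that sequence, using the commands mentioned above:
+
+```shell
+# On each standby node, stop repmgrd first
+gitlab-ctl stop repmgrd
+
+# On the master node, restart PostgreSQL
+gitlab-ctl restart postgresql
+
+# On each standby node, start repmgrd again once the master is back
+gitlab-ctl start repmgrd
+```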
-On the Consul server nodes, it is important to restart the Consul service in a controlled fashion. Read our [Consul documentation](../high_availability/consul.md#restarting-the-server-cluster) for instructions on how to restart the service.
+On the Consul server nodes, it is important to [restart the Consul service](../consul.md#restart-consul) in a controlled manner.
### `gitlab-ctl repmgr-check-master` command produces errors
@@ -1122,11 +1134,10 @@ postgresql['trust_auth_cidr_addresses'] = %w(123.123.123.123/32 <other_cidrs>)
### Issues with other components
-If you're running into an issue with a component not outlined here, be sure to check the troubleshooting section of their specific documentation page.
+If you're running into an issue with a component not outlined here, be sure to check the troubleshooting section of their specific documentation page:
-- [Consul](../high_availability/consul.md#troubleshooting)
+- [Consul](../consul.md#troubleshooting-consul)
- [PostgreSQL](https://docs.gitlab.com/omnibus/settings/database.html#troubleshooting)
-- [GitLab application](../high_availability/gitlab.md#troubleshooting)
## Patroni
@@ -1324,7 +1335,7 @@ You can switch an existing database cluster to use Patroni instead of repmgr with
NOTE: **Note:**
Ensure that there is no `walsender` process running on the primary node.
- `ps aux | grep walsender` must not show any running process.
+ `ps aux | grep walsender` must not show any running process.
1. On the primary node, [configure Patroni](#configuring-patroni-cluster). Remove `repmgr` and any other
repmgr-specific configuration. Also remove any configuration that is related to PostgreSQL replication.