author    GitLab Bot <gitlab-bot@gitlab.com>  2019-10-22 11:31:16 +0000
committer GitLab Bot <gitlab-bot@gitlab.com>  2019-10-22 11:31:16 +0000
commit    905c1110b08f93a19661cf42a276c7ea90d0a0ff (patch)
tree      756d138db422392c00471ab06acdff92c5a9b69c /doc/administration/high_availability
parent    50d93f8d1686950fc58dda4823c4835fd0d8c14b (diff)
download  gitlab-ce-905c1110b08f93a19661cf42a276c7ea90d0a0ff.tar.gz
Add latest changes from gitlab-org/gitlab@12-4-stable-ee
Diffstat (limited to 'doc/administration/high_availability')
-rw-r--r--  doc/administration/high_availability/README.md                | 162
-rw-r--r--  doc/administration/high_availability/consul.md                |  12
-rw-r--r--  doc/administration/high_availability/database.md              |  70
-rw-r--r--  doc/administration/high_availability/gitlab.md                |  20
-rw-r--r--  doc/administration/high_availability/load_balancer.md         |  14
-rw-r--r--  doc/administration/high_availability/monitoring_node.md       |   4
-rw-r--r--  doc/administration/high_availability/nfs.md                   |   9
-rw-r--r--  doc/administration/high_availability/nfs_host_client_setup.md |   6
-rw-r--r--  doc/administration/high_availability/pgbouncer.md             |  34
-rw-r--r--  doc/administration/high_availability/redis.md                 |  18
-rw-r--r--  doc/administration/high_availability/redis_source.md          |   4
11 files changed, 220 insertions(+), 133 deletions(-)
diff --git a/doc/administration/high_availability/README.md b/doc/administration/high_availability/README.md
index 0aaf956f169..199944a160c 100644
--- a/doc/administration/high_availability/README.md
+++ b/doc/administration/high_availability/README.md
@@ -25,24 +25,24 @@ solution should balance the costs against the benefits.
There are many options when choosing a highly-available GitLab architecture. We
recommend engaging with GitLab Support to choose the best architecture for your
-use-case. This page contains some various options and guidelines based on
+use case. This page contains various options and guidelines based on
experience with GitLab.com and Enterprise Edition on-premises customers.
-For a detailed insight into how GitLab scales and configures GitLab.com, you can
+For detailed insight into how GitLab scales and configures GitLab.com, you can
watch [this 1 hour Q&A](https://www.youtube.com/watch?v=uCU8jdYzpac)
with [John Northrup](https://gitlab.com/northrup), including live questions from some of our customers.
## GitLab Components
The following components need to be considered for a scaled or highly-available
-environment. In many cases components can be combined on the same nodes to reduce
+environment. In many cases, components can be combined on the same nodes to reduce
complexity; a minimal example of this follows the component list below.
- Unicorn/Workhorse - Web-requests (UI, API, Git over HTTP)
- Sidekiq - Asynchronous/Background jobs
- PostgreSQL - Database
- Consul - Database service discovery and health checks/failover
- - PGBouncer - Database pool manager
+ - PgBouncer - Database pool manager
- Redis - Key/Value store (User sessions, cache, queue for Sidekiq)
- Sentinel - Redis health check/failover manager
- Gitaly - Provides high-level RPC access to Git repositories
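The Omnibus package lets you combine or separate these components per node by toggling each service in `/etc/gitlab/gitlab.rb`. A minimal sketch, using the same service keys that appear throughout this guide (the exact split shown is illustrative, not a recommendation):
```ruby
# Hypothetical combined application node: run Unicorn/Workhorse and
# Sidekiq here, while the data stores live on dedicated nodes.
unicorn['enable'] = true
sidekiq['enable'] = true

# Offloaded to other nodes in this sketch:
postgresql['enable'] = false
redis['enable'] = false
gitaly['enable'] = false
```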
@@ -57,12 +57,12 @@ infrastructure and maintenance costs of full high availability.
### Basic Scaling
This is the simplest form of scaling and will work for the majority of
-cases. Backend components such as PostgreSQL, Redis and storage are offloaded
+cases. Backend components such as PostgreSQL, Redis, and storage are offloaded
to their own nodes while the remaining GitLab components all run on 2 or more
application nodes.
This form of scaling also works well in a cloud environment when it is more
-cost-effective to deploy several small nodes rather than a single
+cost effective to deploy several small nodes rather than a single
larger one.
- 1 PostgreSQL node
@@ -85,11 +85,11 @@ you can continue with the next step.
### Full Scaling
-For very large installations it may be necessary to further split components
-for maximum scalability. In a fully-scaled architecture the application node
+For very large installations, it might be necessary to further split components
+for maximum scalability. In a fully-scaled architecture, the application node
is split into separate Sidekiq and Unicorn/Workhorse nodes. One indication that
this architecture is required is if Sidekiq queues begin to periodically increase
-in size, indicating that there is contention or not enough resources.
+in size, indicating that there is contention or there are not enough resources.
- 1 PostgreSQL node
- 1 Redis node
@@ -100,7 +100,7 @@ in size, indicating that there is contention or not enough resources.
## High Availability Architecture Examples
-When organizations require scaling *and* high availability the following
+When organizations require scaling *and* high availability, the following
architectures can be utilized. As the introduction section at the top of this
page mentions, there is a tradeoff between cost/complexity and uptime. Be sure
this complexity is absolutely required before taking the step into full
@@ -108,11 +108,11 @@ high availability.
For all examples below, we recommend running Consul and Redis Sentinel on
dedicated nodes. If Consul is running on PostgreSQL nodes or Sentinel on
-Redis nodes there is a potential that high resource usage by PostgreSQL or
+Redis nodes, there is a potential that high resource usage by PostgreSQL or
Redis could prevent communication between the other Consul and Sentinel nodes.
-This may lead to the other nodes believing a failure has occurred and automated
-failover is necessary. Isolating them from the services they monitor reduces
-the chances of split-brain.
+This may lead to the other nodes believing a failure has occurred and initiating
+automated failover. Isolating Redis and Consul from the services they monitor
+reduces the chances of a false positive that a failure has occurred.
The examples below do not really address high availability of NFS. Some enterprises
have access to NFS appliances that manage availability. This is the best case
@@ -131,14 +131,14 @@ trade-offs and limits.
This architecture will work well for many GitLab customers. Larger customers
may begin to notice that certain events cause contention/high load - for example,
cloning many large repositories with binary files, high API usage, a large
-number of enqueued Sidekiq jobs, etc. If this happens you should consider
+number of enqueued Sidekiq jobs, and so on. If this happens, you should consider
moving to a hybrid or fully distributed architecture depending on what is causing
the contention.
- 3 PostgreSQL nodes
- 2 Redis nodes
- 3 Consul/Sentinel nodes
-- 2 or more GitLab application nodes (Unicorn, Workhorse, Sidekiq, PGBouncer)
+- 2 or more GitLab application nodes (Unicorn, Workhorse, Sidekiq, PgBouncer)
- 1 NFS/Gitaly server
- 1 Monitoring node (Prometheus, Grafana)
@@ -162,32 +162,11 @@ contention due to certain workloads.
![Hybrid architecture diagram](img/hybrid.png)
-#### Reference Architecture
-
-- **Supported Users (approximate):** 10,000
-- **Known Issues:** While validating the reference architecture, slow endpoints were discovered and are being investigated. [gitlab-org/gitlab-ce/issues/64335](https://gitlab.com/gitlab-org/gitlab-foss/issues/64335)
-
-The Support and Quality teams built, performance tested, and validated an
-environment that supports about 10,000 users. The specifications below are a
-representation of the work so far. The specifications may be adjusted in the
-future based on additional testing and iteration.
-
-NOTE: **Note:** The specifications here were performance tested against a specific coded workload. Your exact needs may be more, depending on your workload. Your workload is influenced by factors such as - but not limited to - how active your users are, how much automation you use, mirroring, and repo/change size.
-
-- 3 PostgreSQL - 4 CPU, 16GiB memory per node
-- 1 PgBouncer - 2 CPU, 4GiB memory
-- 2 Redis - 2 CPU, 8GiB memory per node
-- 3 Consul/Sentinel - 2 CPU, 2GiB memory per node
-- 4 Sidekiq - 4 CPU, 16GiB memory per node
-- 5 GitLab application nodes - 16 CPU, 64GiB memory per node
-- 1 Gitaly - 16 CPU, 64GiB memory
-- 1 Monitoring node - 2 CPU, 8GiB memory, 100GiB local storage
-
### Fully Distributed
This architecture scales to hundreds of thousands of users and projects and is
the basis of the GitLab.com architecture. While this scales well, it also comes
-with the added complexity of many more nodes to configure, manage and monitor.
+with the added complexity of many more nodes to configure, manage, and monitor.
- 3 PostgreSQL nodes
- 4 or more Redis nodes (2 separate clusters for persistent and cache data)
@@ -214,3 +193,110 @@ separately:
1. [Configure the GitLab application servers](gitlab.md)
1. [Configure the load balancers](load_balancer.md)
1. [Monitoring node (Prometheus and Grafana)](monitoring_node.md)
+
+## Reference Architecture Examples
+
+These reference architecture examples rely on the general rule that approximately 2 requests per second (RPS) of load is generated for every 100 users.
+
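+As a quick back-of-the-envelope check of that rule in Ruby:
+
+```ruby
+# Roughly 2 RPS of load per 100 users, per the rule of thumb above.
+users = 10_000
+rps = users / 100 * 2  # => 200, matching the 10,000 user configuration below
+```
+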
+The specifications here were performance tested against a specific coded
+workload. Your exact needs may be more, depending on your workload. Your
+workload is influenced by factors such as - but not limited to - how active your
+users are, how much automation you use, mirroring, and repo/change size.
+
+### 10,000 User Configuration
+
+- **Supported Users (approximate):** 10,000
+- **RPS:** 200 requests per second
+- **Known Issues:** While validating the reference architecture, slow API endpoints
+ were discovered. For details, see the related issues list in
+ [this issue](https://gitlab.com/gitlab-org/gitlab-foss/issues/64335).
+
+The Support and Quality teams built, performance tested, and validated an
+environment that supports about 10,000 users. The specifications below are a
+representation of the work so far. The specifications may be adjusted in the
+future based on additional testing and iteration.
+
+| Service | Configuration | GCP type |
+| ------------------------------|-------------------------|----------------|
+| 3 GitLab Rails <br> - Puma workers on each node set to 90% of available CPUs with 16 threads | 32 vCPU, 28.8GB Memory | n1-highcpu-32 |
+| 3 PostgreSQL | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 1 PgBouncer | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| X Gitaly[^1] <br> - Gitaly Ruby workers on each node set to 90% of available CPUs with 16 threads | 16 vCPU, 60GB Memory | n1-standard-16 |
+| 3 Redis Cache + Sentinel <br> - Cache maxmemory set to 90% of available memory | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Redis Persistent + Sentinel | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 4 Sidekiq | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Consul | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| 1 NFS Server | 16 vCPU, 14.4GB Memory | n1-highcpu-16 |
+| 1 Monitoring node | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
+| 1 Load Balancing node[^2] | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
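+
+For the Puma note in the table above, a minimal `/etc/gitlab/gitlab.rb` sketch for one 32 vCPU Rails node could look like this (the `puma[...]` key names are assumed Omnibus settings; the numbers are illustrative):
+
+```ruby
+# ~90% of 32 vCPUs as Puma workers, each running 16 threads.
+puma['worker_processes'] = 28
+puma['min_threads'] = 16
+puma['max_threads'] = 16
+```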
+
+### 25,000 User Configuration
+
+- **Supported Users (approximate):** 25,000
+- **RPS:** 500 requests per second
+- **Known Issues:** The slow API endpoints that were discovered while testing
+ the 10,000 user architecture also affect the 25,000 user architecture. For
+ details, see the related issues list in
+ [this issue](https://gitlab.com/gitlab-org/gitlab-foss/issues/64335).
+
+The GitLab Support and Quality teams built, performance tested, and validated an
+environment that supports around 25,000 users. The specifications below are a
+representation of the work so far. The specifications may be adjusted in the
+future based on additional testing and iteration.
+
+| Service | Configuration | GCP type |
+| ------------------------------|-------------------------|----------------|
+| 7 GitLab Rails <br> - Puma workers on each node set to 90% of available CPUs with 16 threads | 32 vCPU, 28.8GB Memory | n1-highcpu-32 |
+| 3 PostgreSQL | 8 vCPU, 30GB Memory | n1-standard-8 |
+| 1 PgBouncer | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| X Gitaly[^1] <br> - Gitaly Ruby workers on each node set to 90% of available CPUs with 16 threads | 32 vCPU, 120GB Memory | n1-standard-32 |
+| 3 Redis Cache + Sentinel <br> - Cache maxmemory set to 90% of available memory | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Redis Persistent + Sentinel | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 4 Sidekiq | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Consul | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| 1 NFS Server | 16 vCPU, 14.4GB Memory | n1-highcpu-16 |
+| 1 Monitoring node | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
+| 1 Load Balancing node[^2] | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
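+
+For the Redis cache note in the table above, a hypothetical sketch of a 15GB cache node's `/etc/gitlab/gitlab.rb` (standard Omnibus Redis keys; the exact value is illustrative):
+
+```ruby
+# maxmemory at ~90% of the node's 15GB, evicting least-recently-used keys.
+redis['maxmemory'] = '13500mb'
+redis['maxmemory_policy'] = 'allkeys-lru'
+```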
+
+### 50,000 User Configuration
+
+- **Supported Users (approximate):** 50,000
+- **RPS:** 1,000 requests per second
+- **Status:** Work-in-progress
+- **Related Issue:** See the [related issue](https://gitlab.com/gitlab-org/quality/performance/issues/66) for more information.
+
+The Support and Quality teams are in the process of building and performance
+testing an environment that will support around 50,000 users. The specifications
+below are a very rough work-in-progress representation of the work so far. The
+Quality team will be certifying this environment in late 2019. The
+specifications may be adjusted prior to certification based on performance
+testing.
+
+| Service | Configuration | GCP type |
+| ------------------------------|-------------------------|----------------|
+| 15 GitLab Rails <br> - Puma workers on each node set to 90% of available CPUs with 16 threads | 32 vCPU, 28.8GB Memory | n1-highcpu-32 |
+| 3 PostgreSQL | 8 vCPU, 30GB Memory | n1-standard-8 |
+| 1 PgBouncer | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| X Gitaly[^1] <br> - Gitaly Ruby workers on each node set to 90% of available CPUs with 16 threads | 64 vCPU, 240GB Memory | n1-standard-64 |
+| 3 Redis Cache + Sentinel <br> - Cache maxmemory set to 90% of available memory | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Redis Persistent + Sentinel | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 4 Sidekiq | 4 vCPU, 15GB Memory | n1-standard-4 |
+| 3 Consul | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
+| 1 NFS Server | 16 vCPU, 14.4GB Memory | n1-highcpu-16 |
+| 1 Monitoring node | 4 vCPU, 3.6GB Memory | n1-highcpu-4 |
+| 1 Load Balancing node[^2] | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
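+
+The Gitaly Ruby worker note in these tables translates to something like the following sketch, assuming the `gitaly['ruby_num_workers']` Omnibus key (the count is illustrative):
+
+```ruby
+# Hypothetical Gitaly node: worker count scaled to ~90% of a 16 vCPU node.
+gitaly['ruby_num_workers'] = 14
+```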
+
+[^1]: Gitaly node requirements are dependent on customer data. We recommend 2
+ nodes as an absolute minimum for performance at the 10,000 and 25,000 user
+ scale and 4 nodes as an absolute minimum at the 50,000 user scale, but
+ additional nodes should be considered in conjunction with a review of
+ project counts and sizes.
+
+[^2]: HAProxy is the only tested and recommended load balancer. Additional
+ options may be supported in the future.
diff --git a/doc/administration/high_availability/consul.md b/doc/administration/high_availability/consul.md
index b02a61b9256..b01419200cc 100644
--- a/doc/administration/high_availability/consul.md
+++ b/doc/administration/high_availability/consul.md
@@ -6,7 +6,7 @@ type: reference
As part of its High Availability stack, GitLab Premium includes a bundled version of [Consul](https://www.consul.io/) that can be managed through `/etc/gitlab/gitlab.rb`.
-A Consul cluster consists of multiple server agents, as well as client agents that run on other nodes which need to talk to the consul cluster.
+A Consul cluster consists of multiple server agents, as well as client agents that run on other nodes which need to talk to the Consul cluster.
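A minimal sketch of that distinction in `/etc/gitlab/gitlab.rb` (the addresses are hypothetical):
```ruby
# On a Consul server agent:
consul['enable'] = true
consul['configuration'] = {
  server: true,
  retry_join: %w(10.0.0.11 10.0.0.12 10.0.0.13)
}

# A client agent uses the same block without `server: true`.
```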
## Prerequisites
@@ -96,7 +96,7 @@ Ideally all nodes will have a `Status` of `alive`.
**Note**: This section only applies to server agents. It is safe to restart client agents whenever needed.
-If it is necessary to restart the server cluster, it is important to do this in a controlled fashion in order to maintain quorum. If quorum is lost, you will need to follow the consul [outage recovery](#outage-recovery) process to recover the cluster.
+If it is necessary to restart the server cluster, it is important to do this in a controlled fashion in order to maintain quorum. If quorum is lost, you will need to follow the Consul [outage recovery](#outage-recovery) process to recover the cluster.
To be safe, we recommend you only restart one server agent at a time to ensure the cluster remains intact.
@@ -129,7 +129,7 @@ To fix this:
1. Run `gitlab-ctl reconfigure`
-If you still see the errors, you may have to [erase the consul database and reinitialize](#recreate-from-scratch) on the affected node.
+If you still see the errors, you may have to [erase the Consul database and reinitialize](#recreate-from-scratch) on the affected node.
### Consul agents do not start - Multiple private IPs
@@ -158,11 +158,11 @@ To fix this:
### Outage recovery
-If you lost enough server agents in the cluster to break quorum, then the cluster is considered failed, and it will not function without manual intervenetion.
+If you lost enough server agents in the cluster to break quorum, then the cluster is considered failed, and it will not function without manual intervention.
#### Recreate from scratch
-By default, GitLab does not store anything in the consul cluster that cannot be recreated. To erase the consul database and reinitialize
+By default, GitLab does not store anything in the Consul cluster that cannot be recreated. To erase the Consul database and reinitialize
```
# gitlab-ctl stop consul
@@ -174,4 +174,4 @@ After this, the cluster should start back up, and the server agents rejoin. Shor
#### Recover a failed cluster
-If you have taken advantage of consul to store other data, and want to restore the failed cluster, please follow the [Consul guide](https://www.consul.io/docs/guides/outage.html) to recover a failed cluster.
+If you have taken advantage of Consul to store other data, and want to restore the failed cluster, please follow the [Consul guide](https://learn.hashicorp.com/consul/day-2-operations/outage) to recover a failed cluster.
diff --git a/doc/administration/high_availability/database.md b/doc/administration/high_availability/database.md
index 99582dae57a..a50cc0cbd03 100644
--- a/doc/administration/high_availability/database.md
+++ b/doc/administration/high_availability/database.md
@@ -153,9 +153,9 @@ Database nodes run two services with PostgreSQL:
- Instructing remaining servers to follow the new master node.
On failure, the old master node is automatically evicted from the cluster, and should be rejoined manually once recovered.
-- Consul. Monitors the status of each node in the database cluster and tracks its health in a service definition on the consul cluster.
+- Consul. Monitors the status of each node in the database cluster and tracks its health in a service definition on the Consul cluster.
-Alongside pgbouncer, there is a consul agent that watches the status of the PostgreSQL service. If that status changes, consul runs a script which updates the configuration and reloads pgbouncer
+Alongside PgBouncer, there is a Consul agent that watches the status of the PostgreSQL service. If that status changes, Consul runs a script which updates the configuration and reloads PgBouncer.
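In `/etc/gitlab/gitlab.rb` terms, the two halves of that arrangement look roughly like this (`consul['services']` appears in the configuration below; `consul['watchers']` is an assumption here):
```ruby
# On each database node: register PostgreSQL as a Consul service.
consul['services'] = %w(postgresql)

# On the PgBouncer node: watch that service and rewrite/reload PgBouncer
# when its status changes.
consul['watchers'] = %w(postgresql)
```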
##### Connection flow
@@ -198,7 +198,7 @@ When using default setup, minimum configuration requires:
- `CONSUL_USERNAME`. Defaults to `gitlab-consul`
- `CONSUL_DATABASE_PASSWORD`. Password for the database user.
-- `CONSUL_PASSWORD_HASH`. This is a hash generated out of consul username/password pair.
+- `CONSUL_PASSWORD_HASH`. This is a hash generated from the Consul username/password pair.
Can be generated with:
```sh
@@ -248,26 +248,26 @@ We will need the following password information for the application's database u
sudo gitlab-ctl pg-password-md5 POSTGRESQL_USERNAME
```
-##### Pgbouncer information
+##### PgBouncer information
When using the default setup, the minimum configuration requires:
- `PGBOUNCER_USERNAME`. Defaults to `pgbouncer`
-- `PGBOUNCER_PASSWORD`. This is a password for pgbouncer service.
-- `PGBOUNCER_PASSWORD_HASH`. This is a hash generated out of pgbouncer username/password pair.
+- `PGBOUNCER_PASSWORD`. This is the password for the PgBouncer service.
+- `PGBOUNCER_PASSWORD_HASH`. This is a hash generated from the PgBouncer username/password pair.
Can be generated with:
```sh
sudo gitlab-ctl pg-password-md5 PGBOUNCER_USERNAME
```
-- `PGBOUNCER_NODE`, is the IP address or a FQDN of the node running Pgbouncer.
+- `PGBOUNCER_NODE`. The IP address or FQDN of the node running PgBouncer.
A few notes on the service itself:
- The service runs as the same system account as the database
- In the package, this is by default `gitlab-psql`
-- If you use a non-default user account for Pgbouncer service (by default `pgbouncer`), you will have to specify this username. We will refer to this requirement with `PGBOUNCER_USERNAME`.
+- If you use a non-default user account for PgBouncer service (by default `pgbouncer`), you will have to specify this username. We will refer to this requirement with `PGBOUNCER_USERNAME`.
- The service will have a regular database user account generated for it
- This defaults to `repmgr`
- Passwords will be stored in the following locations:
@@ -315,7 +315,7 @@ When installing the GitLab package, do not supply `EXTERNAL_URL` value.
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
- # Configure the consul agent
+ # Configure the Consul agent
consul['services'] = %w(postgresql)
# START user configuration
@@ -348,7 +348,7 @@ When installing the GitLab package, do not supply `EXTERNAL_URL` value.
1. On secondary nodes, add all the configuration specified above for primary node
to `/etc/gitlab/gitlab.rb`. In addition, append the following configuration
- to inform gitlab-ctl that they are standby nodes initially and it need not
+ to inform `gitlab-ctl` that they are standby nodes initially and it need not
attempt to register them as a primary node
```
@@ -363,7 +363,7 @@ When installing the GitLab package, do not supply `EXTERNAL_URL` value.
>
> - If you want your database to listen on a specific interface, change the config:
> `postgresql['listen_address'] = '0.0.0.0'`.
-> - If your Pgbouncer service runs under a different user account,
+> - If your PgBouncer service runs under a different user account,
> you also need to specify: `postgresql['pgbouncer_user'] = PGBOUNCER_USERNAME` in
> your configuration.
@@ -484,9 +484,9 @@ or secondary. The most important thing here is that this command does not produc
If there are errors, they are most likely due to incorrect `gitlab-consul` database user permissions.
Check the [Troubleshooting section](#troubleshooting) before proceeding.
-#### Configuring the Pgbouncer node
+#### Configuring the PgBouncer node
-See our [documentation for Pgbouncer](pgbouncer.md) for information on running Pgbouncer as part of an HA setup.
+See our [documentation for PgBouncer](pgbouncer.md) for information on running PgBouncer as part of an HA setup.
#### Configuring the Application nodes
@@ -515,10 +515,10 @@ Ensure that all migrations ran:
gitlab-rake gitlab:db:configure
```
-> **Note**: If you encounter a `rake aborted!` error stating that PGBouncer is failing to connect to
-PostgreSQL it may be that your PGBouncer node's IP address is missing from
+> **Note**: If you encounter a `rake aborted!` error stating that PgBouncer is failing to connect to
+PostgreSQL it may be that your PgBouncer node's IP address is missing from
PostgreSQL's `trust_auth_cidr_addresses` in `gitlab.rb` on your database nodes. See
-[PGBouncer error `ERROR: pgbouncer cannot connect to server`](#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
+[PgBouncer error `ERROR: pgbouncer cannot connect to server`](#pgbouncer-error-error-pgbouncer-cannot-connect-to-server)
in the Troubleshooting section before proceeding.
##### Ensure GitLab is running
@@ -533,7 +533,7 @@ Here we'll show you some fully expanded example configurations.
##### Example recommended setup
-This example uses 3 consul servers, 3 postgresql servers, and 1 application node.
+This example uses 3 Consul servers, 3 PostgreSQL servers, and 1 application node.
We start with all servers on the same 10.6.0.0/16 private network range; they
can connect to each other freely on those addresses.
@@ -589,7 +589,7 @@ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
-# Configure the consul agent
+# Configure the Consul agent
consul['services'] = %w(postgresql)
postgresql['pgbouncer_user_password'] = '771a8625958a529132abe6f1a4acb19c'
@@ -635,7 +635,7 @@ postgresql['enable'] = false
pgbouncer['enable'] = true
consul['enable'] = true
-# Configure Pgbouncer
+# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
# Configure Consul agent
@@ -691,7 +691,7 @@ After deploying the configuration follow these steps:
1. On `10.6.0.31`, our application server
- Set gitlab-consul's pgbouncer password to `toomanysecrets`
+ Set `gitlab-consul` user's PgBouncer password to `toomanysecrets`
```sh
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
@@ -705,10 +705,10 @@ After deploying the configuration follow these steps:
#### Example minimal setup
-This example uses 3 postgresql servers, and 1 application node.
+This example uses 3 PostgreSQL servers, and 1 application node.
-It differs from the [recommended setup](#example-recommended-setup) by moving the consul servers into the same servers we use for PostgreSQL.
-The trade-off is between reducing server counts, against the increased operational complexity of needing to deal with postgres [failover](#failover-procedure) and [restore](#restore-procedure) procedures in addition to [consul outage recovery](consul.md#outage-recovery) on the same set of machines.
+It differs from the [recommended setup](#example-recommended-setup) by moving the Consul servers into the same servers we use for PostgreSQL.
+The trade-off is between reducing server counts, against the increased operational complexity of needing to deal with PostgreSQL [failover](#failover-procedure) and [restore](#restore-procedure) procedures in addition to [Consul outage recovery](consul.md#outage-recovery) on the same set of machines.
In this example we start with all servers on the same 10.6.0.0/16 private network range; they can connect to each other freely on those addresses.
@@ -744,7 +744,7 @@ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
# Disable automatic database migrations
gitlab_rails['auto_migrate'] = false
-# Configure the consul agent
+# Configure the Consul agent
consul['services'] = %w(postgresql)
postgresql['pgbouncer_user_password'] = '771a8625958a529132abe6f1a4acb19c'
@@ -788,7 +788,7 @@ postgresql['enable'] = false
pgbouncer['enable'] = true
consul['enable'] = true
-# Configure Pgbouncer
+# Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
# Configure Consul agent
@@ -817,7 +817,7 @@ The manual steps for this configuration are the same as for the [example recomme
#### Failover procedure
By default, if the master database fails, `repmgrd` should promote one of the
-standby nodes to master automatically, and consul will update pgbouncer with
+standby nodes to master automatically, and Consul will update PgBouncer with
the new master.
If you need to failover manually, you have two options:
@@ -907,7 +907,7 @@ can require md5 authentication.
##### Trust specific addresses
-If you know the IP address, or FQDN of all database and pgbouncer nodes in the
+If you know the IP address, or FQDN of all database and PgBouncer nodes in the
cluster, you can trust only those nodes.
In `/etc/gitlab/gitlab.rb` on all of the database nodes, set
@@ -926,7 +926,7 @@ repmgr['trust_auth_cidr_addresses'] = %w(192.168.1.44/32 db2.example.com)
##### MD5 Authentication
If you are running on an untrusted network, repmgr can use md5 authentication
-with a [.pgpass file](https://www.postgresql.org/docs/9.6/static/libpq-pgpass.html)
+with a [.pgpass file](https://www.postgresql.org/docs/9.6/libpq-pgpass.html)
to authenticate.
You can specify by IP address, FQDN, or by subnet, using the same format as in
@@ -950,7 +950,7 @@ the previous section:
1. Set `postgresql['md5_auth_cidr_addresses']` to the desired value
1. Set `postgresql['sql_replication_user'] = 'gitlab_repmgr'`
1. Reconfigure with `gitlab-ctl reconfigure`
- 1. Restart postgresql with `gitlab-ctl restart postgresql`
+ 1. Restart PostgreSQL with `gitlab-ctl restart postgresql`
1. Create a `.pgpass` file. Enter the `gitlab_repmgr` password twice
when asked:
@@ -959,7 +959,7 @@ the previous section:
gitlab-ctl write-pgpass --user gitlab_repmgr --hostuser gitlab-psql --database '*'
```
-1. On each pgbouncer node, edit `/etc/gitlab/gitlab.rb`:
+1. On each PgBouncer node, edit `/etc/gitlab/gitlab.rb`:
1. Ensure `gitlab_rails['db_password']` is set to the plaintext password for
the `gitlab` database user
1. [Reconfigure GitLab] for the changes to take effect
@@ -993,7 +993,7 @@ To restart either service, run `gitlab-ctl restart SERVICE`
For PostgreSQL, it is usually safe to restart the master node by default. Automatic failover defaults to a 1 minute timeout. Provided the database returns before then, nothing else needs to be done. To be safe, you can stop `repmgrd` on the standby nodes first with `gitlab-ctl stop repmgrd`, then start afterwards with `gitlab-ctl start repmgrd`.
-On the consul server nodes, it is important to restart the consul service in a controlled fashion. Read our [consul documentation](consul.md#restarting-the-server-cluster) for instructions on how to restart the service.
+On the Consul server nodes, it is important to restart the Consul service in a controlled fashion. Read our [Consul documentation](consul.md#restarting-the-server-cluster) for instructions on how to restart the service.
### `gitlab-ctl repmgr-check-master` command produces errors
@@ -1010,16 +1010,16 @@ steps to fix the problem:
Now there should not be errors. If errors still occur then there is another problem.
-### PGBouncer error `ERROR: pgbouncer cannot connect to server`
+### PgBouncer error `ERROR: pgbouncer cannot connect to server`
You may get this error when running `gitlab-rake gitlab:db:configure` or you
-may see the error in the PGBouncer log file.
+may see the error in the PgBouncer log file.
```
PG::ConnectionBad: ERROR: pgbouncer cannot connect to server
```
-The problem may be that your PGBouncer node's IP address is not included in the
+The problem may be that your PgBouncer node's IP address is not included in the
`trust_auth_cidr_addresses` setting in `/etc/gitlab/gitlab.rb` on the database nodes.
You can confirm that this is the issue by checking the PostgreSQL log on the master
@@ -1049,7 +1049,7 @@ If you're running into an issue with a component not outlined here, be sure to c
## Configure using Omnibus
**Note**: We recommend that you follow the instructions here for a full [PostgreSQL cluster](#high-availability-with-gitlab-omnibus-premium-only).
-If you are reading this section due to an old bookmark, you can find that old documentation [in the repository](https://gitlab.com/gitlab-org/gitlab-foss/blob/v10.1.4/doc/administration/high_availability/database.md#configure-using-omnibus).
+If you are reading this section due to an old bookmark, you can find that old documentation [in the repository](https://gitlab.com/gitlab-org/gitlab/blob/v10.1.4/doc/administration/high_availability/database.md#configure-using-omnibus).
Read more on high-availability configuration:
diff --git a/doc/administration/high_availability/gitlab.md b/doc/administration/high_availability/gitlab.md
index 0d1dd06871a..71ab169a801 100644
--- a/doc/administration/high_availability/gitlab.md
+++ b/doc/administration/high_availability/gitlab.md
@@ -40,7 +40,7 @@ these additional steps before proceeding with GitLab installation.
```
1. Download/install GitLab Omnibus using **steps 1 and 2** from
- [GitLab downloads](https://about.gitlab.com/downloads). Do not complete other
+ [GitLab downloads](https://about.gitlab.com/install/). Do not complete other
steps on the download page.
1. Create/edit `/etc/gitlab/gitlab.rb` and use the following configuration.
Be sure to change the `external_url` to match your eventual GitLab front-end
@@ -90,8 +90,8 @@ these additional steps before proceeding with GitLab installation.
NOTE: **Note:** When you specify `https` in the `external_url`, as in the example
above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
- certificates are not present, Nginx will fail to start. See
- [Nginx documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+ certificates are not present, NGINX will fail to start. See
+ [NGINX documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
for more information.
NOTE: **Note:** It is best to set the `uid` and `gid`s prior to the initial reconfigure
@@ -99,14 +99,14 @@ these additional steps before proceeding with GitLab installation.
## First GitLab application server
-As a final step, run the setup rake task **only on** the first GitLab application server.
-Do not run this on additional application servers.
+On the first application server, run:
-1. Initialize the database by running `sudo gitlab-rake gitlab:setup`.
-1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
+```sh
+sudo gitlab-ctl reconfigure
+```
- CAUTION: **WARNING:** Only run this setup task on **NEW** GitLab instances because it
- will wipe any existing data.
+This should compile the configuration and initialize the database. Do
+not run this on additional application servers until the next step.
## Extra configuration for additional GitLab application servers
@@ -175,7 +175,7 @@ If you enable Monitoring, it must be enabled on **all** GitLab servers.
CAUTION: **Warning:**
After changing `unicorn['listen']` in `gitlab.rb`, and running `sudo gitlab-ctl reconfigure`,
- it can take an extended period of time for unicorn to complete reloading after receiving a `HUP`.
+ it can take an extended period of time for Unicorn to complete reloading after receiving a `HUP`.
For more information, see the [issue](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/4401).
## Troubleshooting
diff --git a/doc/administration/high_availability/load_balancer.md b/doc/administration/high_availability/load_balancer.md
index f11d27487d1..43a7c442d8c 100644
--- a/doc/administration/high_availability/load_balancer.md
+++ b/doc/administration/high_availability/load_balancer.md
@@ -27,10 +27,10 @@ options:
Configure your load balancer(s) to pass connections on port 443 as 'TCP' rather
than 'HTTP(S)' protocol. This will pass the connection to the application nodes'
-Nginx service untouched. Nginx will have the SSL certificate and listen on port 443.
+NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
-See [Nginx HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
-for details on managing SSL certificates and configuring Nginx.
+See [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
### Load Balancer(s) terminate SSL without backend SSL
@@ -40,7 +40,7 @@ terminating SSL.
Since communication between the load balancer(s) and GitLab will not be secure,
there is some additional configuration needed. See
-[Nginx Proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
+[NGINX Proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
for details.
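A hypothetical `/etc/gitlab/gitlab.rb` sketch for this scenario, using the Omnibus proxied-SSL settings (values illustrative):
```ruby
# The external URL stays https, but NGINX listens on plain HTTP
# behind the SSL-terminating load balancer.
external_url 'https://gitlab.example.com'
nginx['listen_port'] = 80
nginx['listen_https'] = false
```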
### Load Balancer(s) terminate SSL with backend SSL
@@ -49,12 +49,12 @@ Configure your load balancer(s) to use the 'HTTP(S)' protocol rather than 'TCP'.
The load balancer(s) will be responsible for managing SSL certificates that
end users will see.
-Traffic will also be secure between the load balancer(s) and Nginx in this
+Traffic will also be secure between the load balancer(s) and NGINX in this
scenario. There is no need to add configuration for proxied SSL since the
connection will be secure all the way. However, configuration will need to be
added to GitLab to configure SSL certificates. See
-[Nginx HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
-for details on managing SSL certificates and configuring Nginx.
+[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+for details on managing SSL certificates and configuring NGINX.
## Ports
diff --git a/doc/administration/high_availability/monitoring_node.md b/doc/administration/high_availability/monitoring_node.md
index 0b04e48f74a..d293fc350fa 100644
--- a/doc/administration/high_availability/monitoring_node.md
+++ b/doc/administration/high_availability/monitoring_node.md
@@ -34,6 +34,9 @@ Omnibus:
prometheus['listen_address'] = '0.0.0.0:9090'
prometheus['monitor_kubernetes'] = false
+ # Enable Login form
+ grafana['disable_login_form'] = false
+
# Enable Grafana
grafana['enable'] = true
grafana['admin_password'] = 'toomanysecrets'
@@ -63,6 +66,7 @@ Omnibus:
sidekiq['enable'] = false
unicorn['enable'] = false
node_exporter['enable'] = false
+ gitlab_exporter['enable'] = false
```
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
diff --git a/doc/administration/high_availability/nfs.md b/doc/administration/high_availability/nfs.md
index 987db3c89a5..f7c5593e211 100644
--- a/doc/administration/high_availability/nfs.md
+++ b/doc/administration/high_availability/nfs.md
@@ -103,13 +103,12 @@ If you do choose to use EFS, avoid storing GitLab log files (e.g. those in `/var
there because this will also affect performance. We recommend that the log files be
stored on a local volume.
-For more details on another person's experience with EFS, see
-[Amazon's Elastic File System: Burst Credits](https://rawkode.com/2017/04/16/amazons-elastic-file-system-burst-credits/)
+For more details on another person's experience with EFS, see this [Commit Brooklyn 2019 video](https://youtu.be/K6OS8WodRBQ?t=313).
## Avoid using CephFS and GlusterFS
GitLab strongly recommends against using CephFS and GlusterFS.
-These distributed file systems are not well-suited for GitLab's input/output access patterns because git uses many small files and access times and file locking times to propagate will make git activity very slow.
+These distributed file systems are not well-suited for GitLab's input/output access patterns: Git uses many small files, and the time needed for access times and file locks to propagate makes Git activity very slow.
## Avoid using PostgreSQL with NFS
@@ -118,7 +117,7 @@ across NFS. The GitLab support team will not be able to assist on performance is
this configuration.
Additionally, this configuration is specifically warned against in the
-[Postgres Documentation](https://www.postgresql.org/docs/current/static/creating-cluster.html#CREATING-CLUSTER-NFS):
+[Postgres Documentation](https://www.postgresql.org/docs/current/creating-cluster.html#CREATING-CLUSTER-NFS):
>PostgreSQL does nothing special for NFS file systems, meaning it assumes NFS behaves exactly like
>locally-connected drives. If the client or server NFS implementation does not provide standard file
@@ -147,7 +146,7 @@ Note there are several options that you should consider using:
## A single NFS mount
-It's recommended to nest all gitlab data dirs within a mount, that allows automatic
+It's recommended to nest all GitLab data dirs within a mount, which allows automatic
restore of backups without manually moving existing data.
```
diff --git a/doc/administration/high_availability/nfs_host_client_setup.md b/doc/administration/high_availability/nfs_host_client_setup.md
index 9b0e085fe25..5b6b28bf633 100644
--- a/doc/administration/high_availability/nfs_host_client_setup.md
+++ b/doc/administration/high_availability/nfs_host_client_setup.md
@@ -11,7 +11,7 @@ setup act as clients while the NFS server plays host.
> Note: The instructions provided in this documentation allow for setting up a quick
proof of concept, but they leave NFS as a potential single point of failure and
are therefore not recommended for use in production. Explore options such as [Pacemaker
-and Corosync](http://clusterlabs.org/) for highly available NFS in production.
+and Corosync](https://clusterlabs.org) for highly available NFS in production.
Below are instructions for setting up an application node (client) in an HA cluster
to read from and write to a central NFS server (host).
@@ -56,7 +56,7 @@ You may need to update your server's firewall. See the [firewall section](#nfs-i
## Client / GitLab application node Setup
-> Follow the instructions below to connect any GitLab rails application node running
+> Follow the instructions below to connect any GitLab Rails application node running
inside your HA environment to the NFS server configured above.
### Step 1 - Install NFS Common on Client
@@ -108,7 +108,7 @@ When using the default Omnibus configuration you will need to share 5 data locat
between all GitLab cluster nodes. No other locations should be shared. Changing the
default file locations in `gitlab.rb` on the client allows you to have one main mount
point and have all the required locations as subdirectories to use the NFS mount for
-git-data.
+`git-data`.
```text
git_data_dirs({"default" => {"path" => "/nfs/home/var/opt/gitlab-data/git-data"}})
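# The remaining shared locations can be subdirectories of the same mount.
# (Hypothetical paths that follow the pattern above; adjust to your mount.)
gitlab_rails['uploads_directory'] = "/nfs/home/var/opt/gitlab-data/uploads"
gitlab_rails['shared_path'] = "/nfs/home/var/opt/gitlab-data/shared"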
diff --git a/doc/administration/high_availability/pgbouncer.md b/doc/administration/high_availability/pgbouncer.md
index b99724d12a2..e7479ad1ecb 100644
--- a/doc/administration/high_availability/pgbouncer.md
+++ b/doc/administration/high_availability/pgbouncer.md
@@ -2,29 +2,29 @@
type: reference
---
-# Working with the bundle Pgbouncer service
+# Working with the bundled PgBouncer service
-As part of its High Availability stack, GitLab Premium includes a bundled version of [Pgbouncer](https://pgbouncer.github.io/) that can be managed through `/etc/gitlab/gitlab.rb`.
+As part of its High Availability stack, GitLab Premium includes a bundled version of [PgBouncer](https://pgbouncer.github.io/) that can be managed through `/etc/gitlab/gitlab.rb`.
-In a High Availability setup, Pgbouncer is used to seamlessly migrate database connections between servers in a failover scenario.
+In a High Availability setup, PgBouncer is used to seamlessly migrate database connections between servers in a failover scenario.
Additionally, it can be used in a non-HA setup to pool connections, speeding up response time while reducing resource usage.
-It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on its own dedicated node in a cluster.
+It is recommended to run PgBouncer alongside the `gitlab-rails` service, or on its own dedicated node in a cluster.
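For example, application nodes then point their database connection at PgBouncer rather than at PostgreSQL directly. A minimal sketch, assuming PgBouncer's default listen port (the `db_host` setting is shown later in this guide):
```ruby
# /etc/gitlab/gitlab.rb on an application node (hypothetical host):
gitlab_rails['db_host'] = 'PGBOUNCER_HOST'
gitlab_rails['db_port'] = 6432  # PgBouncer's default port
```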
## Operations
-### Running Pgbouncer as part of an HA GitLab installation
+### Running PgBouncer as part of an HA GitLab installation
1. Make sure you collect [`CONSUL_SERVER_NODES`](database.md#consul-information), [`CONSUL_PASSWORD_HASH`](database.md#consul-information), and [`PGBOUNCER_PASSWORD_HASH`](database.md#pgbouncer-information) before executing the next step.
1. Edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
```ruby
- # Disable all components except Pgbouncer and Consul agent
+ # Disable all components except PgBouncer and Consul agent
roles ['pgbouncer_role']
- # Configure Pgbouncer
+ # Configure PgBouncer
pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
# Configure Consul agent
@@ -59,13 +59,13 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
1. Run `gitlab-ctl reconfigure`
1. Create a `.pgpass` file so Consul is able to
- reload pgbouncer. Enter the `PGBOUNCER_PASSWORD` twice when asked:
+ reload PgBouncer. Enter the `PGBOUNCER_PASSWORD` twice when asked:
```sh
gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
```
-#### PGBouncer Checkpoint
+#### PgBouncer Checkpoint
1. Ensure the node is talking to the current master:
@@ -100,7 +100,7 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
(2 rows)
```
-### Running Pgbouncer as part of a non-HA GitLab installation
+### Running PgBouncer as part of a non-HA GitLab installation
1. Generate `PGBOUNCER_USER_PASSWORD_HASH` with the command `gitlab-ctl pg-password-md5 pgbouncer`
@@ -119,7 +119,7 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
**Note:** If the database was already running, it will need to be restarted after reconfigure by running `gitlab-ctl restart postgresql`.
-1. On the node you are running pgbouncer on, make sure the following is set in `/etc/gitlab/gitlab.rb`
+1. On the node you are running PgBouncer on, make sure the following is set in `/etc/gitlab/gitlab.rb`
```ruby
pgbouncer['enable'] = true
@@ -134,7 +134,7 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
1. Run `gitlab-ctl reconfigure`
-1. On the node running unicorn, make sure the following is set in `/etc/gitlab/gitlab.rb`
+1. On the node running Unicorn, make sure the following is set in `/etc/gitlab/gitlab.rb`
```ruby
gitlab_rails['db_host'] = 'PGBOUNCER_HOST'
@@ -144,13 +144,13 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
1. Run `gitlab-ctl reconfigure`
-1. At this point, your instance should connect to the database through pgbouncer. If you are having issues, see the [Troubleshooting](#troubleshooting) section
+1. At this point, your instance should connect to the database through PgBouncer. If you are having issues, see the [Troubleshooting](#troubleshooting) section
## Enable Monitoring
> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/3786) in GitLab 12.0.
-If you enable Monitoring, it must be enabled on **all** pgbouncer servers.
+If you enable Monitoring, it must be enabled on **all** PgBouncer servers.
1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
@@ -173,11 +173,11 @@ If you enable Monitoring, it must be enabled on **all** pgbouncer servers.
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
-### Interacting with pgbouncer
+### Interacting with PgBouncer
#### Administrative console
-As part of omnibus-gitlab, we provide a command `gitlab-ctl pgb-console` to automatically connect to the pgbouncer administrative console. Please see the [pgbouncer documentation](https://pgbouncer.github.io/usage.html#admin-console) for detailed instructions on how to interact with the console.
+As part of Omnibus GitLab, we provide a command `gitlab-ctl pgb-console` to automatically connect to the PgBouncer administrative console. Please see the [PgBouncer documentation](https://pgbouncer.github.io/usage.html#admin-console) for detailed instructions on how to interact with the console.
To start a session, run
@@ -235,7 +235,7 @@ ote_pid | tls
## Troubleshooting
-In case you are experiencing any issues connecting through pgbouncer, the first place to check is always the logs:
+In case you are experiencing any issues connecting through PgBouncer, the first place to check is always the logs:
```shell
# gitlab-ctl tail pgbouncer
diff --git a/doc/administration/high_availability/redis.md b/doc/administration/high_availability/redis.md
index aa616ec91d8..ba4599e5bcd 100644
--- a/doc/administration/high_availability/redis.md
+++ b/doc/administration/high_availability/redis.md
@@ -68,7 +68,7 @@ Omnibus:
gitaly['enable'] = false
redis['bind'] = '0.0.0.0'
- redis['port'] = '6379'
+ redis['port'] = 6379
redis['password'] = 'SECRET_PASSWORD_HERE'
gitlab_rails['auto_migrate'] = false
@@ -313,12 +313,12 @@ Pick the one that suits your needs.
- [Installations from source][source]: You need to install Redis and Sentinel
yourself. Use the [Redis HA installation from source](redis_source.md)
documentation.
-- [Omnibus GitLab **Community Edition** (CE) package][ce]: Redis is bundled, so you
+- [Omnibus GitLab **Community Edition** (CE) package](https://about.gitlab.com/install/?version=ce): Redis is bundled, so you
can use the package with only the Redis service enabled as described in steps
1 and 2 of this document (works for both master and slave setups). To install
and configure Sentinel, jump directly to the Sentinel section in the
[Redis HA installation from source](redis_source.md#step-3-configuring-the-redis-sentinel-instances) documentation.
-- [Omnibus GitLab **Enterprise Edition** (EE) package][ee]: Both Redis and Sentinel
+- [Omnibus GitLab **Enterprise Edition** (EE) package](https://about.gitlab.com/install/?version=ee): Both Redis and Sentinel
are bundled in the package, so you can use the EE package to set up the whole
Redis HA infrastructure (master, slave and Sentinel) which is described in
this document.
@@ -397,7 +397,7 @@ The prerequisites for a HA Redis setup are the following:
1. [Reconfigure Omnibus GitLab][reconfigure] for the changes to take effect.
-> Note: You can specify multiple roles like sentinel and redis as:
+> Note: You can specify multiple roles like Sentinel and Redis as:
> `roles ['redis_sentinel_role', 'redis_master_role']`. Read more about high
> availability roles at <https://docs.gitlab.com/omnibus/roles/>.
@@ -446,7 +446,7 @@ The prerequisites for a HA Redis setup are the following:
1. [Reconfigure Omnibus GitLab][reconfigure] for the changes to take effect.
1. Go through the steps again for all the other slave nodes.
-> Note: You can specify multiple roles like sentinel and redis as:
+> Note: You can specify multiple roles like Sentinel and Redis as:
> `roles ['redis_sentinel_role', 'redis_slave_role']`. Read more about high
> availability roles at <https://docs.gitlab.com/omnibus/roles/>.
@@ -628,7 +628,7 @@ single-machine install, to rotate the **Master** to one of the new nodes.
Make the required changes in configuration and restart the new nodes again.
-To disable redis in the single install, edit `/etc/gitlab/gitlab.rb`:
+To disable Redis in the single install, edit `/etc/gitlab/gitlab.rb`:
```ruby
redis['enable'] = false
@@ -902,7 +902,7 @@ You can check if everything is correct by connecting to each server using
/opt/gitlab/embedded/bin/redis-cli -h <redis-host-or-ip> -a '<redis-password>' info replication
```
-When connected to a `master` redis, you will see the number of connected
+When connected to a `master` Redis, you will see the number of connected
`slaves`, and a list of each with connection details:
```
@@ -948,7 +948,7 @@ to [this issue][gh-531].
You must make sure you are defining the same value in `redis['master_name']`
and `redis['master_password']` as you defined for your Sentinel node.
-The way the redis connector `redis-rb` works with sentinel is a bit
+The way the Redis connector `redis-rb` works with Sentinel is a bit
non-intuitive. We try to hide the complexity in Omnibus, but it still requires
a few extra configs.
@@ -1025,6 +1025,4 @@ Read more on High Availability:
[sentinel]: https://redis.io/topics/sentinel
[omnifile]: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-cookbooks/gitlab/libraries/gitlab_rails.rb
[source]: ../../install/installation.md
-[ce]: https://about.gitlab.com/downloads
-[ee]: https://about.gitlab.com/downloads-ee
[it]: https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png
diff --git a/doc/administration/high_availability/redis_source.md b/doc/administration/high_availability/redis_source.md
index 0758b240a25..9ab9d9a206d 100644
--- a/doc/administration/high_availability/redis_source.md
+++ b/doc/administration/high_availability/redis_source.md
@@ -341,7 +341,7 @@ to [this upstream issue][gh-531].
You must make sure that `resque.yml` and `sentinel.conf` are configured correctly,
otherwise `redis-rb` will not work properly.
-The `master-group-name` ('gitlab-redis') defined in (`sentinel.conf`)
+The `master-group-name` (`gitlab-redis`) defined in `sentinel.conf`
**must** be used as the hostname in GitLab (`resque.yml`):
```conf
@@ -374,4 +374,4 @@ When in doubt, please read [Redis Sentinel documentation](https://redis.io/topic
[downloads]: https://about.gitlab.com/downloads
[restart]: ../restart_gitlab.md#installations-from-source
[it]: https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png
-[resque]: https://gitlab.com/gitlab-org/gitlab-foss/blob/master/config/resque.yml.example
+[resque]: https://gitlab.com/gitlab-org/gitlab/blob/master/config/resque.yml.example