Diffstat (limited to 'doc/administration')
-rw-r--r--  doc/administration/audit_events.md  5
-rw-r--r--  doc/administration/auditor_users.md  2
-rw-r--r--  doc/administration/auth/README.md  37
-rw-r--r--  doc/administration/auth/authentiq.md  85
-rw-r--r--  doc/administration/auth/crowd.md  77
-rw-r--r--  doc/administration/auth/google_secure_ldap.md  22
-rw-r--r--  doc/administration/auth/how_to_configure_ldap_gitlab_ce/index.md  20
-rw-r--r--  doc/administration/auth/how_to_configure_ldap_gitlab_ee/index.md  22
-rw-r--r--  doc/administration/auth/img/google_secure_ldap_add_step_1.png  bin  28849 -> 9083 bytes
-rw-r--r--  doc/administration/auth/img/google_secure_ldap_add_step_2.png  bin  82115 -> 27207 bytes
-rw-r--r--  doc/administration/auth/img/google_secure_ldap_client_settings.png  bin  63959 -> 21302 bytes
-rw-r--r--  doc/administration/auth/jwt.md  134
-rw-r--r--  doc/administration/auth/ldap-ee.md  384
-rw-r--r--  doc/administration/auth/ldap.md  151
-rw-r--r--  doc/administration/auth/oidc.md  200
-rw-r--r--  doc/administration/auth/okta.md  232
-rw-r--r--  doc/administration/auth/smartcard.md  86
-rw-r--r--  doc/administration/container_registry.md  88
-rw-r--r--  doc/administration/custom_hooks.md  20
-rw-r--r--  doc/administration/database_load_balancing.md  7
-rw-r--r--  doc/administration/dependency_proxy.md  3
-rw-r--r--  doc/administration/environment_variables.md  1
-rw-r--r--  doc/administration/geo/disaster_recovery/background_verification.md  17
-rw-r--r--  doc/administration/geo/disaster_recovery/bring_primary_back.md  2
-rw-r--r--  doc/administration/geo/disaster_recovery/img/checksum-differences-admin-project-page.png  bin  202130 -> 74553 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png  bin  85997 -> 28817 bytes
-rw-r--r--  doc/administration/geo/disaster_recovery/index.md  2
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md  2
-rw-r--r--  doc/administration/geo/replication/configuration.md  4
-rw-r--r--  doc/administration/geo/replication/database.md  34
-rw-r--r--  doc/administration/geo/replication/external_database.md  1
-rw-r--r--  doc/administration/geo/replication/img/geo_architecture.png  bin  116506 -> 53225 bytes
-rw-r--r--  doc/administration/geo/replication/index.md  94
-rw-r--r--  doc/administration/geo/replication/remove_geo_node.md  12
-rw-r--r--  doc/administration/geo/replication/troubleshooting.md  16
-rw-r--r--  doc/administration/geo/replication/updating_the_geo_nodes.md  46
-rw-r--r--  doc/administration/git_protocol.md  2
-rw-r--r--  doc/administration/gitaly/index.md  888
-rw-r--r--  doc/administration/high_availability/README.md  40
-rw-r--r--  doc/administration/high_availability/alpha_database.md  3
-rw-r--r--  doc/administration/high_availability/consul.md  72
-rw-r--r--  doc/administration/high_availability/database.md  493
-rw-r--r--  doc/administration/high_availability/gitaly.md  16
-rw-r--r--  doc/administration/high_availability/gitlab.md  136
-rw-r--r--  doc/administration/high_availability/img/fully-distributed.png  bin  46918 -> 46691 bytes
-rw-r--r--  doc/administration/high_availability/img/horizontal.png  bin  18660 -> 18179 bytes
-rw-r--r--  doc/administration/high_availability/img/hybrid.png  bin  20698 -> 20693 bytes
-rw-r--r--  doc/administration/high_availability/load_balancer.md  71
-rw-r--r--  doc/administration/high_availability/monitoring_node.md  98
-rw-r--r--  doc/administration/high_availability/nfs.md  55
-rw-r--r--  doc/administration/high_availability/nfs_host_client_setup.md  16
-rw-r--r--  doc/administration/high_availability/pgbouncer.md  169
-rw-r--r--  doc/administration/high_availability/redis.md  409
-rw-r--r--  doc/administration/high_availability/redis_source.md  311
-rw-r--r--  doc/administration/housekeeping.md  3
-rw-r--r--  doc/administration/img/custom_hooks_error_msg.png  bin  80442 -> 31281 bytes
-rw-r--r--  doc/administration/incoming_email.md  4
-rw-r--r--  doc/administration/index.md  13
-rw-r--r--  doc/administration/instance_review.md  1
-rw-r--r--  doc/administration/integration/plantuml.md  71
-rw-r--r--  doc/administration/integration/terminal.md  4
-rw-r--r--  doc/administration/issue_closing_pattern.md  17
-rw-r--r--  doc/administration/job_artifacts.md  21
-rw-r--r--  doc/administration/job_traces.md  6
-rw-r--r--  doc/administration/logs.md  42
-rw-r--r--  doc/administration/merge_request_diffs.md  1
-rw-r--r--  doc/administration/monitoring/gitlab_instance_administration_project/index.md  36
-rw-r--r--  doc/administration/monitoring/index.md  3
-rw-r--r--  doc/administration/monitoring/ip_whitelist.md  20
-rw-r--r--  doc/administration/monitoring/performance/gitlab_configuration.md  6
-rw-r--r--  doc/administration/monitoring/performance/grafana_configuration.md  32
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar.png  bin  99331 -> 33642 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_gitaly_calls.png  bin  91275 -> 83212 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_line_profiling.png  bin  93063 -> 0 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_redis_calls.png  bin  0 -> 70859 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_rugged_calls.png  bin  0 -> 105305 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/performance_bar_sql_queries.png  bin  128337 -> 143074 bytes
-rw-r--r--  doc/administration/monitoring/performance/img/request_profile_result.png  bin  3216 -> 11451 bytes
-rw-r--r--  doc/administration/monitoring/performance/performance_bar.md  15
-rw-r--r--  doc/administration/monitoring/performance/request_profiling.md  4
-rw-r--r--  doc/administration/monitoring/prometheus/gitlab_metrics.md  271
-rw-r--r--  doc/administration/monitoring/prometheus/gitlab_monitor_exporter.md  6
-rw-r--r--  doc/administration/monitoring/prometheus/index.md  142
-rw-r--r--  doc/administration/monitoring/prometheus/node_exporter.md  6
-rw-r--r--  doc/administration/monitoring/prometheus/pgbouncer_exporter.md  6
-rw-r--r--  doc/administration/monitoring/prometheus/postgres_exporter.md  6
-rw-r--r--  doc/administration/monitoring/prometheus/redis_exporter.md  6
-rw-r--r--  doc/administration/operations/fast_ssh_key_lookup.md  108
-rw-r--r--  doc/administration/operations/filesystem_benchmarking.md  33
-rw-r--r--  doc/administration/operations/img/sidekiq-cluster.png  bin  61923 -> 22576 bytes
-rw-r--r--  doc/administration/operations/unicorn.md  2
-rw-r--r--  doc/administration/packages.md  2
-rw-r--r--  doc/administration/pages/img/lets_encrypt_integration_v12_1.png  bin  0 -> 33137 bytes
-rw-r--r--  doc/administration/pages/index.md  172
-rw-r--r--  doc/administration/pages/source.md  293
-rw-r--r--  doc/administration/plugins.md  36
-rw-r--r--  doc/administration/pseudonymizer.md  4
-rw-r--r--  doc/administration/raketasks/geo.md  4
-rw-r--r--  doc/administration/raketasks/ldap.md  25
-rw-r--r--  doc/administration/raketasks/maintenance.md  19
-rw-r--r--  doc/administration/raketasks/project_import_export.md  16
-rw-r--r--  doc/administration/raketasks/storage.md  29
-rw-r--r--  doc/administration/raketasks/uploads/migrate.md  10
-rw-r--r--  doc/administration/raketasks/uploads/sanitize.md  16
-rw-r--r--  doc/administration/reply_by_email_postfix_setup.md  4
-rw-r--r--  doc/administration/repository_checks.md  6
-rw-r--r--  doc/administration/repository_storage_paths.md  20
-rw-r--r--  doc/administration/repository_storage_types.md  22
-rw-r--r--  doc/administration/smime_signing_email.md  49
-rw-r--r--  doc/administration/troubleshooting/debug.md  110
-rw-r--r--  doc/administration/troubleshooting/diagnostics_tools.md  27
-rw-r--r--  doc/administration/troubleshooting/elasticsearch.md  345
-rw-r--r--  doc/administration/troubleshooting/kubernetes_cheat_sheet.md  261
-rw-r--r--  doc/administration/troubleshooting/sidekiq.md  118
-rw-r--r--  doc/administration/uploads.md  107
115 files changed, 4457 insertions, 2708 deletions
diff --git a/doc/administration/audit_events.md b/doc/administration/audit_events.md
index a80ff330e03..8075a40cae7 100644
--- a/doc/administration/audit_events.md
+++ b/doc/administration/audit_events.md
@@ -73,6 +73,10 @@ From there, you can see the following actions:
- User was added to project and with which [permissions]
- Permission changes of a user assigned to a project
- User was removed from project
+- Project export was downloaded
+- Project repository was downloaded
+- Project was archived
+- Project was unarchived
### Instance events **(PREMIUM ONLY)**
@@ -94,6 +98,7 @@ recorded:
- Changed password
- Ask for password reset
- Grant OAuth access
+- Started/stopped user impersonation
It is possible to filter particular actions by choosing an audit data type from
the filter drop-down. You can further filter by specific group, project or user
diff --git a/doc/administration/auditor_users.md b/doc/administration/auditor_users.md
index 65d36612d85..18c415b5ff7 100644
--- a/doc/administration/auditor_users.md
+++ b/doc/administration/auditor_users.md
@@ -52,7 +52,7 @@ section.
**Admin Area > Users**. You will find the option of the access level under
the 'Access' section.
- ![Admin Area Form](img/auditor_access_form.png)
+ ![Admin Area Form](img/auditor_access_form.png)
1. Click **Save changes** or **Create user** for the changes to take effect.
diff --git a/doc/administration/auth/README.md b/doc/administration/auth/README.md
index d8094587d14..2fc9db0632e 100644
--- a/doc/administration/auth/README.md
+++ b/doc/administration/auth/README.md
@@ -1,19 +1,34 @@
---
comments: false
+type: index
---
-# Authentication and Authorization
+# GitLab authentication and authorization
GitLab integrates with the following external authentication and authorization
-providers.
+providers:
-- [LDAP](ldap.md) Includes Active Directory, Apple Open Directory, Open LDAP,
- and 389 Server
+- [Auth0](../../integration/auth0.md)
+- [Authentiq](authentiq.md)
+- [Azure](../../integration/azure.md)
+- [Bitbucket Cloud](../../integration/bitbucket.md)
+- [CAS](../../integration/cas.md)
+- [Crowd](../../integration/crowd.md)
+- [Facebook](../../integration/facebook.md)
+- [GitHub](../../integration/github.md)
+- [GitLab.com](../../integration/gitlab.md)
+- [Google](../../integration/google.md)
+- [JWT](jwt.md)
+- [Kerberos](../../integration/kerberos.md)
+- [LDAP](ldap.md): Includes Active Directory, Apple Open Directory, Open LDAP,
+ and 389 Server.
- [LDAP for GitLab EE](ldap-ee.md): LDAP additions to GitLab Enterprise Editions **(STARTER ONLY)**
-- [OmniAuth](../../integration/omniauth.md) Sign in via Twitter, GitHub, GitLab.com, Google,
- Bitbucket, Facebook, Shibboleth, Crowd, Azure, Authentiq ID, and JWT
-- [CAS](../../integration/cas.md) Configure GitLab to sign in using CAS
-- [SAML](../../integration/saml.md) Configure GitLab as a SAML 2.0 Service Provider
-- [Okta](okta.md) Configure GitLab to sign in using Okta
-- [Authentiq](authentiq.md): Enable the Authentiq OmniAuth provider for passwordless authentication
-- [Smartcard](smartcard.md) Smartcard authentication **(PREMIUM ONLY)**
+ - [Google Secure LDAP](google_secure_ldap.md)
+- [Okta](okta.md)
+- [Salesforce](../../integration/salesforce.md)
+- [SAML](../../integration/saml.md)
+- [SAML for GitLab.com groups](../../user/group/saml_sso/index.md) **(SILVER ONLY)**
+- [Shibboleth](../../integration/shibboleth.md)
+- [Smartcard](smartcard.md) **(PREMIUM ONLY)**
+- [Twitter](../../integration/twitter.md)
+- [UltraAuth](../../integration/ultra_auth.md)
diff --git a/doc/administration/auth/authentiq.md b/doc/administration/auth/authentiq.md
index 726622d8599..b84eca4ef0d 100644
--- a/doc/administration/auth/authentiq.md
+++ b/doc/administration/auth/authentiq.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# Authentiq OmniAuth Provider
To enable the Authentiq OmniAuth provider for passwordless authentication you must register an application with Authentiq.
@@ -8,47 +12,48 @@ Authentiq will generate a Client ID and the accompanying Client Secret for you t
1. On your GitLab server, open the configuration file:
- For omnibus installation
- ```sh
- sudo editor /etc/gitlab/gitlab.rb
- ```
+ For omnibus installation
+
+ ```sh
+ sudo editor /etc/gitlab/gitlab.rb
+ ```
- For installations from source:
+ For installations from source:
- ```sh
- sudo -u git -H editor /home/git/gitlab/config/gitlab.yml
- ```
+ ```sh
+ sudo -u git -H editor /home/git/gitlab/config/gitlab.yml
+ ```
1. See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration) for initial settings to enable single sign-on and add Authentiq as an OAuth provider.
1. Add the provider configuration for Authentiq:
- For Omnibus packages:
-
- ```ruby
- gitlab_rails['omniauth_providers'] = [
- {
- "name" => "authentiq",
- "app_id" => "YOUR_CLIENT_ID",
- "app_secret" => "YOUR_CLIENT_SECRET",
- "args" => {
- "scope": 'aq:name email~rs address aq:push'
- }
- }
- ]
- ```
-
- For installations from source:
-
- ```yaml
- - { name: 'authentiq',
- app_id: 'YOUR_CLIENT_ID',
- app_secret: 'YOUR_CLIENT_SECRET',
- args: {
- scope: 'aq:name email~rs address aq:push'
- }
- }
- ```
+ For Omnibus packages:
+
+ ```ruby
+ gitlab_rails['omniauth_providers'] = [
+ {
+ "name" => "authentiq",
+ "app_id" => "YOUR_CLIENT_ID",
+ "app_secret" => "YOUR_CLIENT_SECRET",
+ "args" => {
+ "scope": 'aq:name email~rs address aq:push'
+ }
+ }
+ ]
+ ```
+
+ For installations from source:
+
+ ```yaml
+ - { name: 'authentiq',
+ app_id: 'YOUR_CLIENT_ID',
+ app_secret: 'YOUR_CLIENT_SECRET',
+ args: {
+ scope: 'aq:name email~rs address aq:push'
+ }
+ }
+ ```
1. The `scope` is set to request the user's name, email (required and signed), and permission to send push notifications to sign in on subsequent visits.
See [OmniAuth Authentiq strategy](https://github.com/AuthentiqID/omniauth-authentiq/wiki/Scopes,-callback-url-configuration-and-responses) for more information on scopes and modifiers.
@@ -65,3 +70,15 @@ On the sign in page there should now be an Authentiq icon below the regular sign
- If not they will be prompted to download the app and then follow the procedure above.
If everything goes right, the user will be returned to GitLab and will be signed in.
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/auth/crowd.md b/doc/administration/auth/crowd.md
index 6db74958d5a..ac63b4f2b97 100644
--- a/doc/administration/auth/crowd.md
+++ b/doc/administration/auth/crowd.md
@@ -1,60 +1,67 @@
+---
+type: reference
+---
+
# Atlassian Crowd OmniAuth Provider
+Authenticate to GitLab using the Atlassian Crowd OmniAuth provider.
+
## Configure a new Crowd application
1. Choose 'Applications' in the top menu, then 'Add application'.
1. Go through the 'Add application' steps, entering the appropriate details.
The screenshot below shows an example configuration.
- ![Example Crowd application configuration](img/crowd_application.png)
+ ![Example Crowd application configuration](img/crowd_application.png)
## Configure GitLab
1. On your GitLab server, open the configuration file.
- **Omnibus:**
+ **Omnibus:**
- ```sh
- sudo editor /etc/gitlab/gitlab.rb
- ```
+ ```sh
+ sudo editor /etc/gitlab/gitlab.rb
+ ```
- **Source:**
+ **Source:**
- ```sh
- cd /home/git/gitlab
+ ```sh
+ cd /home/git/gitlab
- sudo -u git -H editor config/gitlab.yml
- ```
+ sudo -u git -H editor config/gitlab.yml
+ ```
1. See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration)
for initial settings.
1. Add the provider configuration:
- **Omnibus:**
-
- ```ruby
- gitlab_rails['omniauth_providers'] = [
- {
- "name" => "crowd",
- "args" => {
- "crowd_server_url" => "CROWD_SERVER_URL",
- "application_name" => "YOUR_APP_NAME",
- "application_password" => "YOUR_APP_PASSWORD"
- }
- }
- ]
- ```
-
- **Source:**
-
- ```
- - { name: 'crowd',
- args: {
- crowd_server_url: 'CROWD_SERVER_URL',
- application_name: 'YOUR_APP_NAME',
- application_password: 'YOUR_APP_PASSWORD' } }
- ```
+ **Omnibus:**
+
+ ```ruby
+ gitlab_rails['omniauth_providers'] = [
+ {
+ "name" => "crowd",
+ "args" => {
+ "crowd_server_url" => "CROWD_SERVER_URL",
+ "application_name" => "YOUR_APP_NAME",
+ "application_password" => "YOUR_APP_PASSWORD"
+ }
+ }
+ ]
+ ```
+
+ **Source:**
+
+ ```
+ - { name: 'crowd',
+ args: {
+ crowd_server_url: 'CROWD_SERVER_URL',
+ application_name: 'YOUR_APP_NAME',
+ application_password: 'YOUR_APP_PASSWORD' } }
+ ```
+
1. Change `CROWD_SERVER_URL` to the URL of your Crowd server.
1. Change `YOUR_APP_NAME` to the application name from Crowd applications page.
1. Change `YOUR_APP_PASSWORD` to the application password you've set.
@@ -77,4 +84,4 @@ could not authorize you from Crowd because invalid credentials
Please make sure the Crowd users who need to login to GitLab are authorized to [the application](#configure-a-new-crowd-application) in the step of **Authorisation**. This could be verified by try "Authentication test" for Crowd as of 2.11.
-![Example Crowd application authorisation configuration](img/crowd_application_authorisation.png) \ No newline at end of file
+![Example Crowd application authorisation configuration](img/crowd_application_authorisation.png)
diff --git a/doc/administration/auth/google_secure_ldap.md b/doc/administration/auth/google_secure_ldap.md
index 1db5bb4bc3f..55e6f53622c 100644
--- a/doc/administration/auth/google_secure_ldap.md
+++ b/doc/administration/auth/google_secure_ldap.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# Google Secure LDAP **(CORE ONLY)**
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/issues/46391) in GitLab 11.9.
@@ -66,7 +70,7 @@ values obtained during the LDAP client configuration earlier:
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
+ ```ruby
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-EOS # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
@@ -127,7 +131,7 @@ values obtained during the LDAP client configuration earlier:
AcZSFJQjdg5BTyvdEDhaYUKGdRw=
-----END PRIVATE KEY-----
EOS
- ```
+ ```
1. Save the file and [reconfigure] GitLab for the changes to take effect.
@@ -137,7 +141,7 @@ values obtained during the LDAP client configuration earlier:
1. Edit `config/gitlab.yml`:
- ```yaml
+ ```yaml
ldap:
enabled: true
servers:
@@ -204,3 +208,15 @@ values obtained during the LDAP client configuration earlier:
[reconfigure]: ../restart_gitlab.md#omnibus-gitlab-reconfigure
[restart]: ../restart_gitlab.md#installations-from-source
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/auth/how_to_configure_ldap_gitlab_ce/index.md b/doc/administration/auth/how_to_configure_ldap_gitlab_ce/index.md
index 320a65b665d..86dd398343b 100644
--- a/doc/administration/auth/how_to_configure_ldap_gitlab_ce/index.md
+++ b/doc/administration/auth/how_to_configure_ldap_gitlab_ce/index.md
@@ -1,15 +1,9 @@
---
-author: Chris Wilson
-author_gitlab: MrChrisW
-level: intermediary
-article_type: admin guide
-date: 2017-05-03
+type: howto
---
# How to configure LDAP with GitLab CE
-## Introduction
-
Managing a large number of users in GitLab can become a burden for system administrators. As an organization grows so do user accounts. Keeping these user accounts in sync across multiple enterprise applications often becomes a time consuming task.
In this guide we will focus on configuring GitLab with Active Directory. [Active Directory](https://en.wikipedia.org/wiki/Active_Directory) is a popular LDAP compatible directory service provided by Microsoft, included in all modern Windows Server operating systems.
@@ -268,3 +262,15 @@ have extended functionalities with LDAP, such as:
- Multiple LDAP servers
Read through the article on [LDAP for GitLab EE](../how_to_configure_ldap_gitlab_ee/index.md) **(STARTER ONLY)** for an overview.
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/auth/how_to_configure_ldap_gitlab_ee/index.md b/doc/administration/auth/how_to_configure_ldap_gitlab_ee/index.md
index 2683950f143..366acb9ed3e 100644
--- a/doc/administration/auth/how_to_configure_ldap_gitlab_ee/index.md
+++ b/doc/administration/auth/how_to_configure_ldap_gitlab_ee/index.md
@@ -1,16 +1,10 @@
---
-author: Chris Wilson
-author_gitlab: MrChrisW
-level: intermediary
-article_type: admin guide
-date: 2017-05-03
+type: howto
---
# How to configure LDAP with GitLab EE **(STARTER ONLY)**
-## Introduction
-
-The present article follows [How to Configure LDAP with GitLab CE](../how_to_configure_ldap_gitlab_ce/index.md). Make sure to read through it before moving forward.
+This article expands on [How to Configure LDAP with GitLab CE](../how_to_configure_ldap_gitlab_ce/index.md). Make sure to read through it before moving forward.
## GitLab Enterprise Edition - LDAP features
@@ -117,3 +111,15 @@ Integration of GitLab with Active Directory (LDAP) reduces the complexity of use
It has the advantage of improving user permission controls, whilst easing the deployment of GitLab into an existing [IT environment](https://www.techopedia.com/definition/29199/it-infrastructure). GitLab EE offers advanced group management and multiple LDAP servers.
With the assistance of the [GitLab Support](https://about.gitlab.com/support) team, setting up GitLab with an existing AD/LDAP solution will be a smooth and painless process.
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/auth/img/google_secure_ldap_add_step_1.png b/doc/administration/auth/img/google_secure_ldap_add_step_1.png
index fd254443d75..bee9c602a14 100644
--- a/doc/administration/auth/img/google_secure_ldap_add_step_1.png
+++ b/doc/administration/auth/img/google_secure_ldap_add_step_1.png
Binary files differ
diff --git a/doc/administration/auth/img/google_secure_ldap_add_step_2.png b/doc/administration/auth/img/google_secure_ldap_add_step_2.png
index 611a21ae03c..b127410cb8c 100644
--- a/doc/administration/auth/img/google_secure_ldap_add_step_2.png
+++ b/doc/administration/auth/img/google_secure_ldap_add_step_2.png
Binary files differ
diff --git a/doc/administration/auth/img/google_secure_ldap_client_settings.png b/doc/administration/auth/img/google_secure_ldap_client_settings.png
index 3c0b3f3d4bd..868e6645f56 100644
--- a/doc/administration/auth/img/google_secure_ldap_client_settings.png
+++ b/doc/administration/auth/img/google_secure_ldap_client_settings.png
Binary files differ
diff --git a/doc/administration/auth/jwt.md b/doc/administration/auth/jwt.md
index 497298503ad..e6b3287ce60 100644
--- a/doc/administration/auth/jwt.md
+++ b/doc/administration/auth/jwt.md
@@ -1,67 +1,71 @@
+---
+type: reference
+---
+
# JWT OmniAuth provider
To enable the JWT OmniAuth provider, you must register your application with JWT.
JWT will provide you with a secret key for you to use.
-1. On your GitLab server, open the configuration file.
-
- For Omnibus GitLab:
-
- ```sh
- sudo editor /etc/gitlab/gitlab.rb
- ```
-
- For installations from source:
-
- ```sh
- cd /home/git/gitlab
- sudo -u git -H editor config/gitlab.yml
- ```
-
-1. See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration) for initial settings.
-1. Add the provider configuration.
-
- For Omnibus GitLab:
-
- ```ruby
- gitlab_rails['omniauth_providers'] = [
- { name: 'jwt',
- args: {
- secret: 'YOUR_APP_SECRET',
- algorithm: 'HS256', # Supported algorithms: 'RS256', 'RS384', 'RS512', 'ES256', 'ES384', 'ES512', 'HS256', 'HS384', 'HS512'
- uid_claim: 'email',
- required_claims: ['name', 'email'],
- info_maps: { name: 'name', email: 'email' },
- auth_url: 'https://example.com/',
- valid_within: 3600 # 1 hour
- }
- }
- ]
- ```
-
- For installation from source:
-
- ```
- - { name: 'jwt',
- args: {
- secret: 'YOUR_APP_SECRET',
- algorithm: 'HS256', # Supported algorithms: 'RS256', 'RS384', 'RS512', 'ES256', 'ES384', 'ES512', 'HS256', 'HS384', 'HS512'
- uid_claim: 'email',
- required_claims: ['name', 'email'],
- info_map: { name: 'name', email: 'email' },
- auth_url: 'https://example.com/',
- valid_within: 3600 # 1 hour
- }
- }
- ```
-
- NOTE: **Note:** For more information on each configuration option refer to
- the [OmniAuth JWT usage documentation](https://github.com/mbleigh/omniauth-jwt#usage).
-
-1. Change `YOUR_APP_SECRET` to the client secret and set `auth_url` to your redirect URL.
-1. Save the configuration file.
-1. [Reconfigure][] or [restart GitLab][] for the changes to take effect if you
- installed GitLab via Omnibus or from source respectively.
+1. On your GitLab server, open the configuration file.
+
+ For Omnibus GitLab:
+
+ ```sh
+ sudo editor /etc/gitlab/gitlab.rb
+ ```
+
+ For installations from source:
+
+ ```sh
+ cd /home/git/gitlab
+ sudo -u git -H editor config/gitlab.yml
+ ```
+
+1. See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration) for initial settings.
+1. Add the provider configuration.
+
+ For Omnibus GitLab:
+
+ ```ruby
+ gitlab_rails['omniauth_providers'] = [
+ { name: 'jwt',
+ args: {
+ secret: 'YOUR_APP_SECRET',
+ algorithm: 'HS256', # Supported algorithms: 'RS256', 'RS384', 'RS512', 'ES256', 'ES384', 'ES512', 'HS256', 'HS384', 'HS512'
+ uid_claim: 'email',
+ required_claims: ['name', 'email'],
+ info_maps: { name: 'name', email: 'email' },
+ auth_url: 'https://example.com/',
+ valid_within: 3600 # 1 hour
+ }
+ }
+ ]
+ ```
+
+ For installation from source:
+
+ ```
+ - { name: 'jwt',
+ args: {
+ secret: 'YOUR_APP_SECRET',
+ algorithm: 'HS256', # Supported algorithms: 'RS256', 'RS384', 'RS512', 'ES256', 'ES384', 'ES512', 'HS256', 'HS384', 'HS512'
+ uid_claim: 'email',
+ required_claims: ['name', 'email'],
+ info_map: { name: 'name', email: 'email' },
+ auth_url: 'https://example.com/',
+ valid_within: 3600 # 1 hour
+ }
+ }
+ ```
+
+ NOTE: **Note:** For more information on each configuration option refer to
+ the [OmniAuth JWT usage documentation](https://github.com/mbleigh/omniauth-jwt#usage).
+
+1. Change `YOUR_APP_SECRET` to the client secret and set `auth_url` to your redirect URL.
+1. Save the configuration file.
+1. [Reconfigure][] or [restart GitLab][] for the changes to take effect if you
+ installed GitLab via Omnibus or from source respectively.
On the sign in page there should now be a JWT icon below the regular sign in form.
Click the icon to begin the authentication process. JWT will ask the user to
@@ -70,3 +74,15 @@ will be redirected to GitLab and will be signed in.
[reconfigure]: ../restart_gitlab.md#omnibus-gitlab-reconfigure
[restart GitLab]: ../restart_gitlab.md#installations-from-source
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/auth/ldap-ee.md b/doc/administration/auth/ldap-ee.md
index 1a8af0827ee..34f3cfa353f 100644
--- a/doc/administration/auth/ldap-ee.md
+++ b/doc/administration/auth/ldap-ee.md
@@ -1,36 +1,30 @@
-# LDAP Additions in GitLab EE **(STARTER ONLY)**
-
-This is a continuation of the main [LDAP documentation](ldap.md), detailing LDAP
-features specific to GitLab Enterprise Edition Starter, Premium and Ultimate.
+---
+type: reference
+---
-## Overview
+# LDAP Additions in GitLab EE **(STARTER ONLY)**
-[LDAP](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol)
-stands for **Lightweight Directory Access Protocol**, which
-is a standard application protocol for
-accessing and maintaining distributed directory information services
-over an Internet Protocol (IP) network.
+This section documents LDAP features specific to GitLab Enterprise Edition
+[Starter](https://about.gitlab.com/pricing/#self-managed) and above.
-GitLab integrates with LDAP to support **user authentication**. This integration
-works with most LDAP-compliant directory servers, including Microsoft Active
-Directory, Apple Open Directory, Open LDAP, and 389 Server.
-**GitLab Enterprise Edition** includes enhanced integration, including group
-membership syncing.
+For documentation relevant to both Community Edition and Enterprise Edition,
+see the main [LDAP documentation](ldap.md).
## Use cases
-- User sync: Once a day, GitLab will update users against LDAP
+- User sync: Once a day, GitLab will update users against LDAP.
- Group sync: Once an hour, GitLab will update group membership
- based on LDAP group members
+ based on LDAP group members.
-## Multiple LDAP servers
+## Multiple LDAP servers **(STARTER ONLY)**
With GitLab Enterprise Edition Starter, you can configure multiple LDAP servers
that your GitLab instance will connect to.
-To add another LDAP server, you can start by duplicating the settings under
-[the main configuration](ldap.md#configuration) and edit them to match the
-additional LDAP server.
+To add another LDAP server:
+
+1. Duplicate the settings under [the main configuration](ldap.md#configuration).
+1. Edit them to match the additional LDAP server.
Be sure to choose a different provider ID made of letters a-z and numbers 0-9.
This ID will be stored in the database so that GitLab can remember which LDAP
@@ -43,19 +37,19 @@ users against LDAP.
The process will execute the following access checks:
-1. Ensure the user is still present in LDAP
-1. If the LDAP server is Active Directory, ensure the user is active (not
- blocked/disabled state). This will only be checked if
- `active_directory: true` is set in the LDAP configuration [^1]
+- Ensure the user is still present in LDAP.
+- If the LDAP server is Active Directory, ensure the user is active (not
+ blocked/disabled state). This will only be checked if
+ `active_directory: true` is set in the LDAP configuration. [^1]
The user will be set to `ldap_blocked` state in GitLab if the above conditions
fail. This means the user will not be able to login or push/pull code.
The process will also update the following user information:
-1. Email address
-1. If `sync_ssh_keys` is set, SSH public keys
-1. If Kerberos is enabled, Kerberos identity
+- Email address.
+- If `sync_ssh_keys` is set, SSH public keys.
+- If Kerberos is enabled, Kerberos identity.
NOTE: **Note:**
The LDAP sync process updates existing users while new users will
@@ -68,9 +62,6 @@ first time GitLab will trigger a sync for groups the user should be a member of.
That way they don't need to wait for the hourly sync to be granted
access to their groups and projects.
-In GitLab Premium, we can also add a GitLab group to sync with one or multiple LDAP groups or we can
-also add a filter. The filter must comply with the syntax defined in [RFC 2254](https://tools.ietf.org/search/rfc2254).
-
A group sync process will run every hour on the hour, and `group_base` must be set
in LDAP configuration for LDAP synchronizations based on group CN to work. This allows
GitLab group membership to be automatically updated based on LDAP group members.
@@ -85,19 +76,19 @@ following.
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_rails['ldap_servers'] = YAML.load <<-EOS
- main:
- ## snip...
- ##
- ## Base where we can search for groups
- ##
- ## Ex. ou=groups,dc=gitlab,dc=example
- ##
- ##
- group_base: ou=groups,dc=example,dc=com
- EOS
- ```
+ ```ruby
+ gitlab_rails['ldap_servers'] = YAML.load <<-EOS
+ main:
+ ## snip...
+ ##
+ ## Base where we can search for groups
+ ##
+ ## Ex. ou=groups,dc=gitlab,dc=example
+ ##
+ ##
+ group_base: ou=groups,dc=example,dc=com
+ EOS
+ ```
1. [Reconfigure GitLab][reconfigure] for the changes to take effect.
@@ -105,24 +96,37 @@ following.
1. Edit `/home/git/gitlab/config/gitlab.yml`:
- ```yaml
- production:
- ldap:
- servers:
- main:
- # snip...
- group_base: ou=groups,dc=example,dc=com
- ```
+ ```yaml
+ production:
+ ldap:
+ servers:
+ main:
+ # snip...
+ group_base: ou=groups,dc=example,dc=com
+ ```
1. [Restart GitLab][restart] for the changes to take effect.
----
-
To take advantage of group sync, group owners or maintainers will need to create an
-LDAP group link in their group **Settings > LDAP Groups** page. Multiple LDAP
-groups and/or filters can be linked with a single GitLab group. When the link is
-created, an access level/role is specified (Guest, Reporter, Developer, Maintainer,
-or Owner).
+LDAP group link in their group **Settings > LDAP Groups** page.
+
+Multiple LDAP groups and [filters](#filters-premium-only) can be linked with
+a single GitLab group. When the link is created, an access level/role is
+specified (Guest, Reporter, Developer, Maintainer, or Owner).
+
+### Filters **(PREMIUM ONLY)**
+
+In GitLab Premium, you can add an LDAP user filter for group synchronization.
+Filters allow for complex logic without creating a special LDAP group.
+
+To sync GitLab group membership based on an LDAP filter:
+
+1. Open the **LDAP Synchronization** page for the GitLab group.
+1. Select **LDAP user filter** as the **Sync method**.
+1. Enter an LDAP user filter in the **LDAP user filter** field.
+
+The filter must comply with the
+syntax defined in [RFC 2254](https://tools.ietf.org/search/rfc2254).
## Administrator sync
@@ -140,30 +144,30 @@ group, as opposed to the full DN.
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_rails['ldap_servers'] = YAML.load <<-EOS
- main:
- ## snip...
- ##
- ## Base where we can search for groups
- ##
- ## Ex. ou=groups,dc=gitlab,dc=example
- ##
- ##
- group_base: ou=groups,dc=example,dc=com
-
- ##
- ## The CN of a group containing GitLab administrators
- ##
- ## Ex. administrators
- ##
- ## Note: Not `cn=administrators` or the full DN
- ##
- ##
- admin_group: my_admin_group
-
- EOS
- ```
+ ```ruby
+ gitlab_rails['ldap_servers'] = YAML.load <<-EOS
+ main:
+ ## snip...
+ ##
+ ## Base where we can search for groups
+ ##
+ ## Ex. ou=groups,dc=gitlab,dc=example
+ ##
+ ##
+ group_base: ou=groups,dc=example,dc=com
+
+ ##
+ ## The CN of a group containing GitLab administrators
+ ##
+ ## Ex. administrators
+ ##
+ ## Note: Not `cn=administrators` or the full DN
+ ##
+ ##
+ admin_group: my_admin_group
+
+ EOS
+ ```
1. [Reconfigure GitLab][reconfigure] for the changes to take effect.
@@ -171,26 +175,28 @@ group, as opposed to the full DN.
1. Edit `/home/git/gitlab/config/gitlab.yml`:
- ```yaml
- production:
- ldap:
- servers:
- main:
- # snip...
- group_base: ou=groups,dc=example,dc=com
- admin_group: my_admin_group
- ```
+ ```yaml
+ production:
+ ldap:
+ servers:
+ main:
+ # snip...
+ group_base: ou=groups,dc=example,dc=com
+ admin_group: my_admin_group
+ ```
1. [Restart GitLab][restart] for the changes to take effect.
## Global group memberships lock
"Lock memberships to LDAP synchronization" setting allows instance administrators
-to lock down user abilities to invite new members to a group. When enabled following happens:
+to lock down user abilities to invite new members to a group.
-1. Only administrator can manage memberships of any group including access levels.
-2. Users are not allowed to share project with other groups or invite members to a project created in a group.
+When enabled, the following applies:
+- Only administrators can manage memberships of any group, including access levels.
+- Users are not allowed to share a project with other groups or invite members to
+  a project created in a group.
## Adjusting LDAP user sync schedule
@@ -211,9 +217,9 @@ sync to run once every 12 hours at the top of the hour.
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_rails['ldap_sync_worker_cron'] = "0 */12 * * *"
- ```
+ ```ruby
+ gitlab_rails['ldap_sync_worker_cron'] = "0 */12 * * *"
+ ```
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
@@ -221,11 +227,11 @@ sync to run once every 12 hours at the top of the hour.
1. Edit `config/gitlab.yaml`:
- ```yaml
- cron_jobs:
- ldap_sync_worker_cron:
- "0 */12 * * *"
- ```
+ ```yaml
+ cron_jobs:
+ ldap_sync_worker_cron:
+ "0 */12 * * *"
+ ```
1. [Restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
@@ -252,9 +258,9 @@ sync to run once every 2 hours at the top of the hour.
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_rails['ldap_group_sync_worker_cron'] = "0 */2 * * * *"
- ```
+ ```ruby
+ gitlab_rails['ldap_group_sync_worker_cron'] = "0 */2 * * * *"
+ ```
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
@@ -262,11 +268,11 @@ sync to run once every 2 hours at the top of the hour.
1. Edit `config/gitlab.yaml`:
- ```yaml
- cron_jobs:
- ldap_group_sync_worker_cron:
- "*/30 * * * *"
- ```
+ ```yaml
+ cron_jobs:
+ ldap_group_sync_worker_cron:
+ "*/30 * * * *"
+ ```
1. [Restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
@@ -283,20 +289,20 @@ task.
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_rails['ldap_servers'] = YAML.load <<-EOS
- main:
- ## snip...
- ##
- ## An array of CNs of groups containing users that should be considered external
- ##
- ## Ex. ['interns', 'contractors']
- ##
- ## Note: Not `cn=interns` or the full DN
- ##
- external_groups: ['interns', 'contractors']
- EOS
- ```
+ ```ruby
+ gitlab_rails['ldap_servers'] = YAML.load <<-EOS
+ main:
+ ## snip...
+ ##
+ ## An array of CNs of groups containing users that should be considered external
+ ##
+ ## Ex. ['interns', 'contractors']
+ ##
+ ## Note: Not `cn=interns` or the full DN
+ ##
+ external_groups: ['interns', 'contractors']
+ EOS
+ ```
1. [Reconfigure GitLab][reconfigure] for the changes to take effect.
@@ -304,14 +310,14 @@ task.
1. Edit `config/gitlab.yaml`:
- ```yaml
- production:
- ldap:
- servers:
- main:
- # snip...
- external_groups: ['interns', 'contractors']
- ```
+ ```yaml
+ production:
+ ldap:
+ servers:
+ main:
+ # snip...
+ external_groups: ['interns', 'contractors']
+ ```
1. [Restart GitLab][restart] for the changes to take effect.
@@ -330,10 +336,18 @@ administrative duties.
### Supported LDAP group types/attributes
-GitLab supports LDAP groups that use member attributes `member`, `submember`,
-`uniquemember`, `memberof` and `memberuid`. This means group sync supports, at
-least, LDAP groups with object class `groupOfNames`, `posixGroup`, and
-`groupOfUniqueName`. Other object classes should work fine as long as members
+GitLab supports LDAP groups that use member attributes:
+
+- `member`
+- `submember`
+- `uniquemember`
+- `memberof`
+- `memberuid`
+
+This means group sync supports, at least, LDAP groups with object class:
+`groupOfNames`, `posixGroup`, and `groupOfUniqueName`.
+
+Other object classes should work fine as long as members
are defined as one of the mentioned attributes. This also means GitLab supports
Microsoft Active Directory, Apple Open Directory, Open LDAP, and 389 Server.
Other LDAP servers should work, too.
@@ -341,11 +355,12 @@ Other LDAP servers should work, too.
Active Directory also supports nested groups. Group sync will recursively
resolve membership if `active_directory: true` is set in the configuration file.
-> **Note:** Nested group membership will only be resolved if the nested group
- also falls within the configured `group_base`. For example, if GitLab sees a
- nested group with DN `cn=nested_group,ou=special_groups,dc=example,dc=com` but
- the configured `group_base` is `ou=groups,dc=example,dc=com`, `cn=nested_group`
- will be ignored.
+NOTE: **Note:**
+Nested group membership will only be resolved if the nested group
+also falls within the configured `group_base`. For example, if GitLab sees a
+nested group with DN `cn=nested_group,ou=special_groups,dc=example,dc=com` but
+the configured `group_base` is `ou=groups,dc=example,dc=com`, `cn=nested_group`
+will be ignored.
### Queries
@@ -400,7 +415,7 @@ main: # 'main' is the GitLab 'provider ID' of this LDAP server
[^1]: In Active Directory, a user is marked as disabled/blocked if the user
account control attribute (`userAccountControl:1.2.840.113556.1.4.803`)
- has bit 2 set. See https://ctogonewild.com/2009/09/03/bitmask-searches-in-ldap/
+ has bit 2 set. See <https://ctogonewild.com/2009/09/03/bitmask-searches-in-ldap/>
for more information.
### User DN has changed
@@ -420,10 +435,10 @@ things to check to debug the situation.
- Ensure LDAP configuration has a `group_base` specified. This configuration is
required for group sync to work properly.
- Ensure the correct LDAP group link is added to the GitLab group. Check group
- links by visiting the GitLab group, then **Settings dropdown -> LDAP groups**.
-- Check that the user has an LDAP identity
+ links by visiting the GitLab group, then **Settings dropdown > LDAP groups**.
+- Check that the user has an LDAP identity:
1. Sign in to GitLab as an administrator user.
- 1. Navigate to **Admin area -> Users**.
+ 1. Navigate to **Admin area > Users**.
1. Search for the user
1. Open the user, by clicking on their name. Do not click 'Edit'.
1. Navigate to the **Identities** tab. There should be an LDAP identity with
@@ -434,68 +449,74 @@ Often, the best way to learn more about why group sync is behaving a certain
way is to enable debug logging. There is verbose output that details every
step of the sync.
-1. Start a Rails console
+1. Start a Rails console:
+
+ ```bash
+ # For Omnibus installations
+ sudo gitlab-rails console
- ```bash
- # For Omnibus installations
- sudo gitlab-rails console
+ # For installations from source
+ sudo -u git -H bundle exec rails console production
+ ```
- # For installations from source
- sudo -u git -H bundle exec rails console production
- ```
1. Set the log level to debug (only for this session):
- ```ruby
- Rails.logger.level = Logger::DEBUG
- ```
+ ```ruby
+ Rails.logger.level = Logger::DEBUG
+ ```
+
1. Choose a GitLab group to test with. This group should have an LDAP group link
already configured. If the output is `nil`, the group could not be found.
If a bunch of group attributes are output, your group was found successfully.
- ```ruby
- group = Group.find_by(name: 'my_group')
+ ```ruby
+ group = Group.find_by(name: 'my_group')
+
+ # Output
+ => #<Group:0x007fe825196558 id: 1234, name: "my_group"...>
+ ```
- # Output
- => #<Group:0x007fe825196558 id: 1234, name: "my_group"...>
- ```
1. Run a group sync for this particular group.
- ```ruby
- EE::Gitlab::Auth::LDAP::Sync::Group.execute_all_providers(group)
- ```
+ ```ruby
+ EE::Gitlab::Auth::LDAP::Sync::Group.execute_all_providers(group)
+ ```
+
1. Look through the output of the sync. See [example log output](#example-log-output)
below for more information about the output.
1. If you still aren't able to see why the user isn't being added, query the
LDAP group directly to see what members are listed. Still in the Rails console,
run the following query:
- ```ruby
- adapter = Gitlab::Auth::LDAP::Adapter.new('ldapmain') # If `main` is the LDAP provider
- ldap_group = EE::Gitlab::Auth::LDAP::Group.find_by_cn('group_cn_here', adapter)
+ ```ruby
+ adapter = Gitlab::Auth::LDAP::Adapter.new('ldapmain') # If `main` is the LDAP provider
+ ldap_group = EE::Gitlab::Auth::LDAP::Group.find_by_cn('group_cn_here', adapter)
+
+ # Output
+ => #<EE::Gitlab::Auth::LDAP::Group:0x007fcbdd0bb6d8
+ ```
- # Output
- => #<EE::Gitlab::Auth::LDAP::Group:0x007fcbdd0bb6d8
- ```
1. Query the LDAP group's member DNs and see if the user's DN is in the list.
One of the DNs here should match the 'Identifier' from the LDAP identity
checked earlier. If it doesn't, the user does not appear to be in the LDAP
group.
- ```ruby
- ldap_group.member_dns
+ ```ruby
+ ldap_group.member_dns
+
+ # Output
+ => ["uid=john,ou=people,dc=example,dc=com", "uid=mary,ou=people,dc=example,dc=com"]
+ ```
- # Output
- => ["uid=john,ou=people,dc=example,dc=com", "uid=mary,ou=people,dc=example,dc=com"]
- ```
1. Some LDAP servers don't store members by DN. Rather, they use UIDs instead.
If you didn't see results from the last query, try querying by UIDs instead.
- ```ruby
- ldap_group.member_uids
+ ```ruby
+ ldap_group.member_uids
- # Output
- => ['john','mary']
- ```
+ # Output
+ => ['john','mary']
+ ```
#### Example log output
@@ -532,8 +553,9 @@ and more DNs may be added, or existing entries modified, based on additional
LDAP group lookups. The very last occurrence of this entry should indicate
exactly which users GitLab believes should be added to the group.
-> **Note:** 10 is 'Guest', 20 is 'Reporter', 30 is 'Developer', 40 is 'Maintainer'
- and 50 is 'Owner'
+NOTE: **Note:**
+10 is 'Guest', 20 is 'Reporter', 30 is 'Developer', 40 is 'Maintainer'
+and 50 is 'Owner'.
```bash
Resolved 'my_group' group member access: {"uid=john0,ou=people,dc=example,dc=com"=>30,
diff --git a/doc/administration/auth/ldap.md b/doc/administration/auth/ldap.md
index 2144f5753a8..186bf4c4825 100644
--- a/doc/administration/auth/ldap.md
+++ b/doc/administration/auth/ldap.md
@@ -1,30 +1,50 @@
+---
+type: reference
+---
+
<!-- If the change is EE-specific, put it in `ldap-ee.md`, NOT here. -->
# LDAP
GitLab integrates with LDAP to support user authentication.
-This integration works with most LDAP-compliant directory
-servers, including Microsoft Active Directory, Apple Open Directory, Open LDAP,
-and 389 Server. GitLab Enterprise Editions include enhanced integration,
+
+This integration works with most LDAP-compliant directory servers, including:
+
+- Microsoft Active Directory
+- Apple Open Directory
+- Open LDAP
+- 389 Server
+
+GitLab Enterprise Editions (EE) include enhanced integration,
including group membership syncing as well as multiple LDAP servers support.
-## GitLab EE
+For more details about EE-specific LDAP features, see the
+[LDAP Enterprise Edition documentation](ldap-ee.md).
+
+NOTE: **Note:**
+The information on this page is relevant for both GitLab CE and EE.
+
+## Overview
-The information on this page is relevant for both GitLab CE and EE. For more
-details about EE-specific LDAP features, see the
-[LDAP Enterprise Edition documentation](ldap-ee.md). **(STARTER ONLY)**
+[LDAP](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol)
+stands for **Lightweight Directory Access Protocol**, which is a standard
+application protocol for accessing and maintaining distributed directory
+information services over an Internet Protocol (IP) network.
## Security
-GitLab assumes that LDAP users are not able to change their LDAP 'mail', 'email'
-or 'userPrincipalName' attribute. An LDAP user who is allowed to change their
-email on the LDAP server can potentially
-[take over any account](#enabling-ldap-sign-in-for-existing-gitlab-users)
-on your GitLab server.
+GitLab assumes that LDAP users:
+
+- Are not able to change their LDAP `mail`, `email`, or `userPrincipalName` attribute.
+ An LDAP user who is allowed to change their email on the LDAP server can potentially
+ [take over any account](#enabling-ldap-sign-in-for-existing-gitlab-users)
+ on your GitLab server.
+- Have unique email addresses, otherwise it is possible for LDAP users with the same
+ email address to share the same GitLab account.
We recommend against using LDAP integration if your LDAP users are
-allowed to change their 'mail', 'email' or 'userPrincipalName' attribute on
-the LDAP server.
+allowed to change their 'mail', 'email' or 'userPrincipalName' attribute on
+the LDAP server or share email addresses.
### User deletion
@@ -60,9 +80,12 @@ NOTE: **Note**:
In GitLab Enterprise Edition Starter, you can configure multiple LDAP servers
to connect to one GitLab server.
-For a complete guide on configuring LDAP with GitLab Community Edition, please check
-the admin guide [How to configure LDAP with GitLab CE](how_to_configure_ldap_gitlab_ce/index.md).
-For GitLab Enterprise Editions, see also [How to configure LDAP with GitLab EE](how_to_configure_ldap_gitlab_ee/index.md). **(STARTER ONLY)**
+For a complete guide on configuring LDAP with:
+
+- GitLab Community Edition, see
+ [How to configure LDAP with GitLab CE](how_to_configure_ldap_gitlab_ce/index.md).
+- Enterprise Editions, see
+ [How to configure LDAP with GitLab EE](how_to_configure_ldap_gitlab_ee/index.md). **(STARTER ONLY)**
To enable LDAP integration you need to add your LDAP server settings in
`/etc/gitlab/gitlab.rb` or `/home/git/gitlab/config/gitlab.yml` for Omnibus
@@ -380,7 +403,7 @@ production:
Tip: If you want to limit access to the nested members of an Active Directory
group, you can use the following syntax:
-```
+```text
(memberOf:1.2.840.113556.1.4.1941:=CN=My Group,DC=Example,DC=com)
```
@@ -398,30 +421,30 @@ The `user_filter` DN can contain special characters. For example:
- A comma:
- ```
- OU=GitLab, Inc,DC=gitlab,DC=com
- ```
+ ```text
+ OU=GitLab, Inc,DC=gitlab,DC=com
+ ```
- Open and close brackets:
- ```
- OU=Gitlab (Inc),DC=gitlab,DC=com
- ```
+ ```text
+ OU=Gitlab (Inc),DC=gitlab,DC=com
+ ```
- These characters must be escaped as documented in
- [RFC 4515](https://tools.ietf.org/search/rfc4515).
+ These characters must be escaped as documented in
+ [RFC 4515](https://tools.ietf.org/search/rfc4515).
- Escape commas with `\2C`. For example:
- ```
- OU=GitLab\2C Inc,DC=gitlab,DC=com
- ```
+ ```text
+ OU=GitLab\2C Inc,DC=gitlab,DC=com
+ ```
- Escape open and close brackets with `\28` and `\29`, respectively. For example:
- ```
- OU=Gitlab \28Inc\29,DC=gitlab,DC=com
- ```
+ ```text
+ OU=Gitlab \28Inc\29,DC=gitlab,DC=com
+ ```
## Enabling LDAP sign-in for existing GitLab users
@@ -445,13 +468,13 @@ the configuration option `lowercase_usernames`. By default, this configuration o
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_rails['ldap_servers'] = YAML.load <<-EOS
- main:
- # snip...
- lowercase_usernames: true
- EOS
- ```
+ ```ruby
+ gitlab_rails['ldap_servers'] = YAML.load <<-EOS
+ main:
+ # snip...
+ lowercase_usernames: true
+ EOS
+ ```
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
@@ -459,14 +482,14 @@ the configuration option `lowercase_usernames`. By default, this configuration o
1. Edit `config/gitlab.yaml`:
- ```yaml
- production:
- ldap:
- servers:
- main:
- # snip...
- lowercase_usernames: true
- ```
+ ```yaml
+ production:
+ ldap:
+ servers:
+ main:
+ # snip...
+ lowercase_usernames: true
+ ```
1. [Restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
@@ -494,13 +517,20 @@ be mandatory and clients cannot be authenticated with the TLS protocol.
## Troubleshooting
+If a user account is blocked or unblocked due to the LDAP configuration, a
+message will be logged to `application.log`.
+
+If there is an unexpected error during an LDAP lookup (configuration error,
+timeout), the login is rejected and a message will be logged to
+`production.log`.
+
### Debug LDAP user filter with ldapsearch
-This example uses ldapsearch and assumes you are using ActiveDirectory. The
+This example uses `ldapsearch` and assumes you are using Active Directory. The
following query returns the login names of the users that will be allowed to
log in to GitLab if you configure your own user_filter.
-```
+```sh
ldapsearch -H ldaps://$host:$port -D "$bind_dn" -y bind_dn_password.txt -b "$base" "$user_filter" sAMAccountName
```
@@ -519,26 +549,17 @@ ldapsearch -H ldaps://$host:$port -D "$bind_dn" -y bind_dn_password.txt -b "$ba
- Run the following check command to make sure that the LDAP settings are
correct and GitLab can see your users:
- ```bash
- # For Omnibus installations
- sudo gitlab-rake gitlab:ldap:check
+ ```bash
+ # For Omnibus installations
+ sudo gitlab-rake gitlab:ldap:check
- # For installations from source
- sudo -u git -H bundle exec rake gitlab:ldap:check RAILS_ENV=production
- ```
+ # For installations from source
+ sudo -u git -H bundle exec rake gitlab:ldap:check RAILS_ENV=production
+ ```
-### Connection Refused
+### Connection refused
If you are getting 'Connection Refused' errors when trying to connect to the
LDAP server please double-check the LDAP `port` and `encryption` settings used by
GitLab. Common combinations are `encryption: 'plain'` and `port: 389`, OR
`encryption: 'simple_tls'` and `port: 636`.
-
-### Troubleshooting
-
-If a user account is blocked or unblocked due to the LDAP configuration, a
-message will be logged to `application.log`.
-
-If there is an unexpected error during an LDAP lookup (configuration error,
-timeout), the login is rejected and a message will be logged to
-`production.log`.
diff --git a/doc/administration/auth/oidc.md b/doc/administration/auth/oidc.md
index 6e48add6930..3445f916ef4 100644
--- a/doc/administration/auth/oidc.md
+++ b/doc/administration/auth/oidc.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# OpenID Connect OmniAuth provider
GitLab can use [OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.html) as an OmniAuth provider.
@@ -5,101 +9,103 @@ GitLab can use [OpenID Connect](https://openid.net/specs/openid-connect-core-1_0
To enable the OpenID Connect OmniAuth provider, you must register your application with an OpenID Connect provider.
The OpenID Connect will provide you with a client details and secret for you to use.
-1. On your GitLab server, open the configuration file.
-
- For Omnibus GitLab:
-
- ```sh
- sudo editor /etc/gitlab/gitlab.rb
- ```
-
- For installations from source:
-
- ```sh
- cd /home/git/gitlab
- sudo -u git -H editor config/gitlab.yml
- ```
-
- See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration) for initial settings.
-
-1. Add the provider configuration.
-
- For Omnibus GitLab:
-
- ```ruby
- gitlab_rails['omniauth_providers'] = [
- { 'name' => 'openid_connect',
- 'label' => '<your_oidc_label>',
- 'args' => {
- "name' => 'openid_connect',
- 'scope' => ['openid','profile'],
- 'response_type' => 'code',
- 'issuer' => '<your_oidc_url>',
- 'discovery' => true,
- 'client_auth_method' => 'query',
- 'uid_field' => '<uid_field>',
- 'client_options' => {
- 'identifier' => '<your_oidc_client_id>',
- 'secret' => '<your_oidc_client_secret>',
- 'redirect_uri' => '<your_gitlab_url>/users/auth/openid_connect/callback'
- }
- }
- }
- ]
- ```
-
- For installation from source:
-
- ```yaml
- - { name: 'openid_connect',
- label: '<your_oidc_label>',
- args: {
- name: 'openid_connect',
- scope: ['openid','profile'],
- response_type: 'code',
- issuer: '<your_oidc_url>',
- discovery: true,
- client_auth_method: 'query',
- uid_field: '<uid_field>',
- client_options: {
- identifier: '<your_oidc_client_id>',
- secret: '<your_oidc_client_secret>',
- redirect_uri: '<your_gitlab_url>/users/auth/openid_connect/callback'
- }
- }
- }
- ```
-
- > **Note:**
- >
- > - For more information on each configuration option refer to
- the [OmniAuth OpenID Connect usage documentation](https://github.com/m0n9oose/omniauth_openid_connect#usage) and
- the [OpenID Connect Core 1.0 specification](https://openid.net/specs/openid-connect-core-1_0.html).
+1. On your GitLab server, open the configuration file.
+
+ For Omnibus GitLab:
+
+ ```sh
+ sudo editor /etc/gitlab/gitlab.rb
+ ```
+
+ For installations from source:
+
+ ```sh
+ cd /home/git/gitlab
+ sudo -u git -H editor config/gitlab.yml
+ ```
+
+ See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration) for initial settings.
+
+1. Add the provider configuration.
+
+ For Omnibus GitLab:
+
+ ```ruby
+ gitlab_rails['omniauth_providers'] = [
+ { 'name' => 'openid_connect',
+ 'label' => '<your_oidc_label>',
+ 'args' => {
+      'name' => 'openid_connect',
+ 'scope' => ['openid','profile'],
+ 'response_type' => 'code',
+ 'issuer' => '<your_oidc_url>',
+ 'discovery' => true,
+ 'client_auth_method' => 'query',
+ 'uid_field' => '<uid_field>',
+ 'client_options' => {
+ 'identifier' => '<your_oidc_client_id>',
+ 'secret' => '<your_oidc_client_secret>',
+ 'redirect_uri' => '<your_gitlab_url>/users/auth/openid_connect/callback'
+ }
+ }
+ }
+ ]
+ ```
+
+ For installation from source:
+
+ ```yaml
+ - { name: 'openid_connect',
+ label: '<your_oidc_label>',
+ args: {
+ name: 'openid_connect',
+ scope: ['openid','profile'],
+ response_type: 'code',
+ issuer: '<your_oidc_url>',
+ discovery: true,
+ client_auth_method: 'query',
+ uid_field: '<uid_field>',
+ client_options: {
+ identifier: '<your_oidc_client_id>',
+ secret: '<your_oidc_client_secret>',
+ redirect_uri: '<your_gitlab_url>/users/auth/openid_connect/callback'
+ }
+ }
+ }
+ ```
+
+ NOTE: **Note:**
+ For more information on each configuration option refer to the [OmniAuth OpenID Connect usage documentation](https://github.com/m0n9oose/omniauth_openid_connect#usage)
+ and the [OpenID Connect Core 1.0 specification](https://openid.net/specs/openid-connect-core-1_0.html).
1. For the configuration above, change the values for the provider to match your OpenID Connect client setup. Use the following as a guide:
- - `<your_oidc_label>` is the label that will be displayed on the login page.
- - `<your_oidc_url>` (optional) is the URL that points to the OpenID Connect provider. For example, `https://example.com/auth/realms/your-realm`.
- If this value is not provided, the URL is constructed from the `client_options` in the following format: `<client_options.scheme>://<client_options.host>:<client_options.port>`.
- - If `discovery` is set to `true`, the OpenID Connect provider will try to auto discover the client options using `<your_oidc_url>/.well-known/openid-configuration`. Defaults to `false`.
- - `<uid_field>` (optional) is the field name from the `user_info` details that will be used as `uid` value. For example, `preferred_username`.
- If this value is not provided or the field with the configured value is missing from the `user_info` details, the `uid` will use the `sub` field.
- - `client_options` are the OpenID Connect client-specific options. Specifically:
-
- - `identifier` is the client identifier as configured in the OpenID Connect service provider.
- - `secret` is the client secret as configured in the OpenID Connect service provider.
- - `redirect_uri` is the GitLab URL to redirect the user to after successful login. For example, `http://example.com/users/auth/openid_connect/callback`.
- - `end_session_endpoint` (optional) is the URL to the endpoint that end the session (logout). Can be provided if auto-discovery disabled or unsuccessful.
-
- The following `client_options` are optional unless auto-discovery is disabled or unsuccessful:
-
- - `authorization_endpoint` is the URL to the endpoint that authorizes the end user.
- - `token_endpoint` is the URL to the endpoint that provides Access Token.
- - `userinfo_endpoint` is the URL to the endpoint that provides the user information.
- - `jwks_uri` is the URL to the endpoint where the Token signer publishes its keys.
-
-1. Save the configuration file.
-1. [Reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure) or [restart GitLab](../restart_gitlab.md#installations-from-source)
- for the changes to take effect if you installed GitLab via Omnibus or from source respectively.
+ - `<your_oidc_label>` is the label that will be displayed on the login page.
+ - `<your_oidc_url>` (optional) is the URL that points to the OpenID Connect provider. For example, `https://example.com/auth/realms/your-realm`.
+ If this value is not provided, the URL is constructed from the `client_options` in the following format: `<client_options.scheme>://<client_options.host>:<client_options.port>`.
+   - If `discovery` is set to `true`, GitLab will try to auto-discover the client options using `<your_oidc_url>/.well-known/openid-configuration`. Defaults to `false`.
+ - `client_auth_method` (optional) specifies the method used for authenticating the client with the OpenID Connect provider.
+ - Supported values are:
+ - `basic` - HTTP Basic Authentication
+ - `jwt_bearer` - JWT based authentication (private key and client secret signing)
+ - `mtls` - Mutual TLS or X.509 certificate validation
+ - Any other value will POST the client id and secret in the request body
+ - If not specified, defaults to `basic`.
+ - `<uid_field>` (optional) is the field name from the `user_info` details that will be used as `uid` value. For example, `preferred_username`.
+ If this value is not provided or the field with the configured value is missing from the `user_info` details, the `uid` will use the `sub` field.
+ - `client_options` are the OpenID Connect client-specific options. Specifically:
+ - `identifier` is the client identifier as configured in the OpenID Connect service provider.
+ - `secret` is the client secret as configured in the OpenID Connect service provider.
+ - `redirect_uri` is the GitLab URL to redirect the user to after successful login. For example, `http://example.com/users/auth/openid_connect/callback`.
+     - `end_session_endpoint` (optional) is the URL to the endpoint that ends the session (logout). It can be provided if auto-discovery is disabled or unsuccessful.
+ - The following `client_options` are optional unless auto-discovery is disabled or unsuccessful:
+ - `authorization_endpoint` is the URL to the endpoint that authorizes the end user.
+     - `token_endpoint` is the URL to the endpoint that provides the Access Token.
+ - `userinfo_endpoint` is the URL to the endpoint that provides the user information.
+ - `jwks_uri` is the URL to the endpoint where the Token signer publishes its keys.
+
+1. Save the configuration file.
+1. [Reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure) or [restart GitLab](../restart_gitlab.md#installations-from-source)
+ for the changes to take effect if you installed GitLab via Omnibus or from source respectively.
On the sign in page, there should now be an OpenID Connect icon below the regular sign in form.
Click the icon to begin the authentication process. The OpenID Connect provider will ask the user to
@@ -139,7 +145,7 @@ for more details:
}
```
-### Troubleshooting
+## Troubleshooting
If you're having trouble, here are some tips:
@@ -155,9 +161,9 @@ If you're having trouble, here are some tips:
`https://accounts.google.com/.well-known/openid-configuration`.
1. The OpenID Connect client uses HTTP Basic Authentication to send the
- OAuth2 access token. For example, if you are seeing 401 errors upon
- retrieving the `userinfo` endpoint, you may want to check your OpenID
- Web server configuration. For example, for
+ OAuth2 access token if `client_auth_method` is not defined or if set to `basic`.
+ If you are seeing 401 errors upon retrieving the `userinfo` endpoint, you may
+ want to check your OpenID Web server configuration. For example, for
[oauth2-server-php](https://github.com/bshaffer/oauth2-server-php), you
may need to [add a configuration parameter to
Apache](https://github.com/bshaffer/oauth2-server-php/issues/926#issuecomment-387502778).
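
   One way to narrow this down is to call the `userinfo` endpoint manually with a
   known-good access token, for example with `curl` (the endpoint URL and the token
   are placeholders):

   ```sh
   curl --verbose --header "Authorization: Bearer $ACCESS_TOKEN" "https://idp.example.com/userinfo"
   ```

   If this request also returns a 401, the problem is in the Web server configuration
   rather than in GitLab.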
diff --git a/doc/administration/auth/okta.md b/doc/administration/auth/okta.md
index aa4e1b0d2e0..5524c3ba092 100644
--- a/doc/administration/auth/okta.md
+++ b/doc/administration/auth/okta.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# Okta SSO provider
Okta is a [Single Sign-on provider](https://www.okta.com/products/single-sign-on/) that can be used to authenticate
@@ -16,7 +20,7 @@ The following documentation enables Okta as a SAML provider.
1. Next, you'll need to fill in the SAML general config. Here's an example
image.
- ![Okta admin panel view](img/okta_admin_panel.png)
+ ![Okta admin panel view](img/okta_admin_panel.png)
1. The last part of the configuration is the feedback section where you can
   just say you're a customer and are creating an app for internal use.
@@ -24,7 +28,7 @@ The following documentation enables Okta as a SAML provider.
profile. Click on the SAML 2.0 config instructions button which should
look like the following:
- ![Okta SAML settings](img/okta_saml_settings.png)
+ ![Okta SAML settings](img/okta_saml_settings.png)
1. On the screen that comes up take note of the
**Identity Provider Single Sign-On URL** which you'll use for the
@@ -38,112 +42,112 @@ Now that the Okta app is configured, it's time to enable it in GitLab.
## Configure GitLab
-1. On your GitLab server, open the configuration file:
-
- **For Omnibus GitLab installations**
-
- ```sh
- sudo editor /etc/gitlab/gitlab.rb
- ```
-
- **For installations from source**
-
- ```sh
- cd /home/git/gitlab
- sudo -u git -H editor config/gitlab.yml
- ```
-
-1. See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration)
- for initial settings.
-
-1. To allow your users to use Okta to sign up without having to manually create
- an account first, don't forget to add the following values to your
- configuration:
-
- **For Omnibus GitLab installations**
-
- ```ruby
- gitlab_rails['omniauth_allow_single_sign_on'] = ['saml']
- gitlab_rails['omniauth_block_auto_created_users'] = false
- ```
-
- **For installations from source**
-
- ```yaml
- allow_single_sign_on: ["saml"]
- block_auto_created_users: false
- ```
-
-1. You can also automatically link Okta users with existing GitLab users if
- their email addresses match by adding the following setting:
-
- **For Omnibus GitLab installations**
-
- ```ruby
- gitlab_rails['omniauth_auto_link_saml_user'] = true
- ```
-
- **For installations from source**
-
- ```yaml
- auto_link_saml_user: true
- ```
-
-1. Add the provider configuration.
-
- >**Notes:**
- >
- >- Change the value for `assertion_consumer_service_url` to match the HTTPS endpoint
- of GitLab (append `users/auth/saml/callback` to the HTTPS URL of your GitLab
- installation to generate the correct value).
- >
- >- To get the `idp_cert_fingerprint` fingerprint, first download the
- certificate from the Okta app you registered and then run:
- `openssl x509 -in okta.cert -noout -fingerprint`. Substitute `okta.cert`
- with the location of your certificate.
- >
- >- Change the value of `idp_sso_target_url`, with the value of the
- **Identity Provider Single Sign-On URL** from the step when you
- configured the Okta app.
- >
- >- Change the value of `issuer` to the value of the **Audience Restriction** from your Okta app configuration. This will identify GitLab
- to the IdP.
- >
- >- Leave `name_identifier_format` as-is.
-
- **For Omnibus GitLab installations**
-
- ```ruby
- gitlab_rails['omniauth_providers'] = [
- {
- name: 'saml',
- args: {
- assertion_consumer_service_url: 'https://gitlab.example.com/users/auth/saml/callback',
- idp_cert_fingerprint: '43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8',
- idp_sso_target_url: 'https://gitlab.oktapreview.com/app/gitlabdev773716_gitlabsaml_1/exk8odl81tBrjpD4B0h7/sso/saml',
- issuer: 'https://gitlab.example.com',
- name_identifier_format: 'urn:oasis:names:tc:SAML:2.0:nameid-format:transient'
- },
- label: 'Okta' # optional label for SAML login button, defaults to "Saml"
- }
- ]
- ```
-
- **For installations from source**
-
- ```yaml
- - {
- name: 'saml',
- args: {
- assertion_consumer_service_url: 'https://gitlab.example.com/users/auth/saml/callback',
- idp_cert_fingerprint: '43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8',
- idp_sso_target_url: 'https://gitlab.oktapreview.com/app/gitlabdev773716_gitlabsaml_1/exk8odl81tBrjpD4B0h7/sso/saml',
- issuer: 'https://gitlab.example.com',
- name_identifier_format: 'urn:oasis:names:tc:SAML:2.0:nameid-format:transient'
- },
- label: 'Okta' # optional label for SAML login button, defaults to "Saml"
- }
- ```
+1. On your GitLab server, open the configuration file:
+
+ **For Omnibus GitLab installations**
+
+ ```sh
+ sudo editor /etc/gitlab/gitlab.rb
+ ```
+
+ **For installations from source**
+
+ ```sh
+ cd /home/git/gitlab
+ sudo -u git -H editor config/gitlab.yml
+ ```
+
+1. See [Initial OmniAuth Configuration](../../integration/omniauth.md#initial-omniauth-configuration)
+ for initial settings.
+
+1. To allow your users to use Okta to sign up without having to manually create
+ an account first, don't forget to add the following values to your
+ configuration:
+
+ **For Omnibus GitLab installations**
+
+ ```ruby
+ gitlab_rails['omniauth_allow_single_sign_on'] = ['saml']
+ gitlab_rails['omniauth_block_auto_created_users'] = false
+ ```
+
+ **For installations from source**
+
+ ```yaml
+ allow_single_sign_on: ["saml"]
+ block_auto_created_users: false
+ ```
+
+1. You can also automatically link Okta users with existing GitLab users if
+ their email addresses match by adding the following setting:
+
+ **For Omnibus GitLab installations**
+
+ ```ruby
+ gitlab_rails['omniauth_auto_link_saml_user'] = true
+ ```
+
+ **For installations from source**
+
+ ```yaml
+ auto_link_saml_user: true
+ ```
+
+1. Add the provider configuration.
+
+ >**Notes:**
+ >
+ >- Change the value for `assertion_consumer_service_url` to match the HTTPS endpoint
+ of GitLab (append `users/auth/saml/callback` to the HTTPS URL of your GitLab
+ installation to generate the correct value).
+ >
+ >- To get the `idp_cert_fingerprint` fingerprint, first download the
+ certificate from the Okta app you registered and then run:
+ `openssl x509 -in okta.cert -noout -fingerprint`. Substitute `okta.cert`
+ with the location of your certificate.
+ >
+   >- Change the value of `idp_sso_target_url` to the value of the
+ **Identity Provider Single Sign-On URL** from the step when you
+ configured the Okta app.
+ >
+ >- Change the value of `issuer` to the value of the **Audience Restriction** from your Okta app configuration. This will identify GitLab
+ to the IdP.
+ >
+ >- Leave `name_identifier_format` as-is.
+
+ **For Omnibus GitLab installations**
+
+ ```ruby
+ gitlab_rails['omniauth_providers'] = [
+ {
+ name: 'saml',
+ args: {
+ assertion_consumer_service_url: 'https://gitlab.example.com/users/auth/saml/callback',
+ idp_cert_fingerprint: '43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8',
+ idp_sso_target_url: 'https://gitlab.oktapreview.com/app/gitlabdev773716_gitlabsaml_1/exk8odl81tBrjpD4B0h7/sso/saml',
+ issuer: 'https://gitlab.example.com',
+ name_identifier_format: 'urn:oasis:names:tc:SAML:2.0:nameid-format:transient'
+ },
+ label: 'Okta' # optional label for SAML login button, defaults to "Saml"
+ }
+ ]
+ ```
+
+ **For installations from source**
+
+ ```yaml
+ - {
+ name: 'saml',
+ args: {
+ assertion_consumer_service_url: 'https://gitlab.example.com/users/auth/saml/callback',
+ idp_cert_fingerprint: '43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8',
+ idp_sso_target_url: 'https://gitlab.oktapreview.com/app/gitlabdev773716_gitlabsaml_1/exk8odl81tBrjpD4B0h7/sso/saml',
+ issuer: 'https://gitlab.example.com',
+ name_identifier_format: 'urn:oasis:names:tc:SAML:2.0:nameid-format:transient'
+ },
+ label: 'Okta' # optional label for SAML login button, defaults to "Saml"
+ }
+ ```
1. [Reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure) or [restart](../restart_gitlab.md#installations-from-source) GitLab for Omnibus and installations
from source respectively for the changes to take effect.
@@ -157,3 +161,15 @@ Make sure the groups exist and are assigned to the Okta app.
You can take a look at the [SAML documentation](../../integration/saml.md#marking-users-as-external-based-on-saml-groups) on external groups, since
it works the same way.
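
For example, the relevant additions to the SAML provider configuration might look
like the following (a sketch; the attribute name and group names are illustrative
and must match your Okta group configuration, see the linked SAML documentation for
the authoritative settings):

```ruby
gitlab_rails['omniauth_providers'] = [
  {
    name: 'saml',
    args: {
      # ...same args as in the configuration above...
      groups_attribute: 'Groups',
      external_groups: ['Freelancers']
    },
    label: 'Okta'
  }
]
```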
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/auth/smartcard.md b/doc/administration/auth/smartcard.md
index a0d4e9ef3b5..4f236d1afb8 100644
--- a/doc/administration/auth/smartcard.md
+++ b/doc/administration/auth/smartcard.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# Smartcard authentication **(PREMIUM ONLY)**
GitLab supports authentication using smartcards.
@@ -22,7 +26,7 @@ To use a smartcard with an X.509 certificate to authenticate against a local
database with GitLab, `CN` and `emailAddress` must be defined in the
certificate. For example:
-```
+```text
Certificate:
Data:
Version: 1 (0x0)
@@ -56,11 +60,11 @@ attribute. As a prerequisite, you must use an LDAP server that:
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_rails['smartcard_enabled'] = true
- gitlab_rails['smartcard_ca_file'] = "/etc/ssl/certs/CA.pem"
- gitlab_rails['smartcard_client_certificate_required_port'] = 3444
- ```
+ ```ruby
+ gitlab_rails['smartcard_enabled'] = true
+ gitlab_rails['smartcard_ca_file'] = "/etc/ssl/certs/CA.pem"
+ gitlab_rails['smartcard_client_certificate_required_port'] = 3444
+ ```
1. Save the file and [reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure)
GitLab for the changes to take effect.
@@ -154,15 +158,15 @@ attribute. As a prerequisite, you must use an LDAP server that:
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_rails['ldap_servers'] = YAML.load <<-EOS
- main:
- # snip...
- # Enable smartcard authentication against the LDAP server. Valid values
- # are "false", "optional", and "required".
- smartcard_auth: optional
- EOS
- ```
+ ```ruby
+ gitlab_rails['ldap_servers'] = YAML.load <<-EOS
+ main:
+ # snip...
+ # Enable smartcard authentication against the LDAP server. Valid values
+ # are "false", "optional", and "required".
+ smartcard_auth: optional
+ EOS
+ ```
1. Save the file and [reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure)
GitLab for the changes to take effect.
@@ -171,16 +175,16 @@ attribute. As a prerequisite, you must use an LDAP server that:
1. Edit `config/gitlab.yml`:
- ```yaml
- production:
- ldap:
- servers:
- main:
- # snip...
- # Enable smartcard authentication against the LDAP server. Valid values
- # are "false", "optional", and "required".
- smartcard_auth: optional
- ```
+ ```yaml
+ production:
+ ldap:
+ servers:
+ main:
+ # snip...
+ # Enable smartcard authentication against the LDAP server. Valid values
+ # are "false", "optional", and "required".
+ smartcard_auth: optional
+ ```
1. Save the file and [restart](../restart_gitlab.md#installations-from-source)
GitLab for the changes to take effect.
@@ -191,9 +195,9 @@ attribute. As a prerequisite, you must use an LDAP server that:
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_rails['smartcard_required_for_git_access'] = true
- ```
+ ```ruby
+ gitlab_rails['smartcard_required_for_git_access'] = true
+ ```
1. Save the file and [reconfigure](../restart_gitlab.md#omnibus-gitlab-reconfigure)
GitLab for the changes to take effect.
@@ -202,13 +206,25 @@ attribute. As a prerequisite, you must use an LDAP server that:
1. Edit `config/gitlab.yml`:
- ```yaml
- ## Smartcard authentication settings
- smartcard:
- # snip...
- # Browser session with smartcard sign-in is required for Git access
- required_for_git_access: true
- ```
+ ```yaml
+ ## Smartcard authentication settings
+ smartcard:
+ # snip...
+ # Browser session with smartcard sign-in is required for Git access
+ required_for_git_access: true
+ ```
1. Save the file and [restart](../restart_gitlab.md#installations-from-source)
GitLab for the changes to take effect.
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/container_registry.md b/doc/administration/container_registry.md
index 04f52783d22..f5d58db0133 100644
--- a/doc/administration/container_registry.md
+++ b/doc/administration/container_registry.md
@@ -27,8 +27,6 @@ but not recommended and out of the scope of this document.
Read the [insecure Registry documentation][docker-insecure] if you want to
implement this.
----
-
**Installations from source**
If you have installed GitLab from source:
@@ -116,8 +114,6 @@ GitLab from source respectively.
Be careful to choose a port different than the one that Registry listens to (`5000` by default),
otherwise you will run into conflicts.
----
-
**Omnibus GitLab installations**
1. Your `/etc/gitlab/gitlab.rb` should contain the Registry URL as well as the
@@ -141,7 +137,14 @@ otherwise you will run into conflicts.
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
+1. Validate using:
+
+ ```sh
+ openssl s_client -showcerts -servername gitlab.example.com -connect gitlab.example.com:443 > cacert.pem
+ ```
+
+NOTE: **Note:**
+If your certificate provider supplies the CA Bundle certificates, append them to the TLS certificate file.
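+
+For example, assuming the CA bundle and the certificate are stored under
+`/etc/gitlab/ssl/` (both file names are placeholders for your own files):
+
+```sh
+cat /etc/gitlab/ssl/ca_bundle.pem >> /etc/gitlab/ssl/gitlab.example.com.crt
+```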
**Installations from source**
@@ -158,8 +161,6 @@ otherwise you will run into conflicts.
1. Save the file and [restart GitLab][] for the changes to take effect.
1. Make the relevant changes in NGINX as well (domain, port, TLS certificates path).
----
-
Users should now be able to log in to the Container Registry with their GitLab
credentials using:
@@ -171,14 +172,16 @@ docker login gitlab.example.com:4567
If the Registry is configured to use its own domain, you will need a TLS
certificate for that specific domain (e.g., `registry.example.com`) or maybe
-a wildcard certificate if hosted under a subdomain of your existing GitLab
+a wildcard certificate if hosted under a subdomain of your existing GitLab
domain (e.g., `registry.gitlab.example.com`).
+NOTE: **Note:**
+As well as manually generated SSL certificates (explained here), certificates automatically
+generated by Let's Encrypt are also [supported in Omnibus installs](https://docs.gitlab.com/omnibus/settings/ssl.html#host-services).
+
Let's assume that you want the container Registry to be accessible at
`https://registry.gitlab.example.com`.
----
-
**Omnibus GitLab installations**
1. Place your TLS certificate and key in
@@ -210,8 +213,6 @@ look like:
> registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/certificate.key"
> ```
----
-
**Installations from source**
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `registry` entry and
@@ -226,8 +227,6 @@ look like:
1. Save the file and [restart GitLab][] for the changes to take effect.
1. Make the relevant changes in NGINX as well (domain, port, TLS certificates path).
----
-
Users should now be able to log in to the Container Registry using their GitLab
credentials:
@@ -252,8 +251,6 @@ Registry application itself.
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**Installations from source**
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `registry` entry and
@@ -272,8 +269,6 @@ If the Container Registry is enabled, then it will be available on all new
projects. To disable this function and let the owners of a project enable
the Container Registry by themselves, follow the steps below.
----
-
**Omnibus GitLab installations**
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
@@ -284,8 +279,6 @@ the Container Registry by themselves, follow the steps below.
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**Installations from source**
1. Open `/home/git/gitlab/config/gitlab.yml`, find the `default_projects_features`
@@ -321,8 +314,6 @@ This path is accessible to:
> **Warning** You should confirm that all GitLab, Registry and web server users
have access to this directory.
----
-
**Omnibus GitLab installations**
The default location where images are stored in Omnibus is
@@ -336,8 +327,6 @@ The default location where images are stored in Omnibus, is
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**Installations from source**
The default location where images are stored in source installations is
@@ -446,8 +435,6 @@ In the examples below we set the Registry's port to `5001`.
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**Installations from source**
1. Open the configuration file of your Registry server and edit the
@@ -484,7 +471,7 @@ You can use GitLab as an auth endpoint and use a non-bundled Container Registry.
```
1. A certificate keypair is required for GitLab and the Container Registry to
- communicate securely. By default omnibus-gitlab will generate one keypair,
+ communicate securely. By default Omnibus GitLab will generate one keypair,
which is saved to `/var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key`.
When using a non-bundled Container Registry, you will need to supply a
custom certificate key. To do that, add the following to
@@ -500,7 +487,7 @@ You can use GitLab as an auth endpoint and use a non-bundled Container Registry.
**Note:** The file specified at `registry_key_path` gets populated with the
content specified by `internal_key`, each time reconfigure is executed. If
- no file is specified, omnibus-gitlab will default it to
+ no file is specified, Omnibus GitLab will default it to
`/var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key` and will populate
it.
@@ -565,8 +552,6 @@ To configure a notification endpoint in Omnibus:
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**Installations from source**
Configuring the notification endpoint is done in your registry config YML file created
@@ -640,8 +625,6 @@ Start with a value between `25000000` (25MB) and `50000000` (50MB).
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**For installations from source**
1. Edit `config/gitlab.yml`:
@@ -673,8 +656,6 @@ You can add a configuration option for backwards compatibility.
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**For installations from source**
1. Edit the YML configuration file you created when you [deployed the registry][registry-deploy]. Add the following snippet:
@@ -696,11 +677,43 @@ project or branch name. Special characters can include:
- Trailing hyphen/dash
- Double hyphen/dash
-To get around this, you can [change the group path](../user/group/index.md#changing-a-groups-path),
-[change the project path](../user/project/settings/index.md#renaming-a-repository) or change the
-branch name. Another option is to create a [push rule](../push_rules/push_rules.html) to prevent
+To get around this, you can [change the group path](../user/group/index.md#changing-a-groups-path),
+[change the project path](../user/project/settings/index.md#renaming-a-repository) or change the
+branch name. Another option is to create a [push rule](../push_rules/push_rules.html) to prevent
this at the instance level.
+### Image push errors
+
+If you get errors or "retrying" loops when attempting to push an image, but `docker login` works fine,
+there is likely an issue with the headers forwarded to the Registry by NGINX. The default recommended
+NGINX configurations should handle this, but it might occur in custom setups where SSL is
+offloaded to a third-party reverse proxy.
+
+This problem was discussed in a [Docker project issue][docker-image-push-issue]; a simple solution is
+to enable relative URLs in the Registry.
+
+**For Omnibus installations**
+
+1. Edit `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ registry['env'] = {
+ "REGISTRY_HTTP_RELATIVEURLS" => true
+ }
+ ```
+
+1. Save the file and [reconfigure GitLab][] for the changes to take effect.
+
+**For installations from source**
+
+1. Edit the YML configuration file you created when you [deployed the registry][registry-deploy]. Add the following snippet:
+
+ ```yaml
+ http:
+ relativeurls: true
+ ```
+
+1. Restart the registry for the changes to take effect.
[ce-18239]: https://gitlab.com/gitlab-org/gitlab-ce/issues/18239
[docker-insecure-self-signed]: https://docs.docker.com/registry/insecure/#use-self-signed-certificates
@@ -720,3 +733,4 @@ this at the instance level.
[new-domain]: #configure-container-registry-under-its-own-domain
[notifications-config]: https://docs.docker.com/registry/notifications/
[registry-notifications-config]: https://docs.docker.com/registry/configuration/#notifications
+[docker-image-push-issue]: https://github.com/docker/distribution/issues/970
diff --git a/doc/administration/custom_hooks.md b/doc/administration/custom_hooks.md
index 113514e1ee8..7238d08ab09 100644
--- a/doc/administration/custom_hooks.md
+++ b/doc/administration/custom_hooks.md
@@ -12,17 +12,17 @@ NOTE: **Note:**
Custom Git hooks won't be replicated to secondary nodes if you use [GitLab Geo](geo/replication/index.md)
Git natively supports hooks that are executed on different actions.
-Examples of server-side git hooks include pre-receive, post-receive, and update.
+Examples of server-side Git hooks include pre-receive, post-receive, and update.
See [Git SCM Server-Side Hooks][hooks] for more information about each hook type.
-As of gitlab-shell version 2.2.0 (which requires GitLab 7.5+), GitLab
-administrators can add custom git hooks to any GitLab project.
+As of GitLab Shell version 2.2.0 (which requires GitLab 7.5+), GitLab
+administrators can add custom Git hooks to any GitLab project.
## Create a custom Git hook for a repository
Server-side Git hooks are typically placed in the repository's `hooks`
-subdirectory. In GitLab, hook directories are are symlinked to the gitlab-shell
-`hooks` directory for ease of maintenance between gitlab-shell upgrades.
+subdirectory. In GitLab, hook directories are symlinked to the GitLab Shell
+`hooks` directory for ease of maintenance between GitLab Shell upgrades.
Custom hooks are implemented differently, but the behavior is exactly the same
once the hook is created. Follow the steps below to set up a custom hook for a
repository:
@@ -36,7 +36,7 @@ repository:
1. Inside the new `custom_hooks` directory, create a file with a name matching
the hook type. For a pre-receive hook the file name should be `pre-receive`
with no extension.
-1. Make the hook file executable and make sure it's owned by git.
+1. Make the hook file executable and make sure it's owned by Git.
1. Write the code to make the Git hook function as expected. Hooks can be
in any language. Ensure the 'shebang' at the top properly reflects the language
type. For example, if the script is in Ruby the shebang will probably be
@@ -49,17 +49,17 @@ as appropriate.
To create a Git hook that applies to all of your repositories in
your instance, set a global Git hook. Since all the repositories' `hooks`
-directories are symlinked to gitlab-shell's `hooks` directory, adding any hook
-to the gitlab-shell `hooks` directory will also apply it to all repositories. Follow
+directories are symlinked to GitLab Shell's `hooks` directory, adding any hook
+to the GitLab Shell `hooks` directory will also apply it to all repositories. Follow
the steps below to properly set up a custom hook for all repositories:
1. On the GitLab server, navigate to the configured custom hook directory. The
- default is in the gitlab-shell directory. The gitlab-shell `hook` directory
+   default is in the GitLab Shell directory. For an installation from source, the
   GitLab Shell `hooks` directory path is usually
`/home/git/gitlab-shell/hooks`. For Omnibus installs the path is usually
`/opt/gitlab/embedded/service/gitlab-shell/hooks`.
To look in a different directory for the global custom hooks,
- set `custom_hooks_dir` in the gitlab-shell config. For
+ set `custom_hooks_dir` in the GitLab Shell config. For
Omnibus installations, this can be set in `gitlab.rb`; and in source
installations, this can be set in `gitlab-shell/config.yml`.
1. Create a new directory in this location. Depending on your hook, it will be
diff --git a/doc/administration/database_load_balancing.md b/doc/administration/database_load_balancing.md
index dc4cc401fca..64eca0b00f6 100644
--- a/doc/administration/database_load_balancing.md
+++ b/doc/administration/database_load_balancing.md
@@ -122,6 +122,7 @@ production:
discover:
nameserver: localhost
record: secondary.postgresql.service.consul
+ record_type: A
port: 8600
interval: 60
disconnect_timeout: 120
@@ -137,12 +138,16 @@ The following options can be set:
| Option | Description | Default |
|----------------------|---------------------------------------------------------------------------------------------------|-----------|
| `nameserver` | The nameserver to use for looking up the DNS record. | localhost |
-| `record` | The A record to look up. This option is required for service discovery to work. | |
+| `record` | The record to look up. This option is required for service discovery to work. | |
+| `record_type` | Optional record type to look up. Can be either `A` or `SRV` (introduced in GitLab 12.3). | A |
| `port` | The port of the nameserver. | 8600 |
| `interval` | The minimum time in seconds between checking the DNS record. | 60 |
| `disconnect_timeout` | The time in seconds after which an old connection is closed, after the list of hosts was updated. | 120 |
| `use_tcp` | Lookup DNS resources using TCP instead of UDP | false |
+If `record_type` is set to `SRV`, GitLab will continue to use a round-robin algorithm
+and will ignore the `weight` and `priority` in the record.
+
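+For example, the `discover` section of a configuration using an SRV record might
+look like the following (a sketch; the record name is a placeholder for your own
+Consul service):
+
+```yaml
+discover:
+  nameserver: localhost
+  record: secondary.postgresql.service.consul
+  record_type: SRV
+  port: 8600
+  interval: 60
+  disconnect_timeout: 120
+```
+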
The `interval` value specifies the _minimum_ time between checks. If the A
record has a TTL greater than this value, then service discovery will honor said
TTL. For example, if the TTL of the A record is 90 seconds, then service
diff --git a/doc/administration/dependency_proxy.md b/doc/administration/dependency_proxy.md
index 776c60703fc..5153666705f 100644
--- a/doc/administration/dependency_proxy.md
+++ b/doc/administration/dependency_proxy.md
@@ -48,7 +48,7 @@ local location or even use object storage.
The dependency proxy files for Omnibus GitLab installations are stored under
`/var/opt/gitlab/gitlab-rails/shared/dependency_proxy/` and for source
-installations under `shared/dependency_proxy/` (relative to the git home directory).
+installations under `shared/dependency_proxy/` (relative to the Git home directory).
To change the local storage path:
**Omnibus GitLab installations**
@@ -70,6 +70,7 @@ To change the local storage path:
enabled: true
storage_path: shared/dependency_proxy
```
+
1. [Restart GitLab] for the changes to take effect.
### Using object storage
diff --git a/doc/administration/environment_variables.md b/doc/administration/environment_variables.md
index 874b1f3c80d..37d7194af53 100644
--- a/doc/administration/environment_variables.md
+++ b/doc/administration/environment_variables.md
@@ -13,6 +13,7 @@ override certain values.
Variable | Type | Description
-------- | ---- | -----------
+`ENABLE_BOOTSNAP` | string | Enables Bootsnap for speeding up initial Rails boot (`1` to enable)
`GITLAB_CDN_HOST` | string | Sets the base URL for a CDN to serve static assets (e.g. `//mycdnsubdomain.fictional-cdn.com`)
`GITLAB_ROOT_PASSWORD` | string | Sets the password for the `root` user on installation
`GITLAB_HOST` | string | The full URL of the GitLab server (including `http://` or `https://`)
diff --git a/doc/administration/geo/disaster_recovery/background_verification.md b/doc/administration/geo/disaster_recovery/background_verification.md
index 8eee9427b56..27866b7536e 100644
--- a/doc/administration/geo/disaster_recovery/background_verification.md
+++ b/doc/administration/geo/disaster_recovery/background_verification.md
@@ -171,14 +171,21 @@ If the **primary** and **secondary** nodes have a checksum verification mismatch
## Current limitations
-Until [issue #5064][ee-5064] is completed, background verification doesn't cover
-CI job artifacts and traces, LFS objects, or user uploads in file storage.
-Verify their integrity manually by following [these instructions][foreground-verification]
-on both nodes, and comparing the output between them.
+Automatic background verification doesn't cover attachments, LFS objects,
+job artifacts, or user uploads in file storage. You can track progress on
+including them in [ee-1430]. For now, you can verify their integrity
+manually by following [these instructions][foreground-verification] on both
+nodes, and comparing the output between them.
+
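+For example, you might run the following integrity check Rake tasks on both nodes
+and compare their output (task names are taken from the linked Rake task
+documentation; run only those relevant to the data types you store locally):
+
+```sh
+sudo gitlab-rake gitlab:lfs:check
+sudo gitlab-rake gitlab:uploads:check
+sudo gitlab-rake gitlab:artifacts:check
+```
+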
+In GitLab EE 12.1, Geo calculates checksums for attachments, LFS objects, and
+archived traces on secondary nodes after the transfer, compares them with the
+stored checksums, and rejects transfers if they do not match. Note that Geo
+currently does not have an automatic way to verify this data if it was
+synced before GitLab EE 12.1.
Data in object storage is **not verified**, as the object store is responsible
for ensuring the integrity of the data.
[reset-verification]: background_verification.md#reset-verification-for-projects-where-verification-has-failed
[foreground-verification]: ../../raketasks/check.md
-[ee-5064]: https://gitlab.com/gitlab-org/gitlab-ee/issues/5064
+[ee-1430]: https://gitlab.com/groups/gitlab-org/-/epics/1430
diff --git a/doc/administration/geo/disaster_recovery/bring_primary_back.md b/doc/administration/geo/disaster_recovery/bring_primary_back.md
index 30d35df25c7..64d7ef2d609 100644
--- a/doc/administration/geo/disaster_recovery/bring_primary_back.md
+++ b/doc/administration/geo/disaster_recovery/bring_primary_back.md
@@ -30,7 +30,7 @@ To bring the former **primary** node up to date:
`sudo systemctl enable gitlab-runsvdir`. For CentOS 6, you need to install
the GitLab instance from scratch and set it up as a **secondary** node by
following [Setup instructions][setup-geo]. In this case, you don't need to follow the next step.
-
+
NOTE: **Note:** If you [changed the DNS records](index.md#step-4-optional-updating-the-primary-domain-dns-record)
for this node during disaster recovery procedure you may need to [block
all the writes to this node](planned_failover.md#prevent-updates-to-the-primary-node)
diff --git a/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-project-page.png b/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-project-page.png
index fd51523104b..0a7b841897b 100644
--- a/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-project-page.png
+++ b/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-project-page.png
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png b/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png
index b2a6da69d3d..85759d903a4 100644
--- a/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png
+++ b/doc/administration/geo/disaster_recovery/img/checksum-differences-admin-projects.png
Binary files differ
diff --git a/doc/administration/geo/disaster_recovery/index.md b/doc/administration/geo/disaster_recovery/index.md
index d44e141b66b..ba95843b0b0 100644
--- a/doc/administration/geo/disaster_recovery/index.md
+++ b/doc/administration/geo/disaster_recovery/index.md
@@ -1,4 +1,4 @@
-# Disaster Recovery **(PREMIUM ONLY)**
+# Disaster Recovery (Geo) **(PREMIUM ONLY)**
Geo replicates your database, your Git repositories, and a few other assets.
We will support and replicate more data in the future, which will enable you to
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index 393326c3347..0c2d48854f0 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -187,7 +187,7 @@ access to the **primary** node during the maintenance window.
before it is completed will cause the work to be lost.
1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
following conditions to be true of the **secondary** node you are failing over to:
-
+
- All replication meters to each 100% replicated, 0% failures.
- All verification meters reach 100% verified, 0% failures.
- Database replication lag is 0ms.
diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
index 0e11dffa0d6..fd076bb79d8 100644
--- a/doc/administration/geo/replication/configuration.md
+++ b/doc/administration/geo/replication/configuration.md
@@ -17,7 +17,7 @@ You are encouraged to first read through all the steps before executing them
in your testing/production environment.
NOTE: **Note:**
-**Do not** set up any custom authentication for the **secondary** nodes. This will be handled by the **primary** node.
+**Do not** set up any custom authentication for the **secondary** nodes. This will be handled by the **primary** node.
Any change that requires access to the **Admin Area** needs to be done in the
**primary** node because the **secondary** node is a read-only replica.
@@ -242,7 +242,7 @@ node's Geo Nodes dashboard in your browser.
![Geo dashboard](img/geo_node_dashboard.png)
If your installation isn't working properly, check the
-[troubleshooting document].
+[troubleshooting document](troubleshooting.md).
The two most obvious issues that can become apparent in the dashboard are:
diff --git a/doc/administration/geo/replication/database.md b/doc/administration/geo/replication/database.md
index 71a1ce87833..9272287a4a7 100644
--- a/doc/administration/geo/replication/database.md
+++ b/doc/administration/geo/replication/database.md
@@ -116,7 +116,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
by default. However, Geo requires the **secondary** node to be able to
connect to the **primary** node's database. For this reason, we need the address of
each node.
-
+
NOTE: **Note:** For external PostgreSQL instances, see [additional instructions](external_database.md).
If you are using a cloud provider, you can look up the addresses for each
@@ -424,22 +424,22 @@ data before running `pg_basebackup`.
- If PostgreSQL is listening on a non-standard port, add `--port=` as well.
- If your database is too large to be transferred in 30 minutes, you will need
- to increase the timeout, e.g., `--backup-timeout=3600` if you expect the
- initial replication to take under an hour.
- - Pass `--sslmode=disable` to skip PostgreSQL TLS authentication altogether
- (e.g., you know the network path is secure, or you are using a site-to-site
- VPN). This is **not** safe over the public Internet!
- - You can read more details about each `sslmode` in the
- [PostgreSQL documentation][pg-docs-ssl];
- the instructions above are carefully written to ensure protection against
- both passive eavesdroppers and active "man-in-the-middle" attackers.
- - Change the `--slot-name` to the name of the replication slot
- to be used on the **primary** database. The script will attempt to create the
- replication slot automatically if it does not exist.
- - If you're repurposing an old server into a Geo **secondary** node, you'll need to
- add `--force` to the command line.
- - When not in a production machine you can disable backup step if you
- really sure this is what you want by adding `--skip-backup`
+ to increase the timeout, e.g., `--backup-timeout=3600` if you expect the
+ initial replication to take under an hour.
+ - Pass `--sslmode=disable` to skip PostgreSQL TLS authentication altogether
+ (e.g., you know the network path is secure, or you are using a site-to-site
+ VPN). This is **not** safe over the public Internet!
+ - You can read more details about each `sslmode` in the
+ [PostgreSQL documentation][pg-docs-ssl];
+ the instructions above are carefully written to ensure protection against
+ both passive eavesdroppers and active "man-in-the-middle" attackers.
+ - Change the `--slot-name` to the name of the replication slot
+ to be used on the **primary** database. The script will attempt to create the
+ replication slot automatically if it does not exist.
+ - If you're repurposing an old server into a Geo **secondary** node, you'll need to
+ add `--force` to the command line.
+   - When not on a production machine, you can disable the backup step if you
+     are really sure this is what you want, by adding `--skip-backup`.
The replication process is now complete.
diff --git a/doc/administration/geo/replication/external_database.md b/doc/administration/geo/replication/external_database.md
index 85687d4a648..256195998a7 100644
--- a/doc/administration/geo/replication/external_database.md
+++ b/doc/administration/geo/replication/external_database.md
@@ -25,7 +25,6 @@ developed and tested. We aim to be compatible with most external
This command will use your defined `external_url` in `/etc/gitlab/gitlab.rb`.
-
### Configure the external database to be replicated
To set up an external database, you can either:
diff --git a/doc/administration/geo/replication/img/geo_architecture.png b/doc/administration/geo/replication/img/geo_architecture.png
index d318cd5d0f4..aac63be41ff 100644
--- a/doc/administration/geo/replication/img/geo_architecture.png
+++ b/doc/administration/geo/replication/img/geo_architecture.png
Binary files differ
diff --git a/doc/administration/geo/replication/index.md b/doc/administration/geo/replication/index.md
index f0d329d5296..dbd466b906d 100644
--- a/doc/administration/geo/replication/index.md
+++ b/doc/administration/geo/replication/index.md
@@ -1,6 +1,12 @@
-# Geo Replication **(PREMIUM ONLY)**
+# Replication (Geo) **(PREMIUM ONLY)**
-Geo is the solution for widely distributed development teams.
+> - Introduced in GitLab Enterprise Edition 8.9.
+> - Using Geo in combination with
+> [High Availability](../../high_availability/README.md)
+> is considered **Generally Available** (GA) in
+> [GitLab Premium](https://about.gitlab.com/pricing/) 10.4.
+
+Replication with Geo is the solution for widely distributed development teams.
## Overview
@@ -8,14 +14,8 @@ Fetching large repositories can take a long time for teams located far from a si
Geo provides local, read-only instances of your GitLab instances, reducing the time it takes to clone and fetch large repositories and speeding up development.
-> - Geo is part of [GitLab Premium](https://about.gitlab.com/pricing/#self-managed).
-> - Introduced in GitLab Enterprise Edition 8.9.
-> - We recommend you use:
-> - At least GitLab Enterprise Edition 10.0 for basic Geo features.
-> - The latest version for a better experience.
-> - Make sure that all nodes run the same GitLab version.
-> - Geo requires PostgreSQL 9.6 and Git 2.9, in addition to GitLab's usual [minimum requirements](../../../install/requirements.md).
-> - Using Geo in combination with [High Availability](../../high_availability/README.md) is considered **Generally Available** (GA) in GitLab [GitLab Premium](https://about.gitlab.com/pricing/) 10.4.
+NOTE: **Note:**
+Check the [requirements](#requirements-for-running-geo) carefully before setting up Geo.
For a video introduction to Geo, see [Introduction to GitLab Geo - GitLab Features](https://www.youtube.com/watch?v=-HDLxSjEh6w).
@@ -111,6 +111,13 @@ The following are required to run Geo:
- [Ubuntu](https://www.ubuntu.com) 16.04+
- PostgreSQL 9.6+ with [FDW](https://www.postgresql.org/docs/9.6/postgres-fdw.html) support and [Streaming Replication](https://wiki.postgresql.org/wiki/Streaming_Replication)
- Git 2.9+
+- All nodes must run the same GitLab version.
+
+Additionally, check GitLab's [minimum requirements](../../../install/requirements.md),
+and we recommend you use:
+
+- At least GitLab Enterprise Edition 10.0 for basic Geo features.
+- The latest version for a better experience.
### Firewall rules
@@ -239,35 +246,48 @@ This list of limitations only reflects the latest version of GitLab. If you are
- Object pools for forked project deduplication work only on the **primary** node, and are duplicated on the **secondary** node.
- [External merge request diffs](../../merge_request_diffs.md) will not be replicated if they are on-disk, and viewing merge requests will fail. However, external MR diffs in object storage **are** supported. The default configuration (in-database) does work.
-### Limitations on replication
-
-Only the following items are replicated to the **secondary** node:
-
-- All database content. For example, snippets, epics, issues, merge requests, groups, and project metadata.
-- Project repositories.
-- Project wiki repositories.
-- User uploads. For example, attachments to issues, merge requests, epics, and avatars.
-- CI job artifacts and traces.
+### Limitations on replication/verification
+
+The following table lists the GitLab features along with their replication
+and verification status on a **secondary** node.
+
+You can track progress on including the missing items in:
+
+- [ee-893](https://gitlab.com/groups/gitlab-org/-/epics/893).
+- [ee-1430](https://gitlab.com/groups/gitlab-org/-/epics/1430).
+
+| Feature | Replicated | Verified |
+|-----------|------------|----------|
+| All database content (e.g. snippets, epics, issues, merge requests, groups, and project metadata) | Yes | Yes |
+| Project repository | Yes | Yes |
+| Project wiki repository | Yes | Yes |
+| Project designs repository | No | No |
+| Uploads (e.g. attachments to issues, merge requests, epics, and avatars) | Yes | Yes, only on transfer, or manually (1) |
+| LFS Objects | Yes | Yes, only on transfer, or manually (1) |
+| CI job artifacts (other than traces) | Yes | No, only manually (1) |
+| Archived traces | Yes | Yes, only on transfer, or manually (1) |
+| Personal snippets | Yes | Yes |
+| Version-controlled personal snippets ([unsupported](https://gitlab.com/gitlab-org/gitlab-ce/issues/13426)) | No | No |
+| Project snippets | Yes | Yes |
+| Version-controlled project snippets ([unsupported](https://gitlab.com/gitlab-org/gitlab-ce/issues/13426)) | No | No |
+| Object pools for forked project deduplication | No | No |
+| [Server-side Git Hooks](../../custom_hooks.md) | No | No |
+| [Elasticsearch integration](../../../integration/elasticsearch.md) | No | No |
+| [GitLab Pages](../../pages/index.md) | No | No |
+| [Container Registry](../../container_registry.md) ([track progress](https://gitlab.com/gitlab-org/gitlab-ee/issues/2870)) | No | No |
+| [NPM Registry](../../npm_registry.md) | No | No |
+| [Maven Packages](../../maven_packages.md) | No | No |
+| [External merge request diffs](../../merge_request_diffs.md) | No, if they are on-disk | No |
+| Content in object storage ([track progress](https://gitlab.com/groups/gitlab-org/-/epics/1526)) | No | No |
+
+1. The integrity can be verified manually using [Integrity Check Rake Task](../../raketasks/check.md) on both nodes and comparing the output between them.
DANGER: **DANGER**
-Data not on this list is unavailable on the **secondary** node. Failing over without manually replicating data not on this list will cause the data to be **lost**.
-
-### Examples of data not replicated
-
-Take special note that these examples of GitLab features are both:
-
-- Commonly used.
-- **Not** replicated by Geo at present.
-
-Examples include:
-
-- [Elasticsearch integration](../../../integration/elasticsearch.md).
-- [Container Registry](../../container_registry.md). [Object Storage](object_storage.md) can mitigate this.
-- [GitLab Pages](../../pages/index.md).
-- [Mattermost integration](https://docs.gitlab.com/omnibus/gitlab-mattermost/).
-
-CAUTION: **Caution:**
-If you wish to use them on a **secondary** node, or to execute a failover successfully, you will need to replicate their data using some other means.
+Features not on this list, or with **No** in the **Replicated** column,
+are not replicated on the **secondary** node. Failing over without manually
+replicating data from those features will cause the data to be **lost**.
+If you wish to use those features on a **secondary** node, or to execute a failover
+successfully, you must replicate their data using some other means.
## Frequently Asked Questions
diff --git a/doc/administration/geo/replication/remove_geo_node.md b/doc/administration/geo/replication/remove_geo_node.md
index 4f64e21f8ef..e24eb2bd428 100644
--- a/doc/administration/geo/replication/remove_geo_node.md
+++ b/doc/administration/geo/replication/remove_geo_node.md
@@ -19,10 +19,10 @@ Once removed from the Geo admin page, you must stop and uninstall the **secondar
```bash
# Stop gitlab and remove its supervision process
sudo gitlab-ctl uninstall
-
+
# Debian/Ubuntu
sudo dpkg --remove gitlab-ee
-
+
# Redhat/Centos
sudo rpm --erase gitlab-ee
```
@@ -32,9 +32,9 @@ Once GitLab has been uninstalled from the **secondary** node, the replication sl
1. On the **primary** node, start a PostgreSQL console session:
```bash
- sudo gitlab-psql
+ sudo gitlab-psql
```
-
+
NOTE: **Note:**
Using `gitlab-rails dbconsole` will not work, because managing replication slots requires superuser permissions.
@@ -43,9 +43,9 @@ Once GitLab has been uninstalled from the **secondary** node, the replication sl
```sql
SELECT * FROM pg_replication_slots;
```
-
+
1. Remove the replication slot for the **secondary** node:
```sql
SELECT pg_drop_replication_slot('<name_of_slot>');
- ```
+ ```
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index 28abfff973d..fe1557fd8b5 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -230,7 +230,7 @@ sudo gitlab-ctl reconfigure
This will increase the timeout to three hours (10800 seconds). Choose a time
long enough to accommodate a full clone of your largest repositories.
-### Reseting Geo **secondary** node replication
+### Resetting Geo **secondary** node replication
If a **secondary** node is in a broken state and you want to reset its replication state,
to start again from scratch, there are a few steps that can help you:
@@ -524,6 +524,20 @@ If it doesn't exist or inadvertent changes have been made to it, run `sudo gitla
If this path is mounted on a remote volume, please check your volume configuration and that it has correct permissions.
+### An existing tracking database cannot be reused
+
+Geo cannot reuse an existing tracking database.
+
+It is safest to use a fresh secondary, or reset the whole secondary by following
+[Resetting Geo secondary node replication](#resetting-geo-secondary-node-replication).
+
+If you are not concerned about possible orphaned directories and files, then you
+can simply reset the existing tracking database with:
+
+```sh
+sudo gitlab-rake geo:db:reset
+```
+
### Geo node has a database that is writable which is an indication it is not configured for replication with the primary node.
This error refers to a problem with the database replica on a **secondary** node,
diff --git a/doc/administration/geo/replication/updating_the_geo_nodes.md b/doc/administration/geo/replication/updating_the_geo_nodes.md
index c27f6c78455..39174780e24 100644
--- a/doc/administration/geo/replication/updating_the_geo_nodes.md
+++ b/doc/administration/geo/replication/updating_the_geo_nodes.md
@@ -10,10 +10,33 @@ all you need to do is update GitLab itself:
1. Log into each node (**primary** and **secondary** nodes).
1. [Update GitLab][update].
-1. [Update tracking database on **secondary** node](#update-tracking-database-on-secondary-node) when
- the tracking database is enabled.
1. [Test](#check-status-after-updating) **primary** and **secondary** nodes, and check version in each.
+### Check status after updating
+
+Now that the update process is complete, you may want to check whether
+everything is working correctly:
+
+1. Run the Geo Rake task on all nodes; everything should be green:
+
+ ```sh
+ sudo gitlab-rake gitlab:geo:check
+ ```
+
+1. Check the **primary** node's Geo dashboard for any errors.
+1. Test the data replication by pushing code to the **primary** node and checking
+   if it is received by **secondary** nodes (see the example after this list).
+
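+For example, a quick replication smoke test could look like the following (the
+hostnames and project path below are placeholders for your own):
+
+```sh
+# Push a trivial change to the primary node
+git clone https://gitlab-primary.example.com/root/smoke-test.git
+cd smoke-test
+date > replication-check.txt
+git add replication-check.txt
+git commit -m "Geo replication check"
+git push origin master
+
+# Clone from a secondary node and confirm the commit arrived
+git clone https://gitlab-secondary.example.com/root/smoke-test.git smoke-test-secondary
+git -C smoke-test-secondary log -1 --oneline
+```
+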
+## Upgrading to GitLab 12.1
+
+By default, GitLab 12.1 will attempt to automatically upgrade the embedded PostgreSQL server from 9.6 to 10.7. See [the Omnibus documentation](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-geo-instance) for the recommended procedure.
+
+This can be temporarily disabled by running the following before upgrading:
+
+```sh
+sudo touch /etc/gitlab/disable-postgresql-upgrade
+```
+
## Upgrading to GitLab 10.8
Before 10.8, broadcast messages would not propagate without flushing the cache on the **secondary** nodes. This has been fixed in 10.8, but requires one last cache flush on each **secondary** node:
@@ -241,7 +264,7 @@ Omnibus is the following:
1. Check the steps about defining `postgresql['sql_user_password']`, `gitlab_rails['db_password']`.
1. Make sure `postgresql['max_replication_slots']` matches the number of **secondary** Geo nodes locations.
1. Install GitLab on the **secondary** server.
-1. Re-run the [database replication process][database-replication].
+1. Re-run the [database replication process](database.md#step-3-initiate-the-replication-process).
## Special update notes for 9.0.x
@@ -409,22 +432,7 @@ is prepended with the relevant node for better clarity:
sudo gitlab-ctl start
```
-## Check status after updating
-
-Now that the update process is complete, you may want to check whether
-everything is working correctly:
-
-1. Run the Geo raketask on all nodes, everything should be green:
-
- ```sh
- sudo gitlab-rake gitlab:geo:check
- ```
-
-1. Check the **primary** node's Geo dashboard for any errors.
-1. Test the data replication by pushing code to the **primary** node and see if it
- is received by **secondary** nodes.
-
-## Update tracking database on **secondary** node
+### Update tracking database on **secondary** node
After updating a **secondary** node, you might need to run migrations on
the tracking database. The tracking database was added in GitLab 9.1,
diff --git a/doc/administration/git_protocol.md b/doc/administration/git_protocol.md
index 1780e1babe9..9d653d4e09e 100644
--- a/doc/administration/git_protocol.md
+++ b/doc/administration/git_protocol.md
@@ -53,7 +53,7 @@ sudo service ssh restart
## Instructions
In order to use the new protocol, clients need to either pass the configuration
-`-c protocol.version=2` to the git command, or set it globally:
+`-c protocol.version=2` to the Git command, or set it globally:
```sh
git config --global protocol.version 2
diff --git a/doc/administration/gitaly/index.md b/doc/administration/gitaly/index.md
index 4407facfca9..daec5f14e36 100644
--- a/doc/administration/gitaly/index.md
+++ b/doc/administration/gitaly/index.md
@@ -2,127 +2,127 @@
[Gitaly](https://gitlab.com/gitlab-org/gitaly) is the service that
provides high-level RPC access to Git repositories. Without it, no other
-components can read or write Git data.
+components can read or write Git data. GitLab components that access Git
+repositories (gitlab-rails, gitlab-shell, gitlab-workhorse, etc.) act as clients
+to Gitaly. End users do not have direct access to Gitaly.
-GitLab components that access Git repositories (gitlab-rails,
-gitlab-shell, gitlab-workhorse) act as clients to Gitaly. End users do
-not have direct access to Gitaly.
+In the rest of this page, "Gitaly server" refers to the standalone node that
+only runs Gitaly, and "Gitaly client" to the GitLab Rails node that runs all
+other processes except Gitaly.
## Configuring Gitaly
The Gitaly service itself is configured via a TOML configuration file.
-This file is documented [in the gitaly
+This file is documented [in the Gitaly
repository](https://gitlab.com/gitlab-org/gitaly/blob/master/doc/configuration/README.md).
-To change a Gitaly setting in Omnibus you can use
-`gitaly['my_setting']` in `/etc/gitlab/gitlab.rb`. Changes will be applied
-when you run `gitlab-ctl reconfigure`.
+If you want to change any of its settings:
-```ruby
-gitaly['prometheus_listen_addr'] = 'localhost:9236'
-```
+**For Omnibus GitLab**
-To change a Gitaly setting in installations from source you can edit
-`/home/git/gitaly/config.toml`. Changes will be applied when you run
-`service gitlab restart`.
+1. Edit `/etc/gitlab/gitlab.rb` and add or change the [Gitaly settings](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/1dd07197c7e5ae23626aad5a4a070a800b670380/files/gitlab-config-template/gitlab.rb.template#L1622-1676) (see the example after these steps).
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
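+
+For example, to make Gitaly's Prometheus metrics available on a specific address
+and port, you could set the following (a sketch using one of the settings from
+the linked template):
+
+```ruby
+# /etc/gitlab/gitlab.rb
+gitaly['prometheus_listen_addr'] = 'localhost:9236'
+```
+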
-```toml
-prometheus_listen_addr = "localhost:9236"
-```
+**For installations from source**
-## Client-side GRPC logs
-
-Gitaly uses the [gRPC](https://grpc.io/) RPC framework. The Ruby gRPC
-client has its own log file which may contain useful information when
-you are seeing Gitaly errors. You can control the log level of the
-gRPC client with the `GRPC_LOG_LEVEL` environment variable. The
-default level is `WARN`.
+1. Edit `/home/git/gitaly/config.toml` and add or change the [Gitaly settings](https://gitlab.com/gitlab-org/gitaly/blob/master/config.toml.example) (see the example after these steps).
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
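+
+The equivalent change in `config.toml`, as a sketch:
+
+```toml
+# /home/git/gitaly/config.toml
+prometheus_listen_addr = "localhost:9236"
+```
+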
## Running Gitaly on its own server
-> This is an optional way to deploy Gitaly which can benefit GitLab
+This is an optional way to deploy Gitaly which can benefit GitLab
installations that are larger than a single machine. Most
installations will be better served with the default configuration
used by Omnibus and the GitLab source installation guide.
Starting with GitLab 11.4, Gitaly is able to serve all Git requests without
-needed a shared NFS mount for Git repository data.
+requiring a shared NFS mount for Git repository data.
Between 11.4 and 11.8 the exception was the
[Elasticsearch indexer](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer).
But since 11.8 the indexer uses Gitaly for data access as well. NFS can still
be leveraged for redundancy on the block level of the Git data, but it only has to
be mounted on the Gitaly server.
-NOTE: **Note:** While Gitaly can be used as a replacement for NFS, we do not recommend
-using EFS as it may impact GitLab's performance. Please review the [relevant documentation](../high_availability/nfs.md#avoid-using-awss-elastic-file-system-efs)
+Starting with GitLab 11.8, it is possible to use Elasticsearch in conjunction with
+a Gitaly setup that isn't using NFS. To use Elasticsearch in this
+scenario, the [new repository indexer](../../integration/elasticsearch.md#elasticsearch-repository-indexer-beta)
+needs to be enabled in your GitLab configuration.
+
+NOTE: **Note:** While Gitaly can be used as a replacement for NFS, it's not recommended
+to use EFS as it may impact GitLab's performance. Review the [relevant documentation](../high_availability/nfs.md#avoid-using-awss-elastic-file-system-efs)
for more details.
### Network architecture
-- gitlab-rails shards repositories into "repository storages"
-- `gitlab-rails/config/gitlab.yml` contains a map from storage names to
- (Gitaly address, Gitaly token) pairs
-- the `storage name` -\> `(Gitaly address, Gitaly token)` map in
- `gitlab.yml` is the single source of truth for the Gitaly network
- topology
-- a (Gitaly address, Gitaly token) corresponds to a Gitaly server
-- a Gitaly server hosts one or more storages
-- Gitaly addresses must be specified in such a way that they resolve
- correctly for ALL Gitaly clients
-- Gitaly clients are: unicorn, sidekiq, gitlab-workhorse,
- gitlab-shell, Elasticsearch Indexer, and Gitaly itself
-- special case: a Gitaly server must be able to make RPC calls **to
- itself** via its own (Gitaly address, Gitaly token) pair as
- specified in `gitlab-rails/config/gitlab.yml`
-- Gitaly servers must not be exposed to the public internet
+The following list depicts the network architecture of Gitaly:
-Gitaly network traffic is unencrypted by default, but supports
-[TLS](#tls-support). Authentication is done through a static token.
+- GitLab Rails shards repositories into [repository storages](../repository_storage_paths.md).
+- `/config/gitlab.yml` contains a map from storage names to
+ `(Gitaly address, Gitaly token)` pairs.
+- The `storage name` -\> `(Gitaly address, Gitaly token)` map in
+  `/config/gitlab.yml` is the single source of truth for the Gitaly network
+  topology.
+- A `(Gitaly address, Gitaly token)` corresponds to a Gitaly server.
+- A Gitaly server hosts one or more storages.
+- A GitLab server can use one or more Gitaly servers.
+- Gitaly addresses must be specified in such a way that they resolve
+ correctly for ALL Gitaly clients.
+- Gitaly clients are: Unicorn, Sidekiq, gitlab-workhorse,
+ gitlab-shell, Elasticsearch Indexer, and Gitaly itself.
+- A Gitaly server must be able to make RPC calls **to itself** via its own
+ `(Gitaly address, Gitaly token)` pair as specified in `/config/gitlab.yml`.
+- Gitaly servers must not be exposed to the public internet, as Gitaly's network
+  traffic is unencrypted by default. The use of a firewall is highly recommended
+  to restrict access to the Gitaly server (see the sketch after this list).
+  Another option is to [use TLS](#tls-support).
+- Authentication is done through a static token which is shared among the Gitaly
+ and GitLab Rails nodes.
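+
+As a sketch, assuming the GitLab Rails node is `1.2.3.4` and Gitaly listens on
+port `8075`, firewall rules on the Gitaly server could look something like:
+
+```sh
+# Allow Gitaly traffic from the GitLab Rails node only, drop everything else
+sudo iptables -A INPUT -p tcp --dport 8075 -s 1.2.3.4 -j ACCEPT
+sudo iptables -A INPUT -p tcp --dport 8075 -j DROP
+```
+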
-NOTE: **Note:** Gitaly network traffic is unencrypted so we recommend a firewall to
-restrict access to your Gitaly server.
+Below we describe how to configure two Gitaly servers, one at
+`gitaly1.internal` and the other at `gitaly2.internal`, with secret token
+`abc123secret`. We assume your GitLab installation has three repository
+storages: `default`, `storage1` and `storage2`.
-Below we describe how to configure a Gitaly server at address
-`gitaly.internal:8075` with secret token `abc123secret`. We assume
-your GitLab installation has two repository storages, `default` and
-`storage1`.
+### 1. Installation
-### Installation
+First install Gitaly on each Gitaly server using either
+Omnibus GitLab or install it from source:
-First install Gitaly using either Omnibus or from source.
+- For Omnibus GitLab: [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
+ package you want using **steps 1 and 2** from the GitLab downloads page but
+ **_do not_** provide the `EXTERNAL_URL=` value.
+- From source: [Install Gitaly](../../install/installation.md#install-gitaly).
-Omnibus: [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
-package you want using **steps 1 and 2** from the GitLab downloads page but
-**_do not_** provide the `EXTERNAL_URL=` value.
+### 2. Client side token configuration
-Source: [Install Gitaly](../../install/installation.md#install-gitaly)
+Configure a token on the instance that runs the GitLab Rails application.
-### Client side token configuration
+**For Omnibus GitLab**
-Configure a token on the client side.
+1. On the client node(s), edit `/etc/gitlab/gitlab.rb`:
-Omnibus installations:
+ ```ruby
+ gitlab_rails['gitaly_token'] = 'abc123secret'
+ ```
-```ruby
-# /etc/gitlab/gitlab.rb
-gitlab_rails['gitaly_token'] = 'abc123secret'
-```
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
-Source installations:
+**For installations from source**
-```yaml
-# /home/git/gitlab/config/gitlab.yml
-gitlab:
- gitaly:
- token: 'abc123secret'
-```
+1. On the client node(s), edit `/home/git/gitlab/config/gitlab.yml`:
+
+ ```yaml
+ gitlab:
+ gitaly:
+ token: 'abc123secret'
+ ```
-You need to reconfigure (Omnibus) or restart (source) for these
-changes to be picked up.
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
-### Gitaly server configuration
+### 3. Gitaly server configuration
-Next, on the Gitaly server, we need to configure storage paths, enable
+Next, on the Gitaly servers, you need to configure storage paths, enable
the network listener and configure the token.
NOTE: **Note:** if you want to reduce the risk of downtime when you enable
@@ -137,131 +137,228 @@ the Gitaly server. The easiest way to accomplish this is to copy `/etc/gitlab/gi
from an existing GitLab server to the Gitaly server. Without this shared secret,
Git operations in GitLab will result in an API error.
-NOTE: **Note:** In most or all cases the storage paths below end in `/repositories` which is
-different than `path` in `git_data_dirs` of Omnibus installations. Check the
-directory layout on your Gitaly server to be sure.
+NOTE: **Note:**
+In most or all cases, the storage paths below end in `/repositories` which is
+not the case with `path` in `git_data_dirs` of Omnibus GitLab installations.
+Check the directory layout on your Gitaly server to be sure.
+
+**For Omnibus GitLab**
+
+1. Edit `/etc/gitlab/gitlab.rb`:
+
+ <!--
+ updates to following example must also be made at
+ https://gitlab.com/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
+ -->
+
+ ```ruby
+ # /etc/gitlab/gitlab.rb
+
+ # Avoid running unnecessary services on the Gitaly server
+ postgresql['enable'] = false
+ redis['enable'] = false
+ nginx['enable'] = false
+ prometheus['enable'] = false
+ unicorn['enable'] = false
+ sidekiq['enable'] = false
+ gitlab_workhorse['enable'] = false
+
+ # Prevent database connections during 'gitlab-ctl reconfigure'
+ gitlab_rails['rake_cache_clear'] = false
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the gitlab-shell API callback URL. Without this, `git push` will
+ # fail. This can be your 'front door' GitLab URL or an internal load
+ # balancer.
+ # Don't forget to copy `/etc/gitlab/gitlab-secrets.json` from web server to Gitaly server.
+ gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'
+
+ # Make Gitaly accept connections on all network interfaces. You must use
+ # firewalls to restrict access to this address/port.
+ gitaly['listen_addr'] = "0.0.0.0:8075"
+ gitaly['auth_token'] = 'abc123secret'
+
+ # To use TLS for Gitaly you need to add
+ gitaly['tls_listen_addr'] = "0.0.0.0:9999"
+ gitaly['certificate_path'] = "path/to/cert.pem"
+ gitaly['key_path'] = "path/to/key.pem"
+ ```
+
+1. Append the following to `/etc/gitlab/gitlab.rb` for each respective server:
+
+ For `gitaly1.internal`:
+
+ ```
+ gitaly['storage'] = [
+ { 'name' => 'default' },
+ { 'name' => 'storage1' },
+ ]
+ ```
+
+ For `gitaly2.internal`:
+
+ ```
+ gitaly['storage'] = [
+ { 'name' => 'storage2' },
+ ]
+ ```
+
+ NOTE: **Note:**
+ In some cases, you'll have to set `path` for `gitaly['storage']` in the
+ format `'path' => '/mnt/gitlab/<storage name>/repositories'`.
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+
+**For installations from source**
+
+1. On the Gitaly server node(s), edit `/home/git/gitaly/config.toml`:
+
+ ```toml
+ listen_addr = '0.0.0.0:8075'
+ tls_listen_addr = '0.0.0.0:9999'
+
+ [tls]
+   certificate_path = '/path/to/cert.pem'
+   key_path = '/path/to/key.pem'
+
+ [auth]
+ token = 'abc123secret'
+ ```
+
+1. Append the following to `/home/git/gitaly/config.toml` for each respective server:
+
+ For `gitaly1.internal`:
+
+ ```toml
+ [[storage]]
+ name = 'default'
+
+ [[storage]]
+ name = 'storage1'
+ ```
+
+ For `gitaly2.internal`:
+
+ ```toml
+ [[storage]]
+ name = 'storage2'
+ ```
+
+ NOTE: **Note:**
+ In some cases, you'll have to set `path` for each `[[storage]]` in the
+ format `path = '/mnt/gitlab/<storage name>/repositories'`.
+
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
+
+### 4. Converting clients to use the Gitaly server
+
+As the final step, you need to update the client machines to switch from using
+their local Gitaly service to the new Gitaly server you just configured. This
+is a risky step because if there is any sort of network, firewall, or name
+resolution problem preventing your GitLab server from reaching the Gitaly server,
+then all Gitaly requests will fail.
-Omnibus installations:
+Additionally, you need to
+[disable Rugged if previously manually enabled](../high_availability/nfs.md#improving-nfs-performance-with-gitlab).
-<!--
-updates to following example must also be made at
-https://gitlab.com/charts/gitlab/blob/master/doc/advanced/external-gitaly/external-omnibus-gitaly.md#configure-omnibus-gitlab
--->
+We assume that your `gitaly1.internal` Gitaly server can be reached at
+`gitaly1.internal:8075` from your GitLab server, and that Gitaly server
+can read and write to `/mnt/gitlab/default` and `/mnt/gitlab/storage1`.
-```ruby
-# /etc/gitlab/gitlab.rb
-
-# Avoid running unnecessary services on the Gitaly server
-postgresql['enable'] = false
-redis['enable'] = false
-nginx['enable'] = false
-prometheus['enable'] = false
-unicorn['enable'] = false
-sidekiq['enable'] = false
-gitlab_workhorse['enable'] = false
-
-# Prevent database connections during 'gitlab-ctl reconfigure'
-gitlab_rails['rake_cache_clear'] = false
-gitlab_rails['auto_migrate'] = false
-
-# Configure the gitlab-shell API callback URL. Without this, `git push` will
-# fail. This can be your 'front door' GitLab URL or an internal load
-# balancer.
-# Don't forget to copy `/etc/gitlab/gitlab-secrets.json` from web server to Gitaly server.
-gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'
-
-# Make Gitaly accept connections on all network interfaces. You must use
-# firewalls to restrict access to this address/port.
-gitaly['listen_addr'] = "0.0.0.0:8075"
-gitaly['auth_token'] = 'abc123secret'
-
-gitaly['storage'] = [
- { 'name' => 'default', 'path' => '/mnt/gitlab/default/repositories' },
- { 'name' => 'storage1', 'path' => '/mnt/gitlab/storage1/repositories' },
-]
+We assume also that your `gitaly2.internal` Gitaly server can be reached at
+`gitaly2.internal:8075` from your GitLab server, and that Gitaly server
+can read and write to `/mnt/gitlab/storage2`.
-# To use TLS for Gitaly you need to add
-gitaly['tls_listen_addr'] = "0.0.0.0:9999"
-gitaly['certificate_path'] = "path/to/cert.pem"
-gitaly['key_path'] = "path/to/key.pem"
-```
+**For Omnibus GitLab**
-Source installations:
+1. Edit `/etc/gitlab/gitlab.rb`:
-```toml
-# /home/git/gitaly/config.toml
-listen_addr = '0.0.0.0:8075'
-tls_listen_addr = '0.0.0.0:9999'
+ ```ruby
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage1' => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
+ 'storage2' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
+ })
-[tls]
-certificate_path = /path/to/cert.pem
-key_path = /path/to/key.pem
+ gitlab_rails['gitaly_token'] = 'abc123secret'
+ ```
-[auth]
-token = 'abc123secret'
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. Tail the logs to see the requests:
-[[storage]]
-name = 'default'
-path = '/mnt/gitlab/default/repositories'
+ ```sh
+ sudo gitlab-ctl tail gitaly
+ ```
-[[storage]]
-name = 'storage1'
-path = '/mnt/gitlab/storage1/repositories'
-```
+**For installations from source**
-Again, reconfigure (Omnibus) or restart (source).
+1. Edit `/home/git/gitlab/config/gitlab.yml`:
-### Converting clients to use the Gitaly server
+ ```yaml
+ gitlab:
+ repositories:
+ storages:
+ default:
+ gitaly_address: tcp://gitaly1.internal:8075
+ path: /some/dummy/path
+ storage1:
+ gitaly_address: tcp://gitaly1.internal:8075
+ path: /some/dummy/path
+ storage2:
+ gitaly_address: tcp://gitaly2.internal:8075
+ path: /some/dummy/path
-Now as the final step update the client machines to switch from using
-their local Gitaly service to the new Gitaly server you just
-configured. This is a risky step because if there is any sort of
-network, firewall, or name resolution problem preventing your GitLab
-server from reaching the Gitaly server then all Gitaly requests will
-fail.
+ gitaly:
+ token: 'abc123secret'
+ ```
-Additionally, you need to
-[disable Rugged if previously manually enabled](../high_availability/nfs.md#improving-nfs-performance-with-gitlab).
+ NOTE: **Note:**
+ `/some/dummy/path` should be set to a local folder that exists, however no
+ data will be stored in this folder. This will no longer be necessary after
+ [this issue](https://gitlab.com/gitlab-org/gitaly/issues/1282) is resolved.
-We assume that your Gitaly server can be reached at
-`gitaly.internal:8075` from your GitLab server, and that Gitaly can read and
-write to `/mnt/gitlab/default` and `/mnt/gitlab/storage1` respectively.
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
+1. Tail the logs to see the requests:
-Omnibus installations:
+ ```sh
+ tail -f /home/git/gitlab/log/gitaly.log
+ ```
-```ruby
-# /etc/gitlab/gitlab.rb
-git_data_dirs({
- 'default' => { 'gitaly_address' => 'tcp://gitaly.internal:8075' },
- 'storage1' => { 'gitaly_address' => 'tcp://gitaly.internal:8075' },
-})
+When you tail the Gitaly logs on your Gitaly server, you should see requests
+coming in. One sure way to trigger a Gitaly request is to clone a repository
+from your GitLab server over HTTP.
-gitlab_rails['gitaly_token'] = 'abc123secret'
-```
+### Disabling the Gitaly service in a cluster environment
-Source installations:
-
-```yaml
-# /home/git/gitlab/config/gitlab.yml
-gitlab:
- repositories:
- storages:
- default:
- path: /mnt/gitlab/default/repositories
- gitaly_address: tcp://gitaly.internal:8075
- storage1:
- path: /mnt/gitlab/storage1/repositories
- gitaly_address: tcp://gitaly.internal:8075
-
- gitaly:
- token: 'abc123secret'
-```
+If you are running Gitaly [as a remote
+service](#running-gitaly-on-its-own-server), you may want to disable
+the local Gitaly service that runs on your GitLab server by default.
+Disabling Gitaly only makes sense when you run GitLab in a custom
+cluster configuration, where different services run on different
+machines. Disabling Gitaly on all machines in the cluster is not a
+valid configuration.
+
+To disable Gitaly on a client node:
+
+**For Omnibus GitLab**
+
+1. Edit `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ gitaly['enable'] = false
+ ```
+
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+
+**For installations from source**
+
+1. Edit `/etc/default/gitlab`:
-Now reconfigure (Omnibus) or restart (source). When you tail the
-Gitaly logs on your Gitaly server (`sudo gitlab-ctl tail gitaly` or
-`tail -f /home/git/gitlab/log/gitaly.log`) you should see requests
-coming in. One sure way to trigger a Gitaly request is to clone a
-repository from your GitLab server over HTTP.
+ ```shell
+ gitaly_enabled=false
+ ```
+
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
## TLS support
@@ -271,188 +368,209 @@ Gitaly supports TLS encryption. To be able to communicate
with a Gitaly instance that listens for secure connections you will need to use `tls://` url
scheme in the `gitaly_address` of the corresponding storage entry in the GitLab configuration.
-The admin needs to bring their own certificate as we do not provide that automatically.
-The certificate to be used needs to be installed on all Gitaly nodes and on all client nodes that communicate with it following procedures described in [GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates).
+You will need to bring your own certificates as this isn't provided automatically.
+The certificate to be used needs to be installed on all Gitaly nodes and on all
+client nodes that communicate with it following the procedure described in
+[GitLab custom certificate configuration](https://docs.gitlab.com/omnibus/settings/ssl.html#install-custom-public-certificates).
-Note that it is possible to configure Gitaly servers with both an
+NOTE: **Note:**
+It is possible to configure Gitaly servers with both an
unencrypted listening address `listen_addr` and an encrypted listening
address `tls_listen_addr` at the same time. This allows you to do a
gradual transition from unencrypted to encrypted traffic, if necessary.
-To observe what type of connections are actually being used in a
-production environment you can use the following Prometheus query:
+To configure Gitaly with TLS:
-```
-sum(rate(gitaly_connections_total[5m])) by (type)
-```
+**For Omnibus GitLab**
-### Example TLS configuration
+1. On the client nodes, edit `/etc/gitlab/gitlab.rb`:
-### Omnibus installations:
+ ```ruby
+ git_data_dirs({
+ 'default' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage1' => { 'gitaly_address' => 'tls://gitaly1.internal:9999' },
+ 'storage2' => { 'gitaly_address' => 'tls://gitaly2.internal:9999' },
+ })
-#### On client nodes:
+ gitlab_rails['gitaly_token'] = 'abc123secret'
+ ```
-```ruby
-# /etc/gitlab/gitlab.rb
-git_data_dirs({
- 'default' => { 'gitaly_address' => 'tls://gitaly.internal:9999' },
- 'storage1' => { 'gitaly_address' => 'tls://gitaly.internal:9999' },
-})
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+1. On the Gitaly server nodes, edit `/etc/gitlab/gitlab.rb`:
-gitlab_rails['gitaly_token'] = 'abc123secret'
-```
+ ```ruby
+ gitaly['tls_listen_addr'] = "0.0.0.0:9999"
+ gitaly['certificate_path'] = "path/to/cert.pem"
+ gitaly['key_path'] = "path/to/key.pem"
+ ```
-#### On Gitaly server nodes:
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
-```ruby
-gitaly['tls_listen_addr'] = "0.0.0.0:9999"
-gitaly['certificate_path'] = "path/to/cert.pem"
-gitaly['key_path'] = "path/to/key.pem"
-```
+**For installations from source**
-### Source installations:
+1. On the client nodes, edit `/home/git/gitlab/config/gitlab.yml`:
-#### On client nodes:
+ ```yaml
+ gitlab:
+ repositories:
+ storages:
+ default:
+ gitaly_address: tls://gitaly1.internal:9999
+ path: /some/dummy/path
+ storage1:
+ gitaly_address: tls://gitaly1.internal:9999
+ path: /some/dummy/path
+ storage2:
+ gitaly_address: tls://gitaly2.internal:9999
+ path: /some/dummy/path
-```yaml
-# /home/git/gitlab/config/gitlab.yml
-gitlab:
- repositories:
- storages:
- default:
- path: /mnt/gitlab/default/repositories
- gitaly_address: tls://gitaly.internal:9999
- storage1:
- path: /mnt/gitlab/storage1/repositories
- gitaly_address: tls://gitaly.internal:9999
+ gitaly:
+ token: 'abc123secret'
+ ```
- gitaly:
- token: 'abc123secret'
-```
+ NOTE: **Note:**
+ `/some/dummy/path` should be set to a local folder that exists, however no
+ data will be stored in this folder. This will no longer be necessary after
+ [this issue](https://gitlab.com/gitlab-org/gitaly/issues/1282) is resolved.
+
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
+1. On the Gitaly server nodes, edit `/home/git/gitaly/config.toml`:
+
+ ```toml
+ tls_listen_addr = '0.0.0.0:9999'
+
+ [tls]
+ certificate_path = '/path/to/cert.pem'
+ key_path = '/path/to/key.pem'
+ ```
-#### On Gitaly server nodes:
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
-```toml
-# /home/git/gitaly/config.toml
-tls_listen_addr = '0.0.0.0:9999'
+To observe what type of connections are actually being used in a
+production environment you can use the following Prometheus query:
-[tls]
-certificate_path = '/path/to/cert.pem'
-key_path = '/path/to/key.pem'
+```
+sum(rate(gitaly_connections_total[5m])) by (type)
```
-## Gitaly-ruby
+## `gitaly-ruby`
-Gitaly was developed to replace Ruby application code in gitlab-ce/ee.
+Gitaly was developed to replace the Ruby application code in GitLab.
In order to save time and/or avoid the risk of rewriting existing
application logic, in some cases we chose to copy some application code
-from gitlab-ce into Gitaly almost as-is. To be able to run that code, we
-made gitaly-ruby, which is a sidecar process for the main Gitaly Go
-process. Some examples of things that are implemented in gitaly-ruby are
-RPC's that deal with wiki's, and RPC's that create commits on behalf of
+from GitLab into Gitaly almost as-is. To be able to run that code,
+`gitaly-ruby` was created, which is a "sidecar" process for the main Gitaly Go
+process. Some examples of things that are implemented in `gitaly-ruby` are
+RPCs that deal with wikis, and RPCs that create commits on behalf of
a user, such as merge commits.
-### Number of gitaly-ruby workers
+### Number of `gitaly-ruby` workers
-Gitaly-ruby has much less capacity than Gitaly itself. If your Gitaly
-server has to handle a lot of request, the default setting of having
-just 1 active gitaly-ruby sidecar might not be enough. If you see
-ResourceExhausted errors from Gitaly it's very likely that you have not
-enough gitaly-ruby capacity.
+`gitaly-ruby` has much less capacity than Gitaly itself. If your Gitaly
+server has to handle a lot of requests, the default setting of having
+just one active `gitaly-ruby` sidecar might not be enough. If you see
+`ResourceExhausted` errors from Gitaly, it's very likely that you don't have
+enough `gitaly-ruby` capacity.
-You can increase the number of gitaly-ruby processes on your Gitaly
+You can increase the number of `gitaly-ruby` processes on your Gitaly
server with the following settings.
-Omnibus:
+**For Omnibus GitLab**
-```ruby
-# /etc/gitlab/gitlab.rb
-# Default is 2 workers. The minimum is 2; 1 worker is always reserved as
-# a passive stand-by.
-gitaly['ruby_num_workers'] = 4
-```
+1. Edit `/etc/gitlab/gitlab.rb`:
-Source:
+ ```ruby
+ # Default is 2 workers. The minimum is 2; 1 worker is always reserved as
+ # a passive stand-by.
+ gitaly['ruby_num_workers'] = 4
+ ```
-```toml
-# /home/git/gitaly/config.toml
-[gitaly-ruby]
-num_workers = 4
-```
+1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure).
-### Observing gitaly-ruby traffic
+**For installations from source**
-Gitaly-ruby is a somewhat hidden, internal implementation detail of
-Gitaly. There is not that much visibility into what goes on inside
-gitaly-ruby processes.
+1. Edit `/home/git/gitaly/config.toml`:
-If you have Prometheus set up to scrape your Gitaly process, you can see
-request rates and error codes for individual RPC's in gitaly-ruby by
-querying `grpc_client_handled_total`. Strictly speaking this metric does
-not differentiate between gitaly-ruby and other RPC's, but in practice
-(as of GitLab 11.9), all gRPC calls made by Gitaly itself are internal
-calls from the main Gitaly process to one of its gitaly-ruby sidecars.
+ ```toml
+ [gitaly-ruby]
+ num_workers = 4
+ ```
-Assuming your `grpc_client_handled_total` counter only observes Gitaly,
-the following query shows you RPC's are (most likely) internally
-implemented as calls to gitaly-ruby.
+1. Save the file and [restart GitLab](../restart_gitlab.md#installations-from-source).
-```
-sum(rate(grpc_client_handled_total[5m])) by (grpc_method) > 0
-```
+## Eliminating NFS altogether
-## Disabling or enabling the Gitaly service in a cluster environment
+If you are planning to use Gitaly without NFS for your storage needs
+and want to eliminate NFS from your environment altogether, there are
+a few things that you need to do:
-If you are running Gitaly [as a remote
-service](#running-gitaly-on-its-own-server) you may want to disable
-the local Gitaly service that runs on your GitLab server by default.
+1. Make sure the [`git` user home directory](https://docs.gitlab.com/omnibus/settings/configuration.html#moving-the-home-directory-for-a-user) is on local disk.
+1. Configure [database lookup of SSH keys](../operations/fast_ssh_key_lookup.md)
+ to eliminate the need for a shared authorized_keys file.
+1. Configure [object storage for job artifacts](../job_artifacts.md#using-object-storage)
+ including [live tracing](../job_traces.md#new-live-trace-architecture).
+1. Configure [object storage for LFS objects](../../workflow/lfs/lfs_administration.md#storing-lfs-objects-in-remote-object-storage).
+1. Configure [object storage for uploads](../uploads.md#using-object-storage-core-only).
+
+NOTE: **Note:**
+One current feature of GitLab that still requires a shared directory (NFS) is
+[GitLab Pages](../../user/project/pages/index.md).
+There is [work in progress](https://gitlab.com/gitlab-org/gitlab-pages/issues/196)
+to eliminate the need for NFS to support GitLab Pages.
-> 'Disabling Gitaly' only makes sense when you run GitLab in a custom
-cluster configuration, where different services run on different
-machines. Disabling Gitaly on all machines in the cluster is not a
-valid configuration.
+## Limiting RPC concurrency
-If you are setting up a GitLab cluster where Gitaly does not need to
-run on all machines, you can disable the Gitaly service in your
-Omnibus installation, add the following line to `/etc/gitlab/gitlab.rb`:
+It can happen that CI clone traffic puts a large strain on your Gitaly
+service. The bulk of the work gets done in the SSHUploadPack (for Git
+SSH) and PostUploadPack (for Git HTTP) RPCs. To prevent such workloads
+from overcrowding your Gitaly server, you can set concurrency limits in
+Gitaly's configuration file.
```ruby
-gitaly['enable'] = false
+# in /etc/gitlab/gitlab.rb
+
+gitaly['concurrency'] = [
+ {
+ 'rpc' => "/gitaly.SmartHTTPService/PostUploadPack",
+ 'max_per_repo' => 20
+ },
+ {
+ 'rpc' => "/gitaly.SSHService/SSHUploadPack",
+ 'max_per_repo' => 20
+ }
+]
```
+This will limit the number of in-flight RPC calls for the given RPCs.
+The limit is applied per repository. In the example above, each repository on the
+Gitaly server can have at most 20 simultaneous PostUploadPack calls in
+flight, and the same for SSHUploadPack. If another request comes in for
+a repository that has used up its 20 slots, that request will get
+queued.
-When you run `gitlab-ctl reconfigure` the Gitaly service will be
-disabled.
-
-To disable the Gitaly service in a GitLab cluster where you installed
-GitLab from source, add the following to `/etc/default/gitlab` on the
-machine where you want to disable Gitaly.
-
-```shell
-gitaly_enabled=false
-```
+You can observe the behavior of this queue via the Gitaly logs and via
+Prometheus. In the Gitaly logs, you can look for the string (or
+structured log field) `acquire_ms`. Messages that have this field are
+reporting about the concurrency limiter. In Prometheus, look for the
+`gitaly_rate_limiting_in_progress`, `gitaly_rate_limiting_queued` and
+`gitaly_rate_limiting_seconds` metrics.
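+
+For example, a query along the following lines (a sketch; the available labels
+can vary between Gitaly versions) shows how many requests are currently waiting
+in the limiter's queue:
+
+```
+sum(gitaly_rate_limiting_queued) by (grpc_method)
+```
+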
-When you run `service gitlab restart` Gitaly will be disabled on this
-particular machine.
+The name of the Prometheus metric is not quite right because this is a
+concurrency limiter, not a rate limiter. If a client makes 1000 requests
+in a row in a very short timespan, the concurrency will not exceed 1,
+and this mechanism (the concurrency limiter) will do nothing.
-## Eliminating NFS altogether
+## Troubleshooting Gitaly
-If you are planning to use Gitaly without NFS for your storage needs
-and want to eliminate NFS from your environment altogether, there are
-a few things that you need to do:
+### Commits, pushes, and clones return a 401
- 1. Make sure the [`git` user home directory](https://docs.gitlab.com/omnibus/settings/configuration.html#moving-the-home-directory-for-a-user) is on local disk.
- 1. Configure [database lookup of SSH keys](../operations/fast_ssh_key_lookup.md)
- to eliminate the need for a shared authorized_keys file.
- 1. Configure [object storage for job artifacts](../job_artifacts.md#using-object-storage)
- including [live tracing](../job_traces.md#new-live-trace-architecture).
- 1. Configure [object storage for LFS objects](../../workflow/lfs/lfs_administration.md#storing-lfs-objects-in-remote-object-storage).
- 1. Configure [object storage for uploads](../uploads.md#using-object-storage-core-only).
+```
+remote: GitLab: 401 Unauthorized
+```
-NOTE: **Note:** One current feature of GitLab still requires a shared directory (NFS): [GitLab Pages](../../user/project/pages/index.md).
-There is [work in progress](https://gitlab.com/gitlab-org/gitlab-pages/issues/196)
-to eliminate the need for NFS to support GitLab Pages.
+You will need to sync your `gitlab-secrets.json` file with your GitLab
+app nodes.
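+
+A minimal sketch of doing that, assuming an app node holds the canonical file
+and `gitaly.internal` is your Gitaly server:
+
+```sh
+# On a GitLab app node, copy the canonical secrets file to the Gitaly server
+sudo scp /etc/gitlab/gitlab-secrets.json root@gitaly.internal:/etc/gitlab/
+
+# On the Gitaly server, apply the change
+sudo gitlab-ctl reconfigure
+```
+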
-## Troubleshooting Gitaly in production
+### `gitaly-debug`
Since GitLab 11.6, Gitaly comes with a command-line tool called
`gitaly-debug` that can be run on a Gitaly server to aid in
@@ -462,3 +580,153 @@ Git clone speed for a specific repository on the server.
For an up to date list of sub-commands see [the gitaly-debug
README](https://gitlab.com/gitlab-org/gitaly/blob/master/cmd/gitaly-debug/README.md).
+
+### Client side GRPC logs
+
+Gitaly uses the [gRPC](https://grpc.io/) RPC framework. The Ruby gRPC
+client has its own log file which may contain useful information when
+you are seeing Gitaly errors. You can control the log level of the
+gRPC client with the `GRPC_LOG_LEVEL` environment variable. The
+default level is `WARN`.
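+
+For example, to get more verbose gRPC client logging while running a check
+(a sketch; any command that talks to Gitaly would do):
+
+```sh
+sudo GRPC_LOG_LEVEL=DEBUG gitlab-rake gitlab:check
+```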
+
+### Observing `gitaly-ruby` traffic
+
+[`gitaly-ruby`](#gitaly-ruby) is an internal implementation detail of Gitaly,
+so there's not that much visibility into what goes on inside
+`gitaly-ruby` processes.
+
+If you have Prometheus set up to scrape your Gitaly process, you can see
+request rates and error codes for individual RPCs in `gitaly-ruby` by
+querying `grpc_client_handled_total`. Strictly speaking, this metric does
+not differentiate between `gitaly-ruby` and other RPCs, but in practice
+(as of GitLab 11.9), all gRPC calls made by Gitaly itself are internal
+calls from the main Gitaly process to one of its `gitaly-ruby` sidecars.
+
+Assuming your `grpc_client_handled_total` counter only observes Gitaly,
+the following query shows you which RPCs are (most likely) internally
+implemented as calls to `gitaly-ruby`:
+
+```
+sum(rate(grpc_client_handled_total[5m])) by (grpc_method) > 0
+```
+
+### Repository changes fail with a `401 Unauthorized` error
+
+If you're running Gitaly on its own server and notice that users can
+successfully clone and fetch repositories (via both SSH and HTTPS), but can't
+push to them or make changes to the repository in the web UI without getting a
+`401 Unauthorized` message, then it's possible Gitaly is failing to authenticate
+with the other nodes due to having the [wrong secrets file](#3-gitaly-server-configuration).
+
+Confirm the following are all true:
+
+- When any user performs a `git push` to any repository on this Gitaly node, it
+ fails with the following error (note the `401 Unauthorized`):
+
+ ```sh
+ remote: GitLab: 401 Unauthorized
+ To <REMOTE_URL>
+ ! [remote rejected] branch-name -> branch-name (pre-receive hook declined)
+ error: failed to push some refs to '<REMOTE_URL>'
+ ```
+
+- When any user adds or modifies a file from the repository using the GitLab
+  UI, it immediately fails with a red `401 Unauthorized` banner.
+- Creating a new project and [initializing it with a README](../../gitlab-basics/create-project.md#blank-projects)
+ successfully creates the project but doesn't create the README.
+- When [tailing the logs](https://docs.gitlab.com/omnibus/settings/logs.html#tail-logs-in-a-console-on-the-server) on an app node and reproducing the error, you get `401` errors
+ when reaching the `/api/v4/internal/allowed` endpoint:
+
+ ```sh
+ # api_json.log
+ {
+ "time": "2019-07-18T00:30:14.967Z",
+ "severity": "INFO",
+ "duration": 0.57,
+ "db": 0,
+ "view": 0.57,
+ "status": 401,
+ "method": "POST",
+ "path": "\/api\/v4\/internal\/allowed",
+ "params": [
+ {
+ "key": "action",
+ "value": "git-receive-pack"
+ },
+ {
+ "key": "changes",
+ "value": "REDACTED"
+ },
+ {
+ "key": "gl_repository",
+ "value": "REDACTED"
+ },
+ {
+ "key": "project",
+ "value": "\/path\/to\/project.git"
+ },
+ {
+ "key": "protocol",
+ "value": "web"
+ },
+ {
+ "key": "env",
+ "value": "{\"GIT_ALTERNATE_OBJECT_DIRECTORIES\":[],\"GIT_ALTERNATE_OBJECT_DIRECTORIES_RELATIVE\":[],\"GIT_OBJECT_DIRECTORY\":null,\"GIT_OBJECT_DIRECTORY_RELATIVE\":null}"
+ },
+ {
+ "key": "user_id",
+ "value": "2"
+ },
+ {
+ "key": "secret_token",
+ "value": "[FILTERED]"
+ }
+ ],
+ "host": "gitlab.example.com",
+ "ip": "REDACTED",
+ "ua": "Ruby",
+ "route": "\/api\/:version\/internal\/allowed",
+ "queue_duration": 4.24,
+ "gitaly_calls": 0,
+ "gitaly_duration": 0,
+ "correlation_id": "XPUZqTukaP3"
+ }
+
+ # nginx_access.log
+ [IP] - - [18/Jul/2019:00:30:14 +0000] "POST /api/v4/internal/allowed HTTP/1.1" 401 30 "" "Ruby"
+ ```
+
+To fix this problem, confirm that your [`gitlab-secrets.json` file](#3-gitaly-server-configuration)
+on the Gitaly node matches the one on all other nodes. If it doesn't match,
+update the secrets file on the Gitaly node to match the others, then
+[reconfigure the node](../restart_gitlab.md#omnibus-gitlab-reconfigure).
+
+### Command line tools cannot connect to Gitaly
+
+If you are having trouble connecting to a Gitaly node with command line (CLI) tools, and certain actions result in a `14: Connect Failed` error message, it means that gRPC cannot reach your Gitaly node.
+
+Verify that you can reach Gitaly via TCP:
+
+```bash
+sudo gitlab-rake gitlab:tcp_check[GITALY_SERVER_IP,GITALY_LISTEN_PORT]
+```
+
+If the TCP connection fails, check your network settings and your firewall rules. If the TCP connection succeeds, your networking and firewall rules are correct.
+
+If you use proxy servers in your command line environment, such as Bash, these can interfere with your gRPC traffic.
+
+If you use Bash or a compatible command line environment, run the following commands to determine whether you have proxy servers configured:
+
+```bash
+echo $http_proxy
+echo $https_proxy
+```
+
+If either of these variables has a value, your Gitaly CLI connections may be getting routed through a proxy which cannot connect to Gitaly.
+
+To remove the proxy setting, run the following commands (depending on which variables had values):
+
+```bash
+unset http_proxy
+unset https_proxy
+``` \ No newline at end of file
diff --git a/doc/administration/high_availability/README.md b/doc/administration/high_availability/README.md
index e81d2741082..56665ba8b9a 100644
--- a/doc/administration/high_availability/README.md
+++ b/doc/administration/high_availability/README.md
@@ -1,3 +1,7 @@
+---
+type: reference, concepts
+---
+
# Scaling and High Availability
GitLab supports several different types of clustering and high-availability.
@@ -160,25 +164,24 @@ contention due to certain workloads.
#### Reference Architecture
-- **Status:** Work-in-progress
- **Supported Users (approximate):** 10,000
-- **Related Issues:** [gitlab-com/support/support-team-meta#1513](https://gitlab.com/gitlab-com/support/support-team-meta/issues/1513),
- [gitlab-org/quality/team-tasks#110](https://gitlab.com/gitlab-org/quality/team-tasks/issues/110)
-
-The Support and Quality teams are in the process of building and performance testing
-an environment that will support about 10,000 users. The specifications below
-are a work-in-progress representation of the work so far. Quality will be
-certifying this environment in FY20-Q2. The specifications may be adjusted
-prior to certification based on performance testing.
-
-- 3 PostgreSQL - 4 CPU, 8GB RAM per node
-- 1 PgBouncer - 2 CPU, 4GB RAM
-- 2 Redis - 2 CPU, 8GB RAM per node
-- 3 Consul/Sentinel - 2 CPU, 2GB RAM per node
-- 4 Sidekiq - 4 CPU, 8GB RAM per node
-- 5 GitLab application nodes - 20 CPU, 64GB RAM per node
-- 1 Gitaly - 20 CPU, 64GB RAM
-- 1 Monitoring node - 4 CPU, 8GB RAM
+- **Known Issues:** While validating the reference architecture, slow endpoints were discovered and are being investigated. [gitlab-org/gitlab-ce/issues/64335](https://gitlab.com/gitlab-org/gitlab-ce/issues/64335)
+
+The Support and Quality teams built, performance tested, and validated an
+environment that supports about 10,000 users. The specifications below are a
+representation of the work so far. The specifications may be adjusted in the
+future based on additional testing and iteration.
+
+NOTE: **Note:** The specifications here were performance tested against a specific coded workload. Your exact needs may be greater, depending on your workload. Your workload is influenced by factors such as (but not limited to) how active your users are, how much automation you use, mirroring, and repository/change size.
+
+- 3 PostgreSQL - 4 CPU, 16GiB memory per node
+- 1 PgBouncer - 2 CPU, 4GiB memory
+- 2 Redis - 2 CPU, 8GiB memory per node
+- 3 Consul/Sentinel - 2 CPU, 2GiB memory per node
+- 4 Sidekiq - 4 CPU, 16GiB memory per node
+- 5 GitLab application nodes - 16 CPU, 64GiB memory per node
+- 1 Gitaly - 16 CPU, 64GiB memory
+- 1 Monitoring node - 2 CPU, 8GiB memory, 100GiB local storage
### Fully Distributed
@@ -211,4 +214,3 @@ separately:
1. [Configure the GitLab application servers](gitlab.md)
1. [Configure the load balancers](load_balancer.md)
1. [Monitoring node (Prometheus and Grafana)](monitoring_node.md)
-
diff --git a/doc/administration/high_availability/alpha_database.md b/doc/administration/high_availability/alpha_database.md
index 7bf20be60e6..7afd739f44c 100644
--- a/doc/administration/high_availability/alpha_database.md
+++ b/doc/administration/high_availability/alpha_database.md
@@ -2,5 +2,4 @@
redirect_to: 'database.md'
---
-This documentation has been moved to the main
-[database documentation](database.md#configure_using_omnibus_for_high_availability).
+This document was moved to [another location](database.md).
diff --git a/doc/administration/high_availability/consul.md b/doc/administration/high_availability/consul.md
index 49199b659bc..b02a61b9256 100644
--- a/doc/administration/high_availability/consul.md
+++ b/doc/administration/high_availability/consul.md
@@ -1,6 +1,8 @@
-# Working with the bundled Consul service **(PREMIUM ONLY)**
+---
+type: reference
+---
-## Overview
+# Working with the bundled Consul service **(PREMIUM ONLY)**
As part of its High Availability stack, GitLab Premium includes a bundled version of [Consul](https://www.consul.io/) that can be managed through `/etc/gitlab/gitlab.rb`.
@@ -26,27 +28,27 @@ On each Consul node perform the following:
1. Edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
- ```ruby
- # Disable all components except Consul
- roles ['consul_role']
+ ```ruby
+ # Disable all components except Consul
+ roles ['consul_role']
- # START user configuration
- # Replace placeholders:
- #
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses gathered for CONSUL_SERVER_NODES
- consul['configuration'] = {
- server: true,
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
- }
+ # START user configuration
+ # Replace placeholders:
+ #
+ # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
+ # with the addresses gathered for CONSUL_SERVER_NODES
+ consul['configuration'] = {
+ server: true,
+ retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
+ }
- # Disable auto migrations
- gitlab_rails['auto_migrate'] = false
- #
- # END user configuration
- ```
+ # Disable auto migrations
+ gitlab_rails['auto_migrate'] = false
+ #
+ # END user configuration
+ ```
- > `consul_role` was introduced with GitLab 10.3
+ > `consul_role` was introduced with GitLab 10.3
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes
to take effect.
@@ -77,6 +79,7 @@ check the [Troubleshooting section](#troubleshooting) before proceeding.
### Checking cluster membership
To see which nodes are part of the cluster, run the following on any member in the cluster
+
```
# /opt/gitlab/embedded/bin/consul members
Node Address Status Type Build Protocol DC
@@ -112,18 +115,18 @@ You will see messages like the following in `gitlab-ctl tail consul` output if y
2017-09-25_19:53:41.74356 2017/09/25 19:53:41 [ERR] agent: failed to sync remote state: No cluster leader
```
-
To fix this:
1. Pick an address on each node that all of the other nodes can reach this node through.
1. Update your `/etc/gitlab/gitlab.rb`
- ```ruby
- consul['configuration'] = {
- ...
- bind_addr: 'IP ADDRESS'
- }
- ```
+ ```ruby
+ consul['configuration'] = {
+ ...
+ bind_addr: 'IP ADDRESS'
+ }
+ ```
+
1. Run `gitlab-ctl reconfigure`
If you still see the errors, you may have to [erase the consul database and reinitialize](#recreate-from-scratch) on the affected node.
@@ -144,12 +147,13 @@ To fix this:
1. Pick an address on the node that all of the other nodes can reach this node through.
1. Update your `/etc/gitlab/gitlab.rb`
- ```ruby
- consul['configuration'] = {
- ...
- bind_addr: 'IP ADDRESS'
- }
- ```
+ ```ruby
+ consul['configuration'] = {
+ ...
+ bind_addr: 'IP ADDRESS'
+ }
+ ```
+
1. Run `gitlab-ctl reconfigure`
### Outage recovery
@@ -157,6 +161,7 @@ To fix this:
If you lost enough server agents in the cluster to break quorum, then the cluster is considered failed, and it will not function without manual intervention.
#### Recreate from scratch
+
By default, GitLab does not store anything in the consul cluster that cannot be recreated. To erase the consul database and reinitialize
```
@@ -168,4 +173,5 @@ By default, GitLab does not store anything in the consul cluster that cannot be
After this, the cluster should start back up, and the server agents rejoin. Shortly after that, the client agents should rejoin as well.
#### Recover a failed cluster
+
If you have taken advantage of consul to store other data, and want to restore the failed cluster, please follow the [Consul guide](https://www.consul.io/docs/guides/outage.html) to recover a failed cluster.
diff --git a/doc/administration/high_availability/database.md b/doc/administration/high_availability/database.md
index c32a73080ff..7c9e02d889e 100644
--- a/doc/administration/high_availability/database.md
+++ b/doc/administration/high_availability/database.md
@@ -1,5 +1,12 @@
+---
+type: reference
+---
+
# Configuring PostgreSQL for Scaling and High Availability
+In this section, you'll be guided through configuring a PostgreSQL database
+to be used with GitLab in a highly available environment.
+
## Provide your own PostgreSQL instance **(CORE ONLY)**
If you're hosting GitLab on a cloud provider, you can optionally use a
@@ -33,15 +40,15 @@ deploy the bundled PostgreSQL.
1. SSH into the PostgreSQL server.
1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
package you want using **steps 1 and 2** from the GitLab downloads page.
- - Do not complete any other steps on the download page.
+ - Do not complete any other steps on the download page.
1. Generate a password hash for PostgreSQL. This assumes you will use the default
username of `gitlab` (recommended). The command will request a password
and confirmation. Use the value that is output by this command in the next
step as the value of `POSTGRESQL_PASSWORD_HASH`.
- ```sh
- sudo gitlab-ctl pg-password-md5 gitlab
- ```
+ ```sh
+ sudo gitlab-ctl pg-password-md5 gitlab
+ ```
1. Edit `/etc/gitlab/gitlab.rb` and add the contents below, updating placeholder
values appropriately.
@@ -51,32 +58,32 @@ deploy the bundled PostgreSQL.
addresses of the GitLab application servers that will connect to the
database. Example: `%w(123.123.123.123/32 123.123.123.234/32)`
- ```ruby
- # Disable all components except PostgreSQL
- roles ['postgres_role']
- repmgr['enable'] = false
- consul['enable'] = false
- prometheus['enable'] = false
- alertmanager['enable'] = false
- pgbouncer_exporter['enable'] = false
- redis_exporter['enable'] = false
- gitlab_monitor['enable'] = false
-
- postgresql['listen_address'] = '0.0.0.0'
- postgresql['port'] = 5432
-
- # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
- postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
-
- # Replace XXX.XXX.XXX.XXX/YY with Network Address
- # ????
- postgresql['trust_auth_cidr_addresses'] = %w(APPLICATION_SERVER_IP_BLOCKS)
-
- # Disable automatic database migrations
- gitlab_rails['auto_migrate'] = false
- ```
+ ```ruby
+ # Disable all components except PostgreSQL
+ roles ['postgres_role']
+ repmgr['enable'] = false
+ consul['enable'] = false
+ prometheus['enable'] = false
+ alertmanager['enable'] = false
+ pgbouncer_exporter['enable'] = false
+ redis_exporter['enable'] = false
+ gitlab_monitor['enable'] = false
+
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['port'] = 5432
+
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
+
+ # Replace XXX.XXX.XXX.XXX/YY with Network Address
+ # ????
+ postgresql['trust_auth_cidr_addresses'] = %w(APPLICATION_SERVER_IP_BLOCKS)
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+ ```
- NOTE: **Note:** The role `postgres_role` was introduced with GitLab 10.3
+ NOTE: **Note:** The role `postgres_role` was introduced with GitLab 10.3
1. [Reconfigure GitLab] for the changes to take effect.
1. Note the PostgreSQL node's IP address or hostname, port, and
@@ -194,9 +201,9 @@ When using default setup, minimum configuration requires:
- `CONSUL_PASSWORD_HASH`. This is a hash generated out of consul username/password pair.
Can be generated with:
- ```sh
- sudo gitlab-ctl pg-password-md5 CONSUL_USERNAME
- ```
+ ```sh
+ sudo gitlab-ctl pg-password-md5 CONSUL_USERNAME
+ ```
- `CONSUL_SERVER_NODES`. The IP addresses or DNS records of the Consul server nodes.
@@ -237,9 +244,9 @@ We will need the following password information for the application's database u
- `POSTGRESQL_PASSWORD_HASH`. This is a hash generated out of the username/password pair.
Can be generated with:
- ```sh
- sudo gitlab-ctl pg-password-md5 POSTGRESQL_USERNAME
- ```
+ ```sh
+ sudo gitlab-ctl pg-password-md5 POSTGRESQL_USERNAME
+ ```
##### Pgbouncer information
@@ -250,9 +257,9 @@ When using default setup, minimum configuration requires:
- `PGBOUNCER_PASSWORD_HASH`. This is a hash generated out of pgbouncer username/password pair.
Can be generated with:
- ```sh
- sudo gitlab-ctl pg-password-md5 PGBOUNCER_USERNAME
- ```
+ ```sh
+ sudo gitlab-ctl pg-password-md5 PGBOUNCER_USERNAME
+ ```
- `PGBOUNCER_NODE`, is the IP address or a FQDN of the node running Pgbouncer.
@@ -288,7 +295,6 @@ Make sure you install the necessary dependencies from step 1,
add GitLab package repository from step 2.
When installing the GitLab package, do not supply `EXTERNAL_URL` value.
-
#### Configuring the Database nodes
1. Make sure to [configure the Consul nodes](consul.md).
@@ -296,53 +302,55 @@ When installing the GitLab package, do not supply `EXTERNAL_URL` value.
1. On the master database node, edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
- ```ruby
- # Disable all components except PostgreSQL and Repmgr and Consul
- roles ['postgres_role']
-
- # PostgreSQL configuration
- postgresql['listen_address'] = '0.0.0.0'
- postgresql['hot_standby'] = 'on'
- postgresql['wal_level'] = 'replica'
- postgresql['shared_preload_libraries'] = 'repmgr_funcs'
-
- # Disable automatic database migrations
- gitlab_rails['auto_migrate'] = false
-
- # Configure the consul agent
- consul['services'] = %w(postgresql)
-
- # START user configuration
- # Please set the real values as explained in Required Information section
- #
- # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
- postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
- # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
- postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
- # Replace X with value of number of db nodes + 1
- postgresql['max_wal_senders'] = X
-
- # Replace XXX.XXX.XXX.XXX/YY with Network Address
- postgresql['trust_auth_cidr_addresses'] = %w(XXX.XXX.XXX.XXX/YY)
- repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 XXX.XXX.XXX.XXX/YY)
-
- # Replace placeholders:
- #
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses gathered for CONSUL_SERVER_NODES
- consul['configuration'] = {
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
- }
- #
- # END user configuration
- ```
-
- > `postgres_role` was introduced with GitLab 10.3
+ ```ruby
+ # Disable all components except PostgreSQL and Repmgr and Consul
+ roles ['postgres_role']
+
+ # PostgreSQL configuration
+ postgresql['listen_address'] = '0.0.0.0'
+ postgresql['hot_standby'] = 'on'
+ postgresql['wal_level'] = 'replica'
+ postgresql['shared_preload_libraries'] = 'repmgr_funcs'
+
+ # Disable automatic database migrations
+ gitlab_rails['auto_migrate'] = false
+
+ # Configure the consul agent
+ consul['services'] = %w(postgresql)
+
+ # START user configuration
+ # Please set the real values as explained in Required Information section
+ #
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ postgresql['pgbouncer_user_password'] = 'PGBOUNCER_PASSWORD_HASH'
+ # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
+ postgresql['sql_user_password'] = 'POSTGRESQL_PASSWORD_HASH'
+ # Replace X with value of number of db nodes + 1
+ postgresql['max_wal_senders'] = X
+ postgresql['max_replication_slots'] = X
+
+ # Replace XXX.XXX.XXX.XXX/YY with Network Address
+ postgresql['trust_auth_cidr_addresses'] = %w(XXX.XXX.XXX.XXX/YY)
+ repmgr['trust_auth_cidr_addresses'] = %w(127.0.0.1/32 XXX.XXX.XXX.XXX/YY)
+
+ # Replace placeholders:
+ #
+ # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
+ # with the addresses gathered for CONSUL_SERVER_NODES
+ consul['configuration'] = {
+ retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
+ }
+ #
+ # END user configuration
+ ```
+
+ > `postgres_role` was introduced with GitLab 10.3
1. On secondary nodes, add all the configuration specified above for primary node
to `/etc/gitlab/gitlab.rb`. In addition, append the following configuration
to inform gitlab-ctl that they are standby nodes initially and it need not
attempt to register them as primary node
+
```
# HA setting to specify if a node should attempt to be master on initialization
repmgr['master_on_initialization'] = false
@@ -367,31 +375,31 @@ Select one node as a primary node.
1. Open a database prompt:
- ```sh
- gitlab-psql -d gitlabhq_production
- ```
+ ```sh
+ gitlab-psql -d gitlabhq_production
+ ```
1. Enable the `pg_trgm` extension:
- ```sh
- CREATE EXTENSION pg_trgm;
- ```
+ ```sh
+ CREATE EXTENSION pg_trgm;
+ ```
1. Exit the database prompt by typing `\q` and Enter.
1. Verify the cluster is initialized with one node:
- ```sh
- gitlab-ctl repmgr cluster show
- ```
+ ```sh
+ gitlab-ctl repmgr cluster show
+ ```
- The output should be similar to the following:
+ The output should be similar to the following:
- ```
- Role | Name | Upstream | Connection String
- ----------+----------|----------|----------------------------------------
- * master | HOSTNAME | | host=HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
- ```
+ ```
+ Role | Name | Upstream | Connection String
+ ----------+----------|----------|----------------------------------------
+ * master | HOSTNAME | | host=HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
+ ```
1. Note down the hostname/ip in the connection string: `host=HOSTNAME`. We will
refer to the hostname in the next section as `MASTER_NODE_NAME`. If the value
@@ -402,43 +410,43 @@ Select one node as a primary node.
1. Set up the repmgr standby:
- ```sh
- gitlab-ctl repmgr standby setup MASTER_NODE_NAME
- ```
-
- Do note that this will remove the existing data on the node. The command
- has a wait time.
-
- The output should be similar to the following:
-
- ```console
- # gitlab-ctl repmgr standby setup MASTER_NODE_NAME
- Doing this will delete the entire contents of /var/opt/gitlab/postgresql/data
- If this is not what you want, hit Ctrl-C now to exit
- To skip waiting, rerun with the -w option
- Sleeping for 30 seconds
- Stopping the database
- Removing the data
- Cloning the data
- Starting the database
- Registering the node with the cluster
- ok: run: repmgrd: (pid 19068) 0s
- ```
+ ```sh
+ gitlab-ctl repmgr standby setup MASTER_NODE_NAME
+ ```
+
+   Note that this will remove the existing data on the node. The command
+   waits 30 seconds before proceeding; pass the `-w` option to skip the wait.
+
+ The output should be similar to the following:
+
+ ```console
+ # gitlab-ctl repmgr standby setup MASTER_NODE_NAME
+ Doing this will delete the entire contents of /var/opt/gitlab/postgresql/data
+ If this is not what you want, hit Ctrl-C now to exit
+ To skip waiting, rerun with the -w option
+ Sleeping for 30 seconds
+ Stopping the database
+ Removing the data
+ Cloning the data
+ Starting the database
+ Registering the node with the cluster
+ ok: run: repmgrd: (pid 19068) 0s
+ ```
1. Verify the node now appears in the cluster:
- ```sh
- gitlab-ctl repmgr cluster show
- ```
+ ```sh
+ gitlab-ctl repmgr cluster show
+ ```
- The output should be similar to the following:
+ The output should be similar to the following:
- ```
- Role | Name | Upstream | Connection String
- ----------+---------|-----------|------------------------------------------------
- * master | MASTER | | host=MASTER_NODE_NAME user=gitlab_repmgr dbname=gitlab_repmgr
- standby | STANDBY | MASTER | host=STANDBY_HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
- ```
+ ```
+ Role | Name | Upstream | Connection String
+ ----------+---------|-----------|------------------------------------------------
+ * master | MASTER | | host=MASTER_NODE_NAME user=gitlab_repmgr dbname=gitlab_repmgr
+ standby | STANDBY | MASTER | host=STANDBY_HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
+ ```
Repeat the above steps on all secondary nodes.
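A minimal sketch of what repeating the steps amounts to, using only the commands already shown above (run the first command on each remaining secondary node; it wipes that node's existing PostgreSQL data before cloning):

```sh
# On each remaining secondary node, clone from the master and register as a standby
gitlab-ctl repmgr standby setup MASTER_NODE_NAME
```

Afterwards, `gitlab-ctl repmgr cluster show` on any node should list every standby alongside the master.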
@@ -478,88 +486,7 @@ Check the [Troubleshooting section](#troubleshooting) before proceeding.
#### Configuring the Pgbouncer node
-1. Make sure you collect [`CONSUL_SERVER_NODES`](#consul-information), [`CONSUL_PASSWORD_HASH`](#consul-information), and [`PGBOUNCER_PASSWORD_HASH`](#pgbouncer-information) before executing the next step.
-
-1. Edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
-
- ```ruby
- # Disable all components except Pgbouncer and Consul agent
- roles ['pgbouncer_role']
-
- # Configure Pgbouncer
- pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
-
- # Configure Consul agent
- consul['watchers'] = %w(postgresql)
-
- # START user configuration
- # Please set the real values as explained in Required Information section
- # Replace CONSUL_PASSWORD_HASH with with a generated md5 value
- # Replace PGBOUNCER_PASSWORD_HASH with with a generated md5 value
- pgbouncer['users'] = {
- 'gitlab-consul': {
- password: 'CONSUL_PASSWORD_HASH'
- },
- 'pgbouncer': {
- password: 'PGBOUNCER_PASSWORD_HASH'
- }
- }
- # Replace placeholders:
- #
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses gathered for CONSUL_SERVER_NODES
- consul['configuration'] = {
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
- }
- #
- # END user configuration
- ```
-
- > `pgbouncer_role` was introduced with GitLab 10.3
-
-1. [Reconfigure GitLab] for the changes to take effect.
-
-1. Create a `.pgpass` file so Consul is able to
- reload pgbouncer. Enter the `PGBOUNCER_PASSWORD` twice when asked:
-
- ```sh
- gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
- ```
-
-##### PGBouncer Checkpoint
-
-1. Ensure the node is talking to the current master:
-
- ```sh
- gitlab-ctl pgb-console # You will be prompted for PGBOUNCER_PASSWORD
- ```
-
- If there is an error `psql: ERROR: Auth failed` after typing in the
- password, ensure you previously generated the MD5 password hashes with the correct
- format. The correct format is to concatenate the password and the username:
- `PASSWORDUSERNAME`. For example, `Sup3rS3cr3tpgbouncer` would be the text
- needed to generate an MD5 password hash for the `pgbouncer` user.
-
-1. Once the console prompt is available, run the following queries:
-
- ```sh
- show databases ; show clients ;
- ```
-
- The output should be similar to the following:
-
- ```
- name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
- ---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
- gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
- pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
- (2 rows)
-
- type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
- ------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
- C | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1 | 6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 | | 0 |
- (2 rows)
- ```
+See our [documentation for Pgbouncer](pgbouncer.md) for information on running Pgbouncer as part of an HA setup.
#### Configuring the Application nodes
@@ -568,15 +495,15 @@ attributes set, but the following need to be set.
1. Edit `/etc/gitlab/gitlab.rb`:
- ```ruby
- # Disable PostgreSQL on the application node
- postgresql['enable'] = false
+ ```ruby
+ # Disable PostgreSQL on the application node
+ postgresql['enable'] = false
- gitlab_rails['db_host'] = 'PGBOUNCER_NODE'
- gitlab_rails['db_port'] = 6432
- gitlab_rails['db_password'] = 'POSTGRESQL_USER_PASSWORD'
- gitlab_rails['auto_migrate'] = false
- ```
+ gitlab_rails['db_host'] = 'PGBOUNCER_NODE'
+ gitlab_rails['db_port'] = 6432
+ gitlab_rails['db_password'] = 'POSTGRESQL_USER_PASSWORD'
+ gitlab_rails['auto_migrate'] = false
+ ```
1. [Reconfigure GitLab] for the changes to take effect.
@@ -736,45 +663,45 @@ After deploying the configuration follow these steps:
1. On `10.6.0.21`, our primary database
- Enable the `pg_trgm` extension
+ Enable the `pg_trgm` extension
- ```sh
- gitlab-psql -d gitlabhq_production
- ```
+ ```sh
+ gitlab-psql -d gitlabhq_production
+ ```
- ```
- CREATE EXTENSION pg_trgm;
- ```
+ ```
+ CREATE EXTENSION pg_trgm;
+ ```
1. On `10.6.0.22`, our first standby database
- Make this node a standby of the primary
+ Make this node a standby of the primary
- ```sh
- gitlab-ctl repmgr standby setup 10.6.0.21
- ```
+ ```sh
+ gitlab-ctl repmgr standby setup 10.6.0.21
+ ```
1. On `10.6.0.23`, our second standby database
- Make this node a standby of the primary
+ Make this node a standby of the primary
- ```sh
- gitlab-ctl repmgr standby setup 10.6.0.21
- ```
+ ```sh
+ gitlab-ctl repmgr standby setup 10.6.0.21
+ ```
1. On `10.6.0.31`, our application server
- Set gitlab-consul's pgbouncer password to `toomanysecrets`
+ Set gitlab-consul's pgbouncer password to `toomanysecrets`
- ```sh
- gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
- ```
+ ```sh
+ gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
+ ```
- Run database migrations
+ Run database migrations
- ```sh
- gitlab-rake gitlab:db:configure
- ```
+ ```sh
+ gitlab-rake gitlab:db:configure
+ ```
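As an optional sanity check after these steps (a sketch, not part of the documented procedure), confirm the cluster membership from any database node:

```sh
# Expect 10.6.0.21 as master and 10.6.0.22 / 10.6.0.23 as standbys
gitlab-ctl repmgr cluster show
```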
#### Example minimal setup
@@ -911,16 +838,16 @@ standby nodes.
1. Ensure the old master node is not still active.
1. Login to the server that should become the new master and run:
- ```sh
- gitlab-ctl repmgr standby promote
- ```
+ ```sh
+ gitlab-ctl repmgr standby promote
+ ```
1. If there are any other standby servers in the cluster, have them follow
the new master server:
- ```sh
- gitlab-ctl repmgr standby follow NEW_MASTER
- ```
+ ```sh
+ gitlab-ctl repmgr standby follow NEW_MASTER
+ ```
#### Restore procedure
@@ -930,42 +857,42 @@ after it has been restored to service.
- If you want to remove the node from the cluster, on any other node in the
cluster, run:
- ```sh
- gitlab-ctl repmgr standby unregister --node=X
- ```
+ ```sh
+ gitlab-ctl repmgr standby unregister --node=X
+ ```
- where X is the value of node in `repmgr.conf` on the old server.
+ where X is the value of node in `repmgr.conf` on the old server.
- To find this, you can use:
+ To find this, you can use:
- ```sh
- awk -F = '$1 == "node" { print $2 }' /var/opt/gitlab/postgresql/repmgr.conf
- ```
+ ```sh
+ awk -F = '$1 == "node" { print $2 }' /var/opt/gitlab/postgresql/repmgr.conf
+ ```
- It will output something like:
+ It will output something like:
- ```
- 959789412
- ```
+ ```
+ 959789412
+ ```
- Then you will use this id to unregister the node:
+ Then you will use this id to unregister the node:
- ```sh
- gitlab-ctl repmgr standby unregister --node=959789412
- ```
+ ```sh
+ gitlab-ctl repmgr standby unregister --node=959789412
+ ```
- To add the node as a standby server:
- ```sh
- gitlab-ctl repmgr standby follow NEW_MASTER
- gitlab-ctl restart repmgrd
- ```
+ ```sh
+ gitlab-ctl repmgr standby follow NEW_MASTER
+ gitlab-ctl restart repmgrd
+ ```
- CAUTION: **Warning:** When the server is brought back online, and before
- you switch it to a standby node, repmgr will report that there are two masters.
- If there are any clients that are still attempting to write to the old master,
- this will cause a split, and the old master will need to be resynced from
- scratch by performing a `gitlab-ctl repmgr standby setup NEW_MASTER`.
+ CAUTION: **Warning:** When the server is brought back online, and before
+ you switch it to a standby node, repmgr will report that there are two masters.
+ If there are any clients that are still attempting to write to the old master,
+ this will cause a split, and the old master will need to be resynced from
+ scratch by performing a `gitlab-ctl repmgr standby setup NEW_MASTER`.
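To summarize the two recovery paths above in one place (a sketch that reuses only commands already shown in this section):

```sh
# If no split occurred, rejoin the restored node as a standby of the new master
gitlab-ctl repmgr standby follow NEW_MASTER
gitlab-ctl restart repmgrd

# If clients kept writing to the old master (split), resync it from scratch instead.
# This deletes the node's existing PostgreSQL data before cloning.
gitlab-ctl repmgr standby setup NEW_MASTER
```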
#### Alternate configurations
@@ -1008,13 +935,13 @@ the previous section:
1. On the current master node, create a password for the `gitlab` and
`gitlab_repmgr` user:
- ```sh
- gitlab-psql -d template1
- template1=# \password gitlab_repmgr
- Enter password: ****
- Confirm password: ****
- template1=# \password gitlab
- ```
+ ```sh
+ gitlab-psql -d template1
+ template1=# \password gitlab_repmgr
+ Enter password: ****
+ Confirm password: ****
+ template1=# \password gitlab
+ ```
1. On each database node:
@@ -1028,9 +955,9 @@ the previous section:
1. Create a `.pgpass` file. Enter the `gitlab_repmgr` password twice to
when asked:
- ```sh
- gitlab-ctl write-pgpass --user gitlab_repmgr --hostuser gitlab-psql --database '*'
- ```
+ ```sh
+ gitlab-ctl write-pgpass --user gitlab_repmgr --hostuser gitlab-psql --database '*'
+ ```
1. On each pgbouncer node, edit `/etc/gitlab/gitlab.rb`:
1. Ensure `gitlab_rails['db_password']` is set to the plaintext password for
@@ -1058,7 +985,7 @@ If you enable Monitoring, it must be enabled on **all** database servers.
## Troubleshooting
-### Consul and PostgreSQL changes not taking effect.
+### Consul and PostgreSQL changes not taking effect
Due to the potential impacts, `gitlab-ctl reconfigure` only reloads Consul and PostgreSQL, it will not restart the services. However, not all changes can be activated by reloading.
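If a particular change requires a full restart rather than a reload, one option is to restart the affected services explicitly. A minimal sketch, assuming an Omnibus install and that a brief interruption is acceptable for your cluster:

```sh
# Restart only the services whose settings cannot be applied by a reload
sudo gitlab-ctl restart postgresql
sudo gitlab-ctl restart consul
```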
diff --git a/doc/administration/high_availability/gitaly.md b/doc/administration/high_availability/gitaly.md
index b7eaa4ce105..739d1ae35fb 100644
--- a/doc/administration/high_availability/gitaly.md
+++ b/doc/administration/high_availability/gitaly.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# Configuring Gitaly for Scaled and High Availability
Gitaly does not yet support full high availability. However, Gitaly is quite
@@ -46,3 +50,15 @@ Continue configuration of other components by going back to:
```
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/high_availability/gitlab.md b/doc/administration/high_availability/gitlab.md
index 9b1b7142e83..8818a9606de 100644
--- a/doc/administration/high_availability/gitlab.md
+++ b/doc/administration/high_availability/gitlab.md
@@ -1,4 +1,8 @@
-# Configuring GitLab Scaling and High Availability
+---
+type: reference
+---
+
+# Configuring GitLab for Scaling and High Availability
> **Note:** There is some additional configuration near the bottom for
additional GitLab application servers. It's important to read and understand
@@ -7,33 +11,33 @@
1. If necessary, install the NFS client utility packages using the following
commands:
- ```
- # Ubuntu/Debian
- apt-get install nfs-common
+ ```
+ # Ubuntu/Debian
+ apt-get install nfs-common
- # CentOS/Red Hat
- yum install nfs-utils nfs-utils-lib
- ```
+ # CentOS/Red Hat
+ yum install nfs-utils nfs-utils-lib
+ ```
1. Specify the necessary NFS shares. Mounts are specified in
`/etc/fstab`. The exact contents of `/etc/fstab` will depend on how you chose
to configure your NFS server. See [NFS documentation](nfs.md) for the various
options. Here is an example snippet to add to `/etc/fstab`:
- ```
- 10.1.0.1:/var/opt/gitlab/.ssh /var/opt/gitlab/.ssh nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
- 10.1.0.1:/var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/uploads nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
- 10.1.0.1:/var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-rails/shared nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
- 10.1.0.1:/var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/gitlab-ci/builds nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
- 10.1.0.1:/var/opt/gitlab/git-data /var/opt/gitlab/git-data nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
- ```
+ ```
+ 10.1.0.1:/var/opt/gitlab/.ssh /var/opt/gitlab/.ssh nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+ 10.1.0.1:/var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/uploads nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+ 10.1.0.1:/var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-rails/shared nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+ 10.1.0.1:/var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/gitlab-ci/builds nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+ 10.1.0.1:/var/opt/gitlab/git-data /var/opt/gitlab/git-data nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
+ ```
1. Create the shared directories. These may be different depending on your NFS
mount locations.
- ```
- mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
- ```
+ ```
+ mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
+ ```
1. Download/install GitLab Omnibus using **steps 1 and 2** from
[GitLab downloads](https://about.gitlab.com/downloads). Do not complete other
@@ -46,52 +50,52 @@
added NFS mounts in the default data locations. Additionally the UID and GIDs
given are just examples and you should configure with your preferred values.
- ```ruby
- external_url 'https://gitlab.example.com'
-
- # Prevent GitLab from starting if NFS data mounts are not available
- high_availability['mountpoint'] = '/var/opt/gitlab/git-data'
-
- # Disable components that will not be on the GitLab application server
- roles ['application_role']
- nginx['enable'] = true
-
- # PostgreSQL connection details
- gitlab_rails['db_adapter'] = 'postgresql'
- gitlab_rails['db_encoding'] = 'unicode'
- gitlab_rails['db_host'] = '10.1.0.5' # IP/hostname of database server
- gitlab_rails['db_password'] = 'DB password'
-
- # Redis connection details
- gitlab_rails['redis_port'] = '6379'
- gitlab_rails['redis_host'] = '10.1.0.6' # IP/hostname of Redis server
- gitlab_rails['redis_password'] = 'Redis Password'
-
- # Ensure UIDs and GIDs match between servers for permissions via NFS
- user['uid'] = 9000
- user['gid'] = 9000
- web_server['uid'] = 9001
- web_server['gid'] = 9001
- registry['uid'] = 9002
- registry['gid'] = 9002
- ```
+ ```ruby
+ external_url 'https://gitlab.example.com'
+
+ # Prevent GitLab from starting if NFS data mounts are not available
+ high_availability['mountpoint'] = '/var/opt/gitlab/git-data'
+
+ # Disable components that will not be on the GitLab application server
+ roles ['application_role']
+ nginx['enable'] = true
+
+ # PostgreSQL connection details
+ gitlab_rails['db_adapter'] = 'postgresql'
+ gitlab_rails['db_encoding'] = 'unicode'
+ gitlab_rails['db_host'] = '10.1.0.5' # IP/hostname of database server
+ gitlab_rails['db_password'] = 'DB password'
+
+ # Redis connection details
+ gitlab_rails['redis_port'] = '6379'
+ gitlab_rails['redis_host'] = '10.1.0.6' # IP/hostname of Redis server
+ gitlab_rails['redis_password'] = 'Redis Password'
+
+ # Ensure UIDs and GIDs match between servers for permissions via NFS
+ user['uid'] = 9000
+ user['gid'] = 9000
+ web_server['uid'] = 9001
+ web_server['gid'] = 9001
+ registry['uid'] = 9002
+ registry['gid'] = 9002
+ ```
1. [Enable monitoring](#enable-monitoring)
- > **Note:** To maintain uniformity of links across HA clusters, the `external_url`
- on the first application server as well as the additional application
- servers should point to the external url that users will use to access GitLab.
- In a typical HA setup, this will be the url of the load balancer which will
- route traffic to all GitLab application servers in the HA cluster.
- >
- > **Note:** When you specify `https` in the `external_url`, as in the example
- above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
- certificates are not present, Nginx will fail to start. See
- [Nginx documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
- for more information.
- >
- > **Note:** It is best to set the `uid` and `gid`s prior to the initial reconfigure
- of GitLab. Omnibus will not recursively `chown` directories if set after the initial reconfigure.
+ > **Note:** To maintain uniformity of links across HA clusters, the `external_url`
+ on the first application server as well as the additional application
+   servers should point to the external URL that users will use to access GitLab.
+   In a typical HA setup, this will be the URL of the load balancer, which will
+ route traffic to all GitLab application servers in the HA cluster.
+ >
+ > **Note:** When you specify `https` in the `external_url`, as in the example
+ above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
+ certificates are not present, Nginx will fail to start. See
+ [Nginx documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
+ for more information.
+ >
+   > **Note:** It is best to set the `uid` and `gid` values prior to the initial reconfigure
+   of GitLab. Omnibus will not recursively `chown` directories if they are set after the initial reconfigure.
## First GitLab application server
@@ -114,12 +118,12 @@ need some extra configuration.
secondary servers **prior to** running the first `reconfigure` in the steps
above.
- ```ruby
- gitlab_shell['secret_token'] = 'fbfb19c355066a9afb030992231c4a363357f77345edd0f2e772359e5be59b02538e1fa6cae8f93f7d23355341cea2b93600dab6d6c3edcdced558fc6d739860'
- gitlab_rails['otp_key_base'] = 'b719fe119132c7810908bba18315259ed12888d4f5ee5430c42a776d840a396799b0a5ef0a801348c8a357f07aa72bbd58e25a84b8f247a25c72f539c7a6c5fa'
- gitlab_rails['secret_key_base'] = '6e657410d57c71b4fc3ed0d694e7842b1895a8b401d812c17fe61caf95b48a6d703cb53c112bc01ebd197a85da81b18e29682040e99b4f26594772a4a2c98c6d'
- gitlab_rails['db_key_base'] = 'bf2e47b68d6cafaef1d767e628b619365becf27571e10f196f98dc85e7771042b9203199d39aff91fcb6837c8ed83f2a912b278da50999bb11a2fbc0fba52964'
- ```
+ ```ruby
+ gitlab_shell['secret_token'] = 'fbfb19c355066a9afb030992231c4a363357f77345edd0f2e772359e5be59b02538e1fa6cae8f93f7d23355341cea2b93600dab6d6c3edcdced558fc6d739860'
+ gitlab_rails['otp_key_base'] = 'b719fe119132c7810908bba18315259ed12888d4f5ee5430c42a776d840a396799b0a5ef0a801348c8a357f07aa72bbd58e25a84b8f247a25c72f539c7a6c5fa'
+ gitlab_rails['secret_key_base'] = '6e657410d57c71b4fc3ed0d694e7842b1895a8b401d812c17fe61caf95b48a6d703cb53c112bc01ebd197a85da81b18e29682040e99b4f26594772a4a2c98c6d'
+ gitlab_rails['db_key_base'] = 'bf2e47b68d6cafaef1d767e628b619365becf27571e10f196f98dc85e7771042b9203199d39aff91fcb6837c8ed83f2a912b278da50999bb11a2fbc0fba52964'
+ ```
1. Run `touch /etc/gitlab/skip-auto-reconfigure` to prevent database migrations
from running on upgrade. Only the primary GitLab application server should
diff --git a/doc/administration/high_availability/img/fully-distributed.png b/doc/administration/high_availability/img/fully-distributed.png
index ad23207134e..c3cd2bf24f0 100644
--- a/doc/administration/high_availability/img/fully-distributed.png
+++ b/doc/administration/high_availability/img/fully-distributed.png
Binary files differ
diff --git a/doc/administration/high_availability/img/horizontal.png b/doc/administration/high_availability/img/horizontal.png
index c3bd489d96f..75d08e1097a 100644
--- a/doc/administration/high_availability/img/horizontal.png
+++ b/doc/administration/high_availability/img/horizontal.png
Binary files differ
diff --git a/doc/administration/high_availability/img/hybrid.png b/doc/administration/high_availability/img/hybrid.png
index 7d4a56bf0ea..8dd9923e597 100644
--- a/doc/administration/high_availability/img/hybrid.png
+++ b/doc/administration/high_availability/img/hybrid.png
Binary files differ
diff --git a/doc/administration/high_availability/load_balancer.md b/doc/administration/high_availability/load_balancer.md
index 28b226cacd5..f11d27487d1 100644
--- a/doc/administration/high_availability/load_balancer.md
+++ b/doc/administration/high_availability/load_balancer.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# Load Balancer for GitLab HA
In an active/active GitLab configuration, you will need a load balancer to route
@@ -56,11 +60,21 @@ for details on managing SSL certificates and configuring Nginx.
### Basic ports
-| LB Port | Backend Port | Protocol |
-| ------- | ------------ | --------------- |
-| 80 | 80 | HTTP [^1] |
-| 443 | 443 | TCP or HTTPS [^1] [^2] |
-| 22 | 22 | TCP |
+| LB Port | Backend Port | Protocol |
+| ------- | ------------ | ------------------------ |
+| 80 | 80 | HTTP (*1*) |
+| 443 | 443 | TCP or HTTPS (*1*) (*2*) |
+| 22 | 22 | TCP |
+
+- (*1*): [Web terminal](../../ci/environments.md#web-terminals) support requires
+ your load balancer to correctly handle WebSocket connections. When using
+ HTTP or HTTPS proxying, this means your load balancer must be configured
+ to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
+ [web terminal](../integration/terminal.md) integration guide for
+ more details.
+- (*2*): When using HTTPS protocol for port 443, you will need to add an SSL
+ certificate to the load balancers. If you wish to terminate SSL at the
+ GitLab application server instead, use TCP protocol.
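As an illustrative check only (assuming `nc` is installed and `gitlab.example.com` resolves to the load balancer), you can confirm the load balancer accepts connections on the basic ports:

```sh
# Verify the load balancer is reachable on the HTTP, HTTPS, and SSH ports
for port in 80 443 22; do
  nc -zv gitlab.example.com "$port"
done
```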
### GitLab Pages Ports
@@ -68,12 +82,19 @@ If you're using GitLab Pages with custom domain support you will need some
additional port configurations.
GitLab Pages requires a separate virtual IP address. Configure DNS to point the
`pages_external_url` from `/etc/gitlab/gitlab.rb` at the new virtual IP address. See the
-[GitLab Pages documentation][gitlab-pages] for more information.
+[GitLab Pages documentation](../pages/index.md) for more information.
-| LB Port | Backend Port | Protocol |
-| ------- | ------------ | -------- |
-| 80 | Varies [^3] | HTTP |
-| 443 | Varies [^3] | TCP [^4] |
+| LB Port | Backend Port | Protocol |
+| ------- | ------------- | --------- |
+| 80 | Varies (*1*) | HTTP |
+| 443 | Varies (*1*) | TCP (*2*) |
+
+- (*1*): The backend port for GitLab Pages depends on the
+ `gitlab_pages['external_http']` and `gitlab_pages['external_https']`
+ setting. See [GitLab Pages documentation](../pages/index.md) for more details.
+- (*2*): Port 443 for GitLab Pages should always use the TCP protocol. Users can
+ configure custom domains with custom SSL, which would not be possible
+ if SSL was terminated at the load balancer.
### Alternate SSH Port
@@ -82,7 +103,7 @@ it may be helpful to configure an alternate SSH hostname that allows users
to use SSH on port 443. An alternate SSH hostname will require a new virtual IP address
compared to the other GitLab HTTP configuration above.
-Configure DNS for an alternate SSH hostname such as altssh.gitlab.example.com.
+Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
| LB Port | Backend Port | Protocol |
| ------- | ------------ | -------- |
@@ -97,20 +118,14 @@ Read more on high-availability configuration:
1. [Configure NFS](nfs.md)
1. [Configure the GitLab application servers](gitlab.md)
-[^1]: [Web terminal](../../ci/environments.md#web-terminals) support requires
- your load balancer to correctly handle WebSocket connections. When using
- HTTP or HTTPS proxying, this means your load balancer must be configured
- to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
- [web terminal](../integration/terminal.md) integration guide for
- more details.
-[^2]: When using HTTPS protocol for port 443, you will need to add an SSL
- certificate to the load balancers. If you wish to terminate SSL at the
- GitLab application server instead, use TCP protocol.
-[^3]: The backend port for GitLab Pages depends on the
- `gitlab_pages['external_http']` and `gitlab_pages['external_https']`
- setting. See [GitLab Pages documentation][gitlab-pages] for more details.
-[^4]: Port 443 for GitLab Pages should always use the TCP protocol. Users can
- configure custom domains with custom SSL, which would not be possible
- if SSL was terminated at the load balancer.
-
-[gitlab-pages]: ../pages/index.md
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/high_availability/monitoring_node.md b/doc/administration/high_availability/monitoring_node.md
index 385e7441ac9..b2750603c74 100644
--- a/doc/administration/high_availability/monitoring_node.md
+++ b/doc/administration/high_availability/monitoring_node.md
@@ -1,10 +1,16 @@
+---
+type: reference
+---
+
# Configuring a Monitoring node for Scaling and High Availability
> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/3786) in GitLab 12.0.
+You can configure a Prometheus node to monitor GitLab.
+
## Standalone Monitoring node using GitLab Omnibus
-The GitLab Omnibus package can be used to configure a standalone Monitoring node running Prometheus and Grafana.
+The GitLab Omnibus package can be used to configure a standalone Monitoring node running [Prometheus](../monitoring/prometheus/index.md) and [Grafana](../monitoring/performance/grafana_configuration.md).
The monitoring node is not highly available. See [Scaling and High Availability](README.md)
for an overview of GitLab scaling and high availability options.
@@ -12,7 +18,7 @@ The steps below are the minimum necessary to configure a Monitoring node running
Omnibus:
1. SSH into the Monitoring node.
-1. [Download/install](https://about.gitlab.com/install) the Omnibus GitLab
+1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
package you want using **steps 1 and 2** from the GitLab downloads page.
- Do not complete any other steps on the download page.
@@ -20,44 +26,44 @@ Omnibus:
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
- ```ruby
- external_url 'http://gitlab.example.com'
-
- # Enable Prometheus
- prometheus['enable'] = true
- prometheus['listen_address'] = '0.0.0.0:9090'
- prometheus['monitor_kubernetes'] = false
-
- # Enable Grafana
- grafana['enable'] = true
- grafana['admin_password'] = 'toomanysecrets'
-
- # Enable service discovery for Prometheus
- consul['enable'] = true
- consul['monitoring_service_discovery'] = true
-
- # Replace placeholders
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses of the Consul server nodes
- consul['configuration'] = {
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
- }
-
- # Disable all other services
- gitlab_rails['auto_migrate'] = false
- alertmanager['enable'] = false
- gitaly['enable'] = false
- gitlab_monitor['enable'] = false
- gitlab_workhorse['enable'] = false
- nginx['enable'] = true
- postgres_exporter['enable'] = false
- postgresql['enable'] = false
- redis['enable'] = false
- redis_exporter['enable'] = false
- sidekiq['enable'] = false
- unicorn['enable'] = false
- node_exporter['enable'] = false
- ```
+ ```ruby
+ external_url 'http://gitlab.example.com'
+
+ # Enable Prometheus
+ prometheus['enable'] = true
+ prometheus['listen_address'] = '0.0.0.0:9090'
+ prometheus['monitor_kubernetes'] = false
+
+ # Enable Grafana
+ grafana['enable'] = true
+ grafana['admin_password'] = 'toomanysecrets'
+
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
+
+ # Replace placeholders
+ # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
+ # with the addresses of the Consul server nodes
+ consul['configuration'] = {
+ retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
+ }
+
+ # Disable all other services
+ gitlab_rails['auto_migrate'] = false
+ alertmanager['enable'] = false
+ gitaly['enable'] = false
+ gitlab_monitor['enable'] = false
+ gitlab_workhorse['enable'] = false
+ nginx['enable'] = true
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ sidekiq['enable'] = false
+ unicorn['enable'] = false
+ node_exporter['enable'] = false
+ ```
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
@@ -67,3 +73,15 @@ Once monitoring using Service Discovery is enabled with `consul['monitoring_serv
ensure that `prometheus['scrape_configs']` is not set in `/etc/gitlab/gitlab.rb`. Setting both
`consul['monitoring_service_discovery'] = true` and `prometheus['scrape_configs']` in `/etc/gitlab/gitlab.rb`
will result in errors.
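A quick way to check for this conflict before reconfiguring (the `grep` is an illustration, not an official check):

```sh
# Only consul['monitoring_service_discovery'] should appear; scrape_configs should not be set
grep -n -e "monitoring_service_discovery" -e "scrape_configs" /etc/gitlab/gitlab.rb
```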
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/high_availability/nfs.md b/doc/administration/high_availability/nfs.md
index 561ba214686..274bd32299b 100644
--- a/doc/administration/high_availability/nfs.md
+++ b/doc/administration/high_availability/nfs.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# NFS
You can view information and options set for each of the mounted NFS file
@@ -42,34 +46,20 @@ maintaining ID mapping without LDAP, in most cases you should enable numeric UID
and GIDs (which is off by default in some cases) for simplified permission
management between systems:
- - [NetApp instructions](https://library.netapp.com/ecmdocs/ECMP1401220/html/GUID-24367A9F-E17B-4725-ADC1-02D86F56F78E.html)
- - For non-NetApp devices, disable NFSv4 `idmapping` by performing opposite of [enable NFSv4 idmapper](https://wiki.archlinux.org/index.php/NFS#Enabling_NFSv4_idmapping)
+- [NetApp instructions](https://library.netapp.com/ecmdocs/ECMP1401220/html/GUID-24367A9F-E17B-4725-ADC1-02D86F56F78E.html)
+- For non-NetApp devices, disable NFSv4 `idmapping` by performing opposite of [enable NFSv4 idmapper](https://wiki.archlinux.org/index.php/NFS#Enabling_NFSv4_idmapping)
### Improving NFS performance with GitLab
-NOTE: **Note:** This is only available starting in certain versions of GitLab: 11.5.11,
-11.6.11, 11.7.12, 11.8.8, 11.9.0 and up (e.g. 11.10, 11.11, etc.)
+NOTE: **Note:** From GitLab 12.1, GitLab automatically detects whether Rugged can and should be used per storage.
-If you are using NFS to share Git data, we recommend that you enable a
-number of feature flags that will allow GitLab application processes to
-access Git data directly instead of going through the [Gitaly
-service](../gitaly/index.md). Depending on your workload and disk
-performance, these flags may help improve performance. See [the
-issue](https://gitlab.com/gitlab-org/gitlab-ce/issues/57317) for more
-details.
-
-To do this, run the Rake task:
+If you previously enabled Rugged using the feature flag, you will need to unset the feature flag by using:
```sh
-sudo gitlab-rake gitlab:features:enable_rugged
+sudo gitlab-rake gitlab:features:unset_rugged
```
-If you need to undo this setting for some reason such as switching to [Gitaly without NFS](gitaly.md)
-(recommended), run:
-
-```sh
-sudo gitlab-rake gitlab:features:disable_rugged
-```
+If the Rugged feature flag is explicitly set to either true or false, GitLab uses that value instead of detecting it automatically.
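For example, a sketch of setting the flag explicitly, assuming the `gitlab:features:enable_rugged` and `gitlab:features:disable_rugged` Rake tasks shown in the text removed above are still available:

```sh
# Force Rugged on explicitly instead of relying on automatic detection
sudo gitlab-rake gitlab:features:enable_rugged

# Or force it off explicitly
sudo gitlab-rake gitlab:features:disable_rugged
```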
### Known issues
@@ -81,16 +71,21 @@ bug](https://bugzilla.redhat.com/show_bug.cgi?id=1552203) that may be fixed in
[more recent kernels with this
commit](https://github.com/torvalds/linux/commit/95da1b3a5aded124dd1bda1e3cdb876184813140).
+NOTE: **Note:** Red Hat Enterprise Linux 7 [shipped a kernel
+update](https://access.redhat.com/errata/RHSA-2019:2029) on August 6,
+2019 that may have resolved this problem. If the kernel has been
+updated, the following instructions may no longer be needed.
+
GitLab recommends all NFS users disable the NFS server
delegation feature. To disable NFS server delegations
on an Linux NFS server, do the following:
1. On the NFS server, run:
- ```sh
- echo 0 > /proc/sys/fs/leases-enable
- sysctl -w fs.leases-enable=0
- ```
+ ```sh
+ echo 0 > /proc/sys/fs/leases-enable
+ sysctl -w fs.leases-enable=0
+ ```
1. Restart the NFS server process. For example, on CentOS run `service nfs restart`.
@@ -236,3 +231,15 @@ Read more on high-availability configuration:
1. [Configure the load balancers](load_balancer.md)
[udp-log-shipping]: https://docs.gitlab.com/omnibus/settings/logs.html#udp-log-shipping-gitlab-enterprise-edition-only "UDP log shipping"
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/high_availability/nfs_host_client_setup.md b/doc/administration/high_availability/nfs_host_client_setup.md
index a8d69b9ab0a..9b0e085fe25 100644
--- a/doc/administration/high_availability/nfs_host_client_setup.md
+++ b/doc/administration/high_availability/nfs_host_client_setup.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# Configuring NFS for GitLab HA
Setting up NFS for a GitLab HA setup allows all applications nodes in a cluster
@@ -133,3 +137,15 @@ client with the command below.
```sh
sudo ufw allow from <client-ip-address> to any port nfs
```
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/high_availability/pgbouncer.md b/doc/administration/high_availability/pgbouncer.md
index 053dae25823..b99724d12a2 100644
--- a/doc/administration/high_availability/pgbouncer.md
+++ b/doc/administration/high_availability/pgbouncer.md
@@ -1,6 +1,8 @@
-# Working with the bundle Pgbouncer service
+---
+type: reference
+---
-## Overview
+# Working with the bundled Pgbouncer service
As part of its High Availability stack, GitLab Premium includes a bundled version of [Pgbouncer](https://pgbouncer.github.io/) that can be managed through `/etc/gitlab/gitlab.rb`.
@@ -14,7 +16,89 @@ It is recommended to run pgbouncer alongside the `gitlab-rails` service, or on i
### Running Pgbouncer as part of an HA GitLab installation
-See our [HA documentation for PostgreSQL](database.md) for information on running pgbouncer as part of a HA setup
+1. Make sure you collect [`CONSUL_SERVER_NODES`](database.md#consul-information), [`CONSUL_PASSWORD_HASH`](database.md#consul-information), and [`PGBOUNCER_PASSWORD_HASH`](database.md#pgbouncer-information) before executing the next step.
+
+1. Edit `/etc/gitlab/gitlab.rb` replacing values noted in the `# START user configuration` section:
+
+ ```ruby
+ # Disable all components except Pgbouncer and Consul agent
+ roles ['pgbouncer_role']
+
+ # Configure Pgbouncer
+ pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
+
+ # Configure Consul agent
+ consul['watchers'] = %w(postgresql)
+
+ # START user configuration
+ # Please set the real values as explained in Required Information section
+ # Replace CONSUL_PASSWORD_HASH with a generated md5 value
+ # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
+ pgbouncer['users'] = {
+ 'gitlab-consul': {
+ password: 'CONSUL_PASSWORD_HASH'
+ },
+ 'pgbouncer': {
+ password: 'PGBOUNCER_PASSWORD_HASH'
+ }
+ }
+ # Replace placeholders:
+ #
+ # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
+ # with the addresses gathered for CONSUL_SERVER_NODES
+ consul['configuration'] = {
+ retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z)
+ }
+ #
+ # END user configuration
+ ```
+
+ NOTE: **Note:**
+ `pgbouncer_role` was introduced with GitLab 10.3.
+
+1. Run `gitlab-ctl reconfigure`
+
+1. Create a `.pgpass` file so Consul is able to
+ reload pgbouncer. Enter the `PGBOUNCER_PASSWORD` twice when asked:
+
+ ```sh
+ gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
+ ```
+
+#### PGBouncer Checkpoint
+
+1. Ensure the node is talking to the current master:
+
+ ```sh
+ gitlab-ctl pgb-console # You will be prompted for PGBOUNCER_PASSWORD
+ ```
+
+ If there is an error `psql: ERROR: Auth failed` after typing in the
+ password, ensure you previously generated the MD5 password hashes with the correct
+ format. The correct format is to concatenate the password and the username:
+ `PASSWORDUSERNAME`. For example, `Sup3rS3cr3tpgbouncer` would be the text
+ needed to generate an MD5 password hash for the `pgbouncer` user (a worked sketch follows this checkpoint).
+
+1. Once the console prompt is available, run the following queries:
+
+ ```sh
+ show databases ; show clients ;
+ ```
+
+ The output should be similar to the following:
+
+ ```
+ name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
+ ---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
+ gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
+ pgbouncer | | 6432 | pgbouncer | pgbouncer | 2 | 0 | statement | 0 | 0
+ (2 rows)
+
+ type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link | remote_pid | tls
+ ------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
+ C | pgbouncer | pgbouncer | active | 127.0.0.1 | 56846 | 127.0.0.1 | 6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 | | 0 |
+ (2 rows)
+ ```
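A worked sketch of the hash format described in this checkpoint (the `echo`/`md5sum` pipeline only illustrates what gets hashed; `gitlab-ctl pg-password-md5` remains the supported way to produce the value for `gitlab.rb`):

```sh
# The text that is hashed is the password immediately followed by the username
echo -n 'Sup3rS3cr3tpgbouncer' | md5sum

# Supported helper: prompts for the password and prints the hash to use
sudo gitlab-ctl pg-password-md5 pgbouncer
```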
### Running Pgbouncer as part of a non-HA GitLab installation
@@ -24,39 +108,39 @@ See our [HA documentation for PostgreSQL](database.md) for information on runnin
1. On your database node, ensure the following is set in your `/etc/gitlab/gitlab.rb`
- ```ruby
- postgresql['pgbouncer_user_password'] = 'PGBOUNCER_USER_PASSWORD_HASH'
- postgresql['sql_user_password'] = 'SQL_USER_PASSWORD_HASH'
- postgresql['listen_address'] = 'XX.XX.XX.Y' # Where XX.XX.XX.Y is the ip address on the node postgresql should listen on
- postgresql['md5_auth_cidr_addresses'] = %w(AA.AA.AA.B/32) # Where AA.AA.AA.B is the IP address of the pgbouncer node
- ```
+ ```ruby
+ postgresql['pgbouncer_user_password'] = 'PGBOUNCER_USER_PASSWORD_HASH'
+ postgresql['sql_user_password'] = 'SQL_USER_PASSWORD_HASH'
+ postgresql['listen_address'] = 'XX.XX.XX.Y' # Where XX.XX.XX.Y is the ip address on the node postgresql should listen on
+ postgresql['md5_auth_cidr_addresses'] = %w(AA.AA.AA.B/32) # Where AA.AA.AA.B is the IP address of the pgbouncer node
+ ```
1. Run `gitlab-ctl reconfigure`
- **Note:** If the database was already running, it will need to be restarted after reconfigure by running `gitlab-ctl restart postgresql`.
+ **Note:** If the database was already running, it will need to be restarted after reconfigure by running `gitlab-ctl restart postgresql`.
1. On the node you are running pgbouncer on, make sure the following is set in `/etc/gitlab/gitlab.rb`
- ```ruby
- pgbouncer['enable'] = true
- pgbouncer['databases'] = {
- gitlabhq_production: {
- host: 'DATABASE_HOST',
- user: 'pgbouncer',
- password: 'PGBOUNCER_USER_PASSWORD_HASH'
- }
- }
- ```
+ ```ruby
+ pgbouncer['enable'] = true
+ pgbouncer['databases'] = {
+ gitlabhq_production: {
+ host: 'DATABASE_HOST',
+ user: 'pgbouncer',
+ password: 'PGBOUNCER_USER_PASSWORD_HASH'
+ }
+ }
+ ```
1. Run `gitlab-ctl reconfigure`
1. On the node running unicorn, make sure the following is set in `/etc/gitlab/gitlab.rb`
- ```ruby
- gitlab_rails['db_host'] = 'PGBOUNCER_HOST'
- gitlab_rails['db_port'] = '6432'
- gitlab_rails['db_password'] = 'SQL_USER_PASSWORD'
- ```
+ ```ruby
+ gitlab_rails['db_host'] = 'PGBOUNCER_HOST'
+ gitlab_rails['db_port'] = '6432'
+ gitlab_rails['db_password'] = 'SQL_USER_PASSWORD'
+ ```
1. Run `gitlab-ctl reconfigure`
@@ -66,28 +150,28 @@ See our [HA documentation for PostgreSQL](database.md) for information on runnin
> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/issues/3786) in GitLab 12.0.
- If you enable Monitoring, it must be enabled on **all** pgbouncer servers.
+If you enable Monitoring, it must be enabled on **all** pgbouncer servers.
- 1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
+1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
- ```ruby
- # Enable service discovery for Prometheus
- consul['enable'] = true
- consul['monitoring_service_discovery'] = true
+ ```ruby
+ # Enable service discovery for Prometheus
+ consul['enable'] = true
+ consul['monitoring_service_discovery'] = true
- # Replace placeholders
- # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
- # with the addresses of the Consul server nodes
- consul['configuration'] = {
- retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
- }
+ # Replace placeholders
+ # Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
+ # with the addresses of the Consul server nodes
+ consul['configuration'] = {
+ retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
+ }
- # Set the network addresses that the exporters will listen on
- node_exporter['listen_address'] = '0.0.0.0:9100'
- pgbouncer_exporter['listen_address'] = '0.0.0.0:9188'
- ```
+ # Set the network addresses that the exporters will listen on
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ pgbouncer_exporter['listen_address'] = '0.0.0.0:9188'
+ ```
- 1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
+1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
### Interacting with pgbouncer
@@ -109,6 +193,7 @@ pgbouncer=#
The password you will be prompted for is the PGBOUNCER_USER_PASSWORD
To get some basic information about the instance, run
+
```shell
pgbouncer=# show databases; show clients; show servers;
name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
diff --git a/doc/administration/high_availability/redis.md b/doc/administration/high_availability/redis.md
index 27310c59755..1b79dde9476 100644
--- a/doc/administration/high_availability/redis.md
+++ b/doc/administration/high_availability/redis.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# Configuring Redis for Scaling and High Availability
## Provide your own Redis instance **(CORE ONLY)**
@@ -47,28 +51,28 @@ Omnibus:
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
- ```ruby
- ## Enable Redis
- redis['enable'] = true
-
- ## Disable all other services
- sidekiq['enable'] = false
- gitlab_workhorse['enable'] = false
- unicorn['enable'] = false
- postgresql['enable'] = false
- nginx['enable'] = false
- prometheus['enable'] = false
- alertmanager['enable'] = false
- pgbouncer_exporter['enable'] = false
- gitlab_monitor['enable'] = false
- gitaly['enable'] = false
-
- redis['bind'] = '0.0.0.0'
- redis['port'] = '6379'
- redis['password'] = 'SECRET_PASSWORD_HERE'
-
- gitlab_rails['auto_migrate'] = false
- ```
+ ```ruby
+ ## Enable Redis
+ redis['enable'] = true
+
+ ## Disable all other services
+ sidekiq['enable'] = false
+ gitlab_workhorse['enable'] = false
+ unicorn['enable'] = false
+ postgresql['enable'] = false
+ nginx['enable'] = false
+ prometheus['enable'] = false
+ alertmanager['enable'] = false
+ pgbouncer_exporter['enable'] = false
+ gitlab_monitor['enable'] = false
+ gitaly['enable'] = false
+
+ redis['bind'] = '0.0.0.0'
+ redis['port'] = '6379'
+ redis['password'] = 'SECRET_PASSWORD_HERE'
+
+ gitlab_rails['auto_migrate'] = false
+ ```
1. [Reconfigure Omnibus GitLab][reconfigure] for the changes to take effect.
1. Note the Redis node's IP address or hostname, port, and
@@ -359,37 +363,37 @@ The prerequisites for a HA Redis setup are the following:
1. SSH into the **master** Redis server.
1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
package you want using **steps 1 and 2** from the GitLab downloads page.
- - Make sure you select the correct Omnibus package, with the same version
- and type (Community, Enterprise editions) of your current install.
- - Do not complete any other steps on the download page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community, Enterprise editions) of your current install.
+ - Do not complete any other steps on the download page.
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
- ```ruby
- # Specify server role as 'redis_master_role'
- roles ['redis_master_role']
+ ```ruby
+ # Specify server role as 'redis_master_role'
+ roles ['redis_master_role']
- # IP address pointing to a local IP that the other machines can reach to.
- # You can also set bind to '0.0.0.0' which listen in all interfaces.
- # If you really need to bind to an external accessible IP, make
- # sure you add extra firewall rules to prevent unauthorized access.
- redis['bind'] = '10.0.0.1'
+ # IP address pointing to a local IP that the other machines can reach.
+ # You can also set bind to '0.0.0.0' to listen on all interfaces.
+ # If you really need to bind to an externally accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.0.0.1'
- # Define a port so Redis can listen for TCP requests which will allow other
- # machines to connect to it.
- redis['port'] = 6379
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
- # Set up password authentication for Redis (use the same password in all nodes).
- redis['password'] = 'redis-password-goes-here'
- ```
+ # Set up password authentication for Redis (use the same password in all nodes).
+ redis['password'] = 'redis-password-goes-here'
+ ```
1. Only the primary GitLab application server should handle migrations. To
prevent database migrations from running on upgrade, add the following
configuration to your `/etc/gitlab/gitlab.rb` file:
- ```
- gitlab_rails['auto_migrate'] = false
- ```
+ ```
+ gitlab_rails['auto_migrate'] = false
+ ```
1. [Reconfigure Omnibus GitLab][reconfigure] for the changes to take effect.
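To verify the master is reachable on the configured address and port, one option (a sketch that assumes the Omnibus-embedded `redis-cli` and the example values above) is:

```sh
# Expect a PONG reply if Redis is up and the password is accepted
/opt/gitlab/embedded/bin/redis-cli -h 10.0.0.1 -p 6379 -a 'redis-password-goes-here' ping
```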
@@ -402,42 +406,42 @@ The prerequisites for a HA Redis setup are the following:
1. SSH into the **slave** Redis server.
1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
package you want using **steps 1 and 2** from the GitLab downloads page.
- - Make sure you select the correct Omnibus package, with the same version
- and type (Community, Enterprise editions) of your current install.
- - Do not complete any other steps on the download page.
+ - Make sure you select the correct Omnibus package, with the same version
+ and type (Community, Enterprise editions) of your current install.
+ - Do not complete any other steps on the download page.
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
- ```ruby
- # Specify server role as 'redis_slave_role'
- roles ['redis_slave_role']
+ ```ruby
+ # Specify server role as 'redis_slave_role'
+ roles ['redis_slave_role']
- # IP address pointing to a local IP that the other machines can reach to.
- # You can also set bind to '0.0.0.0' which listen in all interfaces.
- # If you really need to bind to an external accessible IP, make
- # sure you add extra firewall rules to prevent unauthorized access.
- redis['bind'] = '10.0.0.2'
+ # IP address pointing to a local IP that the other machines can reach.
+ # You can also set bind to '0.0.0.0' to listen on all interfaces.
+ # If you really need to bind to an externally accessible IP, make
+ # sure you add extra firewall rules to prevent unauthorized access.
+ redis['bind'] = '10.0.0.2'
- # Define a port so Redis can listen for TCP requests which will allow other
- # machines to connect to it.
- redis['port'] = 6379
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
- # The same password for Redis authentication you set up for the master node.
- redis['password'] = 'redis-password-goes-here'
+ # The same password for Redis authentication you set up for the master node.
+ redis['password'] = 'redis-password-goes-here'
- # The IP of the master Redis node.
- redis['master_ip'] = '10.0.0.1'
+ # The IP of the master Redis node.
+ redis['master_ip'] = '10.0.0.1'
- # Port of master Redis server, uncomment to change to non default. Defaults
- # to `6379`.
- #redis['master_port'] = 6379
- ```
+ # Port of master Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+ ```
1. To prevent reconfigure from running automatically on upgrade, run:
- ```
- sudo touch /etc/gitlab/skip-auto-reconfigure
- ```
+ ```
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
1. [Reconfigure Omnibus GitLab][reconfigure] for the changes to take effect.
1. Go through the steps again for all the other slave nodes.
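A sketch of verifying replication from the master node, under the same assumption about the Omnibus-embedded `redis-cli`:

```sh
# The connected slaves should appear in the replication section of the output
/opt/gitlab/embedded/bin/redis-cli -h 10.0.0.1 -a 'redis-password-goes-here' info replication
```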
@@ -487,89 +491,89 @@ multiple machines with the Sentinel daemon.
1. **You can omit this step if the Sentinels will be hosted in the same node as
the other Redis instances.**
- [Download/install](https://about.gitlab.com/downloads-ee) the
- Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
- GitLab downloads page.
- - Make sure you select the correct Omnibus package, with the same version
- the GitLab application is running.
- - Do not complete any other steps on the download page.
+ [Download/install](https://about.gitlab.com/downloads-ee) the
+ Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+ GitLab downloads page.
+     - Make sure you select the correct Omnibus package, with the same version
+       as the GitLab application is running.
+ - Do not complete any other steps on the download page.
1. Edit `/etc/gitlab/gitlab.rb` and add the contents (if you are installing the
Sentinels in the same node as the other Redis instances, some values might
be duplicate below):
- ```ruby
- roles ['redis_sentinel_role']
-
- # Must be the same in every sentinel node
- redis['master_name'] = 'gitlab-redis'
-
- # The same password for Redis authentication you set up for the master node.
- redis['master_password'] = 'redis-password-goes-here'
-
- # The IP of the master Redis node.
- redis['master_ip'] = '10.0.0.1'
-
- # Define a port so Redis can listen for TCP requests which will allow other
- # machines to connect to it.
- redis['port'] = 6379
-
- # Port of master Redis server, uncomment to change to non default. Defaults
- # to `6379`.
- #redis['master_port'] = 6379
-
- ## Configure Sentinel
- sentinel['bind'] = '10.0.0.1'
-
- # Port that Sentinel listens on, uncomment to change to non default. Defaults
- # to `26379`.
- # sentinel['port'] = 26379
-
- ## Quorum must reflect the amount of voting sentinels it take to start a failover.
- ## Value must NOT be greater then the amount of sentinels.
- ##
- ## The quorum can be used to tune Sentinel in two ways:
- ## 1. If a the quorum is set to a value smaller than the majority of Sentinels
- ## we deploy, we are basically making Sentinel more sensible to master failures,
- ## triggering a failover as soon as even just a minority of Sentinels is no longer
- ## able to talk with the master.
- ## 1. If a quorum is set to a value greater than the majority of Sentinels, we are
- ## making Sentinel able to failover only when there are a very large number (larger
- ## than majority) of well connected Sentinels which agree about the master being down.s
- sentinel['quorum'] = 2
-
- ## Consider unresponsive server down after x amount of ms.
- # sentinel['down_after_milliseconds'] = 10000
-
- ## Specifies the failover timeout in milliseconds. It is used in many ways:
- ##
- ## - The time needed to re-start a failover after a previous failover was
- ## already tried against the same master by a given Sentinel, is two
- ## times the failover timeout.
- ##
- ## - The time needed for a slave replicating to a wrong master according
- ## to a Sentinel current configuration, to be forced to replicate
- ## with the right master, is exactly the failover timeout (counting since
- ## the moment a Sentinel detected the misconfiguration).
- ##
- ## - The time needed to cancel a failover that is already in progress but
- ## did not produced any configuration change (SLAVEOF NO ONE yet not
- ## acknowledged by the promoted slave).
- ##
- ## - The maximum time a failover in progress waits for all the slaves to be
- ## reconfigured as slaves of the new master. However even after this time
- ## the slaves will be reconfigured by the Sentinels anyway, but not with
- ## the exact parallel-syncs progression as specified.
- # sentinel['failover_timeout'] = 60000
- ```
+ ```ruby
+ roles ['redis_sentinel_role']
+
+ # Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ # The same password for Redis authentication you set up for the master node.
+ redis['master_password'] = 'redis-password-goes-here'
+
+ # The IP of the master Redis node.
+ redis['master_ip'] = '10.0.0.1'
+
+ # Define a port so Redis can listen for TCP requests which will allow other
+ # machines to connect to it.
+ redis['port'] = 6379
+
+ # Port of master Redis server, uncomment to change to non default. Defaults
+ # to `6379`.
+ #redis['master_port'] = 6379
+
+ ## Configure Sentinel
+ sentinel['bind'] = '10.0.0.1'
+
+ # Port that Sentinel listens on, uncomment to change to non default. Defaults
+ # to `26379`.
+ # sentinel['port'] = 26379
+
+   ## Quorum must reflect the number of voting Sentinels it takes to start a failover.
+   ## The value must NOT be greater than the number of Sentinels.
+   ##
+   ## The quorum can be used to tune Sentinel in two ways:
+   ## 1. If the quorum is set to a value smaller than the majority of Sentinels
+   ##    we deploy, we are making Sentinel more sensitive to master failures,
+   ##    triggering a failover as soon as even just a minority of Sentinels is no longer
+   ##    able to talk with the master.
+   ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
+   ##    making Sentinel able to fail over only when a very large number (larger
+   ##    than the majority) of well-connected Sentinels agree that the master is down.
+ sentinel['quorum'] = 2
+
+   ## Consider an unresponsive server down after x milliseconds.
+ # sentinel['down_after_milliseconds'] = 10000
+
+ ## Specifies the failover timeout in milliseconds. It is used in many ways:
+ ##
+ ## - The time needed to re-start a failover after a previous failover was
+ ## already tried against the same master by a given Sentinel, is two
+ ## times the failover timeout.
+ ##
+ ## - The time needed for a slave replicating to a wrong master according
+   ##   to a Sentinel's current configuration, to be forced to replicate
+ ## with the right master, is exactly the failover timeout (counting since
+ ## the moment a Sentinel detected the misconfiguration).
+ ##
+ ## - The time needed to cancel a failover that is already in progress but
+   ##   did not produce any configuration change (SLAVEOF NO ONE yet not
+ ## acknowledged by the promoted slave).
+ ##
+ ## - The maximum time a failover in progress waits for all the slaves to be
+ ## reconfigured as slaves of the new master. However even after this time
+ ## the slaves will be reconfigured by the Sentinels anyway, but not with
+ ## the exact parallel-syncs progression as specified.
+ # sentinel['failover_timeout'] = 60000
+ ```
1. To prevent database migrations from running on upgrade, run:
- ```
- sudo touch /etc/gitlab/skip-auto-reconfigure
- ```
+ ```
+ sudo touch /etc/gitlab/skip-auto-reconfigure
+ ```
- Only the primary GitLab application server should handle migrations.
+ Only the primary GitLab application server should handle migrations.
1. [Reconfigure Omnibus GitLab][reconfigure] for the changes to take effect.
1. Go through the steps again for all the other Sentinel nodes.
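+
+   When repeating these steps on the remaining Sentinel nodes, typically only the
+   bind address changes, pointing to that node's own reachable IP. A minimal
+   sketch, assuming a second Sentinel host at `10.0.0.2`:
+
+   ```ruby
+   # Differs per node: bind Sentinel to this node's own IP.
+   sentinel['bind'] = '10.0.0.2'
+   ```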
@@ -593,20 +597,20 @@ which ideally should not have Redis or Sentinels on it for a HA setup.
1. SSH into the server where the GitLab application is installed.
1. Edit `/etc/gitlab/gitlab.rb` and add/change the following lines:
- ```
- ## Must be the same in every sentinel node
- redis['master_name'] = 'gitlab-redis'
-
- ## The same password for Redis authentication you set up for the master node.
- redis['master_password'] = 'redis-password-goes-here'
-
- ## A list of sentinels with `host` and `port`
- gitlab_rails['redis_sentinels'] = [
- {'host' => '10.0.0.1', 'port' => 26379},
- {'host' => '10.0.0.2', 'port' => 26379},
- {'host' => '10.0.0.3', 'port' => 26379}
- ]
- ```
+ ```ruby
+ ## Must be the same in every sentinel node
+ redis['master_name'] = 'gitlab-redis'
+
+ ## The same password for Redis authentication you set up for the master node.
+ redis['master_password'] = 'redis-password-goes-here'
+
+ ## A list of sentinels with `host` and `port`
+ gitlab_rails['redis_sentinels'] = [
+ {'host' => '10.0.0.1', 'port' => 26379},
+ {'host' => '10.0.0.2', 'port' => 26379},
+ {'host' => '10.0.0.3', 'port' => 26379}
+ ]
+ ```
1. [Reconfigure Omnibus GitLab][reconfigure] for the changes to take effect.
@@ -791,31 +795,34 @@ cache, queues, and shared_state. To make this work with Sentinel:
1. Set the appropriate variable in `/etc/gitlab/gitlab.rb` for each instance you are using:
- ```ruby
- gitlab_rails['redis_cache_instance'] = REDIS_CACHE_URL
- gitlab_rails['redis_queues_instance'] = REDIS_QUEUES_URL
- gitlab_rails['redis_shared_state_instance'] = REDIS_SHARED_STATE_URL
- ```
+ ```ruby
+ gitlab_rails['redis_cache_instance'] = REDIS_CACHE_URL
+ gitlab_rails['redis_queues_instance'] = REDIS_QUEUES_URL
+ gitlab_rails['redis_shared_state_instance'] = REDIS_SHARED_STATE_URL
+ ```
+
**Note**: Redis URLs should be in the format: `redis://:PASSWORD@SENTINEL_MASTER_NAME`
- 1. PASSWORD is the plaintext password for the Redis instance
- 1. SENTINEL_MASTER_NAME is the Sentinel master name (e.g. `gitlab-redis-cache`)
+ 1. PASSWORD is the plaintext password for the Redis instance
+ 1. SENTINEL_MASTER_NAME is the Sentinel master name (e.g. `gitlab-redis-cache`)
+
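+   For illustration, assuming hypothetical passwords and Sentinel master names
+   such as `gitlab-redis-cache`, `gitlab-redis-queues`, and
+   `gitlab-redis-shared-state`, the settings above might look like:
+
+   ```ruby
+   # Illustrative values only; substitute your own passwords and the
+   # Sentinel master names you have configured.
+   gitlab_rails['redis_cache_instance'] = 'redis://:cache-password-goes-here@gitlab-redis-cache'
+   gitlab_rails['redis_queues_instance'] = 'redis://:queues-password-goes-here@gitlab-redis-queues'
+   gitlab_rails['redis_shared_state_instance'] = 'redis://:shared-state-password-goes-here@gitlab-redis-shared-state'
+   ```
+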
1. Include an array of hashes with host/port combinations, such as the following:
- ```ruby
- gitlab_rails['redis_cache_sentinels'] = [
- { host: REDIS_CACHE_SENTINEL_HOST, port: PORT1 },
- { host: REDIS_CACHE_SENTINEL_HOST2, port: PORT2 }
- ]
- gitlab_rails['redis_queues_sentinels'] = [
- { host: REDIS_QUEUES_SENTINEL_HOST, port: PORT1 },
- { host: REDIS_QUEUES_SENTINEL_HOST2, port: PORT2 }
- ]
- gitlab_rails['redis_shared_state_sentinels'] = [
- { host: SHARED_STATE_SENTINEL_HOST, port: PORT1 },
- { host: SHARED_STATE_SENTINEL_HOST2, port: PORT2 }
- ]
- ```
+ ```ruby
+ gitlab_rails['redis_cache_sentinels'] = [
+ { host: REDIS_CACHE_SENTINEL_HOST, port: PORT1 },
+ { host: REDIS_CACHE_SENTINEL_HOST2, port: PORT2 }
+ ]
+ gitlab_rails['redis_queues_sentinels'] = [
+ { host: REDIS_QUEUES_SENTINEL_HOST, port: PORT1 },
+ { host: REDIS_QUEUES_SENTINEL_HOST2, port: PORT2 }
+ ]
+ gitlab_rails['redis_shared_state_sentinels'] = [
+ { host: SHARED_STATE_SENTINEL_HOST, port: PORT1 },
+ { host: SHARED_STATE_SENTINEL_HOST2, port: PORT2 }
+ ]
+ ```
+
1. Note that for each persistence class, GitLab will default to using the
configuration specified in `gitlab_rails['redis_sentinels']` unless
overridden by the settings above.
@@ -879,12 +886,12 @@ in order for the HA setup to work as expected.
Before proceeding with the troubleshooting below, check your firewall rules:
- Redis machines
- - Accept TCP connection in `6379`
- - Connect to the other Redis machines via TCP in `6379`
+ - Accept TCP connection in `6379`
+ - Connect to the other Redis machines via TCP in `6379`
- Sentinel machines
- - Accept TCP connection in `26379`
- - Connect to other Sentinel machines via TCP in `26379`
- - Connect to the Redis machines via TCP in `6379`
+ - Accept TCP connection in `26379`
+ - Connect to other Sentinel machines via TCP in `26379`
+ - Connect to the Redis machines via TCP in `6379`
### Troubleshooting Redis replication
@@ -952,38 +959,38 @@ To make sure your configuration is correct:
1. SSH into your GitLab application server
1. Enter the Rails console:
- ```
- # For Omnibus installations
- sudo gitlab-rails console
+ ```
+ # For Omnibus installations
+ sudo gitlab-rails console
- # For source installations
- sudo -u git rails console production
- ```
+ # For source installations
+ sudo -u git rails console production
+ ```
1. Run in the console:
- ```ruby
- redis = Redis.new(Gitlab::Redis::SharedState.params)
- redis.info
- ```
+ ```ruby
+ redis = Redis.new(Gitlab::Redis::SharedState.params)
+ redis.info
+ ```
- Keep this screen open and try to simulate a failover below.
+ Keep this screen open and try to simulate a failover below.
1. To simulate a failover on master Redis, SSH into the Redis server and run:
- ```bash
- # port must match your master redis port, and the sleep time must be a few seconds bigger than defined one
- redis-cli -h localhost -p 6379 DEBUG sleep 20
- ```
+ ```bash
+   # The port must match your master Redis port, and the sleep time must be a few seconds longer than the defined one
+ redis-cli -h localhost -p 6379 DEBUG sleep 20
+ ```
1. Then back in the Rails console from the first step, run:
- ```
- redis.info
- ```
+ ```
+ redis.info
+ ```
- You should see a different port after a few seconds delay
- (the failover/reconnect time).
+   You should see a different port after a few seconds of delay
+   (the failover/reconnect time).
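+
+   As an optional extra check (a minimal sketch; `Redis#info` and
+   `Redis#connection` come from the `redis` Ruby gem bundled with GitLab), you
+   can confirm from the same Rails console which node the client now talks to:
+
+   ```ruby
+   # Re-resolve the current master through Sentinel and inspect the connection.
+   redis = Redis.new(Gitlab::Redis::SharedState.params)
+   redis.connection[:host] # should point at the newly promoted master
+   redis.info['role']      # expected to report "master"
+   ```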
## Changelog
diff --git a/doc/administration/high_availability/redis_source.md b/doc/administration/high_availability/redis_source.md
index be6b547372a..63915e5d96c 100644
--- a/doc/administration/high_availability/redis_source.md
+++ b/doc/administration/high_availability/redis_source.md
@@ -1,3 +1,7 @@
+---
+type: reference
+---
+
# Configuring non-Omnibus Redis for GitLab HA
This is the documentation for configuring a Highly Available Redis setup when
@@ -49,22 +53,22 @@ Assuming that the Redis master instance IP is `10.0.0.1`:
1. [Install Redis](../../install/installation.md#7-redis).
1. Edit `/etc/redis/redis.conf`:
- ```conf
- ## Define a `bind` address pointing to a local IP that your other machines
- ## can reach you. If you really need to bind to an external accessible IP, make
- ## sure you add extra firewall rules to prevent unauthorized access:
- bind 10.0.0.1
+ ```conf
+ ## Define a `bind` address pointing to a local IP that your other machines
+   ## can reach. If you really need to bind to an externally accessible IP, make
+ ## sure you add extra firewall rules to prevent unauthorized access:
+ bind 10.0.0.1
- ## Define a `port` to force redis to listen on TCP so other machines can
- ## connect to it (default port is `6379`).
- port 6379
+ ## Define a `port` to force redis to listen on TCP so other machines can
+ ## connect to it (default port is `6379`).
+ port 6379
- ## Set up password authentication (use the same password in all nodes).
- ## The password should be defined equal for both `requirepass` and `masterauth`
- ## when setting up Redis to use with Sentinel.
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
- ```
+ ## Set up password authentication (use the same password in all nodes).
+   ## The password should be the same for both `requirepass` and `masterauth`
+ ## when setting up Redis to use with Sentinel.
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+ ```
1. Restart the Redis service for the changes to take effect.
@@ -75,25 +79,25 @@ Assuming that the Redis slave instance IP is `10.0.0.2`:
1. [Install Redis](../../install/installation.md#7-redis).
1. Edit `/etc/redis/redis.conf`:
- ```conf
- ## Define a `bind` address pointing to a local IP that your other machines
- ## can reach you. If you really need to bind to an external accessible IP, make
- ## sure you add extra firewall rules to prevent unauthorized access:
- bind 10.0.0.2
+ ```conf
+ ## Define a `bind` address pointing to a local IP that your other machines
+   ## can reach. If you really need to bind to an externally accessible IP, make
+ ## sure you add extra firewall rules to prevent unauthorized access:
+ bind 10.0.0.2
- ## Define a `port` to force redis to listen on TCP so other machines can
- ## connect to it (default port is `6379`).
- port 6379
+ ## Define a `port` to force redis to listen on TCP so other machines can
+ ## connect to it (default port is `6379`).
+ port 6379
- ## Set up password authentication (use the same password in all nodes).
- ## The password should be defined equal for both `requirepass` and `masterauth`
- ## when setting up Redis to use with Sentinel.
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
+ ## Set up password authentication (use the same password in all nodes).
+   ## The password should be the same for both `requirepass` and `masterauth`
+ ## when setting up Redis to use with Sentinel.
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
- ## Define `slaveof` pointing to the Redis master instance with IP and port.
- slaveof 10.0.0.1 6379
- ```
+ ## Define `slaveof` pointing to the Redis master instance with IP and port.
+ slaveof 10.0.0.1 6379
+ ```
1. Restart the Redis service for the changes to take effect.
1. Go through the steps again for all the other slave nodes.
@@ -110,56 +114,57 @@ master with IP `10.0.0.1` (some settings might overlap with the master):
1. [Install Redis Sentinel](https://redis.io/topics/sentinel)
1. Edit `/etc/redis/sentinel.conf`:
- ```conf
- ## Define a `bind` address pointing to a local IP that your other machines
- ## can reach you. If you really need to bind to an external accessible IP, make
- ## sure you add extra firewall rules to prevent unauthorized access:
- bind 10.0.0.1
-
- ## Define a `port` to force Sentinel to listen on TCP so other machines can
- ## connect to it (default port is `6379`).
- port 26379
-
- ## Set up password authentication (use the same password in all nodes).
- ## The password should be defined equal for both `requirepass` and `masterauth`
- ## when setting up Redis to use with Sentinel.
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
-
- ## Define with `sentinel auth-pass` the same shared password you have
- ## defined for both Redis master and slaves instances.
- sentinel auth-pass gitlab-redis redis-password-goes-here
-
- ## Define with `sentinel monitor` the IP and port of the Redis
- ## master node, and the quorum required to start a failover.
- sentinel monitor gitlab-redis 10.0.0.1 6379 2
-
- ## Define with `sentinel down-after-milliseconds` the time in `ms`
- ## that an unresponsive server will be considered down.
- sentinel down-after-milliseconds gitlab-redis 10000
-
- ## Define a value for `sentinel failover_timeout` in `ms`. This has multiple
- ## meanings:
- ##
- ## * The time needed to re-start a failover after a previous failover was
- ## already tried against the same master by a given Sentinel, is two
- ## times the failover timeout.
- ##
- ## * The time needed for a slave replicating to a wrong master according
- ## to a Sentinel current configuration, to be forced to replicate
- ## with the right master, is exactly the failover timeout (counting since
- ## the moment a Sentinel detected the misconfiguration).
- ##
- ## * The time needed to cancel a failover that is already in progress but
- ## did not produced any configuration change (SLAVEOF NO ONE yet not
- ## acknowledged by the promoted slave).
- ##
- ## * The maximum time a failover in progress waits for all the slaves to be
- ## reconfigured as slaves of the new master. However even after this time
- ## the slaves will be reconfigured by the Sentinels anyway, but not with
- ## the exact parallel-syncs progression as specified.
- sentinel failover_timeout 30000
- ```
+ ```conf
+ ## Define a `bind` address pointing to a local IP that your other machines
+   ## can reach. If you really need to bind to an externally accessible IP, make
+ ## sure you add extra firewall rules to prevent unauthorized access:
+ bind 10.0.0.1
+
+ ## Define a `port` to force Sentinel to listen on TCP so other machines can
+   ## connect to it (default port is `26379`).
+ port 26379
+
+ ## Set up password authentication (use the same password in all nodes).
+   ## The password should be the same for both `requirepass` and `masterauth`
+ ## when setting up Redis to use with Sentinel.
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+
+ ## Define with `sentinel auth-pass` the same shared password you have
+   ## defined for both Redis master and slave instances.
+ sentinel auth-pass gitlab-redis redis-password-goes-here
+
+ ## Define with `sentinel monitor` the IP and port of the Redis
+ ## master node, and the quorum required to start a failover.
+ sentinel monitor gitlab-redis 10.0.0.1 6379 2
+
+ ## Define with `sentinel down-after-milliseconds` the time in `ms`
+ ## that an unresponsive server will be considered down.
+ sentinel down-after-milliseconds gitlab-redis 10000
+
+ ## Define a value for `sentinel failover_timeout` in `ms`. This has multiple
+ ## meanings:
+ ##
+ ## * The time needed to re-start a failover after a previous failover was
+ ## already tried against the same master by a given Sentinel, is two
+ ## times the failover timeout.
+ ##
+ ## * The time needed for a slave replicating to a wrong master according
+   ##   to a Sentinel's current configuration, to be forced to replicate
+ ## with the right master, is exactly the failover timeout (counting since
+ ## the moment a Sentinel detected the misconfiguration).
+ ##
+ ## * The time needed to cancel a failover that is already in progress but
+   ##   did not produce any configuration change (SLAVEOF NO ONE yet not
+ ## acknowledged by the promoted slave).
+ ##
+ ## * The maximum time a failover in progress waits for all the slaves to be
+ ## reconfigured as slaves of the new master. However even after this time
+ ## the slaves will be reconfigured by the Sentinels anyway, but not with
+ ## the exact parallel-syncs progression as specified.
+ sentinel failover_timeout 30000
+ ```
+
1. Restart the Redis service for the changes to take effect.
1. Go through the steps again for all the other Sentinel nodes.
@@ -180,21 +185,21 @@ setup:
[resque.yml.example][resque], and uncomment the Sentinel lines, pointing to
the correct server credentials:
- ```yaml
- # resque.yaml
- production:
- url: redis://:redi-password-goes-here@gitlab-redis/
- sentinels:
- -
- host: 10.0.0.1
- port: 26379 # point to sentinel, not to redis port
- -
- host: 10.0.0.2
- port: 26379 # point to sentinel, not to redis port
- -
- host: 10.0.0.3
- port: 26379 # point to sentinel, not to redis port
- ```
+ ```yaml
+ # resque.yaml
+ production:
+     url: redis://:redis-password-goes-here@gitlab-redis/
+ sentinels:
+ -
+ host: 10.0.0.1
+ port: 26379 # point to sentinel, not to redis port
+ -
+ host: 10.0.0.2
+ port: 26379 # point to sentinel, not to redis port
+ -
+ host: 10.0.0.3
+ port: 26379 # point to sentinel, not to redis port
+ ```
1. [Restart GitLab][restart] for the changes to take effect.
@@ -232,23 +237,23 @@ or a failover promotes a different **Master** node.
1. In `/etc/redis/redis.conf`:
- ```conf
- bind 10.0.0.1
- port 6379
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
- ```
+ ```conf
+ bind 10.0.0.1
+ port 6379
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+ ```
1. In `/etc/redis/sentinel.conf`:
- ```conf
- bind 10.0.0.1
- port 26379
- sentinel auth-pass gitlab-redis redis-password-goes-here
- sentinel monitor gitlab-redis 10.0.0.1 6379 2
- sentinel down-after-milliseconds gitlab-redis 10000
- sentinel failover_timeout 30000
- ```
+ ```conf
+ bind 10.0.0.1
+ port 26379
+ sentinel auth-pass gitlab-redis redis-password-goes-here
+ sentinel monitor gitlab-redis 10.0.0.1 6379 2
+ sentinel down-after-milliseconds gitlab-redis 10000
+ sentinel failover_timeout 30000
+ ```
1. Restart the Redis service for the changes to take effect.
@@ -256,24 +261,24 @@ or a failover promotes a different **Master** node.
1. In `/etc/redis/redis.conf`:
- ```conf
- bind 10.0.0.2
- port 6379
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
- slaveof 10.0.0.1 6379
- ```
+ ```conf
+ bind 10.0.0.2
+ port 6379
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+ slaveof 10.0.0.1 6379
+ ```
1. In `/etc/redis/sentinel.conf`:
- ```conf
- bind 10.0.0.2
- port 26379
- sentinel auth-pass gitlab-redis redis-password-goes-here
- sentinel monitor gitlab-redis 10.0.0.1 6379 2
- sentinel down-after-milliseconds gitlab-redis 10000
- sentinel failover_timeout 30000
- ```
+ ```conf
+ bind 10.0.0.2
+ port 26379
+ sentinel auth-pass gitlab-redis redis-password-goes-here
+ sentinel monitor gitlab-redis 10.0.0.1 6379 2
+ sentinel down-after-milliseconds gitlab-redis 10000
+ sentinel failover_timeout 30000
+ ```
1. Restart the Redis service for the changes to take effect.
@@ -281,24 +286,24 @@ or a failover promotes a different **Master** node.
1. In `/etc/redis/redis.conf`:
- ```conf
- bind 10.0.0.3
- port 6379
- requirepass redis-password-goes-here
- masterauth redis-password-goes-here
- slaveof 10.0.0.1 6379
- ```
+ ```conf
+ bind 10.0.0.3
+ port 6379
+ requirepass redis-password-goes-here
+ masterauth redis-password-goes-here
+ slaveof 10.0.0.1 6379
+ ```
1. In `/etc/redis/sentinel.conf`:
- ```conf
- bind 10.0.0.3
- port 26379
- sentinel auth-pass gitlab-redis redis-password-goes-here
- sentinel monitor gitlab-redis 10.0.0.1 6379 2
- sentinel down-after-milliseconds gitlab-redis 10000
- sentinel failover_timeout 30000
- ```
+ ```conf
+ bind 10.0.0.3
+ port 26379
+ sentinel auth-pass gitlab-redis redis-password-goes-here
+ sentinel monitor gitlab-redis 10.0.0.1 6379 2
+ sentinel down-after-milliseconds gitlab-redis 10000
+ sentinel failover_timeout 30000
+ ```
1. Restart the Redis service for the changes to take effect.
@@ -306,20 +311,20 @@ or a failover promotes a different **Master** node.
1. Edit `/home/git/gitlab/config/resque.yml`:
- ```yaml
- production:
- url: redis://:redi-password-goes-here@gitlab-redis/
- sentinels:
- -
- host: 10.0.0.1
- port: 26379 # point to sentinel, not to redis port
- -
- host: 10.0.0.2
- port: 26379 # point to sentinel, not to redis port
- -
- host: 10.0.0.3
- port: 26379 # point to sentinel, not to redis port
- ```
+ ```yaml
+ production:
+     url: redis://:redis-password-goes-here@gitlab-redis/
+ sentinels:
+ -
+ host: 10.0.0.1
+ port: 26379 # point to sentinel, not to redis port
+ -
+ host: 10.0.0.2
+ port: 26379 # point to sentinel, not to redis port
+ -
+ host: 10.0.0.3
+ port: 26379 # point to sentinel, not to redis port
+ ```
1. [Restart GitLab][restart] for the changes to take effect.
diff --git a/doc/administration/housekeeping.md b/doc/administration/housekeeping.md
index 1b01419e062..abe39d188aa 100644
--- a/doc/administration/housekeeping.md
+++ b/doc/administration/housekeeping.md
@@ -2,7 +2,6 @@
> [Introduced][ce-2371] in GitLab 8.4.
----
## Automatic housekeeping
GitLab automatically runs `git gc` and `git repack` on repositories
@@ -32,8 +31,6 @@ the `pushes_since_gc` value is 200 a `git gc` will be run.
You can find this option under your project's **Settings > General > Advanced**.
----
-
![Housekeeping settings](img/housekeeping_settings.png)
[ce-2371]: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/2371 "Housekeeping merge request"
diff --git a/doc/administration/img/custom_hooks_error_msg.png b/doc/administration/img/custom_hooks_error_msg.png
index 4f25c471908..e7d5c157d08 100644
--- a/doc/administration/img/custom_hooks_error_msg.png
+++ b/doc/administration/img/custom_hooks_error_msg.png
Binary files differ
diff --git a/doc/administration/incoming_email.md b/doc/administration/incoming_email.md
index 73a39a6dd35..29915cb3a99 100644
--- a/doc/administration/incoming_email.md
+++ b/doc/administration/incoming_email.md
@@ -151,7 +151,7 @@ Reply by email should now be working.
#### Postfix
-Example configuration for Postfix mail server. Assumes mailbox incoming@gitlab.example.com.
+Example configuration for Postfix mail server. Assumes mailbox `incoming@gitlab.example.com`.
Example for Omnibus installs:
@@ -218,7 +218,7 @@ incoming_email:
#### Gmail
-Example configuration for Gmail/G Suite. Assumes mailbox gitlab-incoming@gmail.com.
+Example configuration for Gmail/G Suite. Assumes mailbox `gitlab-incoming@gmail.com`.
Example for Omnibus installs:
diff --git a/doc/administration/index.md b/doc/administration/index.md
index 00c8863f200..650cb10a64a 100644
--- a/doc/administration/index.md
+++ b/doc/administration/index.md
@@ -2,7 +2,7 @@
description: 'Learn how to install, configure, update, and maintain your GitLab instance.'
---
-# Administrator documentation **(CORE ONLY)**
+# Administrator Docs **(CORE ONLY)**
Learn how to administer your self-managed GitLab instance.
@@ -64,6 +64,7 @@ Learn how to install, configure, update, and maintain your GitLab instance.
- [External Classification Policy Authorization](../user/admin_area/settings/external_authorization.md) **(PREMIUM ONLY)**
- [Upload a license](../user/admin_area/license.md): Upload a license to unlock features that are in paid tiers of GitLab. **(STARTER ONLY)**
- [Admin Area](../user/admin_area/index.md): for self-managed instance-wide configuration and maintenance.
+- [S/MIME Signing](smime_signing_email.md): how to sign all outgoing notification emails with S/MIME
#### Customizing GitLab's appearance
@@ -185,3 +186,13 @@ Learn how to install, configure, update, and maintain your GitLab instance.
- [Debugging tips](troubleshooting/debug.md): Tips to debug problems when things go wrong
- [Log system](logs.md): Where to look for logs.
- [Sidekiq Troubleshooting](troubleshooting/sidekiq.md): Debug when Sidekiq appears hung and is not processing jobs.
+- Useful [diagnostics tools](troubleshooting/diagnostics_tools.md) that are sometimes used by the GitLab
+ Support team.
+- [Troubleshooting ElasticSearch](troubleshooting/elasticsearch.md): Tips to troubleshoot ElasticSearch.
+- [Kubernetes troubleshooting](troubleshooting/kubernetes_cheat_sheet.md): Commands and tips useful
+ for troubleshooting Kubernetes-related issues.
+- Useful links from the Support Team:
+ - [GitLab Developer Docs](https://docs.gitlab.com/ee/development/README.html).
+ - [Repairing and recovering broken Git repositories](https://git.seveas.net/repairing-and-recovering-broken-git-repositories.html).
+ - [Testing with OpenSSL](https://www.feistyduck.com/library/openssl-cookbook/online/ch-testing-with-openssl.html).
+ - [Strace zine](https://wizardzines.com/zines/strace/).
diff --git a/doc/administration/instance_review.md b/doc/administration/instance_review.md
index ab6a4646a71..825435deff9 100644
--- a/doc/administration/instance_review.md
+++ b/doc/administration/instance_review.md
@@ -14,4 +14,3 @@ Additionally you will be contacted by our team for further review which should h
[6995]: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/6995
[ee]: https://about.gitlab.com/pricing/
-
diff --git a/doc/administration/integration/plantuml.md b/doc/administration/integration/plantuml.md
index 8de7b0bc57e..16a193550a1 100644
--- a/doc/administration/integration/plantuml.md
+++ b/doc/administration/integration/plantuml.md
@@ -1,6 +1,7 @@
# PlantUML & GitLab
-> [Introduced][ce-8537] in GitLab 8.16.
+> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/8537) in
+> GitLab 8.16.
When [PlantUML](http://plantuml.com) integration is enabled and configured in
GitLab we are able to create simple diagrams in AsciiDoc and Markdown documents
@@ -15,7 +16,9 @@ server that will generate the diagrams.
With Docker, you can just run a container like this:
-`docker run -d --name plantuml -p 8080:8080 plantuml/plantuml-server:tomcat`
+```sh
+docker run -d --name plantuml -p 8080:8080 plantuml/plantuml-server:tomcat
+```
The **PlantUML URL** will be the hostname of the server running the container.
@@ -26,7 +29,7 @@ own PlantUML server is easy in Debian/Ubuntu distributions using Tomcat.
First you need to create a `plantuml.war` file from the source code:
-```
+```sh
sudo apt-get install graphviz openjdk-8-jdk git-core maven
git clone https://github.com/plantuml/plantuml-server.git
cd plantuml-server
@@ -36,7 +39,7 @@ mvn package
The above sequence of commands will generate a WAR file that can be deployed
using Tomcat:
-```
+```sh
sudo apt-get install tomcat7
sudo cp target/plantuml.war /var/lib/tomcat7/webapps/plantuml.war
sudo chown tomcat7:tomcat7 /var/lib/tomcat7/webapps/plantuml.war
@@ -46,7 +49,7 @@ sudo service tomcat7 restart
Once the Tomcat service restarts the PlantUML service will be ready and
listening for requests on port 8080:
-```
+```text
http://localhost:8080/plantuml
```
@@ -57,9 +60,10 @@ you can change these defaults by editing the `/etc/tomcat7/server.xml` file.
You need to enable PlantUML integration from Settings under Admin Area. To do
that, login with an Admin account and do following:
- - in GitLab go to **Admin Area**->**Settings**->**Integrations**->**PlantUML**
- - check **Enable PlantUML** checkbox
- - set the PlantUML instance as **PlantUML URL**
+- In GitLab, go to **Admin Area > Settings > Integrations**.
+- Expand the **PlantUML** section.
+- Check the **Enable PlantUML** checkbox.
+- Set the PlantUML instance as **PlantUML URL**.
## Creating Diagrams
@@ -68,33 +72,34 @@ our AsciiDoc snippets, wikis and repos using delimited blocks:
- **Markdown**
- <pre>
- ```plantuml
- Bob -> Alice : hello
- Alice -> Bob : Go Away
- ```</pre>
+ ~~~markdown
+ ```plantuml
+ Bob -> Alice : hello
+ Alice -> Bob : Go Away
+ ```
+ ~~~
- **AsciiDoc**
- ```
- [plantuml, format="png", id="myDiagram", width="200px"]
- ----
- Bob->Alice : hello
- Alice -> Bob : Go Away
- ----
- ```
+ ```text
+ [plantuml, format="png", id="myDiagram", width="200px"]
+ ----
+ Bob->Alice : hello
+ Alice -> Bob : Go Away
+ ----
+ ```
- **reStructuredText**
- ```
- .. plantuml::
- :caption: Caption with **bold** and *italic*
+ ```text
+ .. plantuml::
+ :caption: Caption with **bold** and *italic*
- Bob -> Alice: hello
- Alice -> Bob: Go Away
- ```
+ Bob -> Alice: hello
+ Alice -> Bob: Go Away
+ ```
- You can also use the `uml::` directive for compatibility with [sphinxcontrib-plantuml](https://pypi.org/project/sphinxcontrib-plantuml/), but please note that we currently only support the `caption` option.
+ You can also use the `uml::` directive for compatibility with [sphinxcontrib-plantuml](https://pypi.org/project/sphinxcontrib-plantuml/), but please note that we currently only support the `caption` option.
The above blocks will be converted to an HTML img tag with source pointing to the
PlantUML instance. If the PlantUML server is correctly configured, this should
@@ -111,12 +116,10 @@ diagram delimiters `@startuml`/`@enduml` as these are replaced by the AsciiDoc `
Some parameters can be added to the AsciiDoc block definition:
- - *format*: Can be either `png` or `svg`. Note that `svg` is not supported by
- all browsers so use with care. The default is `png`.
- - *id*: A CSS id added to the diagram HTML tag.
- - *width*: Width attribute added to the img tag.
- - *height*: Height attribute added to the img tag.
+- *format*: Can be either `png` or `svg`. Note that `svg` is not supported by
+ all browsers so use with care. The default is `png`.
+- *id*: A CSS id added to the diagram HTML tag.
+- *width*: Width attribute added to the img tag.
+- *height*: Height attribute added to the img tag.
Markdown does not support any parameters and will always use PNG format.
-
-[ce-8537]: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/8537
diff --git a/doc/administration/integration/terminal.md b/doc/administration/integration/terminal.md
index c34858cd0db..24c9cc0bea9 100644
--- a/doc/administration/integration/terminal.md
+++ b/doc/administration/integration/terminal.md
@@ -41,9 +41,9 @@ detail below.
- Every session URL that is created has an authorization header that needs to be sent, to establish a `wss` connection.
- The session URL is not exposed to the users in any way. GitLab holds all the state internally and proxies accordingly.
-## Enabling and disabling terminal support
+## Enabling and disabling terminal support
-NOTE: **Note:** AWS Elastic Load Balancers (ELBs) do not support web sockets.
+NOTE: **Note:** AWS Elastic Load Balancers (ELBs) do not support web sockets.
AWS Application Load Balancers (ALBs) must be used if you want web terminals
to work. See [AWS Elastic Load Balancing Product Comparison](https://aws.amazon.com/elasticloadbalancing/features/#compare)
for more information.
diff --git a/doc/administration/issue_closing_pattern.md b/doc/administration/issue_closing_pattern.md
index e1bbabb2878..f9be06c6daf 100644
--- a/doc/administration/issue_closing_pattern.md
+++ b/doc/administration/issue_closing_pattern.md
@@ -1,8 +1,8 @@
# Issue closing pattern **(CORE ONLY)**
>**Note:**
-This is the administration documentation.
-There is a separate [user documentation] on issue closing pattern.
+This is the administration documentation. There is separate [user documentation](../user/project/issues/managing_issues.md#closing-issues-automatically)
+on the issue closing pattern.
When a commit or merge request resolves one or more issues, it is possible to
automatically have these issues closed when the commit or merge request lands
@@ -13,8 +13,8 @@ in the project's default branch.
In order to change the pattern you need to have access to the server that GitLab
is installed on.
-The default pattern can be located in [gitlab.yml.example] under the
-"Automatic issue closing" section.
+The default pattern can be located in [`gitlab.yml.example`](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/config/gitlab.yml.example)
+under the "Automatic issue closing" section.
> **Tip:**
You are advised to use <http://rubular.com> to test the issue closing pattern.
@@ -31,7 +31,7 @@ Because Rubular doesn't understand `%{issue_ref}`, you can replace this by
gitlab_rails['gitlab_issue_closing_pattern'] = "\b((?:[Cc]los(?:e[sd]|ing)|\b[Ff]ix(?:e[sd]|ing)?) +(?:(?:issues? +)?%{issue_ref}(?:(?:, *| +and +)?))+)"
```
-1. [Reconfigure] GitLab for the changes to take effect.
+1. [Reconfigure](restart_gitlab.md#omnibus-gitlab-reconfigure) GitLab for the changes to take effect.
**For installations from source**
@@ -42,9 +42,4 @@ Because Rubular doesn't understand `%{issue_ref}`, you can replace this by
issue_closing_pattern: "\b((?:[Cc]los(?:e[sd]|ing)|\b[Ff]ix(?:e[sd]|ing)?) +(?:(?:issues? +)?%{issue_ref}(?:(?:, *| +and +)?))+)"
```
-1. [Restart] GitLab for the changes to take effect.
-
-[gitlab.yml.example]: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/config/gitlab.yml.example
-[reconfigure]: restart_gitlab.md#omnibus-gitlab-reconfigure
-[restart]: restart_gitlab.md#installations-from-source
-[user documentation]: ../user/project/issues/managing_issues.md#closing-issues-automatically
+1. [Restart](restart_gitlab.md#installations-from-source) GitLab for the changes to take effect.
diff --git a/doc/administration/job_artifacts.md b/doc/administration/job_artifacts.md
index 9df7b2526e2..350cd5b7992 100644
--- a/doc/administration/job_artifacts.md
+++ b/doc/administration/job_artifacts.md
@@ -13,8 +13,6 @@ if you want to know how to disable it.
To disable artifacts site-wide, follow the steps below.
----
-
**In Omnibus installations:**
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
@@ -25,8 +23,6 @@ To disable artifacts site-wide, follow the steps below.
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**In installations from source:**
1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following lines:
@@ -49,8 +45,6 @@ this is done when the job succeeds, but can also be done on failure, or always,
To change the location where the artifacts are stored locally, follow the steps
below.
----
-
**In Omnibus installations:**
_The artifacts are stored by default in
@@ -65,8 +59,6 @@ _The artifacts are stored by default in
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**In installations from source:**
_The artifacts are stored by default in
@@ -102,7 +94,7 @@ If you're enabling S3 in [GitLab HA](high_availability/README.md), you will need
### Object Storage Settings
-For source installations the following settings are nested under `artifacts:` and then `object_store:`. On omnibus installs they are prefixed by `artifacts_object_store_`.
+For source installations the following settings are nested under `artifacts:` and then `object_store:`. On Omnibus GitLab installs they are prefixed by `artifacts_object_store_`.
| Setting | Description | Default |
|---------|-------------|---------|
@@ -123,6 +115,7 @@ The connection settings match those provided by [Fog](https://github.com/fog), a
| `aws_access_key_id` | AWS credentials, or compatible | |
| `aws_secret_access_key` | AWS credentials, or compatible | |
| `aws_signature_version` | AWS signature version to use. 2 or 4 are valid options. Digital Ocean Spaces and other providers may need 2. | 4 |
+| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
| `region` | AWS region | us-east-1 |
| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
| `endpoint` | Can be used when configuring an S3 compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
@@ -167,8 +160,6 @@ _The artifacts are stored by default in
gitlab-rake gitlab:artifacts:migrate
```
----
-
**In installations from source:**
_The artifacts are stored by default in
@@ -207,8 +198,6 @@ right after that date passes. Artifacts are cleaned up by the
To change the default schedule on which the artifacts are expired, follow the
steps below.
----
-
**In Omnibus installations:**
1. Edit `/etc/gitlab/gitlab.rb` and comment out or add the following line
@@ -219,8 +208,6 @@ steps below.
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**In installations from source:**
1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following
@@ -240,8 +227,6 @@ steps below.
To disable [the dependencies validation](../ci/yaml/README.md#when-a-dependent-job-will-fail),
you can flip the feature flag from a Rails console.
----
-
**In Omnibus installations:**
1. Enter the Rails console:
@@ -256,8 +241,6 @@ you can flip the feature flag from a Rails console.
Feature.enable('ci_disable_validates_dependencies')
```
----
-
**In installations from source:**
1. Enter the Rails console:
diff --git a/doc/administration/job_traces.md b/doc/administration/job_traces.md
index 957916893d7..6a06eb240de 100644
--- a/doc/administration/job_traces.md
+++ b/doc/administration/job_traces.md
@@ -95,9 +95,9 @@ If job traces have already been archived into local storage, and you want to mig
1. Ensure [Object storage integration for Job Artifacts](job_artifacts.md#object-storage-settings) is enabled
1. Execute the following command
- ```bash
- gitlab-rake gitlab:traces:migrate
- ```
+ ```bash
+ gitlab-rake gitlab:traces:migrate
+ ```
## How to remove job traces
diff --git a/doc/administration/logs.md b/doc/administration/logs.md
index 5a2f389d298..9030c941cad 100644
--- a/doc/administration/logs.md
+++ b/doc/administration/logs.md
@@ -88,7 +88,7 @@ Introduced in GitLab 10.0, this file lives in
It helps you see requests made directly to the API. For example:
```json
-{"time":"2018-10-29T12:49:42.123Z","severity":"INFO","duration":709.08,"db":14.59,"view":694.49,"status":200,"method":"GET","path":"/api/v4/projects","params":[{"key":"action","value":"git-upload-pack"},{"key":"changes","value":"_any"},{"key":"key_id","value":"secret"},{"key":"secret_token","value":"[FILTERED]"}],"host":"localhost","ip":"::1","ua":"Ruby","route":"/api/:version/projects","user_id":1,"username":"root","queue_duration":100.31,"gitaly_calls":30,"gitaly_duration":5.36}
+{"time":"2018-10-29T12:49:42.123Z","severity":"INFO","duration":709.08,"db":14.59,"view":694.49,"status":200,"method":"GET","path":"/api/v4/projects","params":[{"key":"action","value":"git-upload-pack"},{"key":"changes","value":"_any"},{"key":"key_id","value":"secret"},{"key":"secret_token","value":"[FILTERED]"}],"host":"localhost","remote_ip":"::1","ua":"Ruby","route":"/api/:version/projects","user_id":1,"username":"root","queue_duration":100.31,"gitaly_calls":30,"gitaly_duration":5.36}
```
This entry above shows an access to an internal endpoint to check whether an
@@ -151,22 +151,29 @@ etc. For example:
{"severity":"ERROR","time":"2018-11-23T15:42:11.647Z","exception":"Kubeclient::HttpError","error_code":null,"service":"Clusters::Applications::InstallService","app_id":2,"project_ids":[19],"group_ids":[],"message":"SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate)"}
```
-## `githost.log`
+## `git_json.log`
-This file lives in `/var/log/gitlab/gitlab-rails/githost.log` for
-Omnibus GitLab packages or in `/home/git/gitlab/log/githost.log` for
+This file lives in `/var/log/gitlab/gitlab-rails/git_json.log` for
+Omnibus GitLab packages or in `/home/git/gitlab/log/git_json.log` for
installations from source.
+NOTE: **Note:**
+After GitLab 12.2, this file was renamed from `githost.log` to
+`git_json.log` and is stored in JSON format.
+
GitLab has to interact with Git repositories but in some rare cases
something can go wrong and in this case you will know what exactly
happened. This log file contains all failed requests from GitLab to Git
repositories. In the majority of cases this file will be useful for developers
only. For example:
-```
-December 03, 2014 13:20 -> ERROR -> Command failed [1]: /usr/bin/git --git-dir=/Users/vsizov/gitlab-development-kit/gitlab/tmp/tests/gitlab-satellites/group184/gitlabhq/.git --work-tree=/Users/vsizov/gitlab-development-kit/gitlab/tmp/tests/gitlab-satellites/group184/gitlabhq merge --no-ff -mMerge branch 'feature_conflict' into 'feature' source/feature_conflict
-
-error: failed to push some refs to '/Users/vsizov/gitlab-development-kit/repositories/gitlabhq/gitlab_git.git'
+```json
+{
+ "severity":"ERROR",
+ "time":"2019-07-19T22:16:12.528Z",
+ "correlation_id":"FeGxww5Hj64",
+ "message":"Command failed [1]: /usr/bin/git --git-dir=/Users/vsizov/gitlab-development-kit/gitlab/tmp/tests/gitlab-satellites/group184/gitlabhq/.git --work-tree=/Users/vsizov/gitlab-development-kit/gitlab/tmp/tests/gitlab-satellites/group184/gitlabhq merge --no-ff -mMerge branch 'feature_conflict' into 'feature' source/feature_conflict\n\nerror: failed to push some refs to '/Users/vsizov/gitlab-development-kit/repositories/gitlabhq/gitlab_git.git'"
+}
```
## `audit_json.log`
@@ -236,7 +243,7 @@ I, [2015-02-13T06:17:00.679433 #9291] INFO -- : Moving existing hooks directory
User clone/fetch activity using ssh transport appears in this log as `executing git command <gitaly-upload-pack...`.
-## `unicorn\_stderr.log`
+## `unicorn_stderr.log`
This file lives in `/var/log/gitlab/unicorn/unicorn_stderr.log` for
Omnibus GitLab packages or in `/home/git/gitlab/log/unicorn_stderr.log` for
@@ -277,16 +284,16 @@ Introduced in GitLab 11.3. This file lives in `/var/log/gitlab/gitlab-rails/impo
Omnibus GitLab packages or in `/home/git/gitlab/log/importer.log` for
installations from source.
-Currently it logs the progress of project imports from the Bitbucket Server
-importer. Future importers may use this file.
-
-## `auth.log`
+## `auth.log`
Introduced in GitLab 12.0. This file lives in `/var/log/gitlab/gitlab-rails/auth.log` for
Omnibus GitLab packages or in `/home/git/gitlab/log/auth.log` for
installations from source.
-It logs information whenever [Rack Attack] registers an abusive request.
+This log records:
+
+- Information whenever [Rack Attack] registers an abusive request.
+- Requests over the [Rate Limit] on raw endpoints.
NOTE: **Note:**
From [%12.1](https://gitlab.com/gitlab-org/gitlab-ce/issues/62756), user id and username are available on this log.
@@ -305,6 +312,12 @@ GraphQL queries are recorded in that file. For example:
{"query_string":"query IntrospectionQuery{__schema {queryType { name },mutationType { name }}}...(etc)","variables":{"a":1,"b":2},"complexity":181,"depth":1,"duration":7}
```
+## `migrations.log`
+
+Introduced in GitLab 12.3. This file lives in `/var/log/gitlab/gitlab-rails/migrations.log` for
+Omnibus GitLab packages or in `/home/git/gitlab/log/migrations.log` for
+installations from source.
+
## Reconfigure Logs
Reconfigure log files live in `/var/log/gitlab/reconfigure` for Omnibus GitLab
@@ -324,3 +337,4 @@ installations from source.
[repocheck]: repository_checks.md
[Rack Attack]: ../security/rack_attack.md
+[Rate Limit]: ../user/admin_area/settings/rate_limits_on_raw_endpoints.md
diff --git a/doc/administration/merge_request_diffs.md b/doc/administration/merge_request_diffs.md
index 99cd9051778..d52d865cec5 100644
--- a/doc/administration/merge_request_diffs.md
+++ b/doc/administration/merge_request_diffs.md
@@ -90,6 +90,7 @@ The connection settings match those provided by [Fog](https://github.com/fog), a
| `aws_access_key_id` | AWS credentials, or compatible | |
| `aws_secret_access_key` | AWS credentials, or compatible | |
| `aws_signature_version` | AWS signature version to use. 2 or 4 are valid options. Digital Ocean Spaces and other providers may need 2. | 4 |
+| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
| `region` | AWS region | us-east-1 |
| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
| `endpoint` | Can be used when configuring an S3 compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
diff --git a/doc/administration/monitoring/gitlab_instance_administration_project/index.md b/doc/administration/monitoring/gitlab_instance_administration_project/index.md
new file mode 100644
index 00000000000..d445b68721d
--- /dev/null
+++ b/doc/administration/monitoring/gitlab_instance_administration_project/index.md
@@ -0,0 +1,36 @@
+# GitLab instance administration project
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/issues/56883) in GitLab 12.2.
+
+GitLab has been adding the ability for administrators to see insights into the health of
+their GitLab instance. To surface this experience in a native way, similar to how
+you would interact with an application deployed via GitLab, a base project called
+"GitLab Instance Administration" with
+[internal visibility](../../../public_access/public_access.md#internal-projects) will be
+added under a group called "GitLab Instance Administrators", created specifically for
+visualizing and configuring the monitoring of your GitLab instance.
+
+All administrators at the time of creation of the project and group will be added
+as maintainers of the group and project. As an admin, you can add new
+members to the group to give them maintainer access to the project.
+
+This project will be used for self-monitoring your GitLab instance.
+
+## Connection to Prometheus
+
+The project will be automatically configured to connect to the
+[internal Prometheus](../prometheus/index.md) instance if the Prometheus
+instance is present (should be the case if GitLab was installed via Omnibus
+and you haven't disabled it).
+
+If that's not the case or if you have an external Prometheus instance or an HA setup,
+you should
+[configure it manually](../../../user/project/integrations/prometheus.md#manual-configuration-of-prometheus).
+
+## Taking action on Prometheus alerts **(ULTIMATE)**
+
+You can [add a webhook](../../../user/project/integrations/prometheus.md#external-prometheus-instances)
+to the Prometheus config in order for GitLab to receive notifications of any alerts.
+
+Once the webhook is set up, you can
+[take action on incoming alerts](../../../user/project/integrations/prometheus.md#taking-action-on-incidents-ultimate).
diff --git a/doc/administration/monitoring/index.md b/doc/administration/monitoring/index.md
index fa0459b24ff..2b3daec42bd 100644
--- a/doc/administration/monitoring/index.md
+++ b/doc/administration/monitoring/index.md
@@ -2,6 +2,9 @@
Explore our features to monitor your GitLab instance:
+- [GitLab self-monitoring](gitlab_instance_administration_project/index.md): The
+ GitLab instance administration project helps to monitor the GitLab instance and
+ take action on alerts.
- [Performance monitoring](performance/index.md): GitLab Performance Monitoring makes it possible to measure a wide variety of statistics of your instance.
- [Prometheus](prometheus/index.md): Prometheus is a powerful time-series monitoring service, providing a flexible platform for monitoring GitLab and other software products.
- [GitHub imports](github_imports.md): Monitor the health and progress of GitLab's GitHub importer with various Prometheus metrics.
diff --git a/doc/administration/monitoring/ip_whitelist.md b/doc/administration/monitoring/ip_whitelist.md
index ad2773de132..6bb2fe81b2c 100644
--- a/doc/administration/monitoring/ip_whitelist.md
+++ b/doc/administration/monitoring/ip_whitelist.md
@@ -12,9 +12,9 @@ hosts or use IP ranges:
1. Open `/etc/gitlab/gitlab.rb` and add or uncomment the following:
- ```ruby
- gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '192.168.0.1']
- ```
+ ```ruby
+ gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '192.168.0.1']
+ ```
1. Save the file and [reconfigure] GitLab for the changes to take effect.
@@ -24,13 +24,13 @@ hosts or use IP ranges:
1. Edit `config/gitlab.yml`:
- ```yaml
- monitoring:
- # by default only local IPs are allowed to access monitoring resources
- ip_whitelist:
- - 127.0.0.0/8
- - 192.168.0.1
- ```
+ ```yaml
+ monitoring:
+ # by default only local IPs are allowed to access monitoring resources
+ ip_whitelist:
+ - 127.0.0.0/8
+ - 192.168.0.1
+ ```
1. Save the file and [restart] GitLab for the changes to take effect.
diff --git a/doc/administration/monitoring/performance/gitlab_configuration.md b/doc/administration/monitoring/performance/gitlab_configuration.md
index 771584268d9..9a25cc04ee7 100644
--- a/doc/administration/monitoring/performance/gitlab_configuration.md
+++ b/doc/administration/monitoring/performance/gitlab_configuration.md
@@ -8,12 +8,8 @@ The minimum required settings you need to set are the InfluxDB host and port.
Make sure _Enable InfluxDB Metrics_ is checked and hit **Save** to save the
changes.
----
-
![GitLab Performance Monitoring Admin Settings](img/metrics_gitlab_configuration_settings.png)
----
-
Finally, a restart of all GitLab processes is required for the changes to take
effect:
@@ -30,8 +26,6 @@ sudo service gitlab restart
When any migrations are pending, the metrics are disabled until the migrations
have been performed.
----
-
Read more on:
- [Introduction to GitLab Performance Monitoring](introduction.md)
diff --git a/doc/administration/monitoring/performance/grafana_configuration.md b/doc/administration/monitoring/performance/grafana_configuration.md
index 4dd0bbbe937..95be0d5fd88 100644
--- a/doc/administration/monitoring/performance/grafana_configuration.md
+++ b/doc/administration/monitoring/performance/grafana_configuration.md
@@ -1,6 +1,6 @@
# Grafana Configuration
-[Grafana](https://grafana.org/) is a tool that allows you to visualize time
+[Grafana](https://grafana.com/) is a tool that allows you to visualize time
series metrics through graphs and dashboards. It supports several backend
data stores, including InfluxDB. GitLab writes performance data to InfluxDB
and Grafana will allow you to query to display useful graphs.
@@ -118,6 +118,36 @@ If you have set up Grafana, you can enable a link to access it easily from the s
1. Click **Save changes**.
1. The new link will be available in the admin area under **Monitoring > Metrics Dashboard**.
+## Security Update
+
+Users running GitLab version 12.0 or later should immediately upgrade to one of the following security releases due to a known vulnerability with the embedded Grafana dashboard:
+
+- 12.0.6
+- 12.1.6
+
+After upgrading, the Grafana dashboard will be disabled and the location of your existing Grafana data will be changed from `/var/opt/gitlab/grafana/data/` to `/var/opt/gitlab/grafana/data.bak.#{Date.today}/`.
+
+To prevent the data from being relocated, you can run the following command prior to upgrading:
+
+```sh
+echo "0" > /var/opt/gitlab/grafana/CVE_reset_status
+```
+
+To reinstate your old data, move it back into its original location:
+
+```sh
+sudo mv /var/opt/gitlab/grafana/data.bak.xxxx/ /var/opt/gitlab/grafana/data/
+```
+
+However, you should **not** reinstate your old data _except_ under one of the following conditions:
+
+1. If you are certain that you changed your default admin password when you enabled Grafana
+1. If you run GitLab in a private network, accessed only by trusted users, and your Grafana login page has not been exposed to the internet
+
+If you require access to your old Grafana data but do not meet one of these criteria, you may consider reinstating it temporarily, [exporting the dashboards](https://grafana.com/docs/reference/export_import/#exporting-a-dashboard) you need, then refreshing the data and [re-importing your dashboards](https://grafana.com/docs/reference/export_import/#importing-a-dashboard). Note that this poses a temporary vulnerability while your old Grafana data is in use, and the decision to do so should be weighed carefully against your need to access existing data and dashboards.
+
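+For example, a dashboard could be exported through Grafana's HTTP API while the old data is temporarily reinstated. This is only a sketch; the host and port, the API token, and the dashboard UID below are placeholders you would replace with your own values:
+
+```sh
+# Save a dashboard definition as JSON so it can be re-imported after the data is refreshed
+curl --header "Authorization: Bearer <grafana-api-token>" \
+     "http://localhost:3000/api/dashboards/uid/<dashboard-uid>" > dashboard.json
+```
+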
+For more information and further mitigation details, please refer to our [blog post on the security release](https://about.gitlab.com/2019/08/12/critical-security-release-gitlab-12-dot-1-dot-6-released/).
+
---
Read more on:
diff --git a/doc/administration/monitoring/performance/img/performance_bar.png b/doc/administration/monitoring/performance/img/performance_bar.png
index 2bf686f9017..d1187fd879a 100644
--- a/doc/administration/monitoring/performance/img/performance_bar.png
+++ b/doc/administration/monitoring/performance/img/performance_bar.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_gitaly_calls.png b/doc/administration/monitoring/performance/img/performance_bar_gitaly_calls.png
index 7af6d401d1d..2c43201cbd0 100644
--- a/doc/administration/monitoring/performance/img/performance_bar_gitaly_calls.png
+++ b/doc/administration/monitoring/performance/img/performance_bar_gitaly_calls.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_line_profiling.png b/doc/administration/monitoring/performance/img/performance_bar_line_profiling.png
deleted file mode 100644
index a55ce753101..00000000000
--- a/doc/administration/monitoring/performance/img/performance_bar_line_profiling.png
+++ /dev/null
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_redis_calls.png b/doc/administration/monitoring/performance/img/performance_bar_redis_calls.png
new file mode 100644
index 00000000000..ecb2dffbf8d
--- /dev/null
+++ b/doc/administration/monitoring/performance/img/performance_bar_redis_calls.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_rugged_calls.png b/doc/administration/monitoring/performance/img/performance_bar_rugged_calls.png
new file mode 100644
index 00000000000..210f80a713d
--- /dev/null
+++ b/doc/administration/monitoring/performance/img/performance_bar_rugged_calls.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/performance_bar_sql_queries.png b/doc/administration/monitoring/performance/img/performance_bar_sql_queries.png
index b3219b4fa94..6b571f4e85c 100644
--- a/doc/administration/monitoring/performance/img/performance_bar_sql_queries.png
+++ b/doc/administration/monitoring/performance/img/performance_bar_sql_queries.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/img/request_profile_result.png b/doc/administration/monitoring/performance/img/request_profile_result.png
index 1b06e240fa0..9176a0b49fd 100644
--- a/doc/administration/monitoring/performance/img/request_profile_result.png
+++ b/doc/administration/monitoring/performance/img/request_profile_result.png
Binary files differ
diff --git a/doc/administration/monitoring/performance/performance_bar.md b/doc/administration/monitoring/performance/performance_bar.md
index 95f497a1470..02f4b78bd60 100644
--- a/doc/administration/monitoring/performance/performance_bar.md
+++ b/doc/administration/monitoring/performance/performance_bar.md
@@ -8,16 +8,14 @@ activated, it looks as follows:
It allows you to see (from left to right):
- the current host serving the page
-- the timing of the page (backend, frontend)
- time taken and number of DB queries, click through for details of these queries
![SQL profiling using the Performance Bar](img/performance_bar_sql_queries.png)
- time taken and number of [Gitaly] calls, click through for details of these calls
![Gitaly profiling using the Performance Bar](img/performance_bar_gitaly_calls.png)
-- profile of the code used to generate the page, line by line. In the profile view, the numbers in the left panel represent wall time, cpu time, and number of calls (based on [rblineprof](https://github.com/tmm1/rblineprof)).
- ![Line profiling using the Performance Bar](img/performance_bar_line_profiling.png)
-- time taken and number of calls to Redis
-- time taken and number of background jobs created by Sidekiq
-- time taken and number of Ruby GC calls
+- time taken and number of [Rugged] calls, click through for details of these calls
+ ![Rugged profiling using the Performance Bar](img/performance_bar_rugged_calls.png)
+- time taken and number of Redis calls, click through for details of these calls
+ ![Redis profiling using the Performance Bar](img/performance_bar_redis_calls.png)
On the far right is a request selector that allows you to view the same metrics
(excluding the page timing and line profiler) for any requests made while the
@@ -40,10 +38,7 @@ display it.
You can toggle the Bar using the same shortcut.
----
-
![GitLab Performance Bar Admin Settings](img/performance_bar_configuration_settings.png)
----
-
[Gitaly]: ../../gitaly/index.md
+[Rugged]: ../../high_availability/nfs.md#improving-nfs-performance-with-gitlab
diff --git a/doc/administration/monitoring/performance/request_profiling.md b/doc/administration/monitoring/performance/request_profiling.md
index 726882fbb87..c32edb60f9d 100644
--- a/doc/administration/monitoring/performance/request_profiling.md
+++ b/doc/administration/monitoring/performance/request_profiling.md
@@ -5,9 +5,9 @@
1. Grab the profiling token from **Monitoring > Requests Profiles** admin page
(highlighted in a blue in the image below).
![Profile token](img/request_profiling_token.png)
-1. Pass the header `X-Profile-Token: <token>` to the request you want to profile. You can use:
+1. Pass the headers `X-Profile-Token: <token>` and `X-Profile-Mode: <mode>` (where `<mode>` can be `execution` or `memory`) to the request you want to profile. You can use:
- Browser extensions. For example, [ModHeader](https://chrome.google.com/webstore/detail/modheader/idgpnmonknjnojddfkpgkljpfnnfcklj) Chrome extension.
- - `curl`. For example, `curl --header 'X-Profile-Token: <token>' https://gitlab.example.com/group/project`.
+ - `curl`. For example, `curl --header 'X-Profile-Token: <token>' --header 'X-Profile-Mode: <mode>' https://gitlab.example.com/group/project`.
1. Once request is finished (which will take a little longer than usual), you can
view the profiling output from **Monitoring > Requests Profiles** admin page.
![Profiling output](img/request_profile_result.png)
diff --git a/doc/administration/monitoring/prometheus/gitlab_metrics.md b/doc/administration/monitoring/prometheus/gitlab_metrics.md
index 89501f20d99..4809bee0df6 100644
--- a/doc/administration/monitoring/prometheus/gitlab_metrics.md
+++ b/doc/administration/monitoring/prometheus/gitlab_metrics.md
@@ -1,7 +1,7 @@
# GitLab Prometheus metrics
>**Note:**
-Available since [Omnibus GitLab 9.3][29118]. For
+Available since [Omnibus GitLab 9.3](https://gitlab.com/gitlab-org/gitlab-ce/issues/29118). For
installations from source you'll have to configure it yourself.
To enable the GitLab Prometheus metrics:
@@ -9,123 +9,202 @@ To enable the GitLab Prometheus metrics:
1. Log into GitLab as an administrator, and go to the Admin area.
1. Click on the gear, then click on Settings.
1. Find the `Metrics - Prometheus` section, and click `Enable Prometheus Metrics`
-1. [Restart GitLab][restart] for the changes to take effect
+1. [Restart GitLab](../../restart_gitlab.md#omnibus-gitlab-restart) for the changes to take effect
## Collecting the metrics
GitLab monitors its own internal service metrics, and makes them available at the
-`/-/metrics` endpoint. Unlike other [Prometheus] exporters, in order to access
-it, the client IP needs to be [included in a whitelist][whitelist].
+`/-/metrics` endpoint. Unlike other [Prometheus](https://prometheus.io) exporters, in order to access
+it, the client IP needs to be [included in a whitelist](../ip_whitelist.md).
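+
+For example, from a whitelisted client the metrics can be fetched with a plain HTTP request (a sketch; substitute your own GitLab host):
+
+```sh
+curl "https://gitlab.example.com/-/metrics"
+```
+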
For Omnibus and Chart installations, these metrics are automatically enabled and collected as of [GitLab 9.4](https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests/1702). For source installations or earlier versions, these metrics will need to be enabled manually and collected by a Prometheus server.
-## Unicorn Metrics available
+## Metrics available
The following metrics are available:
-| Metric | Type | Since | Description |
-|:--------------------------------- |:--------- |:----- |:----------- |
-| db_ping_timeout | Gauge | 9.4 | Whether or not the last database ping timed out |
-| db_ping_success | Gauge | 9.4 | Whether or not the last database ping succeeded |
-| db_ping_latency_seconds | Gauge | 9.4 | Round trip time of the database ping |
-| filesystem_access_latency_seconds | Gauge | 9.4 | Latency in accessing a specific filesystem |
-| filesystem_accessible | Gauge | 9.4 | Whether or not a specific filesystem is accessible |
-| filesystem_write_latency_seconds | Gauge | 9.4 | Write latency of a specific filesystem |
-| filesystem_writable | Gauge | 9.4 | Whether or not the filesystem is writable |
-| filesystem_read_latency_seconds | Gauge | 9.4 | Read latency of a specific filesystem |
-| filesystem_readable | Gauge | 9.4 | Whether or not the filesystem is readable |
-| http_requests_total | Counter | 9.4 | Rack request count |
-| http_request_duration_seconds | Histogram | 9.4 | HTTP response time from rack middleware |
-| pipelines_created_total | Counter | 9.4 | Counter of pipelines created |
-| rack_uncaught_errors_total | Counter | 9.4 | Rack connections handling uncaught errors count |
-| redis_ping_timeout | Gauge | 9.4 | Whether or not the last redis ping timed out |
-| redis_ping_success | Gauge | 9.4 | Whether or not the last redis ping succeeded |
-| redis_ping_latency_seconds | Gauge | 9.4 | Round trip time of the redis ping |
-| user_session_logins_total | Counter | 9.4 | Counter of how many users have logged in |
-| upload_file_does_not_exist | Counter | 10.7 in EE, 11.5 in CE | Number of times an upload record could not find its file |
-| failed_login_captcha_total | Gauge | 11.0 | Counter of failed CAPTCHA attempts during login |
-| successful_login_captcha_total | Gauge | 11.0 | Counter of successful CAPTCHA attempts during login |
-| unicorn_active_connections | Gauge | 11.0 | The number of active Unicorn connections (workers) |
-| unicorn_queued_connections | Gauge | 11.0 | The number of queued Unicorn connections |
-| unicorn_workers | Gauge | 12.0 | The number of Unicorn workers |
+| Metric | Type | Since | Description | Labels |
+|:---------------------------------------------------------------|:----------|-----------------------:|:----------------------------------------------------------------------------------------------------|:----------------------------------------------------|
+| `gitlab_banzai_cached_render_real_duration_seconds` | Histogram | 9.4 | Duration of rendering markdown into HTML when cached output exists | controller, action |
+| `gitlab_banzai_cacheless_render_real_duration_seconds`         | Histogram | 9.4                    | Duration of rendering markdown into HTML when cached output does not exist                           | controller, action                                   |
+| `gitlab_cache_misses_total` | Counter | 10.2 | Cache read miss | controller, action |
+| `gitlab_cache_operation_duration_seconds` | Histogram | 10.2 | Cache access time | |
+| `gitlab_cache_operations_total` | Counter | 12.2 | Cache operations by controller/action | controller, action, operation |
+| `gitlab_database_transaction_seconds` | Histogram | 12.1 | Time spent in database transactions, in seconds | |
+| `gitlab_method_call_duration_seconds` | Histogram | 10.2 | Method calls real duration | controller, action, module, method |
+| `gitlab_rails_queue_duration_seconds` | Histogram | 9.4 | Measures latency between GitLab Workhorse forwarding a request to Rails | |
+| `gitlab_sql_duration_seconds` | Histogram | 10.2 | SQL execution time, excluding SCHEMA operations and BEGIN / COMMIT | |
+| `gitlab_transaction_allocated_memory_bytes` | Histogram | 10.2 | Allocated memory for all transactions (gitlab_transaction_* metrics) | |
+| `gitlab_transaction_cache_<key>_count_total` | Counter | 10.2 | Counter for total Rails cache calls (per key) | |
+| `gitlab_transaction_cache_<key>_duration_total` | Counter | 10.2 | Counter for total time (seconds) spent in Rails cache calls (per key) | |
+| `gitlab_transaction_cache_count_total` | Counter | 10.2 | Counter for total Rails cache calls (aggregate) | |
+| `gitlab_transaction_cache_duration_total` | Counter | 10.2 | Counter for total time (seconds) spent in Rails cache calls (aggregate) | |
+| `gitlab_transaction_cache_read_hit_count_total` | Counter | 10.2 | Counter for cache hits for Rails cache calls | controller, action |
+| `gitlab_transaction_cache_read_miss_count_total` | Counter | 10.2 | Counter for cache misses for Rails cache calls | controller, action |
+| `gitlab_transaction_duration_seconds` | Histogram | 10.2 | Duration for all transactions (gitlab_transaction_* metrics) | controller, action |
+| `gitlab_transaction_event_build_found_total` | Counter | 9.4 | Counter for build found for api /jobs/request | |
+| `gitlab_transaction_event_build_invalid_total` | Counter | 9.4 | Counter for build invalid due to concurrency conflict for api /jobs/request | |
+| `gitlab_transaction_event_build_not_found_cached_total` | Counter | 9.4 | Counter for cached response of build not found for api /jobs/request | |
+| `gitlab_transaction_event_build_not_found_total` | Counter | 9.4 | Counter for build not found for api /jobs/request | |
+| `gitlab_transaction_event_change_default_branch_total` | Counter | 9.4 | Counter when default branch is changed for any repository | |
+| `gitlab_transaction_event_create_repository_total` | Counter | 9.4 | Counter when any repository is created | |
+| `gitlab_transaction_event_etag_caching_cache_hit_total` | Counter | 9.4 | Counter for etag cache hit. | endpoint |
+| `gitlab_transaction_event_etag_caching_header_missing_total` | Counter | 9.4 | Counter for etag cache miss - header missing | endpoint |
+| `gitlab_transaction_event_etag_caching_key_not_found_total` | Counter | 9.4 | Counter for etag cache miss - key not found | endpoint |
+| `gitlab_transaction_event_etag_caching_middleware_used_total` | Counter | 9.4 | Counter for etag middleware accessed | endpoint |
+| `gitlab_transaction_event_etag_caching_resource_changed_total` | Counter | 9.4 | Counter for etag cache miss - resource changed | endpoint |
+| `gitlab_transaction_event_fork_repository_total` | Counter | 9.4 | Counter for repository forks (RepositoryForkWorker). Only incremented when source repository exists | |
+| `gitlab_transaction_event_import_repository_total` | Counter | 9.4 | Counter for repository imports (RepositoryImportWorker) | |
+| `gitlab_transaction_event_push_branch_total` | Counter | 9.4 | Counter for all branch pushes | |
+| `gitlab_transaction_event_push_commit_total` | Counter | 9.4 | Counter for commits | branch |
+| `gitlab_transaction_event_push_tag_total` | Counter | 9.4 | Counter for tag pushes | |
+| `gitlab_transaction_event_rails_exception_total` | Counter | 9.4 | Counter for number of rails exceptions | |
+| `gitlab_transaction_event_receive_email_total`                 | Counter   | 9.4                    | Counter for received emails                                                                           | handler                                              |
+| `gitlab_transaction_event_remote_mirrors_failed_total` | Counter | 10.8 | Counter for failed remote mirrors | |
+| `gitlab_transaction_event_remote_mirrors_finished_total` | Counter | 10.8 | Counter for finished remote mirrors | |
+| `gitlab_transaction_event_remote_mirrors_running_total` | Counter | 10.8 | Counter for running remote mirrors | |
+| `gitlab_transaction_event_remove_branch_total` | Counter | 9.4 | Counter when a branch is removed for any repository | |
+| `gitlab_transaction_event_remove_repository_total` | Counter | 9.4 | Counter when a repository is removed | |
+| `gitlab_transaction_event_remove_tag_total`                    | Counter   | 9.4                    | Counter when a tag is removed for any repository                                                      |                                                      |
+| `gitlab_transaction_event_sidekiq_exception_total` | Counter | 9.4 | Counter of sidekiq exceptions | |
+| `gitlab_transaction_event_stuck_import_jobs_total` | Counter | 9.4 | Count of stuck import jobs | projects_without_jid_count, projects_with_jid_count |
+| `gitlab_transaction_event_update_build_total` | Counter | 9.4 | Counter for update build for api /jobs/request/:id | |
+| `gitlab_transaction_new_redis_connections_total` | Counter | 9.4 | Counter for new redis connections | |
+| `gitlab_transaction_queue_duration_total` | Counter | 9.4 | Duration jobs were enqueued before processing | |
+| `gitlab_transaction_rails_queue_duration_total` | Counter | 9.4 | Measures latency between GitLab Workhorse forwarding a request to Rails | controller, action |
+| `gitlab_transaction_view_duration_total` | Counter | 9.4 | Duration for views | controller, action, view |
+| `gitlab_view_rendering_duration_seconds` | Histogram | 10.2 | Duration for views (histogram) | controller, action, view |
+| `http_requests_total` | Counter | 9.4 | Rack request count | method |
+| `http_request_duration_seconds` | Histogram | 9.4 | HTTP response time from rack middleware | method, status |
+| `pipelines_created_total` | Counter | 9.4 | Counter of pipelines created | |
+| `rack_uncaught_errors_total` | Counter | 9.4 | Rack connections handling uncaught errors count | |
+| `user_session_logins_total` | Counter | 9.4 | Counter of how many users have logged in | |
+| `upload_file_does_not_exist` | Counter | 10.7 in EE, 11.5 in CE | Number of times an upload record could not find its file | |
+| `failed_login_captcha_total` | Gauge | 11.0 | Counter of failed CAPTCHA attempts during login | |
+| `successful_login_captcha_total` | Gauge | 11.0 | Counter of successful CAPTCHA attempts during login | |
+
+## Metrics controlled by a feature flag
+
+The following metrics can be controlled by feature flags:
+
+| Metric | Feature Flag |
+|:---------------------------------------------------------------|:-------------------------------------------------------------------|
+| `gitlab_method_call_duration_seconds` | `prometheus_metrics_method_instrumentation` |
+| `gitlab_transaction_allocated_memory_bytes` | `prometheus_metrics_transaction_allocated_memory` |
+| `gitlab_transaction_event_build_found_total` | `prometheus_transaction_event_build_found_total` |
+| `gitlab_transaction_event_build_invalid_total` | `prometheus_transaction_event_build_invalid_total` |
+| `gitlab_transaction_event_build_not_found_cached_total` | `prometheus_transaction_event_build_not_found_cached_total` |
+| `gitlab_transaction_event_build_not_found_total` | `prometheus_transaction_event_build_not_found_total` |
+| `gitlab_transaction_event_change_default_branch_total` | `prometheus_transaction_event_change_default_branch_total` |
+| `gitlab_transaction_event_create_repository_total` | `prometheus_transaction_event_create_repository_total` |
+| `gitlab_transaction_event_etag_caching_cache_hit_total` | `prometheus_transaction_event_etag_caching_cache_hit_total` |
+| `gitlab_transaction_event_etag_caching_header_missing_total` | `prometheus_transaction_event_etag_caching_header_missing_total` |
+| `gitlab_transaction_event_etag_caching_key_not_found_total` | `prometheus_transaction_event_etag_caching_key_not_found_total` |
+| `gitlab_transaction_event_etag_caching_middleware_used_total` | `prometheus_transaction_event_etag_caching_middleware_used_total` |
+| `gitlab_transaction_event_etag_caching_resource_changed_total` | `prometheus_transaction_event_etag_caching_resource_changed_total` |
+| `gitlab_transaction_event_fork_repository_total` | `prometheus_transaction_event_fork_repository_total` |
+| `gitlab_transaction_event_import_repository_total` | `prometheus_transaction_event_import_repository_total` |
+| `gitlab_transaction_event_push_branch_total` | `prometheus_transaction_event_push_branch_total` |
+| `gitlab_transaction_event_push_commit_total` | `prometheus_transaction_event_push_commit_total` |
+| `gitlab_transaction_event_push_tag_total` | `prometheus_transaction_event_push_tag_total` |
+| `gitlab_transaction_event_rails_exception_total` | `prometheus_transaction_event_rails_exception_total` |
+| `gitlab_transaction_event_receive_email_total` | `prometheus_transaction_event_receive_email_total` |
+| `gitlab_transaction_event_remote_mirrors_failed_total` | `prometheus_transaction_event_remote_mirrors_failed_total` |
+| `gitlab_transaction_event_remote_mirrors_finished_total` | `prometheus_transaction_event_remote_mirrors_finished_total` |
+| `gitlab_transaction_event_remote_mirrors_running_total` | `prometheus_transaction_event_remote_mirrors_running_total` |
+| `gitlab_transaction_event_remove_branch_total` | `prometheus_transaction_event_remove_branch_total` |
+| `gitlab_transaction_event_remove_repository_total` | `prometheus_transaction_event_remove_repository_total` |
+| `gitlab_transaction_event_remove_tag_total` | `prometheus_transaction_event_remove_tag_total` |
+| `gitlab_transaction_event_sidekiq_exception_total` | `prometheus_transaction_event_sidekiq_exception_total` |
+| `gitlab_transaction_event_stuck_import_jobs_total` | `prometheus_transaction_event_stuck_import_jobs_total` |
+| `gitlab_transaction_event_update_build_total` | `prometheus_transaction_event_update_build_total` |
+| `gitlab_view_rendering_duration_seconds` | `prometheus_metrics_view_instrumentation` |
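+
+These flags can be toggled at runtime from a Rails console or runner. For example, a sketch for one of the flags above (this assumes shell access to the GitLab server and an Omnibus installation):
+
+```sh
+sudo gitlab-rails runner "Feature.enable(:prometheus_metrics_method_instrumentation)"
+# To stop collecting the metric again:
+sudo gitlab-rails runner "Feature.disable(:prometheus_metrics_method_instrumentation)"
+```
+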
## Sidekiq Metrics available for Geo **(PREMIUM)**
Sidekiq jobs may also gather metrics, and these metrics can be accessed if the Sidekiq exporter is enabled (e.g. via
the `monitoring.sidekiq_exporter` configuration option in `gitlab.yml`).
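+
+A minimal sketch of enabling that exporter in `gitlab.yml` for source installations (the address and port below are example values):
+
+```yaml
+monitoring:
+  sidekiq_exporter:
+    enabled: true
+    address: localhost
+    port: 8082
+```
+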
-| Metric | Type | Since | Description | Labels |
-|:-------------------------------------------- |:------- |:----- |:----------- |:------ |
-| geo_db_replication_lag_seconds | Gauge | 10.2 | Database replication lag (seconds) | url
-| geo_repositories | Gauge | 10.2 | Total number of repositories available on primary | url
-| geo_repositories_synced | Gauge | 10.2 | Number of repositories synced on secondary | url
-| geo_repositories_failed | Gauge | 10.2 | Number of repositories failed to sync on secondary | url
-| geo_lfs_objects | Gauge | 10.2 | Total number of LFS objects available on primary | url
-| geo_lfs_objects_synced | Gauge | 10.2 | Number of LFS objects synced on secondary | url
-| geo_lfs_objects_failed | Gauge | 10.2 | Number of LFS objects failed to sync on secondary | url
-| geo_attachments | Gauge | 10.2 | Total number of file attachments available on primary | url
-| geo_attachments_synced | Gauge | 10.2 | Number of attachments synced on secondary | url
-| geo_attachments_failed | Gauge | 10.2 | Number of attachments failed to sync on secondary | url
-| geo_last_event_id | Gauge | 10.2 | Database ID of the latest event log entry on the primary | url
-| geo_last_event_timestamp | Gauge | 10.2 | UNIX timestamp of the latest event log entry on the primary | url
-| geo_cursor_last_event_id | Gauge | 10.2 | Last database ID of the event log processed by the secondary | url
-| geo_cursor_last_event_timestamp | Gauge | 10.2 | Last UNIX timestamp of the event log processed by the secondary | url
-| geo_status_failed_total | Counter | 10.2 | Number of times retrieving the status from the Geo Node failed | url
-| geo_last_successful_status_check_timestamp | Gauge | 10.2 | Last timestamp when the status was successfully updated | url
-| geo_lfs_objects_synced_missing_on_primary | Gauge | 10.7 | Number of LFS objects marked as synced due to the file missing on the primary | url
-| geo_job_artifacts_synced_missing_on_primary | Gauge | 10.7 | Number of job artifacts marked as synced due to the file missing on the primary | url
-| geo_attachments_synced_missing_on_primary | Gauge | 10.7 | Number of attachments marked as synced due to the file missing on the primary | url
-| geo_repositories_checksummed_count | Gauge | 10.7 | Number of repositories checksummed on primary | url
-| geo_repositories_checksum_failed_count | Gauge | 10.7 | Number of repositories failed to calculate the checksum on primary | url
-| geo_wikis_checksummed_count | Gauge | 10.7 | Number of wikis checksummed on primary | url
-| geo_wikis_checksum_failed_count | Gauge | 10.7 | Number of wikis failed to calculate the checksum on primary | url
-| geo_repositories_verified_count | Gauge | 10.7 | Number of repositories verified on secondary | url
-| geo_repositories_verification_failed_count | Gauge | 10.7 | Number of repositories failed to verify on secondary | url
-| geo_repositories_checksum_mismatch_count | Gauge | 10.7 | Number of repositories that checksum mismatch on secondary | url
-| geo_wikis_verified_count | Gauge | 10.7 | Number of wikis verified on secondary | url
-| geo_wikis_verification_failed_count | Gauge | 10.7 | Number of wikis failed to verify on secondary | url
-| geo_wikis_checksum_mismatch_count | Gauge | 10.7 | Number of wikis that checksum mismatch on secondary | url
-| geo_repositories_checked_count | Gauge | 11.1 | Number of repositories that have been checked via `git fsck` | url
-| geo_repositories_checked_failed_count | Gauge | 11.1 | Number of repositories that have a failure from `git fsck` | url
-| geo_repositories_retrying_verification_count | Gauge | 11.2 | Number of repositories verification failures that Geo is actively trying to correct on secondary | url
-| geo_wikis_retrying_verification_count | Gauge | 11.2 | Number of wikis verification failures that Geo is actively trying to correct on secondary | url
+| Metric | Type | Since | Description | Labels |
+|:---------------------------------------------- |:------- |:----- |:----------- |:------ |
+| `geo_db_replication_lag_seconds` | Gauge | 10.2 | Database replication lag (seconds) | url |
+| `geo_repositories` | Gauge | 10.2 | Total number of repositories available on primary | url |
+| `geo_repositories_synced` | Gauge | 10.2 | Number of repositories synced on secondary | url |
+| `geo_repositories_failed` | Gauge | 10.2 | Number of repositories failed to sync on secondary | url |
+| `geo_lfs_objects` | Gauge | 10.2 | Total number of LFS objects available on primary | url |
+| `geo_lfs_objects_synced` | Gauge | 10.2 | Number of LFS objects synced on secondary | url |
+| `geo_lfs_objects_failed` | Gauge | 10.2 | Number of LFS objects failed to sync on secondary | url |
+| `geo_attachments` | Gauge | 10.2 | Total number of file attachments available on primary | url |
+| `geo_attachments_synced` | Gauge | 10.2 | Number of attachments synced on secondary | url |
+| `geo_attachments_failed` | Gauge | 10.2 | Number of attachments failed to sync on secondary | url |
+| `geo_last_event_id` | Gauge | 10.2 | Database ID of the latest event log entry on the primary | url |
+| `geo_last_event_timestamp` | Gauge | 10.2 | UNIX timestamp of the latest event log entry on the primary | url |
+| `geo_cursor_last_event_id` | Gauge | 10.2 | Last database ID of the event log processed by the secondary | url |
+| `geo_cursor_last_event_timestamp` | Gauge | 10.2 | Last UNIX timestamp of the event log processed by the secondary | url |
+| `geo_status_failed_total` | Counter | 10.2 | Number of times retrieving the status from the Geo Node failed | url |
+| `geo_last_successful_status_check_timestamp` | Gauge | 10.2 | Last timestamp when the status was successfully updated | url |
+| `geo_lfs_objects_synced_missing_on_primary` | Gauge | 10.7 | Number of LFS objects marked as synced due to the file missing on the primary | url |
+| `geo_job_artifacts_synced_missing_on_primary` | Gauge | 10.7 | Number of job artifacts marked as synced due to the file missing on the primary | url |
+| `geo_attachments_synced_missing_on_primary` | Gauge | 10.7 | Number of attachments marked as synced due to the file missing on the primary | url |
+| `geo_repositories_checksummed_count` | Gauge | 10.7 | Number of repositories checksummed on primary | url |
+| `geo_repositories_checksum_failed_count` | Gauge | 10.7 | Number of repositories failed to calculate the checksum on primary | url |
+| `geo_wikis_checksummed_count` | Gauge | 10.7 | Number of wikis checksummed on primary | url |
+| `geo_wikis_checksum_failed_count` | Gauge | 10.7 | Number of wikis failed to calculate the checksum on primary | url |
+| `geo_repositories_verified_count` | Gauge | 10.7 | Number of repositories verified on secondary | url |
+| `geo_repositories_verification_failed_count` | Gauge | 10.7 | Number of repositories failed to verify on secondary | url |
+| `geo_repositories_checksum_mismatch_count` | Gauge | 10.7 | Number of repositories that checksum mismatch on secondary | url |
+| `geo_wikis_verified_count` | Gauge | 10.7 | Number of wikis verified on secondary | url |
+| `geo_wikis_verification_failed_count` | Gauge | 10.7 | Number of wikis failed to verify on secondary | url |
+| `geo_wikis_checksum_mismatch_count` | Gauge | 10.7 | Number of wikis that checksum mismatch on secondary | url |
+| `geo_repositories_checked_count` | Gauge | 11.1 | Number of repositories that have been checked via `git fsck` | url |
+| `geo_repositories_checked_failed_count` | Gauge | 11.1 | Number of repositories that have a failure from `git fsck` | url |
+| `geo_repositories_retrying_verification_count` | Gauge | 11.2 | Number of repositories verification failures that Geo is actively trying to correct on secondary | url |
+| `geo_wikis_retrying_verification_count` | Gauge | 11.2 | Number of wikis verification failures that Geo is actively trying to correct on secondary | url |
### Ruby metrics
Some basic Ruby runtime metrics are available:
-| Metric | Type | Since | Description |
-|:-------------------------------------- |:--------- |:----- |:----------- |
-| ruby_gc_duration_seconds_total | Counter | 11.1 | Time spent by Ruby in GC |
-| ruby_gc_stat_... | Gauge | 11.1 | Various metrics from [GC.stat] |
-| ruby_file_descriptors | Gauge | 11.1 | File descriptors per process |
-| ruby_memory_bytes | Gauge | 11.1 | Memory usage by process |
-| ruby_sampler_duration_seconds_total | Counter | 11.1 | Time spent collecting stats |
-| ruby_process_cpu_seconds_total | Gauge | 12.0 | Total amount of CPU time per process |
-| ruby_process_max_fds | Gauge | 12.0 | Maximum number of open file descriptors per process |
-| ruby_process_resident_memory_bytes | Gauge | 12.0 | Memory usage by process, measured in bytes |
-| ruby_process_start_time_seconds | Gauge | 12.0 | UNIX timestamp of process start time |
+| Metric | Type | Since | Description |
+|:------------------------------------ |:--------- |:----- |:----------- |
+| `ruby_gc_duration_seconds` | Counter | 11.1 | Time spent by Ruby in GC |
+| `ruby_gc_stat_...` | Gauge | 11.1 | Various metrics from [GC.stat] |
+| `ruby_file_descriptors` | Gauge | 11.1 | File descriptors per process |
+| `ruby_memory_bytes` | Gauge | 11.1 | Memory usage by process |
+| `ruby_sampler_duration_seconds` | Counter | 11.1 | Time spent collecting stats |
+| `ruby_process_cpu_seconds_total` | Gauge | 12.0 | Total amount of CPU time per process |
+| `ruby_process_max_fds` | Gauge | 12.0 | Maximum number of open file descriptors per process |
+| `ruby_process_resident_memory_bytes` | Gauge | 12.0 | Memory usage by process, measured in bytes |
+| `ruby_process_start_time_seconds` | Gauge | 12.0 | UNIX timestamp of process start time |
-[GC.stat]: https://ruby-doc.org/core-2.3.0/GC.html#method-c-stat
+[GC.stat]: https://ruby-doc.org/core-2.6.3/GC.html#method-c-stat
+
+## Unicorn Metrics
+
+The following Unicorn-specific metrics are available when Unicorn is used as the web server:
+
+| Metric | Type | Since | Description |
+|:-----------------------------|:------|:------|:---------------------------------------------------|
+| `unicorn_active_connections` | Gauge | 11.0 | The number of active Unicorn connections (workers) |
+| `unicorn_queued_connections` | Gauge | 11.0 | The number of queued Unicorn connections |
+| `unicorn_workers` | Gauge | 12.0 | The number of Unicorn workers |
## Puma Metrics **(EXPERIMENTAL)**
-When Puma is used instead of Unicorn, following metrics are available:
-
-| Metric | Type | Since | Description |
-|:-------------------------------------------- |:------- |:----- |:----------- |
-| puma_workers | Gauge | 12.0 | Total number of workers |
-| puma_running_workers | Gauge | 12.0 | Number of booted workers |
-| puma_stale_workers | Gauge | 12.0 | Number of old workers |
-| puma_phase | Gauge | 12.0 | Phase number (increased during phased restarts) |
-| puma_running | Gauge | 12.0 | Number of running threads |
-| puma_queued_connections | Gauge | 12.0 | Number of connections in that worker's "todo" set waiting for a worker thread |
-| puma_active_connections | Gauge | 12.0 | Number of threads processing a request |
-| puma_pool_capacity | Gauge | 12.0 | Number of requests the worker is capable of taking right now |
-| puma_max_threads | Gauge | 12.0 | Maximum number of worker threads |
-| puma_idle_threads | Gauge | 12.0 | Number of spawned threads which are not processing a request |
-| rack_state_total | Gauge | 12.0 | Number of requests in a given rack state |
-| puma_killer_terminations_total | Gauge | 12.0 | Number of workers terminated by PumaWorkerKiller |
+When Puma is used instead of Unicorn, the following metrics are available:
+
+| Metric | Type | Since | Description |
+|:---------------------------------------------- |:------- |:----- |:----------- |
+| `puma_workers` | Gauge | 12.0 | Total number of workers |
+| `puma_running_workers` | Gauge | 12.0 | Number of booted workers |
+| `puma_stale_workers` | Gauge | 12.0 | Number of old workers |
+| `puma_running` | Gauge | 12.0 | Number of running threads |
+| `puma_queued_connections` | Gauge | 12.0 | Number of connections in that worker's "todo" set waiting for a worker thread |
+| `puma_active_connections` | Gauge | 12.0 | Number of threads processing a request |
+| `puma_pool_capacity` | Gauge | 12.0 | Number of requests the worker is capable of taking right now |
+| `puma_max_threads` | Gauge | 12.0 | Maximum number of worker threads |
+| `puma_idle_threads` | Gauge | 12.0 | Number of spawned threads which are not processing a request |
+| `puma_killer_terminations_total` | Gauge | 12.0 | Number of workers terminated by PumaWorkerKiller |
## Metrics shared directory
@@ -142,9 +221,3 @@ If GitLab is installed using Omnibus and `tmpfs` is available then metrics
directory will be automatically configured.
[← Back to the main Prometheus page](index.md)
-
-[29118]: https://gitlab.com/gitlab-org/gitlab-ce/issues/29118
-[Prometheus]: https://prometheus.io
-[restart]: ../../restart_gitlab.md#omnibus-gitlab-restart
-[whitelist]: ../ip_whitelist.md
-[reconfigure]: ../../restart_gitlab.md#omnibus-gitlab-reconfigure
diff --git a/doc/administration/monitoring/prometheus/gitlab_monitor_exporter.md b/doc/administration/monitoring/prometheus/gitlab_monitor_exporter.md
index f68b03d1ade..9aa4dfa5ab7 100644
--- a/doc/administration/monitoring/prometheus/gitlab_monitor_exporter.md
+++ b/doc/administration/monitoring/prometheus/gitlab_monitor_exporter.md
@@ -12,9 +12,9 @@ To enable the GitLab monitor exporter:
1. Edit `/etc/gitlab/gitlab.rb`
1. Add or find and uncomment the following line, making sure it's set to `true`:
- ```ruby
- gitlab_monitor['enable'] = true
- ```
+ ```ruby
+ gitlab_monitor['enable'] = true
+ ```
1. Save the file and [reconfigure GitLab][reconfigure] for the changes to
take effect
diff --git a/doc/administration/monitoring/prometheus/index.md b/doc/administration/monitoring/prometheus/index.md
index ce65d77274b..c8968c51393 100644
--- a/doc/administration/monitoring/prometheus/index.md
+++ b/doc/administration/monitoring/prometheus/index.md
@@ -39,9 +39,9 @@ To disable Prometheus and all of its exporters, as well as any added in the futu
1. Edit `/etc/gitlab/gitlab.rb`
1. Add or find and uncomment the following line, making sure it's set to `false`:
- ```ruby
- prometheus_monitoring['enable'] = false
- ```
+ ```ruby
+ prometheus_monitoring['enable'] = false
+ ```
1. Save the file and [reconfigure GitLab][reconfigure] for the changes to
take effect.
@@ -61,19 +61,19 @@ To change the address/port that Prometheus listens on:
1. Edit `/etc/gitlab/gitlab.rb`
1. Add or find and uncomment the following line:
- ```ruby
- prometheus['listen_address'] = 'localhost:9090'
- ```
+ ```ruby
+ prometheus['listen_address'] = 'localhost:9090'
+ ```
- Replace `localhost:9090` with the address/port you want Prometheus to
- listen on. If you would like to allow access to Prometheus to hosts other
- than `localhost`, leave out the host, or use `0.0.0.0` to allow public access:
+ Replace `localhost:9090` with the address/port you want Prometheus to
+ listen on. If you would like to allow access to Prometheus to hosts other
+ than `localhost`, leave out the host, or use `0.0.0.0` to allow public access:
- ```ruby
- prometheus['listen_address'] = ':9090'
- # or
- prometheus['listen_address'] = '0.0.0.0:9090'
- ```
+ ```ruby
+ prometheus['listen_address'] = ':9090'
+ # or
+ prometheus['listen_address'] = '0.0.0.0:9090'
+ ```
1. Save the file and [reconfigure GitLab][reconfigure] for the changes to
take effect
@@ -90,22 +90,22 @@ To use an external Prometheus server:
1. Edit `/etc/gitlab/gitlab.rb`.
1. Disable the bundled Prometheus:
- ```ruby
- prometheus['enable'] = false
- ```
+ ```ruby
+ prometheus['enable'] = false
+ ```
1. Set each bundled service's [exporter](#bundled-software-metrics) to listen on a network address, for example:
- ```ruby
- gitlab_monitor['listen_address'] = '0.0.0.0'
- sidekiq['listen_address'] = '0.0.0.0'
- gitlab_monitor['listen_port'] = '9168'
- node_exporter['listen_address'] = '0.0.0.0:9100'
- redis_exporter['listen_address'] = '0.0.0.0:9121'
- postgres_exporter['listen_address'] = '0.0.0.0:9187'
- gitaly['prometheus_listen_addr'] = "0.0.0.0:9236"
- gitlab_workhorse['prometheus_listen_addr'] = "0.0.0.0:9229"
- ```
+ ```ruby
+ gitlab_monitor['listen_address'] = '0.0.0.0'
+ sidekiq['listen_address'] = '0.0.0.0'
+ gitlab_monitor['listen_port'] = '9168'
+ node_exporter['listen_address'] = '0.0.0.0:9100'
+ redis_exporter['listen_address'] = '0.0.0.0:9121'
+ postgres_exporter['listen_address'] = '0.0.0.0:9187'
+ gitaly['prometheus_listen_addr'] = "0.0.0.0:9236"
+ gitlab_workhorse['prometheus_listen_addr'] = "0.0.0.0:9229"
+ ```
1. Install and set up a dedicated Prometheus instance, if necessary, using the [official installation instructions](https://prometheus.io/docs/prometheus/latest/installation/).
1. Add the Prometheus server IP address to the [monitoring IP whitelist](../ip_whitelist.html). For example:
@@ -117,14 +117,14 @@ To use an external Prometheus server:
1. To scrape nginx metrics, you'll also need to configure nginx to allow the Prometheus server
IP. For example:
- ```ruby
- nginx['status']['options'] = {
- "server_tokens" => "off",
- "access_log" => "off",
- "allow" => "192.168.0.1",
- "deny" => "all",
- }
- ```
+ ```ruby
+ nginx['status']['options'] = {
+ "server_tokens" => "off",
+ "access_log" => "off",
+ "allow" => "192.168.0.1",
+ "deny" => "all",
+ }
+ ```
1. [Reconfigure GitLab][reconfigure] to apply the changes
1. Edit the Prometheus server's configuration file.
@@ -132,19 +132,59 @@ To use an external Prometheus server:
[scrape target configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Cscrape_config%3E).
For example, a sample snippet using `static_configs`:
- ```yaml
- scrape_configs:
- - job_name: 'gitlab_exporters'
- static_configs:
- - targets: ['1.1.1.1:9168', '1.1.1.1:9236', '1.1.1.1:9236', '1.1.1.1:9100', '1.1.1.1:9121', '1.1.1.1:9187']
-
- - job_name: 'gitlab_metrics'
- metrics_path: /-/metrics
- static_configs:
- - targets: ['1.1.1.1:443']
- ```
-
-1. Restart the Prometheus server.
+ ```yaml
+ scrape_configs:
+ - job_name: nginx
+ static_configs:
+ - targets:
+ - 1.1.1.1:8060
+ - job_name: redis
+ static_configs:
+ - targets:
+ - 1.1.1.1:9121
+ - job_name: postgres
+ static_configs:
+ - targets:
+ - 1.1.1.1:9187
+ - job_name: node
+ static_configs:
+ - targets:
+ - 1.1.1.1:9100
+ - job_name: gitlab-workhorse
+ static_configs:
+ - targets:
+ - 1.1.1.1:9229
+ - job_name: gitlab-rails
+ metrics_path: "/-/metrics"
+ static_configs:
+ - targets:
+ - 1.1.1.1:8080
+ - job_name: gitlab-sidekiq
+ static_configs:
+ - targets:
+ - 1.1.1.1:8082
+ - job_name: gitlab_monitor_database
+ metrics_path: "/database"
+ static_configs:
+ - targets:
+ - 1.1.1.1:9168
+ - job_name: gitlab_monitor_sidekiq
+ metrics_path: "/sidekiq"
+ static_configs:
+ - targets:
+ - 1.1.1.1:9168
+ - job_name: gitlab_monitor_process
+ metrics_path: "/process"
+ static_configs:
+ - targets:
+ - 1.1.1.1:9168
+ - job_name: gitaly
+ static_configs:
+ - targets:
+ - 1.1.1.1:9236
+ ```
+
+1. Reload the Prometheus server.
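+
+   For example, the updated configuration can be checked and the server reloaded with commands along these lines (a sketch; the configuration path and service name depend on how Prometheus was installed):
+
+   ```sh
+   promtool check config /etc/prometheus/prometheus.yml
+   sudo systemctl reload prometheus
+   ```
+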
## Viewing performance metrics
@@ -241,9 +281,9 @@ To disable the monitoring of Kubernetes:
1. Edit `/etc/gitlab/gitlab.rb`.
1. Add or find and uncomment the following line and set it to `false`:
- ```ruby
- prometheus['monitor_kubernetes'] = false
- ```
+ ```ruby
+ prometheus['monitor_kubernetes'] = false
+ ```
1. Save the file and [reconfigure GitLab][reconfigure] for the changes to
take effect.
diff --git a/doc/administration/monitoring/prometheus/node_exporter.md b/doc/administration/monitoring/prometheus/node_exporter.md
index aef7758a88f..bcacfaa3be5 100644
--- a/doc/administration/monitoring/prometheus/node_exporter.md
+++ b/doc/administration/monitoring/prometheus/node_exporter.md
@@ -13,9 +13,9 @@ To enable the node exporter:
1. Edit `/etc/gitlab/gitlab.rb`
1. Add or find and uncomment the following line, making sure it's set to `true`:
- ```ruby
- node_exporter['enable'] = true
- ```
+ ```ruby
+ node_exporter['enable'] = true
+ ```
1. Save the file and [reconfigure GitLab][reconfigure] for the changes to
take effect
diff --git a/doc/administration/monitoring/prometheus/pgbouncer_exporter.md b/doc/administration/monitoring/prometheus/pgbouncer_exporter.md
index d76834fdbea..6183594c69c 100644
--- a/doc/administration/monitoring/prometheus/pgbouncer_exporter.md
+++ b/doc/administration/monitoring/prometheus/pgbouncer_exporter.md
@@ -12,9 +12,9 @@ To enable the PgBouncer exporter:
1. Edit `/etc/gitlab/gitlab.rb`
1. Add or find and uncomment the following line, making sure it's set to `true`:
- ```ruby
- pgbouncer_exporter['enable'] = true
- ```
+ ```ruby
+ pgbouncer_exporter['enable'] = true
+ ```
1. Save the file and [reconfigure GitLab][reconfigure] for the changes to
take effect.
diff --git a/doc/administration/monitoring/prometheus/postgres_exporter.md b/doc/administration/monitoring/prometheus/postgres_exporter.md
index 8e2d3162f88..3ad15b65497 100644
--- a/doc/administration/monitoring/prometheus/postgres_exporter.md
+++ b/doc/administration/monitoring/prometheus/postgres_exporter.md
@@ -12,9 +12,9 @@ To enable the postgres exporter:
1. Edit `/etc/gitlab/gitlab.rb`
1. Add or find and uncomment the following line, making sure it's set to `true`:
- ```ruby
- postgres_exporter['enable'] = true
- ```
+ ```ruby
+ postgres_exporter['enable'] = true
+ ```
1. Save the file and [reconfigure GitLab][reconfigure] for the changes to
take effect
diff --git a/doc/administration/monitoring/prometheus/redis_exporter.md b/doc/administration/monitoring/prometheus/redis_exporter.md
index d54d409dbb6..1520115a38d 100644
--- a/doc/administration/monitoring/prometheus/redis_exporter.md
+++ b/doc/administration/monitoring/prometheus/redis_exporter.md
@@ -13,9 +13,9 @@ To enable the Redis exporter:
1. Edit `/etc/gitlab/gitlab.rb`
1. Add or find and uncomment the following line, making sure it's set to `true`:
- ```ruby
- redis_exporter['enable'] = true
- ```
+ ```ruby
+ redis_exporter['enable'] = true
+ ```
1. Save the file and [reconfigure GitLab][reconfigure] for the changes to
take effect
diff --git a/doc/administration/operations/fast_ssh_key_lookup.md b/doc/administration/operations/fast_ssh_key_lookup.md
index b329abdca08..e787af798bc 100644
--- a/doc/administration/operations/fast_ssh_key_lookup.md
+++ b/doc/administration/operations/fast_ssh_key_lookup.md
@@ -71,10 +71,10 @@ sudo service sshd reload
Confirm that SSH is working by removing your user's SSH key in the UI, adding a
new one, and attempting to pull a repo.
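+
+For example (a sketch; replace the host and project path with your own):
+
+```sh
+ssh -T git@gitlab.example.com        # should answer with a GitLab welcome message
+git clone git@gitlab.example.com:group/project.git
+```
+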
-> **Note:** For Omnibus Docker, `AuthorizedKeysCommand` is setup by default in
+NOTE: **Note:** For Omnibus Docker, `AuthorizedKeysCommand` is set up by default in
GitLab 11.11 and later.
-> **Warning:** Do not disable writes until SSH is confirmed to be working
+CAUTION: **Caution:** Do not disable writes until SSH is confirmed to be working
perfectly, because the file will quickly become out-of-date.
In the case of lookup failures (which are common), the `authorized_keys`
@@ -117,81 +117,81 @@ the database. The following instructions can be used to build OpenSSH 7.5:
1. First, download the package and install the required packages:
- ```
- sudo su -
- cd /tmp
- curl --remote-name https://mirrors.evowise.com/pub/OpenBSD/OpenSSH/portable/openssh-7.5p1.tar.gz
- tar xzvf openssh-7.5p1.tar.gz
- yum install rpm-build gcc make wget openssl-devel krb5-devel pam-devel libX11-devel xmkmf libXt-devel
- ```
+ ```
+ sudo su -
+ cd /tmp
+ curl --remote-name https://mirrors.evowise.com/pub/OpenBSD/OpenSSH/portable/openssh-7.5p1.tar.gz
+ tar xzvf openssh-7.5p1.tar.gz
+ yum install rpm-build gcc make wget openssl-devel krb5-devel pam-devel libX11-devel xmkmf libXt-devel
+ ```
1. Prepare the build by copying files to the right place:
- ```
- mkdir -p /root/rpmbuild/{SOURCES,SPECS}
- cp ./openssh-7.5p1/contrib/redhat/openssh.spec /root/rpmbuild/SPECS/
- cp openssh-7.5p1.tar.gz /root/rpmbuild/SOURCES/
- cd /root/rpmbuild/SPECS
- ```
+ ```
+ mkdir -p /root/rpmbuild/{SOURCES,SPECS}
+ cp ./openssh-7.5p1/contrib/redhat/openssh.spec /root/rpmbuild/SPECS/
+ cp openssh-7.5p1.tar.gz /root/rpmbuild/SOURCES/
+ cd /root/rpmbuild/SPECS
+ ```
1. Next, set the spec settings properly:
- ```
- sed -i -e "s/%define no_gnome_askpass 0/%define no_gnome_askpass 1/g" openssh.spec
- sed -i -e "s/%define no_x11_askpass 0/%define no_x11_askpass 1/g" openssh.spec
- sed -i -e "s/BuildPreReq/BuildRequires/g" openssh.spec
- ```
+ ```
+ sed -i -e "s/%define no_gnome_askpass 0/%define no_gnome_askpass 1/g" openssh.spec
+ sed -i -e "s/%define no_x11_askpass 0/%define no_x11_askpass 1/g" openssh.spec
+ sed -i -e "s/BuildPreReq/BuildRequires/g" openssh.spec
+ ```
1. Build the RPMs:
- ```
- rpmbuild -bb openssh.spec
- ```
+ ```
+ rpmbuild -bb openssh.spec
+ ```
1. Ensure the RPMs were built:
- ```
- ls -al /root/rpmbuild/RPMS/x86_64/
- ```
+ ```
+ ls -al /root/rpmbuild/RPMS/x86_64/
+ ```
- You should see something as the following:
+   You should see something like the following:
- ```
- total 1324
- drwxr-xr-x. 2 root root 4096 Jun 20 19:37 .
- drwxr-xr-x. 3 root root 19 Jun 20 19:37 ..
- -rw-r--r--. 1 root root 470828 Jun 20 19:37 openssh-7.5p1-1.x86_64.rpm
- -rw-r--r--. 1 root root 490716 Jun 20 19:37 openssh-clients-7.5p1-1.x86_64.rpm
- -rw-r--r--. 1 root root 17020 Jun 20 19:37 openssh-debuginfo-7.5p1-1.x86_64.rpm
- -rw-r--r--. 1 root root 367516 Jun 20 19:37 openssh-server-7.5p1-1.x86_64.rpm
- ```
+ ```
+ total 1324
+ drwxr-xr-x. 2 root root 4096 Jun 20 19:37 .
+ drwxr-xr-x. 3 root root 19 Jun 20 19:37 ..
+ -rw-r--r--. 1 root root 470828 Jun 20 19:37 openssh-7.5p1-1.x86_64.rpm
+ -rw-r--r--. 1 root root 490716 Jun 20 19:37 openssh-clients-7.5p1-1.x86_64.rpm
+ -rw-r--r--. 1 root root 17020 Jun 20 19:37 openssh-debuginfo-7.5p1-1.x86_64.rpm
+ -rw-r--r--. 1 root root 367516 Jun 20 19:37 openssh-server-7.5p1-1.x86_64.rpm
+ ```
1. Install the packages. OpenSSH packages will replace `/etc/pam.d/sshd`
with its own version, which may prevent users from logging in, so be sure
that the file is backed up and restored after installation:
- ```
- timestamp=$(date +%s)
- cp /etc/pam.d/sshd pam-ssh-conf-$timestamp
- rpm -Uvh /root/rpmbuild/RPMS/x86_64/*.rpm
- yes | cp pam-ssh-conf-$timestamp /etc/pam.d/sshd
- ```
+ ```
+ timestamp=$(date +%s)
+ cp /etc/pam.d/sshd pam-ssh-conf-$timestamp
+ rpm -Uvh /root/rpmbuild/RPMS/x86_64/*.rpm
+ yes | cp pam-ssh-conf-$timestamp /etc/pam.d/sshd
+ ```
1. Verify the installed version. In another window, attempt to login to the server:
- ```
- ssh -v <your-centos-machine>
- ```
+ ```
+ ssh -v <your-centos-machine>
+ ```
- You should see a line that reads: "debug1: Remote protocol version 2.0, remote software version OpenSSH_7.5"
+ You should see a line that reads: "debug1: Remote protocol version 2.0, remote software version OpenSSH_7.5"
- If not, you may need to restart sshd (e.g. `systemctl restart sshd.service`).
+ If not, you may need to restart sshd (e.g. `systemctl restart sshd.service`).
-1. *IMPORTANT!* Open a new SSH session to your server before exiting to make
- sure everything is working! If you need to downgrade, simple install the
- older package:
+1. *IMPORTANT!* Open a new SSH session to your server before exiting to make
+   sure everything is working! If you need to downgrade, simply install the
+   older package:
- ```
- # Only run this if you run into a problem logging in
- yum downgrade openssh-server openssh openssh-clients
- ```
+ ```
+ # Only run this if you run into a problem logging in
+ yum downgrade openssh-server openssh openssh-clients
+ ```
diff --git a/doc/administration/operations/filesystem_benchmarking.md b/doc/administration/operations/filesystem_benchmarking.md
index c0c242733a2..b5922d9d99d 100644
--- a/doc/administration/operations/filesystem_benchmarking.md
+++ b/doc/administration/operations/filesystem_benchmarking.md
@@ -78,34 +78,37 @@ executed, and then read the same 1,000 files.
[repository storage path](../repository_storage_paths.md).
1. Create a temporary directory for the test so it's easy to remove the files later:
- ```sh
- mkdir test; cd test
- ```
+ ```sh
+ mkdir test; cd test
+ ```
+
1. Run the command:
- ```sh
- time for i in {0..1000}; do echo 'test' > "test${i}.txt"; done
- ```
-1. To benchmark read performance, run the command:
+ ```sh
+ time for i in {0..1000}; do echo 'test' > "test${i}.txt"; done
+ ```
- ```sh
- time for i in {0..1000}; do cat "test${i}.txt" > /dev/null; done
- ```
-1. Remove the test files:
+1. To benchmark read performance, run the command:
```sh
- cd ../; rm -rf test
+ time for i in {0..1000}; do cat "test${i}.txt" > /dev/null; done
```
+1. Remove the test files:
+
+ ```sh
+ cd ../; rm -rf test
+ ```
+
The output of the `time for ...` commands will look similar to the following. The
important metric is the `real` time.
```sh
$ time for i in {0..1000}; do echo 'test' > "test${i}.txt"; done
-real 0m0.116s
-user 0m0.025s
-sys 0m0.091s
+real 0m0.116s
+user 0m0.025s
+sys 0m0.091s
$ time for i in {0..1000}; do cat "test${i}.txt" > /dev/null; done
diff --git a/doc/administration/operations/img/sidekiq-cluster.png b/doc/administration/operations/img/sidekiq-cluster.png
index 4eb1849010e..3899385eb8f 100644
--- a/doc/administration/operations/img/sidekiq-cluster.png
+++ b/doc/administration/operations/img/sidekiq-cluster.png
Binary files differ
diff --git a/doc/administration/operations/unicorn.md b/doc/administration/operations/unicorn.md
index ae67d7f08d6..8178cb243f3 100644
--- a/doc/administration/operations/unicorn.md
+++ b/doc/administration/operations/unicorn.md
@@ -69,7 +69,7 @@ unicorn['worker_memory_limit_min'] = "400 * 1 << 20"
unicorn['worker_memory_limit_max'] = "650 * 1 << 20"
```
-Otherwise, you can set the `GITLAB_UNICORN_MEMORY_MIN` and `GITLAB_UNICORN_MEMORY_MIN`
+Otherwise, you can set the `GITLAB_UNICORN_MEMORY_MIN` and `GITLAB_UNICORN_MEMORY_MAX`
[environment variables](../environment_variables.md).
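+
+For source installations this could look roughly like the following (a sketch; where the variables are exported depends on how Unicorn is started, for example an init script or service environment file, and values are in bytes):
+
+```sh
+export GITLAB_UNICORN_MEMORY_MIN=$((400 * 1024 * 1024))
+export GITLAB_UNICORN_MEMORY_MAX=$((650 * 1024 * 1024))
+```
+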
This is what a Unicorn worker memory restart looks like in unicorn_stderr.log.
diff --git a/doc/administration/packages.md b/doc/administration/packages.md
index c0f8777a8c0..1628b6d6f91 100644
--- a/doc/administration/packages.md
+++ b/doc/administration/packages.md
@@ -55,7 +55,7 @@ local location or even use object storage.
The packages for Omnibus GitLab installations are stored under
`/var/opt/gitlab/gitlab-rails/shared/packages/` and for source
-installations under `shared/packages/` (relative to the git homedir).
+installations under `shared/packages/` (relative to the Git homedir).
To change the local storage path:
**Omnibus GitLab installations**
diff --git a/doc/administration/pages/img/lets_encrypt_integration_v12_1.png b/doc/administration/pages/img/lets_encrypt_integration_v12_1.png
new file mode 100644
index 00000000000..0f3ca25ce55
--- /dev/null
+++ b/doc/administration/pages/img/lets_encrypt_integration_v12_1.png
Binary files differ
diff --git a/doc/administration/pages/index.md b/doc/administration/pages/index.md
index 102656b01ec..774e7056845 100644
--- a/doc/administration/pages/index.md
+++ b/doc/administration/pages/index.md
@@ -124,9 +124,9 @@ The Pages daemon doesn't listen to the outside world.
1. Set the external URL for GitLab Pages in `/etc/gitlab/gitlab.rb`:
- ```shell
- pages_external_url 'http://example.io'
- ```
+ ```shell
+ pages_external_url 'http://example.io'
+ ```
1. [Reconfigure GitLab][reconfigure].
@@ -149,16 +149,16 @@ outside world.
1. Place the certificate and key inside `/etc/gitlab/ssl`
1. In `/etc/gitlab/gitlab.rb` specify the following configuration:
- ```shell
- pages_external_url 'https://example.io'
+ ```shell
+ pages_external_url 'https://example.io'
- pages_nginx['redirect_http_to_https'] = true
- pages_nginx['ssl_certificate'] = "/etc/gitlab/ssl/pages-nginx.crt"
- pages_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/pages-nginx.key"
- ```
+ pages_nginx['redirect_http_to_https'] = true
+ pages_nginx['ssl_certificate'] = "/etc/gitlab/ssl/pages-nginx.crt"
+ pages_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/pages-nginx.key"
+ ```
- where `pages-nginx.crt` and `pages-nginx.key` are the SSL cert and key,
- respectively.
+ where `pages-nginx.crt` and `pages-nginx.key` are the SSL cert and key,
+ respectively.
1. [Reconfigure GitLab][reconfigure].
@@ -171,9 +171,9 @@ behavior:
1. Edit `/etc/gitlab/gitlab.rb`.
1. Set the `inplace_chroot` to `true` for GitLab Pages:
- ```shell
- gitlab_pages['inplace_chroot'] = true
- ```
+ ```shell
+ gitlab_pages['inplace_chroot'] = true
+ ```
1. [Reconfigure GitLab][reconfigure].
@@ -206,16 +206,16 @@ world. Custom domains are supported, but no TLS.
1. Edit `/etc/gitlab/gitlab.rb`:
- ```shell
- pages_external_url "http://example.io"
- nginx['listen_addresses'] = ['192.0.2.1']
- pages_nginx['enable'] = false
- gitlab_pages['external_http'] = ['192.0.2.2:80', '[2001::2]:80']
- ```
+ ```shell
+ pages_external_url "http://example.io"
+ nginx['listen_addresses'] = ['192.0.2.1']
+ pages_nginx['enable'] = false
+ gitlab_pages['external_http'] = ['192.0.2.2:80', '[2001::2]:80']
+ ```
- where `192.0.2.1` is the primary IP address that GitLab is listening to and
- `192.0.2.2` and `2001::2` are the secondary IPs the GitLab Pages daemon
- listens on. If you don't have IPv6, you can omit the IPv6 address.
+ where `192.0.2.1` is the primary IP address that GitLab is listening to and
+ `192.0.2.2` and `2001::2` are the secondary IPs the GitLab Pages daemon
+ listens on. If you don't have IPv6, you can omit the IPv6 address.
1. [Reconfigure GitLab][reconfigure].
@@ -237,26 +237,26 @@ world. Custom domains and TLS are supported.
1. Edit `/etc/gitlab/gitlab.rb`:
- ```shell
- pages_external_url "https://example.io"
- nginx['listen_addresses'] = ['192.0.2.1']
- pages_nginx['enable'] = false
- gitlab_pages['cert'] = "/etc/gitlab/ssl/example.io.crt"
- gitlab_pages['cert_key'] = "/etc/gitlab/ssl/example.io.key"
- gitlab_pages['external_http'] = ['192.0.2.2:80', '[2001::2]:80']
- gitlab_pages['external_https'] = ['192.0.2.2:443', '[2001::2]:443']
- ```
+ ```shell
+ pages_external_url "https://example.io"
+ nginx['listen_addresses'] = ['192.0.2.1']
+ pages_nginx['enable'] = false
+ gitlab_pages['cert'] = "/etc/gitlab/ssl/example.io.crt"
+ gitlab_pages['cert_key'] = "/etc/gitlab/ssl/example.io.key"
+ gitlab_pages['external_http'] = ['192.0.2.2:80', '[2001::2]:80']
+ gitlab_pages['external_https'] = ['192.0.2.2:443', '[2001::2]:443']
+ ```
- where `192.0.2.1` is the primary IP address that GitLab is listening to and
- `192.0.2.2` and `2001::2` are the secondary IPs where the GitLab Pages daemon
- listens on. If you don't have IPv6, you can omit the IPv6 address.
+ where `192.0.2.1` is the primary IP address that GitLab is listening to and
+ `192.0.2.2` and `2001::2` are the secondary IPs where the GitLab Pages daemon
+ listens on. If you don't have IPv6, you can omit the IPv6 address.
1. [Reconfigure GitLab][reconfigure].
### Custom domain verification
To prevent malicious users from hijacking domains that don't belong to them,
-GitLab supports [custom domain verification](../../user/project/pages/getting_started_part_three.md#dns-txt-record).
+GitLab supports [custom domain verification](../../user/project/pages/custom_domains_ssl_tls_certification/index.md#steps).
When adding a custom domain, users will be required to prove they own it by
adding a GitLab-controlled verification code to the DNS records for that domain.
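
This typically takes the form of a DNS `TXT` record. As a purely illustrative check (the domain and code below are placeholders; the real values are shown in the domain's Pages settings), you can confirm the record is visible with `dig`:

```shell
dig TXT example.com +short
# "gitlab-pages-verification-code=1234567890abcdef"
```
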
@@ -265,6 +265,23 @@ verification requirement. Navigate to `Admin area ➔ Settings` and uncheck
**Require users to prove ownership of custom domains** in the Pages section.
This setting is enabled by default.
+### Let's Encrypt integration
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/issues/28996) in GitLab 12.1.
+
+[GitLab Pages' Let's Encrypt integration](../../user/project/pages/custom_domains_ssl_tls_certification/lets_encrypt_integration.md)
+allows users to add Let's Encrypt SSL certificates for GitLab Pages
+sites served under a custom domain.
+
+To enable it, you'll need to:
+
+1. Choose an email address at which you will receive notifications about expiring domains.
+1. Navigate to your instance's **Admin Area > Settings > Preferences** and expand **Pages** settings.
+1. Enter the email for receiving notifications and accept Let's Encrypt's Terms of Service as shown below.
+1. Click **Save changes**.
+
+![Let's Encrypt settings](img/lets_encrypt_integration_v12_1.png)
+
### Access control
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/issues/33422) in GitLab 11.5.
@@ -287,9 +304,9 @@ Pages access control is disabled by default. To enable it:
1. Enable it in `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_pages['access_control'] = true
- ```
+ ```ruby
+ gitlab_pages['access_control'] = true
+ ```
1. [Reconfigure GitLab][reconfigure].
1. Users can now configure it in their [projects' settings](../../user/project/pages/introduction.md#gitlab-pages-access-control-core-only).
@@ -302,9 +319,9 @@ pages:
1. Configure in `/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_pages['http_proxy'] = 'http://example:8080'
- ```
+ ```ruby
+ gitlab_pages['http_proxy'] = 'http://example:8080'
+ ```
1. [Reconfigure GitLab][reconfigure] for the changes to take effect.
@@ -319,9 +336,9 @@ Follow the steps below to configure verbose logging of GitLab Pages daemon.
If you wish to make it log events with level `DEBUG` you must configure this in
`/etc/gitlab/gitlab.rb`:
- ```shell
- gitlab_pages['log_verbose'] = true
- ```
+ ```shell
+ gitlab_pages['log_verbose'] = true
+ ```
1. [Reconfigure GitLab][reconfigure].
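
After reconfiguring, one way to follow the Pages daemon output on an Omnibus installation is shown below (this assumes the service is named `gitlab-pages`):

```shell
sudo gitlab-ctl tail gitlab-pages
```
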
@@ -334,9 +351,9 @@ are stored.
If you wish to store them in another location you must set it up in
`/etc/gitlab/gitlab.rb`:
- ```shell
- gitlab_rails['pages_path'] = "/mnt/storage/pages"
- ```
+ ```shell
+ gitlab_rails['pages_path'] = "/mnt/storage/pages"
+ ```
1. [Reconfigure GitLab][reconfigure].
@@ -347,19 +364,19 @@ Omnibus GitLab 11.1.
1. By default the listener is configured to listen for requests on `localhost:8090`.
- If you wish to disable it you must configure this in
- `/etc/gitlab/gitlab.rb`:
+ If you wish to disable it you must configure this in
+ `/etc/gitlab/gitlab.rb`:
- ```shell
- gitlab_pages['listen_proxy'] = nil
- ```
+ ```shell
+ gitlab_pages['listen_proxy'] = nil
+ ```
- If you wish to make it listen on a different port you must configure this also in
- `/etc/gitlab/gitlab.rb`:
+ If you wish to make it listen on a different port you must configure this also in
+ `/etc/gitlab/gitlab.rb`:
- ```shell
- gitlab_pages['listen_proxy'] = "localhost:10080"
- ```
+ ```shell
+ gitlab_pages['listen_proxy'] = "localhost:10080"
+ ```
1. [Reconfigure GitLab][reconfigure].
@@ -381,28 +398,29 @@ Follow the steps below to configure GitLab Pages in a separate server.
1. On `app2` install GitLab omnibus and modify `/etc/gitlab/gitlab.rb` this way:
- ```shell
- external_url 'http://<ip-address-of-the-server>'
- pages_external_url "http://<your-pages-domain>"
- postgresql['enable'] = false
- redis['enable'] = false
- prometheus['enable'] = false
- unicorn['enable'] = false
- sidekiq['enable'] = false
- gitlab_workhorse['enable'] = false
- gitaly['enable'] = false
- alertmanager['enable'] = false
- node_exporter['enable'] = false
- gitlab_rails['auto_migrate'] = false
- ```
+ ```shell
+ external_url 'http://<ip-address-of-the-server>'
+ pages_external_url "http://<your-pages-domain>"
+ postgresql['enable'] = false
+ redis['enable'] = false
+ prometheus['enable'] = false
+ unicorn['enable'] = false
+ sidekiq['enable'] = false
+ gitlab_workhorse['enable'] = false
+ gitaly['enable'] = false
+ alertmanager['enable'] = false
+ node_exporter['enable'] = false
+ gitlab_rails['auto_migrate'] = false
+ ```
+
1. Run `sudo gitlab-ctl reconfigure`.
1. On `app1` apply the following changes to `/etc/gitlab/gitlab.rb`:
- ```shell
- gitlab_pages['enable'] = false
- pages_external_url "http://<your-pages-domain>"
- gitlab_rails['pages_path'] = "/mnt/pages"
- ```
+ ```shell
+ gitlab_pages['enable'] = false
+ pages_external_url "http://<your-pages-domain>"
+ gitlab_rails['pages_path'] = "/mnt/pages"
+ ```
1. Run `sudo gitlab-ctl reconfigure`.
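
This setup assumes that `/mnt/pages` is storage shared between `app1` and `app2`, for example over NFS. An illustrative mount command, with a hypothetical NFS server and export path, would be:

```shell
sudo mount -t nfs4 nfs-server:/export/gitlab-pages /mnt/pages
```
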
diff --git a/doc/administration/pages/source.md b/doc/administration/pages/source.md
index 2100f7cd707..c77a1a9638f 100644
--- a/doc/administration/pages/source.md
+++ b/doc/administration/pages/source.md
@@ -24,8 +24,6 @@ SNI and exposes pages using HTTP2 by default.
You are encouraged to read its [README][pages-readme] to fully understand how
it works.
----
-
In the case of [custom domains](#custom-domains) (but not
[wildcard domains](#wildcard-domains)), the Pages daemon needs to listen on
ports `80` and/or `443`. For that reason, there is some flexibility in the way
@@ -92,8 +90,6 @@ since that is needed in all configurations.
- [Wildcard DNS setup](#dns-configuration)
----
-
URL scheme: `http://page.example.io`
This is the minimum setup that you can use Pages with. It is the base for all
@@ -102,90 +98,89 @@ The Pages daemon doesn't listen to the outside world.
1. Install the Pages daemon:
- ```
- cd /home/git
- sudo -u git -H git clone https://gitlab.com/gitlab-org/gitlab-pages.git
- cd gitlab-pages
- sudo -u git -H git checkout v$(</home/git/gitlab/GITLAB_PAGES_VERSION)
- sudo -u git -H make
- ```
+ ```
+ cd /home/git
+ sudo -u git -H git clone https://gitlab.com/gitlab-org/gitlab-pages.git
+ cd gitlab-pages
+ sudo -u git -H git checkout v$(</home/git/gitlab/GITLAB_PAGES_VERSION)
+ sudo -u git -H make
+ ```
1. Go to the GitLab installation directory:
- ```bash
- cd /home/git/gitlab
- ```
+ ```bash
+ cd /home/git/gitlab
+ ```
1. Edit `gitlab.yml` and under the `pages` setting, set `enabled` to `true` and
the `host` to the FQDN under which GitLab Pages will be served:
- ```yaml
- ## GitLab Pages
- pages:
- enabled: true
- # The location where pages are stored (default: shared/pages).
- # path: shared/pages
+ ```yaml
+ ## GitLab Pages
+ pages:
+ enabled: true
+ # The location where pages are stored (default: shared/pages).
+ # path: shared/pages
- host: example.io
- port: 80
- https: false
- ```
+ host: example.io
+ port: 80
+ https: false
+ ```
1. Edit `/etc/default/gitlab` and set `gitlab_pages_enabled` to `true` in
order to enable the pages daemon. In `gitlab_pages_options` the
`-pages-domain` must match the `host` setting that you set above.
- ```
- gitlab_pages_enabled=true
- gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090"
- ```
+ ```
+ gitlab_pages_enabled=true
+ gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090"
+ ```
1. Copy the `gitlab-pages` Nginx configuration file:
- ```bash
- sudo cp lib/support/nginx/gitlab-pages /etc/nginx/sites-available/gitlab-pages.conf
- sudo ln -sf /etc/nginx/sites-{available,enabled}/gitlab-pages.conf
- ```
+ ```bash
+ sudo cp lib/support/nginx/gitlab-pages /etc/nginx/sites-available/gitlab-pages.conf
+ sudo ln -sf /etc/nginx/sites-{available,enabled}/gitlab-pages.conf
+ ```
1. Restart NGINX
1. [Restart GitLab][restart]
### Wildcard domains with TLS support
-> **Requirements:**
-> - [Wildcard DNS setup](#dns-configuration)
-> - Wildcard TLS certificate
->
-> ---
->
-> URL scheme: `https://page.example.io`
+**Requirements:**
+
+- [Wildcard DNS setup](#dns-configuration)
+- Wildcard TLS certificate
+
+URL scheme: `https://page.example.io`
Nginx will proxy all requests to the daemon. Pages daemon doesn't listen to the
outside world.
1. Install the Pages daemon:
- ```
- cd /home/git
- sudo -u git -H git clone https://gitlab.com/gitlab-org/gitlab-pages.git
- cd gitlab-pages
- sudo -u git -H git checkout v$(</home/git/gitlab/GITLAB_PAGES_VERSION)
- sudo -u git -H make
- ```
+ ```
+ cd /home/git
+ sudo -u git -H git clone https://gitlab.com/gitlab-org/gitlab-pages.git
+ cd gitlab-pages
+ sudo -u git -H git checkout v$(</home/git/gitlab/GITLAB_PAGES_VERSION)
+ sudo -u git -H make
+ ```
1. In `gitlab.yml`, set the port to `443` and https to `true`:
- ```bash
- ## GitLab Pages
- pages:
- enabled: true
- # The location where pages are stored (default: shared/pages).
- # path: shared/pages
+ ```bash
+ ## GitLab Pages
+ pages:
+ enabled: true
+ # The location where pages are stored (default: shared/pages).
+ # path: shared/pages
- host: example.io
- port: 443
- https: true
- ```
+ host: example.io
+ port: 443
+ https: true
+ ```
1. Edit `/etc/default/gitlab` and set `gitlab_pages_enabled` to `true` in
order to enable the pages daemon. In `gitlab_pages_options` the
@@ -193,17 +188,17 @@ outside world.
The `-root-cert` and `-root-key` settings are the wildcard TLS certificates
of the `example.io` domain:
- ```
- gitlab_pages_enabled=true
- gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090 -root-cert /path/to/example.io.crt -root-key /path/to/example.io.key
- ```
+ ```
+ gitlab_pages_enabled=true
+   gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090 -root-cert /path/to/example.io.crt -root-key /path/to/example.io.key"
+ ```
1. Copy the `gitlab-pages-ssl` Nginx configuration file:
- ```bash
- sudo cp lib/support/nginx/gitlab-pages-ssl /etc/nginx/sites-available/gitlab-pages-ssl.conf
- sudo ln -sf /etc/nginx/sites-{available,enabled}/gitlab-pages-ssl.conf
- ```
+ ```bash
+ sudo cp lib/support/nginx/gitlab-pages-ssl /etc/nginx/sites-available/gitlab-pages-ssl.conf
+ sudo ln -sf /etc/nginx/sites-{available,enabled}/gitlab-pages-ssl.conf
+ ```
1. Restart NGINX
1. [Restart GitLab][restart]
@@ -217,13 +212,12 @@ that without TLS certificates.
### Custom domains
-> **Requirements:**
-> - [Wildcard DNS setup](#dns-configuration)
-> - Secondary IP
->
-> ---
->
-> URL scheme: `http://page.example.io` and `http://domain.com`
+**Requirements:**
+
+- [Wildcard DNS setup](#dns-configuration)
+- Secondary IP
+
+URL scheme: `http://page.example.io` and `http://domain.com`
In that case, the pages daemon is running, Nginx still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
@@ -231,48 +225,48 @@ world. Custom domains are supported, but no TLS.
1. Install the Pages daemon:
- ```
- cd /home/git
- sudo -u git -H git clone https://gitlab.com/gitlab-org/gitlab-pages.git
- cd gitlab-pages
- sudo -u git -H git checkout v$(</home/git/gitlab/GITLAB_PAGES_VERSION)
- sudo -u git -H make
- ```
+ ```
+ cd /home/git
+ sudo -u git -H git clone https://gitlab.com/gitlab-org/gitlab-pages.git
+ cd gitlab-pages
+ sudo -u git -H git checkout v$(</home/git/gitlab/GITLAB_PAGES_VERSION)
+ sudo -u git -H make
+ ```
1. Edit `gitlab.yml` to look like the example below. You need to change the
`host` to the FQDN under which GitLab Pages will be served. Set
`external_http` to the secondary IP on which the pages daemon will listen
for connections:
- ```yaml
- pages:
- enabled: true
- # The location where pages are stored (default: shared/pages).
- # path: shared/pages
+ ```yaml
+ pages:
+ enabled: true
+ # The location where pages are stored (default: shared/pages).
+ # path: shared/pages
- host: example.io
- port: 80
- https: false
+ host: example.io
+ port: 80
+ https: false
- external_http: 192.0.2.2:80
- ```
+ external_http: 192.0.2.2:80
+ ```
1. Edit `/etc/default/gitlab` and set `gitlab_pages_enabled` to `true` in
order to enable the pages daemon. In `gitlab_pages_options` the
`-pages-domain` and `-listen-http` must match the `host` and `external_http`
settings that you set above respectively:
- ```
- gitlab_pages_enabled=true
- gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090 -listen-http 192.0.2.2:80"
- ```
+ ```
+ gitlab_pages_enabled=true
+ gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090 -listen-http 192.0.2.2:80"
+ ```
1. Copy the `gitlab-pages` Nginx configuration file:
- ```bash
- sudo cp lib/support/nginx/gitlab-pages /etc/nginx/sites-available/gitlab-pages.conf
- sudo ln -sf /etc/nginx/sites-{available,enabled}/gitlab-pages.conf
- ```
+ ```bash
+ sudo cp lib/support/nginx/gitlab-pages /etc/nginx/sites-available/gitlab-pages.conf
+ sudo ln -sf /etc/nginx/sites-{available,enabled}/gitlab-pages.conf
+ ```
1. Edit all GitLab related configs in `/etc/nginx/sites-available/` and replace
   `0.0.0.0` with `192.0.2.1`, where `192.0.2.1` is the primary IP where GitLab
@@ -282,14 +276,13 @@ world. Custom domains are supported, but no TLS.
### Custom domains with TLS support
-> **Requirements:**
-> - [Wildcard DNS setup](#dns-configuration)
-> - Wildcard TLS certificate
-> - Secondary IP
->
-> ---
->
-> URL scheme: `https://page.example.io` and `https://domain.com`
+**Requirements:**
+
+- [Wildcard DNS setup](#dns-configuration)
+- Wildcard TLS certificate
+- Secondary IP
+
+URL scheme: `https://page.example.io` and `https://domain.com`
In that case, the pages daemon is running, Nginx still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
@@ -297,33 +290,33 @@ world. Custom domains and TLS are supported.
1. Install the Pages daemon:
- ```
- cd /home/git
- sudo -u git -H git clone https://gitlab.com/gitlab-org/gitlab-pages.git
- cd gitlab-pages
- sudo -u git -H git checkout v$(</home/git/gitlab/GITLAB_PAGES_VERSION)
- sudo -u git -H make
- ```
+ ```
+ cd /home/git
+ sudo -u git -H git clone https://gitlab.com/gitlab-org/gitlab-pages.git
+ cd gitlab-pages
+ sudo -u git -H git checkout v$(</home/git/gitlab/GITLAB_PAGES_VERSION)
+ sudo -u git -H make
+ ```
1. Edit `gitlab.yml` to look like the example below. You need to change the
`host` to the FQDN under which GitLab Pages will be served. Set
`external_http` and `external_https` to the secondary IP on which the pages
daemon will listen for connections:
- ```yaml
- ## GitLab Pages
- pages:
- enabled: true
- # The location where pages are stored (default: shared/pages).
- # path: shared/pages
+ ```yaml
+ ## GitLab Pages
+ pages:
+ enabled: true
+ # The location where pages are stored (default: shared/pages).
+ # path: shared/pages
- host: example.io
- port: 443
- https: true
+ host: example.io
+ port: 443
+ https: true
- external_http: 192.0.2.2:80
- external_https: 192.0.2.2:443
- ```
+ external_http: 192.0.2.2:80
+ external_https: 192.0.2.2:443
+ ```
1. Edit `/etc/default/gitlab` and set `gitlab_pages_enabled` to `true` in
order to enable the pages daemon. In `gitlab_pages_options` the
@@ -332,17 +325,17 @@ world. Custom domains and TLS are supported.
The `-root-cert` and `-root-key` settings are the wildcard TLS certificates
of the `example.io` domain:
- ```
- gitlab_pages_enabled=true
- gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090 -listen-http 192.0.2.2:80 -listen-https 192.0.2.2:443 -root-cert /path/to/example.io.crt -root-key /path/to/example.io.key
- ```
+ ```
+ gitlab_pages_enabled=true
+   gitlab_pages_options="-pages-domain example.io -pages-root $app_root/shared/pages -listen-proxy 127.0.0.1:8090 -listen-http 192.0.2.2:80 -listen-https 192.0.2.2:443 -root-cert /path/to/example.io.crt -root-key /path/to/example.io.key"
+ ```
1. Copy the `gitlab-pages-ssl` Nginx configuration file:
- ```bash
- sudo cp lib/support/nginx/gitlab-pages-ssl /etc/nginx/sites-available/gitlab-pages-ssl.conf
- sudo ln -sf /etc/nginx/sites-{available,enabled}/gitlab-pages-ssl.conf
- ```
+ ```bash
+ sudo cp lib/support/nginx/gitlab-pages-ssl /etc/nginx/sites-available/gitlab-pages-ssl.conf
+ sudo ln -sf /etc/nginx/sites-{available,enabled}/gitlab-pages-ssl.conf
+ ```
1. Edit all GitLab related configs in `/etc/nginx/sites-available/` and replace
   `0.0.0.0` with `192.0.2.1`, where `192.0.2.1` is the primary IP where GitLab
@@ -359,9 +352,9 @@ are stored.
If you wish to store them in another location you must set it up in
`/etc/gitlab/gitlab.rb`:
- ```ruby
- gitlab_rails['pages_path'] = "/mnt/storage/pages"
- ```
+ ```ruby
+ gitlab_rails['pages_path'] = "/mnt/storage/pages"
+ ```
1. [Reconfigure GitLab][reconfigure]
@@ -414,10 +407,10 @@ Pages access control is disabled by default. To enable it:
1. Modify your `config/gitlab.yml` file:
- ```yaml
- pages:
- access_control: true
- ```
+ ```yaml
+ pages:
+ access_control: true
+ ```
1. [Restart GitLab][restart].
1. Create a new [system OAuth application](../../integration/oauth_provider.md#adding-an-application-through-the-profile).
@@ -426,12 +419,12 @@ Pages access control is disabled by default. To enable it:
application, but it does need the "api" scope.
1. Start the Pages daemon with the following additional arguments:
- ```shell
- -auth-client-secret <OAuth code generated by GitLab> \
- -auth-redirect-uri http://projects.example.io/auth \
- -auth-secret <40 random hex characters> \
- -auth-server <URL of the GitLab instance>
- ```
+ ```shell
+ -auth-client-secret <OAuth code generated by GitLab> \
+ -auth-redirect-uri http://projects.example.io/auth \
+ -auth-secret <40 random hex characters> \
+ -auth-server <URL of the GitLab instance>
+ ```
1. Users can now configure it in their [projects' settings](../../user/project/pages/introduction.md#gitlab-pages-access-control-core-only).
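
For the `-auth-secret` argument above, a 40-character random hex string can be generated with, for example:

```shell
openssl rand -hex 20
```
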
@@ -444,12 +437,12 @@ are stored.
If you wish to store them in another location you must set it up in
`gitlab.yml` under the `pages` section:
- ```yaml
- pages:
- enabled: true
- # The location where pages are stored (default: shared/pages).
- path: /mnt/storage/pages
- ```
+ ```yaml
+ pages:
+ enabled: true
+ # The location where pages are stored (default: shared/pages).
+ path: /mnt/storage/pages
+ ```
1. [Restart GitLab][restart]
diff --git a/doc/administration/plugins.md b/doc/administration/plugins.md
index 4302667caf5..92a4d56ca63 100644
--- a/doc/administration/plugins.md
+++ b/doc/administration/plugins.md
@@ -39,7 +39,7 @@ Follow the steps below to set up a custom hook:
1. Inside the `plugins` directory, create a file with a name of your choice,
without spaces or special characters.
-1. Make the hook file executable and make sure it's owned by the git user.
+1. Make the hook file executable and make sure it's owned by the Git user.
1. Write the code to make the plugin function as expected. That can be
in any language, and ensure the 'shebang' at the top properly reflects the
language type. For example, if the script is in Ruby the shebang will
@@ -52,7 +52,37 @@ as appropriate. The plugins file list is updated for each event, there is no
need to restart GitLab to apply a new plugin.
If a plugin executes with non-zero exit code or GitLab fails to execute it, a
-message will be logged to `plugin.log`.
+message will be logged to:
+
+- `gitlab-rails/plugin.log` in an Omnibus installation.
+- `log/plugin.log` in a source installation.
+
+## Creating plugins
+
+Below is an example that will only respond to the event `project_create` and
+will inform the admins of the GitLab instance that a new project has been created.
+
+```ruby
+#!/opt/gitlab/embedded/bin/ruby
+# By using the embedded Ruby version we eliminate the possibility that our chosen
+# language would be unavailable on the host.
+require 'json'
+require 'mail'
+
+# The incoming variables are in JSON format so we need to parse them first.
+ARGS = JSON.parse(STDIN.read)
+
+# We only want to trigger this plugin on the event project_create.
+return unless ARGS['event_name'] == 'project_create'
+
+# We will inform the admins of our GitLab instance that a new project was created.
+Mail.deliver do
+  from 'info@gitlab_instance.com'
+  to 'admin@gitlab_instance.com'
+  subject "new project " + ARGS['name']
+  body ARGS['owner_name'] + ' created project ' + ARGS['name']
+end
+```
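+
+As noted earlier, the plugin file must be executable and owned by the Git user. On an Omnibus installation that could look like the following (the directory and file name here are illustrative):
+
+```shell
+sudo chown git:git /opt/gitlab/embedded/service/gitlab-rails/plugins/notify_admins.rb
+sudo chmod +x /opt/gitlab/embedded/service/gitlab-rails/plugins/notify_admins.rb
+```
+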
## Validation
@@ -82,4 +112,4 @@ Validating plugins from /plugins directory
[system hooks]: ../system_hooks/system_hooks.md
[webhooks]: ../user/project/integrations/webhooks.md
-[highly available]: ./high_availability/README.md \ No newline at end of file
+[highly available]: ./high_availability/README.md
diff --git a/doc/administration/pseudonymizer.md b/doc/administration/pseudonymizer.md
index 78b2751da13..716a4259a64 100644
--- a/doc/administration/pseudonymizer.md
+++ b/doc/administration/pseudonymizer.md
@@ -65,8 +65,8 @@ To configure the pseudonymizer, you need to:
```yaml
pseudonymizer:
- manifest: config/pseudonymizer.yml
- upload:
+ manifest: config/pseudonymizer.yml
+ upload:
remote_directory: 'gitlab-elt' # bucket name
connection:
provider: AWS
diff --git a/doc/administration/raketasks/geo.md b/doc/administration/raketasks/geo.md
index 435aed8c413..691d34ab7fa 100644
--- a/doc/administration/raketasks/geo.md
+++ b/doc/administration/raketasks/geo.md
@@ -2,7 +2,7 @@
## Git housekeeping
-There are few tasks you can run to schedule a git housekeeping to start at the
+There are a few tasks you can run to schedule Git housekeeping to start at the
next repository sync in a **Secondary node**:
### Incremental Repack
@@ -23,7 +23,7 @@ sudo -u git -H bundle exec rake geo:git:housekeeping:incremental_repack RAILS_EN
### Full Repack
-This is equivalent of running `git repack -d -A --pack-kept-objects` on a
+This is the equivalent of running `git repack -d -A --pack-kept-objects` on a
_bare_ repository, which will optionally write a reachability bitmap index
when this is enabled in GitLab.
diff --git a/doc/administration/raketasks/ldap.md b/doc/administration/raketasks/ldap.md
index 91fc0133d56..e880d76e756 100644
--- a/doc/administration/raketasks/ldap.md
+++ b/doc/administration/raketasks/ldap.md
@@ -28,6 +28,31 @@ limit by passing a number to the check task:
rake gitlab:ldap:check[50]
```
+## Run a Group Sync
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/14735) in [GitLab Starter](https://about.gitlab.com/pricing/) 12.3.
+
+The following task will run a [group sync](../auth/ldap-ee.md#group-sync) immediately. This is valuable
+when you'd like to update all configured group memberships against LDAP without
+waiting for the next scheduled group sync to be run.
+
+NOTE: **NOTE:**
+If you'd like to change the frequency at which a group sync is performed,
+[adjust the cron schedule](../auth/ldap-ee.md#adjusting-ldap-group-sync-schedule)
+instead.
+
+**Omnibus Installation**
+
+```
+sudo gitlab-rake gitlab:ldap:group_sync
+```
+
+**Source Installation**
+
+```bash
+bundle exec rake gitlab:ldap:group_sync
+```
+
## Rename a provider
If you change the LDAP server ID in `gitlab.yml` or `gitlab.rb` you will need
diff --git a/doc/administration/raketasks/maintenance.md b/doc/administration/raketasks/maintenance.md
index 2b31233d429..8d0b5b42515 100644
--- a/doc/administration/raketasks/maintenance.md
+++ b/doc/administration/raketasks/maintenance.md
@@ -251,3 +251,22 @@ sudo gitlab-rake gitlab:exclusive_lease:clear[project_housekeeping:*]
# to clear a lease for repository garbage collection in a specific project: (id=4)
sudo gitlab-rake gitlab:exclusive_lease:clear[project_housekeeping:4]
```
+
+## Display status of database migrations
+
+To check the status of migrations, you can use the following rake task:
+
+```bash
+sudo gitlab-rake db:migrate:status
+```
+
+This will output a table with a `Status` of `up` or `down` for
+each Migration ID.
+
+```bash
+database: gitlabhq_production
+
+ Status Migration ID Migration Name
+--------------------------------------------------
+ up migration_id migration_name
+``` \ No newline at end of file
diff --git a/doc/administration/raketasks/project_import_export.md b/doc/administration/raketasks/project_import_export.md
index 0599e12b913..138db4dfbb1 100644
--- a/doc/administration/raketasks/project_import_export.md
+++ b/doc/administration/raketasks/project_import_export.md
@@ -2,14 +2,14 @@
>**Note:**
>
-> - [Introduced][ce-3050] in GitLab 8.9.
-> - Importing will not be possible if the import instance version is lower
-> than that of the exporter.
-> - For existing installations, the project import option has to be enabled in
-> application settings (`/admin/application_settings`) under 'Import sources'.
-> - The exports are stored in a temporary [shared directory][tmp] and are deleted
-> every 24 hours by a specific worker.
-> - ImportExport can use object storage automatically starting from GitLab 11.3
+> - [Introduced][ce-3050] in GitLab 8.9.
+> - Importing will not be possible if the import instance version is lower
+> than that of the exporter.
+> - For existing installations, the project import option has to be enabled in
+> application settings (`/admin/application_settings`) under 'Import sources'.
+> - The exports are stored in a temporary [shared directory][tmp] and are deleted
+> every 24 hours by a specific worker.
+> - ImportExport can use object storage automatically starting from GitLab 11.3
The GitLab Import/Export version can be checked by using:
diff --git a/doc/administration/raketasks/storage.md b/doc/administration/raketasks/storage.md
index 42a1a1c2e60..1198f3414c5 100644
--- a/doc/administration/raketasks/storage.md
+++ b/doc/administration/raketasks/storage.md
@@ -1,17 +1,17 @@
# Repository Storage Rake Tasks
-This is a collection of rake tasks you can use to help you list and migrate
-existing projects and attachments associated with it from Legacy storage to
+This is a collection of rake tasks you can use to help you list and migrate
+existing projects and their associated attachments from Legacy storage to
the new Hashed storage type.
You can read more about the storage types [here][storage-types].
## Migrate existing projects to Hashed storage
-Before migrating your existing projects, you should
+Before migrating your existing projects, you should
[enable hashed storage][storage-migration] for the new projects as well.
-This task will schedule all your existing projects and attachments associated with it to be migrated to the
+This task will schedule all your existing projects and their associated attachments to be migrated to the
**Hashed** storage type:
**Omnibus Installation**
@@ -30,15 +30,15 @@ They both also accept a range as environment variable:
```bash
# to migrate any non migrated project from ID 20 to 50.
-export ID_FROM=20
+export ID_FROM=20
export ID_TO=50
```
You can monitor the progress in the **Admin Area > Monitoring > Background Jobs** page.
-There is a specific Queue you can watch to see how long it will take to finish:
+There is a specific Queue you can watch to see how long it will take to finish:
`hashed_storage:hashed_storage_project_migrate`
-After it reaches zero, you can confirm every project has been migrated by running the commands bellow.
+After it reaches zero, you can confirm every project has been migrated by running the commands below.
If you find it necessary, you can run this migration script again to schedule missing projects.
Any error or warning will be logged in Sidekiq's log file.
@@ -55,7 +55,7 @@ If you need to rollback the storage migration for any reason, you can follow the
NOTE: **Note:** Hashed Storage will be required in a future version of GitLab.
-To prevent new projects from being created in the Hashed storage,
+To prevent new projects from being created in the Hashed storage,
you need to undo the [enable hashed storage][storage-migration] changes.
This task will schedule all your existing projects and associated attachments to be rolled back to the
@@ -77,15 +77,14 @@ Both commands accept a range as environment variable:
```bash
# to rollback any migrated project from ID 20 to 50.
-export ID_FROM=20
+export ID_FROM=20
export ID_TO=50
```
You can monitor the progress in the **Admin Area > Monitoring > Background Jobs** page.
On the **Queues** tab, you can watch the `hashed_storage:hashed_storage_project_rollback` queue to see how long the process will take to finish.
-
-After it reaches zero, you can confirm every project has been rolled back by running the commands bellow.
+After it reaches zero, you can confirm every project has been rolled back by running the commands below.
If some projects weren't rolled back, you can run this rollback script again to schedule further rollbacks.
Any error or warning will be logged in Sidekiq's log file.
@@ -106,8 +105,6 @@ sudo gitlab-rake gitlab:storage:legacy_projects
sudo -u git -H bundle exec rake gitlab:storage:legacy_projects RAILS_ENV=production
```
-------
-
To list projects using **Legacy** storage:
**Omnibus Installation**
@@ -139,8 +136,6 @@ sudo gitlab-rake gitlab:storage:hashed_projects
sudo -u git -H bundle exec rake gitlab:storage:hashed_projects RAILS_ENV=production
```
-------
-
To list projects using **Hashed** storage:
**Omnibus Installation**
@@ -171,8 +166,6 @@ sudo gitlab-rake gitlab:storage:legacy_attachments
sudo -u git -H bundle exec rake gitlab:storage:legacy_attachments RAILS_ENV=production
```
-------
-
To list project attachments using **Legacy** storage:
**Omnibus Installation**
@@ -203,8 +196,6 @@ sudo gitlab-rake gitlab:storage:hashed_attachments
sudo -u git -H bundle exec rake gitlab:storage:hashed_attachments RAILS_ENV=production
```
-------
-
To list project attachments using **Hashed** storage:
**Omnibus Installation**
diff --git a/doc/administration/raketasks/uploads/migrate.md b/doc/administration/raketasks/uploads/migrate.md
index fd8ea8d3162..86e8b763f51 100644
--- a/doc/administration/raketasks/uploads/migrate.md
+++ b/doc/administration/raketasks/uploads/migrate.md
@@ -103,3 +103,13 @@ sudo -u git -H bundle exec rake "gitlab:uploads:migrate[NamespaceFileUploader, S
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[FileUploader, MergeRequest]"
```
+
+## Migrate legacy uploads out of deprecated paths
+
+> Introduced in GitLab 12.3.
+
+To migrate all uploads created by legacy uploaders, run:
+
+```shell
+bundle exec rake gitlab:uploads:legacy:migrate
+```
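+
+On Omnibus installations the same task would typically be invoked through the `gitlab-rake` wrapper (shown here as an assumption; verify against the Rake task list of your version):
+
+```shell
+sudo gitlab-rake gitlab:uploads:legacy:migrate
+```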
diff --git a/doc/administration/raketasks/uploads/sanitize.md b/doc/administration/raketasks/uploads/sanitize.md
index 54a423b9571..ae5ccfb9e37 100644
--- a/doc/administration/raketasks/uploads/sanitize.md
+++ b/doc/administration/raketasks/uploads/sanitize.md
@@ -4,16 +4,16 @@
You need `exiftool` installed on your system. If you installed GitLab:
-- Using the Omnibus package, you're all set.
-- From source, make sure `exiftool` is installed:
+- Using the Omnibus package, you're all set.
+- From source, make sure `exiftool` is installed:
- ```sh
- # Debian/Ubuntu
- sudo apt-get install libimage-exiftool-perl
+ ```sh
+ # Debian/Ubuntu
+ sudo apt-get install libimage-exiftool-perl
- # RHEL/CentOS
- sudo yum install perl-Image-ExifTool
- ```
+ # RHEL/CentOS
+ sudo yum install perl-Image-ExifTool
+ ```
## Remove EXIF data from existing uploads
diff --git a/doc/administration/reply_by_email_postfix_setup.md b/doc/administration/reply_by_email_postfix_setup.md
index 406f7e8a034..56cd23b2eb8 100644
--- a/doc/administration/reply_by_email_postfix_setup.md
+++ b/doc/administration/reply_by_email_postfix_setup.md
@@ -18,7 +18,7 @@ The instructions make the assumption that you will be using the email address `i
sudo apt-get install postfix
```
- When asked about the environment, select 'Internet Site'. When asked to confirm the hostname, make sure it matches gitlab.example.com`.
+ When asked about the environment, select 'Internet Site'. When asked to confirm the hostname, make sure it matches `gitlab.example.com`.
1. Install the `mailutils` package.
@@ -331,7 +331,7 @@ Courier, which we will install later to add IMAP authentication, requires mailbo
a logout
```
-## Done!
+## Done
If all the tests were successful, Postfix is all set up and ready to receive email! Continue with the [incoming email] guide to configure GitLab.
diff --git a/doc/administration/repository_checks.md b/doc/administration/repository_checks.md
index 7cf8f20a9dc..ab911c1cf0e 100644
--- a/doc/administration/repository_checks.md
+++ b/doc/administration/repository_checks.md
@@ -3,7 +3,7 @@
> [Introduced][ce-3232] in GitLab 8.7. It is OFF by default because it still
causes too many false alarms.
-Git has a built-in mechanism, [git fsck][git-fsck], to verify the
+Git has a built-in mechanism, [`git fsck`][git-fsck], to verify the
integrity of all data committed to a repository. GitLab administrators
can trigger such a check for a project via the project page under the
admin panel. The checks run asynchronously so it may take a few minutes
@@ -33,8 +33,8 @@ in `repocheck.log`:
- in the [admin panel](logs.md#repochecklog)
- or on disk, see:
- - `/var/log/gitlab/gitlab-rails` for Omnibus installations
- - `/home/git/gitlab/log` for installations from source
+ - `/var/log/gitlab/gitlab-rails` for Omnibus installations
+ - `/home/git/gitlab/log` for installations from source
If for some reason the periodic repository check caused a lot of false
alarms you can choose to clear *all* repository check states by
diff --git a/doc/administration/repository_storage_paths.md b/doc/administration/repository_storage_paths.md
index 3de860f9240..b1a870210a8 100644
--- a/doc/administration/repository_storage_paths.md
+++ b/doc/administration/repository_storage_paths.md
@@ -55,14 +55,12 @@ storage2:
> don't specify the full repository path but the parent path), but not for source
> installations.
----
-
Now that you've read that big fat warning above, let's edit the configuration
files and add the full paths of the alternative repository storage paths. In
-the example below, we add two more mountpoints that are named `nfs` and `cephfs`
+the example below, we add two more mountpoints that are named `nfs_1` and `nfs_2`
respectively.
-NOTE: **Note:** This example uses NFS and CephFS. We do not recommend using EFS for storage as it may impact GitLab's performance. See the [relevant documentation](high_availability/nfs.md#avoid-using-awss-elastic-file-system-efs) for more details.
+NOTE: **Note:** This example uses NFS. We do not recommend using EFS for storage as it may impact GitLab's performance. See the [relevant documentation](high_availability/nfs.md#avoid-using-awss-elastic-file-system-efs) for more details.
**For installations from source**
@@ -75,10 +73,10 @@ NOTE: **Note:** This example uses NFS and CephFS. We do not recommend using EFS
storages: # You must have at least a 'default' storage path.
default:
path: /home/git/repositories
- nfs:
- path: /mnt/nfs/repositories
- cephfs:
- path: /mnt/cephfs/repositories
+ nfs_1:
+ path: /mnt/nfs1/repositories
+ nfs_2:
+ path: /mnt/nfs2/repositories
```
1. [Restart GitLab][restart-gitlab] for the changes to take effect.
@@ -90,8 +88,6 @@ are upgrading from a version prior to 8.10, make sure to add the configuration
as described in the step above. After you make the changes and confirm they are
working, you can remove the `repos_path` line.
----
-
**For Omnibus installations**
1. Edit `/etc/gitlab/gitlab.rb` by appending the rest of the paths to the
@@ -100,8 +96,8 @@ working, you can remove the `repos_path` line.
```ruby
git_data_dirs({
"default" => { "path" => "/var/opt/gitlab/git-data" },
- "nfs" => { "path" => "/mnt/nfs/git-data" },
- "cephfs" => { "path" => "/mnt/cephfs/git-data" }
+ "nfs_1" => { "path" => "/mnt/nfs1/git-data" },
+ "nfs_2" => { "path" => "/mnt/nfs2/git-data" }
})
```
diff --git a/doc/administration/repository_storage_types.md b/doc/administration/repository_storage_types.md
index 9dea6074a3f..3ee8673b297 100644
--- a/doc/administration/repository_storage_types.md
+++ b/doc/administration/repository_storage_types.md
@@ -82,19 +82,17 @@ by another folder with the next 2 characters. They are both stored in a special
> [Introduced](https://gitlab.com/gitlab-org/gitaly/issues/1606) in GitLab 12.1.
-Forks of public projects are deduplicated by creating a third repository, the object pool, containing the objects from the source project. Using `objects/info/alternates`, the source project and forks use the object pool for shared objects. Objects are moved from the source project to the object pool when housekeeping is run on the source project.
+Forks of public projects are deduplicated by creating a third repository, the
+object pool, containing the objects from the source project. Using
+`objects/info/alternates`, the source project and forks use the object pool for
+shared objects. Objects are moved from the source project to the object pool
+when housekeeping is run on the source project.
```ruby
# object pool paths
"@pools/#{hash[0..1]}/#{hash[2..3]}/#{hash}.git"
```
-Object pools can be disabled using the `object_pools` feature flag, and can be
-disabled for individual projects by executing
-`Feature.disable(:object_pools, Project.find(<id>))`. Disabling object pools
-will not change existing deduplicated forks, but will prevent new forks from
-being deduplicated.
-
DANGER: **Danger:**
Do not run `git prune` or `git gc` in pool repositories! This can
cause data loss in "real" repositories that depend on the pool in
@@ -105,7 +103,7 @@ question.
To start a migration, enable Hashed Storage for new projects:
1. Go to **Admin > Settings > Repository** and expand the **Repository Storage** section.
-2. Select the **Use hashed storage paths for newly created and renamed projects** checkbox.
+1. Select the **Use hashed storage paths for newly created and renamed projects** checkbox.
Check if the change breaks any existing integration you may have that
either runs on the same machine as your repositories are located, or may login to that machine
@@ -118,7 +116,7 @@ If you do have any existing integration, you may want to do a small rollout firs
to validate. You can do so by specifying a range with the operation.
This is an example of how to limit the rollout to Project IDs 50 to 100, running in
-an Omnibus Gitlab installation:
+an Omnibus GitLab installation:
```bash
sudo gitlab-rake gitlab:storage:migrate_to_hashed ID_FROM=50 ID_TO=100
@@ -133,13 +131,13 @@ Similar to the migration, to disable Hashed Storage for new
projects:
1. Go to **Admin > Settings > Repository** and expand the **Repository Storage** section.
-2. Uncheck the **Use hashed storage paths for newly created and renamed projects** checkbox.
+1. Uncheck the **Use hashed storage paths for newly created and renamed projects** checkbox.
To schedule a complete rollback, see the
[rake task documentation for storage rollback](raketasks/storage.md#rollback-from-hashed-storage-to-legacy-storage) for instructions.
The rollback task also supports specifying a range of Project IDs. Here is an example
-of limiting the rollout to Project IDs 50 to 100, in an Omnibus Gitlab installation:
+of limiting the rollout to Project IDs 50 to 100, in an Omnibus GitLab installation:
```bash
sudo gitlab-rake gitlab:storage:rollback_to_legacy ID_FROM=50 ID_TO=100
@@ -185,7 +183,7 @@ CI Artifacts are S3 compatible since **9.4** (GitLab Premium), and available in
##### LFS Objects
-LFS Objects implements a similar storage pattern using 2 chars, 2 level folders, following git own implementation:
+LFS Objects implements a similar storage pattern using 2 chars, 2 level folders, following Git's own implementation:
```ruby
"shared/lfs-objects/#{oid[0..1}/#{oid[2..3]}/#{oid[4..-1]}"
diff --git a/doc/administration/smime_signing_email.md b/doc/administration/smime_signing_email.md
new file mode 100644
index 00000000000..b2e3bf8487b
--- /dev/null
+++ b/doc/administration/smime_signing_email.md
@@ -0,0 +1,49 @@
+# Signing outgoing email with S/MIME
+
+Notification emails sent by GitLab can be signed with S/MIME for improved
+security.
+
+> **Note:**
+Please be aware that S/MIME certificates and TLS/SSL certificates are not the
+same and are used for different purposes: TLS creates a secure channel, whereas
+S/MIME signs and/or encrypts the message itself.
+
+## Enable S/MIME signing
+
+This setting must be explicitly enabled and a single pair of key and certificate
+files must be provided in `gitlab.rb` or `gitlab.yml` if you are using Omnibus
+GitLab or installed GitLab from source respectively:
+
+```yaml
+email_smime:
+ enabled: true
+ key_file: /etc/pki/smime/private/gitlab.key
+ cert_file: /etc/pki/smime/certs/gitlab.crt
+```
+
+- Both files must be provided PEM-encoded.
+- The key file must be unencrypted so that GitLab can read it without user
+ intervention.
+
+NOTE: **Note:** Be mindful of the access levels for your private keys and visibility to
+third parties.
+
+### How to convert S/MIME PKCS#12 / PFX format to PEM encoding
+
+Typically S/MIME certificates are handled in binary PKCS#12 format (`.pfx` or `.p12`
+extensions), which contain the following in a single encrypted file:
+
+- Server certificate
+- Intermediate certificates (if any)
+- Private key
+
+In order to export the required files in PEM encoding from the PKCS#12 file,
+the `openssl` command can be used:
+
+```bash
+#-- Extract private key in PEM encoding (no password, unencrypted)
+$ openssl pkcs12 -in gitlab.p12 -nocerts -nodes -out gitlab.key
+
+#-- Extract certificates in PEM encoding (full certs chain including CA)
+$ openssl pkcs12 -in gitlab.p12 -nokeys -out gitlab.crt
+```
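+
+To double-check the extracted files, the certificate can be inspected with standard `openssl` tooling, for example:
+
+```shell
+# Show the certificate subject and validity dates
+openssl x509 -in gitlab.crt -noout -subject -dates
+```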
diff --git a/doc/administration/troubleshooting/debug.md b/doc/administration/troubleshooting/debug.md
index 8f7280d5128..604dff5983d 100644
--- a/doc/administration/troubleshooting/debug.md
+++ b/doc/administration/troubleshooting/debug.md
@@ -10,43 +10,43 @@ an SMTP server, but you're not seeing mail delivered. Here's how to check the se
1. Run a Rails console:
- ```sh
- sudo gitlab-rails console production
- ```
+ ```sh
+ sudo gitlab-rails console production
+ ```
- or for source installs:
+ or for source installs:
- ```sh
- bundle exec rails console production
- ```
+ ```sh
+ bundle exec rails console production
+ ```
1. Look at the ActionMailer `delivery_method` to make sure it matches what you
intended. If you configured SMTP, it should say `:smtp`. If you're using
Sendmail, it should say `:sendmail`:
- ```ruby
- irb(main):001:0> ActionMailer::Base.delivery_method
- => :smtp
- ```
+ ```ruby
+ irb(main):001:0> ActionMailer::Base.delivery_method
+ => :smtp
+ ```
1. If you're using SMTP, check the mail settings:
- ```ruby
- irb(main):002:0> ActionMailer::Base.smtp_settings
- => {:address=>"localhost", :port=>25, :domain=>"localhost.localdomain", :user_name=>nil, :password=>nil, :authentication=>nil, :enable_starttls_auto=>true}```
- ```
+ ```ruby
+ irb(main):002:0> ActionMailer::Base.smtp_settings
+   => {:address=>"localhost", :port=>25, :domain=>"localhost.localdomain", :user_name=>nil, :password=>nil, :authentication=>nil, :enable_starttls_auto=>true}
+ ```
- In the example above, the SMTP server is configured for the local machine. If this is intended, you may need to check your local mail
- logs (e.g. `/var/log/mail.log`) for more details.
+ In the example above, the SMTP server is configured for the local machine. If this is intended, you may need to check your local mail
+ logs (e.g. `/var/log/mail.log`) for more details.
-1. Send a test message via the console.
+1. Send a test message via the console.
- ```ruby
- irb(main):003:0> Notify.test_email('youremail@email.com', 'Hello World', 'This is a test message').deliver_now
- ```
+ ```ruby
+ irb(main):003:0> Notify.test_email('youremail@email.com', 'Hello World', 'This is a test message').deliver_now
+ ```
- If you do not receive an e-mail and/or see an error message, then check
- your mail server settings.
+ If you do not receive an e-mail and/or see an error message, then check
+ your mail server settings.
## Advanced Issues
@@ -103,37 +103,37 @@ downtime. Otherwise skip to the next section.
1. Run `sudo gdb -p <PID>` to attach to the unicorn process.
1. In the gdb window, type:
- ```
- call (void) rb_backtrace()
- ```
+ ```
+ call (void) rb_backtrace()
+ ```
1. This forces the process to generate a Ruby backtrace. Check
   `/var/log/gitlab/unicorn/unicorn_stderr.log` for the backtrace. For example, you may see:
- ```ruby
- from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `block in start'
- from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `loop'
- from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:36:in `block (2 levels) in start'
- from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:44:in `sample'
- from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `sample_objects'
- from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `each_with_object'
- from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `each'
- from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:69:in `block in sample_objects'
- from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:69:in `name'
- ```
+ ```ruby
+ from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `block in start'
+ from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:33:in `loop'
+ from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:36:in `block (2 levels) in start'
+ from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:44:in `sample'
+ from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `sample_objects'
+ from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `each_with_object'
+ from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:68:in `each'
+ from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:69:in `block in sample_objects'
+ from /opt/gitlab/embedded/service/gitlab-rails/lib/gitlab/metrics/sampler.rb:69:in `name'
+ ```
1. To see the current threads, run:
- ```
- thread apply all bt
- ```
+ ```
+ thread apply all bt
+ ```
1. Once you're done debugging with `gdb`, be sure to detach from the process and exit:
- ```
- detach
- exit
- ```
+ ```
+ detach
+ exit
+ ```
Note that if the unicorn process terminates before you are able to run these
commands, gdb will report an error. To buy more time, you can always raise the
@@ -162,21 +162,21 @@ separate Rails process to debug the issue:
1. Create a Personal Access Token for your user (Profile Settings -> Access Tokens).
1. Bring up the GitLab Rails console. For omnibus users, run:
- ```
- sudo gitlab-rails console
- ```
+ ```
+ sudo gitlab-rails console
+ ```
1. At the Rails console, run:
- ```ruby
- [1] pry(main)> app.get '<URL FROM STEP 2>/?private_token=<TOKEN FROM STEP 3>'
- ```
+ ```ruby
+ [1] pry(main)> app.get '<URL FROM STEP 2>/?private_token=<TOKEN FROM STEP 3>'
+ ```
- For example:
+ For example:
- ```ruby
- [1] pry(main)> app.get 'https://gitlab.com/gitlab-org/gitlab-ce/issues/1?private_token=123456'
- ```
+ ```ruby
+ [1] pry(main)> app.get 'https://gitlab.com/gitlab-org/gitlab-ce/issues/1?private_token=123456'
+ ```
1. In a new window, run `top`. It should show this ruby process using 100% CPU. Write down the PID.
1. Follow step 2 from the previous section on using gdb.
@@ -209,7 +209,7 @@ ps auwx | grep unicorn | awk '{ print " -p " $2}' | xargs strace -tt -T -f -s 10
The output in `/tmp/unicorn.txt` may help diagnose the root cause.
-# More information
+## More information
- [Debugging Stuck Ruby Processes](https://blog.newrelic.com/2013/04/29/debugging-stuck-ruby-processes-what-to-do-before-you-kill-9/)
- [Cheatsheet of using gdb and ruby processes](gdb-stuck-ruby.txt)
diff --git a/doc/administration/troubleshooting/diagnostics_tools.md b/doc/administration/troubleshooting/diagnostics_tools.md
new file mode 100644
index 00000000000..ab3b25f0e97
--- /dev/null
+++ b/doc/administration/troubleshooting/diagnostics_tools.md
@@ -0,0 +1,27 @@
+---
+type: reference
+---
+
+# Diagnostics tools
+
+These are some of the diagnostics tools the GitLab Support team uses during troubleshooting.
+They are listed here for transparency, and they may be useful for users with experience
+with troubleshooting GitLab. If you are currently having an issue with GitLab, you
+may want to check your [support options](https://about.gitlab.com/support/) first,
+before attempting to use these tools.
+
+## gitlabsos
+
+The [gitlabsos](https://gitlab.com/gitlab-com/support/toolbox/gitlabsos/) utility
+provides a unified method of gathering info and logs from GitLab and the system it's
+running on.
+
+## strace-parser
+
+[strace-parser](https://gitlab.com/wchandler/strace-parser) is a small tool to analyze
+and summarize raw strace data.
+
+## Pritaly
+
+[Pritaly](https://gitlab.com/wchandler/pritaly) takes Gitaly logs and colorizes output
+or converts the logs to JSON.
diff --git a/doc/administration/troubleshooting/elasticsearch.md b/doc/administration/troubleshooting/elasticsearch.md
new file mode 100644
index 00000000000..c4a7ba01fae
--- /dev/null
+++ b/doc/administration/troubleshooting/elasticsearch.md
@@ -0,0 +1,345 @@
+# Troubleshooting ElasticSearch
+
+Troubleshooting ElasticSearch requires:
+
+- Knowledge of common terms.
+- Establishing within which category the problem fits.
+
+## Common terminology
+
+- **Lucene**: A full-text search library written in Java.
+- **Near Realtime (NRT)**: Refers to the slight latency from the time to index a
+ document to the time when it becomes searchable.
+- **Cluster**: A collection of one or more nodes that work together to hold all
+ the data, providing indexing and search capabilities.
+- **Node**: A single server that works as part of a cluster.
+- **Index**: A collection of documents that have somewhat similar characteristics.
+- **Document**: A basic unit of information that can be indexed.
+- **Shards**: Fully-functional and independent subdivisions of indices. Each shard is actually
+ a Lucene index.
+- **Replicas**: Failover mechanisms that duplicate indices.
+
+## Troubleshooting workflows
+
+The type of problem will determine what steps to take. The possible troubleshooting workflows are for:
+
+- Search results.
+- Indexing.
+- Integration.
+- Performance.
+
+### Search Results workflow
+
+The following workflow is for ElasticSearch search results issues:
+
+```mermaid
+graph TD;
+ B --> |No| B1
+ B --> |Yes| B4
+ B1 --> B2
+ B2 --> B3
+ B4 --> B5
+ B5 --> |Yes| B6
+ B5 --> |No| B7
+ B7 --> B8
+ B{Is GitLab using<br>ElasticSearch for<br>searching?}
+ B1[Check Admin Area > Integrations<br>to ensure the settings are correct]
+ B2[Perform a search via<br>the rails console]
+ B3[If all settings are correct<br>and it still doesn't show ElasticSearch<br>doing the searches, escalate<br>to GitLab support.]
+ B4[Perform<br>the same search via the<br>ElasticSearch API]
+ B5{Are the results<br>the same?}
+ B6[This means it is working as intended.<br>Speak with GitLab support<br>to confirm if the issue lies with<br>the filters.]
+ B7[Check the index status of the project<br>containing the missing search<br>results.]
+ B8(Indexing Troubleshooting)
+```
+
+### Indexing workflow
+
+The following workflow is for ElasticSearch indexing issues:
+
+```mermaid
+graph TD;
+ C --> |Yes| C1
+ C1 --> |Yes| C2
+ C1 --> |No| C3
+ C3 --> |Yes| C4
+ C3 --> |No| C5
+ C --> |No| C6
+ C6 --> |No| C10
+ C7 --> |GitLab| C8
+ C7 --> |ElasticSearch| C9
+ C6 --> |Yes| C7
+ C10 --> |No| C12
+ C10 --> |Yes| C11
+ C12 --> |Yes| C13
+ C12 --> |No| C14
+ C14 --> |Yes| C15
+ C14 --> |No| C16
+ C{Is the problem with<br>creating an empty<br>index?}
+ C1{Does the gitlab-production<br>index exist on the<br>ElasticSearch instance?}
+ C2(Try to manually<br>delete the index on the<br>ElasticSearch instance and<br>retry creating an empty index.)
+ C3{Can indices be made<br>manually on the ElasticSearch<br>instance?}
+ C4(Retry the creation of an empty index)
+ C5(It is best to speak with an<br>ElasticSearch admin concerning the<br>instance's inability to create indices.)
+ C6{Is the indexer presenting<br>errors during indexing?}
+ C7{Is the error a GitLab<br>error or an ElasticSearch<br>error?}
+ C8[Escalate to<br>GitLab support]
+ C9[You will want<br>to speak with an<br>ElasticSearch admin.]
+ C10{Does the index status<br>show 100%?}
+ C11[Escalate to<br>GitLab support]
+ C12{Does re-indexing the project<br> present any GitLab errors?}
+ C13[Rectify the GitLab errors and<br>restart troubleshooting, or<br>escalate to GitLab support.]
+ C14{Does re-indexing the project<br>present errors on the <br>ElasticSearch instance?}
+ C15[It would be best<br>to speak with an<br>ElasticSearch admin.]
+ C16[This is likely a bug/issue<br>in GitLab and will require<br>deeper investigation. Escalate<br>to GitLab support.]
+```
+
+### Integration workflow
+
+The following workflow is for ElasticSearch integration issues:
+
+```mermaid
+graph TD;
+ D --> |No| D1
+ D --> |Yes| D2
+ D2 --> |No| D3
+ D2 --> |Yes| D4
+ D4 --> |No| D5
+ D4 --> |Yes| D6
+ D{Is the error concerning<br>the beta indexer?}
+ D1[It would be best<br>to speak with an<br>ElasticSearch admin.]
+ D2{Is the ICU development<br>package installed?}
+ D3>This package is required.<br>Install the package<br>and retry.]
+ D4{Is the error stemming<br>from the indexer?}
+ D5[This would indicate an OS level<br> issue. It would be best to<br>contact your sysadmin.]
+ D6[This is likely a bug/issue<br>in GitLab and will require<br>deeper investigation. Escalate<br>to GitLab support.]
+```
+
+### Performance workflow
+
+The following workflow is for ElasticSearch performance issues:
+
+```mermaid
+graph TD;
+ F --> |Yes| F1
+ F --> |No| F2
+ F2 --> |No| F3
+ F2 --> |Yes| F4
+ F4 --> F5
+ F5 --> |No| F6
+ F5 --> |Yes| F7
+ F{Is the ElasticSearch instance<br>running on the same server<br>as the GitLab instance?}
+ F1(This is not advised and will cause issues.<br>We recommend moving the ElasticSearch<br>instance to a different server.)
+ F2{Does the ElasticSearch<br>server have at least 8<br>GB of RAM and 2 CPU<br>cores?}
+ F3(According to ElasticSearch, a non-prod<br>server needs these as a base requirement.<br>Production often requires more. We recommend<br>you increase the server specifications.)
+ F4(Obtain the <br>cluster health information)
+ F5(Does it show the<br>status as green?)
+ F6(We recommend you speak with<br>an ElasticSearch admin<br>about implementing sharding.)
+ F7(Escalate to<br>GitLab support.)
+```
+
+## Troubleshooting walkthrough
+
+Most ElasticSearch troubleshooting can be broken down into 4 categories:
+
+- [Troubleshooting search results](#troubleshooting-search-results)
+- [Troubleshooting indexing](#troubleshooting-indexing)
+- [Troubleshooting integration](#troubleshooting-integration)
+- [Troubleshooting performance](#troubleshooting-performance)
+
+Generally speaking, if it does not fall into those four categories, it is either:
+
+- Something GitLab support needs to look into.
+- Not a true ElasticSearch issue.
+
+Exercise caution. Issues that appear to be ElasticSearch problems can be OS-level issues.
+
+### Troubleshooting search results
+
+Troubleshooting search result issues on ElasticSearch is rather straightforward.
+
+The first step is to confirm GitLab is using ElasticSearch for the search function.
+To do this:
+
+1. Confirm the integration is enabled in **Admin Area > Settings > Integrations**.
+1. Confirm searches utilize ElasticSearch by accessing the rails console
+ (`sudo gitlab-rails console`) and running the following commands:
+
+ ```rails
+ u = User.find_by_email('email_of_user_doing_search')
+ s = SearchService.new(u, {:search => 'search_term'})
+ pp s.search_objects.class.name
+ ```
+
+The output from the last command is the key here. If it shows:
+
+- `ActiveRecord::Relation`, **it is not** using ElasticSearch.
+- `Kaminari::PaginatableArray`, **it is** using ElasticSearch.
+
+| Not using ElasticSearch | Using ElasticSearch |
+|--------------------------|------------------------------|
+| `ActiveRecord::Relation` | `Kaminari::PaginatableArray` |
+
+If all the settings look correct and it is still not using ElasticSearch for the search function, it is best to escalate to GitLab support. This could be a bug/issue.
+
+Moving past that, it is best to attempt the same search using the [ElasticSearch Search API](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html) and compare the results from what you see in GitLab.
+
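+For a rough comparison, the index can be queried directly; the host, port, and search term below are placeholders:
+
+```shell
+curl "http://<elasticsearch-host>:9200/gitlab-production/_search?q=<search_term>&pretty"
+```
+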
+If the results:
+
+- Sync up, then there is not a technical "issue" per se. Instead, it might be a problem
+  with the ElasticSearch filters we are using. This can be complicated, so it is best to
+  escalate to GitLab support to check these and guide you on whether a feature request
+  is needed.
+- Do not match up, then this indicates a problem with the documents generated from the
+  project. It is best to re-index that project and proceed with
+  [Troubleshooting indexing](#troubleshooting-indexing).
+
+### Troubleshooting indexing
+
+Troubleshooting indexing issues can be tricky. It can pretty quickly go to either GitLab
+support or your ElasticSearch admin.
+
+The best place to start is to determine if the issue is with creating an empty index.
+If it is, check on the ElasticSearch side to determine if the `gitlab-production` (the
+name for the GitLab index) exists. If it exists, manually delete it on the ElasticSearch
+side and attempt to recreate it from the
+[`create_empty_index`](../../integration/elasticsearch.md#gitlab-elasticsearch-rake-tasks)
+rake task.
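+
+As a rough sketch of those steps (assuming the instance listens on `localhost:9200`):
+
+```bash
+# list indices to check whether gitlab-production exists
+curl "http://localhost:9200/_cat/indices?v"
+
+# delete the index on the ElasticSearch side
+curl -X DELETE "http://localhost:9200/gitlab-production"
+
+# recreate the empty index from GitLab
+sudo gitlab-rake gitlab:elastic:create_empty_index
+```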
+
+If you still encounter issues, try creating an index manually on the ElasticSearch
+instance. The details of the index aren't important here, as we want to test if indices
+can be made. If the indices:
+
+- Cannot be made, speak with your ElasticSearch admin.
+- Can be made, escalate this to GitLab support.
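+
+For example, a disposable test index can be created and removed directly against the
+instance (again assuming it listens on `localhost:9200`):
+
+```bash
+# create a throwaway index
+curl -X PUT "http://localhost:9200/test-index?pretty"
+
+# remove it once the test is done
+curl -X DELETE "http://localhost:9200/test-index?pretty"
+```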
+
+If the issue is not with creating an empty index, the next step is to check for errors
+during the indexing of projects. If errors do occur, they will stem from the indexing either:
+
+- On the GitLab side. You need to rectify those. If they are not
+ something you are familiar with, contact GitLab support for guidance.
+- Within the ElasticSearch instance itself. See if the error is [documented and has a fix](../../integration/elasticsearch.md#troubleshooting). If not, speak with your ElasticSearch admin.
+
+If the indexing process does not present errors, you will want to check the status of the indexed projects. You can do this via the following rake tasks:
+
+- [`sudo gitlab-rake gitlab:elastic:index_projects_status`](../../integration/elasticsearch.md#gitlab-elasticsearch-rake-tasks) (shows the overall status)
+- [`sudo gitlab-rake gitlab:elastic:projects_not_indexed`](../../integration/elasticsearch.md#gitlab-elasticsearch-rake-tasks) (shows specific projects that are not indexed)
+
+If:
+
+- Everything is showing at 100%, escalate to GitLab support. This could be a potential
+ bug/issue.
+- You see something not at 100%, attempt to reindex that project. To do this,
+ run `sudo gitlab-rake gitlab:elastic:index_projects ID_FROM=<project ID> ID_TO=<project ID>`.
+
+If reindexing the project shows:
+
+- Errors on the GitLab side, escalate those to GitLab support.
+- ElasticSearch errors or doesn't present any errors at all, reach out to your
+ ElasticSearch admin to check the instance.
+
+### Troubleshooting integration
+
+Troubleshooting integration tends to be pretty straightforward, as there really isn't
+much to "integrate" here.
+
+If the issue is:
+
+- Not concerning the beta indexer, it is almost always an
+ ElasticSearch-side issue. This means you should reach out to your ElasticSearch admin
+ regarding the error(s) you are seeing. If you are unsure here, it never hurts to reach
+ out to GitLab support.
+- With the beta indexer, check whether the ICU development package is installed.
+  This package is required, so make sure it is installed.
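+
+  For example, on common distributions the development package could be installed
+  with one of the following (the package names vary between distributions):
+
+  ```bash
+  # Debian/Ubuntu
+  sudo apt-get install libicu-dev
+
+  # RHEL/CentOS
+  sudo yum install libicu-devel
+  ```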
+
+Beyond that, you will want to review the error. If it is:
+
+- Specifically from the indexer, this could be a bug/issue and should be escalated to
+ GitLab support.
+- An OS issue, you will want to reach out to your systems administrator.
+
+### Troubleshooting performance
+
+Troubleshooting performance can be difficult on ElasticSearch. There is a ton of tuning
+that *can* be done, but the majority of this falls on the shoulders of a skilled
+ElasticSearch administrator.
+
+Generally speaking, ensure:
+
+- The ElasticSearch server **is not** running on the same node as GitLab.
+- The ElasticSearch server has enough RAM and CPU cores.
+- Sharding **is** being used.
+
+Going into some more detail here, if ElasticSearch is running on the same server as GitLab, resource contention is **very** likely to occur. Ideally, ElasticSearch, which requires ample resources, should be running on its own server (maybe coupled with Logstash and Kibana).
+
+When it comes to ElasticSearch, RAM is the key resource. ElasticSearch themselves recommend:
+
+- **At least** 8 GB of RAM for a non-production instance.
+- **At least** 16 GB of RAM for a production instance.
+- Ideally, 64 GB of RAM.
+
+For CPU, ElasticSearch recommends at least 2 CPU cores, and states that common
+setups use up to 8 cores. For more details on server specs, check out
+[ElasticSearch's hardware guide](https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html).
+
+Beyond the obvious, sharding comes into play. Sharding is a core part of ElasticSearch.
+It allows for horizontal scaling of indices, which is helpful when you are dealing with
+a large amount of data.
+
+With the way GitLab does indexing, there is a **huge** amount of documents being
+indexed. By utilizing sharding, you can speed up ElasticSearch's ability to locate
+data, since each shard is a Lucene index.
+
+If you are not using sharding, you are likely to hit issues when you start using
+ElasticSearch in a production environment.
+
+Keep in mind that an index with only one shard has **no scale factor** and will
+likely encounter issues when called upon with some frequency.
+
+If you need to know how many shards to use, read
+[ElasticSearch's documentation on capacity planning](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/capacity-planning.html),
+as the answer is not straightforward.
+
+The easiest way to determine if sharding is in use is to check the output of the
+[ElasticSearch Health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html):
+
+- Red means the cluster is down.
+- Yellow means it is up with no sharding/replication.
+- Green means it is healthy (up, sharding, replicating).
+
+For production use, it should always be green.
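+
+The health output can be fetched with a single request (assuming the instance listens
+on `localhost:9200`):
+
+```bash
+curl "http://localhost:9200/_cluster/health?pretty"
+```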
+
+Beyond these steps, you get into some of the more complicated things to check,
+such as merges and caching. These can get complicated and it takes some time to
+learn them, so it is best to escalate/pair with an ElasticSearch expert if you need to
+dig further into these.
+
+Feel free to reach out to GitLab support, but this is likely to be something a skilled
+ElasticSearch admin has more experience with.
+
+## Common issues
+
+All common issues [should be documented](../../integration/elasticsearch.md#troubleshooting). If not,
+feel free to update that page with issues you encounter and solutions.
+
+## Replication
+
+Setting up ElasticSearch isn't too bad, but it can be a bit finicky and time consuming.
+
+The easiest method is to spin up a Docker container with the required version and
+bind ports 9200/9300 so it can be used.
+
+The following is an example of running a Docker container of ElasticSearch v7.2.0:
+
+```bash
+docker pull docker.elastic.co/elasticsearch/elasticsearch:7.2.0
+docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.2.0
+```
+
+From here, you can:
+
+- Grab the IP of the Docker container (use `docker inspect <container_id>`).
+- Use `<IP.add.re.ss:9200>` to communicate with it.
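+
+Once you have the IP address, a quick check that the instance responds (the default
+port 9200 is assumed):
+
+```bash
+curl "http://<IP.add.re.ss>:9200"
+```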
+
+This is a quick method to test out ElasticSearch, but by no means is this a
+production solution.
diff --git a/doc/administration/troubleshooting/kubernetes_cheat_sheet.md b/doc/administration/troubleshooting/kubernetes_cheat_sheet.md
new file mode 100644
index 00000000000..260af333e8e
--- /dev/null
+++ b/doc/administration/troubleshooting/kubernetes_cheat_sheet.md
@@ -0,0 +1,261 @@
+---
+type: reference
+---
+
+# Kubernetes, GitLab and You
+
+This is a list of useful information regarding Kubernetes that the GitLab Support
+Team sometimes uses while troubleshooting. GitLab is making this public, so that anyone
+can make use of the Support team's collected knowledge.
+
+CAUTION: **Caution:**
+These commands **can alter or break** your Kubernetes components, so use them at your own risk.
+
+If you are on a [paid tier](https://about.gitlab.com/pricing/) and are not sure how
+to use these commands, it is best to [contact Support](https://about.gitlab.com/support/)
+and they will assist you with any issues you are having.
+
+## Generic Kubernetes commands
+
+- How to authenticate to your GCP project (especially useful if you have projects
+  under different GCP accounts):
+
+ ```bash
+ gcloud auth login
+ ```
+
+- How to access Kubernetes dashboard:
+
+ ```bash
+ # for minikube:
+  minikube dashboard --url
+ # for non-local installations if access via Kubectl is configured:
+ kubectl proxy
+ ```
+
+- How to SSH to a Kubernetes node and enter a container as root
+ <https://github.com/kubernetes/kubernetes/issues/30656>:
+
+ - For GCP, you may find the node name and run `gcloud compute ssh node-name`.
+ - List containers using `docker ps`.
+ - Enter container using `docker exec --user root -ti container-id bash`.
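+
+  As a rough sketch on GCP (the node and container names below are placeholders):
+
+  ```bash
+  # list the nodes, then SSH to the one running the container you need
+  kubectl get nodes
+  gcloud compute ssh <node-name>
+
+  # on the node: find the container and enter it as root
+  docker ps
+  docker exec --user root -ti <container-id> bash
+  ```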
+
+- How to copy a file from local machine to a pod:
+
+ ```bash
+ kubectl cp file-name pod-name:./destination-path
+ ```
+
+- What to do with pods in `CrashLoopBackOff` status:
+
+ - Check logs via Kubernetes dashboard.
+ - Check logs via Kubectl:
+
+ ```bash
+ kubectl logs <unicorn pod> -c dependencies
+ ```
+
+- How to tail all Kubernetes cluster events in real time:
+
+ ```bash
+ kubectl get events -w --all-namespaces
+ ```
+
+- How to get logs of the previously terminated pod instance:
+
+ ```bash
+ kubectl logs <pod-name> --previous
+ ```
+
+ NOTE: **Note:**
+  No logs are kept in the containers/pods themselves; everything is written to stdout.
+  This is a principle of Kubernetes; read [Twelve-factor app](https://12factor.net/) for details.
+
+## GitLab-specific Kubernetes information
+
+- Minimal config that can be used to test a Kubernetes helm chart can be found
+ [here](https://gitlab.com/charts/gitlab/issues/620).
+
+- Tailing the logs of a specific pod. For example, for a unicorn pod:
+
+ ```bash
+ kubectl logs gitlab-unicorn-7656fdd6bf-jqzfs -c unicorn
+ ```
+
+- Tail and follow all pods that share a label (in this case, `unicorn`):
+
+ ```bash
+ # all containers in the unicorn pods
+ kubectl logs -f -l app=unicorn --all-containers=true --max-log-requests=50
+
+ # only the unicorn containers in all unicorn pods
+ kubectl logs -f -l app=unicorn -c unicorn --max-log-requests=50
+ ```
+
+- One can stream logs from all containers at once, similar to the Omnibus
+ command `gitlab-ctl tail`:
+
+ ```bash
+ kubectl logs -f -l release=gitlab --all-containers=true --max-log-requests=100
+ ```
+
+- Check all events in the `gitlab` namespace (the namespace name can be different if you
+ specified a different one when deploying the helm chart):
+
+ ```bash
+ kubectl get events -w --namespace=gitlab
+ ```
+
+- Most of the useful GitLab tools (console, rake tasks, etc.) are found in the task-runner
+  pod. You may enter it and run commands inside, or run them from the outside:
+
+ ```bash
+ # find the pod
+ kubectl get pods | grep task-runner
+
+ # enter it
+ kubectl exec -it <task-runner-pod-name> bash
+
+ # open rails console
+ # rails console can be also called from other GitLab pods
+ /srv/gitlab/bin/rails console
+
+ # source-style commands should also work
+  cd /srv/gitlab && bundle exec rake gitlab:check RAILS_ENV=production
+
+ # run GitLab check. Note that the output can be confusing and invalid because of the specific structure of GitLab installed via helm chart
+ /usr/local/bin/gitlab-rake gitlab:check
+
+ # open console without entering pod
+ kubectl exec -it <task-runner-pod-name> /srv/gitlab/bin/rails console
+
+ # check the status of DB migrations
+ kubectl exec -it <task-runner-pod-name> /usr/local/bin/gitlab-rake db:migrate:status
+ ```
+
+ You can also use `gitlab-rake`, instead of `/usr/local/bin/gitlab-rake`.
+
+- Troubleshooting **Operations > Kubernetes** integration:
+
+ - Check the output of `kubectl get events -w --all-namespaces`.
+ - Check the logs of pods within `gitlab-managed-apps` namespace.
+  - On the GitLab side, check the Sidekiq log and the Kubernetes log. When GitLab is
+    installed via the Helm chart, `kubernetes.log` can be found inside the Sidekiq pod.
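+
+  For example, to follow the Sidekiq logs (assuming the pods carry an `app=sidekiq`
+  label, in the same way the unicorn pods above carry `app=unicorn`):
+
+  ```bash
+  kubectl logs -f -l app=sidekiq --all-containers=true --max-log-requests=50
+  ```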
+
+- How to get your initial admin password <https://docs.gitlab.com/charts/installation/deployment.html#initial-login>:
+
+ ```bash
+ # find the name of the secret containing the password
+ kubectl get secrets | grep initial-root
+ # decode it
+ kubectl get secret <secret-name> -ojsonpath={.data.password} | base64 --decode ; echo
+ ```
+
+- How to connect to a GitLab Postgres database:
+
+ ```bash
+ kubectl exec -it <task-runner-pod-name> -- /srv/gitlab/bin/rails dbconsole -p
+ ```
+
+- How to get info about Helm installation status:
+
+ ```bash
+ helm status name-of-installation
+ ```
+
+- How to update GitLab installed using Helm Chart:
+
+ ```bash
+  helm repo update
+
+ # get current values and redirect them to yaml file (analogue of gitlab.rb values)
+ helm get values <release name> > gitlab.yaml
+
+ # run upgrade itself
+ helm upgrade <release name> <chart path> -f gitlab.yaml
+ ```
+
+ After <https://canary.gitlab.com/charts/gitlab/issues/780> is fixed, it should
+ be possible to use [Updating GitLab using the Helm Chart](https://docs.gitlab.com/ee/install/kubernetes/gitlab_chart.html#updating-gitlab-using-the-helm-chart)
+ for upgrades.
+
+- How to apply changes to GitLab config:
+
+ - Modify the `gitlab.yaml` file.
+ - Run the following command to apply changes:
+
+ ```bash
+ helm upgrade <release name> <chart path> -f gitlab.yaml
+ ```
+
+## Installation of minimal GitLab config via Minikube on macOS
+
+This section is based on [Developing for Kubernetes with Minikube](https://gitlab.com/charts/gitlab/blob/master/doc/minikube/index.md)
+and [Helm](https://gitlab.com/charts/gitlab/blob/master/doc/helm/index.md). Refer
+to those documents for details.
+
+- Install Kubectl via Homebrew:
+
+ ```bash
+ brew install kubernetes-cli
+ ```
+
+- Install Minikube via Homebrew:
+
+ ```bash
+ brew cask install minikube
+ ```
+
+- Start Minikube and configure it. If Minikube cannot start, try running `minikube delete && minikube start`
+ and repeat the steps:
+
+ ```bash
+ minikube start --cpus 3 --memory 8192 # minimum amount for GitLab to work
+ minikube addons enable ingress
+ minikube addons enable kube-dns
+ ```
+
+- Install Helm via Homebrew and initialize it:
+
+ ```bash
+ brew install kubernetes-helm
+ helm init --service-account tiller
+ ```
+
+- Copy the file <https://gitlab.com/charts/gitlab/raw/master/examples/values-minikube-minimum.yaml>
+ to your workstation.
+
+- Find the IP address in the output of `minikube ip` and update the yaml file with
+ this IP address.
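+
+  For example (assuming you download the file into your current directory):
+
+  ```bash
+  # fetch the example values file and print the IP to substitute into it
+  wget https://gitlab.com/charts/gitlab/raw/master/examples/values-minikube-minimum.yaml
+  minikube ip
+  ```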
+
+- Install the GitLab Helm Chart:
+
+ ```bash
+ helm repo add gitlab https://charts.gitlab.io
+ helm install --name gitlab -f <path-to-yaml-file> gitlab/gitlab
+ ```
+
+ If you want to modify some GitLab settings, you can use the above-mentioned config
+ as a base and create your own yaml file.
+
+- Monitor the installation progress via `helm status gitlab` and `minikube dashboard`.
+ The installation could take up to 20-30 minutes depending on the amount of resources
+ on your workstation.
+
+- When all the pods show either a `Running` or `Completed` status, get the GitLab password as
+ described in [Initial login](https://docs.gitlab.com/ee/install/kubernetes/gitlab_chart.html#initial-login),
+ and log in to GitLab via the UI. It will be accessible via `https://gitlab.domain`
+ where `domain` is the value provided in the yaml file.
+
+<!-- ## Troubleshooting
+
+Include any troubleshooting steps that you can foresee. If you know beforehand what issues
+one might have when setting this up, or when something is changed, or on upgrading, it's
+important to describe those, too. Think of things that may go wrong and include them here.
+This is important to minimize requests for support, and to avoid doc comments with
+questions that you know someone might ask.
+
+Each scenario can be a third-level heading, e.g. `### Getting error message X`.
+If you have none to add when creating a doc, leave this section in place
+but commented out to help encourage others to add to it in the future. -->
diff --git a/doc/administration/troubleshooting/sidekiq.md b/doc/administration/troubleshooting/sidekiq.md
index 7067958ecb4..9b016c64e29 100644
--- a/doc/administration/troubleshooting/sidekiq.md
+++ b/doc/administration/troubleshooting/sidekiq.md
@@ -169,3 +169,121 @@ The PostgreSQL wiki has details on the query you can run to see blocking
queries. The query is different based on PostgreSQL version. See
[Lock Monitoring](https://wiki.postgresql.org/wiki/Lock_Monitoring) for
the query details.
+
+## Managing Sidekiq queues
+
+It is possible to use the [Sidekiq API](https://github.com/mperham/sidekiq/wiki/API)
+to perform a number of troubleshooting steps on Sidekiq.
+
+These are administrative commands and they should only be used if the current
+admin interface is not suitable due to the scale of the installation.
+
+All these commands should be run using `gitlab-rails console`.
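+
+For example, on an Omnibus GitLab installation the console is opened with:
+
+```bash
+sudo gitlab-rails console
+```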
+
+### View the queue size
+
+```ruby
+Sidekiq::Queue.new("pipeline_processing:build_queue").size
+```
+
+### Enumerate all enqueued jobs
+
+```ruby
+queue = Sidekiq::Queue.new("chaos:chaos_sleep")
+queue.each do |job|
+ # job.klass # => 'MyWorker'
+ # job.args # => [1, 2, 3]
+ # job.jid # => jid
+ # job.queue # => chaos:chaos_sleep
+ # job["retry"] # => 3
+ # job.item # => {
+ # "class"=>"Chaos::SleepWorker",
+ # "args"=>[1000],
+ # "retry"=>3,
+ # "queue"=>"chaos:chaos_sleep",
+ # "backtrace"=>true,
+ # "queue_namespace"=>"chaos",
+ # "jid"=>"39bc482b823cceaf07213523",
+ # "created_at"=>1566317076.266069,
+ # "correlation_id"=>"c323b832-a857-4858-b695-672de6f0e1af",
+ # "enqueued_at"=>1566317076.26761},
+ # }
+
+ # job.delete if job.jid == 'abcdef1234567890'
+end
+```
+
+### Enumerate currently running jobs
+
+```ruby
+workers = Sidekiq::Workers.new
+workers.each do |process_id, thread_id, work|
+ # process_id is a unique identifier per Sidekiq process
+ # thread_id is a unique identifier per thread
+ # work is a Hash which looks like:
+ # {"queue"=>"chaos:chaos_sleep",
+ # "payload"=>
+ # { "class"=>"Chaos::SleepWorker",
+ # "args"=>[1000],
+ # "retry"=>3,
+ # "queue"=>"chaos:chaos_sleep",
+ # "backtrace"=>true,
+ # "queue_namespace"=>"chaos",
+ # "jid"=>"b2a31e3eac7b1a99ff235869",
+ # "created_at"=>1566316974.9215662,
+ # "correlation_id"=>"e484fb26-7576-45f9-bf21-b99389e1c53c",
+ # "enqueued_at"=>1566316974.9229589},
+ # "run_at"=>1566316974}],
+end
+```
+
+### Remove sidekiq jobs for given parameters (destructive)
+
+```ruby
+# for jobs like this:
+# RepositoryImportWorker.new.perform_async(100)
+id_list = [100]
+
+queue = Sidekiq::Queue.new('repository_import')
+queue.each do |job|
+ job.delete if id_list.include?(job.args[0])
+end
+```
+
+### Remove specific job ID (destructive)
+
+```ruby
+queue = Sidekiq::Queue.new('repository_import')
+queue.each do |job|
+ job.delete if job.jid == 'my-job-id'
+end
+```
+
+## Canceling running jobs (destructive)
+
+> Introduced in GitLab 12.3.
+
+This is a highly risky operation and should only be used as a last resort.
+Doing so might result in data corruption, as the job
+is interrupted mid-execution and it is not guaranteed
+that proper rollback of transactions is implemented.
+
+```ruby
+Gitlab::SidekiqMonitor.cancel_job('job-id')
+```
+
+> This requires Sidekiq to be run with the `SIDEKIQ_MONITOR_WORKER=1`
+> environment variable.
+
+To perform the interrupt, we use `Thread.raise`, which
+has a number of drawbacks, as mentioned in [Why Ruby’s Timeout is dangerous (and Thread.raise is terrifying)](https://jvns.ca/blog/2015/11/27/why-rubys-timeout-is-dangerous-and-thread-dot-raise-is-terrifying/):
+
+> This is where the implications get interesting, and terrifying. This means that an exception can get raised:
+>
+> * during a network request (ok, as long as the surrounding code is prepared to catch Timeout::Error)
+> * during the cleanup for the network request
+> * during a rescue block
+> * while creating an object to save to the database afterwards
+> * in any of your code, regardless of whether it could have possibly raised an exception before
+>
+> Nobody writes code to defend against an exception being raised on literally any line. That’s not even possible. So Thread.raise is basically like a sneak attack on your code that could result in almost anything. It would probably be okay if it were pure-functional code that did not modify any state. But this is Ruby, so that’s unlikely :)
diff --git a/doc/administration/uploads.md b/doc/administration/uploads.md
index c6529812ec3..a4bed72b965 100644
--- a/doc/administration/uploads.md
+++ b/doc/administration/uploads.md
@@ -1,21 +1,18 @@
# Uploads administration
->**Notes:**
Uploads represent all user data that may be sent to GitLab as a single file. As an example, avatars and notes' attachments are uploads. Uploads are integral to GitLab functionality, and therefore cannot be disabled.
## Using local storage
->**Notes:**
+NOTE: **Note:**
This is the default configuration
To change the location where the uploads are stored locally, follow the steps
below.
----
-
**In Omnibus installations:**
->**Notes:**
+NOTE: **Note:**
For historical reasons, uploads are stored into a base directory, which by default is `uploads/-/system`. It is strongly discouraged to change this configuration option on an existing GitLab installation.
_The uploads are stored by default in `/var/opt/gitlab/gitlab-rails/uploads`._
@@ -30,8 +27,6 @@ _The uploads are stored by default in `/var/opt/gitlab/gitlab-rails/uploads`._
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
----
-
**In installations from source:**
_The uploads are stored by default in
@@ -42,8 +37,8 @@ _The uploads are stored by default in
```yaml
uploads:
- storage_path: /mnt/storage
- base_dir: uploads
+ storage_path: /mnt/storage
+ base_dir: uploads
```
1. Save the file and [restart GitLab][] for the changes to take effect.
@@ -62,7 +57,7 @@ This configuration relies on valid AWS credentials to be configured already.
## Object Storage Settings
-For source installations the following settings are nested under `uploads:` and then `object_store:`. On omnibus installs they are prefixed by `uploads_object_store_`.
+For source installations the following settings are nested under `uploads:` and then `object_store:`. On Omnibus GitLab installs they are prefixed by `uploads_object_store_`.
| Setting | Description | Default |
|---------|-------------|---------|
@@ -83,6 +78,7 @@ The connection settings match those provided by [Fog](https://github.com/fog), a
| `aws_access_key_id` | AWS credentials, or compatible | |
| `aws_secret_access_key` | AWS credentials, or compatible | |
| `aws_signature_version` | AWS signature version to use. 2 or 4 are valid options. Digital Ocean Spaces and other providers may need 2. | 4 |
+| `enable_signature_v4_streaming` | Set to true to enable HTTP chunked transfers with [AWS v4 signatures](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html). Oracle Cloud S3 needs this to be false | true |
| `region` | AWS region | us-east-1 |
| `host` | S3 compatible host for when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
| `endpoint` | Can be used when configuring an S3 compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
@@ -108,8 +104,8 @@ _The uploads are stored by default in
}
```
- >**Note:**
- >If you are using AWS IAM profiles, be sure to omit the AWS access key and secret access key/value pairs.
+ NOTE: **Note:**
+ If you are using AWS IAM profiles, be sure to omit the AWS access key and secret access key/value pairs.
```ruby
gitlab_rails['uploads_object_store_connection'] = {
@@ -122,8 +118,6 @@ _The uploads are stored by default in
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
1. Migrate any existing local uploads to the object storage using [`gitlab:uploads:migrate` rake task](raketasks/uploads/migrate.md).
----
-
**In installations from source:**
_The uploads are stored by default in
@@ -147,6 +141,91 @@ _The uploads are stored by default in
1. Save the file and [restart GitLab][] for the changes to take effect.
1. Migrate any existing local uploads to the object storage using [`gitlab:uploads:migrate` rake task](raketasks/uploads/migrate.md).
+### Oracle Cloud S3 connection settings
+
+Note that Oracle Cloud S3 requires the following settings:
+
+| Setting | Value |
+|---------|-------|
+| `enable_signature_v4_streaming` | false |
+| `path_style` | true |
+
+If `enable_signature_v4_streaming` is set to `true`, you may see the
+following error:
+
+```
+STREAMING-AWS4-HMAC-SHA256-PAYLOAD is not supported
+```
+
+### OpenStack compatible connection settings
+
+The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:
+
+| Setting | Description | Default |
+|---------|-------------|---------|
+| `provider` | Always `OpenStack` for compatible hosts | OpenStack |
+| `openstack_username` | OpenStack username | |
+| `openstack_api_key` | OpenStack api key | |
+| `openstack_temp_url_key` | OpenStack key for generating temporary URLs | |
+| `openstack_auth_url` | OpenStack authentication endpoint | |
+| `openstack_region` | OpenStack region | |
+| `openstack_tenant` | OpenStack tenant ID | |
+
+**In Omnibus installations:**
+
+_The uploads are stored by default in
+`/var/opt/gitlab/gitlab-rails/public/uploads/-/system`._
+
+1. Edit `/etc/gitlab/gitlab.rb` and add the following lines by replacing with
+ the values you want:
+
+ ```ruby
+ gitlab_rails['uploads_object_store_remote_directory'] = "OPENSTACK_OBJECT_CONTAINER_NAME"
+ gitlab_rails['uploads_object_store_connection'] = {
+ 'provider' => 'OpenStack',
+ 'openstack_username' => 'OPENSTACK_USERNAME',
+ 'openstack_api_key' => 'OPENSTACK_PASSWORD',
+ 'openstack_temp_url_key' => 'OPENSTACK_TEMP_URL_KEY',
+ 'openstack_auth_url' => 'https://auth.cloud.ovh.net/v2.0/',
+ 'openstack_region' => 'DE1',
+ 'openstack_tenant' => 'TENANT_ID',
+ }
+ ```
+
+1. Save the file and [reconfigure GitLab][] for the changes to take effect.
+1. Migrate any existing local uploads to the object storage using [`gitlab:uploads:migrate` rake task](raketasks/uploads/migrate.md).
+
+---
+
+**In installations from source:**
+
+_The uploads are stored by default in
+`/home/git/gitlab/public/uploads/-/system`._
+
+1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following
+ lines:
+
+ ```yaml
+ uploads:
+ object_store:
+ enabled: true
+ direct_upload: false
+ background_upload: true
+ proxy_download: false
+ remote_directory: OPENSTACK_OBJECT_CONTAINER_NAME
+ connection:
+ provider: OpenStack
+ openstack_username: OPENSTACK_USERNAME
+ openstack_api_key: OPENSTACK_PASSWORD
+ openstack_temp_url_key: OPENSTACK_TEMP_URL_KEY
+ openstack_auth_url: 'https://auth.cloud.ovh.net/v2.0/'
+ openstack_region: DE1
+ openstack_tenant: 'TENANT_ID'
+ ```
+
+1. Save the file and [restart GitLab][] for the changes to take effect.
+1. Migrate any existing local uploads to the object storage using [`gitlab:uploads:migrate` rake task](raketasks/uploads/migrate.md).
+
[reconfigure gitlab]: restart_gitlab.md#omnibus-gitlab-reconfigure "How to reconfigure Omnibus GitLab"
[restart gitlab]: restart_gitlab.md#installations-from-source "How to restart GitLab"
[eep]: https://about.gitlab.com/gitlab-ee/ "GitLab Premium"