Diffstat (limited to 'doc/administration/operations')
-rw-r--r--  doc/administration/operations/extra_sidekiq_processes.md | 157
-rw-r--r--  doc/administration/operations/fast_ssh_key_lookup.md     |  10
-rw-r--r--  doc/administration/operations/index.md                   |   1
-rw-r--r--  doc/administration/operations/puma.md                    |  10
-rw-r--r--  doc/administration/operations/sidekiq_memory_killer.md   |   8
-rw-r--r--  doc/administration/operations/ssh_certificates.md        |   6
-rw-r--r--  doc/administration/operations/unicorn.md                 |   4
7 files changed, 114 insertions, 82 deletions
diff --git a/doc/administration/operations/extra_sidekiq_processes.md b/doc/administration/operations/extra_sidekiq_processes.md
index 1c92a429982..8f54b82c325 100644
--- a/doc/administration/operations/extra_sidekiq_processes.md
+++ b/doc/administration/operations/extra_sidekiq_processes.md
@@ -1,13 +1,13 @@
-# Running multiple Sidekiq processes **(CORE ONLY)**
-
-NOTE: **Note:**
-The information in this page applies only to Omnibus GitLab.
+# Run multiple Sidekiq processes **(CORE ONLY)**
 
 GitLab allows you to start multiple Sidekiq processes.
 These processes can be used to consume a dedicated set
 of queues. This can be used to ensure certain queues always have
 dedicated workers, no matter the number of jobs that need to be processed.
 
+NOTE: **Note:**
+The information in this page applies only to Omnibus GitLab.
+
 ## Available Sidekiq queues
 
 For a list of the existing Sidekiq queues, check the following files:
@@ -18,28 +18,27 @@ For a list of the existing Sidekiq queues, check the following files:
 Each entry in the above files represents a queue on which Sidekiq
 processes can be started.
 
-## Starting multiple processes
+## Start multiple processes
 
-To start multiple Sidekiq processes, you must enable `sidekiq-cluster`:
+> - [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4006) in GitLab 12.10, starting multiple processes with Sidekiq cluster.
+> - [Sidekiq cluster moved](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/181) to GitLab [Core](https://about.gitlab.com/pricing/#self-managed) in GitLab 12.10.
+> - [Sidekiq cluster became default](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4140) in GitLab 13.0.
 
-1. Edit `/etc/gitlab/gitlab.rb` and add:
+To start multiple processes:
 
-   ```ruby
-   sidekiq_cluster['enable'] = true
-   ```
-
-1. You will then need to specify how many additional processes to create via `sidekiq-cluster`
-   and which queue they should handle via the `sidekiq_cluster['queue_groups']`
-   array setting. Each item in the array equates to one additional Sidekiq
+1. Using the `sidekiq['queue_groups']` array setting, specify how many processes to
+   create using `sidekiq-cluster` and which queue they should handle.
+   Each item in the array equates to one additional Sidekiq
    process, and values in each item determine the queues it works on.
 
-   For example, the following setting adds additional Sidekiq processes to two
-   queues, one to `elastic_indexer` and one to `mailers`:
+   For example, the following setting creates three Sidekiq processes, one to run on
+   `elastic_indexer`, one to run on `mailers`, and one process running on all queues:
 
    ```ruby
-   sidekiq_cluster['queue_groups'] = [
+   sidekiq['queue_groups'] = [
      "elastic_indexer",
-     "mailers"
+     "mailers",
+     "*"
    ]
    ```
 
@@ -47,9 +46,10 @@ To start multiple Sidekiq processes, you must enable `sidekiq-cluster`:
    queue names to its item delimited by commas. For example:
 
    ```ruby
-   sidekiq_cluster['queue_groups'] = [
+   sidekiq['queue_groups'] = [
      "elastic_indexer, elastic_commit_indexer",
-     "mailers"
+     "mailers",
+     "*"
    ]
    ```
 
@@ -58,7 +58,7 @@ To start multiple Sidekiq processes, you must enable `sidekiq-cluster`:
   processes, each handling all queues:
 
    ```ruby
-   sidekiq_cluster['queue_groups'] = [
+   sidekiq['queue_groups'] = [
      "*",
      "*"
    ]
@@ -67,27 +67,35 @@ To start multiple Sidekiq processes, you must enable `sidekiq-cluster`:
   `*` cannot be combined with concrete queue names - `*, mailers`
   will just handle the `mailers` queue.
+
+   When `sidekiq-cluster` is only running on a single node, make sure that at least
+   one process is running on all queues using `*`. This means a process will
+   automatically pick up jobs in queues created in the future.
+
+   If `sidekiq-cluster` is running on more than one node, you can also use
+   [`--negate`](#negate-settings) and list all the queues that are already being
+   processed.
+
 1. Save the file and reconfigure GitLab for the changes to take effect:
 
    ```shell
    sudo gitlab-ctl reconfigure
    ```
 
-Once the extra Sidekiq processes are added, you can visit the
-**Admin Area > Monitoring > Background Jobs** (`/admin/background_jobs`) in GitLab.
+After the extra Sidekiq processes are added, navigate to
+**{admin}** **Admin Area > Monitoring > Background Jobs** (`/admin/background_jobs`) in GitLab.
 
 ![Multiple Sidekiq processes](img/sidekiq-cluster.png)
 
-## Negating settings
+## Negate settings
 
 To have the additional Sidekiq processes work on every queue **except** the
 ones you list:
 
-1. After you follow the steps for [starting extra processes](#starting-multiple-processes),
+1. After you follow the steps for [starting extra processes](#start-multiple-processes),
    edit `/etc/gitlab/gitlab.rb` and add:
 
    ```ruby
-   sidekiq_cluster['negate'] = true
+   sidekiq['negate'] = true
    ```
 
 1. Save the file and reconfigure GitLab for the changes to take effect:
 
@@ -177,9 +185,9 @@ entire queue group selects all queues.
 
 In `/etc/gitlab/gitlab.rb`:
 
 ```ruby
-sidekiq_cluster['enable'] = true
-sidekiq_cluster['experimental_queue_selector'] = true
-sidekiq_cluster['queue_groups'] = [
+sidekiq['enable'] = true
+sidekiq['experimental_queue_selector'] = true
+sidekiq['queue_groups'] = [
   # Run all non-CPU-bound queues that are high urgency
   'resource_boundary!=cpu&urgency=high',
   # Run all continuous integration and pages queues that are not high urgency
@@ -189,35 +197,31 @@ sidekiq_cluster['queue_groups'] = [
 ]
 ```
 
-### Using Sidekiq cluster by default (experimental)
-
-> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/4006) in GitLab 12.10.
+### Disable Sidekiq cluster
 
 CAUTION: **Warning:**
-This feature is experimental.
+Sidekiq cluster is [scheduled](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/240)
+to be the only way to start Sidekiq in GitLab 14.0.
 
-We're moving [Sidekiq cluster to
-core](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/181) and
-plan to make it the default way of starting Sidekiq.
-
-Set the following to start Sidekiq (cluster)
-process for handling for all queues (`/etc/gitlab/gitlab.rb`):
+By default, the Sidekiq service will run `sidekiq-cluster`. To disable this behavior,
+add the following to the Sidekiq configuration:
 
 ```ruby
 sidekiq['enable'] = true
-sidekiq['cluster'] = true
+sidekiq['cluster'] = false
 ```
 
-All of the aforementioned configuration options for `sidekiq_cluster`
-are also available. By default, they will be configured as follows:
+All of the aforementioned configuration options for `sidekiq`
+are available. By default, they will be configured as follows:
 
 ```ruby
 sidekiq['experimental_queue_selector'] = false
 sidekiq['interval'] = nil
-sidekiq['max_concurrency'] = nil
+sidekiq['max_concurrency'] = 50
 sidekiq['min_concurrency'] = nil
 sidekiq['negate'] = false
 sidekiq['queue_groups'] = ['*']
+sidekiq['shutdown_timeout'] = 25
 ```
 
 `sidekiq_cluster` must be disabled if you decide to configure the
@@ -231,7 +235,7 @@ setting `sidekiq['cluster'] = true`.
 When using this feature, the service called `sidekiq` will now be running
 `sidekiq-cluster`.
 
-The [concurrency](#managing-concurrency) and other options configured
+The [concurrency](#manage-concurrency) and other options configured
 for Sidekiq will be respected.
 
 By default, logs for `sidekiq-cluster` go to `/var/log/gitlab/sidekiq`
@@ -246,9 +250,9 @@ use all of its resources to perform those operations. To set up a separate
 1. Edit `/etc/gitlab/gitlab.rb` and add:
 
    ```ruby
-   sidekiq_cluster['enable'] = true
-   sidekiq_cluster['negate'] = true
-   sidekiq_cluster['queue_groups'] = [
+   sidekiq['enable'] = true
+   sidekiq['negate'] = true
+   sidekiq['queue_groups'] = [
      "github_import_advance_stage",
      "github_importer:github_import_import_diff_note",
      "github_importer:github_import_import_issue",
@@ -274,12 +278,12 @@ use all of its resources to perform those operations. To set up a separate
 
 ## Number of threads
 
-Each process defined under `sidekiq_cluster` starts with a
+Each process defined under `sidekiq` starts with a
 number of threads that equals the number of queues, plus one spare thread.
 For example, a process that handles the `process_commit` and `post_receive`
 queues will use three threads in total.
 
-## Managing concurrency
+## Manage concurrency
 
 When setting the maximum concurrency, keep in mind this normally should not
 exceed the number of CPU cores available. The values in the examples
@@ -290,29 +294,15 @@ latency and potentially cause client timeouts. See the [Sidekiq documentation
 about Redis](https://github.com/mperham/sidekiq/wiki/Using-Redis) for more
 details.
 
-### When running a single Sidekiq process (default)
-
-1. Edit `/etc/gitlab/gitlab.rb` and add:
-
-   ```ruby
-   sidekiq['concurrency'] = 25
-   ```
+### When running Sidekiq cluster (default)
 
-1. Save the file and reconfigure GitLab for the changes to take effect:
-
-   ```shell
-   sudo gitlab-ctl reconfigure
-   ```
-
-This will set the concurrency (number of threads) for the Sidekiq process.
-
-### When running Sidekiq cluster
+Running Sidekiq cluster is the default in GitLab 13.0 and later.
 
 1. Edit `/etc/gitlab/gitlab.rb` and add:
 
    ```ruby
-   sidekiq_cluster['min_concurrency'] = 15
-   sidekiq_cluster['max_concurrency'] = 25
+   sidekiq['min_concurrency'] = 15
+   sidekiq['max_concurrency'] = 25
    ```
 
 1. Save the file and reconfigure GitLab for the changes to take effect:
 
@@ -337,21 +327,44 @@ regardless of the number of queues.
 
 When `min_concurrency` is greater than `max_concurrency`, it is treated as
 being equal to `max_concurrency`.
 
-## Modifying the check interval
+### When running a single Sidekiq process
+
+Running a single Sidekiq process is the default in GitLab 12.10 and earlier.
+
+CAUTION: **Warning:**
+Running Sidekiq directly is scheduled to be removed in GitLab
+[14.0](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/240).
+
+1. Edit `/etc/gitlab/gitlab.rb` and add:
+
+   ```ruby
+   sidekiq['cluster'] = false
+   sidekiq['concurrency'] = 25
+   ```
+
+1. Save the file and reconfigure GitLab for the changes to take effect:
+
+   ```shell
+   sudo gitlab-ctl reconfigure
+   ```
+
+This will set the concurrency (number of threads) for the Sidekiq process.
+
+## Modify the check interval
 
 To modify the check interval for the additional Sidekiq processes:
 
 1. Edit `/etc/gitlab/gitlab.rb` and add:
 
    ```ruby
-   sidekiq_cluster['interval'] = 5
+   sidekiq['interval'] = 5
   ```
 
 1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure)
   for the changes to take effect.
 This tells the additional processes how often to check for enqueued jobs.
 
-## Troubleshooting using the CLI
+## Troubleshoot using the CLI
 
 CAUTION: **Warning:**
 It's recommended to use `/etc/gitlab/gitlab.rb` to configure the Sidekiq processes.
@@ -399,7 +412,7 @@ you'd use the following:
 /opt/gitlab/embedded/service/gitlab-rails/bin/sidekiq-cluster process_commit,post_receive gitlab_shell
 ```
 
-### Monitoring the `sidekiq-cluster` command
+### Monitor the `sidekiq-cluster` command
 
 The `sidekiq-cluster` command will not terminate once it has started the desired
 amount of Sidekiq processes. Instead, the process will continue running and
@@ -412,7 +425,7 @@ processes will terminate themselves after a few seconds. This ensures you don't
 end up with zombie Sidekiq processes.
 
 All of this makes monitoring the processes fairly easy. Simply hook up
-`sidekiq-cluster` to your supervisor of choice (e.g. runit) and you're good to
+`sidekiq-cluster` to your supervisor of choice (for example, runit) and you're good to
 go.
 
 If a child process died the `sidekiq-cluster` command will signal all remaining
diff --git a/doc/administration/operations/fast_ssh_key_lookup.md b/doc/administration/operations/fast_ssh_key_lookup.md
index 2d1e1c5bda8..6759c3f265d 100644
--- a/doc/administration/operations/fast_ssh_key_lookup.md
+++ b/doc/administration/operations/fast_ssh_key_lookup.md
@@ -68,11 +68,17 @@ sudo service sshd reload
 ```
 
 Confirm that SSH is working by removing your user's SSH key in the UI, adding a
-new one, and attempting to pull a repo.
+new one, and attempting to pull a repository.
 
 NOTE: **Note:** For Omnibus Docker, `AuthorizedKeysCommand` is setup by default in
 GitLab 11.11 and later.
 
+NOTE: **Note:** For installations from source, the command would be located at
+`/home/git/gitlab-shell/bin/gitlab-shell-authorized-keys-check` if [the install from source](../../install/installation.md#install-gitlab-shell) instructions were followed.
+You might want to consider creating a wrapper script somewhere else since this command needs to be
+owned by `root` and not be writable by group or others. You could also consider changing the ownership of this command
+as required, but that might require temporary ownership changes during `gitlab-shell` upgrades.
+
 CAUTION: **Caution:** Do not disable writes until SSH is confirmed to be working
 perfectly, because the file will quickly become out-of-date.
 
@@ -87,7 +93,7 @@ installation.
 ![Write to authorized keys setting](img/write_to_authorized_keys_setting.png)
 
 Again, confirm that SSH is working by removing your user's SSH key in the UI,
-adding a new one, and attempting to pull a repo.
+adding a new one, and attempting to pull a repository.
 
 Then you can backup and delete your `authorized_keys` file for best performance.
diff --git a/doc/administration/operations/index.md b/doc/administration/operations/index.md
index c27832e67ef..45b8e5ad448 100644
--- a/doc/administration/operations/index.md
+++ b/doc/administration/operations/index.md
@@ -12,6 +12,7 @@ Keep your GitLab instance up and running smoothly.
 - [Sidekiq MemoryKiller](sidekiq_memory_killer.md): Configure Sidekiq MemoryKiller
   to restart Sidekiq.
 - [Multiple Sidekiq processes](extra_sidekiq_processes.md): Configure multiple Sidekiq processes to ensure
   certain queues always have dedicated workers, no matter the number of jobs that need to be processed. **(CORE ONLY)**
+- [Puma](puma.md): Understand Puma and puma-worker-killer.
 - [Unicorn](unicorn.md): Understand Unicorn and unicorn-worker-killer.
 - Speed up SSH operations by [Authorizing SSH users via a fast, indexed lookup to the GitLab database](fast_ssh_key_lookup.md), and/or
diff --git a/doc/administration/operations/puma.md b/doc/administration/operations/puma.md
index 6f252a7d76e..af559cf00e9 100644
--- a/doc/administration/operations/puma.md
+++ b/doc/administration/operations/puma.md
@@ -3,7 +3,9 @@
 ## Puma
 
 As of GitLab 12.9, [Puma](https://github.com/puma/puma) has replaced [Unicorn](https://yhbt.net/unicorn/).
-as the default web server.
+as the default web server. Starting with 13.0, both all-in-one package based
+installations as well as Helm chart based installations will run Puma instead of
+Unicorn unless explicitly specified not to.
 
 ## Why switch to Puma?
 
@@ -14,6 +16,12 @@ Most Rails applications requests normally include a proportion of I/O wait time.
 During I/O wait time MRI Ruby will release the GVL (Global VM Lock) to other threads.
 Multi-threaded Puma can therefore still serve more requests than a single process.
 
+## Configuring Puma to replace Unicorn
+
+If you are currently running Unicorn and would like to switch to Puma, server configuration
+will _not_ carry over automatically. For details on matching Unicorn configuration settings with
+the Puma equivalent, where applicable, see [Converting Unicorn settings to Puma](https://docs.gitlab.com/omnibus/settings/puma.html#converting-unicorn-settings-to-puma).
+
 ## Performance caveat when using Puma with Rugged
 
 For deployments where NFS is used to store Git repository, we allow GitLab to use
diff --git a/doc/administration/operations/sidekiq_memory_killer.md b/doc/administration/operations/sidekiq_memory_killer.md
index 6438dbb9dab..fdccfacc8a9 100644
--- a/doc/administration/operations/sidekiq_memory_killer.md
+++ b/doc/administration/operations/sidekiq_memory_killer.md
@@ -2,13 +2,13 @@
 The GitLab Rails application code suffers from memory leaks. For web requests
 this problem is made manageable using
-[`unicorn-worker-killer`](https://github.com/kzk/unicorn-worker-killer) which
-restarts Unicorn worker processes in between requests when needed. The Sidekiq
+[`puma-worker-killer`](https://github.com/schneems/puma_worker_killer) which
+restarts Puma worker processes if they exceed a memory limit. The Sidekiq
 MemoryKiller applies the same approach to the Sidekiq processes used by GitLab
 to process background jobs.
 
-Unlike unicorn-worker-killer, which is enabled by default for all GitLab
-installations since GitLab 6.4, the Sidekiq MemoryKiller is enabled by default
+Unlike puma-worker-killer, which is enabled by default for all GitLab
+installations since GitLab 13.0, the Sidekiq MemoryKiller is enabled by default
 _only_ for Omnibus packages. The reason for this is that the MemoryKiller
 relies on runit to restart Sidekiq after a memory-induced shutdown and GitLab
 installations from source do not all use runit or an equivalent.
diff --git a/doc/administration/operations/ssh_certificates.md b/doc/administration/operations/ssh_certificates.md
index eaf0e4ab284..b652f282b7b 100644
--- a/doc/administration/operations/ssh_certificates.md
+++ b/doc/administration/operations/ssh_certificates.md
@@ -50,7 +50,7 @@ the GitLab server itself, but your setup may vary. If the CA is only used for
 GitLab consider putting this in the `Match User git` section (described below).
 
-The SSH certificates being issued by that CA **MUST** have a "key id"
+The SSH certificates being issued by that CA **MUST** have a "key ID"
 corresponding to that user's username on GitLab, e.g.
 (some output omitted for brevity):
@@ -77,7 +77,7 @@ own `AuthorizedPrincipalsCommand` to do that mapping instead of using our
 provided default.
 
 The important part is that the `AuthorizedPrincipalsCommand` must be
-able to map from the "key id" to a GitLab username in some way, the
+able to map from the "key ID" to a GitLab username in some way, the
 default command we ship assumes there's a 1=1 mapping between the two,
 since the whole point of this is to allow us to extract a GitLab
 username from the key itself, instead of relying on something like the
@@ -122,7 +122,7 @@ into multiple lines of `authorized_keys` output, as described in the
 Normally when using the `AuthorizedKeysCommand` with OpenSSH the
 principal is some "group" that's allowed to log into that
 server. However with GitLab it's only used to appease OpenSSH's
-requirement for it, we effectively only care about the "key id" being
+requirement for it, we effectively only care about the "key ID" being
 correct. Once that's extracted GitLab will enforce its own ACLs for
 that user (e.g. what projects the user can access).
 
diff --git a/doc/administration/operations/unicorn.md b/doc/administration/operations/unicorn.md
index bb817e71f5a..50481482f4c 100644
--- a/doc/administration/operations/unicorn.md
+++ b/doc/administration/operations/unicorn.md
@@ -1,5 +1,9 @@
 # Understanding Unicorn and unicorn-worker-killer
 
+NOTE: **Note:**
+Starting with GitLab 13.0, Puma is the default web server used in GitLab
+all-in-one package based installations as well as GitLab Helm chart deployments.
+
 ## Unicorn
 
 GitLab uses [Unicorn](https://yhbt.net/unicorn/), a pre-forking Ruby web
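
Editor's note: the hunks above update several related `gitlab.rb` settings one at a time (queue groups, negation, concurrency, and the check interval). As a minimal sketch of how the renamed `sidekiq['…']` keys could fit together on a GitLab 13.0 Omnibus node, the snippet below combines them in one place. It is not part of the patch; the queue names and numeric values are only the illustrative examples used in the documentation being changed, and should be adapted to your own workload.

```ruby
# /etc/gitlab/gitlab.rb — illustrative sketch only, not part of the patch above.
# In GitLab 13.0 and later the `sidekiq` service runs `sidekiq-cluster` by default,
# so the former `sidekiq_cluster['…']` keys are now set under `sidekiq['…']`.

sidekiq['enable'] = true

# One Sidekiq process per array item; the last item ("*") keeps a catch-all
# process so jobs in queues created in the future are still picked up.
sidekiq['queue_groups'] = [
  "elastic_indexer",
  "mailers",
  "*"
]

# Leave negation off for the dedicated-queue layout shown here; set to true only
# when the listed queues should be *excluded* instead.
sidekiq['negate'] = false

# Threads per process, bounded below and above (values from the examples above).
sidekiq['min_concurrency'] = 15
sidekiq['max_concurrency'] = 25

# How often the additional processes check for enqueued jobs
# (see "Modify the check interval" in the diff above).
sidekiq['interval'] = 5
```

As with the documented examples, run `sudo gitlab-ctl reconfigure` after editing the file for the changes to take effect.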