commit 05f0ebba3a2c8ddf39e436f412dc2ab5bf1353b2 (tag: v15.8.0-rc42)
tree 11d0f2a6ec31c7793c184106cedc2ded3d9a2cc5
parent ec73467c23693d0db63a797d10194da9e72a74af
author GitLab Bot <gitlab-bot@gitlab.com> 2023-01-18 19:00:14 +0000
committer GitLab Bot <gitlab-bot@gitlab.com> 2023-01-18 19:00:14 +0000

    Add latest changes from gitlab-org/gitlab@15-8-stable-ee
Diffstat (limited to 'doc/administration/operations'):

 doc/administration/operations/fast_ssh_key_lookup.md |  3
 doc/administration/operations/moving_repositories.md | 45
 doc/administration/operations/puma.md                | 78
 doc/administration/operations/rails_console.md       |  6

 4 files changed, 66 insertions, 66 deletions
diff --git a/doc/administration/operations/fast_ssh_key_lookup.md b/doc/administration/operations/fast_ssh_key_lookup.md
index 7aeb05457c0..a34b21e676a 100644
--- a/doc/administration/operations/fast_ssh_key_lookup.md
+++ b/doc/administration/operations/fast_ssh_key_lookup.md
@@ -188,7 +188,8 @@ file for the environment, as it isn't generated dynamically.
 
 ### Additional documentation
 
-Additional technical documentation for `gitlab-sshd` may be found on the [GitLab Shell](https://gitlab.com/gitlab-org/gitlab-shell/-/blob/main/README.md) documentation page.
+Additional technical documentation for `gitlab-sshd` may be found in the
+[GitLab Shell documentation](../../development/gitlab_shell/index.md).
 
 ## Troubleshooting
 
diff --git a/doc/administration/operations/moving_repositories.md b/doc/administration/operations/moving_repositories.md
index 96c1fcc422d..5066f6d99d8 100644
--- a/doc/administration/operations/moving_repositories.md
+++ b/doc/administration/operations/moving_repositories.md
@@ -8,7 +8,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
 
 You can move all repositories managed by GitLab to another file system or another server.
 
-## Moving data within a GitLab instance
+## Moving data in a GitLab instance
 
 The GitLab API is the recommended way to move Git repositories:
 
@@ -28,10 +28,10 @@ For more information, see:
   querying and scheduling group repository moves **(PREMIUM SELF)**.
 - [Migrate to Gitaly Cluster](../gitaly/index.md#migrate-to-gitaly-cluster).
 
-### Move Repositories
+### Moving Repositories
 
 GitLab repositories can be associated with projects, groups, and snippets. Each of these types
-have a separate API to schedule the respective repositories to move. To move all repositories
+has a separate API to schedule the respective repositories to move. To move all repositories
 on a GitLab instance, each of these types must be scheduled to move for each storage.
 
@@ -41,7 +41,7 @@ To move repositories into a [Gitaly Cluster](../gitaly/index.md#gitaly-cluster)
 
 WARNING:
 Repositories can be **permanently deleted** by a call to `/projects/:project_id/repository_storage_moves` that attempts to move a project already stored in a Gitaly Cluster back into that cluster.
-See [this issue for more details](https://gitlab.com/gitlab-org/gitaly/-/issues/3752). This was fixed in
+See [this issue for more details](https://gitlab.com/gitlab-org/gitaly/-/issues/3752). This issue was fixed in
 GitLab 14.3.0 and backported to
 [14.2.4](https://about.gitlab.com/releases/2021/09/17/gitlab-14-2-4-released/),
 [14.1.6](https://about.gitlab.com/releases/2021/09/27/gitlab-14-1-6-released/),
@@ -59,13 +59,16 @@ To move repositories:
    so that the new storages receives all new projects. This stops new projects from being created
    on existing storages while the migration is in progress.
 1. Schedule repository moves for:
-   - [Projects](#bulk-schedule-project-moves).
-   - [Snippets](#bulk-schedule-snippet-moves).
-   - [Groups](#bulk-schedule-group-moves). **(PREMIUM SELF)**
+   - [All projects](#move-all-projects) or
+     [individual projects](../../api/project_repository_storage_moves.md#schedule-a-repository-storage-move-for-a-project).
+   - [All snippets](#move-all-snippets) or
+     [individual snippets](../../api/snippet_repository_storage_moves.md#schedule-a-repository-storage-move-for-a-snippet).
+   - [All groups](#move-all-groups) or
+     [individual groups](../../api/group_repository_storage_moves.md#schedule-a-repository-storage-move-for-a-group). **(PREMIUM SELF)**
 
-### Bulk schedule project moves
+#### Move all projects
 
-Use the API to schedule project moves:
+To move all projects by using the API:
 
 1. [Schedule repository storage moves for all projects on a storage shard](../../api/project_repository_storage_moves.md#schedule-repository-storage-moves-for-all-projects-on-a-storage-shard)
    using the API. For example:
@@ -100,9 +103,9 @@ Use the API to schedule project moves:
 
 1. Repeat for each storage as required.
 
-### Bulk schedule snippet moves
+#### Move all snippets
 
-Use the API to schedule snippet moves:
+To move all snippets by using the API:
 
 1. [Schedule repository storage moves for all snippets on a storage shard](../../api/snippet_repository_storage_moves.md#schedule-repository-storage-moves-for-all-snippets-on-a-storage-shard). For example:
 
    ```shell
   curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
        "https://gitlab.example.com/api/v4/snippet_repository_storage_moves"
   ```
 
@@ -113,8 +116,8 @@ Use the API to schedule snippet moves:
-1. [Query the most recent repository moves](../../api/snippet_repository_storage_moves.md#retrieve-all-snippet-repository-storage-moves)
-The response indicates either:
+1. [Query the most recent repository moves](../../api/snippet_repository_storage_moves.md#retrieve-all-snippet-repository-storage-moves).
+   The response indicates either:
 
    - The moves have completed successfully. The `state` field is `finished`.
    - The moves are in progress. Re-query the repository move until it completes successfully.
   - The moves have failed. Most failures are temporary and are solved by rescheduling the move.
@@ -129,12 +132,12 @@ The response indicates either:
 
 1. Repeat for each storage as required.
 
-### Bulk schedule group moves **(PREMIUM SELF)**
-
-Use the API to schedule group moves:
+#### Move all groups **(PREMIUM SELF)**
+
+To move all groups by using the API:
 
-1. [Schedule repository storage moves for all groups on a storage shard](../../api/group_repository_storage_moves.md#schedule-repository-storage-moves-for-all-groups-on-a-storage-shard)
-. For example:
+1. [Schedule repository storage moves for all groups on a storage shard](../../api/group_repository_storage_moves.md#schedule-repository-storage-moves-for-all-groups-on-a-storage-shard).
+   For example:
 
    ```shell
   curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
        --header "Content-Type: application/json" \
        "https://gitlab.example.com/api/v4/group_repository_storage_moves"
   ```
 
@@ -143,8 +146,8 @@ Use the API to schedule group moves:
-1. [Query the most recent repository moves](../../api/group_repository_storage_moves.md#retrieve-all-group-repository-storage-moves)
-. The response indicates either:
+1. [Query the most recent repository moves](../../api/group_repository_storage_moves.md#retrieve-all-group-repository-storage-moves).
+   The response indicates either:
 
    - The moves have completed successfully. The `state` field is `finished`.
    - The moves are in progress. Re-query the repository move until it completes successfully.
   - The moves have failed. Most failures are temporary and are solved by rescheduling the move.
@@ -161,7 +164,7 @@ Use the API to schedule group moves:
 
 ## Migrating to another GitLab instance
 
-[Using the API](#moving-data-within-a-gitlab-instance) isn't an option if you are migrating to a new
+[Using the API](#moving-data-in-a-gitlab-instance) isn't an option if you are migrating to a new
 GitLab environment, for example:
 
 - From a single-node GitLab to a scaled-out architecture.
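The scheduling and polling calls shown in the hunks above combine naturally into a small script. The sketch below is illustrative and not part of this patch: the `destination_storage_name` payload and the non-terminal `state` values are assumptions drawn from the repository storage moves API documentation that the changed pages link to.

```shell
# Illustrative sketch, not part of this patch. Assumes the payload and
# state names documented in the repository storage moves API.
GITLAB="https://gitlab.example.com"
TOKEN="<your_access_token>"

# Schedule a move for every project repository on the current shard.
curl --request POST --header "PRIVATE-TOKEN: $TOKEN" \
     --header "Content-Type: application/json" \
     --data '{"destination_storage_name":"<new_storage>"}' \
     "$GITLAB/api/v4/project_repository_storage_moves"

# Poll until no move is still in a non-terminal state.
while curl --silent --header "PRIVATE-TOKEN: $TOKEN" \
        "$GITLAB/api/v4/project_repository_storage_moves" |
      grep --extended-regexp --quiet '"state":"(initial|scheduled|started|replicated)"'; do
  sleep 60
done
```

The same pattern applies to the snippet and group endpoints by swapping the path.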
@@ -185,7 +188,7 @@ Each of the approaches we list can or does overwrite data in the target director
 
 For either Gitaly or Gitaly Cluster targets, the GitLab [backup and restore capability](../../raketasks/backup_restore.md)
 should be used. Git repositories are accessed, managed, and stored on GitLab servers by Gitaly as a database. Data loss
-can result from directly accessing and copying Gitaly's files using tools like `rsync`.
+can result from directly accessing and copying Gitaly files using tools like `rsync`.
 
 - From GitLab 13.3, backup performance can be improved by
   [processing multiple repositories concurrently](../../raketasks/backup_gitlab.md#back-up-git-repositories-concurrently).
diff --git a/doc/administration/operations/puma.md b/doc/administration/operations/puma.md
index af595cdf297..51d6e9ae1fd 100644
--- a/doc/administration/operations/puma.md
+++ b/doc/administration/operations/puma.md
@@ -14,24 +14,22 @@ features of GitLab.
 
 To reduce memory use, Puma forks worker processes. Each time a worker is created,
 it shares memory with the primary process. The worker uses additional memory only
-when it changes or adds to its memory pages.
+when it changes or adds to its memory pages. This can lead to Puma workers using
+more physical memory over time as workers handle additional web requests. The amount of memory
+used over time depends on the use of GitLab. The more features used by GitLab users,
+the higher the expected memory use over time.
 
-Memory use increases over time, but you can use Puma Worker Killer to recover memory.
+To stop uncontrolled memory growth, the GitLab Rails application runs a supervision thread
+that automatically restarts workers if they exceed a given resident set size (RSS) threshold
+for a certain amount of time.
 
-By default:
-
-- The [Puma Worker Killer](https://github.com/schneems/puma_worker_killer) restarts a worker if it
-  exceeds a [memory limit](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/cluster/puma_worker_killer_initializer.rb).
-- Rolling restarts of Puma workers are performed every 12 hours.
-
-### Change the memory limit setting
-
-To change the memory limit setting:
+GitLab sets a default of `1200Mb` for the memory limit. To override the default value,
+set `per_worker_max_memory_mb` to the new RSS limit in megabytes:
 
 1. Edit `/etc/gitlab/gitlab.rb`:
 
    ```ruby
-   puma['per_worker_max_memory_mb'] = 1024
+   puma['per_worker_max_memory_mb'] = 1024 # 1GB
    ```
 
 1. Reconfigure GitLab:
@@ -40,48 +38,40 @@ To change the memory limit setting:
    sudo gitlab-ctl reconfigure
    ```
 
-When workers are killed and replaced, capacity to run GitLab is reduced,
-and CPU is consumed. Set `per_worker_max_memory_mb` to a higher value if the worker killer
-is replacing workers too often.
+When workers are restarted, capacity to run GitLab is reduced for a short
+period of time. Set `per_worker_max_memory_mb` to a higher value if workers are replaced too often.
 
 Worker count is calculated based on CPU cores. A small GitLab deployment
 with 4-8 workers may experience performance issues if workers are being restarted
 too often (once or more per minute).
 
-A higher value of `1200` or more would be beneficial if the server has free memory.
+A higher value of `1200` or more could be beneficial if the server has free memory.
 
-### Monitor worker memory
+### Monitor worker restarts
 
-The worker killer checks memory every 20 seconds.
+GitLab emits log events if workers are restarted due to high memory use.
 
-To monitor the worker killer, use [the Puma log](../logs/index.md#puma_stdoutlog) `/var/log/gitlab/puma/puma_stdout.log`.
-For example:
+The following is an example of one of these log events in `/var/log/gitlab/gitlab-rails/application_json.log`:
 
-```plaintext
-PumaWorkerKiller: Out of memory. 4 workers consuming total: 4871.23828125 MB
-out of max: 4798.08 MB. Sending TERM to pid 26668 consuming 1001.00390625 MB.
+```json
+{
+  "severity": "WARN",
+  "time": "2023-01-04T09:45:16.173Z",
+  "correlation_id": null,
+  "pid": 2725,
+  "worker_id": "puma_0",
+  "memwd_handler_class": "Gitlab::Memory::Watchdog::PumaHandler",
+  "memwd_sleep_time_s": 5,
+  "memwd_rss_bytes": 1077682176,
+  "memwd_max_rss_bytes": 629145600,
+  "memwd_max_strikes": 5,
+  "memwd_cur_strikes": 6,
+  "message": "rss memory limit exceeded"
+}
 ```
 
-From this output:
-
-- The formula that calculates the maximum memory value results in workers
-  being killed before they reach the `per_worker_max_memory_mb` value.
-- In GitLab 13.4 and earlier, the default values for the formula were 550 MB for the primary
-  and 850 MB for each worker.
-- In GitLab 13.5 and later, the values are primary: 800 MB, worker: 1024 MB.
-- The threshold for workers to be killed is set at 98% of the limit:
-
-  ```plaintext
-  0.98 * ( 800 + ( worker_processes * 1024MB ) )
-  ```
-
-- In the log output above, `0.98 * ( 800 + ( 4 * 1024 ) )` returns the
-  `max: 4798.08 MB` value.
-
-Increasing the maximum to `1200`, for example, would set a `max: 5488 MB` value.
-
-Workers use additional memory on top of the shared memory. The amount of memory
-depends on a site's use of GitLab.
+`memwd_rss_bytes` is the actual amount of memory consumed, and `memwd_max_rss_bytes` is the
+RSS limit set through `per_worker_max_memory_mb`.
 
 ## Change the worker timeout
 
@@ -146,7 +136,7 @@ for details.
 When running Puma in single mode, some features are not supported:
 
 - [Phased restart](https://gitlab.com/gitlab-org/gitlab/-/issues/300665)
-- [Puma Worker Killer](https://gitlab.com/gitlab-org/gitlab/-/issues/300664)
+- [Memory killers](#reducing-memory-use)
 
 To learn more, visit [epic 5303](https://gitlab.com/groups/gitlab-org/-/epics/5303).
diff --git a/doc/administration/operations/rails_console.md b/doc/administration/operations/rails_console.md
index efaf480c6df..f2143435755 100644
--- a/doc/administration/operations/rails_console.md
+++ b/doc/administration/operations/rails_console.md
@@ -31,6 +31,12 @@ Rails experience is useful but not required.
    sudo gitlab-rails console
    ```
 
+**For Docker installations**
+
+```shell
+docker exec -it <container-id> gitlab-rails console
+```
+
 **For installations from source**
 
 ```shell
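The watchdog log event added in the Puma hunk above is straightforward to monitor. As a sketch outside this patch, assuming `jq` is installed on the node:

```shell
# Illustrative sketch, not part of this patch: surface memory-watchdog
# restarts from the Rails application log on an Omnibus node.
sudo tail -F /var/log/gitlab/gitlab-rails/application_json.log |
  jq --unbuffered -c 'select(.message == "rss memory limit exceeded")
    | {time, worker_id, rss_mb: (.memwd_rss_bytes / 1048576 | floor)}'
```

Each matching event prints the worker that was flagged and its RSS in megabytes at the moment the limit was exceeded.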