Diffstat (limited to 'doc/raketasks')
-rw-r--r--  doc/raketasks/backup_restore.md  | 245
-rw-r--r--  doc/raketasks/index.md           |   3
2 files changed, 225 insertions(+), 23 deletions(-)
diff --git a/doc/raketasks/backup_restore.md b/doc/raketasks/backup_restore.md
index 676cc529c98..539908cd7bc 100644
--- a/doc/raketasks/backup_restore.md
+++ b/doc/raketasks/backup_restore.md
@@ -12,11 +12,11 @@ An application data backup creates an archive file that contains the database,
all repositories and all attachments.
You can only restore a backup to **exactly the same version and type (CE/EE)**
-of GitLab on which it was created. The best way to migrate your repositories
-from one server to another is through a backup and restore.
+of GitLab on which it was created. The best way to [migrate your projects
+from one server to another](#migrate-to-a-new-server) is through a backup and restore.
WARNING:
-GitLab doesn't back up items that aren't stored in the file system. If you're
+GitLab doesn't back up items that aren't stored on the file system. If you're
using [object storage](../administration/object_storage.md), be sure to enable
backups with your object storage provider, if desired.
@@ -58,16 +58,17 @@ including:
- CI/CD job output logs
- CI/CD job artifacts
- LFS objects
+- Terraform states ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/331806) in GitLab 14.7)
- Container Registry images
- GitLab Pages content
+- Packages ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/332006) in GitLab 14.7)
- Snippets
- [Group wikis](../user/project/wiki/group.md)
Backups do not include:
-- [Terraform state files](../administration/terraform_state.md)
-- [Package registry files](../administration/packages/index.md)
- [Mattermost data](https://docs.mattermost.com/administration/config-settings.html#file-storage)
+- Redis (and thus Sidekiq jobs)
WARNING:
GitLab does not back up any configuration files (`/etc/gitlab`), TLS keys and certificates, or system
@@ -115,12 +116,8 @@ the host, based on your installed version of GitLab:
If you're using the [GitLab Helm chart](https://gitlab.com/gitlab-org/charts/gitlab)
on a Kubernetes cluster, you can run the backup task by using `kubectl` to run the `backup-utility`
-script on the GitLab task runner pod. For more details, see
-[backing up a GitLab installation](https://gitlab.com/gitlab-org/charts/gitlab/blob/master/doc/backup-restore/backup.md#backing-up-a-gitlab-installation).
-
-```shell
-kubectl exec -it <gitlab task-runner pod> -- backup-utility
-```
+script on the GitLab toolbox pod. For more details, see the
+[charts backup documentation](https://docs.gitlab.com/charts/backup-restore/backup.html).
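+
+A typical invocation is sketched below; the `-lapp=toolbox` label selector and
+the `<toolbox pod name>` placeholder are assumptions that depend on how your
+chart release is deployed:
+
+```shell
+# Find the toolbox pod, then run the backup utility inside it.
+kubectl get pods -lapp=toolbox
+kubectl exec <toolbox pod name> -it -- backup-utility
+```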
Similar to the Kubernetes case, if you have scaled out your GitLab cluster to
use multiple application servers, you should pick a designated node (that isn't
@@ -276,9 +273,11 @@ You can exclude specific directories from the backup by adding the environment v
- `builds` (CI job output logs)
- `artifacts` (CI job artifacts)
- `lfs` (LFS objects)
+- `terraform_state` (Terraform states)
- `registry` (Container Registry images)
- `pages` (Pages content)
- `repositories` (Git repositories data)
+- `packages` (Packages)
All wikis are backed up as part of the `repositories` group. Non-existent wikis are skipped during a backup.
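+
+For example, a backup that skips CI job output logs and artifacts could look
+like the following (the `SKIP` list here is illustrative; combine any of the
+components above, comma-separated):
+
+```shell
+sudo gitlab-backup create SKIP=builds,artifacts
+```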
@@ -985,7 +984,7 @@ your installation is using PgBouncer, for either performance reasons or when usi
Next, restore `/etc/gitlab/gitlab-secrets.json` if necessary,
[as previously mentioned](#restore-prerequisites).
-Reconfigure, restart and check GitLab:
+Reconfigure, restart and [check](../administration/raketasks/maintenance.md#check-gitlab-configuration) GitLab:
```shell
sudo gitlab-ctl reconfigure
@@ -993,7 +992,7 @@ sudo gitlab-ctl restart
sudo gitlab-rake gitlab:check SANITIZE=true
```
-In GitLab 13.1 and later, check [database values can be decrypted](../administration/raketasks/doctor.md)
+In GitLab 13.1 and later, check [database values can be decrypted](../administration/raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets)
especially if `/etc/gitlab/gitlab-secrets.json` was restored, or if a different server is
the target for the restore.
@@ -1001,6 +1000,14 @@ the target for the restore.
sudo gitlab-rake gitlab:doctor:secrets
```
+For added assurance, you can perform [an integrity check on the uploaded files](../administration/raketasks/check.md#uploaded-files-integrity):
+
+```shell
+sudo gitlab-rake gitlab:artifacts:check
+sudo gitlab-rake gitlab:lfs:check
+sudo gitlab-rake gitlab:uploads:check
+```
+
### Restore for Docker image and GitLab Helm chart installations
For GitLab installations using the Docker image or the GitLab Helm chart on a
@@ -1182,7 +1189,7 @@ has a longer discussion explaining the potential problems.
To prevent writes to the Git repository data, there are two possible approaches:
-- Use [maintenance mode](../administration/maintenance_mode/index.md) to place GitLab in a read-only state.
+- Use [maintenance mode](../administration/maintenance_mode/index.md) **(PREMIUM SELF)** to place GitLab in a read-only state.
- Create explicit downtime by stopping all Gitaly services before backing up the repositories:
```shell
@@ -1284,6 +1291,198 @@ sudo GITLAB_BACKUP_PGHOST=192.168.1.10 GITLAB_BACKUP_PGPORT=5432 /opt/gitlab/bin
See the [PostgreSQL documentation](https://www.postgresql.org/docs/12/libpq-envars.html)
for more details on what these parameters do.
+## Migrate to a new server
+
+<!-- some details borrowed from GitLab.com move from Azure to GCP detailed at https://gitlab.com/gitlab-com/migration/-/blob/master/.gitlab/issue_templates/failover.md -->
+
+You can use GitLab backup and restore to migrate your instance to a new server. This section outlines a typical procedure for a GitLab deployment running on a single server.
+If you're running GitLab Geo, an alternative option is [Geo disaster recovery for planned failover](../administration/geo/disaster_recovery/planned_failover.md).
+
+WARNING:
+Avoid uncoordinated data processing by both the new and old servers, where multiple
+servers could connect concurrently and process the same data. For example, when using
+[incoming email](../administration/incoming_email.md), if both GitLab instances are
+processing email at the same time, then both instances will end up missing some data.
+This type of problem can occur with other services as well, such as a
+[non-packaged database](https://docs.gitlab.com/omnibus/settings/database.html#using-a-non-packaged-postgresql-database-management-server),
+a non-packaged Redis instance, or non-packaged Sidekiq.
+
+Prerequisites:
+
+- Some time before your migration, consider notifying your users of upcoming
+ scheduled maintenance with a [broadcast message banner](../user/admin_area/broadcast_messages.md).
+- Ensure your backups are complete and current. Create a complete system-level backup, or
+ take a snapshot of all servers involved in the migration, in case destructive commands
+ (like `rm`) are run incorrectly.
+
+### Prepare the new server
+
+To prepare the new server:
+
+1. Copy the
+ [SSH host keys](https://superuser.com/questions/532040/copy-ssh-keys-from-one-server-to-another-server/532079#532079)
+ from the old server to avoid man-in-the-middle attack warnings.
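+
+ A minimal sketch, assuming OpenSSH's default key locations; the destination
+ path and the SSH service name are assumptions to adjust for your distribution:
+
+ ```shell
+ # Copy all host keys (public and private) to the new server.
+ sudo scp /etc/ssh/ssh_host_* <your-linux-username>@new-server:/tmp
+ # On the new server, move them into place and restart the SSH daemon:
+ # sudo mv /tmp/ssh_host_* /etc/ssh/ && sudo systemctl restart ssh
+ ```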
+1. [Install and configure GitLab](https://about.gitlab.com/install), except for
+ [incoming email](../administration/incoming_email.md):
+ 1. Install GitLab.
+ 1. Configure by copying `/etc/gitlab` files from the old server to the new server, and update as necessary.
+ Read the
+ [Omnibus configuration backup and restore instructions](https://docs.gitlab.com/omnibus/settings/backups.html) for more detail.
+ 1. If applicable, disable [incoming email](../administration/incoming_email.md).
+ 1. Block new CI/CD jobs from starting when GitLab first starts after the backup is restored.
+ Edit `/etc/gitlab/gitlab.rb` and set the following:
+
+ ```ruby
+ nginx['custom_gitlab_server_config'] = "location /api/v4/jobs/request {\n deny all;\n return 503;\n}\n"
+ ```
+
+ 1. Reconfigure GitLab:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+1. Stop GitLab to avoid any potential unnecessary and unintentional data processing:
+
+ ```shell
+ sudo gitlab-ctl stop
+ ```
+
+1. Configure the new server to allow receiving the Redis database and GitLab backup files:
+
+ ```shell
+ sudo rm -f /var/opt/gitlab/redis/dump.rdb
+ sudo chown <your-linux-username> /var/opt/gitlab/redis
+ sudo mkdir /var/opt/gitlab/backups
+ sudo chown <your-linux-username> /var/opt/gitlab/backups
+ ```
+
+### Prepare and transfer content from the old server
+
+1. Ensure you have an up-to-date system-level backup or snapshot of the old server.
+1. Enable [maintenance mode](../administration/maintenance_mode/index.md) **(PREMIUM SELF)**,
+ if supported by your GitLab edition.
+1. Block new CI/CD jobs from starting:
+ 1. Edit `/etc/gitlab/gitlab.rb`, and set the following:
+
+ ```ruby
+ nginx['custom_gitlab_server_config'] = "location /api/v4/jobs/request {\n deny all;\n return 503;\n}\n"
+ ```
+
+ 1. Reconfigure GitLab:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+1. Disable periodic background jobs:
+ 1. On the top bar, select **Menu > Admin**.
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. Under the Sidekiq dashboard, select the **Cron** tab and then
+ **Disable All**.
+1. Wait for the currently running CI/CD jobs to finish, or accept that jobs that have not completed may be lost.
+ To view jobs currently running, on the left sidebar, select **Overview > Jobs**,
+ and then select **Running**.
+1. Wait for Sidekiq jobs to finish:
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. Under the Sidekiq dashboard, select **Queues** and then **Live Poll**.
+ Wait for **Busy** and **Enqueued** to drop to 0.
+ These queues contain work that has been submitted by your users;
+ shutting down before these jobs complete may cause the work to be lost.
+ Make note of the numbers shown in the Sidekiq dashboard for post-migration verification.
+1. Flush the Redis database to disk, and stop all GitLab services except those needed for the migration:
+
+ ```shell
+ sudo /opt/gitlab/embedded/bin/redis-cli -s /var/opt/gitlab/redis/redis.socket save && sudo gitlab-ctl stop && sudo gitlab-ctl start postgresql
+ ```
+
+1. Create a GitLab backup:
+
+ ```shell
+ sudo gitlab-backup create
+ ```
+
+1. Disable the GitLab services listed below, and prevent unintentional restarts, by adding the following to the bottom of `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ alertmanager['enable'] = false
+ gitlab_exporter['enable'] = false
+ gitlab_pages['enable'] = false
+ gitlab_workhorse['enable'] = false
+ grafana['enable'] = false
+ logrotate['enable'] = false
+ gitlab_rails['incoming_email_enabled'] = false
+ nginx['enable'] = false
+ node_exporter['enable'] = false
+ postgres_exporter['enable'] = false
+ postgresql['enable'] = false
+ prometheus['enable'] = false
+ puma['enable'] = false
+ redis['enable'] = false
+ redis_exporter['enable'] = false
+ registry['enable'] = false
+ sidekiq['enable'] = false
+ ```
+
+1. Reconfigure GitLab:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+1. Verify that everything is stopped and no services are running:
+
+ ```shell
+ sudo gitlab-ctl status
+ ```
+
+1. Transfer the Redis database and GitLab backups to the new server:
+
+ ```shell
+ sudo scp /var/opt/gitlab/redis/dump.rdb <your-linux-username>@new-server:/var/opt/gitlab/redis
+ sudo scp /var/opt/gitlab/backups/your-backup.tar <your-linux-username>@new-server:/var/opt/gitlab/backups
+ ```
+
+### Restore data on the new server
+
+1. Restore appropriate file system permissions:
+
+ ```shell
+ sudo chown gitlab-redis /var/opt/gitlab/redis
+ sudo chown gitlab-redis:gitlab-redis /var/opt/gitlab/redis/dump.rdb
+ sudo chown git:root /var/opt/gitlab/backups
+ sudo chown git:git /var/opt/gitlab/backups/your-backup.tar
+ ```
+
+1. [Restore the GitLab backup](#restore-gitlab).
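+
+ For reference, a minimal restore sketch; `<backup-id>` is a placeholder for
+ the backup file's name without the `_gitlab_backup.tar` suffix:
+
+ ```shell
+ sudo gitlab-backup restore BACKUP=<backup-id>
+ ```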
+1. Verify that the Redis database was restored correctly:
+ 1. On the top bar, select **Menu > Admin**.
+ 1. On the left sidebar, select **Monitoring > Background Jobs**.
+ 1. Under the Sidekiq dashboard, verify that the numbers
+ match those shown on the old server.
+ 1. While still under the Sidekiq dashboard, select **Cron** and then **Enable All**
+ to re-enable periodic background jobs.
+1. Test that read-only operations on the GitLab instance work as expected. For example, browse through project repository files, merge requests, and issues.
+1. Disable [Maintenance Mode](../administration/maintenance_mode/index.md) **(PREMIUM SELF)**, if previously enabled.
+1. Test that the GitLab instance is working as expected.
+1. If applicable, re-enable [incoming email](../administration/incoming_email.md) and confirm it works as expected.
+1. Update your DNS or load balancer to point at the new server.
+1. Unblock new CI/CD jobs from starting by removing the custom NGINX config
+ you added previously:
+
+ ```ruby
+ # The following line must be removed
+ nginx['custom_gitlab_server_config'] = "location /api/v4/jobs/request {\n deny all;\n return 503;\n}\n"
+ ```
+
+1. Reconfigure GitLab:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+1. Remove the scheduled maintenance [broadcast message banner](../user/admin_area/broadcast_messages.md).
+
## Additional notes
This documentation is for GitLab Community and Enterprise Edition. We back up
@@ -1308,9 +1507,11 @@ If you're using backup restore procedures, you may encounter the following
warning messages:
```plaintext
-psql:/var/opt/gitlab/backups/db/database.sql:22: ERROR: must be owner of extension plpgsql
-psql:/var/opt/gitlab/backups/db/database.sql:2931: WARNING: no privileges could be revoked for "public" (two occurrences)
-psql:/var/opt/gitlab/backups/db/database.sql:2933: WARNING: no privileges were granted for "public" (two occurrences)
+ERROR: must be owner of extension pg_trgm
+ERROR: must be owner of extension btree_gist
+ERROR: must be owner of extension plpgsql
+WARNING: no privileges could be revoked for "public" (two occurrences)
+WARNING: no privileges were granted for "public" (two occurrences)
```
Be advised that the backup is successfully restored in spite of these warning
@@ -1362,8 +1563,8 @@ Use the information in the following sections at your own risk.
#### Verify that all values can be decrypted
-You can determine if your database contains values that can't be decrypted by using the
-[Secrets Doctor Rake task](../administration/raketasks/doctor.md).
+You can determine if your database contains values that can't be decrypted by using a
+[Rake task](../administration/raketasks/check.md#verify-database-values-can-be-decrypted-using-the-current-secrets).
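+
+On Omnibus installations, that check is the same `doctor:secrets` task shown
+earlier in this page:
+
+```shell
+sudo gitlab-rake gitlab:doctor:secrets
+```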
#### Take a backup
@@ -1619,10 +1820,12 @@ There can be
[risks when disabling released features](../administration/feature_flags.md#risks-when-disabling-released-features).
Refer to this feature's version history for more details.
-`gitaly-backup` is used by the backup Rake task to create and restore repository backups from Gitaly.
+The `gitaly-backup` binary is used by the backup Rake task to create and restore repository backups from Gitaly.
`gitaly-backup` replaces the previous backup method that directly calls RPCs on Gitaly from GitLab.
-The backup Rake task must be able to find this executable. It can be configured in Omnibus GitLab packages:
+The backup Rake task must be able to find this executable. In most cases, you don't need to change
+the path to the binary as it should work fine with the default path `/opt/gitlab/embedded/bin/gitaly-backup`.
+If you have a specific reason to change the path, it can be configured in Omnibus GitLab packages:
1. Add the following to `/etc/gitlab/gitlab.rb`:
diff --git a/doc/raketasks/index.md b/doc/raketasks/index.md
index 6227731e807..96c31047d8d 100644
--- a/doc/raketasks/index.md
+++ b/doc/raketasks/index.md
@@ -26,7 +26,6 @@ The following Rake tasks are available for use with GitLab:
| [Back up and restore](backup_restore.md) | Back up, restore, and migrate GitLab instances between servers. |
| [Clean up](cleanup.md) | Clean up unneeded items from GitLab instances. |
| [Development](../development/rake_tasks.md) | Tasks for GitLab contributors. |
-| [Doctor tasks](../administration/raketasks/doctor.md) | Checks for data integrity issues. |
| [Elasticsearch](../integration/elasticsearch.md#gitlab-advanced-search-rake-tasks) | Maintain Elasticsearch in a GitLab instance. |
| [Enable namespaces](features.md) | Enable usernames and namespaces for user projects. |
| [General maintenance](../administration/raketasks/maintenance.md) | General maintenance and self-check tasks. |
@@ -34,7 +33,7 @@ The following Rake tasks are available for use with GitLab:
| [GitHub import](../administration/raketasks/github_import.md) | Retrieve and import repositories from GitHub. |
| [Import repositories](import.md) | Import bare repositories into your GitLab instance. |
| [Import large project exports](../development/import_project.md#importing-via-a-rake-task) | Import large GitLab [project exports](../user/project/settings/import_export.md). |
-| [Integrity checks](../administration/raketasks/check.md) | Check the integrity of repositories, files, and LDAP. |
+| [Integrity checks](../administration/raketasks/check.md) | Check the integrity of repositories, files, LDAP, and more. |
| [LDAP maintenance](../administration/raketasks/ldap.md) | [LDAP](../administration/auth/ldap/index.md)-related tasks. |
| [List repositories](list_repos.md) | List all GitLab-managed Git repositories on disk. |
| [Migrate snippets to Git](migrate_snippets.md) | Migrate GitLab Snippets to Git repositories, and show the migration status. |