author | GitLab Bot <gitlab-bot@gitlab.com> | 2020-04-01 06:07:50 +0000 |
---|---|---|
committer | GitLab Bot <gitlab-bot@gitlab.com> | 2020-04-01 06:07:50 +0000 |
commit | e50050a8756a20b6aa118edbad3369674e4c63ba (patch) | |
tree | 0f9ae83c168b01707753e066294f7b55aa0968a5 | /doc/administration/geo |
parent | 1dffba3bd853076efc1107b2dd63e221e75a210c (diff) | |
download | gitlab-ce-e50050a8756a20b6aa118edbad3369674e4c63ba.tar.gz |
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc/administration/geo')
10 files changed, 90 insertions, 158 deletions
diff --git a/doc/administration/geo/disaster_recovery/background_verification.md b/doc/administration/geo/disaster_recovery/background_verification.md
index fe55539dc84..6d6aee08c95 100644
--- a/doc/administration/geo/disaster_recovery/background_verification.md
+++ b/doc/administration/geo/disaster_recovery/background_verification.md
@@ -18,7 +18,7 @@ If verification succeeds on the **primary** node but fails on the **secondary**
this indicates that the object was corrupted during the replication process.
Geo actively try to correct verification failures marking the repository to
be resynced with a back-off period. If you want to reset the verification for
-these failures, so you should follow [these instructions][reset-verification].
+these failures, so you should follow [these instructions](background_verification.md#reset-verification-for-projects-where-verification-has-failed).

If verification is lagging significantly behind replication, consider giving
the node more time before scheduling a planned failover.
@@ -172,8 +172,10 @@ If the **primary** and **secondary** nodes have a checksum verification mismatch
Automatic background verification doesn't cover attachments, LFS objects,
job artifacts, and user uploads in file storage. You can keep track of the
-progress to include them in [ee-1430]. For now, you can verify their integrity
-manually by following [these instructions][foreground-verification] on both
+progress to include them in [Geo: Verify all replicated data](https://gitlab.com/groups/gitlab-org/-/epics/1430).
+
+For now, you can verify their integrity
+manually by following [these instructions](../../raketasks/check.md) on both
nodes, and comparing the output between them.

In GitLab EE 12.1, Geo calculates checksums for attachments, LFS objects, and
@@ -184,7 +186,3 @@ been synced before GitLab EE 12.1.
Data in object storage is **not verified**, as the object store is responsible
for ensuring the integrity of the data.
-
-[reset-verification]: background_verification.md#reset-verification-for-projects-where-verification-has-failed
-[foreground-verification]: ../../raketasks/check.md
-[ee-1430]: https://gitlab.com/groups/gitlab-org/-/epics/1430

diff --git a/doc/administration/geo/disaster_recovery/bring_primary_back.md b/doc/administration/geo/disaster_recovery/bring_primary_back.md
index 96280e4570b..43089237a75 100644
--- a/doc/administration/geo/disaster_recovery/bring_primary_back.md
+++ b/doc/administration/geo/disaster_recovery/bring_primary_back.md
@@ -14,7 +14,7 @@ If you have any doubts about the consistency of the data on this node, we recomm
Since the former **primary** node will be out of sync with the current **primary** node, the first step is to bring the former **primary** node up to date. Note, deletion of data stored on disk like repositories and uploads will not be replayed when bringing the former **primary** node back into sync, which may result in increased disk usage.
-Alternatively, you can [set up a new **secondary** GitLab instance][setup-geo] to avoid this.
+Alternatively, you can [set up a new **secondary** GitLab instance](../replication/index.md#setup-instructions) to avoid this.

To bring the former **primary** node up to date:
@@ -25,28 +25,28 @@ To bring the former **primary** node up to date:
   sudo gitlab-ctl start
   ```

- NOTE: **Note:** If you [disabled the **primary** node permanently][disaster-recovery-disable-primary],
+ NOTE: **Note:** If you [disabled the **primary** node permanently](index.md#step-2-permanently-disable-the-primary-node),
   you need to undo those steps now. For Debian/Ubuntu you just need to run
   `sudo systemctl enable gitlab-runsvdir`. For CentOS 6, you need to install
   the GitLab instance from scratch and set it up as a **secondary** node by
- following [Setup instructions][setup-geo]. In this case, you don't need to follow the next step.
+ following [Setup instructions](../replication/index.md#setup-instructions). In this case, you don't need to follow the next step.

   NOTE: **Note:** If you [changed the DNS records](index.md#step-4-optional-updating-the-primary-domain-dns-record)
   for this node during disaster recovery procedure you may need to
   [block all the writes to this node](planned_failover.md#prevent-updates-to-the-primary-node) during this procedure.

-1. [Setup database replication][database-replication]. Note that in this
+1. [Setup database replication](../replication/database.md). Note that in this
   case, **primary** node refers to the current **primary** node, and
   **secondary** node refers to the former **primary** node.

If you have lost your original **primary** node, follow the
-[setup instructions][setup-geo] to set up a new **secondary** node.
+[setup instructions](../replication/index.md#setup-instructions) to set up a new **secondary** node.

## Promote the **secondary** node to **primary** node

When the initial replication is complete and the **primary** node and **secondary** node are
-closely in sync, you can do a [planned failover].
+closely in sync, you can do a [planned failover](planned_failover.md).

## Restore the **secondary** node

@@ -54,8 +54,3 @@ If your objective is to have two nodes again, you need to bring your **secondary
node back online as well by repeating the first step
([configure the former **primary** node to be a **secondary** node](#configure-the-former-primary-node-to-be-a-secondary-node))
for the **secondary** node.
-
-[setup-geo]: ../replication/index.md#setup-instructions
-[database-replication]: ../replication/database.md
-[disaster-recovery-disable-primary]: index.md#step-2-permanently-disable-the-primary-node
-[planned failover]: planned_failover.md

diff --git a/doc/administration/geo/disaster_recovery/index.md b/doc/administration/geo/disaster_recovery/index.md
index 7ecb4893c88..0a5c39665f4 100644
--- a/doc/administration/geo/disaster_recovery/index.md
+++ b/doc/administration/geo/disaster_recovery/index.md
@@ -4,11 +4,11 @@ Geo replicates your database, your Git repositories, and few other assets. We
will support and replicate more data in the future, that will enable you to
failover with minimal effort, in a disaster situation.

-See [Geo current limitations][geo-limitations] for more information.
+See [Geo current limitations](../replication/index.md#current-limitations) for more information.

CAUTION: **Warning:**
Disaster recovery for multi-secondary configurations is in **Alpha**.
-For the latest updates, check the multi-secondary [Disaster Recovery epic][gitlab-org&65].
+For the latest updates, check the multi-secondary [Disaster Recovery epic](https://gitlab.com/groups/gitlab-org/-/epics/65).

## Promoting a **secondary** Geo node in single-secondary configurations
@@ -22,7 +22,7 @@ immediately after following these instructions.
### Step 1. Allow replication to finish if possible

If the **secondary** node is still replicating data from the **primary** node, follow
-[the planned failover docs][planned-failover] as closely as possible in
+[the planned failover docs](planned_failover.md) as closely as possible in
order to avoid unnecessary data loss.

### Step 2. Permanently disable the **primary** node
@@ -235,7 +235,7 @@ secondary domain, like changing Git remotes and API URLs.
Promoting a **secondary** node to **primary** node using the process above
does not enable Geo on the new **primary** node.

-To bring a new **secondary** node online, follow the [Geo setup instructions][setup-geo].
+To bring a new **secondary** node online, follow the [Geo setup instructions](../replication/index.md#setup-instructions).

### Step 6. (Optional) Removing the secondary's tracking database
@@ -284,7 +284,7 @@ and after that you also need two extra steps.
   gitlab_rails['auto_migrate'] = false
   ```

- (For more details about these settings you can read [Configure the primary server][configure-the-primary-server])
+ (For more details about these settings you can read [Configure the primary server](../replication/database.md#step-1-configure-the-primary-server))

1. Save the file and reconfigure GitLab for the database listen changes and
   the replication slot changes to be applied.
@@ -317,7 +317,7 @@ and after that you also need two extra steps.
### Step 2. Initiate the replication process

Now we need to make each **secondary** node listen to changes on the new **primary** node. To do that you need
-to [initiate the replication process][initiate-the-replication-process] again but this time
+to [initiate the replication process](../replication/database.md#step-3-initiate-the-replication-process) again but this time
for another **primary** node. All the old replication settings will be overwritten.

## Troubleshooting
@@ -332,15 +332,6 @@ after a failover.
If you still have access to the old **primary** node, you can follow the
instructions in the
-[Upgrading to GitLab 10.5][updating-geo]
+[Upgrading to GitLab 10.5](../replication/version_specific_updates.md#updating-to-gitlab-105)
section to resolve the error. Otherwise, the secret is lost and you'll need to
-[reset two-factor authentication for all users][sec-tfa].
-
-[gitlab-org&65]: https://gitlab.com/groups/gitlab-org/-/epics/65
-[geo-limitations]: ../replication/index.md#current-limitations
-[planned-failover]: planned_failover.md
-[setup-geo]: ../replication/index.md#setup-instructions
-[updating-geo]: ../replication/version_specific_updates.md#updating-to-gitlab-105
-[sec-tfa]: ../../../security/two_factor_authentication.md#disabling-2fa-for-everyone
-[initiate-the-replication-process]: ../replication/database.html#step-3-initiate-the-replication-process
-[configure-the-primary-server]: ../replication/database.html#step-1-configure-the-primary-server
+[reset two-factor authentication for all users](../../../security/two_factor_authentication.md#disabling-2fa-for-everyone).

diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index 8af60a42fbb..4b3b464b710 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -12,7 +12,7 @@ length of this window is determined by your replication capacity - once the
data loss.

This document assumes you already have a fully configured, working Geo setup.
-Please read it and the [Disaster Recovery][disaster-recovery] failover
+Please read it and the [Disaster Recovery](index.md) failover
documentation in full before proceeding. Planned failover is a major operation,
and if performed incorrectly, there is a high risk of data loss. Consider
rehearsing the procedure until you are comfortable with the necessary steps and
@@ -20,7 +20,7 @@ have a high degree of confidence in being able to perform them accurately.

## Not all data is automatically replicated

-If you are using any GitLab features that Geo [doesn't support][limitations],
+If you are using any GitLab features that Geo [doesn't support](../replication/index.md#current-limitations),
you must make separate provisions to ensure that the **secondary** node has an
up-to-date copy of any data associated with that feature. This may extend the
required scheduled maintenance period significantly.
@@ -32,7 +32,7 @@ final transfer inside the maintenance window) will then transfer only the
*changes* between the **primary** node and the **secondary** nodes.

Repository-centric strategies for using `rsync` effectively can be found in the
-[moving repositories][moving-repositories] documentation; these strategies can
+[moving repositories](../../operations/moving_repositories.md) documentation; these strategies can
be adapted for use with any other file-based data, such as GitLab Pages (to
be found in `/var/opt/gitlab/gitlab-rails/shared/pages` if using Omnibus).
@@ -44,12 +44,12 @@ will go smoothly.
### Object storage

If you have a large GitLab installation or cannot tolerate downtime, consider
-[migrating to Object Storage][os-conf] **before** scheduling a planned failover.
+[migrating to Object Storage](../replication/object_storage.md) **before** scheduling a planned failover.
Doing so reduces both the length of the maintenance window, and the risk of data
loss as a result of a poorly executed planned failover.

In GitLab 12.4, you can optionally allow GitLab to manage replication of Object Storage for
-**secondary** nodes. For more information, see [Object Storage replication][os-conf].
+**secondary** nodes. For more information, see [Object Storage replication](../replication/object_storage.md).

### Review the configuration of each **secondary** node
@@ -113,7 +113,7 @@ or removing references to the missing data.
### Verify the integrity of replicated data

-This [content was moved to another location][background-verification].
+This [content was moved to another location](background_verification.md).

### Notify users of scheduled maintenance
@@ -126,7 +126,7 @@ will take to finish syncing. An example message would be:

## Prevent updates to the **primary** node

-Until a [read-only mode][ce-19739] is implemented, updates must be prevented
+Until a [read-only mode](https://gitlab.com/gitlab-org/gitlab-foss/issues/19739) is implemented, updates must be prevented
from happening manually. Note that your **secondary** node still needs read-only
access to the **primary** node during the maintenance window.
@@ -186,7 +186,7 @@ access to the **primary** node during the maintenance window.
1. On the **secondary** node, navigate to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Queues**
   and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
-1. On the **secondary** node, use [these instructions][foreground-verification]
+1. On the **secondary** node, use [these instructions](../../raketasks/check.md)
   to verify the integrity of CI artifacts, LFS objects, and uploads in file
   storage.
@@ -195,24 +195,12 @@ At this point, your **secondary** node will contain an up-to-date copy of everyt
## Promote the **secondary** node

-Finally, follow the [Disaster Recovery docs][disaster-recovery] to promote the
+Finally, follow the [Disaster Recovery docs](index.md) to promote the
**secondary** node to a **primary** node. This process will cause a brief outage
on the **secondary** node, and users may need to log in again.

Once it is completed, the maintenance window is over! Your new **primary** node will now
begin to diverge from the old one. If problems do arise at this point, failing
-back to the old **primary** node [is possible][bring-primary-back], but likely to result
+back to the old **primary** node [is possible](bring_primary_back.md), but likely to result
in the loss of any data uploaded to the new **primary** in the meantime.

Don't forget to remove the broadcast message after failover is complete.
-
-[bring-primary-back]: bring_primary_back.md
-[ce-19739]: https://gitlab.com/gitlab-org/gitlab-foss/issues/19739
-[container-registry]: ../replication/container_registry.md
-[disaster-recovery]: index.md
-[ee-4930]: https://gitlab.com/gitlab-org/gitlab/issues/4930
-[ee-5064]: https://gitlab.com/gitlab-org/gitlab/issues/5064
-[foreground-verification]: ../../raketasks/check.md
-[background-verification]: background_verification.md
-[limitations]: ../replication/index.md#current-limitations
-[moving-repositories]: ../../operations/moving_repositories.md
-[os-conf]: ../replication/object_storage.md

diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
index ed3af59b7f0..0b076e7ff3c 100644
--- a/doc/administration/geo/replication/configuration.md
+++ b/doc/administration/geo/replication/configuration.md
@@ -5,7 +5,7 @@
NOTE: **Note:**
This is the final step in setting up a **secondary** Geo node. Stages of the
setup process must be completed in the documented order.
-Before attempting the steps in this stage, [complete all prior stages][setup-geo-omnibus].
+Before attempting the steps in this stage, [complete all prior stages](index.md#using-omnibus-gitlab).

The basic steps of configuring a **secondary** node are to:
@@ -77,7 +77,7 @@ they must be manually replicated to the **secondary** node.
GitLab integrates with the system-installed SSH daemon, designating a user
(typically named `git`) through which all access requests are handled.

-In a [Disaster Recovery] situation, GitLab system
+In a [Disaster Recovery](../disaster_recovery/index.md) situation, GitLab system
administrators will promote a **secondary** node to the **primary** node. DNS records
for the **primary** domain should also be updated to point to the new **primary** node
(previously a **secondary** node). Doing so will avoid the need to update Git remotes and API URLs.
@@ -242,7 +242,7 @@ You can safely skip this step if your **primary** node uses a CA-issued HTTPS ce
If your **primary** node is using a self-signed certificate for *HTTPS* support, you will
need to add that certificate to the **secondary** node's trust store. Retrieve the
certificate from the **primary** node and follow
-[these instructions][omnibus-ssl]
+[these instructions](https://docs.gitlab.com/omnibus/settings/ssl.html)
on the **secondary** node.

### Step 6. Enable Git access over HTTP/HTTPS
@@ -283,7 +283,7 @@ Please note that disabling a **secondary** node will stop the synchronization pr
Please note that if `git_data_dirs` is customized on the **primary** node for multiple
repository shards you must duplicate the same configuration on each **secondary** node.

-Point your users to the ["Using a Geo Server" guide][using-geo].
+Point your users to the ["Using a Geo Server" guide](using_a_geo_server.md).

Currently, this is what is synced:
@@ -334,10 +334,3 @@ See the [updating the Geo nodes document](updating_the_geo_nodes.md).
## Troubleshooting

See the [troubleshooting document](troubleshooting.md).
-
-[setup-geo-omnibus]: index.md#using-omnibus-gitlab
-[Hashed Storage]: ../../repository_storage_types.md
-[Disaster Recovery]: ../disaster_recovery/index.md
-[gitlab-com/infrastructure#2821]: https://gitlab.com/gitlab-com/infrastructure/issues/2821
-[omnibus-ssl]: https://docs.gitlab.com/omnibus/settings/ssl.html
-[using-geo]: using_a_geo_server.md

diff --git a/doc/administration/geo/replication/database.md b/doc/administration/geo/replication/database.md
index f25aa0e5da8..ffdec5a83c7 100644
--- a/doc/administration/geo/replication/database.md
+++ b/doc/administration/geo/replication/database.md
@@ -8,7 +8,7 @@ configuration steps. In this case,
NOTE: **Note:**
The stages of the setup process must be completed in the documented order.
-Before attempting the steps in this stage, [complete all prior stages][toc].
+Before attempting the steps in this stage, [complete all prior stages](index.md#using-omnibus-gitlab).

This document describes the minimal steps you have to take in order to
replicate your **primary** GitLab database to a **secondary** node's database. You may
@@ -27,7 +27,7 @@ NOTE: **Note:**
In database documentation, you may see "**primary**" being referenced as "master"
and "**secondary**" as either "slave" or "standby" server (read-only).

-We recommend using [PostgreSQL replication slots][replication-slots-article]
+We recommend using [PostgreSQL replication slots](https://medium.com/@tk512/replication-slots-in-postgresql-b4b03d277c75)
to ensure that the **primary** node retains all the data necessary for the **secondary**
nodes to recover. See below for more details.
@@ -97,7 +97,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
   gitlab_rails['db_password'] = '<your_password_here>'
   ```

-1. Omnibus GitLab already has a [replication user]
+1. Omnibus GitLab already has a [replication user](https://wiki.postgresql.org/wiki/Streaming_Replication)
   called `gitlab_replicator`. You must set the password for this user manually.
   You will be prompted to enter a password:
@@ -280,7 +280,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
   NOTE: **Note:**
   This step is important so we don't try to execute anything before the node is fully configured.

-1. [Check TCP connectivity][rake-maintenance] to the **primary** node's PostgreSQL server:
+1. [Check TCP connectivity](../../raketasks/maintenance.md) to the **primary** node's PostgreSQL server:

   ```shell
   gitlab-rake gitlab:tcp_check[<primary_node_ip>,5432]
   ```
@@ -508,8 +508,3 @@ work:
## Troubleshooting

Read the [troubleshooting document](troubleshooting.md).
-
-[replication-slots-article]: https://medium.com/@tk512/replication-slots-in-postgresql-b4b03d277c75
-[replication user]:https://wiki.postgresql.org/wiki/Streaming_Replication
-[toc]: index.md#using-omnibus-gitlab
-[rake-maintenance]: ../../raketasks/maintenance.md

diff --git a/doc/administration/geo/replication/datatypes.md b/doc/administration/geo/replication/datatypes.md
index 4c5fe2ebee7..7e697e8dd81 100644
--- a/doc/administration/geo/replication/datatypes.md
+++ b/doc/administration/geo/replication/datatypes.md
@@ -16,28 +16,28 @@ We currently distinguish between three different data types:
See the list below of each feature or component we replicate, its corresponding data type, replication, and
verification methods:

-| Type | Feature / component | Replication method | Verification method |
-|----------|-----------------------------------------------|---------------------------------------------|----------------------|
-| Database | Application data in PostgreSQL | Native | Native |
-| Database | Redis | _N/A_ (*1*) | _N/A_ |
-| Database | Elasticsearch | Native | Native |
-| Database | Personal snippets | PostgreSQL Replication | PostgreSQL Replication |
-| Database | Project snippets | PostgreSQL Replication | PostgreSQL Replication |
-| Database | SSH public keys | PostgreSQL Replication | PostgreSQL Replication |
-| Git | Project repository | Geo with Gitaly | Gitaly Checksum |
-| Git | Project wiki repository | Geo with Gitaly | Gitaly Checksum |
-| Git | Project designs repository | Geo with Gitaly | Gitaly Checksum |
-| Git | Object pools for forked project deduplication | Geo with Gitaly | _Not implemented_ |
-| Blobs | User uploads _(filesystem)_ | Geo with API | _Not implemented_ |
-| Blobs | User uploads _(object storage)_ | Geo with API/Managed (*2*) | _Not implemented_ |
-| Blobs | LFS objects _(filesystem)_ | Geo with API | _Not implemented_ |
-| Blobs | LFS objects _(object storage)_ | Geo with API/Managed (*2*) | _Not implemented_ |
-| Blobs | CI job artifacts _(filesystem)_ | Geo with API | _Not implemented_ |
-| Blobs | CI job artifacts _(object storage)_ | Geo with API/Managed (*2*) | _Not implemented_ |
-| Blobs | Archived CI build traces _(filesystem)_ | Geo with API | _Not implemented_ |
-| Blobs | Archived CI build traces _(object storage)_ | Geo with API/Managed (*2*) | _Not implemented_ |
-| Blobs | Container registry _(filesystem)_ | Geo with API/Docker API | _Not implemented_ |
-| Blobs | Container registry _(object storage)_ | Geo with API/Managed/Docker API (*2*) | _Not implemented_ |
+| Type | Feature / component | Replication method | Verification method |
+|:---------|:----------------------------------------------|:--------------------------------------|:-----------------------|
+| Database | Application data in PostgreSQL | Native | Native |
+| Database | Redis | _N/A_ (*1*) | _N/A_ |
+| Database | Elasticsearch | Native | Native |
+| Database | Personal snippets | PostgreSQL Replication | PostgreSQL Replication |
+| Database | Project snippets | PostgreSQL Replication | PostgreSQL Replication |
+| Database | SSH public keys | PostgreSQL Replication | PostgreSQL Replication |
+| Git | Project repository | Geo with Gitaly | Gitaly Checksum |
+| Git | Project wiki repository | Geo with Gitaly | Gitaly Checksum |
+| Git | Project designs repository | Geo with Gitaly | Gitaly Checksum |
+| Git | Object pools for forked project deduplication | Geo with Gitaly | _Not implemented_ |
+| Blobs | User uploads _(filesystem)_ | Geo with API | _Not implemented_ |
+| Blobs | User uploads _(object storage)_ | Geo with API/Managed (*2*) | _Not implemented_ |
+| Blobs | LFS objects _(filesystem)_ | Geo with API | _Not implemented_ |
+| Blobs | LFS objects _(object storage)_ | Geo with API/Managed (*2*) | _Not implemented_ |
+| Blobs | CI job artifacts _(filesystem)_ | Geo with API | _Not implemented_ |
+| Blobs | CI job artifacts _(object storage)_ | Geo with API/Managed (*2*) | _Not implemented_ |
+| Blobs | Archived CI build traces _(filesystem)_ | Geo with API | _Not implemented_ |
+| Blobs | Archived CI build traces _(object storage)_ | Geo with API/Managed (*2*) | _Not implemented_ |
+| Blobs | Container registry _(filesystem)_ | Geo with API/Docker API | _Not implemented_ |
+| Blobs | Container registry _(object storage)_ | Geo with API/Managed/Docker API (*2*) | _Not implemented_ |

- (*1*): Redis replication can be used as part of HA with Redis sentinel. It's not used between Geo nodes.
- (*2*): Object storage replication can be performed by Geo or by your object storage provider/appliance
@@ -124,52 +124,32 @@ replicating data from those features will cause the data to be **lost**.
If you wish to use those features on a **secondary** node, or to execute a failover
successfully, you must replicate their data using some other means.

-| Feature | Replicated | Verified | Notes |
-|-----------------------------------------------------|---------------------------|-----------------------------|---------------------------------------------|
-| Application data in PostgreSQL | **Yes** | **Yes** | |
-| Project repository | **Yes** | **Yes** | |
-| Project wiki repository | **Yes** | **Yes** | |
-| Project designs repository | **Yes** | [No][design-verification] | |
-| Uploads | **Yes** | [No][upload-verification] | Verified only on transfer, or manually (*1*)|
-| LFS objects | **Yes** | [No][lfs-verification] | Verified only on transfer, or manually (*1*). Unavailable for new LFS objects in 11.11.x and 12.0.x (*2*). |
-| CI job artifacts (other than traces) | **Yes** | [No][artifact-verification] | Verified only manually (*1*) |
-| Archived traces | **Yes** | [No][artifact-verification] | Verified only on transfer, or manually (*1*)|
-| Personal snippets | **Yes** | **Yes** | |
-| Project snippets | **Yes** | **Yes** | |
-| Object pools for forked project deduplication | **Yes** | No | |
-| [Server-side Git Hooks][custom-hooks] | No | No | |
-| [Elasticsearch integration][elasticsearch] | [No][elasticsearch-replication] | No | |
-| [GitLab Pages][gitlab-pages] | [No][pages-replication] | No | |
-| [Container Registry][container-registry] | **Yes** | No | |
-| [NPM Registry][npm-registry] | [No][packages-replication] | No | |
-| [Maven Repository][maven-repository] | [No][packages-replication] | No | |
-| [Conan Repository][conan-repository] | [No][packages-replication] | No | |
-| [NuGet Repository][nuget-repository] | [No][packages-replication] | No | |
-| [External merge request diffs][merge-request-diffs] | [No][diffs-replication] | No | |
-| Content in object storage | **Yes** | No | |
+| Feature | Replicated | Verified | Notes |
+|:---------------------------------------------------------------------|:---------------------------------------------------------|:--------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------|
+| Application data in PostgreSQL | **Yes** | **Yes** | |
+| Project repository | **Yes** | **Yes** | |
+| Project wiki repository | **Yes** | **Yes** | |
+| Project designs repository | **Yes** | [No](https://gitlab.com/gitlab-org/gitlab/issues/32467) | |
+| Uploads | **Yes** | [No](https://gitlab.com/groups/gitlab-org/-/epics/1817) | Verified only on transfer, or manually (*1*) |
+| LFS objects | **Yes** | [No](https://gitlab.com/gitlab-org/gitlab/issues/8922) | Verified only on transfer, or manually (*1*). Unavailable for new LFS objects in 11.11.x and 12.0.x (*2*). |
+| CI job artifacts (other than traces) | **Yes** | [No](https://gitlab.com/gitlab-org/gitlab/issues/8923) | Verified only manually (*1*) |
+| Archived traces | **Yes** | [No](https://gitlab.com/gitlab-org/gitlab/issues/8923) | Verified only on transfer, or manually (*1*) |
+| Personal snippets | **Yes** | **Yes** | |
+| Project snippets | **Yes** | **Yes** | |
+| Object pools for forked project deduplication | **Yes** | No | |
+| [Server-side Git Hooks](../../custom_hooks.md) | No | No | |
+| [Elasticsearch integration](../../../integration/elasticsearch.md) | [No](https://gitlab.com/gitlab-org/gitlab/-/issues/1186) | No | |
+| [GitLab Pages](../../pages/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/589) | No | |
+| [Container Registry](../../packages/container_registry.md) | **Yes** | No | |
+| [NPM Registry](../../../user/packages/npm_registry/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2346) | No | |
+| [Maven Repository](../../../user/packages/maven_repository/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2346) | No | |
+| [Conan Repository](../../../user/packages/conan_repository/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2346) | No | |
+| [NuGet Repository](../../../user/packages/nuget_repository/index.md) | [No](https://gitlab.com/groups/gitlab-org/-/epics/2346) | No | |
+| [External merge request diffs](../../merge_request_diffs.md) | [No](https://gitlab.com/gitlab-org/gitlab/issues/33817) | No | |
+| Content in object storage | **Yes** | No | |

- (*1*): The integrity can be verified manually using [Integrity Check Rake Task](../../raketasks/check.md) on both nodes and comparing the output between them.
- (*2*): GitLab versions 11.11.x and 12.0.x are affected by [a bug that prevents any new LFS objects from replicating](https://gitlab.com/gitlab-org/gitlab/issues/32696).
-
-[design-replication]: https://gitlab.com/groups/gitlab-org/-/epics/1633
-[design-verification]: https://gitlab.com/gitlab-org/gitlab/issues/32467
-[upload-verification]: https://gitlab.com/groups/gitlab-org/-/epics/1817
-[lfs-verification]: https://gitlab.com/gitlab-org/gitlab/issues/8922
-[artifact-verification]: https://gitlab.com/gitlab-org/gitlab/issues/8923
-[diffs-replication]: https://gitlab.com/gitlab-org/gitlab/issues/33817
-[pages-replication]: https://gitlab.com/groups/gitlab-org/-/epics/589
-[packages-replication]: https://gitlab.com/groups/gitlab-org/-/epics/2346
-[elasticsearch-replication]: https://gitlab.com/gitlab-org/gitlab/-/issues/1186
-
-[custom-hooks]: ../../custom_hooks.md
-[elasticsearch]: ../../../integration/elasticsearch.md
-[gitlab-pages]: ../../pages/index.md
-[container-registry]: ../../packages/container_registry.md
-[npm-registry]: ../../../user/packages/npm_registry/index.md
-[maven-repository]: ../../../user/packages/maven_repository/index.md
-[conan-repository]: ../../../user/packages/conan_repository/index.md
-[nuget-repository]: ../../../user/packages/nuget_repository/index.md
-[merge-request-diffs]: ../../merge_request_diffs.md

diff --git a/doc/administration/geo/replication/high_availability.md b/doc/administration/geo/replication/high_availability.md
index 3e7102b96da..d64262e0399 100644
--- a/doc/administration/geo/replication/high_availability.md
+++ b/doc/administration/geo/replication/high_availability.md
@@ -8,7 +8,7 @@ described, it is possible to adapt these instructions to your needs.

![Geo HA Diagram](../../high_availability/img/geo-ha-diagram.png)

-_[diagram source - GitLab employees only][diagram-source]_
+_[diagram source - GitLab employees only](https://docs.google.com/drawings/d/1z0VlizKiLNXVVVaERFwgsIOuEgjcUqDTWPdQYsE7Z4c/edit)_

The topology above assumes that the **primary** and **secondary** Geo clusters
are located in two separate locations, on their own virtual network
@@ -81,7 +81,7 @@ The following steps enable a GitLab cluster to serve as the **primary** node.
   gitlab_rails['auto_migrate'] = false
   ```

-After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
+After making these changes, [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) so the changes take effect.

NOTE: **Note:** PostgreSQL and Redis should have already been disabled on the
application servers, and connections from the application servers to those
@@ -193,7 +193,7 @@ the **primary** database. Use the following as a guide.
   geo_logcursor['enable'] = false
   ```

-After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
+After making these changes, [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) so the changes take effect.

If using an external PostgreSQL instance, refer also to
[Geo with external PostgreSQL instances](external_database.md).
@@ -264,7 +264,7 @@ Configure the tracking database.
   unicorn['enable'] = false
   ```

-After making these changes, [reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
+After making these changes, [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) so the changes take effect.

If using an external PostgreSQL instance, refer also to
[Geo with external PostgreSQL instances](external_database.md).
@@ -342,7 +342,7 @@ servers connect to the databases.
NOTE: **Note:** Make sure that current node IP is listed in `postgresql['md5_auth_cidr_addresses']` setting of your remote database.

-After making these changes [Reconfigure GitLab][gitlab-reconfigure] so the changes take effect.
+After making these changes [Reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) so the changes take effect.

On the secondary the following GitLab frontend services will be enabled:
@@ -458,6 +458,3 @@ application servers above, with some changes to run only the `sidekiq` service:
`sidekiq['enable'] = false`.

These servers do not need to be attached to the load balancer.
-
-[diagram-source]: https://docs.google.com/drawings/d/1z0VlizKiLNXVVVaERFwgsIOuEgjcUqDTWPdQYsE7Z4c/edit
-[gitlab-reconfigure]: ../../restart_gitlab.md#omnibus-gitlab-reconfigure

diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index ee246381091..5ae199e374a 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -261,7 +261,7 @@ default to 1. You may need to increase this value if you have more
Be sure to restart PostgreSQL for this to take effect. See the
[PostgreSQL replication
-setup][database-pg-replication] guide for more details.
+setup](database.md#postgresql-replication) guide for more details.

### Message: `FATAL: could not start WAL streaming: ERROR: replication slot "geo_secondary_my_domain_com" does not exist`?
@@ -273,7 +273,7 @@ process](database.md) on the **secondary** node .

### Message: "Command exceeded allowed execution time" when setting up replication?

-This may happen while [initiating the replication process][database-start-replication] on the **secondary** node,
+This may happen while [initiating the replication process](database.md#step-3-initiate-the-replication-process) on the **secondary** node,
and indicates that your initial dataset is too large to be replicated in the default timeout (30 minutes).

Re-run `gitlab-ctl replicate-geo-database`, but include a larger value for
@@ -767,9 +767,6 @@ reload of the FDW schema. To manually reload the FDW schema:
   SELECT * FROM gitlab_secondary.projects limit 1;
   ```

-[database-start-replication]: database.md#step-3-initiate-the-replication-process
-[database-pg-replication]: database.md#postgresql-replication
-
### "Geo database has an outdated FDW remote schema" error

GitLab can error with a `Geo database has an outdated FDW remote schema` message.

diff --git a/doc/administration/geo/replication/using_a_geo_server.md b/doc/administration/geo/replication/using_a_geo_server.md
index b1ba5b3e876..0f55272f667 100644
--- a/doc/administration/geo/replication/using_a_geo_server.md
+++ b/doc/administration/geo/replication/using_a_geo_server.md
@@ -2,7 +2,7 @@
# Using a Geo Server **(PREMIUM ONLY)**

-After you set up the [database replication and configure the Geo nodes][req], use your closest GitLab node as you would a normal standalone GitLab instance.
+After you set up the [database replication and configure the Geo nodes](index.md#setup-instructions), use your closest GitLab node as you would a normal standalone GitLab instance.

Pushing directly to a **secondary** node (for both HTTP, SSH including Git LFS) was [introduced](https://about.gitlab.com/releases/2018/09/22/gitlab-11-3-released/) in [GitLab Premium](https://about.gitlab.com/pricing/#self-managed) 11.3.
@@ -18,5 +18,3 @@
remote: ssh://git@primary.geo/user/repo.git
remote: Everything up-to-date
```
-
-[req]: index.md#setup-instructions