author      GitLab Bot <gitlab-bot@gitlab.com>  2020-02-25 09:09:10 +0000
committer   GitLab Bot <gitlab-bot@gitlab.com>  2020-02-25 09:09:10 +0000
commit      b98fa9ef3d5bead417ae2f325cb64637883264e9 (patch)
tree        409f2002dd056f12d82d3959b3e6f012c4087123 /doc/administration/geo
parent      7e3005967df23a957fe1998c8de4f50b412e69e7 (diff)
download    gitlab-ce-b98fa9ef3d5bead417ae2f325cb64637883264e9.tar.gz
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc/administration/geo')
-rw-r--r--  doc/administration/geo/disaster_recovery/background_verification.md   8
-rw-r--r--  doc/administration/geo/disaster_recovery/index.md                      6
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md          18
-rw-r--r--  doc/administration/geo/replication/configuration.md                   10
-rw-r--r--  doc/administration/geo/replication/datatypes.md                        2
-rw-r--r--  doc/administration/geo/replication/docker_registry.md                  5
-rw-r--r--  doc/administration/geo/replication/index.md                            2
-rw-r--r--  doc/administration/geo/replication/location_aware_git_url.md           2
-rw-r--r--  doc/administration/geo/replication/object_storage.md                   2
-rw-r--r--  doc/administration/geo/replication/remove_geo_node.md                  2
-rw-r--r--  doc/administration/geo/replication/troubleshooting.md                 22
-rw-r--r--  doc/administration/geo/replication/tuning.md                           4
-rw-r--r--  doc/administration/geo/replication/version_specific_updates.md         2
13 files changed, 43 insertions, 42 deletions
diff --git a/doc/administration/geo/disaster_recovery/background_verification.md b/doc/administration/geo/disaster_recovery/background_verification.md
index 322b3b3fe4d..2caaaac2b9d 100644
--- a/doc/administration/geo/disaster_recovery/background_verification.md
+++ b/doc/administration/geo/disaster_recovery/background_verification.md
@@ -51,14 +51,14 @@ Feature.enable('geo_repository_verification')
## Repository verification
-Navigate to the **Admin Area > Geo** dashboard on the **primary** node and expand
+Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **primary** node and expand
the **Verification information** tab for that node to view automatic checksumming
status for repositories and wikis. Successes are shown in green, pending work
in grey, and failures in red.
![Verification status](img/verification-status-primary.png)
-Navigate to the **Admin Area > Geo** dashboard on the **secondary** node and expand
+Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **secondary** node and expand
the **Verification information** tab for that node to view automatic verification
status for repositories and wikis. As with checksumming, successes are shown in
green, pending work in grey, and failures in red.
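If you prefer a command-line view of the same information, a minimal sketch, assuming the `geo:status` Rake task is available in your GitLab version, is to run the following on the **secondary** node:

```shell
# Prints overall replication and verification progress for this secondary node.
sudo gitlab-rake geo:status
```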
@@ -85,7 +85,7 @@ data. The default and recommended re-verification interval is 7 days, though
an interval as short as 1 day can be set. Shorter intervals reduce risk but
increase load and vice versa.
-Navigate to the **Admin Area > Geo** dashboard on the **primary** node, and
+Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **primary** node, and
click the **Edit** button for the **primary** node to customize the minimum
re-verification interval:
@@ -134,7 +134,7 @@ sudo gitlab-rake geo:verification:wiki:reset
If the **primary** and **secondary** nodes have a checksum verification mismatch, the cause may not be apparent. To find the cause of a checksum mismatch:
-1. Navigate to the **Admin Area > Projects** dashboard on the **primary** node, find the
+1. Navigate to the **{admin}** **Admin Area >** **{overview}** **Overview > Projects** dashboard on the **primary** node, find the
project whose checksum differences you want to check, and click on the
**Edit** button:
![Projects dashboard](img/checksum-differences-admin-projects.png)
diff --git a/doc/administration/geo/disaster_recovery/index.md b/doc/administration/geo/disaster_recovery/index.md
index 5455e5914e1..a329b02e7f2 100644
--- a/doc/administration/geo/disaster_recovery/index.md
+++ b/doc/administration/geo/disaster_recovery/index.md
@@ -205,20 +205,20 @@ secondary domain, like changing Git remotes and API URLs.
This command will use the changed `external_url` configuration defined
in `/etc/gitlab/gitlab.rb`.
-1. For GitLab 11.11 through 12.7 only, you may need to update the primary
+1. For GitLab 11.11 through 12.7 only, you may need to update the **primary**
node's name in the database. This bug has been fixed in GitLab 12.8.
To determine if you need to do this, search for the
`gitlab_rails["geo_node_name"]` setting in your `/etc/gitlab/gitlab.rb`
file. If it is commented out with `#` or not found at all, then you will
- need to update the primary node's name in the database. You can search for it
+ need to update the **primary** node's name in the database. You can search for it
like so:
```shell
grep "geo_node_name" /etc/gitlab/gitlab.rb
```
- To update the primary node's name in the database:
+ To update the **primary** node's name in the database:
```shell
gitlab-rails runner 'Gitlab::Geo.primary_node.update!(name: GeoNode.current_node_name)'
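# Hedged follow-up check, assuming `name` is the attribute updated above:
# print the stored name to confirm it now matches `GeoNode.current_node_name`.
gitlab-rails runner 'puts Gitlab::Geo.primary_node.name'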
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index 25050e4b13e..bfeb202dd9a 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -92,7 +92,7 @@ The maintenance window won't end until Geo replication and verification is
completely finished. To keep the window as short as possible, you should
ensure these processes are as close to 100% as possible during active use.
-Navigate to the **Admin Area > Geo** dashboard on the **secondary** node to
+Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **secondary** node to
review status. Replicated objects (shown in green) should be close to 100%,
and there should be no failures (shown in red). If a large proportion of
objects aren't yet replicated (shown in grey), consider giving the node more
@@ -117,8 +117,8 @@ This [content was moved to another location][background-verification].
### Notify users of scheduled maintenance
-On the **primary** node, navigate to **Admin Area > Messages**, add a broadcast
-message. You can check under **Admin Area > Geo** to estimate how long it
+On the **primary** node, navigate to **{admin}** **Admin Area >** **{bullhorn}** **Messages**, add a broadcast
+message. You can check under **{admin}** **Admin Area >** **{location-dot}** **Geo** to estimate how long it
will take to finish syncing. An example message would be:
> A scheduled maintenance will take place at XX:XX UTC. We expect it to take
@@ -162,8 +162,8 @@ access to the **primary** node during the maintenance window.
existing Git repository with an SSH remote URL. The server should refuse
connection.
-1. Disable non-Geo periodic background jobs on the primary node by navigating
- to **Admin Area > Monitoring > Background Jobs > Cron** , pressing `Disable All`,
+1. Disable non-Geo periodic background jobs on the **primary** node by navigating
+ to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Cron**, pressing `Disable All`,
and then pressing `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
This job will re-enable several other cron jobs that are essential for planned
failover to complete successfully.
@@ -172,11 +172,11 @@ access to the **primary** node during the maintenance window.
1. If you are manually replicating any data not managed by Geo, trigger the
final replication process now.
-1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
+1. On the **primary** node, navigate to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Queues**
and wait for all queues except those with `geo` in the name to drop to 0.
These queues contain work that has been submitted by your users; failing over
before it is completed will cause the work to be lost.
-1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
+1. On the **primary** node, navigate to **{admin}** **Admin Area >** **{location-dot}** **Geo** and wait for the
following conditions to be true of the **secondary** node you are failing over to:
- All replication meters reach 100% replicated, 0% failures.
@@ -184,7 +184,7 @@ access to the **primary** node during the maintenance window.
- Database replication lag is 0ms.
- The Geo log cursor is up to date (0 events behind).
-1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
+1. On the **secondary** node, navigate to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Queues**
and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
1. On the **secondary** node, use [these instructions][foreground-verification]
to verify the integrity of CI artifacts, LFS objects, and uploads in file
@@ -201,7 +201,7 @@ Finally, follow the [Disaster Recovery docs][disaster-recovery] to promote the
Once it is completed, the maintenance window is over! Your new **primary** node will now
begin to diverge from the old one. If problems do arise at this point, failing
back to the old **primary** node [is possible][bring-primary-back], but likely to result
-in the loss of any data uploaded to the new primary in the meantime.
+in the loss of any data uploaded to the new **primary** in the meantime.
Don't forget to remove the broadcast message after failover is complete.
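If you would rather script that cleanup, here is a minimal sketch using the broadcast messages API, assuming an admin personal access token; the message ID `1` is illustrative and comes from the list call:

```shell
# List active broadcast messages to find the ID of the maintenance notice.
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/broadcast_messages"

# Delete the maintenance notice by ID.
curl --request DELETE --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/broadcast_messages/1"
```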
diff --git a/doc/administration/geo/replication/configuration.md b/doc/administration/geo/replication/configuration.md
index 5c8ad18a4df..74ece38d793 100644
--- a/doc/administration/geo/replication/configuration.md
+++ b/doc/administration/geo/replication/configuration.md
@@ -184,7 +184,7 @@ keys must be manually replicated to the **secondary** node.
gitlab-ctl reconfigure
```
-1. Visit the **primary** node's **Admin Area > Geo**
+1. Visit the **primary** node's **{admin}** **Admin Area >** **{location-dot}** **Geo**
(`/admin/geo/nodes`) in your browser.
1. Click the **New node** button.
![Add secondary node](img/adding_a_secondary_node.png)
@@ -231,7 +231,7 @@ You can login to the **secondary** node with the same credentials as used for th
Using Hashed Storage significantly improves Geo replication. Project and group
renames no longer require synchronization between nodes.
-1. Visit the **primary** node's **Admin Area > Settings > Repository**
+1. Visit the **primary** node's **{admin}** **Admin Area >** **{settings}** **Settings > Repository**
(`/admin/application_settings/repository`) in your browser.
1. In the **Repository storage** section, check **Use hashed storage paths for newly created and renamed projects**.
@@ -248,7 +248,7 @@ on the **secondary** node.
### Step 6. Enable Git access over HTTP/HTTPS
Geo synchronizes repositories over HTTP/HTTPS, and therefore requires this clone
-method to be enabled. Navigate to **Admin Area > Settings**
+method to be enabled. Navigate to **{admin}** **Admin Area >** **{settings}** **Settings**
(`/admin/application_settings/general`) on the **primary** node, and set
`Enabled Git access protocols` to `Both SSH and HTTP(S)` or `Only HTTP(S)`.
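As a hedged alternative to the UI, the same setting can be changed through the application settings API, assuming your version exposes the `enabled_git_access_protocol` attribute (the host below is illustrative):

```shell
# Restrict Git access to HTTP(S) only; leave the value blank to allow both SSH and HTTP(S).
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" \
     --data "enabled_git_access_protocol=http" \
     "https://gitlab.example.com/api/v4/application/settings"
```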
@@ -257,13 +257,13 @@ method to be enabled. Navigate to **Admin Area > Settings**
Your **secondary** node is now configured!
You can log in to the **secondary** node with the same credentials you used for the
-**primary** node. Visit the **secondary** node's **Admin Area > Geo**
+**primary** node. Visit the **secondary** node's **{admin}** **Admin Area >** **{location-dot}** **Geo**
(`/admin/geo/nodes`) in your browser to check if it's correctly identified as a
**secondary** Geo node and if Geo is enabled.
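As a complementary check from the command line, you can run the same health checks used in the troubleshooting documentation on the **secondary** node:

```shell
# Runs Geo health checks (database replication, node configuration, and so on).
sudo gitlab-rake gitlab:geo:check
```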
The initial replication, or 'backfill', will probably still be in progress. You
can monitor the synchronization process on each Geo node from the **primary**
-node's Geo Nodes dashboard in your browser.
+node's **Geo Nodes** dashboard in your browser.
![Geo dashboard](img/geo_node_dashboard.png)
diff --git a/doc/administration/geo/replication/datatypes.md b/doc/administration/geo/replication/datatypes.md
index 6b1b3131a96..e7c1b2793c7 100644
--- a/doc/administration/geo/replication/datatypes.md
+++ b/doc/administration/geo/replication/datatypes.md
@@ -97,7 +97,7 @@ as well as permissions and credentials.
PostgreSQL can also hold some cached data, like HTML-rendered Markdown and cached merge request diffs (these can
also be configured to be offloaded to object storage).
-We use PostgreSQL's own replication functionality to replicate data from the primary to secondary nodes.
+We use PostgreSQL's own replication functionality to replicate data from the **primary** to **secondary** nodes.
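To observe this replication from the database side, here is a minimal sketch, assuming an Omnibus installation with the `gitlab-psql` wrapper and PostgreSQL 10 or later (older versions name the columns differently):

```shell
# On the primary database node: list connected standbys and their replication state.
sudo gitlab-psql -c 'SELECT client_addr, state, replay_lsn FROM pg_stat_replication;'
```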
We use Redis both as a cache store and to hold persistent data for our background jobs system. Because both
use cases have data that is exclusive to the same Geo node, we don't replicate it between nodes.
diff --git a/doc/administration/geo/replication/docker_registry.md b/doc/administration/geo/replication/docker_registry.md
index 7d041d97ed2..1d57fece0e4 100644
--- a/doc/administration/geo/replication/docker_registry.md
+++ b/doc/administration/geo/replication/docker_registry.md
@@ -17,7 +17,7 @@ integrated [Container Registry](../../packages/container_registry.md#container-r
You can enable storage-agnostic replication so it
can be used for cloud or local storage. Whenever a new image is pushed to the
-primary node, each **secondary** node will pull it to its own container
+**primary** node, each **secondary** node will pull it to its own container
repository.
To configure Docker Registry replication:
@@ -111,6 +111,7 @@ generate a short-lived JWT that is pull-only-capable to access the
### Verify replication
-To verify Container Registry replication is working, go to **Admin Area > Geo** (`/admin/geo/nodes`) on the **secondary** node.
+To verify Container Registry replication is working, go to **{admin}** **Admin Area >** **{location-dot}** **Geo**
+(`/admin/geo/nodes`) on the **secondary** node.
The initial replication, or "backfill", will probably still be in progress.
You can monitor the synchronization process on each Geo node from the **primary** node's **Geo Nodes** dashboard in your browser.
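For a quick manual spot check, a minimal sketch follows; the registry host and image path are placeholders, and authentication is assumed to work with your GitLab credentials or a deploy token:

```shell
# Pull a known image from the secondary's registry to confirm it has been replicated.
docker login registry.secondary.example.com
docker pull registry.secondary.example.com/group/project/image:latest
```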
diff --git a/doc/administration/geo/replication/index.md b/doc/administration/geo/replication/index.md
index ccc8b9ecd2d..4f598162a63 100644
--- a/doc/administration/geo/replication/index.md
+++ b/doc/administration/geo/replication/index.md
@@ -270,7 +270,7 @@ For answers to common questions, see the [Geo FAQ](faq.md).
Since GitLab 9.5, Geo stores structured log messages in a `geo.log` file. For Omnibus installations, this file is at `/var/log/gitlab/gitlab-rails/geo.log`.
-This file contains information about when Geo attempts to sync repositories and files. Each line in the file contains a separate JSON entry that can be ingested into, for example, Elasticsearch or Splunk.
+This file contains information about when Geo attempts to sync repositories and files. Each line in the file contains a separate JSON entry that can be ingested into services such as Elasticsearch or Splunk.
For example:
diff --git a/doc/administration/geo/replication/location_aware_git_url.md b/doc/administration/geo/replication/location_aware_git_url.md
index 6183a0ad119..f1f1edd4a9b 100644
--- a/doc/administration/geo/replication/location_aware_git_url.md
+++ b/doc/administration/geo/replication/location_aware_git_url.md
@@ -37,7 +37,7 @@ In any case, you require:
- A Route53 Hosted Zone managing your domain.
If you have not yet set up a Geo **primary** node and **secondary** node, please consult
-[the Geo setup instructions](https://docs.gitlab.com/ee/administration/geo/replication/#setup-instructions).
+[the Geo setup instructions](index.md#setup-instructions).
## Create a traffic policy
diff --git a/doc/administration/geo/replication/object_storage.md b/doc/administration/geo/replication/object_storage.md
index 3251a673e4e..0c1bec5d4ae 100644
--- a/doc/administration/geo/replication/object_storage.md
+++ b/doc/administration/geo/replication/object_storage.md
@@ -24,7 +24,7 @@ whether they are stored on the local filesystem or in object storage.
To enable GitLab replication, you must:
-1. Go to **Admin Area > Geo**.
+1. Go to **{admin}** **Admin Area >** **{location-dot}** **Geo**.
1. Press **Edit** on the **secondary** node.
1. Enable the **Allow this secondary node to replicate content on Object Storage**
checkbox.
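If you manage Geo nodes through the API instead, a minimal sketch of the equivalent change follows, assuming your version exposes a `sync_object_storage` flag on the Geo Nodes API; the node ID `2` is illustrative:

```shell
# Enable object storage replication for the secondary node with ID 2.
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" \
     --data "sync_object_storage=true" \
     "https://primary.example.com/api/v4/geo_nodes/2"
```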
diff --git a/doc/administration/geo/replication/remove_geo_node.md b/doc/administration/geo/replication/remove_geo_node.md
index c3ff0ef47c1..c04c7aec858 100644
--- a/doc/administration/geo/replication/remove_geo_node.md
+++ b/doc/administration/geo/replication/remove_geo_node.md
@@ -2,7 +2,7 @@
**Secondary** nodes can be removed from the Geo cluster using the Geo admin page of the **primary** node. To remove a **secondary** node:
-1. Navigate to **Admin Area > Geo** (`/admin/geo/nodes`).
+1. Navigate to **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`).
1. Click the **Remove** button for the **secondary** node you want to remove.
1. Confirm by clicking **Remove** when the prompt appears.
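To script the removal, here is a minimal sketch using the Geo Nodes API, assuming an admin personal access token on the **primary** node; the node ID `2` is illustrative and comes from the list call:

```shell
# List Geo nodes to find the ID of the secondary you want to remove.
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://primary.example.com/api/v4/geo_nodes"

# Remove the secondary node by ID.
curl --request DELETE --header "PRIVATE-TOKEN: <your_access_token>" "https://primary.example.com/api/v4/geo_nodes/2"
```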
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index cbb01c41002..b5be29f7dff 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -19,7 +19,7 @@ Before attempting more advanced troubleshooting:
### Check the health of the **secondary** node
-Visit the **primary** node's **Admin Area > Geo** (`/admin/geo/nodes`) in
+Visit the **primary** node's **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`) in
your browser. We perform the following health checks on each **secondary** node
to help identify if something is wrong:
@@ -122,7 +122,7 @@ Geo finds the current machine's Geo node name in `/etc/gitlab/gitlab.rb` by:
- If that is not defined, using the `external_url` setting.
This name is used to look up the node with the same **Name** in
-**Admin Area > Geo**.
+**{admin}** **Admin Area >** **{location-dot}** **Geo**.
To check if the current machine has a node name that matches a node in the
database, run the check task:
@@ -211,9 +211,9 @@ sudo gitlab-rake gitlab:geo:check
Checking Geo ... Finished
```
- - Ensure that you have added the secondary node in the Admin Area of the primary node.
- - Ensure that you entered the `external_url` or `gitlab_rails['geo_node_name']` when adding the secondary node in the admin are of the primary node.
- - Prior to GitLab 12.4, edit the secondary node in the Admin Area of the primary node and ensure that there is a trailing `/` in the `Name` field.
+ - Ensure that you have added the secondary node in the Admin Area of the **primary** node.
+ - Ensure that you entered the `external_url` or `gitlab_rails['geo_node_name']` when adding the secondary node in the Admin Area of the **primary** node.
+ - Prior to GitLab 12.4, edit the secondary node in the Admin Area of the **primary** node and ensure that there is a trailing `/` in the `Name` field.
1. Check returns Exception: PG::UndefinedTable: ERROR: relation "geo_nodes" does not exist
@@ -244,8 +244,8 @@ sudo gitlab-rake gitlab:geo:check
When performing a PostgreSQL major version (9 to 10) update, this is expected. Follow:
- - [initiate-the-replication-process](https://docs.gitlab.com/ee/administration/geo/replication/database.html#step-3-initiate-the-replication-process)
- - [Geo database has an outdated FDW remote schema](https://docs.gitlab.com/ee/administration/geo/replication/troubleshooting.html#geo-database-has-an-outdated-fdw-remote-schema-error)
+ - [initiate-the-replication-process](database.md#step-3-initiate-the-replication-process)
+ - [Geo database has an outdated FDW remote schema](troubleshooting.md#geo-database-has-an-outdated-fdw-remote-schema-error)
## Fixing replication errors
@@ -359,7 +359,7 @@ To help us resolve this problem, consider commenting on
GitLab places a timeout on all repository clones, including project imports
and Geo synchronization operations. If a fresh `git clone` of a repository
-on the primary takes more than a few minutes, you may be affected by this.
+on the **primary** takes more than a few minutes, you may be affected by this.
To increase the timeout, add the following line to `/etc/gitlab/gitlab.rb`
on the **secondary** node:
@@ -494,7 +494,7 @@ If you encounter this message when running `gitlab-rake geo:set_secondary_as_pri
or `gitlab-ctl promote-to-primary-node`, either:
- Enter a Rails console and run:
-
+
```ruby
Rails.application.load_tasks; nil
Gitlab::Geo.expire_cache_keys!([:primary_node, :current_node])
@@ -750,7 +750,7 @@ If you are able to log in to the **primary** node, but you receive this error
when attempting to log in to a **secondary**, you should check that the Geo
node's URL matches its external URL.
-1. On the primary, visit **Admin Area > Geo**.
+1. On the **primary** node, visit **{admin}** **Admin Area >** **{location-dot}** **Geo**.
1. Find the affected **secondary** and click **Edit**.
1. Ensure the **URL** field matches the value found in `/etc/gitlab/gitlab.rb`
in `external_url "https://gitlab.example.com"` on the frontend server(s) of
@@ -833,4 +833,4 @@ To resolve this issue:
- Check `/var/log/gitlab/gitlab-rails/geo.log` to see if the **secondary** node is
using IPv6 to send its status to the **primary** node. If it is, add an entry to
the **primary** node using IPv4 in the `/etc/hosts` file. Alternatively, you should
- [enable IPv6 on the primary node](https://docs.gitlab.com/omnibus/settings/nginx.html#setting-the-nginx-listen-address-or-addresses).
+ [enable IPv6 on the **primary** node](https://docs.gitlab.com/omnibus/settings/nginx.html#setting-the-nginx-listen-address-or-addresses).
diff --git a/doc/administration/geo/replication/tuning.md b/doc/administration/geo/replication/tuning.md
index 3ee9937774a..972bf002935 100644
--- a/doc/administration/geo/replication/tuning.md
+++ b/doc/administration/geo/replication/tuning.md
@@ -2,8 +2,8 @@
## Changing the sync capacity values
-In the Geo admin page (`/admin/geo/nodes`), there are several variables that
-can be tuned to improve performance of Geo:
+In the Geo admin page at **{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`),
+there are several variables that can be tuned to improve performance of Geo:
- Repository sync capacity.
- File sync capacity.
diff --git a/doc/administration/geo/replication/version_specific_updates.md b/doc/administration/geo/replication/version_specific_updates.md
index 89e1fc9eaa3..77fcaaba764 100644
--- a/doc/administration/geo/replication/version_specific_updates.md
+++ b/doc/administration/geo/replication/version_specific_updates.md
@@ -186,7 +186,7 @@ Replicating over SSH has been deprecated, and support for this option will be
removed in a future release.
To switch to HTTP/HTTPS replication, log in to the **primary** node as an admin and visit
-**Admin Area > Geo** (`/admin/geo/nodes`). For each **secondary** node listed,
+**{admin}** **Admin Area >** **{location-dot}** **Geo** (`/admin/geo/nodes`). For each **secondary** node listed,
press the "Edit" button, change the "Repository cloning" setting from
"SSH (deprecated)" to "HTTP/HTTPS", and press "Save changes". This should take
effect immediately.