author    GitLab Bot <gitlab-bot@gitlab.com>  2021-08-19 09:08:42 +0000
committer GitLab Bot <gitlab-bot@gitlab.com>  2021-08-19 09:08:42 +0000
commit    b76ae638462ab0f673e5915986070518dd3f9ad3
tree      bdab0533383b52873be0ec0eb4d3c66598ff8b91
parent    434373eabe7b4be9593d18a585fb763f1e5f1a6f
Add latest changes from gitlab-org/gitlab@14-2-stable-ee (tag: v14.2.0-rc42)
Diffstat (limited to 'doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md')

 doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
index 16ae5bde062..27990748071 100644
--- a/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
+++ b/doc/administration/geo/disaster_recovery/runbooks/planned_failover_multi_node.md
@@ -72,7 +72,7 @@ On the **secondary** node:
objects aren't yet replicated (shown in gray), consider giving the node more
time to complete.
- ![Replication status](../../replication/img/geo_node_dashboard_v14_0.png)
+ ![Replication status](../../replication/img/geo_dashboard_v14_0.png)
If any objects are failing to replicate, this should be investigated before
scheduling the maintenance window. After a planned failover, anything that
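As an aside, not part of the diff itself: the replication status shown on the dashboard above can also be checked from the command line on the **secondary** node, using the Geo status Rake task. A minimal sketch; the exact output format varies by GitLab version:

```shell
# Run on the secondary node; prints per-datatype replication
# and verification progress for the Geo site.
sudo gitlab-rake geo:status
```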
@@ -94,7 +94,7 @@ follow these steps to avoid unnecessary data loss:
1. Until a [read-only mode](https://gitlab.com/gitlab-org/gitlab/-/issues/14609)
is implemented, updates must be prevented from happening manually to the
- **primary**. Note that your **secondary** node still needs read-only
+ **primary**. Your **secondary** node still needs read-only
access to the **primary** node during the maintenance window:
1. At the scheduled time, using your cloud provider or your node's firewall, block
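The blocking step at the end of the second hunk can be implemented in several ways. A minimal sketch using iptables, assuming a Linux **primary** node and using 203.0.113.10 as a placeholder for the **secondary** node's IP (the address and the port list are illustrative, not taken from the original document):

```shell
# Allow the secondary node to keep read-only access to the primary,
# then reject HTTP, HTTPS, and SSH traffic from everyone else.
# 203.0.113.10 is a placeholder; substitute your secondary node's address.
for port in 80 443 22; do
  sudo iptables -A INPUT -p tcp -s 203.0.113.10 --dport "$port" -j ACCEPT
  sudo iptables -A INPUT -p tcp --dport "$port" -j REJECT
done
```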