Diffstat:
 doc/administration/geo/disaster_recovery/planned_failover.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index ef0e67434d0..a0cf263a762 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -110,7 +110,7 @@ The maintenance window won't end until Geo replication and verification is
completely finished. To keep the window as short as possible, you should
ensure these processes are as close to 100% as possible during active use.
-Navigate to the **{admin}** **Admin Area >** **{location-dot}** **Geo** dashboard on the **secondary** node to
+Navigate to the **Admin Area > Geo** dashboard on the **secondary** node to
review status. Replicated objects (shown in green) should be close to 100%,
and there should be no failures (shown in red). If a large proportion of
objects aren't yet replicated (shown in gray), consider giving the node more
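The replication status shown on that dashboard can also be read from the
command line. A minimal sketch, assuming an Omnibus GitLab installation where
the `geo:status` Rake task is available on the **secondary** node:

```shell
# On the secondary node: print per-data-type replication and
# verification status (repositories, LFS objects, attachments, and so on).
sudo gitlab-rake geo:status
```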
@@ -135,8 +135,8 @@ This [content was moved to another location](background_verification.md).
### Notify users of scheduled maintenance
-On the **primary** node, navigate to **{admin}** **Admin Area >** **{bullhorn}** **Messages**, add a broadcast
-message. You can check under **{admin}** **Admin Area >** **{location-dot}** **Geo** to estimate how long it
+On the **primary** node, navigate to **Admin Area > Messages** and add a broadcast
+message. You can check under **Admin Area > Geo** to estimate how long it
will take to finish syncing. An example message would be:
> Scheduled maintenance will take place at XX:XX UTC. We expect it to take
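If scripting the announcement is preferable to the UI, the broadcast message
can also be created through the REST API. A hedged sketch, assuming API v4 and
an administrator personal access token; the token, host, message text, and
timestamps below are placeholders:

```shell
# Create a broadcast message announcing the maintenance window.
# <your_access_token> and primary.example.com are placeholders.
curl --request POST \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  --data "message=Scheduled maintenance will take place at XX:XX UTC" \
  --data "starts_at=2020-01-01T11:00:00Z" \
  --data "ends_at=2020-01-01T13:00:00Z" \
  "https://primary.example.com/api/v4/broadcast_messages"
```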
@@ -181,7 +181,7 @@ access to the **primary** node during the maintenance window.
connection.
1. Disable non-Geo periodic background jobs on the **primary** node by navigating
- to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Cron**, pressing `Disable All`,
+ to **Admin Area > Monitoring > Background Jobs > Cron**, pressing `Disable All`,
and then pressing `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
This job will re-enable several other cron jobs that are essential for planned
failover to complete successfully.
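The same `Disable All` / `Enable` sequence can be performed from the console
when the UI is not reachable. A sketch, assuming the `sidekiq-cron` gem that
backs these Admin Area buttons and that the job name matches the one shown in
the UI:

```shell
# Disable every Sidekiq cron job, then re-enable only the Geo
# configuration worker, mirroring the UI steps described above.
sudo gitlab-rails runner '
  Sidekiq::Cron::Job.all.each(&:disable!)
  Sidekiq::Cron::Job.find("geo_sidekiq_cron_config_worker").enable!
'
```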
@@ -190,11 +190,11 @@ access to the **primary** node during the maintenance window.
1. If you are manually replicating any data not managed by Geo, trigger the
final replication process now.
-1. On the **primary** node, navigate to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Queues**
+1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
and wait for all queues except those with `geo` in the name to drop to 0.
These queues contain work that has been submitted by your users; failing over
before it is completed will cause the work to be lost.
-1. On the **primary** node, navigate to **{admin}** **Admin Area >** **{location-dot}** **Geo** and wait for the
+1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
following conditions to be true of the **secondary** node you are failing over to:
- All replication meters reach 100% replicated, 0% failures.
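For the queue-draining step above, queue lengths can also be inspected from
the console rather than the Admin Area. A sketch using Sidekiq's public API,
assuming an Omnibus installation:

```shell
# Print the size of every Sidekiq queue; all queues without `geo`
# in the name should reach 0 before failing over.
sudo gitlab-rails runner '
  Sidekiq::Queue.all.each { |q| puts format("%-40s %d", q.name, q.size) }
'
```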
@@ -202,7 +202,7 @@ access to the **primary** node during the maintenance window.
- Database replication lag is 0ms.
- The Geo log cursor is up to date (0 events behind).
-1. On the **secondary** node, navigate to **{admin}** **Admin Area >** **{monitor}** **Monitoring > Background Jobs > Queues**
+1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
1. On the **secondary** node, use [these instructions](../../raketasks/check.md)
to verify the integrity of CI artifacts, LFS objects, and uploads in file
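On an Omnibus installation, the integrity checks linked above correspond to
three Rake tasks. A sketch of running them on the **secondary** node:

```shell
# Verify checksums of file-storage objects against the database records.
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```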