Commit 747c377e authored by Manisha Singh, committed by Achilleas Pipinellis

Fix Vale issues for planned_failover_single_node.md

parent 7d3777a3
@@ -19,7 +19,7 @@ This runbook is in **alpha**. For complete, production-ready documentation, see
| Geo site | Single-node |
| Secondaries | One |
-This runbook will guide you through a planned failover of a single-node Geo site
+This runbook guides you through a planned failover of a single-node Geo site
with one secondary. The following general architecture is assumed:
```mermaid
@@ -34,7 +34,7 @@ graph TD
end
```
-This guide will result in the following:
+This guide results in the following:
1. An offline primary.
1. A promoted secondary that is now the new primary.
@@ -61,7 +61,7 @@ time to complete.
If any objects are failing to replicate, this should be investigated before
scheduling the maintenance window. After a planned failover, anything that
-failed to replicate will be **lost**.
+failed to replicate is **lost**.
You can use the
[Geo status API](../../../../api/geo_nodes.md#retrieve-project-sync-or-verification-failures-that-occurred-on-the-current-node)
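For illustration, a call along these lines could surface anything that failed to replicate or verify. This is a sketch: the hostname and token are placeholders, and the endpoint is the one described on the linked page.

```shell
# List replication or verification failures recorded on the current Geo node.
# Replace the token and hostname with values for your deployment.
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://primary.example.com/api/v4/geo_nodes/current/failures"
```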
@@ -102,10 +102,10 @@ follow these steps to avoid unnecessary data loss:
   sudo iptables -A INPUT -p tcp --dport 443 -j REJECT
```
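Depending on how users reach the node, plain HTTP and Git over SSH may also need to be rejected. A sketch, with port numbers as assumptions to adapt to your deployment:

```shell
# Also reject plain HTTP and Git over SSH, if your users rely on them.
# Port numbers are assumptions; match them to your deployment.
sudo iptables -A INPUT -p tcp --dport 80 -j REJECT
sudo iptables -A INPUT -p tcp --dport 22 -j REJECT
```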
-From this point, users will be unable to view their data or make changes on the
-**primary** node. They will also be unable to log in to the **secondary** node.
-However, existing sessions will work for the remainder of the maintenance period, and
-public data will be accessible throughout.
+From this point, users are unable to view their data or make changes on the
+**primary** node. They are also unable to log in to the **secondary** node.
+However, existing sessions need to work for the remainder of the maintenance period, and
+so public data is accessible throughout.
1. Verify that HTTP traffic to the **primary** node is blocked by visiting it in a
   browser from another IP address. The server should refuse the connection.
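The same check can be scripted from another host; a sketch with a placeholder hostname:

```shell
# From a host that is not on the maintenance allow-list, confirm HTTPS is rejected.
# primary.example.com is a placeholder for the primary node's address.
curl --include --connect-timeout 5 https://primary.example.com
# Expect "Connection refused" (or a timeout) rather than an HTTP response.
```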
@@ -155,8 +155,8 @@ follow these steps to avoid unnecessary data loss:
1. [Run an integrity check](../../../raketasks/check.md) to verify the integrity
of CI artifacts, LFS objects, and uploads in file storage.
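On an Omnibus installation these checks are usually run as Rake tasks; the following is a sketch, so verify the exact task names against the linked page:

```shell
# Verify CI artifacts, LFS objects, and uploads in file storage.
sudo gitlab-rake gitlab:artifacts:check
sudo gitlab-rake gitlab:lfs:check
sudo gitlab-rake gitlab:uploads:check
```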
-At this point, your **secondary** node will contain an up-to-date copy of everything the
-**primary** node has, meaning nothing will be lost when you fail over.
+At this point, your **secondary** node contains an up-to-date copy of everything the
+**primary** node has, meaning nothing is lost when you fail over.
1. In this final step, you need to permanently disable the **primary** node.
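   If you have shell access to the node, one common approach on an Omnibus installation looks like the sketch below. The `gitlab-runsvdir` unit name is an assumption; adjust for how your node is managed.

   ```shell
   # Stop all GitLab services on the old primary.
   sudo gitlab-ctl stop

   # Keep the services from starting again if the machine reboots.
   sudo systemctl disable gitlab-runsvdir
   ```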
@@ -198,7 +198,7 @@ follow these steps to avoid unnecessary data loss:
- If you do not have SSH access to the **primary** node, take the machine offline and
prevent it from rebooting. Since there are many ways you may prefer to accomplish
-   this, we will avoid a single recommendation. You may need to:
+   this, we avoid a single recommendation. You may need to:
- Reconfigure the load balancers.
- Change DNS records (for example, point the **primary** DNS record to the
@@ -240,7 +240,7 @@ To promote the secondary node:
1. Run the following command to list out all preflight checks and automatically
check if replication and verification are complete before scheduling a planned
-   failover to ensure the process will go smoothly:
+   failover to ensure the process goes smoothly:
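   On an Omnibus-based secondary this is typically a single command; a sketch (confirm the command name for your GitLab version):

   ```shell
   # List the preflight checks and confirm replication and verification are complete.
   sudo gitlab-ctl promotion-preflight-checks
   ```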
NOTE:
In GitLab 13.7 and earlier, if you have a data type with zero items to sync,