## Gitaly
The following shows GitLab set up to use direct access to Gitaly:
...
...
If complete cluster failure occurs, disaster recovery plans should be executed. These can affect the
RPO and RTO discussed above.
### Architecture and configuration recommendations
The following table provides recommendations based on your requirements. Here, users means
concurrent users actively performing simultaneous Git operations.
Gitaly services are present in every GitLab installation and always coordinate Git repository storage and
retrieval. Gitaly can be as simple as a single background service when operating on a single-instance Omnibus
GitLab installation (all of GitLab on one machine). Gitaly can also be separated onto its own instance, or
configured as a full cluster, depending on your scaling and availability requirements.
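For the separate-instance case, the GitLab application nodes are pointed at the Gitaly server through
`git_data_dirs` in `/etc/gitlab/gitlab.rb`. The following is a minimal sketch only: the hostname
`gitaly.internal`, port `8075`, and the token value are placeholders to replace with your own values, and
`sudo gitlab-ctl reconfigure` must be run afterwards.

```ruby
# /etc/gitlab/gitlab.rb on the GitLab application node (sketch; values are placeholders).
# Route the default repository storage to a dedicated Gitaly instance over TCP.
git_data_dirs({
  'default' => { 'gitaly_address' => 'tcp://gitaly.internal:8075' },
})

# Token used to authenticate with the Gitaly server (must match the token configured on that server).
gitlab_rails['gitaly_token'] = 'GITALY_AUTH_TOKEN'
```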
The GitLab [reference architectures](../reference_architectures/index.md) provide guidance for what Gitaly
configuration is recommended at each scale. The Gitaly configuration is noted by the architecture diagrams
and the table of required resources.
| User Scaling | Reference Architecture To Use | GitLab Instance Configuration | Gitaly Configuration | Git Repository Storage | Instances Dedicated to Gitaly Services |
|--------------|-------------------------------|-------------------------------|----------------------|------------------------|----------------------------------------|
| Up to 1000 users | [1K](../reference_architectures/1k_users.md) | Single instance for all of GitLab | Already integrated as a service | Local disk | 0 |
| Up to 2999 users | [2K](../reference_architectures/2k_users.md) | Horizontally-scaled GitLab instance (non-HA) | Single Gitaly server | Local disk of Gitaly instance | 1 |
| 3000 users and over | [3K](../reference_architectures/3k_users.md) | Horizontally-scaled GitLab instance with HA | Gitaly Cluster | Local disk of each Gitaly instance | 8 |
| RTO/RPO requirements for AZ failure tolerance regardless of user scale | [3K (with downscaling)](../reference_architectures/3k_users.md) | Custom (1) | Gitaly Cluster | Local disk of each Gitaly instance | 8 |

1. If you need AZ failure tolerance for user scaling lower than 3K, please contact Customer Success
   to discuss your RTO and RPO needs and what options exist to meet those objectives.
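For the Gitaly Cluster rows above, the application nodes point their repository storage at Praefect rather
than at an individual Gitaly node. The snippet below is a sketch under assumptions: a Praefect load balancer
reachable at `praefect.internal` on port `2305` and a virtual storage named `default`; substitute your own
addresses and token.

```ruby
# /etc/gitlab/gitlab.rb on the GitLab application node (sketch; addresses and token are placeholders).
# With Gitaly Cluster, the storage points at the Praefect load balancer, not at a Gitaly node.
git_data_dirs({
  'default' => {
    'gitaly_address' => 'tcp://praefect.internal:2305',
    'gitaly_token'   => 'PRAEFECT_EXTERNAL_TOKEN',
  },
})
```

Praefect then routes and replicates Git operations to the Gitaly nodes behind the virtual storage.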
WARNING:
Some [known database inconsistency issues](#known-issues) exist in Gitaly Cluster. We recommend you
remain on your current service for now. We can adjust the date for
[NFS support removal](#nfs-deprecation-notice) if this applies to you.
### Known issues
The following table outlines current known issues impacting the use of Gitaly Cluster. For
the current status of these issues, please refer to the referenced issues and epics.
| Issue | Summary |
|-------|---------|
| Gitaly Cluster + Geo can cause database inconsistencies | There are some conditions during Geo replication that can cause database inconsistencies with Gitaly Cluster. These have been identified and are being resolved by updating Gitaly Cluster to [identify repositories with a unique and persistent identifier](https://gitlab.com/gitlab-org/gitaly/-/issues/3485). |
| Database inconsistencies due to repository access outside of Gitaly Cluster's control | Operations that write to the repository storage that do not go through normal Gitaly Cluster methods can cause database inconsistencies. These can include (but are not limited to) snapshot restoration for cluster node disks, node upgrades which modify files under Git control, or any other disk operation that may touch repository storage external to GitLab. The Gitaly team is actively working to provide manual commands to [reconcile the Praefect database with the repository storage](https://gitlab.com/groups/gitlab-org/-/epics/6723). |
### Snapshot backup and recovery limitations
Gitaly Cluster does not support snapshot backups because these can cause issues where the Praefect
database becomes out of sync with the disk storage. Because of how Praefect rebuilds the replication
metadata of Gitaly disk information during a restore, we recommend using the
[official backup and restore Rake tasks](../../raketasks/backup_restore.md).
To track progress on work on a solution for manually re-synchronizing the Praefect database with
disk storage, see [this epic](https://gitlab.com/groups/gitlab-org/-/epics/6575).
### Virtual storage
...
...
## Migrate to Gitaly Cluster
We recommend you migrate to Gitaly Cluster if your