- Redis - Key/Value store (User sessions, cache, queue for Sidekiq)
- Sentinel - Redis health check/failover manager
- Gitaly - Provides high-level storage and RPC access to Git repositories
- S3 Object Storage service[^4] and / or NFS storage servers[^5] for entities such as Uploads, Artifacts, LFS Objects, etc.
- Load Balancer[^6] - The main entry point; handles load balancing for the GitLab application nodes.
- Monitor - Prometheus and Grafana monitoring with auto discovery.
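With Omnibus GitLab, every node runs the same package and you choose which of these components it provides in `/etc/gitlab/gitlab.rb`. As a rough sketch only (the address, token, and path are placeholders and the exact settings vary by GitLab version; the component pages linked in the installation steps are authoritative), a dedicated Gitaly node might disable everything else and expose Gitaly on the private network:

```ruby
# /etc/gitlab/gitlab.rb on a dedicated Gitaly node -- illustrative sketch only.
# Disable the components this node should not run.
postgresql['enable'] = false
redis['enable'] = false
nginx['enable'] = false
unicorn['enable'] = false
sidekiq['enable'] = false
gitlab_workhorse['enable'] = false

# Listen on the private network so the application nodes can reach Gitaly,
# and require a shared secret for authentication.
gitaly['listen_addr'] = '0.0.0.0:8075'
gitaly['auth_token'] = 'GITALY_TOKEN_PLACEHOLDER'

# Local path where this node stores the Git repositories.
git_data_dirs({
  'default' => { 'path' => '/var/opt/gitlab/git-data' },
})
```

Running `sudo gitlab-ctl reconfigure` applies the change; the same pattern of enabling some services and disabling the rest is how the other node types below are built.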
## Scalable Architecture Examples
...
...
- 1 PostgreSQL node
- 1 Redis node
- 1 Gitaly node
- 1 or more Object Storage services[^4] and / or NFS storage server[^5]
- 2 or more GitLab application nodes (Unicorn / Puma, Workhorse, Sidekiq)
- 1 or more Load Balancer nodes[^6]
- 1 Monitoring node (Prometheus, Grafana)
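To illustrate how the pieces above connect, here is a hedged sketch of what an application node's `/etc/gitlab/gitlab.rb` might contain in this layout. The IP addresses, passwords, and token are placeholders, and option names can differ between GitLab versions; the installation pages below contain the authoritative settings.

```ruby
# /etc/gitlab/gitlab.rb on a GitLab application node -- illustrative sketch only.
external_url 'https://gitlab.example.com'

# Use the dedicated PostgreSQL and Redis nodes instead of the bundled services.
postgresql['enable'] = false
redis['enable'] = false

gitlab_rails['db_host'] = '10.0.0.10'        # PostgreSQL node
gitlab_rails['db_password'] = 'DB_PASSWORD_PLACEHOLDER'
gitlab_rails['redis_host'] = '10.0.0.11'     # Redis node
gitlab_rails['redis_password'] = 'REDIS_PASSWORD_PLACEHOLDER'

# Repositories are served by the Gitaly node rather than from local disk.
git_data_dirs({
  'default' => { 'gitaly_address' => 'tcp://10.0.0.12:8075' },
})
gitlab_rails['gitaly_token'] = 'GITALY_TOKEN_PLACEHOLDER'
```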
#### Installation Instructions
...
...
Complete the following installation steps in order. A link at the end of each
section will bring you back to the Scalable Architecture Examples section so
you can continue with the next step.
1. [Load Balancer(s)](load_balancer.md)[^6]
1. [Consul](consul.md)
1. [PostgreSQL](database.md#postgresql-in-a-scaled-environment) with [PgBouncer](pgbouncer.md)
1. [Redis](redis.md#redis-in-a-scaled-environment)
1. [Gitaly](gitaly.md) (recommended) and / or [NFS](nfs.md)[^5]
1. [GitLab application nodes](gitlab.md)
   - With [Object Storage service enabled](../gitaly/index.md#eliminating-nfs-altogether)[^4]
1. [Monitoring node (Prometheus and Grafana)](monitoring_node.md)
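For the object storage step above, this is a minimal sketch of what enabling S3-compatible storage for one object type (CI artifacts) might look like on the application nodes. The bucket name, region, and credentials are placeholders, other object types such as LFS and uploads follow the same pattern, and newer GitLab versions also offer a consolidated object storage configuration; treat the linked page as authoritative.

```ruby
# /etc/gitlab/gitlab.rb on the application nodes -- illustrative sketch only.
# Store CI artifacts in an S3-compatible bucket instead of NFS.
gitlab_rails['artifacts_object_store_enabled'] = true
gitlab_rails['artifacts_object_store_remote_directory'] = 'gitlab-artifacts'
gitlab_rails['artifacts_object_store_connection'] = {
  'provider'              => 'AWS',
  'region'                => 'us-east-1',
  'aws_access_key_id'     => 'AWS_ACCESS_KEY_PLACEHOLDER',
  'aws_secret_access_key' => 'AWS_SECRET_KEY_PLACEHOLDER',
}
```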
### Full Scaling
...
...
in size, indicating that there is contention or there are not enough resources.
- 1 or more PostgreSQL nodes
- 1 or more Redis nodes
- 1 or more Gitaly storage servers
- 1 or more Object Storage services[^4] and / or NFS storage server[^5]
- 2 or more Sidekiq nodes
- 2 or more GitLab application nodes (Unicorn / Puma, Workhorse)
- 1 or more Load Balancer nodes[^6]
- 1 Monitoring node (Prometheus, Grafana)
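The Sidekiq nodes in this layout run only the background job processor; the web-facing services stay on the application nodes. A hedged sketch of such a node's `/etc/gitlab/gitlab.rb` (addresses and passwords are placeholders; the Sidekiq and GitLab application pages linked above are authoritative):

```ruby
# /etc/gitlab/gitlab.rb on a dedicated Sidekiq node -- illustrative sketch only.
# Keep Sidekiq, disable the web-facing services and bundled data stores.
sidekiq['enable'] = true
unicorn['enable'] = false
gitlab_workhorse['enable'] = false
nginx['enable'] = false
postgresql['enable'] = false
redis['enable'] = false

# Sidekiq needs the same database and Redis connection details
# as the application nodes.
gitlab_rails['db_host'] = '10.0.0.10'
gitlab_rails['db_password'] = 'DB_PASSWORD_PLACEHOLDER'
gitlab_rails['redis_host'] = '10.0.0.11'
gitlab_rails['redis_password'] = 'REDIS_PASSWORD_PLACEHOLDER'
```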
## High Availability Architecture Examples
...
...
page mentions, there is a tradeoff between cost/complexity and uptime. Be sure
this complexity is absolutely required before taking the step into full
high availability.
For all examples below, we recommend running Consul and Redis Sentinel separately
from the services they monitor. If Consul is running on PostgreSQL nodes or Sentinel on
Redis nodes, there is a potential that high resource usage by PostgreSQL or
Redis could prevent communication between the other Consul and Sentinel nodes.
This may lead to the other nodes believing a failure has occurred and initiating
automated failover. Isolating Consul and Redis Sentinel from the services they monitor
reduces the chances of a false positive that a failure has occurred.
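A hedged sketch of what a dedicated Consul / Sentinel node's `/etc/gitlab/gitlab.rb` might contain, assuming the Omnibus `redis_sentinel_role` and `consul_role` roles are used; the IPs, primary name, and password are placeholders, and the Consul and Redis pages linked earlier describe the authoritative setup.

```ruby
# /etc/gitlab/gitlab.rb on a dedicated Consul / Sentinel node -- illustrative sketch only.
roles ['redis_sentinel_role', 'consul_role']

# Sentinel must know which Redis primary to watch and how to authenticate.
redis['master_name'] = 'gitlab-redis'
redis['master_ip'] = '10.0.0.11'
redis['master_password'] = 'REDIS_PASSWORD_PLACEHOLDER'
sentinel['bind'] = '0.0.0.0'

# Each Consul agent joins the other Consul nodes to form the quorum.
consul['configuration'] = {
  server: true,
  retry_join: %w(10.0.0.21 10.0.0.22 10.0.0.23),
}
```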
The examples below do not address high availability of NFS for objects. We recommend
using an S3 Object Storage service[^4] over NFS where possible, but NFS is still required
in certain cases[^5]. Where NFS must be used, some enterprises have access to NFS
appliances that manage availability; this is the best case scenario.
There are many options in between each of these examples. Work with GitLab Support
...
...
moving to a hybrid or fully distributed architecture depending on what is causing
the contention.
- 3 PostgreSQL nodes
- 3 Redis nodes
- 3 Consul / Sentinel nodes
- 2 or more GitLab application nodes (Unicorn / Puma, Workhorse, Sidekiq)
- 1 Gitaly storage server
- 1 Object Storage service[^4] and / or NFS storage server[^5]
- 1 or more Load Balancer nodes[^6]