Commit 258f578f authored by Stan Hu

Merge branch 'docs-update-ha' into 'master'

Docs update ha

See merge request !11446
parents c75f1d3d cd996c5c
@@ -5,6 +5,20 @@
The solution you choose will be based on the level of scalability and
availability you require. The easiest solutions are scalable, but not necessarily
highly available.

GitLab provides a service that is usually essential to most organizations: it
enables people to collaborate on code in a timely fashion. Any downtime should
therefore be short and planned. Luckily, GitLab provides a solid setup even on
a single server without special measures. Due to the distributed nature
of Git, developers can still commit code locally even when GitLab is not
available. However, some GitLab features such as the issue tracker and
Continuous Integration are not available when GitLab is down.

**Keep in mind that all Highly Available solutions come with a trade-off between
cost/complexity and uptime**. The more uptime you want, the more complex the
solution. And the more complex the solution, the more work is involved in
setting up and maintaining it. High availability is not free and every HA
solution should balance the costs against the benefits.

## Architecture

There are two kinds of setups:
@@ -37,6 +51,10 @@
Block Device) to keep all data in sync. DRBD requires a low latency link to
remain in sync. It is not advisable to attempt to run DRBD between data centers
or in different cloud availability zones.

> **Note:** GitLab recommends against choosing this HA method because of the
complexity of managing DRBD and crafting automatic failover. This is
*compatible* with GitLab, but not officially *supported*.

Components/Servers Required: 2 servers/virtual machines (one active/one passive)

![Active/Passive HA Diagram](../img/high_availability/active-passive-diagram.png)
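To give a sense of where that complexity comes from, the steps below are a rough
sketch of a *manual* failover to the passive node. The resource name, device, and
mount point are placeholders, and in a real setup these steps would need to be
automated (for example with Pacemaker/Corosync), which is exactly the part that is
hard to get right:

```sh
# Rough sketch of a manual DRBD failover on the surviving node.
# 'gitlab_data', /dev/drbd0, and the mount point are placeholders.
drbdadm primary gitlab_data          # promote this node's DRBD resource
mount /dev/drbd0 /var/opt/gitlab     # mount the replicated block device
gitlab-ctl start                     # bring up GitLab services on this node
```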
@@ -7,6 +7,25 @@
supported natively in NFS version 4. NFSv3 also supports locking as long as
Linux Kernel 2.6.5+ is used. We recommend using version 4 and do not
specifically test NFSv3.
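For example, a client mount that pins NFS version 4 might look like the following;
the server name, export path, and mount point are placeholders:

```sh
# Illustrative only: mount the shared storage explicitly as NFSv4.
sudo mount -t nfs -o vers=4,rw,noatime nfs.example.com:/gitlab-data /gitlab-nfs
```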
## AWS Elastic File System

GitLab does not recommend using AWS Elastic File System (EFS). Customers and
users have reported that EFS does not perform well for GitLab's use case, and
several issues can cause problems:

- EFS bases allowed IOPS on volume size: the larger the volume, the more IOPS
  are allocated. Smaller volumes may show decent performance for a period of
  time due to 'Burst Credits', but over a period of weeks to months the credits
  can run out and performance bottoms out (see the example below).
- For larger volumes, allocated IOPS may not be the problem. Workloads where
  many small files are written in a serialized manner are not well-suited for EFS;
  EBS with an NFS server on top will perform much better.

For more details on one user's experience with EFS, see
[Amazon's Elastic File System: Burst Credits](https://www.rawkode.io/2017/04/amazons-elastic-file-system-burst-credits/).
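If you are already running on EFS and want to see whether burst credits are
draining, the `BurstCreditBalance` CloudWatch metric can be checked. A rough
illustration follows; the file system ID and time window are placeholders:

```sh
# Illustrative only: report the minimum EFS burst credit balance over one week.
# Replace fs-12345678 and the time window with your own values.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EFS \
  --metric-name BurstCreditBalance \
  --dimensions Name=FileSystemId,Value=fs-12345678 \
  --start-time 2017-05-01T00:00:00Z \
  --end-time 2017-05-08T00:00:00Z \
  --period 3600 \
  --statistics Minimum
```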
### Recommended options

When you define your NFS exports, we recommend you also add the following
@@ -159,19 +159,21 @@
subnet and security group and

***

-## Elastic File System
-
-This new AWS offering allows us to create a file system accessible by
-EC2 instances within a VPC. Choose our VPC and the subnets will be
-automatically configured assuming we don't need to set explicit IPs. The
-next section allows us to add tags and choose between General Purpose or
-Max I/O which is a good option when being accessed by a large number of
-EC2 instances.
-
-![Elastic File System](img/elastic-file-system.png)
-
-To actually mount and install the NFS client we'll use the User Data
-section when adding our Launch Configuration.
+## Network File System
+
+GitLab requires a shared filesystem such as NFS. The file share(s) will be
+mounted on all application servers. There are a variety of ways to build an
+NFS server on AWS.
+
+One option is to use a third-party AMI that offers NFS as a service. A [search
+for 'NFS' in the AWS Marketplace](https://aws.amazon.com/marketplace/search/results?x=0&y=0&searchTerms=NFS&page=1&ref_=nav_search_box)
+shows options such as NetApp, SoftNAS and others.
+
+Another option is to build a simple NFS server using a vanilla Linux server backed
+by AWS Elastic Block Storage (EBS).
+
+> **Note:** GitLab does not recommend using AWS Elastic File System (EFS). See
+details in [High Availability NFS documentation](../../../administration/high_availability/nfs.md#aws-elastic-file-system)

***
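The "vanilla Linux server backed by EBS" option above is simple to sketch. As a
rough illustration only (device name, export path, and client CIDR are
placeholders, and the package name assumes Debian/Ubuntu):

```sh
# Rough sketch: expose an attached EBS volume over NFS from a stock Linux instance.
sudo mkfs.ext4 /dev/xvdf                           # format the EBS volume
sudo mkdir -p /gitlab-data
sudo mount /dev/xvdf /gitlab-data                  # mount it
sudo apt-get install -y nfs-kernel-server          # NFS server (Debian/Ubuntu)
echo "/gitlab-data 10.0.0.0/24(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
sudo exportfs -ra                                  # publish the export
```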