| Object storage service | Recommended store for shared data objects | [Cloud Object Storage configuration](../high_availability/object_storage.md) |
| NFS | Shared disk storage service. Can be used as an alternative for Gitaly or Object Storage. Required for GitLab Pages | [NFS configuration](../high_availability/nfs.md) |
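As an illustration, moving shared data objects to object storage is mostly a matter of a few settings in `/etc/gitlab/gitlab.rb`. The sketch below covers CI artifacts and LFS objects and assumes an S3-compatible store; the region, bucket names, and credentials are placeholders, not defaults:

```ruby
# /etc/gitlab/gitlab.rb (illustrative only: the region, bucket names,
# and credentials below are placeholders for your own values)
object_store_connection = {
  'provider'              => 'AWS',
  'region'                => 'us-east-1',
  'aws_access_key_id'     => 'AKIA_PLACEHOLDER',
  'aws_secret_access_key' => 'SECRET_PLACEHOLDER'
}

# Store CI artifacts in an object storage bucket instead of on local disk.
gitlab_rails['artifacts_object_store_enabled'] = true
gitlab_rails['artifacts_object_store_remote_directory'] = 'gitlab-artifacts'
gitlab_rails['artifacts_object_store_connection'] = object_store_connection

# Do the same for Git LFS objects.
gitlab_rails['lfs_object_store_enabled'] = true
gitlab_rails['lfs_object_store_remote_directory'] = 'gitlab-lfs'
gitlab_rails['lfs_object_store_connection'] = object_store_connection
```

After editing `gitlab.rb`, apply the change with `sudo gitlab-ctl reconfigure`.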
## Examples
- 1 - 1000 Users: A single-node [Omnibus](https://docs.gitlab.com/omnibus/) setup with frequent backups. Refer to the [Single-node installation](#single-node-installation) section below.
- 1000 - 50000+ Users: A scaled-out Omnibus installation with multiple servers. Refer to the [Multi-node installation (scaled out for availability)](#multi-node-installation-scaled-out-for-availability) section below; it can be deployed with or without high-availability components applied.
- To decide on the level of availability you need, refer to our [Availability](../availability/index.md) page.
### Single-node installation
This solution is appropriate for many teams that have a single server at their disposal. With automatic backup of the GitLab repositories, configuration, and the database, this can be an optimal solution if you don't have strict availability requirements.
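For example, here is a minimal sketch of the backup-related `gitlab.rb` settings, assuming a seven-day local retention window and an offsite copy in a bucket whose name (`gitlab-backups`) is a placeholder:

```ruby
# /etc/gitlab/gitlab.rb (illustrative backup settings; the bucket
# name and credentials are placeholders)

# Keep local backup archives for seven days (value in seconds).
gitlab_rails['backup_keep_time'] = 604800

# Also upload each backup archive to object storage.
gitlab_rails['backup_upload_connection'] = {
  'provider'              => 'AWS',
  'region'                => 'us-east-1',
  'aws_access_key_id'     => 'AKIA_PLACEHOLDER',
  'aws_secret_access_key' => 'SECRET_PLACEHOLDER'
}
gitlab_rails['backup_upload_remote_directory'] = 'gitlab-backups'
```

The backup archive itself is created with `sudo gitlab-backup create`, which you would typically schedule with cron. Note that `/etc/gitlab/gitlab.rb` and `/etc/gitlab/gitlab-secrets.json` are not included in the archive and must be backed up separately.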
### Multi-node installation (scaled out for availability)
This solution is appropriate for teams that are starting to scale out when scaling up is no longer meeting their needs. In this configuration, additional application nodes will handle frontend traffic, with a load balancer in front to distribute traffic across those nodes. Meanwhile, each application node connects to a shared file server and PostgreSQL and Redis services on the back end.
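To make the shape of this concrete, below is a hedged sketch of the `gitlab.rb` fragment an application node might carry in such a setup; the external URL, host addresses, and password are all placeholder values:

```ruby
# /etc/gitlab/gitlab.rb on one application node (illustrative only:
# the URL, hosts, and password are placeholders)
external_url 'https://gitlab.example.com'

# The data services run on dedicated nodes, so disable the bundled ones.
postgresql['enable'] = false
redis['enable'] = false

# Connect to the shared PostgreSQL service.
gitlab_rails['db_host'] = '10.0.0.10'
gitlab_rails['db_port'] = 5432
gitlab_rails['db_password'] = 'DB_PASSWORD_PLACEHOLDER'

# Connect to the shared Redis service.
gitlab_rails['redis_host'] = '10.0.0.11'
gitlab_rails['redis_port'] = 6379
```

Every application node carries essentially the same configuration, with `external_url` set to the address served by the load balancer, so that any node can handle any request.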
## Recommended setups based on number of users
- 1 - 1000 Users: A single-node [Omnibus](https://docs.gitlab.com/omnibus/) setup with frequent backups. Refer to the [requirements page](../../install/requirements.md) for further details on the specs you will require.
- 1000 - 10000 Users: A scaled environment based on one of our [Reference Architectures](#reference-architectures), without the HA components applied. This can be a reasonable step towards a fully HA environment.
- 2000 - 50000+ Users: A scaled HA environment based on one of our [Reference Architectures](#reference-architectures) below.
## Reference architectures
In this section we'll detail the Reference Architectures that can support large numbers of users. These were built, tested, and verified by our Quality and Support teams.
...how much automation you use, mirroring, and repo/change size. Additionally, the memory values shown are given directly by [GCP machine types](https://cloud.google.com/compute/docs/machine-types). On different cloud vendors, a best-effort like-for-like match can be used.
msgid "StaticSiteEditor|A merge request was created:"
msgstr ""
msgid "StaticSiteEditor|A new branch was created:"
msgstr ""
msgid "StaticSiteEditor|Return to site"
msgid "StaticSiteEditor|Return to site"
msgstr ""
msgstr ""
...
msgid "StaticSiteEditor|View merge request"
msgid "StaticSiteEditor|View merge request"
msgstr ""
msgstr ""
msgid "StaticSiteEditor|Your changes have been submitted and a merge request has been created. The changes won’t be visible on the site until the merge request has been accepted."
msgid "StaticSiteEditor|You added a commit:"
msgstr ""
msgstr ""
msgid "StaticSiteEditor|Your changes were committed to it:"
msgid "StaticSiteEditor|You created a merge request:"
msgstr ""
msgid "StaticSiteEditor|You created a new branch:"
msgstr ""
msgid "StaticSiteEditor|Your changes have been submitted and a merge request has been created. The changes won’t be visible on the site until the merge request has been accepted."