Commit 7f3606ae authored by Craig Norris

Merge branch 'docs-wtd2021-mario' into 'master'

docs fix vale issue

See merge request gitlab-org/gitlab!60205
parents 8169fe18 01d4a387
@@ -77,7 +77,7 @@ To have a summary and then a list of projects and their attachments using hashed
WARNING:
In GitLab 13.0, [hashed storage](../repository_storage_types.md#hashed-storage)
is enabled by default and the legacy storage is deprecated.
GitLab 14.0 eliminates support for legacy storage. If you're on GitLab
13.0 and later, switching new projects to legacy storage is not possible.
The option to choose between hashed and legacy storage in the admin area has
been disabled.
@@ -114,7 +114,7 @@ There is a specific queue you can watch to see how long it will take to finish:
After it reaches zero, you can confirm every project has been migrated by running the commands above.
If you find it necessary, you can run this migration script again to schedule missing projects.

Any error or warning is logged in Sidekiq's log file.

If [Geo](../geo/index.md) is enabled, each project that is successfully migrated
generates an event to replicate the changes on any **secondary** nodes.
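A sketch of the Omnibus commands for scheduling the migration and then confirming completion (task names assumed from GitLab's storage Rake tasks; verify against the documentation for your GitLab version):

```shell
# Schedule migration of all legacy projects to hashed storage (Omnibus).
sudo gitlab-rake gitlab:storage:migrate_to_hashed

# Afterwards, confirm nothing is left on legacy storage:
sudo gitlab-rake gitlab:storage:legacy_projects     # count of projects still on legacy storage
sudo gitlab-rake gitlab:storage:legacy_attachments  # count of attachments still on legacy storage
```

These tasks only schedule Sidekiq jobs; watch the Sidekiq queue as described above to follow progress.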
@@ -127,12 +127,12 @@ commands below that helps you inspect projects and attachments in both legacy an
WARNING:
In GitLab 13.0, [hashed storage](../repository_storage_types.md#hashed-storage)
is enabled by default and the legacy storage is deprecated.
GitLab 14.0 eliminates support for legacy storage. If you're on GitLab
13.0 and later, switching new projects to legacy storage is not possible.
The option to choose between hashed and legacy storage in the admin area has
been disabled.

This task schedules all your existing projects and associated attachments to be rolled back to the
legacy storage type.
- **Omnibus installation**
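A sketch of the Omnibus rollback invocation (task name assumed from GitLab's storage Rake tasks; confirm against the documentation for your GitLab version):

```shell
# Schedule a rollback of all projects and attachments to legacy storage (Omnibus).
sudo gitlab-rake gitlab:storage:rollback_to_legacy
```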
@@ -161,7 +161,7 @@ On the **Queues** tab, you can watch the `hashed_storage:hashed_storage_project_
After it reaches zero, you can confirm every project has been rolled back by running the commands above.
If some projects weren't rolled back, you can run this rollback script again to schedule further rollbacks.

Any error or warning is logged in Sidekiq's log file.

If you have a Geo setup, the rollback is not reflected automatically
on the **secondary** node. You may need to wait for a backfill operation to kick in and remove
@@ -113,7 +113,7 @@ If you want to be flexible about growing your hard drive space in the future con
Apart from a local hard drive, you can also mount a volume that supports the network file system (NFS) protocol. This volume might be located on a file server, a network attached storage (NAS) device, a storage area network (SAN), or on an Amazon Web Services (AWS) Elastic Block Store (EBS) volume.

If you have enough RAM and a recent CPU, the speed of GitLab is mainly limited by hard drive seek times. Having a fast drive (7200 RPM and up) or a solid state drive (SSD) improves the responsiveness of GitLab.
NOTE:
Since file system performance may affect the overall performance of GitLab,
@@ -141,7 +141,7 @@ The following is the recommended minimum Memory hardware guidance for a handful
- More users? Consult the [reference architectures page](../administration/reference_architectures/index.md)
In addition to the above, we generally recommend having at least 2GB of swap on your server,
even if you currently have enough available RAM. Having swap helps to reduce the chance of errors occurring
if your available memory changes. We also recommend configuring the kernel's swappiness setting
to a low value like `10` to make the most of your RAM while still having the swap
available when needed.
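The swappiness setting can be applied and persisted roughly as follows. This is a sketch: it writes to a temporary directory for illustration, whereas on a real host the file belongs in `/etc/sysctl.d/` (followed by `sudo sysctl --system`), or the value can be set at runtime with `sudo sysctl -w vm.swappiness=10`:

```shell
# Persist a low swappiness value so it survives reboots.
# Temp dir used here for illustration; use /etc/sysctl.d/ on a real host.
conf_dir=$(mktemp -d)
printf 'vm.swappiness=10\n' > "$conf_dir/90-gitlab-swap.conf"
cat "$conf_dir/90-gitlab-swap.conf"  # prints vm.swappiness=10
```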
@@ -204,7 +204,7 @@ The recommended number of workers is calculated as the highest of the following:
For example, a node with 4 cores should be configured with 3 Puma workers.

You can increase the number of Puma workers, provided enough CPU and memory capacity is available.
A higher number of Puma workers usually helps to reduce the response time of the application
and increase the ability to handle parallel requests. You must perform testing to verify the
optimal settings for your infrastructure.
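The worked example above (4 cores, 3 workers) can be sketched as shell arithmetic. The rule assumed here is "CPU cores minus 1, with a floor of 2"; the exact formula may differ by GitLab version, so treat this as illustrative only:

```shell
# Sketch: recommended Puma workers for a 4-core node,
# assuming the rule is (cores - 1) with a minimum of 2.
cores=4
workers=$(( cores - 1 ))
if [ "$workers" -lt 2 ]; then
  workers=2
fi
echo "$workers"  # prints 3
```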
@@ -214,7 +214,7 @@ The recommended number of threads is dependent on several factors, including tot
of [legacy Rugged code](../administration/gitaly/index.md#direct-access-to-git-in-gitlab).

- If the operating system has a maximum 2 GB of memory, the recommended number of threads is `1`.
  A higher value results in excess swapping, and decreases performance.
- If legacy Rugged code is in use, the recommended number of threads is `1`.
- In all other cases, the recommended number of threads is `4`. We don't recommend setting this
  higher, due to how [Ruby MRI multi-threading](https://en.wikipedia.org/wiki/Global_interpreter_lock)
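Put together, a `gitlab.rb` sketch for a 4-core node under the guidance above. The key names are assumed from standard Omnibus GitLab configuration and the values are only illustrative, not a tuned recommendation; run `sudo gitlab-ctl reconfigure` after editing:

```ruby
# /etc/gitlab/gitlab.rb (Omnibus) -- illustrative values only
puma['worker_processes'] = 3  # 4-core node, per the example above
puma['min_threads'] = 4
puma['max_threads'] = 4       # use 1 instead if memory is <= 2 GB or legacy Rugged is in use
```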
@@ -230,7 +230,7 @@ If you have a 1GB machine we recommend to configure only two Unicorn workers to
swapping.

As long as you have enough available CPU and memory capacity, it's okay to increase the number of
Unicorn workers, and this usually helps to reduce the response time of the application and
increase the ability to handle parallel requests.
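For Omnibus installations, the Unicorn worker count is set in `gitlab.rb` (key name assumed from standard Omnibus configuration; value illustrative, matching the 1GB-machine guidance above). Run `sudo gitlab-ctl reconfigure` afterwards:

```ruby
# /etc/gitlab/gitlab.rb (Omnibus) -- illustrative value only
unicorn['worker_processes'] = 2  # e.g. keep to 2 on a 1GB machine to limit swapping
```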
To change the Unicorn workers when you have the Omnibus package (which defaults to the
@@ -248,8 +248,7 @@ On a very active server (10,000 billable users) the Sidekiq process can use 1GB+
As of Omnibus GitLab 9.0, [Prometheus](https://prometheus.io) and its related
exporters are enabled by default, to enable easy and in-depth monitoring of
GitLab. With default settings, these processes consume approximately 200MB of memory.

If you would like to disable Prometheus and its exporters or read more information
about it, check the [Prometheus documentation](../administration/monitoring/prometheus/index.md).
@@ -277,9 +276,9 @@ The GitLab Runner server requirements depend on:
- Resources required to run build jobs.
- Job concurrency settings.

Since the nature of the jobs varies for each use case, you need to experiment by adjusting the job concurrency to get the optimum setting.

For reference, GitLab.com's [auto-scaling shared runner](../user/gitlab_com/index.md#shared-runners) is configured so that a **single job** runs in a **single instance** with:

- 1vCPU.
- 3.75GB of RAM.
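As a starting point for that experimentation, job concurrency is set in the runner's `config.toml`. The setting name is assumed from GitLab Runner's documented global options, and the value here is only illustrative:

```toml
# /etc/gitlab-runner/config.toml -- global section (illustrative fragment)
concurrent = 4  # maximum number of jobs this runner host runs in parallel
```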