Commit 0f5f02d9 authored by Amy Qualls, committed by Craig Norris

Line and word revisions, Create docset

Additional revisions to words and lines in the Create docset.
parent 6a3907f6

@@ -10,13 +10,13 @@ type: reference, howto

> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/8537) in GitLab 8.16.

When [PlantUML](https://plantuml.com) integration is enabled and configured in
GitLab, you can create diagrams in AsciiDoc and Markdown documents
created in snippets, wikis, and repositories.

## PlantUML Server

Before you can enable PlantUML in GitLab, set up your own PlantUML
server to generate the diagrams.

### Docker

@@ -26,12 +26,11 @@ With Docker, you can just run a container like this:

```shell
docker run -d --name plantuml -p 8080:8080 plantuml/plantuml-server:tomcat
```

The **PlantUML URL** is the hostname of the server running the container.
When running GitLab in Docker, it must have access to the PlantUML container.
You can achieve that by using [Docker Compose](https://docs.docker.com/compose/).
A basic `docker-compose.yml` file could contain:

```yaml
version: "3"

@@ -47,13 +46,12 @@ services:

    container_name: plantuml
```

In this scenario, PlantUML is accessible to GitLab at the URL
`http://plantuml:8080/`.
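
A hedged sketch of what the complete Compose file might look like, since this hunk elides most of it; the image tags and port mapping below are illustrative assumptions, not taken from the commit:

```yaml
# Minimal sketch: GitLab and the PlantUML server share one Compose network,
# so GitLab can reach PlantUML at http://plantuml:8080/.
version: "3"
services:
  gitlab:
    image: 'gitlab/gitlab-ee:latest'   # illustrative tag
    ports:
      - '80:80'
  plantuml:
    image: 'plantuml/plantuml-server:tomcat'
    container_name: plantuml
```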

### Debian/Ubuntu

You can also install and configure a PlantUML server in Debian/Ubuntu distributions using Tomcat.

First you need to create a `plantuml.war` file from the source code:

@@ -64,8 +62,7 @@ cd plantuml-server

```shell
mvn package
```

The above sequence of commands generates a `.war` file you can deploy with Tomcat:

```shell
sudo apt-get install tomcat8

@@ -74,17 +71,18 @@ sudo chown tomcat8:tomcat8 /var/lib/tomcat8/webapps/plantuml.war

sudo service tomcat8 restart
```

After the Tomcat service restarts, the PlantUML service is ready and
listening for requests on port 8080:

```plaintext
http://localhost:8080/plantuml
```

To change these defaults, edit the `/etc/tomcat8/server.xml` file.

NOTE:
The default URL is different when using this approach. The Docker-based image
makes the service available at the root URL, with no relative path. Adjust
the configuration below accordingly.

### Making local PlantUML accessible using custom GitLab setup

@@ -112,7 +110,7 @@ To activate the changes, run the following command:

```shell
sudo gitlab-ctl reconfigure
```
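
The redirection configuration itself sits outside this hunk. As a hedged sketch, the Omnibus approach is usually a custom NGINX `location` block in `/etc/gitlab/gitlab.rb` that proxies `/-/plantuml/` to the local PlantUML server; the host, port, and relative path below are assumptions to adapt to your setup:

```ruby
# Hedged sketch: proxy /-/plantuml/ on the GitLab host to a local PlantUML
# server (here, Tomcat on 127.0.0.1:8080 with the /plantuml relative path).
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n  proxy_cache off; \n  proxy_pass  http://127.0.0.1:8080/plantuml/; \n}\n"
```

Run `sudo gitlab-ctl reconfigure` again after changing this value.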

Note that the redirection through GitLab must be configured
when running [GitLab with TLS](https://docs.gitlab.com/omnibus/settings/ssl.html)
due to PlantUML's use of the insecure HTTP protocol. Newer browsers such
as [Google Chrome 86+](https://www.chromestatus.com/feature/4926989725073408)

@@ -120,7 +118,7 @@ do not load insecure HTTP resources on a page served over HTTPS.

### Security

PlantUML has features that allow fetching network resources.

```plaintext
@startuml
@@ -136,18 +134,18 @@ stop;
```

## GitLab

You need to enable PlantUML integration from Settings under Admin Area. To do
that, sign in with an Administrator account, and then do the following:

1. In GitLab, go to **Admin Area > Settings > General**.
1. Expand the **PlantUML** section.
1. Select the **Enable PlantUML** check box.
1. Set the PlantUML instance as `https://gitlab.example.com/-/plantuml/`.

NOTE:
If you are using a PlantUML server running v1.2020.9 and
above (for example, [plantuml.com](https://plantuml.com)), set the `PLANTUML_ENCODING`
environment variable to enable the `deflate` compression. On Omnibus GitLab,
this can be set in `/etc/gitlab/gitlab.rb`:

```ruby
gitlab_rails['env'] = { 'PLANTUML_ENCODING' => 'deflate' }
```
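
For other installation types, the same variable only needs to reach the environment of the GitLab Rails process; a hedged shell sketch (the exact mechanism depends on how you run GitLab):

```shell
# Hedged sketch: export the variable in the environment that starts the
# GitLab Rails process, for example in its service unit or wrapper script.
export PLANTUML_ENCODING=deflate
```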

@@ -191,9 +189,11 @@ our AsciiDoc snippets, wikis, and repositories using delimited blocks:

Alice -> Bob: hi
```

You can also use the `uml::` directive for compatibility with
[`sphinxcontrib-plantuml`](https://pypi.org/project/sphinxcontrib-plantuml/),
but GitLab only supports the `caption` option.
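
A hedged sketch of that directive form, assuming standard `sphinxcontrib-plantuml` reStructuredText syntax with the supported `caption` option (the diagram body is illustrative):

```plaintext
.. uml::
   :caption: Greeting sequence

   Bob -> Alice : hello
   Alice -> Bob : hi
```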

The above blocks are converted to an HTML image tag with source pointing to the
PlantUML instance. If the PlantUML server is correctly configured, this should
render a nice diagram instead of the block:

@@ -202,12 +202,18 @@ Bob -> Alice : hello

Alice -> Bob : hi
```

Inside the block you can add any of the diagrams PlantUML supports, such as:

- [Sequence](https://plantuml.com/sequence-diagram)
- [Use Case](https://plantuml.com/use-case-diagram)
- [Class](https://plantuml.com/class-diagram)
- [Activity](https://plantuml.com/activity-diagram-legacy)
- [Component](https://plantuml.com/component-diagram)
- [State](https://plantuml.com/state-diagram)
- [Object](https://plantuml.com/object-diagram)

You do not need to use the PlantUML
diagram delimiters `@startuml`/`@enduml`, as these are replaced by the AsciiDoc `plantuml` block.

Some parameters can be added to the AsciiDoc block definition:

@@ -217,4 +223,4 @@ Some parameters can be added to the AsciiDoc block definition:

- `width`: Width attribute added to the image tag.
- `height`: Height attribute added to the image tag.

Markdown does not support any parameters and always uses PNG format.
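
For reference, a hedged sketch of an AsciiDoc block definition using the parameters listed above; only `width` and `height` are visible in this hunk, and the delimiter style is standard AsciiDoc:

```plaintext
[plantuml, width="300", height="200"]
----
Bob -> Alice : hello
Alice -> Bob : hi
----
```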

@@ -8,14 +8,14 @@ info: To determine the technical writer assigned to the Stage/Group associated w

> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/7690) in GitLab 8.15.

With the introduction of the [Kubernetes integration](../../user/project/clusters/index.md),
GitLab can store and use credentials for a Kubernetes cluster.
GitLab uses these credentials to provide access to
[web terminals](../../ci/environments/index.md#web-terminals) for environments.

NOTE:
Only project maintainers and owners can access web terminals.

## How it works

A detailed overview of the architecture of web terminals and how they work

@@ -53,15 +53,13 @@ detail below.

NOTE:
AWS Elastic Load Balancers (ELBs) do not support web sockets.
If you want web terminals to work, use AWS Application Load Balancers (ALBs).
Read [AWS Elastic Load Balancing Product Comparison](https://aws.amazon.com/elasticloadbalancing/features/#compare)
for more information.

As web terminals use WebSockets, every HTTP/HTTPS reverse proxy in front of
Workhorse must be configured to pass the `Connection` and `Upgrade` headers
to the next one in the chain. GitLab is configured by default to do so.

However, if you run a [load balancer](../load_balancer.md) in
front of GitLab, you may need to make some changes to your configuration. These

@@ -73,17 +71,17 @@ guides document the necessary steps for a selection of popular reverse proxies:

- [Varnish](https://varnish-cache.org/docs/4.1/users-guide/vcl-example-websockets.html)

Workhorse doesn't let WebSocket requests through to non-WebSocket endpoints, so
it's safe to enable support for these headers globally. If you prefer a
narrower set of rules, you can restrict it to URLs ending with `/terminal.ws`.
This approach may still result in a few false positives.

If you installed from source, or have made any configuration changes to your
Omnibus installation before upgrading to 8.15, you may need to make some changes
to your configuration. Read
[Upgrading Community Edition and Enterprise Edition from source](../../update/upgrading_from_source.md#nginx-configuration)
for more details.

To disable web terminal support in GitLab, stop passing
the `Connection` and `Upgrade` hop-by-hop headers in the *first* HTTP reverse
proxy in the chain. For most users, this is the NGINX server bundled with
Omnibus GitLab, in which case, you need to:
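
The exact steps sit outside this hunk. As a hedged sketch of the kind of change involved: the bundled NGINX forwards headers through the standard Omnibus `nginx['proxy_set_headers']` option in `/etc/gitlab/gitlab.rb`, so an explicit header set that leaves out the two WebSocket headers (followed by a reconfigure) refuses terminal connections. The other header values shown are illustrative defaults:

```ruby
# Hedged sketch: forward the usual proxy headers, but deliberately leave out
# "Upgrade" and "Connection" so WebSocket upgrades (web terminals) fail.
nginx['proxy_set_headers'] = {
  "Host" => "$http_host",
  "X-Real-IP" => "$remote_addr",
  "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
  "X-Forwarded-Proto" => "https",
  "X-Forwarded-Ssl" => "on"
}
```

Run `sudo gitlab-ctl reconfigure` afterwards.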

@@ -104,4 +102,6 @@ they receive a `Connection failed` message.

> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/8413) in GitLab 8.17.

Terminal sessions, by default, do not expire.
You can limit terminal session lifetime in your GitLab instance. To do so,
go to [**Admin Area > Settings > Web terminal**](../../user/admin_area/settings/index.md#general),
and set a `max session time`.

@@ -31,8 +31,8 @@ that only [stores outdated diffs](#alternative-in-database-storage) outside of d

   gitlab_rails['external_diffs_enabled'] = true
   ```

1. The external diffs are stored in
   `/var/opt/gitlab/gitlab-rails/shared/external-diffs`. To change the path,
   for example, to `/mnt/storage/external-diffs`, edit `/etc/gitlab/gitlab.rb`
   and add the following line:
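
   The configuration line itself sits outside this hunk; a hedged sketch, assuming the standard Omnibus setting name for this path:

   ```ruby
   # Hedged sketch: store external diffs under the new location.
   gitlab_rails['external_diffs_storage_path'] = "/mnt/storage/external-diffs"
   ```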

@@ -52,8 +52,8 @@ that only [stores outdated diffs](#alternative-in-database-storage) outside of d

     enabled: true
   ```

1. The external diffs are stored in
   `/home/git/gitlab/shared/external-diffs`. To change the path, for example,
   to `/mnt/storage/external-diffs`, edit `/home/git/gitlab/config/gitlab.yml`
   and add or amend the following lines:
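
   The YAML itself sits outside this hunk; a hedged sketch, assuming the key names mirror the Omnibus settings:

   ```yaml
   external_diffs:
     enabled: true
     storage_path: /mnt/storage/external-diffs
   ```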

@@ -68,7 +68,7 @@ that only [stores outdated diffs](#alternative-in-database-storage) outside of d

## Using object storage

WARNING:
Migrating to object storage is not reversible.

Instead of storing the external diffs on disk, we recommend using an object
store like AWS S3. This configuration relies on valid AWS credentials to

@@ -114,7 +114,7 @@ then `object_store:`. On Omnibus installations, they are prefixed by

| Setting | Description | Default |
|---------|-------------|---------|
| `enabled` | Enable/disable object storage | `false` |
| `remote_directory` | The bucket name where external diffs are stored | |
| `direct_upload` | Set to `true` to enable direct upload of external diffs without the need of local shared storage. Option may be removed once we decide to support only single storage for all files. | `false` |
| `background_upload` | Set to `false` to disable automatic upload. Option may be removed once upload is direct to S3 | `true` |
| `proxy_download` | Set to `true` to enable proxying all files served. Option allows to reduce egress traffic as this allows clients to download directly from remote storage instead of proxying all data | `false` |

@@ -141,7 +141,7 @@ See [the available connection settings for different providers](object_storage.m

}
```

If you are using AWS IAM profiles, omit the
AWS access key and secret access key/value pairs. For example:

```ruby
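# The original example is elided from this hunk. A hedged sketch, assuming the
# standard Omnibus connection setting for external diffs; the region value is
# illustrative.
gitlab_rails['external_diffs_object_store_connection'] = {
  'provider' => 'AWS',
  'region' => 'eu-central-1',
  'use_iam_profile' => true
}
```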

@@ -206,8 +206,8 @@ To enable this feature, perform the following steps:

1. Save the file and [restart GitLab](restart_gitlab.md#installations-from-source) for the changes to take effect.

With this feature enabled, diffs are initially stored in the database, rather
than externally. They are moved to external storage after any of these
conditions become true:

- A newer version of the merge request diff exists

@@ -233,7 +233,7 @@ and the exception for that error is of this form:

Errno::ENOENT (No such file or directory @ rb_sysopen - /var/opt/gitlab/gitlab-rails/shared/external-diffs/merge_request_diffs/mr-6167082/diff-8199789)
```

Then you are affected by this issue. Because it's not possible to safely determine
all these conditions automatically, we've provided a Rake task in GitLab v13.2.0
that you can run manually to correct the data:

@@ -20,8 +20,8 @@ The GitLab API is the recommended way to move Git repositories:

For more information, see:

- [Configuring additional storage for Gitaly](../gitaly/index.md#network-architecture). This
  example configures additional storage called `storage1` and `storage2`.
- [The API documentation](../../api/project_repository_storage_moves.md) details the endpoints for
  querying and scheduling project repository moves.
- [The API documentation](../../api/snippet_repository_storage_moves.md) details the endpoints for

@@ -38,7 +38,7 @@ Read more in the [API documentation for projects](../../api/project_repository_s

GitLab environment, for example:

- From a single-node GitLab to a scaled-out architecture.
- From a GitLab instance in your private data center to a cloud provider.

The rest of the document looks
at some of the ways you can copy all your repositories from

@@ -103,8 +103,8 @@ Using `rsync` to migrate Git data can cause data loss and repository corruption.

If the target directory already contains a partial / outdated copy
of the repositories, it may be wasteful to copy all the data again
with `tar`. In this scenario it is better to use `rsync`. This utility
is either already installed on your system, or installable
by using `apt` or `yum`.

```shell
sudo -u git sh -c 'rsync -a --delete /var/opt/gitlab/git-data/repositories/. \

@@ -112,7 +112,7 @@ sudo -u git sh -c 'rsync -a --delete /var/opt/gitlab/git-data/repositories/. \

```

The `/.` in the command above is very important; without it you can
get the wrong directory structure in the target directory.
If you want to see progress, replace `-a` with `-av`.

#### Single `rsync` to another server

@@ -135,20 +135,23 @@ WARNING:

Using `rsync` to migrate Git data can cause data loss and repository corruption.
[These instructions are being reviewed](https://gitlab.com/gitlab-org/gitlab/-/issues/270422).

Every time you start an `rsync` job it must:

- Inspect all files in the source directory.
- Inspect all files in the target directory.
- Decide whether or not to copy files.

If the source or target directory
contains many files, this startup phase of `rsync` can become a burden
for your GitLab server. You can reduce the workload of `rsync` by dividing its
work into smaller pieces, and syncing one repository at a time.

In addition to `rsync`, we use [GNU Parallel](http://www.gnu.org/software/parallel/).
This utility is not included in GitLab, so you must install it yourself with `apt`
or `yum`.

This process does not clean up repositories at the target location that no
longer exist at the source.

#### Parallel `rsync` for all repositories known to GitLab

@@ -218,8 +221,8 @@ Using `rsync` to migrate Git data can cause data loss and repository corruption.

[These instructions are being reviewed](https://gitlab.com/gitlab-org/gitlab/-/issues/270422).

Suppose you have already done one sync that started after 2015-10-1 12:00 UTC.
Then you might only want to sync repositories that were changed by using GitLab
after that time. You can use the `SINCE` variable to tell `rake
gitlab:list_repos` to only print repositories with recent activity.

```shell
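# The original command is elided from this hunk. A hedged sketch: print the
# repositories with activity after the given time, then feed the output into
# the parallel rsync pipeline from the preceding section.
sudo gitlab-rake gitlab:list_repos SINCE='2015-10-1 12:00 UTC'
```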

@@ -14,25 +14,24 @@ integrity of all data committed to a repository. GitLab administrators

can trigger such a check for a project via the project page under the
admin panel. The checks run asynchronously so it may take a few minutes
before the check result is visible on the project admin page. If the
checks failed, you can see their output in the
[`repocheck.log` file](logs.md#repochecklog).

NOTE:
This setting is off by default, because it can cause many false alarms.

## Periodic checks

When enabled, GitLab periodically runs a repository check on all project
repositories and wiki repositories to detect data corruption.
A project is checked no more than once per month. If any projects
fail their repository checks, all GitLab administrators receive an email
notification of the situation. This notification is sent out once a week,
by default, at midnight at the start of Sunday. Repositories with known check
failures can be found at `/admin/projects?last_repository_check_failed=1`.

## Disabling periodic checks

You can disable the periodic checks on the **Settings** page of the admin
panel.

## What to do if a check failed

@@ -40,9 +39,9 @@ panel.

If the repository check fails for some repository, you should look up the error
in the [`repocheck.log` file](logs.md#repochecklog) on disk:

- `/var/log/gitlab/gitlab-rails` for Omnibus GitLab installations
- `/home/git/gitlab/log` for installations from source

If the periodic repository check causes false alarms, you can clear all repository check states by
going to **Admin Area > Settings > Repository**
(`/admin/application_settings/repository`) and clicking **Clear all repository checks**.

@@ -38,13 +38,13 @@ been disabled.

Hashed storage is the storage behavior we rolled out with 10.0. Instead
of coupling project URL and the folder structure where the repository is
stored on disk, we couple a hash based on the project's ID. This makes
the folder structure immutable, and therefore eliminates any requirement to
synchronize state from URLs to disk structure. This means that renaming a group,
user, or project costs only the database transaction, and takes effect
immediately.

The hash also helps spread the repositories more evenly on the disk. The
top-level directory contains fewer folders than the total number of top-level
namespaces.
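
As a hedged illustration of the idea (the path prefix and hash format follow GitLab's hashed storage layout; the values are placeholders):

```plaintext
# Repository path derived from the SHA-256 of the project ID, not from its URL.
@hashed/ab/cd/abcd...<rest-of-sha256-of-project-id>.git
```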

@@ -136,8 +136,8 @@ when housekeeping is run on the source project.

### Hashed storage coverage migration

Files stored in an S3-compatible endpoint do not have the downsides
mentioned earlier, if they are not prefixed with `#{namespace}/#{project_name}`.
This is true for CI Cache and LFS Objects.

In the table below, you can find the coverage of the migration to the hashed storage.

@@ -194,10 +194,10 @@ reasons, GitLab replicated the same mapping structure from the projects URLs:

- Project's repository: `#{namespace}/#{project_name}.git`
- Project's wiki: `#{namespace}/#{project_name}.wiki.git`

This structure enables you to migrate from existing solutions to GitLab, and
helps Administrators find where the repository is stored.

This approach also has some drawbacks:

Storage location concentrates a huge number of top-level namespaces. The
impact can be reduced by the introduction of

@@ -211,4 +211,4 @@ is at that same URL today.

Any change in the URL needs to be reflected on disk (when groups / users or
projects are renamed). This can add a lot of load in big installations,
especially if using any type of network-based file system.

@@ -18,24 +18,24 @@ abuse of the feature. The default value is **52428800 Bytes** (50 MB).

### How does it work?

The content size limit is applied when a wiki page is created or updated
through the GitLab UI or API. Local changes pushed via Git are not validated.

To avoid breaking any existing wiki pages, the limit doesn't take effect until
a wiki page is edited again and the content changes.

### Wiki page content size limit configuration

This setting is not available through the [Admin Area settings](../../user/admin_area/settings/index.md).
To configure this setting, use either the Rails console
or the [Application settings API](../../api/settings.md).

NOTE:
The value of the limit must be in bytes. The minimum value is 1024 bytes.

#### Through the Rails console

To configure this setting through the Rails console:

1. Start the Rails console:

@@ -61,14 +61,14 @@ To retrieve the current value, start the Rails console and run:
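
The console commands themselves sit outside this hunk. A hedged sketch of setting and reading the value from the Rails console, assuming the setting name matches the API parameter `wiki_page_max_content_bytes`:

```ruby
# Hedged sketch (Rails console): set the limit to 50 MB, then read it back.
ApplicationSetting.current.update!(wiki_page_max_content_bytes: 50.megabytes)
Gitlab::CurrentSettings.wiki_page_max_content_bytes
```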

#### Through the API

To set the wiki page size limit through the Application Settings API, use a command
as you would to [update any other setting](../../api/settings.md#change-application-settings):

```shell
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/application/settings?wiki_page_max_content_bytes=52428800"
```

You can also use the API to [retrieve the current value](../../api/settings.md#get-current-application-settings):

```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/application/settings"
```