@@ -10,7 +10,7 @@ The GitLab CI/CD pipeline includes a `danger-review` job that uses [Danger](http
to perform a variety of automated checks on the code under test.
Danger is a gem that runs in the CI environment, like any other analysis tool.
What sets it apart from other tools (for example, RuboCop) is that it's
designed to allow you to easily write arbitrary code to test properties of your
code or changes. To this end, it provides a set of common helpers and access to
information about what has actually changed in your environment, then simply
runs your code!
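
For instance, a whole check can be a few lines of Danger's Ruby DSL. The following is an illustrative sketch, not one of GitLab's actual rules; the path conventions are assumptions, while `git.added_files` and `warn` are part of Danger's standard DSL:

```ruby
# Hypothetical Dangerfile rule: flag a migration added without a matching spec.
has_migration = git.added_files.any? { |path| path.start_with?('db/migrate/') }
has_spec = git.added_files.any? { |path| path.start_with?('spec/migrations/') }

warn 'This MR adds a migration but no migration spec.' if has_migration && !has_spec
```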
...
@@ -32,7 +32,7 @@ from the start of the merge request.
### Disadvantages

- It's not obvious that Danger updates the old comment, so you need to pay
  attention to whether or not it has been updated.

## Run Danger locally
...
@@ -48,13 +48,12 @@ bin/rake danger_local
On startup, Danger reads a [`Dangerfile`](https://gitlab.com/gitlab-org/gitlab/blob/master/Dangerfile)
from the project root. GitLab's Danger code is decomposed into a set of helpers
and plugins, all within the [`danger/`](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/danger/)
subdirectory, so ours just tells Danger to load it all. Danger then runs
each plugin against the merge request, collecting the output from each. A plugin
may output notifications, warnings, or errors, all of which are copied to the
CI job's log. If an error happens, the CI job (and so the entire pipeline) fails.

On merge requests, Danger also copies the output to a comment on the MR
itself, increasing visibility.

## Development guidelines
...
@@ -75,17 +74,17 @@ often face similar challenges, after all. Think about how you could fulfill the
same need while ensuring everyone can benefit from the work, and do that instead
if you can.

If a standard tool (for example, `rubocop`) exists for a task, it's better to
use it directly, rather than calling it by using Danger. Running and debugging
the results of those tools locally is easier if Danger isn't involved, and
unless you're using some Danger-specific functionality, there's no benefit to
including it in the Danger run.

Danger is well-suited to prototyping and rapidly iterating on solutions, so if
what we want to build is unclear, a solution in Danger can be thought of as a
trial run to gather information about a product area. If you're doing this, make
sure the problem you're trying to solve, and the outcomes of that prototyping,
are captured in an issue or epic as you go along. This helps us to address
the need as part of the product in a future version of GitLab!

### Implementation details
...
@@ -110,16 +109,17 @@ At present, we do this by putting the code in a module in `lib/gitlab/danger/...
and including it in the matching `danger/plugins/...` file. Specs can then be
added in `spec/lib/gitlab/danger/...`.
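
A hypothetical sketch of that layout (module, plugin, and method names here are illustrative, not actual GitLab code):

```ruby
# lib/gitlab/danger/changelog_checker.rb (hypothetical): plain Ruby with no
# Danger dependencies, so it's easy to unit test in RSpec.
module Gitlab
  module Danger
    module ChangelogChecker
      def missing_changelog?(added_files)
        added_files.none? { |path| path.start_with?('changelogs/') }
      end
    end
  end
end

# danger/plugins/changelog_checker.rb (hypothetical): a thin wrapper exposing
# the module to Dangerfiles as a Danger plugin.
module Danger
  class ChangelogChecker < Plugin
    include Gitlab::Danger::ChangelogChecker
  end
end
```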
To determine if your `Dangerfile` works, push the branch that contains it to
GitLab. This can be quite frustrating, as it significantly increases the cycle
time when developing a new task, or trying to debug something in an existing
one. If you've followed the guidelines above, most of your code can be exercised
locally in RSpec, minimizing the number of cycles you need to go through in CI.
However, you can speed these cycles up somewhat by emptying the
`.gitlab/ci/rails.gitlab-ci.yml` file in your merge request. Just don't forget
to revert the change before merging!

To enable the Dangerfile on another existing GitLab project, complete the following
extra steps, based on [this procedure](https://danger.systems/guides/getting_started.html#creating-a-bot-account-for-danger-to-use):

1. Add `@gitlab-bot` to the project as a `reporter`.
1. Add the `@gitlab-bot`'s `GITLAB_API_PRIVATE_TOKEN` value as a value for a new CI/CD
...
@@ -156,10 +156,10 @@ at GitLab so far:
To work around this, you can add an [environment
variable](../ci/variables/README.md) called
`DANGER_GITLAB_API_TOKEN` with a personal API token to your
fork. That way the danger comments are made from CI using that
API token instead.

Making the variable
[masked](../ci/variables/README.md#mask-a-custom-variable) makes sure
@@ -143,7 +143,7 @@ There are a few gotchas with it:
- you should always [`extend ::Gitlab::Utils::Override`](utilities.md#override) and use `override` to
  guard the "overrider" method to ensure that if the method gets renamed in
  CE, the EE override isn't silently forgotten.
- when the "overrider" would add a line in the middle of the CE
  implementation, you should refactor the CE method and split it into
  smaller methods. Or create a "hook" method that is empty in CE,
...
@@ -284,7 +284,7 @@ wrap it in a self-descriptive method and use that method.
For example, in GitLab-FOSS, the only user created by the system is `User.ghost`
but in EE there are several types of bot-users that aren't really users. It would
be incorrect to override the implementation of `User#ghost?`, so instead we add
a method `#internal?` to `app/models/user.rb`. The implementation:

```ruby
def internal?
...
@@ -303,13 +303,13 @@ end
### Code in `config/routes`

When we add `draw :admin` in `config/routes.rb`, the application tries to
load the file located in `config/routes/admin.rb`, and also tries to load the
file located in `ee/config/routes/admin.rb`.
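
A hedged sketch of how a `draw` helper with this behavior could be implemented; GitLab's actual implementation may differ:

```ruby
# Illustrative only: load a named routes file from CE, then from EE if present.
module RoutesDrawHelper
  def draw(routes_name)
    drawn = false

    ['config/routes', 'ee/config/routes'].each do |dir|
      path = Rails.root.join(dir, "#{routes_name}.rb")
      next unless File.exist?(path)

      drawn = true
      instance_eval(File.read(path), path.to_s)
    end

    # EE must find at least one file; CE stays silent when nothing matches.
    raise "Cannot find routes files for #{routes_name}" if Gitlab.ee? && !drawn
  end
end
```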
In EE, it should load at least one file, and at most two. If it cannot find
any files, an error is raised. In CE, since we don't know if an
EE route exists, it doesn't raise any errors even if it cannot find anything.

This means if we want to extend a particular CE route file, we just add the same
file located in `ee/config/routes`. If we want to add an EE-only route, we
...
@@ -467,7 +467,7 @@ end
#### Using `render_if_exists`

Instead of using regular `render`, we should use `render_if_exists`, which
doesn't render anything if it cannot find the specific partial. We use this
so that we can put `render_if_exists` in CE, keeping the code the same between
CE and EE.
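
A minimal sketch of what such a helper can look like; `lookup_context.exists?` is standard Rails, but GitLab's actual implementation may differ:

```ruby
# Illustrative helper: render the partial only when the view paths resolve it.
def render_if_exists(partial, locals = {})
  return unless lookup_context.exists?(partial, [], true)

  render(partial, locals)
end
```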
...
@@ -482,7 +482,7 @@ The disadvantage of this:
##### Caveats

The `render_if_exists` view path argument must be relative to `app/views/` and `ee/app/views`.
Resolving an EE template path that is relative to the CE view path doesn't work.

```haml
-# app/views/projects/index.html.haml
...
@@ -577,7 +577,7 @@ We can define `params` and use `use` in another `params` definition to
include parameters defined in EE. However, we need to define the "interface" first
in CE in order for EE to override it. We don't have to do this in other places
due to `prepend_if_ee`, but Grape is complex internally and we couldn't easily
do that, so we follow regular object-oriented practice and define the
interface first here.
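
A hedged sketch of the shape this takes, with illustrative endpoint and parameter names:

```ruby
# CE defines an empty params group as the "interface"; EE prepends a module
# that redefines :optional_params_ee with real parameters.
class MyAPI < Grape::API
  helpers do
    params :optional_params_ee do
      # Empty in CE.
    end

    params :optional_params do
      optional :description, type: String, desc: 'Description of the resource'
      use :optional_params_ee
    end
  end
end
```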
For example, suppose we have a few more optional parameters for EE. We can move the
...
@@ -738,7 +738,7 @@ end
It's very hard to extend this in an EE module, and this is simply storing
some metadata for a particular route. Given that, we could simply leave the
EE `route_setting` in CE as it doesn't hurt and we don't use
that metadata in CE.
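
For illustration, an EE-flavored `route_setting` that is harmless to leave in CE might look like this (the setting name and endpoint are hypothetical):

```ruby
# The metadata is attached to the route but never read by CE code.
route_setting :ee_only_audit, enabled: true
get ':id/audit_events' do
  # ...
end
```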
We could revisit this policy when we're using `route_setting` more and whether
...
@@ -1039,7 +1039,7 @@ export default {
`import MyComponent from 'ee_else_ce/path/my_component.vue'`

- this way the correct component is included for either the CE or EE implementation

**For EE components that need different results for the same computed values, we can pass in props to the CE wrapper as seen in the example.**
...
@@ -1053,7 +1053,7 @@ export default {
For regular JS files, the approach is similar.

1. We keep using the [`ee_else_ce`](../development/ee_features.md#javascript-code-in-assetsjavascripts) helper; this means that EE-only code should be inside the `ee/` folder.
1. An EE file should be created with the EE-only code, and it should extend the CE counterpart.
1. For code inside functions that can't be extended, the code should be moved into a new file and we should use the `ee_else_ce` helper:
@@ -93,7 +93,7 @@ All the `GitlabUploader` derived classes should comply with this path segment sc
| | | `ObjectStorage::Concern#upload_path |
```
The `RecordsUploads::Concern` concern creates an `Upload` entry for every file stored by a `GitlabUploader`, persisting the dynamic parts of the path using
`GitlabUploader#dynamic_path`. You may then use the `Upload#build_uploader` method to manipulate the file.

## Object Storage
...
@@ -108,9 +108,9 @@ The `CarrierWave::Uploader#store_dir` is overridden to
### Using `ObjectStorage::Extension::RecordsUploads`

This concern automatically includes `RecordsUploads::Concern` if not already included.

The `ObjectStorage::Concern` uploader searches for the matching `Upload` to select the correct object store. The `Upload` is mapped using `#store_dirs + identifier` for each store (LOCAL/REMOTE).

```ruby
class SongUploader < GitlabUploader
...
@@ -130,7 +130,7 @@ end
### Using a mounted uploader

The `ObjectStorage::Concern` queries the `model.<mount>_store` attribute to select the correct object store.
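
For illustration, a model with a mounted uploader might look like the following; the model and column names are hypothetical, relying on the `<mount>_store` convention described above:

```ruby
# Assumes the `songs` table has an `audio_store` integer column.
class Song < ApplicationRecord
  mount_uploader :audio, SongUploader
end

song = Song.last
song.audio_store # => ObjectStorage::Store::LOCAL or ObjectStorage::Store::REMOTE
```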
@@ -12,16 +12,16 @@ info: To determine the technical writer assigned to the Stage/Group associated w
To comply with the terms the libraries we use are licensed under, we have to check new gems for compatible licenses whenever they're added. To automate this process, we use the [license_finder](https://github.com/pivotal/LicenseFinder) gem by Pivotal. It runs every time a new commit is pushed and verifies that all gems and node modules in the bundle use a license that doesn't conflict with the licensing of either GitLab Community Edition or GitLab Enterprise Edition.

There are some limitations with the automated testing, however. CSS, JavaScript, or Ruby libraries which are not included by way of Bundler, NPM, or Yarn (for instance, those manually copied into our source tree in the `vendor` directory) must be verified manually and independently. Take care whenever such a library is used, as automated tests don't catch problematic licenses from them.

Some gems may not include their license information in their `gemspec` file, and some node modules may not include their license information in their `package.json` file. These aren't detected by License Finder, and must be verified manually.

### License Finder commands

NOTE:
License Finder currently uses the terms `whitelist` and `blacklist`, which GitLab considers misused terms. As a result, the commands below reference those terms. We've created an [issue on their project](https://github.com/pivotal/LicenseFinder/issues/745) to propose that they rename their commands.

There are a few basic commands License Finder provides that you need to manage license detection.

To verify that the checks are passing, or to see which dependencies are causing the checks to fail:
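
License Finder's standard CLI invocation, run through Bundler:

```shell
bundle exec license_finder
```

...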
This default implementation is not very efficient, because we need to call
`#find_object` for each reference, which may require issuing a DB query every
time. For this reason, most reference filter implementations instead use an
optimization included in `AbstractReferenceFilter`:
> `AbstractReferenceFilter` provides a lazily initialized value
...
@@ -140,7 +140,7 @@ We are skipping:
To avoid filtering such nodes for each `ReferenceFilter`, we do it only once and store the result in the result Hash of the pipeline as `result[:reference_filter_nodes]`.

The pipeline `result` is passed to each filter for modification, so whenever a `ReferenceFilter` replaces a text or link tag, the filtered list (`reference_filter_nodes`) is updated for the next filter to use.

## Reference parsers
...
@@ -199,4 +199,4 @@ In practice, all reference parsers inherit from [`BaseParser`](https://gitlab.co
- `#nodes_user_can_reference(user, nodes)` to filter nodes directly.

A failure to implement this class for each reference type means that the
application raises exceptions during Markdown processing.
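
As a hedged sketch, a minimal parser for a hypothetical `foo` reference type could look like this; class and model names are illustrative, and the exact `BaseParser` API may differ:

```ruby
# Hypothetical reference parser: ties the :foo reference type to its records
# so the pipeline can authorize and redact references to them.
module Banzai
  module ReferenceParser
    class FooParser < BaseParser
      self.reference_type = :foo

      # The ActiveRecord relation used to look up referenced objects by ID.
      def references_relation
        Foo
      end
    end
  end
end
```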
@@ -46,7 +46,7 @@ We have three challenges here: performance, availability, and scalability.
### Performance

Rails processes are expensive in terms of both CPU and memory. The Ruby [global interpreter lock](https://en.wikipedia.org/wiki/Global_interpreter_lock) adds to the cost too, because the Ruby process spends time on I/O operations in step 3, causing incoming requests to pile up.

To improve this, [disk buffered upload](#disk-buffered-upload) was implemented. With this, Rails no longer deals with writing uploaded files to disk.
...
@@ -88,7 +88,7 @@ To address this problem an HA object storage can be used and it's supported by [
Scaling NFS is outside of our support scope, and NFS is not a part of cloud native installations.

All features that require Sidekiq and do not use direct upload don't work without NFS. In Kubernetes, machine boundaries translate to PODs, and in this case the uploaded file is written into the POD private disk. Since the Sidekiq POD cannot reach into other pods, the operation fails to read it.

## How to select the proper level of acceleration?
...
@@ -96,7 +96,7 @@ Selecting the proper acceleration is a tradeoff between speed of development and
We can identify three major use-cases for an upload:

1. **storage:** if we are uploading for storing a file (like artifacts, packages, or discussion attachments). In this case [direct upload](#direct-upload) is the proper level, as it's the least resource-intensive operation. Additional information can be found in [File Storage in GitLab](file_storage.md).
1. **in-controller/synchronous processing:** if we allow processing **small files** synchronously, using [disk buffered upload](#disk-buffered-upload) may speed up development.
1. **Sidekiq/asynchronous processing:** asynchronous processing must implement [direct upload](#direct-upload), because it's the only way to support Cloud Native deployments without a shared NFS.
...
@@ -120,7 +120,7 @@ We have three kinds of file encoding in our uploads:
1. <i class="fa fa-check-circle"></i> **multipart**: `multipart/form-data` is the most common; a file is encoded as a part of a multipart encoded request.
1. <i class="fa fa-check-circle"></i> **body**: some APIs upload files as the whole request body.
1. <i class="fa fa-times-circle"></i> **JSON**: some JSON APIs upload files as base64-encoded strings. This requires a change to GitLab Workhorse, which [is planned](https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/226).

## Uploading technologies
...
@@ -166,7 +166,7 @@ is replaced with the path to the corresponding file before it is forwarded to
Rails.

To prevent abuse of this feature, Workhorse signs the modified request with a
special header, stating which entries it modified. Rails ignores any
unsigned path entries.
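
The general idea can be sketched as follows. This is illustrative only: the header name and claims are hypothetical, and the real verification lives in GitLab's middleware:

```ruby
require 'jwt'

# Hypothetical verification: the proxy lists the rewritten fields in a JWT
# signed with a shared secret; Rails trusts only fields named in that token.
def verified_rewritten_fields(request, secret)
  token = request.headers['X-Example-Multipart-Fields'] # hypothetical header
  payload, _header = JWT.decode(token, secret, true, algorithm: 'HS256')
  payload.fetch('rewritten_fields', {})
end
```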
```mermaid
...
@@ -220,8 +220,8 @@ In this setup, an extra Rails route must be implemented in order to handle autho
and [its routes](https://gitlab.com/gitlab-org/gitlab/blob/cc723071ad337573e0360a879cbf99bc4fb7adb9/config/routes/git_http.rb#L31-32).
- [API endpoints for uploading packages](packages.md#file-uploads).

This falls back to _disk buffered upload_ when `direct_upload` is disabled inside the [object storage setting](../administration/uploads.md#object-storage-settings).
The answer to the `/authorize` call contains only a file system path.

```mermaid
sequenceDiagram
...
@@ -272,7 +272,7 @@ sequenceDiagram
## How to add a new upload route

In this section, we describe how to add a new upload route [accelerated](#uploading-technologies) by Workhorse for [body and multipart](#upload-encodings) encoded uploads.
...
1. Uncheck the "Pause Elasticsearch indexing" checkbox in **Admin Area > Settings > General > Advanced Search**.
## Background migrations

> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/234046) in GitLab 13.6.
...
@@ -511,7 +527,8 @@ The following are some available Rake tasks:
| [`sudo gitlab-rake gitlab:elastic:recreate_index[<TARGET_NAME>]`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/tasks/gitlab/elastic.rake) | Wrapper task for `gitlab:elastic:delete_index[<TARGET_NAME>]` and `gitlab:elastic:create_empty_index[<TARGET_NAME>]`. |
| [`sudo gitlab-rake gitlab:elastic:index_snippets`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/tasks/gitlab/elastic.rake) | Performs an Elasticsearch import that indexes the snippets data. |
| [`sudo gitlab-rake gitlab:elastic:projects_not_indexed`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/tasks/gitlab/elastic.rake) | Displays which projects are not indexed. |
| [`sudo gitlab-rake gitlab:elastic:reindex_cluster`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/tasks/gitlab/elastic.rake) | Schedules a zero-downtime cluster reindexing task. This feature should be used with an index that was created after GitLab 13.0. |
| [`sudo gitlab-rake gitlab:elastic:mark_reindex_failed`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/tasks/gitlab/elastic.rake) | Marks the most recent reindex job as failed. |

NOTE:
The `TARGET_NAME` parameter is optional and uses the default index/alias name from the current `RAILS_ENV` if not set.
...
@@ -789,7 +806,7 @@ There are a couple of ways to achieve that:
This always correctly identifies whether the current project/namespace
being searched is using Elasticsearch.

- From the admin area under **Settings > General > Advanced Search**, check that the
  Advanced Search settings are checked.

The same settings can be obtained from the Rails console if necessary:
@@ -10,7 +10,7 @@ A possible security concern when managing a public facing GitLab instance is
the ability to steal a user's IP address by referencing images in issues, comments, and so on.

For example, adding `![Example image](http://example.com/example.png)` to
an issue description causes the image to be loaded from the external
server in order to be displayed. However, this also allows the external server
to log the IP address of the user.
...
@@ -51,7 +51,7 @@ To install a Camo server as an asset proxy:
| `asset_proxy_enabled` | Enable proxying of assets. If enabled, requires `asset_proxy_url`. |
| `asset_proxy_secret_key` | Shared secret with the asset proxy server. |
| `asset_proxy_url` | URL of the asset proxy server. |
| `asset_proxy_whitelist` | Assets that match these domains are NOT proxied. Wildcards allowed. Your GitLab installation URL is automatically whitelisted. |

1. Restart the server for the changes to take effect. Each time you change any values for the asset
   proxy, you need to restart the server.
...
@@ -59,7 +59,7 @@ To install a Camo server as an asset proxy:
## Using the Camo server

Once the Camo server is running and you've enabled the GitLab settings, any image, video, or audio that
references an external source is proxied to the Camo server.

For example, the following is a link to an image in Markdown:
@@ -158,7 +158,7 @@ Users will be notified of the following events:
## Issue / Epics / Merge request events

In most of the cases below, the notification is sent to:

- Participants:
  - the author and assignee of the issue/merge request
...
@@ -193,23 +193,23 @@ To minimize the number of notifications that do not require any action, from [Gi
| New comment | The above, plus anyone mentioned by `@username` in the comment, with notification level "Mention" or higher |
| Failed pipeline | The author of the pipeline |
| Fixed pipeline | The author of the pipeline. Enabled by default. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/24309) in GitLab 13.1. |
| Successful pipeline | The author of the pipeline, if they have the custom notification setting for successful pipelines set. If the pipeline failed previously, a `Fixed pipeline` message is sent for the first successful pipeline after the failure, then a `Successful pipeline` message for any further successful pipelines. |
| New epic **(ULTIMATE)** | |
| Close epic **(ULTIMATE)** | |
| Reopen epic **(ULTIMATE)** | |

In addition, if the title or description of an Issue or Merge Request is
changed, notifications are sent to any **new** mentions by `@username` as
if they had been mentioned in the original text.

You don't receive notifications for Issues, Merge Requests or Milestones created
by yourself (except when an issue is due). You only receive automatic
notifications when somebody else comments or adds changes to the ones that
you've created or mentions you.

If an open merge request becomes unmergeable due to a conflict, its author is notified about the cause.
If a user has also set the merge request to automatically merge when the pipeline succeeds,
then that user is also notified.

## Design email notifications
...
@@ -252,7 +252,7 @@ The `X-GitLab-NotificationReason` header contains the reason for the notificatio
- `mentioned`

The reason for the notification is also included in the footer of the notification email. For example, an email with the
reason `assigned` has this sentence in the footer:

- `You are receiving this email because you have been assigned an item on <configured GitLab hostname>.`