Commit 3e6c3f82 authored by Suzanne Selhorn

Merge branch 'doc/clean_up_doc_warning_latin_276202_4' into 'master'

Fix up the docs warning detected by the vale latin term rule PART 4

See merge request gitlab-org/gitlab!66325
parents 711b58c5 e4dc09b3
@@ -193,7 +193,7 @@ Combined with [protected branches](../../../user/project/protected_branches.md),

For the full list of options, see Vault's [Create Role documentation](https://www.vaultproject.io/api/auth/jwt#create-role).

WARNING:
-Always restrict your roles to project or namespace by using one of the provided claims (e.g. `project_id` or `namespace_id`). Otherwise any JWT generated by this instance may be allowed to authenticate using this role.
+Always restrict your roles to project or namespace by using one of the provided claims (for example, `project_id` or `namespace_id`). Otherwise any JWT generated by this instance may be allowed to authenticate using this role.

Now, configure the JWT Authentication method:
@@ -123,7 +123,7 @@ Therefore, for a production environment we use additional steps to ensure that a

Since this was a WordPress project, I gave real life code snippets. Some further ideas you can pursue:

- Having a slightly different script for the default branch allows you to deploy to a production server from that branch and to a stage server from any other branches.
-- Instead of pushing it live, you can push it to WordPress official repository (with creating a SVN commit, etc.).
+- Instead of pushing it live, you can push it to WordPress official repository.
- You could generate i18n text domains on the fly.

---
@@ -9,7 +9,7 @@ type: reference

> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/9788) in [GitLab Premium](https://about.gitlab.com/pricing/) 11.10. Requires GitLab Runner 11.10 and above.

-GitLab provides a lot of great reporting tools for [merge requests](../user/project/merge_requests/index.md) - [Unit test reports](unit_test_reports.md), [code quality](../user/project/merge_requests/code_quality.md), performance tests, etc. While JUnit is a great open framework for tests that "pass" or "fail", it is also important to see other types of metrics from a given change.
+GitLab provides a lot of great reporting tools for things like [merge requests](../user/project/merge_requests/index.md) - [Unit test reports](unit_test_reports.md), [code quality](../user/project/merge_requests/code_quality.md), and performance tests. While JUnit is a great open framework for tests that "pass" or "fail", it is also important to see other types of metrics from a given change.

You can configure your job to use custom Metrics Reports, and GitLab displays a report on the merge request so that it's easier and faster to identify changes without having to check the entire log.
@@ -153,8 +153,8 @@ is recreated and all pipelines restart.

### Merge request dropped from the merge train immediately

-If a merge request is not mergeable (for example, it's a draft merge request, there is a merge
-conflict, etc.), your merge request is dropped from the merge train automatically.
+If a merge request is not mergeable (for example, it's a draft merge request or it has a merge
+conflict), the merge train drops your merge request automatically.

In these cases, the reason for dropping the merge request is in the **system notes**.
@@ -13,7 +13,7 @@ environment (where the GitLab Runner runs).

The SSH keys can be useful when:

1. You want to checkout internal submodules
-1. You want to download private packages using your package manager (e.g., Bundler)
+1. You want to download private packages using your package manager (for example, Bundler)
1. You want to deploy your application to your own server, or, for example, Heroku
1. You want to execute SSH commands from the build environment to a remote server
1. You want to rsync files from the build environment to a remote server

@@ -21,9 +21,9 @@ The SSH keys can be useful when:

If anything of the above rings a bell, then you most likely need an SSH key.

The most widely supported method is to inject an SSH key into your build
-environment by extending your `.gitlab-ci.yml`, and it's a solution which works
+environment by extending your `.gitlab-ci.yml`, and it's a solution that works
with any type of [executor](https://docs.gitlab.com/runner/executors/)
-(Docker, shell, etc.).
+(like Docker or shell, for example).

## How it works
@@ -113,7 +113,7 @@ Feature.disable(:variable_inside_variable, Project.find(<project id>))

- Supported: project/group variables, `.gitlab-ci.yml` variables, `config.toml` variables, and
variables from triggers, pipeline schedules, and manual pipelines.
-- Not supported: variables defined inside of scripts (e.g., `export MY_VARIABLE="test"`).
+- Not supported: variables defined inside of scripts (for example, `export MY_VARIABLE="test"`).

The runner uses Go's `os.Expand()` method for variable expansion. It means that it handles
only variables defined as `$variable` and `${variable}`. What's also important, is that

@@ -132,7 +132,7 @@ use a different variables syntax.

Supported:

-- The `script` may use all available variables that are default for the shell (e.g., `$PATH` which
+- The `script` may use all available variables that are default for the shell (for example, `$PATH` which
should be present in all bash/sh shells) and all variables defined by GitLab CI/CD (project/group variables,
`.gitlab-ci.yml` variables, `config.toml` variables, and variables from triggers and pipeline schedules).
- The `script` may also use all variables defined in the lines before. So, for example, if you define
@@ -79,7 +79,7 @@ especially the case for small tables.

If a table is expected to grow in size and you expect your query has to filter
out a lot of rows you may want to consider adding an index. If the table size is
-very small (e.g. less than `1,000` records) or any existing indexes filter out
+very small (for example, fewer than `1,000` records) or any existing indexes filter out
enough rows you may _not_ want to add a new index.
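Where an index is warranted, it is typically added concurrently so writes are not blocked; a minimal sketch, assuming a hypothetical index on `issues.closed_at` (table, column, and class name are only illustrative):

```ruby
# Illustrative only: adds an index concurrently so writes are not blocked.
class AddIndexToIssuesClosedAt < ActiveRecord::Migration[6.1]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  INDEX_NAME = 'index_issues_on_closed_at'

  def up
    add_concurrent_index :issues, :closed_at, name: INDEX_NAME
  end

  def down
    remove_concurrent_index_by_name :issues, INDEX_NAME
  end
end
```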

## Maintenance Overhead
@@ -90,4 +90,4 @@ In addition, any system dependencies used in Omnibus packages or the Cloud Nativ

If the service component needs to be updated or released with the monthly GitLab release, then the component should be added to the [release tools automation](https://gitlab.com/gitlab-org/release-tools). This project is maintained by the [Delivery team](https://about.gitlab.com/handbook/engineering/infrastructure/team/delivery/). A list of the projects managed this way can be found in the [release tools project directory](https://about.gitlab.com/handbook/engineering/infrastructure/team/delivery/).

-For example, during the monthly GitLab release, the desired version of Gitaly, GitLab Workhorse, GitLab Shell, etc., need to synchronized through the various release pipelines.
+For example, during the monthly GitLab release, the desired version of Gitaly, GitLab Workhorse and GitLab Shell need to be synchronized through the various release pipelines.
@@ -148,7 +148,7 @@ to make this easier.

## Using HTTP status helpers

-For non-200 HTTP responses, use the provided helpers in `lib/api/helpers.rb` to ensure correct behavior (`not_found!`, `no_content!` etc.). These `throw` inside Grape and abort the execution of your endpoint.
+For non-200 HTTP responses, use the provided helpers in `lib/api/helpers.rb` to ensure correct behavior (like `not_found!` or `no_content!`). These `throw` inside Grape and abort the execution of your endpoint.

For `DELETE` requests, you should also generally use the `destroy_conditionally!` helper which by default returns a `204 No Content` response on success, or a `412 Precondition Failed` response if the given `If-Unmodified-Since` header is out of range. This helper calls `#destroy` on the passed resource, but you can also implement a custom deletion method by passing a block.
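For illustration, a rough sketch of how these helpers are typically combined in a Grape endpoint; the `Widgets` resource and route below are invented, while `not_found!`, `destroy_conditionally!`, and `user_project` are the real helpers:

```ruby
# Hypothetical endpoint; only the helper calls reflect real GitLab API helpers.
module API
  class Widgets < ::API::Base
    resource :projects do
      desc 'Delete a widget from a project'
      delete ':id/widgets/:widget_id' do
        widget = user_project.widgets.find_by_id(params[:widget_id])

        # Renders a 404 and aborts the endpoint when the record is missing.
        not_found!('Widget') unless widget

        # Returns 204 No Content on success, or 412 Precondition Failed when
        # the If-Unmodified-Since header is out of range.
        destroy_conditionally!(widget)
      end
    end
  end
end
```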
@@ -235,7 +235,7 @@ updating many rows in sequence.

To reduce database pressure you should instead use
`change_column_type_using_background_migration` or `rename_column_using_background_migration`
-when migrating a column in a large table (e.g. `issues`). These methods work
+when migrating a column in a large table (for example, `issues`). These methods work
similarly to the concurrent counterparts but uses background migration to spread
the work / load over a longer time period, without slowing down deployments.

@@ -402,7 +402,7 @@ into errors. On the other hand, if we were to migrate after deploying the

application code we could run into the same problems.

If you merely need to correct some invalid data, then a post-deployment
-migration is usually enough. If you need to change the format of data (e.g. from
+migration is usually enough. If you need to change the format of data (for example, from
JSON to something else) it's typically best to add a new column for the new data
format, and have the application use that. In such a case the procedure would
be:
@@ -31,7 +31,7 @@ Some examples where background migrations can be useful:

- Migrating events from one table to multiple separate tables.
- Populating one column based on JSON stored in another column.
-- Migrating data that depends on the output of external services (e.g. an API).
+- Migrating data that depends on the output of external services (for example, an API).

NOTE:
If the background migration is part of an important upgrade, make sure it's announced

@@ -40,7 +40,7 @@ into this category.

## Isolation

-Background migrations must be isolated and can not use application code (e.g.
+Background migrations must be isolated and can not use application code (for example,
models defined in `app/models`). Since these migrations can take a long time to
run it's possible for new versions to be deployed while they are still running.

@@ -157,7 +157,7 @@ Because background migrations can take a long time you can't immediately clean

things up after scheduling them. For example, you can't drop a column that's
used in the migration process as this would cause jobs to fail. This means that
you'll need to add a separate _post deployment_ migration in a future release
-that finishes any remaining jobs before cleaning things up (e.g. removing a
+that finishes any remaining jobs before cleaning things up (for example, removing a
column).

As an example, say you want to migrate the data from column `foo` (containing a

@@ -167,7 +167,7 @@ roughly be as follows:

1. Release A:
1. Create a migration class that perform the migration for a row with a given ID.
1. Deploy the code for this release, this should include some code that will
-schedule jobs for newly created data (e.g. using an `after_create` hook).
+schedule jobs for newly created data (for example, using an `after_create` hook).
1. Schedule jobs for all existing rows in a post-deployment migration. It's
possible some newly created rows may be scheduled twice so your migration
should take care of this.

@@ -178,7 +178,7 @@ roughly be as follows:

`BackgroundMigrationHelpers` to ensure no jobs remain. This helper will:
1. Use `Gitlab::BackgroundMigration.steal` to process any remaining
jobs in Sidekiq.
-1. Reschedule the migration to be run directly (i.e. not through Sidekiq)
+1. Reschedule the migration to be run directly (that is, not through Sidekiq)
on any rows that weren't migrated by Sidekiq. This can happen if, for
instance, Sidekiq received a SIGKILL, or if a particular batch failed
enough times to be marked as dead.
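As a rough sketch of such a migration class (the class name and the `bar` column are invented for illustration; the `Gitlab::BackgroundMigration` namespace, the `perform` contract, and the isolation rule above are the actual constraints):

```ruby
# frozen_string_literal: true

module Gitlab
  module BackgroundMigration
    # Illustrative class: copies data from `foo` into `bar` for one ID range.
    class CopyFooToBar
      # Isolated ActiveRecord class: background migrations must not reuse
      # models defined in app/models.
      class Issue < ActiveRecord::Base
        self.table_name = 'issues'
      end

      def perform(start_id, end_id)
        Issue.where(id: start_id..end_id).where.not(foo: nil).find_each do |issue|
          issue.update_column(:bar, issue.foo)
        end
      end
    end
  end
end
```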
@@ -29,7 +29,7 @@ trigger.

## Specifying versions of components

If you want to create a package from a specific branch, commit or tag of any of
-the GitLab components (like GitLab Workhorse, Gitaly, GitLab Pages, etc.), you
+the GitLab components (like GitLab Workhorse, Gitaly, or GitLab Pages), you
can specify the branch name, commit SHA or tag in the component's respective
`*_VERSION` file. For example, if you want to build a package that uses the
branch `0-1-stable`, modify the content of `GITALY_SERVER_VERSION` to
@@ -45,7 +45,7 @@ processing it, and returns any syntax or semantic errors. The `YAML Processor` c

[all the keywords](../../ci/yaml/index.md) available to structure a pipeline.

The `CreatePipelineService` receives the abstract data structure returned by the `YAML Processor`,
-which then converts it to persisted models (pipeline, stages, jobs, etc.). After that, the pipeline is ready
+which then converts it to persisted models (like pipeline, stages, and jobs). After that, the pipeline is ready
to be processed. Processing a pipeline means running the jobs in order of execution (stage or DAG)
until either one of the following:

@@ -77,7 +77,7 @@ that need to be stored. Also, a job may depend on artifacts from previous jobs i

case the runner downloads them using a dedicated API endpoint.

Artifacts are stored in object storage, while metadata is kept in the database. An important example of artifacts
-are reports (JUnit, SAST, DAST, etc.) which are parsed and rendered in the merge request.
+are reports (like JUnit, SAST, and DAST) which are parsed and rendered in the merge request.

Job status transitions are not all automated. A user may run [manual jobs](../../ci/yaml/index.md#whenmanual), cancel a pipeline, retry
specific failed jobs or the entire pipeline. Anything that
@@ -52,8 +52,8 @@ When self-identifying as a domain expert, it is recommended to assign the MR cha

We make the following assumption with regards to automatically being considered a domain expert:

-- Team members working in a specific stage/group (e.g. create: source code) are considered domain experts for that area of the app they work on
-- Team members working on a specific feature (e.g. search) are considered domain experts for that feature
+- Team members working in a specific stage/group (for example, create: source code) are considered domain experts for that area of the app they work on
+- Team members working on a specific feature (for example, search) are considered domain experts for that feature

We default to assigning reviews to team members with domain expertise.
When a suitable [domain expert](#domain-experts) isn't available, you can choose any team member to review the MR, or simply follow the [Reviewer roulette](#reviewer-roulette) recommendation.
@@ -12,7 +12,7 @@ GitLab community members and their privileges/responsibilities.

|-------|------------------|--------------|
| Maintainer | Accepts merge requests on several GitLab projects | Added to the [team page](https://about.gitlab.com/company/team/). An expert on code reviews and knows the product/codebase |
| Reviewer | Performs code reviews on MRs | Added to the [team page](https://about.gitlab.com/company/team/) |
-| Developer |Has access to GitLab internal infrastructure & issues (e.g. HR-related) | GitLab employee or a Core Team member (with an NDA) |
+| Developer |Has access to GitLab internal infrastructure & issues (for example, HR-related) | GitLab employee or a Core Team member (with an NDA) |
| Contributor | Can make contributions to all GitLab public projects | Have a GitLab.com account |

[List of current reviewers/maintainers](https://about.gitlab.com/handbook/engineering/projects/#gitlab-ce).
@@ -191,7 +191,7 @@ If you are not sure who to mention, the reviewer will do this for you early in t

A "breaking change" is any change that requires users to make a corresponding change to their code, settings, or workflow. "Users" might be humans, API clients, or even code classes that "use" another class. Examples of breaking changes include:

- Removing a user-facing feature without a replacement/workaround.
-- Changing the definition of an existing API (by re-naming query parameters, changing routes, etc.).
+- Changing the definition of an existing API (by doing things like re-naming query parameters or changing routes).
- Removing a public method from a code class.

A breaking change can be considered "major" if it affects many users, or represents a significant change in behavior.
@@ -47,11 +47,11 @@ scheduling into milestones. Labeling is a task for everyone. (For some projects,

Most issues will have labels for at least one of the following:

-- Type: `~feature`, `~bug`, `~tooling`, `~documentation`, etc.
-- Stage: `~"devops::plan"`, `~"devops::create"`, etc.
-- Group: `~"group::source code"`, `~"group::knowledge"`, `~"group::editor"`, etc.
-- Category: `~"Category:Code Analytics"`, `~"Category:DevOps Reports"`, `~"Category:Templates"`, etc.
-- Feature: `~wiki`, `~ldap`, `~api`, `~issues`, `~"merge requests"`, etc.
+- Type. For example: `~feature`, `~bug`, `~tooling`, or `~documentation`.
+- Stage. For example: `~"devops::plan"` or `~"devops::create"`.
+- Group. For example: `~"group::source code"`, `~"group::knowledge"`, or `~"group::editor"`.
+- Category. For example: `~"Category:Code Analytics"`, `~"Category:DevOps Reports"`, or `~"Category:Templates"`.
+- Feature. For example: `~wiki`, `~ldap`, `~api`, `~issues`, or `~"merge requests"`.
- Department: `~UX`, `~Quality`
- Team: `~"Technical Writing"`, `~Delivery`
- Specialization: `~frontend`, `~backend`, `~documentation`

@@ -201,7 +201,7 @@ If you are an expert in a particular area, it makes it easier to find issues to

work on. You can also subscribe to those labels to receive an email each time an
issue is labeled with a feature label corresponding to your expertise.

-Examples of feature labels are `~wiki`, `~ldap`, `~api`, `~issues`, `~"merge requests"` etc.
+Examples of feature labels are `~wiki`, `~ldap`, `~api`, `~issues`, and `~"merge requests"`.

#### Naming and color convention

@@ -223,7 +223,7 @@ The current department labels are:

### Team labels

-**Important**: Most of the historical team labels (e.g. Manage, Plan etc.) are
+**Important**: Most of the historical team labels (like Manage or Plan) are
now deprecated in favor of [Group labels](#group-labels) and [Stage labels](#stage-labels).

Team labels specify what team is responsible for this issue.
@@ -23,8 +23,8 @@ wireframes of the proposed feature if it will also change the UI.

Merge requests should be submitted to the appropriate project at GitLab.com, for example
[GitLab](https://gitlab.com/gitlab-org/gitlab/-/merge_requests),
-[GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner/-/merge_requests),
-[Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests), etc.
+[GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner/-/merge_requests), or
+[Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests).

If you are new to GitLab development (or web development in general), see the
[how to contribute](index.md#how-to-contribute) section to get started with

@@ -69,7 +69,7 @@ request is as follows:

request addresses. Referenced issues do not [close automatically](../../user/project/issues/managing_issues.md#closing-issues-automatically).
You must close them manually once the merge request is merged.
1. The MR must include *Before* and *After* screenshots if UI changes are made.
-1. Include any steps or setup required to ensure reviewers can view the changes you've made (e.g. include any information about feature flags).
+1. Include any steps or setup required to ensure reviewers can view the changes you've made (for example, include any information about feature flags).
1. If you're allowed to, set a relevant milestone and [labels](issue_workflow.md).
1. UI changes should use available components from the GitLab Design System,
[Pajamas](https://design.gitlab.com/).

@@ -204,7 +204,7 @@ the contribution acceptance criteria below:

only one working on your feature branch, otherwise merge `main`.
1. Only one specific issue is fixed or one specific feature is implemented. Do not
combine things; send separate merge requests for each issue or feature.
-1. Migrations should do only one thing (e.g., create a table, move data to a new
+1. Migrations should do only one thing (for example, create a table, move data to a new
table, or remove an old table) to aid retrying on failure.
1. Contains functionality that other users will benefit from.
1. Doesn't add configuration options or settings options since they complicate making

@@ -214,7 +214,7 @@ the contribution acceptance criteria below:

- Check for N+1 queries via the SQL log or [`QueryRecorder`](../merge_request_performance_guidelines.md).
- Avoid repeated access of the file system.
- Use [polling with ETag caching](../polling.md) if needed to support real-time features.
-1. If the merge request adds any new libraries (gems, JavaScript libraries, etc.),
+1. If the merge request adds any new libraries (like gems or JavaScript libraries),
they should conform to our [Licensing guidelines](../licensing.md). See those
instructions for help if the "license-finder" test fails with a
`Dependencies that need approval` error. Also, make the reviewer aware of the new

@@ -272,7 +272,7 @@ request:

We allow engineering time to fix small problems (with or without an
issue) that are incremental improvements, such as:

-1. Unprioritized bug fixes (e.g. [Banner alerting of project move is
+1. Unprioritized bug fixes (for example, [Banner alerting of project move is
showing up everywhere](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18985))
1. Documentation improvements
1. Rubocop or Code Quality improvements
@@ -73,9 +73,9 @@ end

This works as-is, however, it has a couple of downside that:

- Someone could define a key/value pair in EE that is **conflicted** with a value defined in FOSS.
-e.g. Define `activity_limit_exceeded: 1` in `EE::Enums::Pipeline`.
+For example, define `activity_limit_exceeded: 1` in `EE::Enums::Pipeline`.
- When it happens, the feature works totally different.
-e.g. We cannot figure out `failure_reason` is either `config_error` or `activity_limit_exceeded`.
+For example, we cannot figure out `failure_reason` is either `config_error` or `activity_limit_exceeded`.
- When it happens, we have to ship a database migration to fix the data integrity,
which might be impossible if you cannot recover the original value.

@@ -98,7 +98,7 @@ end

This looks working as a workaround, however, this approach has some downsides that:

- Features could move from EE to FOSS or vice versa. Therefore, the offset might be mixed between FOSS and EE in the future.
-e.g. When you move `activity_limit_exceeded` to FOSS, you'll see `{ unknown_failure: 0, config_error: 1, activity_limit_exceeded: 1_000 }`.
+For example, when you move `activity_limit_exceeded` to FOSS, you'll see `{ unknown_failure: 0, config_error: 1, activity_limit_exceeded: 1_000 }`.
- The integer column for the `enum` is likely created [as `SMALLINT`](#creating-enums).
Therefore, you need to be careful of that the offset doesn't exceed the maximum value of 2 bytes integer.
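To make the collision risk concrete, a simplified sketch of the offset workaround discussed above, using the values from this page (module layout abbreviated):

```ruby
# Simplified sketch; the real code lives in Enums::Pipeline (FOSS) and
# EE::Enums::Pipeline (EE).
module Enums
  module Pipeline
    def self.failure_reasons
      { unknown_failure: 0, config_error: 1 }
    end
  end
end

module EE
  module Enums
    module Pipeline
      # EE-only values start at 1_000 so they cannot collide with FOSS values.
      def self.failure_reasons
        { activity_limit_exceeded: 1_000 }
      end
    end
  end
end

# The model then combines both hashes, for example:
#   enum failure_reason: Enums::Pipeline.failure_reasons
#                          .merge(EE::Enums::Pipeline.failure_reasons)
```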
@@ -58,7 +58,7 @@ different releases:

1. Release `N.M` (current release)

-- Ensure the constraint is enforced at the application level (i.e. add a model validation).
+- Ensure the constraint is enforced at the application level (that is, add a model validation).
- Add a post-deployment migration to add the `NOT NULL` constraint with `validate: false`.
- Add a post-deployment migration to fix the existing records.
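A minimal sketch of that second step, assuming a hypothetical `epics.description` column (`add_not_null_constraint` is the real migration helper; the table, column, and class name are only illustrative):

```ruby
# Illustrative post-deployment migration: adds the constraint without
# validating existing rows, which are fixed in a separate migration.
class AddNotNullConstraintToEpicsDescription < ActiveRecord::Migration[6.1]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  def up
    add_not_null_constraint :epics, :description, validate: false
  end

  def down
    remove_not_null_constraint :epics, :description
  end
end
```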
@@ -128,7 +128,7 @@ test its execution using `CREATE INDEX CONCURRENTLY` in the `#database-lab` Slac

- Write the raw SQL in the MR description. Preferably formatted
nicely with [pgFormatter](https://sqlformat.darold.net) or
[paste.depesz.com](https://paste.depesz.com) and using regular quotes
-(e.g. `"projects"."id"`) and avoiding smart quotes (e.g. `“projects”.“id”`).
+(for example, `"projects"."id"`) and avoiding smart quotes (for example, `“projects”.“id”`).
- In case of queries generated dynamically by using parameters, there should be one raw SQL query for each variation.
For example, a finder for issues that may take as a parameter an optional filter on projects,
@@ -21,8 +21,8 @@ Instead of deleting we can opt for disabling the migration.

Migrations can be disabled if:

- They caused a timeout or general issue on GitLab.com.
-- They are obsoleted, e.g. changes are not necessary due to a feature change.
-- Migration is a data migration only, i.e. the migration does not change the database schema.
+- They are obsoleted, for example, changes are not necessary due to a feature change.
+- Migration is a data migration only, that is, the migration does not change the database schema.

## How to disable a data migration?
@@ -72,7 +72,7 @@ could make them shallow and more coupled with other contexts.

Bounded contexts (or top-level namespaces) can be seen as macro-components in the overall app.
Good bounded contexts should be [deep](https://medium.com/@nakabonne/depth-of-module-f62dac3c2fdb)
so consider having nested namespaces to further break down complex parts of the domain.
-E.g. `Ci::Config::`.
+For example, `Ci::Config::`.

For example, instead of having separate and granular bounded contexts like: `ContainerScanning::`,
`ContainerHostSecurity::`, `ContainerNetworkSecurity::`, we could have:
@@ -174,10 +174,10 @@ There are a few gotchas with it:

implementation, you should refactor the CE method and split it in
smaller methods. Or create a "hook" method that is empty in CE,
and with the EE-specific implementation in EE.
-- when the original implementation contains a guard clause (e.g.
+- when the original implementation contains a guard clause (for example,
`return unless condition`), we cannot easily extend the behavior by
overriding the method, because we can't know when the overridden method
-(i.e. calling `super` in the overriding method) would want to stop early.
+(that is, calling `super` in the overriding method) would want to stop early.
In this case, we shouldn't just override it, but update the original method
to make it call the other method we want to extend, like a [template method
pattern](https://en.wikipedia.org/wiki/Template_method_pattern).
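A small sketch of that refactoring with invented names: CE keeps the guard clause and calls a hook method, and EE overrides only the hook:

```ruby
# CE: the guard clause stays here; the extensible part is extracted into a
# separate hook method (names are illustrative).
class SomeService
  def execute
    return unless allowed?

    do_execute
  end

  private

  def do_execute
    # CE behavior
  end
end

# EE: extend only the hook, so the CE guard clause keeps working.
# (The EE module is then prepended into the CE class.)
module EE
  module SomeService
    extend ::Gitlab::Utils::Override

    override :do_execute
    def do_execute
      super
      # EE-specific behavior
    end
  end
end
```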
@@ -522,10 +522,10 @@ Resolving an EE template path that is relative to the CE view path doesn't work.

For `render` and `render_if_exists`, they search for the EE partial first,
and then CE partial. They would only render a particular partial, not all
partials with the same name. We could take the advantage of this, so that
-the same partial path (e.g. `shared/issuable/form/default_templates`) could
-be referring to the CE partial in CE (i.e.
+the same partial path (for example, `shared/issuable/form/default_templates`) could
+be referring to the CE partial in CE (that is,
`app/views/shared/issuable/form/_default_templates.html.haml`), while EE
-partial in EE (i.e.
+partial in EE (that is,
`ee/app/views/shared/issuable/form/_default_templates.html.haml`). This way,
we could show different things between CE and EE.

@@ -549,8 +549,8 @@ In the above example, we can't use

`render 'shared/issuable/form/default_templates'` because it would find the
same EE partial, causing infinite recursion. Instead, we could use `render_ce`
so it ignores any partials in `ee/` and then it would render the CE partial
-(i.e. `app/views/shared/issuable/form/_default_templates.html.haml`)
-for the same path (i.e. `shared/issuable/form/default_templates`). This way
+(that is, `app/views/shared/issuable/form/_default_templates.html.haml`)
+for the same path (that is, `shared/issuable/form/default_templates`). This way
we could easily wrap around the CE partial.

### Code in `lib/`
@@ -1107,7 +1107,7 @@ If a component you're adding styles for is limited to EE, it is better to have a

separate SCSS file in an appropriate directory within `app/assets/stylesheets`.

In some cases, this is not entirely possible or creating dedicated SCSS file is an overkill,
-e.g. a text style of some component is different for EE. In such cases,
+for example, a text style of some component is different for EE. In such cases,
styles are usually kept in a stylesheet that is common for both CE and EE, and it is wise
to isolate such ruleset from rest of CE rules (along with adding comment describing the same)
to avoid conflicts during CE to EE merge.
@@ -142,13 +142,13 @@ forwarded to both indices. Once the new index is ready, an admin can

mark it active, which will direct all searches to it, and remove the old
index.

-This is also helpful for migrating to new servers, e.g. moving to/from AWS.
+This is also helpful for migrating to new servers, for example, moving to/from AWS.

Currently we are on the process of migrating to this new design. Everything is hardwired to work with one single version for now.

### Architecture

-The traditional setup, provided by `elasticsearch-rails`, is to communicate through its internal proxy classes. Developers would write model-specific logic in a module for the model to include in (e.g. `SnippetsSearch`). The `__elasticsearch__` methods would return a proxy object, e.g.:
+The traditional setup, provided by `elasticsearch-rails`, is to communicate through its internal proxy classes. Developers would write model-specific logic in a module for the model to include in (for example, `SnippetsSearch`). The `__elasticsearch__` methods would return a proxy object, for example:

- `Issue.__elasticsearch__` returns an instance of `Elasticsearch::Model::Proxy::ClassMethodsProxy`
- `Issue.first.__elasticsearch__` returns an instance of `Elasticsearch::Model::Proxy::InstanceMethodsProxy`.

@@ -171,7 +171,7 @@ The global configurations per version are now in the `Elastic::(Version)::Config

NOTE:
This is not applicable yet as multiple indices functionality is not fully implemented.

-Folders like `ee/lib/elastic/v12p1` contain snapshots of search logic from different versions. To keep a continuous Git history, the latest version lives under `ee/lib/elastic/latest`, but its classes are aliased under an actual version (e.g. `ee/lib/elastic/v12p3`). When referencing these classes, never use the `Latest` namespace directly, but use the actual version (e.g. `V12p3`).
+Folders like `ee/lib/elastic/v12p1` contain snapshots of search logic from different versions. To keep a continuous Git history, the latest version lives under `ee/lib/elastic/latest`, but its classes are aliased under an actual version (for example, `ee/lib/elastic/v12p3`). When referencing these classes, never use the `Latest` namespace directly, but use the actual version (for example, `V12p3`).

The version name basically follows the GitLab release version. If setting is changed in 12.3, we will create a new namespace called `V12p3` (p stands for "point"). Raise an issue if there is a need to name a version differently.

@@ -254,7 +254,7 @@ class BatchedMigrationName < Elastic::Migration

throttle_delay 10.minutes
pause_indexing!
space_requirements!
# ...
end
```
@@ -59,7 +59,7 @@ Shared Global Object's solve the problem of making something globally accessible

could be appropriate:

- When a responsibility is truly global and should be referenced across the application
-(e.g., an application-wide Event Bus).
+(for example, an application-wide Event Bus).

Even in these scenarios, please consider avoiding the Shared Global Object pattern because the
side-effects can be notoriously difficult to reason with.

@@ -136,8 +136,8 @@ many problems with a module that exports utility functions.

Singletons solve the problem of enforcing there to be only 1 instance of a thing. It's possible
that a Singleton could be appropriate in the following rare cases:

-- We need to manage some resource that **MUST** have just 1 instance (i.e. some hardware restriction).
-- There is a real [cross-cutting concern](https://en.wikipedia.org/wiki/Cross-cutting_concern) (e.g., logging) and a Singleton provides the simplest API.
+- We need to manage some resource that **MUST** have just 1 instance (that is, some hardware restriction).
+- There is a real [cross-cutting concern](https://en.wikipedia.org/wiki/Cross-cutting_concern) (for example, logging) and a Singleton provides the simplest API.

Even in these scenarios, please consider avoiding the Singleton pattern.

@@ -174,7 +174,7 @@ export const fuzzify = (id) => { /* ... */ };

#### Dependency Injection

[Dependency Injection](https://en.wikipedia.org/wiki/Dependency_injection) is an approach which breaks
-coupling by declaring a module's dependencies to be injected from outside the module (e.g., through constructor parameters, a bona-fide Dependency Injection framework, and even Vue's `provide/inject`).
+coupling by declaring a module's dependencies to be injected from outside the module (for example, through constructor parameters, a bona-fide Dependency Injection framework, and even Vue's `provide/inject`).

```javascript
// bad - Vue component coupled to Singleton