Commit b7c276c0 authored by Amy Qualls's avatar Amy Qualls

Merge branch 'docs-ci-future-tense' into 'master'

Remove use of future tense in ci docs (part 1)

See merge request gitlab-org/gitlab!47983
parents c8b21732 9d36b452
......@@ -216,7 +216,7 @@ jsdom
JupyterHub
kanban
kanbans
Kaniko
kaniko
Karma
Kerberos
keyset
......@@ -317,6 +317,7 @@ PgBouncer
Phabricator
phaser
phasers
phpenv
Pipfile
Pipfiles
Piwik
......
......@@ -38,12 +38,12 @@ runtime dependencies needed to compile the project:
be configured to pass intermediate build results between stages, this should be
done with artifacts instead.
- `artifacts`: **Use for stage results that will be passed between stages.**
- `artifacts`: **Use for stage results that are passed between stages.**
Artifacts are files generated by a job which are stored and uploaded, and can then
be fetched and used by jobs in later stages of the **same pipeline**. In other words,
[you can't create an artifact in job-A in stage-1, and then use this artifact in job-B in stage-1](https://gitlab.com/gitlab-org/gitlab/-/issues/25837).
This data will not be available in different pipelines, but is available to be downloaded
This data is not available in different pipelines, but is available to be downloaded
from the UI.
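For illustration, a minimal sketch of passing a build directory between stages with `artifacts` (the job names, `make` commands, and `build/` path are placeholders):

```yaml
build:
  stage: build
  script:
    - make build               # placeholder build command
  artifacts:
    paths:
      - build/                 # directory produced by this job

test:
  stage: test
  script:
    - make test                # the build/ artifact is downloaded before this script runs
```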
The name `artifacts` sounds like it's only useful outside of the job, like for downloading
......@@ -87,7 +87,7 @@ cache, when declaring `cache` in your jobs, use one or a mix of the following:
- [Tag your runners](../runners/README.md#use-tags-to-limit-the-number-of-jobs-using-the-runner) and use the tag on jobs
that share their cache.
- [Use sticky runners](../runners/README.md#prevent-a-specific-runner-from-being-enabled-for-other-projects)
that will be only available to a particular project.
that are only available to a particular project.
- [Use a `key`](../yaml/README.md#cachekey) that fits your workflow (for example,
different caches on each branch). For that, you can take advantage of the
[CI/CD predefined variables](../variables/README.md#predefined-environment-variables).
......@@ -106,7 +106,7 @@ of the following must be true:
where the cache is stored in S3 buckets (like shared runners on GitLab.com).
- Use multiple runners (not in autoscale mode) of the same architecture that
share a common network-mounted directory (using NFS or something similar)
where the cache will be stored.
where the cache is stored.
TIP: **Tip:**
Read about the [availability of the cache](#availability-of-the-cache)
......@@ -125,7 +125,7 @@ cache:
While this feels like it might be safe from accidentally overwriting the cache,
it means merge requests get slow first pipelines, which might be a bad
developer experience. The next time a new commit is pushed to the branch, the
cache will be re-used.
cache is re-used.
To enable per-job and per-branch caching:
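A minimal sketch, keying the cache on the predefined `CI_JOB_NAME` and `CI_COMMIT_REF_SLUG` variables (the cached path is a placeholder):

```yaml
cache:
  key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
  paths:
    - vendor/
```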
......@@ -160,7 +160,7 @@ cache:
### Disabling cache on specific jobs
If you have defined the cache globally, it means that each job will use the
If you have defined the cache globally, it means that each job uses the
same definition. You can override this behavior per-job, and if you want to
disable it completely, use an empty hash:
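For example (the job name and script are placeholders):

```yaml
job_without_cache:
  script: echo "This job does not use the globally defined cache"
  cache: {}
```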
......@@ -431,9 +431,9 @@ Here's what happens behind the scenes:
1. `script` is executed.
1. Pipeline finishes.
By using a single runner on a single machine, you'll not have the issue where
By using a single runner on a single machine, you don't have the issue where
`job B` might execute on a runner different from `job A`, thus guaranteeing the
cache between stages. That will only work if the build goes from stage `build`
cache between stages. That only works if the build goes from stage `build`
to `test` in the same runner/machine; otherwise, you [might not have the cache
available](#cache-mismatch).
......@@ -442,7 +442,7 @@ During the caching process, there's also a couple of things to consider:
- If some other job, with another cache configuration, had saved its
cache in the same zip file, it is overwritten. If the S3 based shared cache is
used, the file is additionally uploaded to S3 to an object based on the cache
key. So, two jobs with different paths, but the same cache key, will overwrite
key. So, two jobs with different paths, but the same cache key, overwrite
their cache.
- When extracting the cache from `cache.zip`, everything in the zip file is
extracted in the job's working directory (usually the repository which is
......@@ -450,7 +450,7 @@ During the caching process, there's also a couple of things to consider:
things in the archive of `job B`.
The reason it works this way is that the cache created for one runner
often will not be valid when used by a different one which can run on a
often isn't valid when used by a different one which can run on a
**different architecture** (e.g., when the cache includes binary files). And
since the different steps might be executed by runners running on different
machines, it is a safe default.
......@@ -472,7 +472,7 @@ Let's explore some examples.
#### Examples
Let's assume you have only one runner assigned to your project, so the cache
will be stored in the runner's machine by default. If two jobs, A and B,
is stored on the runner's machine by default. If two jobs, A and B,
have the same cache key, but they cache different paths, cache B would overwrite
cache A, even if their `paths` don't match:
......@@ -506,15 +506,15 @@ job B:
1. `job B` runs.
1. The previous cache, if any, is unzipped.
1. `vendor/` is cached as cache.zip and overwrites the previous one.
1. The next time `job A` runs it will use the cache of `job B` which is different
and thus will be ineffective.
1. The next time `job A` runs, it uses the cache of `job B`, which is different
and thus ineffective.
To fix that, use different `keys` for each job.
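A sketch of what that could look like (scripts and paths are placeholders):

```yaml
job A:
  script: make a               # placeholder
  cache:
    key: keyA
    paths:
      - vendor/

job B:
  script: make b               # placeholder
  cache:
    key: keyB
    paths:
      - vendor/
```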
In another case, let's assume you have more than one runner assigned to your
project, but the distributed cache is not enabled. The second time the
pipeline is run, we want `job A` and `job B` to re-use their cache (which in this case
will be different):
is different):
```yaml
stages:
......@@ -553,7 +553,7 @@ To start with a fresh copy of the cache, there are two ways to do that.
### Clearing the cache by changing `cache:key`
All you have to do is set a new `cache:key` in your `.gitlab-ci.yml`. In the
next run of the pipeline, the cache will be stored in a different location.
next run of the pipeline, the cache is stored in a different location.
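For example, bumping a version suffix in the key is enough (the key name and path are placeholders):

```yaml
cache:
  key: build-cache-v2          # was build-cache-v1; the old cache is simply no longer used
  paths:
    - vendor/
```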
### Clearing the cache manually
......@@ -567,7 +567,7 @@ via GitLab's UI:
![Clear runner caches](img/clear_runners_cache.png)
1. On the next push, your CI/CD job will use a new cache.
1. On the next push, your CI/CD job uses a new cache.
Behind the scenes, this works by increasing a counter in the database, and the
value of that counter is used to create the key for the cache by appending an
......
......@@ -19,12 +19,12 @@ To use GitLab CI/CD with a Bitbucket Cloud repository:
![Create project](img/external_repository.png)
GitLab will import the repository and enable [Pull Mirroring](../../user/project/repository/repository_mirroring.md#pulling-from-a-remote-repository).
GitLab imports the repository and enables [Pull Mirroring](../../user/project/repository/repository_mirroring.md#pulling-from-a-remote-repository).
1. In GitLab create a
[Personal Access Token](../../user/profile/personal_access_tokens.md)
with `api` scope. This will be used to authenticate requests from the web
hook that will be created in Bitbucket to notify GitLab of new commits.
with `api` scope. This is used to authenticate requests from the web
hook that is created in Bitbucket to notify GitLab of new commits.
1. In Bitbucket, from **Settings > Webhooks**, create a new web hook to notify
GitLab of new commits.
......@@ -62,7 +62,7 @@ To use GitLab CI/CD with a Bitbucket Cloud repository:
1. In Bitbucket, add a script to push the pipeline status to Bitbucket.
> Note: changes made in GitLab will be overwritten by any changes made
> Note: changes made in GitLab are overwritten by any changes made
> upstream in Bitbucket.
Create a file `build_status` and insert the script below and run
......
......@@ -28,7 +28,7 @@ To perform a one-off authorization with GitHub to grant GitLab access your
repositories:
1. Open <https://github.com/settings/tokens/new> to create a **Personal Access
Token**. This token will be used to access your repository and push commit
Token**. This token is used to access your repository and push commit
statuses to GitHub.
The `repo` and `admin:repo_hook` scopes should be enabled to allow GitLab access to
......@@ -43,12 +43,12 @@ repositories:
1. In GitHub, add a `.gitlab-ci.yml` to [configure GitLab CI/CD](../quick_start/README.md).
GitLab will:
GitLab:
1. Import the project.
1. Enable [Pull Mirroring](../../user/project/repository/repository_mirroring.md#pulling-from-a-remote-repository)
1. Enable [GitHub project integration](../../user/project/integrations/github.md)
1. Create a web hook on GitHub to notify GitLab of new commits.
1. Imports the project.
1. Enables [Pull Mirroring](../../user/project/repository/repository_mirroring.md#pulling-from-a-remote-repository)
1. Enables [GitHub project integration](../../user/project/integrations/github.md)
1. Creates a web hook on GitHub to notify GitLab of new commits.
## Connect manually
......@@ -57,7 +57,7 @@ To use **GitHub Enterprise** with **GitLab.com**, use this method.
To manually enable GitLab CI/CD for your repository:
1. In GitHub, open <https://github.com/settings/tokens/new> to create a **Personal
Access Token.** GitLab will use this token to access your repository and
Access Token.** GitLab uses this token to access your repository and
push commit statuses.
Enter a **Token description** and update the scope to allow:
......@@ -68,7 +68,7 @@ To manually enable GitLab CI/CD for your repository:
URL for your GitHub repository. If your project is private, use the personal
access token you just created for authentication.
GitLab will automatically configure polling-based pull mirroring.
GitLab automatically configures polling-based pull mirroring.
1. Still in GitLab, enable the [GitHub project integration](../../user/project/integrations/github.md)
from **Settings > Integrations.**
......
......@@ -18,7 +18,7 @@ GitLab CI/CD can be used with:
Instead of moving your entire project to GitLab, you can connect your
external repository to get the benefits of GitLab CI/CD.
Connecting an external repository will set up [repository mirroring](../../user/project/repository/repository_mirroring.md)
Connecting an external repository sets up [repository mirroring](../../user/project/repository/repository_mirroring.md)
and creates a lightweight project with issues, merge requests, wiki, and
snippets disabled. These features
[can be re-enabled later](../../user/project/settings/index.md#sharing-and-permissions).
......@@ -74,7 +74,7 @@ If changes are pushed to the branch referenced by the Pull Request and the
Pull Request is still open, a pipeline for the external pull request is
created.
GitLab CI/CD will create 2 pipelines in this case. One for the
GitLab CI/CD creates 2 pipelines in this case. One for the
branch push and one for the external pull request.
After the Pull Request is closed, no pipelines are created for the external pull
......@@ -89,10 +89,10 @@ The variable names are prefixed with `CI_EXTERNAL_PULL_REQUEST_`.
### Limitations
This feature currently does not support Pull Requests from fork repositories. Any Pull Requests from fork repositories will be ignored. [Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/5667).
This feature currently does not support Pull Requests from fork repositories. Any Pull Requests from fork repositories are ignored. [Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/5667).
Given that GitLab will create 2 pipelines, if changes are pushed to a remote branch that
references an open Pull Request, both will contribute to the status of the Pull Request
Given that GitLab creates 2 pipelines, if changes are pushed to a remote branch that
references an open Pull Request, both contribute to the status of the Pull Request
via GitHub integration. If you want to exclusively run pipelines on external pull
requests and not on branches you can add `except: [branches]` to the job specs.
[Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/24089#workaround).
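A sketch of what that could look like for a single job (the job name and script are placeholders):

```yaml
test:
  script:
    - ./run_tests.sh           # placeholder test command
  except: [branches]
```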
......@@ -17,7 +17,7 @@ be set up.
For example, you may have a specific tool or separate website that is built
as part of your main project. Using a DAG, you can specify the relationship between
these jobs and GitLab will then execute the jobs as soon as possible instead of waiting
these jobs, and GitLab then executes the jobs as soon as possible instead of waiting
for each stage to complete.
Unlike other DAG solutions for CI/CD, GitLab does not require you to choose one or the
......@@ -44,9 +44,9 @@ It has a pipeline that looks like the following:
| build_d | test_d | deploy_d |
Using a DAG, you can relate the `_a` jobs to each other separately from the `_b` jobs,
and even if service `a` takes a very long time to build, service `b` will not
wait for it and will finish as quickly as it can. In this very same pipeline, `_c` and
`_d` can be left alone and will run together in staged sequence just like any normal
and even if service `a` takes a very long time to build, service `b` doesn't
wait for it and finishes as quickly as it can. In this very same pipeline, `_c` and
`_d` can be left alone and run together in staged sequence just like any normal
GitLab pipeline.
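A sketch of how the `_a` jobs could be related with `needs:` (the scripts are placeholders):

```yaml
build_a:
  stage: build
  script:
    - echo "build service a"

test_a:
  stage: test
  needs: [build_a]             # starts as soon as build_a finishes, without waiting for the rest of the build stage
  script:
    - echo "test service a"
```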
## Use cases
......@@ -60,7 +60,7 @@ but related microservices.
Additionally, a DAG can help with the general speediness of pipelines and with
delivering fast feedback. By creating dependency relationships that don't unnecessarily
block each other, your pipelines will run as quickly as possible regardless of
block each other, your pipelines run as quickly as possible regardless of
pipeline stages, ensuring output (including errors) is available to developers
as quickly as possible.
......@@ -88,13 +88,13 @@ are certain use cases that you may need to work around. For more information:
> - It's enabled on GitLab.com.
> - For GitLab self-managed instances, GitLab administrators can opt to [disable it](#enable-or-disable-needs-visualization).
The needs visualization makes it easier to visualize the relationships between dependent jobs in a DAG. This graph will display all the jobs in a pipeline that need or are needed by other jobs. Jobs with no relationships are not displayed in this view.
The needs visualization makes it easier to visualize the relationships between dependent jobs in a DAG. This graph displays all the jobs in a pipeline that need or are needed by other jobs. Jobs with no relationships are not displayed in this view.
To see the needs visualization, click on the **Needs** tab when viewing a pipeline that uses the `needs:` keyword.
![Needs visualization example](img/dag_graph_example_v13_1.png)
Clicking a node will highlight all the job paths it depends on.
Clicking a node highlights all the job paths it depends on.
![Needs visualization with path highlight](img/dag_graph_example_clicked_v13_1.png)
......
......@@ -381,7 +381,7 @@ content:
Update the `config.toml` file to mount the file to
`/etc/docker/daemon.json`. This would mount the file for **every**
container that is created by GitLab Runner. The configuration will be
container that is created by GitLab Runner. The configuration is
picked up by the `dind` service.
```toml
......
......@@ -37,8 +37,8 @@ few important details:
- The kaniko debug image is recommended (`gcr.io/kaniko-project/executor:debug`)
because it has a shell, and a shell is required for an image to be used with
GitLab CI/CD.
- The entrypoint will need to be [overridden](using_docker_images.md#overriding-the-entrypoint-of-an-image),
otherwise the build script will not run.
- The entrypoint needs to be [overridden](using_docker_images.md#overriding-the-entrypoint-of-an-image),
otherwise the build script doesn't run.
- A Docker `config.json` file needs to be created with the authentication
information for the desired container registry.
......@@ -47,7 +47,7 @@ In the following example, kaniko is used to:
1. Build a Docker image.
1. Then push it to [GitLab Container Registry](../../user/packages/container_registry/index.md).
The job will run only when a tag is pushed. A `config.json` file is created under
The job runs only when a tag is pushed. A `config.json` file is created under
`/kaniko/.docker` with the needed GitLab Container Registry credentials taken from the
[environment variables](../variables/README.md#predefined-environment-variables)
GitLab CI/CD provides.
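A sketch of such a job, along the lines described above (the exact flags and the shape of `config.json` may differ slightly from the full example on this page):

```yaml
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    # Write registry credentials from the predefined CI/CD variables.
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  only:
    - tags
```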
......
......@@ -15,17 +15,17 @@ using the Shell executor.
## Test PHP projects using the Docker executor
While it is possible to test PHP apps on any system, this would require manual
configuration from the developer. To overcome this we will be using the
configuration from the developer. To overcome this, we use the
official [PHP Docker image](https://hub.docker.com/_/php) that can be found on Docker Hub.
This will allow us to test PHP projects against different versions of PHP.
This allows us to test PHP projects against different versions of PHP.
However, not everything is plug 'n' play; you still need to configure some
things manually.
As with every job, you need to create a valid `.gitlab-ci.yml` describing the
build environment.
Let's first specify the PHP image that will be used for the job process
Let's first specify the PHP image that is used for the job process
(you can read more about what an image means in the runner's lingo reading
about [Using Docker images](../docker/using_docker_images.md#what-is-an-image)).
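For example, the top of the `.gitlab-ci.yml` could start with (the version tag is only an illustration):

```yaml
image: php:7.4
```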
......@@ -106,7 +106,7 @@ test:app:
### Test against different PHP versions in Docker builds
Testing against multiple versions of PHP is super easy. Just add another job
with a different Docker image version and the runner will do the rest:
with a different Docker image version and the runner does the rest:
```yaml
before_script:
......@@ -128,7 +128,7 @@ test:7.0:
### Custom PHP configuration in Docker builds
There are times where you will need to customise your PHP environment by
There are times when you need to customize your PHP environment by
putting your `.ini` file into `/usr/local/etc/php/conf.d/`. For that purpose
add a `before_script` action:
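A sketch, with illustrative file names:

```yaml
before_script:
  - cp my_php.ini /usr/local/etc/php/conf.d/test.ini
```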
......@@ -168,7 +168,7 @@ The [phpenv](https://github.com/phpenv/phpenv) project allows you to easily mana
each with its own configuration. This is especially useful when testing PHP projects
with the Shell executor.
You will have to install it on your build machine under the `gitlab-runner`
You have to install it on your build machine under the `gitlab-runner`
user following [the upstream installation guide](https://github.com/phpenv/phpenv#installation).
Using phpenv also allows you to easily configure the PHP environment with:
......@@ -181,7 +181,7 @@ phpenv config-add my_config.ini
[is abandoned](https://github.com/phpenv/phpenv/issues/57). There is a fork
at [madumlao/phpenv](https://github.com/madumlao/phpenv) that tries to bring
the project back to life. [CHH/phpenv](https://github.com/CHH/phpenv) also
seems like a good alternative. Picking any of the mentioned tools will work
seems like a good alternative. Picking any of the mentioned tools works
with the basic phpenv commands. Guiding you to choose the right phpenv is out
of the scope of this tutorial.*
......@@ -274,4 +274,4 @@ that runs on [GitLab.com](https://gitlab.com) using our publicly available
[shared runners](../runners/README.md).
Want to hack on it? Simply fork it, commit, and push your changes. Within a few
moments the changes will be picked by a public runner and the job will begin.
moments, the changes are picked up by a public runner and the job begins.
......@@ -8,7 +8,7 @@ type: concepts
# Introduction to CI/CD with GitLab
In this document, we'll present an overview of the concepts of Continuous Integration,
This document presents an overview of the concepts of Continuous Integration,
Continuous Delivery, and Continuous Deployment, as well as an introduction to
GitLab CI/CD.
......@@ -100,18 +100,18 @@ located in the root path of your repository.
In this file, you can define the scripts you want to run, define include and
cache dependencies, choose commands you want to run in sequence
and those you want to run in parallel, define where you want to
deploy your app, and specify whether you will want to run the scripts automatically
deploy your app, and specify whether you want to run the scripts automatically
or trigger any of them manually. After you're familiar with
GitLab CI/CD you can add more advanced steps into the configuration file.
To add scripts to that file, you'll need to organize them in a
To add scripts to that file, you need to organize them in a
sequence that suits your application and is in accordance with
the tests you wish to perform. To visualize the process, imagine
that all the scripts you add to the configuration file are the
same as the commands you run on a terminal on your computer.
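A minimal sketch of such a file (job names and commands are placeholders):

```yaml
build-job:
  script:
    - echo "building the app"  # the same kind of command you would type in a terminal

test-job:
  script:
    - echo "running the tests"
```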
After you've added your `.gitlab-ci.yml` configuration file to your
repository, GitLab will detect it and run your scripts with the
repository, GitLab detects it and runs your scripts with the
tool called [GitLab Runner](https://docs.gitlab.com/runner/), which
works similarly to your terminal.
......@@ -191,7 +191,7 @@ lifecycle, as shown in the illustration below.
![Deeper look into the basic CI/CD workflow](img/gitlab_workflow_example_extended_v12_3.png)
If you look at the image from the left to the right,
you'll see some of the features available in GitLab
you can see some of the features available in GitLab
according to each stage (Verify, Package, Release).
1. **Verify**:
......
......@@ -57,7 +57,7 @@ When you use this method, you have to specify `only: - merge_requests` for each
example, the pipeline contains a `test` job that is configured to run on merge requests.
The `build` and `deploy` jobs don't have the `only: - merge_requests` keyword,
so they will not run on merge requests.
so they don't run on merge requests.
```yaml
build:
......@@ -82,7 +82,7 @@ deploy:
#### Excluding certain jobs
The behavior of the `only: [merge_requests]` keyword is such that _only_ jobs with
that keyword are run in the context of a merge request; no other jobs will be run.
that keyword are run in the context of a merge request; no other jobs run.
However, you can invert this behavior and have all of your jobs run _except_
for one or two.
......@@ -120,8 +120,8 @@ C:
Therefore:
- Since `A` and `B` are getting the `only:` rule to execute in all cases, they will always run.
- Since `C` specifies that it should only run for merge requests, it will not run for any pipeline
- Since `A` and `B` are getting the `only:` rule to execute in all cases, they always run.
- Since `C` specifies that it should only run for merge requests, it doesn't run for any pipeline
except a merge request pipeline.
This helps you avoid having to add the `only:` rule to all of your jobs to make
......@@ -209,7 +209,7 @@ The variable names begin with the `CI_MERGE_REQUEST_` prefix.
If you are experiencing duplicated pipelines when using `rules`, take a look at
the [important differences between `rules` and `only`/`except`](../yaml/README.md#prevent-duplicate-pipelines),
which will help you get your starting configuration correct.
which helps you get your starting configuration correct.
If you are seeing two pipelines when using `only/except`, please see the caveats
related to using `only/except` above (or, consider moving to `rules`).
......
......@@ -19,8 +19,8 @@ the source branch have already been merged into the target branch.
If the pipeline fails due to a problem in the target branch, you can wait until the
target is fixed and re-run the pipeline.
This new pipeline will run as if the source is merged with the updated target, and you
will not need to rebase.
This new pipeline runs as if the source is merged with the updated target, and you
don't need to rebase.
The pipeline does not automatically run when the target branch changes. Only changes
to the source branch trigger a new pipeline. If a long time has passed since the last successful
......@@ -33,7 +33,7 @@ When the merge request can't be merged, the pipeline runs against the source bra
- The merge request is a [**Draft** merge request](../../../user/project/merge_requests/work_in_progress_merge_requests.md).
In these cases, the pipeline runs as a [pipeline for merge requests](../index.md)
and is labeled as `detached`. If these cases no longer exist, new pipelines will
and is labeled as `detached`. If these cases no longer exist, new pipelines
again run against the merged results.
Any user who has developer [permissions](../../../user/permissions.md) can run a
......@@ -71,7 +71,7 @@ GitLab [automatically displays](merge_trains/index.md#add-a-merge-request-to-a-m
a **Start/Add Merge Train button**.
Generally, this is a safer option than merging merge requests immediately, because your
merge request will be evaluated with an expected post-merge result before the actual
merge request is evaluated with an expected post-merge result before the actual
merge happens.
For more information, read the [documentation on Merge Trains](merge_trains/index.md).
......@@ -84,10 +84,10 @@ GitLab CI/CD can detect the presence of redundant pipelines, and cancels them
to conserve CI resources.
When a user merges a merge request immediately within an ongoing merge
train, the train will be reconstructed, as it will recreate the expected
train, the train is reconstructed, because it recreates the expected
post-merge commit and pipeline. In this case, the merge train may already
have pipelines running against the previous expected post-merge commit.
These pipelines are considered redundant and will be automatically
These pipelines are considered redundant and are automatically
canceled.
## Troubleshooting
......
......@@ -39,7 +39,7 @@ To add a merge request to a merge train, you need [permissions](../../../../user
Each merge train can run a maximum of **twenty** pipelines in parallel.
If more than twenty merge requests are added to the merge train, the merge requests
will be queued until a slot in the merge train is free. There is no limit to the
are queued until a slot in the merge train is free. There is no limit to the
number of merge requests that can be queued.
## Merge train example
......@@ -55,7 +55,7 @@ If the pipeline for `B` fails, it is removed from the train. The pipeline for
`C` restarts with the `A` and `C` changes, but without the `B` changes.
If `A` then completes successfully, it merges into the target branch, and `C` continues
to run. If more merge requests are added to the train, they will now include the `A`
to run. If more merge requests are added to the train, they now include the `A`
changes that are included in the target branch, and the `C` changes that are from
the merge request already in the train.
......@@ -152,7 +152,7 @@ is recreated and all pipelines restart.
### Merge request dropped from the merge train immediately
If a merge request is not mergeable (for example, it's a draft merge request, there is a merge
conflict, etc.), your merge request will be dropped from the merge train automatically.
conflict, etc.), your merge request is dropped from the merge train automatically.
In these cases, the reason for dropping the merge request is in the **system notes**.
......@@ -179,7 +179,7 @@ for more information.
A Merge Train pipeline cannot be retried because the merge request is dropped from the merge train upon failure. For this reason, the retry button does not appear next to the pipeline icon.
In the case of pipeline failure, you should [re-enqueue](#add-a-merge-request-to-a-merge-train) the merge request to the merge train, which will then initiate a new pipeline.
In the case of pipeline failure, you should [re-enqueue](#add-a-merge-request-to-a-merge-train) the merge request to the merge train, which then initiates a new pipeline.
### Unable to add to merge train with message "The pipeline for this merge request failed."
......@@ -195,9 +195,10 @@ you can clear the **Pipelines must succeed** check box and keep
**Enable merge trains and pipelines for merged results** (merge trains) enabled.
If you want to keep the **Pipelines must succeed** option enabled along with Merge
Trains, you can create a new pipeline for merged results when this error occurs by
going to the **Pipelines** tab and clicking **Run pipeline**. Then click
**Start/Add to merge train when pipeline succeeds**.
Trains, create a new pipeline for merged results when this error occurs:
1. Go to the **Pipelines** tab and click **Run pipeline**.
1. Click **Start/Add to merge train when pipeline succeeds**.
See [the related issue](https://gitlab.com/gitlab-org/gitlab/-/issues/35135)
for more information.
......
......@@ -47,8 +47,8 @@ For an example Performance job, see
NOTE: **Note:**
If the Browser Performance report has no data to compare, such as when you add the
Browser Performance job in your `.gitlab-ci.yml` for the very first time,
the Browser Performance report widget won't show. It must have run at least
once on the target branch (`master`, for example), before it will display in a
the Browser Performance report widget doesn't show. It must have run at least
once on the target branch (`master`, for example), before it displays in a
merge request targeting that branch.
![Browser Performance Widget](img/browser_performance_testing.png)
......@@ -81,7 +81,7 @@ The above example creates a `performance` job in your CI/CD pipeline and runs
sitespeed.io against the webpage you defined in `URL` to gather key metrics.
The example uses a CI/CD template that is included in all GitLab installations since
12.4, but it will not work with Kubernetes clusters. If you are using GitLab 12.3
12.4, but it doesn't work with Kubernetes clusters. If you are using GitLab 12.3
or older, you must [add the configuration manually](#gitlab-versions-123-and-older).
The template uses the [GitLab plugin for sitespeed.io](https://gitlab.com/gitlab-org/gl-performance),
......@@ -115,7 +115,7 @@ performance:
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/27599) in GitLab 13.0.
You can configure the sensitivity of degradation alerts to avoid getting alerts for minor drops in metrics.
This is done by setting the `DEGRADATION_THRESHOLD` variable. In the example below, the alert will only show up
This is done by setting the `DEGRADATION_THRESHOLD` variable. In the example below, the alert only shows up
if the `Total Score` metric degrades by 5 points or more:
```yaml
......@@ -181,7 +181,7 @@ performance:
### GitLab versions 12.3 and older
Browser Performance Testing has gone through several changes since its introduction.
In this section we'll detail these changes and how you can run the test based on your
In this section we detail these changes and how you can run the test based on your
GitLab version:
- In GitLab 12.4 [a job template was made available](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Verify/Browser-Performance.gitlab-ci.yml).
......
......@@ -84,8 +84,8 @@ include:
- template: Code-Quality.gitlab-ci.yml
```
The above example will create a `code_quality` job in your CI/CD pipeline which
will scan your source code for code quality issues. The report will be saved as a
The above example creates a `code_quality` job in your CI/CD pipeline which
scans your source code for code quality issues. The report is saved as a
[Code Quality report artifact](../../../ci/pipelines/job_artifacts.md#artifactsreportscodequality)
that you can later download and analyze.
......@@ -132,17 +132,17 @@ stages:
```
TIP: **Tip:**
This information will be automatically extracted and shown right in the merge request widget.
This information is automatically extracted and shown right in the merge request widget.
CAUTION: **Caution:**
On self-managed instances, if a malicious actor compromises the Code Quality job
definition they will be able to execute privileged Docker commands on the runner
definition, they could execute privileged Docker commands on the runner
host. Having proper access control policies mitigates this attack vector by
allowing access only to trusted actors.
### Disabling the code quality job
The `code_quality` job will not run if the `$CODE_QUALITY_DISABLED` environment
The `code_quality` job doesn't run if the `$CODE_QUALITY_DISABLED` environment
variable is present. Please refer to the environment variables [documentation](../../../ci/variables/README.md)
to learn more about how to define one.
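One way to define it is directly in `.gitlab-ci.yml` (it can also be set as a project CI/CD variable):

```yaml
variables:
  CODE_QUALITY_DISABLED: "true"   # per the note above, the presence of the variable is what disables the job
```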
......@@ -185,7 +185,7 @@ job1:
- if: '$CI_COMMIT_TAG' # Run job1 in pipelines for tags
```
To make these work together, you will need to overwrite the code quality `rules`
To make these work together, you need to overwrite the code quality `rules`
so that they match your current `rules`. From the example above, it could look like:
```yaml
......@@ -260,7 +260,7 @@ Once the Code Quality job has completed:
Code Quality tab of the Pipeline Details page.
- Potential changes to code quality are shown directly in the merge request.
The Code Quality widget in the merge request compares the reports from the base and head of the branch,
then lists any violations that will be resolved or created when the branch is merged.
then lists any violations that are resolved or created when the branch is merged.
- The full JSON report is available as a
[downloadable artifact](../../../ci/pipelines/job_artifacts.md#downloading-artifacts)
for the `code_quality` job.
......@@ -341,11 +341,11 @@ is still used.
This can be due to multiple reasons:
- You just added the Code Quality job in your `.gitlab-ci.yml`. The report does not
have anything to compare to yet, so no information can be displayed. Future merge
requests will have something to compare to.
have anything to compare to yet, so no information can be displayed. It only displays
after future merge requests have something to compare to.
- Your pipeline is not set to run the code quality job on your default branch. If there is no report generated from the default branch, your MR branch reports will not have anything to compare to.
- If no [degradation or error is detected](https://docs.codeclimate.com/docs/maintainability#section-checks),
nothing will be displayed.
nothing is displayed.
- The [`artifacts:expire_in`](../../../ci/yaml/README.md#artifactsexpire_in) CI/CD
setting can cause the Code Quality artifact(s) to expire faster than desired.
- Large `codeclimate.json` files (esp. >10 MB) are [known to prevent the report from being displayed](https://gitlab.com/gitlab-org/gitlab/-/issues/2737).
......