Commit b7c276c0 authored by Amy Qualls

Merge branch 'docs-ci-future-tense' into 'master'

Remove use of future tense in ci docs (part 1)

See merge request gitlab-org/gitlab!47983
parents c8b21732 9d36b452
@@ -216,7 +216,7 @@ jsdom
JupyterHub
kanban
kanbans
kaniko
Karma
Kerberos
keyset
@@ -317,6 +317,7 @@ PgBouncer
Phabricator
phaser
phasers
phpenv
Pipfile
Pipfiles
Piwik
...
@@ -38,12 +38,12 @@ runtime dependencies needed to compile the project:
be configured to pass intermediate build results between stages, this should be
done with artifacts instead.
- `artifacts`: **Use for stage results that are passed between stages.**
  Artifacts are files generated by a job which are stored and uploaded, and can then
  be fetched and used by jobs in later stages of the **same pipeline**. In other words,
  [you can't create an artifact in job-A in stage-1, and then use this artifact in job-B in stage-1](https://gitlab.com/gitlab-org/gitlab/-/issues/25837).
  This data is not available in different pipelines, but is available to be downloaded
  from the UI.
The name `artifacts` sounds like it's only useful outside of the job, like for downloading
@@ -87,7 +87,7 @@ cache, when declaring `cache` in your jobs, use one or a mix of the following:
- [Tag your runners](../runners/README.md#use-tags-to-limit-the-number-of-jobs-using-the-runner) and use the tag on jobs
  that share their cache.
- [Use sticky runners](../runners/README.md#prevent-a-specific-runner-from-being-enabled-for-other-projects)
  that are only available to a particular project.
- [Use a `key`](../yaml/README.md#cachekey) that fits your workflow (for example,
  different caches on each branch). For that, you can take advantage of the
  [CI/CD predefined variables](../variables/README.md#predefined-environment-variables).
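
  For example, a minimal per-branch cache sketch using a predefined variable (the cached path is only illustrative):

  ```yaml
  cache:
    key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
    paths:
      - node_modules/
  ```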
@@ -106,7 +106,7 @@ of the following must be true:
where the cache is stored in S3 buckets (like shared runners on GitLab.com).
- Use multiple runners (not in autoscale mode) of the same architecture that
  share a common network-mounted directory (using NFS or something similar)
  where the cache is stored.
TIP: **Tip:**
Read about the [availability of the cache](#availability-of-the-cache)
@@ -125,7 +125,7 @@ cache:
While this feels like it might be safe from accidentally overwriting the cache,
it means merge requests get slow first pipelines, which might be a bad
developer experience. The next time a new commit is pushed to the branch, the
cache is re-used.
To enable per-job and per-branch caching:
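
A rough sketch of what such a configuration can look like (combining the two predefined variables in the key, and the cached path, are assumptions):

```yaml
cache:
  key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"   # a separate cache per job and per branch
  paths:
    - vendor/
```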
@@ -160,7 +160,7 @@ cache:
### Disabling cache on specific jobs
If you have defined the cache globally, it means that each job uses the
same definition. You can override this behavior per-job, and if you want to
disable it completely, use an empty hash:
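
For example, assuming a global `cache:` definition and a hypothetical `job` that should not use it:

```yaml
job:
  cache: {}
```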
@@ -431,9 +431,9 @@ Here's what happens behind the scenes:
1. `script` is executed.
1. Pipeline finishes.
By using a single runner on a single machine, you don't have the issue where
`job B` might execute on a runner different from `job A`, which guarantees the
cache is shared between stages. That only works if the build goes from stage `build`
to `test` on the same runner/machine; otherwise, you [might not have the cache
available](#cache-mismatch).
@@ -442,7 +442,7 @@ During the caching process, there's also a couple of things to consider:
- If some other job, with another cache configuration, had saved its
  cache in the same zip file, it is overwritten. If the S3 based shared cache is
  used, the file is additionally uploaded to S3 to an object based on the cache
  key. So, two jobs with different paths, but the same cache key, overwrite
  their cache.
- When extracting the cache from `cache.zip`, everything in the zip file is
  extracted in the job's working directory (usually the repository which is
@@ -450,7 +450,7 @@ During the caching process, there's also a couple of things to consider:
things in the archive of `job B`.
It works this way because the cache created for one runner
often isn't valid when used by a different one, which can run on a
**different architecture** (for example, when the cache includes binary files). And
since the different steps might be executed by runners running on different
machines, it is a safe default.
@@ -472,7 +472,7 @@ Let's explore some examples.
#### Examples
Let's assume you have only one runner assigned to your project, so the cache
is stored on the runner's machine by default. If two jobs, A and B,
have the same cache key, but they cache different paths, cache B would overwrite
cache A, even if their `paths` don't match:
@@ -506,15 +506,15 @@ job B:
1. `job B` runs.
1. The previous cache, if any, is unzipped.
1. `vendor/` is cached as cache.zip and overwrites the previous one.
1. The next time `job A` runs, it uses the cache of `job B`, which is different
   and thus isn't effective.
To fix that, use different `keys` for each job.
In another case, let's assume you have more than one runner assigned to your
project, but the distributed cache is not enabled. The second time the
pipeline is run, we want `job A` and `job B` to re-use their cache (which in this case
is different):
```yaml
stages:
@@ -553,7 +553,7 @@ To start with a fresh copy of the cache, there are two ways to do that.
### Clearing the cache by changing `cache:key`
All you have to do is set a new `cache: key` in your `.gitlab-ci.yml`. In the
next run of the pipeline, the cache is stored in a different location.
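
A sketch of what that can look like (the `-v2` suffix is an arbitrary marker you bump whenever you want a fresh cache):

```yaml
cache:
  key: "$CI_COMMIT_REF_SLUG-v2"   # changing the suffix forces a new cache location
  paths:
    - vendor/
```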
### Clearing the cache manually
@@ -567,7 +567,7 @@ via GitLab's UI:
![Clear runner caches](img/clear_runners_cache.png)
1. On the next push, your CI/CD job uses a new cache.
Behind the scenes, this works by increasing a counter in the database, and the
value of that counter is used to create the key for the cache by appending an
...
@@ -19,12 +19,12 @@ To use GitLab CI/CD with a Bitbucket Cloud repository:
![Create project](img/external_repository.png)
GitLab imports the repository and enables [Pull Mirroring](../../user/project/repository/repository_mirroring.md#pulling-from-a-remote-repository).
1. In GitLab, create a
   [Personal Access Token](../../user/profile/personal_access_tokens.md)
   with `api` scope. This is used to authenticate requests from the web
   hook that is created in Bitbucket to notify GitLab of new commits.
1. In Bitbucket, from **Settings > Webhooks**, create a new web hook to notify
   GitLab of new commits.
@@ -62,7 +62,7 @@ To use GitLab CI/CD with a Bitbucket Cloud repository:
1. In Bitbucket, add a script to push the pipeline status to Bitbucket.
   > Note: changes made in GitLab are overwritten by any changes made
   > upstream in Bitbucket.
   Create a file `build_status` and insert the script below and run
...
@@ -28,7 +28,7 @@ To perform a one-off authorization with GitHub to grant GitLab access your
repositories:
1. Open <https://github.com/settings/tokens/new> to create a **Personal Access
   Token**. This token is used to access your repository and push commit
   statuses to GitHub.
   The `repo` and `admin:repo_hook` scopes should be enabled to allow GitLab access to
@@ -43,12 +43,12 @@ repositories:
1. In GitHub, add a `.gitlab-ci.yml` to [configure GitLab CI/CD](../quick_start/README.md).
GitLab:
1. Imports the project.
1. Enables [Pull Mirroring](../../user/project/repository/repository_mirroring.md#pulling-from-a-remote-repository).
1. Enables [GitHub project integration](../../user/project/integrations/github.md).
1. Creates a web hook on GitHub to notify GitLab of new commits.
## Connect manually
@@ -57,7 +57,7 @@ To use **GitHub Enterprise** with **GitLab.com**, use this method.
To manually enable GitLab CI/CD for your repository:
1. In GitHub, open <https://github.com/settings/tokens/new> to create a **Personal
   Access Token.** GitLab uses this token to access your repository and
   push commit statuses.
   Enter a **Token description** and update the scope to allow:
@@ -68,7 +68,7 @@ To manually enable GitLab CI/CD for your repository:
   URL for your GitHub repository. If your project is private, use the personal
   access token you just created for authentication.
   GitLab automatically configures polling-based pull mirroring.
1. Still in GitLab, enable the [GitHub project integration](../../user/project/integrations/github.md)
   from **Settings > Integrations.**
...
@@ -18,7 +18,7 @@ GitLab CI/CD can be used with:
Instead of moving your entire project to GitLab, you can connect your
external repository to get the benefits of GitLab CI/CD.
Connecting an external repository sets up [repository mirroring](../../user/project/repository/repository_mirroring.md)
and creates a lightweight project with issues, merge requests, wiki, and
snippets disabled. These features
[can be re-enabled later](../../user/project/settings/index.md#sharing-and-permissions).
@@ -74,7 +74,7 @@ If changes are pushed to the branch referenced by the Pull Request and the
Pull Request is still open, a pipeline for the external pull request is
created.
GitLab CI/CD creates two pipelines in this case: one for the
branch push and one for the external pull request.
After the Pull Request is closed, no pipelines are created for the external pull
@@ -89,10 +89,10 @@ The variable names are prefixed with `CI_EXTERNAL_PULL_REQUEST_`.
### Limitations
This feature currently does not support Pull Requests from fork repositories. Any Pull Requests from fork repositories are ignored. [Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/5667).
Given that GitLab creates two pipelines, if changes are pushed to a remote branch that
references an open Pull Request, both contribute to the status of the Pull Request
via GitHub integration. If you want to exclusively run pipelines on external pull
requests and not on branches, you can add `except: [branches]` to the job specs.
[Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/24089#workaround).
@@ -17,7 +17,7 @@ be set up.
For example, you may have a specific tool or separate website that is built
as part of your main project. Using a DAG, you can specify the relationship between
these jobs, and GitLab executes the jobs as soon as possible instead of waiting
for each stage to complete.
Unlike other DAG solutions for CI/CD, GitLab does not require you to choose one or the
@@ -44,9 +44,9 @@ It has a pipeline that looks like the following:
| build_d | test_d | deploy_d |
Using a DAG, you can relate the `_a` jobs to each other separately from the `_b` jobs,
and even if service `a` takes a very long time to build, service `b` doesn't
wait for it and finishes as quickly as it can. In this very same pipeline, `_c` and
`_d` can be left alone and run together in staged sequence just like any normal
GitLab pipeline.
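
For instance, a minimal sketch of wiring up the `_a` jobs with `needs:` (job and stage names follow the example above; the scripts are placeholders):

```yaml
build_a:
  stage: build
  script:
    - echo "Building service a..."

test_a:
  stage: test
  needs: [build_a]   # starts as soon as build_a finishes, without waiting for the whole build stage
  script:
    - echo "Testing service a..."

deploy_a:
  stage: deploy
  needs: [test_a]
  script:
    - echo "Deploying service a..."
```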
## Use cases
@@ -60,7 +60,7 @@ but related microservices.
Additionally, a DAG can help with the general speediness of pipelines and help
deliver fast feedback. By creating dependency relationships that don't unnecessarily
block each other, your pipelines run as quickly as possible regardless of
pipeline stages, ensuring output (including errors) is available to developers
as quickly as possible.
@@ -88,13 +88,13 @@ are certain use cases that you may need to work around. For more information:
> - It's enabled on GitLab.com.
> - For GitLab self-managed instances, GitLab administrators can opt to [disable it](#enable-or-disable-needs-visualization).
The needs visualization makes it easier to visualize the relationships between dependent jobs in a DAG. This graph displays all the jobs in a pipeline that need or are needed by other jobs. Jobs with no relationships are not displayed in this view.
To see the needs visualization, click on the **Needs** tab when viewing a pipeline that uses the `needs:` keyword.
![Needs visualization example](img/dag_graph_example_v13_1.png)
Clicking a node highlights all the job paths it depends on.
![Needs visualization with path highlight](img/dag_graph_example_clicked_v13_1.png)
...
@@ -381,7 +381,7 @@ content:
Update the `config.toml` file to mount the file to
`/etc/docker/daemon.json`. This would mount the file for **every**
container that is created by GitLab Runner. The configuration is
picked up by the `dind` service.
```toml
...
@@ -37,8 +37,8 @@ few important details:
- The kaniko debug image is recommended (`gcr.io/kaniko-project/executor:debug`)
  because it has a shell, and a shell is required for an image to be used with
  GitLab CI/CD.
- The entrypoint needs to be [overridden](using_docker_images.md#overriding-the-entrypoint-of-an-image),
  otherwise the build script doesn't run.
- A Docker `config.json` file needs to be created with the authentication
  information for the desired container registry.
@@ -47,7 +47,7 @@ In the following example, kaniko is used to:
1. Build a Docker image.
1. Then push it to [GitLab Container Registry](../../user/packages/container_registry/index.md).
The job runs only when a tag is pushed. A `config.json` file is created under
`/kaniko/.docker` with the needed GitLab Container Registry credentials taken from the
[environment variables](../variables/README.md#predefined-environment-variables)
GitLab CI/CD provides.
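
A sketch of such a job (the exact executor flags and credential wiring are an approximation, not the documented example verbatim):

```yaml
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]   # override the entrypoint so the job's script can run
  script:
    - mkdir -p /kaniko/.docker
    # Write registry credentials taken from the predefined CI/CD variables
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context "$CI_PROJECT_DIR" --dockerfile "$CI_PROJECT_DIR/Dockerfile" --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
  rules:
    - if: $CI_COMMIT_TAG   # run only when a tag is pushed
```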
...
@@ -15,17 +15,17 @@ using the Shell executor.
## Test PHP projects using the Docker executor
While it is possible to test PHP apps on any system, this would require manual
configuration from the developer. To overcome this, we use the
official [PHP Docker image](https://hub.docker.com/_/php) that can be found on Docker Hub.
This allows us to test PHP projects against different versions of PHP.
However, not everything is plug 'n' play; you still need to configure some
things manually.
As with every job, you need to create a valid `.gitlab-ci.yml` describing the
build environment.
Let's first specify the PHP image that is used for the job process
(you can read more about what an image means in the runner's lingo by reading
about [Using Docker images](../docker/using_docker_images.md#what-is-an-image)).
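
For example (the PHP version tag here is an assumption; use whatever version your project targets):

```yaml
image: php:7.4   # the Docker image every job in this pipeline runs in
```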
@@ -106,7 +106,7 @@ test:app:
### Test against different PHP versions in Docker builds
Testing against multiple versions of PHP is super easy. Just add another job
with a different Docker image version, and the runner does the rest:
```yaml
before_script:
@@ -128,7 +128,7 @@ test:7.0:
### Custom PHP configuration in Docker builds
There are times when you need to customize your PHP environment by
putting your `.ini` file into `/usr/local/etc/php/conf.d/`. For that purpose,
add a `before_script` action:
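
A minimal sketch (the `.ini` file names are illustrative):

```yaml
before_script:
  # Copy the project's custom PHP settings to the directory PHP scans for .ini files
  - cp my_php.ini /usr/local/etc/php/conf.d/test.ini
```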
@@ -168,7 +168,7 @@ The [phpenv](https://github.com/phpenv/phpenv) project allows you to easily mana
each with its own configuration. This is especially useful when testing PHP projects
with the Shell executor.
You have to install it on your build machine under the `gitlab-runner`
user following [the upstream installation guide](https://github.com/phpenv/phpenv#installation).
Using phpenv also allows you to easily configure the PHP environment with:
@@ -181,7 +181,7 @@ phpenv config-add my_config.ini
[is abandoned](https://github.com/phpenv/phpenv/issues/57). There is a fork
at [madumlao/phpenv](https://github.com/madumlao/phpenv) that tries to bring
the project back to life. [CHH/phpenv](https://github.com/CHH/phpenv) also
seems like a good alternative. Picking any of the mentioned tools works
with the basic phpenv commands. Guiding you to choose the right phpenv is out
of the scope of this tutorial.*
@@ -274,4 +274,4 @@ that runs on [GitLab.com](https://gitlab.com) using our publicly available
[shared runners](../runners/README.md).
Want to hack on it? Simply fork it, commit, and push your changes. Within a few
moments the changes are picked up by a public runner and the job begins.
@@ -8,7 +8,7 @@ type: concepts
# Introduction to CI/CD with GitLab
This document presents an overview of the concepts of Continuous Integration,
Continuous Delivery, and Continuous Deployment, as well as an introduction to
GitLab CI/CD.
@@ -100,18 +100,18 @@ located in the root path of your repository.
In this file, you can define the scripts you want to run, define include and
cache dependencies, choose commands you want to run in sequence
and those you want to run in parallel, define where you want to
deploy your app, and specify whether you want to run the scripts automatically
or trigger any of them manually. After you're familiar with
GitLab CI/CD, you can add more advanced steps into the configuration file.
To add scripts to that file, you need to organize them in a
sequence that suits your application and is in accordance with
the tests you wish to perform. To visualize the process, imagine
that all the scripts you add to the configuration file are the
same as the commands you run on a terminal on your computer.
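
As an illustration only (the job names and commands below are placeholders, not part of the original text), a minimal `.gitlab-ci.yml` might look like:

```yaml
build-job:
  stage: build
  script:
    - echo "Compile the application here"

test-job:
  stage: test
  script:
    - echo "Run the test suite here"
```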
After you've added your `.gitlab-ci.yml` configuration file to your
repository, GitLab detects it and runs your scripts with the
tool called [GitLab Runner](https://docs.gitlab.com/runner/), which
works similarly to your terminal.
@@ -191,7 +191,7 @@ lifecycle, as shown in the illustration below.
![Deeper look into the basic CI/CD workflow](img/gitlab_workflow_example_extended_v12_3.png)
If you look at the image from left to right,
you can see some of the features available in GitLab
according to each stage (Verify, Package, Release).
1. **Verify**:
...
@@ -57,7 +57,7 @@ When you use this method, you have to specify `only: - merge_requests` for each
example, the pipeline contains a `test` job that is configured to run on merge requests.
The `build` and `deploy` jobs don't have the `only: - merge_requests` keyword,
so they don't run on merge requests.
```yaml
build:
@@ -82,7 +82,7 @@ deploy:
#### Excluding certain jobs
The behavior of the `only: [merge_requests]` keyword is such that _only_ jobs with
that keyword are run in the context of a merge request; no other jobs run.
However, you can invert this behavior and have all of your jobs run _except_
for one or two.
@@ -120,8 +120,8 @@ C:
Therefore:
- Since `A` and `B` are getting the `only:` rule to execute in all cases, they always run.
- Since `C` specifies that it should only run for merge requests, it doesn't run for any pipeline
  except a merge request pipeline.
This helps you avoid having to add the `only:` rule to all of your jobs to make
@@ -209,7 +209,7 @@ The variable names begin with the `CI_MERGE_REQUEST_` prefix.
If you are experiencing duplicated pipelines when using `rules`, take a look at
the [important differences between `rules` and `only`/`except`](../yaml/README.md#prevent-duplicate-pipelines),
which can help you get your starting configuration correct.
If you are seeing two pipelines when using `only/except`, please see the caveats
related to using `only/except` above (or, consider moving to `rules`).
...
@@ -19,8 +19,8 @@ the source branch have already been merged into the target branch.
If the pipeline fails due to a problem in the target branch, you can wait until the
target is fixed and re-run the pipeline.
This new pipeline runs as if the source is merged with the updated target, and you
don't need to rebase.
The pipeline does not automatically run when the target branch changes. Only changes
to the source branch trigger a new pipeline. If a long time has passed since the last successful
@@ -33,7 +33,7 @@ When the merge request can't be merged, the pipeline runs against the source bra
- The merge request is a [**Draft** merge request](../../../user/project/merge_requests/work_in_progress_merge_requests.md).
In these cases, the pipeline runs as a [pipeline for merge requests](../index.md)
and is labeled as `detached`. If these cases no longer exist, new pipelines
again run against the merged results.
Any user who has developer [permissions](../../../user/permissions.md) can run a
@@ -71,7 +71,7 @@ GitLab [automatically displays](merge_trains/index.md#add-a-merge-request-to-a-m
a **Start/Add Merge Train button**.
Generally, this is a safer option than merging merge requests immediately, because your
merge request is evaluated with an expected post-merge result before the actual
merge happens.
For more information, read the [documentation on Merge Trains](merge_trains/index.md).
@@ -84,10 +84,10 @@ GitLab CI/CD can detect the presence of redundant pipelines, and cancels them
to conserve CI resources.
When a user merges a merge request immediately within an ongoing merge
train, the train is reconstructed, because it recreates the expected
post-merge commit and pipeline. In this case, the merge train may already
have pipelines running against the previous expected post-merge commit.
These pipelines are considered redundant and are automatically
canceled.
## Troubleshooting
...
@@ -39,7 +39,7 @@ To add a merge request to a merge train, you need [permissions](../../../../user
Each merge train can run a maximum of **twenty** pipelines in parallel.
If more than twenty merge requests are added to the merge train, the merge requests
are queued until a slot in the merge train is free. There is no limit to the
number of merge requests that can be queued.
## Merge train example
@@ -55,7 +55,7 @@ If the pipeline for `B` fails, it is removed from the train. The pipeline for
`C` restarts with the `A` and `C` changes, but without the `B` changes.
If `A` then completes successfully, it merges into the target branch, and `C` continues
to run. If more merge requests are added to the train, they now include the `A`
changes that are included in the target branch, and the `C` changes that are from
the merge request already in the train.
@@ -152,7 +152,7 @@ is recreated and all pipelines restart.
### Merge request dropped from the merge train immediately
If a merge request is not mergeable (for example, it's a draft merge request or there is a merge
conflict), your merge request is dropped from the merge train automatically.
In these cases, the reason for dropping the merge request is in the **system notes**.
@@ -179,7 +179,7 @@ for more information.
A Merge Train pipeline cannot be retried because the merge request is dropped from the merge train upon failure. For this reason, the retry button does not appear next to the pipeline icon.
In the case of pipeline failure, you should [re-enqueue](#add-a-merge-request-to-a-merge-train) the merge request to the merge train, which then initiates a new pipeline.
### Unable to add to merge train with message "The pipeline for this merge request failed."
@@ -195,9 +195,10 @@ you can clear the **Pipelines must succeed** check box and keep
**Enable merge trains and pipelines for merged results** (merge trains) enabled.
If you want to keep the **Pipelines must succeed** option enabled along with Merge
Trains, create a new pipeline for merged results when this error occurs:
1. Go to the **Pipelines** tab and click **Run pipeline**.
1. Click **Start/Add to merge train when pipeline succeeds**.
See [the related issue](https://gitlab.com/gitlab-org/gitlab/-/issues/35135)
for more information.
...
@@ -47,8 +47,8 @@ For an example Performance job, see
NOTE: **Note:**
If the Browser Performance report has no data to compare, such as when you add the
Browser Performance job in your `.gitlab-ci.yml` for the very first time,
the Browser Performance report widget doesn't show. It must have run at least
once on the target branch (`master`, for example), before it displays in a
merge request targeting that branch.
![Browser Performance Widget](img/browser_performance_testing.png)
@@ -81,7 +81,7 @@ The above example creates a `performance` job in your CI/CD pipeline and runs
sitespeed.io against the webpage you defined in `URL` to gather key metrics.
The example uses a CI/CD template that is included in all GitLab installations since
12.4, but it doesn't work with Kubernetes clusters. If you are using GitLab 12.3
or older, you must [add the configuration manually](#gitlab-versions-123-and-older).
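
For reference, using the template looks roughly like this (a sketch; the `performance` job name and the `URL` variable follow the template's interface, but verify against your GitLab version):

```yaml
include:
  template: Verify/Browser-Performance.gitlab-ci.yml

performance:
  variables:
    URL: https://example.com   # the page sitespeed.io measures
```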
The template uses the [GitLab plugin for sitespeed.io](https://gitlab.com/gitlab-org/gl-performance),
@@ -115,7 +115,7 @@ performance:
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/27599) in GitLab 13.0.
You can configure the sensitivity of degradation alerts to avoid getting alerts for minor drops in metrics.
This is done by setting the `DEGRADATION_THRESHOLD` variable. In the example below, the alert only shows up
if the `Total Score` metric degrades by 5 points or more:
```yaml
@@ -181,7 +181,7 @@ performance:
### GitLab versions 12.3 and older
Browser Performance Testing has gone through several changes since its introduction.
In this section we detail these changes and how you can run the test based on your
GitLab version:
- In GitLab 12.4 [a job template was made available](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Verify/Browser-Performance.gitlab-ci.yml).
...
@@ -84,8 +84,8 @@ include:
  - template: Code-Quality.gitlab-ci.yml
```
The above example creates a `code_quality` job in your CI/CD pipeline which
scans your source code for code quality issues. The report is saved as a
[Code Quality report artifact](../../../ci/pipelines/job_artifacts.md#artifactsreportscodequality)
that you can later download and analyze.
@@ -132,17 +132,17 @@ stages:
```
TIP: **Tip:**
This information is automatically extracted and shown right in the merge request widget.
CAUTION: **Caution:**
On self-managed instances, if a malicious actor compromises the Code Quality job
definition, they could execute privileged Docker commands on the runner
host. Having proper access control policies mitigates this attack vector by
allowing access only to trusted actors.
### Disabling the code quality job
The `code_quality` job doesn't run if the `$CODE_QUALITY_DISABLED` environment
variable is present. Please refer to the environment variables [documentation](../../../ci/variables/README.md)
to learn more about how to define one.
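
For example, one way to make the variable present for the whole pipeline (a sketch; it can also be set in the project's CI/CD variable settings):

```yaml
variables:
  CODE_QUALITY_DISABLED: "true"   # any value works; the job is skipped when the variable exists
```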
@@ -185,7 +185,7 @@ job1:
    - if: '$CI_COMMIT_TAG' # Run job1 in pipelines for tags
```
To make these work together, you need to overwrite the code quality `rules`
so that they match your current `rules`. From the example above, it could look like:
```yaml
@@ -260,7 +260,7 @@ Once the Code Quality job has completed:
  Code Quality tab of the Pipeline Details page.
- Potential changes to code quality are shown directly in the merge request.
  The Code Quality widget in the merge request compares the reports from the base and head of the branch,
  then lists any violations that are resolved or created when the branch is merged.
- The full JSON report is available as a
  [downloadable artifact](../../../ci/pipelines/job_artifacts.md#downloading-artifacts)
  for the `code_quality` job.
@@ -341,11 +341,11 @@ is still used.
This can be due to multiple reasons:
- You just added the Code Quality job in your `.gitlab-ci.yml`. The report does not
  have anything to compare to yet, so no information can be displayed. It only displays
  after future merge requests have something to compare to.
- Your pipeline is not set to run the code quality job on your default branch. If there is no report generated from the default branch, your MR branch reports have nothing to compare to.
- If no [degradation or error is detected](https://docs.codeclimate.com/docs/maintainability#section-checks),
  nothing is displayed.
- The [`artifacts:expire_in`](../../../ci/yaml/README.md#artifactsexpire_in) CI/CD
  setting can cause the Code Quality artifact(s) to expire faster than desired.
- Large `codeclimate.json` files (esp. >10 MB) are [known to prevent the report from being displayed](https://gitlab.com/gitlab-org/gitlab/-/issues/2737).
...