Commit a47c2860 authored by Marcel Amirault

Merge branch 'eread/add-docker-to-capitalization-rules-docs' into 'master'

Add Docker to capitalization rules

See merge request gitlab-org/gitlab!33134
parents 0f896bd4 0e90253f
......@@ -43,6 +43,7 @@
"Consul",
"Debian",
"DevOps",
"Docker",
"Elasticsearch",
"Facebook",
"GDK",
......
......@@ -363,7 +363,7 @@ The following documentation relates to the DevOps **Secure** stage:
| Secure Topics | Description |
|:------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------|
| [Compliance Dashboard](user/compliance/compliance_dashboard/index.md) **(ULTIMATE)** | View the most recent Merge Request activity in a group. |
| [Container Scanning](user/application_security/container_scanning/index.md) **(ULTIMATE)** | Use Clair to scan docker images for known vulnerabilities. |
| [Container Scanning](user/application_security/container_scanning/index.md) **(ULTIMATE)** | Use Clair to scan Docker images for known vulnerabilities. |
| [Dependency List](user/application_security/dependency_list/index.md) **(ULTIMATE)** | View your project's dependencies and their known vulnerabilities. |
| [Dependency Scanning](user/application_security/dependency_scanning/index.md) **(ULTIMATE)** | Analyze your dependencies for known vulnerabilities. |
| [Dynamic Application Security Testing (DAST)](user/application_security/dast/index.md) **(ULTIMATE)** | Analyze running web applications for known vulnerabilities. |
......
......@@ -63,6 +63,7 @@ to the naming scheme `GITLAB_#{name in 1_settings.rb in upper case}`.
To set environment variables, follow [these
instructions](https://docs.gitlab.com/omnibus/settings/environment-variables.html).
It's possible to preconfigure the GitLab docker image by adding the environment
It's possible to preconfigure the GitLab Docker image by adding the environment
variable `GITLAB_OMNIBUS_CONFIG` to the `docker run` command.
For more information see the ['preconfigure-docker-container' section in the Omnibus documentation](https://docs.gitlab.com/omnibus/docker/#preconfigure-docker-container).
For more information see the [Pre-configure Docker container](https://docs.gitlab.com/omnibus/docker/#pre-configure-docker-container)
section in the Omnibus documentation.
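A minimal sketch of what that looks like (the hostname, URL, and settings below are placeholders, not values from this merge request):

```shell
# Pass Omnibus settings inline via GITLAB_OMNIBUS_CONFIG when starting the container
sudo docker run --detach \
  --hostname gitlab.example.com \
  --env GITLAB_OMNIBUS_CONFIG="external_url 'https://gitlab.example.com'; gitlab_rails['lfs_enabled'] = true;" \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  gitlab/gitlab-ee:latest
```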
......@@ -24,7 +24,7 @@ From the server side, if we want to configure SSH we need to set the `sshd`
server to accept the `GIT_PROTOCOL` environment.
In installations using [GitLab Helm Charts](https://docs.gitlab.com/charts/)
and [All-in-one docker image](https://docs.gitlab.com/omnibus/docker/), the SSH
and [All-in-one Docker image](https://docs.gitlab.com/omnibus/docker/), the SSH
service is already configured to accept the `GIT_PROTOCOL` environment and users
need not do anything more.
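For a plain `sshd` setup outside those images, accepting the variable is a one-line change to `sshd_config` — a sketch, assuming a systemd-managed host where the service is named `sshd`:

```shell
# Allow clients to pass GIT_PROTOCOL (needed for Git protocol v2 over SSH), then reload sshd
echo "AcceptEnv GIT_PROTOCOL" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl reload sshd
```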
......
......@@ -98,7 +98,7 @@ auth:
```
CAUTION: **Caution:**
If `auth` is not set up, users will be able to pull docker images without authentication.
If `auth` is not set up, users will be able to pull Docker images without authentication.
## Container Registry domain configuration
......@@ -414,7 +414,7 @@ NOTE: **Note:**
**Installations from source**
Configuring the storage driver is done in your registry config YML file created
when you [deployed your docker registry](https://docs.docker.com/registry/deploying/).
when you [deployed your Docker registry](https://docs.docker.com/registry/deploying/).
`s3` storage driver example:
......@@ -642,7 +642,7 @@ To configure a notification endpoint in Omnibus:
**Installations from source**
Configuring the notification endpoint is done in your registry config YML file created
when you [deployed your docker registry](https://docs.docker.com/registry/deploying/).
when you [deployed your Docker registry](https://docs.docker.com/registry/deploying/).
Example:
......@@ -879,9 +879,9 @@ thus the error above.
While GitLab doesn't support using self-signed certificates with Container
Registry out of the box, it is possible to make it work by
[instructing the docker-daemon to trust the self-signed certificates](https://docs.docker.com/registry/insecure/#use-self-signed-certificates),
mounting the docker-daemon and setting `privileged = false` in the Runner's
`config.toml`. Setting `privileged = true` takes precedence over the docker-daemon:
[instructing the Docker daemon to trust the self-signed certificates](https://docs.docker.com/registry/insecure/#use-self-signed-certificates),
mounting the Docker daemon and setting `privileged = false` in the Runner's
`config.toml`. Setting `privileged = true` takes precedence over the Docker daemon:
```toml
[runners.docker]
......@@ -1008,7 +1008,7 @@ there is likely an issue with the headers forwarded to the registry by NGINX. Th
NGINX configurations should handle this, but it might occur in custom setups where the SSL is
offloaded to a third party reverse proxy.
This problem was discussed in a [docker project issue](https://github.com/docker/distribution/issues/970)
This problem was discussed in a [Docker project issue](https://github.com/docker/distribution/issues/970)
and a simple solution would be to enable relative URLs in the Registry.
**For Omnibus installations**
......
......@@ -330,10 +330,10 @@ feel free to update that page with issues you encounter and solutions.
Setting up Elasticsearch isn't too bad, but it can be a bit finicky and time consuming.
The easiest method is to spin up a docker container with the required version and
The easiest method is to spin up a Docker container with the required version and
bind ports 9200/9300 so it can be used.
The following is an example of running a docker container of Elasticsearch v7.2.0:
The following is an example of running a Docker container of Elasticsearch v7.2.0:
```shell
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.2.0
......@@ -342,7 +342,7 @@ docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elas
From here, you can:
- Grab the IP of the docker container (use `docker inspect <container_id>`)
- Grab the IP of the Docker container (use `docker inspect <container_id>`)
- Use `<IP.add.re.ss:9200>` to communicate with it.
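For example, the two steps above can be combined as follows (a sketch; `<container_id>` is a placeholder):

```shell
# Look up the container's IP address (default bridge network), then query Elasticsearch on port 9200
ES_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container_id>)
curl "http://${ES_IP}:9200"
```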
This is a quick method to test out Elasticsearch, but by no means is this a
......
......@@ -16,7 +16,7 @@ are only available internally at GitLab.
## Docker
The following were tested on docker containers running in the cloud. Support Engineers,
The following were tested on Docker containers running in the cloud. Support Engineers,
please see [these docs](https://gitlab.com/gitlab-com/dev-resources/tree/master/dev-resources#running-docker-containers)
on how to run Docker containers on `dev-resources`. Other setups haven't been tested,
but contributions are welcome.
......
......@@ -4,7 +4,7 @@ type: concepts, howto
# Building Docker images with GitLab CI/CD
GitLab CI/CD allows you to use Docker Engine to build and test docker-based projects.
GitLab CI/CD allows you to use Docker Engine to build and test Docker-based projects.
One of the new trends in Continuous Integration/Deployment is to:
......@@ -91,15 +91,15 @@ NOTE: **Note:**
By adding `gitlab-runner` to the `docker` group you are effectively granting `gitlab-runner` full root permissions.
For more information please read [On Docker security: `docker` group considered harmful](https://www.andreas-jung.com/contents/on-docker-security-docker-group-considered-harmful).
### Use docker-in-docker workflow with Docker executor
### Use Docker-in-Docker workflow with Docker executor
The second approach is to use the special docker-in-docker (dind)
The second approach is to use the special Docker-in-Docker (dind)
[Docker image](https://hub.docker.com/_/docker/) with all tools installed
(`docker`) and run the job script in the context of that
image in privileged mode.
NOTE: **Note:**
`docker-compose` is not part of docker-in-docker (dind). To use `docker-compose` in your
`docker-compose` is not part of Docker-in-Docker (dind). To use `docker-compose` in your
CI builds, follow the `docker-compose`
[installation instructions](https://docs.docker.com/compose/install/).
......@@ -113,7 +113,7 @@ out the official Docker documentation on
Docker-in-Docker works well, and is the recommended configuration, but it is
not without its own challenges:
- When using docker-in-docker, each job is in a clean environment without the past
- When using Docker-in-Docker, each job is in a clean environment without the past
history. Concurrent jobs work fine because every build gets its own
instance of Docker engine so they won't conflict with each other. But this
also means that jobs can be slower because there's no caching of layers.
......@@ -156,7 +156,7 @@ details.
The Docker daemon supports connection over TLS and it's done by default
for Docker 19.03.8 or higher. This is the **suggested** way to use the
docker-in-docker service and
Docker-in-Docker service and
[GitLab.com Shared Runners](../../user/gitlab_com/index.md#shared-runners)
support this.
......@@ -179,11 +179,11 @@ support this.
The above command will register a new Runner to use the special
`docker:19.03.8` image, which is provided by Docker. **Notice that it's
using the `privileged` mode to start the build and service
containers.** If you want to use [docker-in-docker](https://www.docker.com/blog/docker-can-now-run-within-docker/) mode, you always
containers.** If you want to use [Docker-in-Docker](https://www.docker.com/blog/docker-can-now-run-within-docker/) mode, you always
have to use `privileged = true` in your Docker containers.
This will also mount `/certs/client` for the service and build
container, which is needed for the docker client to use the
container, which is needed for the Docker client to use the
certificates inside that directory. For more information on how
Docker with TLS works, check <https://hub.docker.com/_/docker/#tls>.
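The registration command referred to above is elided by this hunk; for reference, it has roughly this shape (a sketch — the URL and token are placeholders):

```shell
# Register a Runner that starts privileged build/service containers and mounts /certs/client
sudo gitlab-runner register -n \
  --url "https://gitlab.com/" \
  --registration-token REGISTRATION_TOKEN \
  --executor docker \
  --description "My Docker Runner" \
  --docker-image "docker:19.03.8" \
  --docker-privileged \
  --docker-volumes "/certs/client"
```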
......@@ -377,7 +377,7 @@ In order to do that, follow the steps:
While the above method avoids using Docker in privileged mode, you should be
aware of the following implications:
- By sharing the docker daemon, you are effectively disabling all
- By sharing the Docker daemon, you are effectively disabling all
the security mechanisms of containers and exposing your host to privilege
escalation which can lead to container breakout. For example, if a project
ran `docker rm -f $(docker ps -a -q)` it would remove the GitLab Runner
......@@ -392,9 +392,9 @@ aware of the following implications:
docker run --rm -t -i -v $(pwd)/src:/home/app/src test-image:latest run_app_tests
```
## Making docker-in-docker builds faster with Docker layer caching
## Making Docker-in-Docker builds faster with Docker layer caching
When using docker-in-docker, Docker will download all layers of your image every
When using Docker-in-Docker, Docker will download all layers of your image every
time you create a build. Recent versions of Docker (Docker 1.13 and above) can
use a pre-existing image as a cache during the `docker build` step, considerably
speeding up the build process.
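A common way to exploit this from a job script looks roughly like the following (a sketch; `$CI_REGISTRY_IMAGE` is GitLab's predefined variable for the project's registry image path):

```shell
# Pull the previously pushed image (ignore failures on the first run), then reuse its layers as a cache
docker pull "$CI_REGISTRY_IMAGE:latest" || true
docker build --cache-from "$CI_REGISTRY_IMAGE:latest" --tag "$CI_REGISTRY_IMAGE:latest" .
docker push "$CI_REGISTRY_IMAGE:latest"
```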
......@@ -514,7 +514,7 @@ Once you've built a Docker image, you can push it up to the built-in
## Troubleshooting
### docker: Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running?
### `docker: Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running?`
This is a common error when you are using
[Docker in Docker](#use-docker-in-docker-workflow-with-docker-executor)
......
......@@ -744,7 +744,7 @@ To configure access for `aws_account_id.dkr.ecr.region.amazonaws.com`, follow th
}
```
This configures docker to use the credential helper for a specific registry.
This configures Docker to use the credential helper for a specific registry.
or
......@@ -754,7 +754,7 @@ To configure access for `aws_account_id.dkr.ecr.region.amazonaws.com`, follow th
}
```
This configures docker to use the credential helper for all Amazon ECR registries.
This configures Docker to use the credential helper for all Amazon ECR registries.
- Or, if you are running self-managed Runners,
add the above JSON to `${GITLAB_RUNNER_HOME}/.docker/config.json`.
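Written out as a single step on the Runner host, the per-registry variant amounts to something like this (a sketch; the registry hostname is the same placeholder used above):

```shell
# Point the Docker client at the ECR credential helper for one registry
mkdir -p "${GITLAB_RUNNER_HOME}/.docker"
cat > "${GITLAB_RUNNER_HOME}/.docker/config.json" <<'EOF'
{
  "credHelpers": {
    "aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
  }
}
EOF
```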
......
......@@ -10,12 +10,12 @@ type: howto
container images from a Dockerfile, inside a container or Kubernetes cluster.
kaniko solves two problems with using the
[docker-in-docker
[Docker-in-Docker
build](using_docker_build.md#use-docker-in-docker-workflow-with-docker-executor) method:
- Docker-in-docker requires [privileged mode](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities)
- Docker-in-Docker requires [privileged mode](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities)
in order to function, which is a significant security concern.
- Docker-in-docker generally incurs a performance penalty and can be quite slow.
- Docker-in-Docker generally incurs a performance penalty and can be quite slow.
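For comparison, a kaniko build is an ordinary, unprivileged command run inside the kaniko image — roughly the following (a sketch using GitLab's predefined CI variables):

```shell
# Build the Dockerfile in the project directory and push the result to the project's registry
/kaniko/executor \
  --context "${CI_PROJECT_DIR}" \
  --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
  --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}"
```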
## Requirements
......
......@@ -100,7 +100,7 @@ production:
- master
```
We've used the `java:8` [docker
We've used the `java:8` [Docker
image](../../docker/using_docker_images.md) to build
our application as it provides the up-to-date Java 8 JDK on [Docker
Hub](https://hub.docker.com/). We've also added the [`only`
......
......@@ -74,7 +74,7 @@ And this is basically all you need in the `before_script` section.
## How to deploy
As we stated above, we need to deploy the `build` folder from the docker image to our server. To do so, we create a new job:
As we stated above, we need to deploy the `build` folder from the Docker image to our server. To do so, we create a new job:
```yaml
stage_deploy:
......@@ -94,7 +94,7 @@ stage_deploy:
Here's the breakdown:
1. `only:dev` means that this build will run only when something is pushed to the `dev` branch. You can remove this block completely and have everything run on every push (though you probably don't want that)
1. `ssh-add ...` we will add that private key you added on the web UI to the docker container
1. `ssh-add ...` we will add that private key you added on the web UI to the Docker container
1. We will connect via `ssh` and create a new `_tmp` folder
1. We will connect via `scp` and upload the `build` folder (which was generated by an `npm` script) to our previously created `_tmp` folder
1. We will connect again via `ssh` and move the `live` folder to an `_old` folder, then move `_tmp` to `live`.
......
......@@ -13,7 +13,7 @@ using the Shell executor.
While it is possible to test PHP apps on any system, this would require manual
configuration from the developer. To overcome this we will be using the
official [PHP docker image](https://hub.docker.com/_/php) that can be found in Docker Hub.
official [PHP Docker image](https://hub.docker.com/_/php) that can be found in Docker Hub.
This will allow us to test PHP projects against different versions of PHP.
However, not everything is plug 'n' play; you still need to configure some
......@@ -62,7 +62,7 @@ docker-php-ext-install pdo_mysql
```
You might wonder what `docker-php-ext-install` is. In short, it is a script
provided by the official php docker image that you can use to easily install
provided by the official php Docker image that you can use to easily install
extensions. For more information read the documentation at
<https://hub.docker.com/_/php>.
......@@ -111,7 +111,7 @@ test:app:
### Test against different PHP versions in Docker builds
Testing against multiple versions of PHP is super easy. Just add another job
with a different docker image version and the runner will do the rest:
with a different Docker image version and the runner will do the rest:
```yaml
before_script:
......
......@@ -262,7 +262,7 @@ project.
our application? This virtual machine must have all dependencies to run our application. This is
where a Docker image is needed. The correct image will provide the entire system for us.
As we are focusing on testing (not deploying), you can use the [elixir:latest](https://hub.docker.com/_/elixir) docker image, which already has the
As we are focusing on testing (not deploying), you can use the [elixir:latest](https://hub.docker.com/_/elixir) Docker image, which already has the
dependencies for running Phoenix tests installed, such as Elixir and Erlang:
```yaml
......
......@@ -165,7 +165,7 @@ The next step is to configure a Runner so that it picks the pending jobs.
## Configuring a Runner
In GitLab, Runners run the jobs that you define in `.gitlab-ci.yml`. A Runner
can be a virtual machine, a VPS, a bare-metal machine, a docker container or
can be a virtual machine, a VPS, a bare-metal machine, a Docker container or
even a cluster of containers. GitLab and the Runners communicate through an API,
so the only requirement is that the Runner's machine has network access to the
GitLab server.
......
......@@ -43,7 +43,7 @@ Database: <your_mysql_database>
If you are wondering why we used `mysql` for the `Host`, read more at
[How services are linked to the job](../docker/using_docker_images.md#how-services-are-linked-to-the-job).
You can also use any other docker image available on [Docker Hub](https://hub.docker.com/_/mysql/).
You can also use any other Docker image available on [Docker Hub](https://hub.docker.com/_/mysql/).
For example, to use MySQL 5.5 the service becomes `mysql:5.5`.
The `mysql` image can accept some environment variables. For more details
......
......@@ -45,7 +45,7 @@ Database: nice_marmot
If you are wondering why we used `postgres` for the `Host`, read more at
[How services are linked to the job](../docker/using_docker_images.md#how-services-are-linked-to-the-job).
You can also use any other docker image available on [Docker Hub](https://hub.docker.com/_/postgres).
You can also use any other Docker image available on [Docker Hub](https://hub.docker.com/_/postgres).
For example, to use PostgreSQL 9.3 the service becomes `postgres:9.3`.
The `postgres` image can accept some environment variables. For more details
......
......@@ -30,7 +30,7 @@ Host: redis
And that's it. Redis will now be available to be used within your testing
framework.
You can also use any other docker image available on [Docker Hub](https://hub.docker.com/_/redis).
You can also use any other Docker image available on [Docker Hub](https://hub.docker.com/_/redis).
For example, to use Redis 2.8 the service becomes `redis:2.8`.
## Use Redis with the Shell executor
......
......@@ -93,8 +93,8 @@ The following table lists available parameters for jobs:
| Keyword | Description |
|:---------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [`script`](#script) | Shell script which is executed by Runner. |
| [`image`](#image) | Use docker images. Also available: `image:name` and `image:entrypoint`. |
| [`services`](#services) | Use docker services images. Also available: `services:name`, `services:alias`, `services:entrypoint`, and `services:command`. |
| [`image`](#image) | Use Docker images. Also available: `image:name` and `image:entrypoint`. |
| [`services`](#services) | Use Docker services images. Also available: `services:name`, `services:alias`, `services:entrypoint`, and `services:command`. |
| [`before_script`](#before_script-and-after_script) | Override a set of commands that are executed before job. |
| [`after_script`](#before_script-and-after_script) | Override a set of commands that are executed after job. |
| [`stage`](#stage) | Defines a job stage (default: `test`). |
......@@ -521,13 +521,13 @@ For:
#### `image:name`
An [extended docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `image`](../docker/using_docker_images.md#available-settings-for-image).
#### `image:entrypoint`
An [extended docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `image`](../docker/using_docker_images.md#available-settings-for-image).
......@@ -543,25 +543,25 @@ For:
##### `services:name`
An [extended docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `services`](../docker/using_docker_images.md#available-settings-for-services).
##### `services:alias`
An [extended docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `services`](../docker/using_docker_images.md#available-settings-for-services).
##### `services:entrypoint`
An [extended docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `services`](../docker/using_docker_images.md#available-settings-for-services).
##### `services:command`
An [extended docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
An [extended Docker configuration option](../docker/using_docker_images.md#extended-docker-configuration-options).
For more information, see [Available settings for `services`](../docker/using_docker_images.md#available-settings-for-services).
......@@ -3477,7 +3477,7 @@ If `GIT_FETCH_EXTRA_FLAGS` is:
- Not specified, `git fetch` flags default to `--prune --quiet` along with the default flags.
- Given the value `none`, `git fetch` is executed only with the default flags.
For example, the default flags are `--prune --quiet`, so you can make `git fetch` more verbose by overriding this with just `--prune`:
```yaml
......
# Building a package for testing
While developing a new feature or modifying an existing one, it is helpful if an
installable package (or a docker image) containing those changes is available
installable package (or a Docker image) containing those changes is available
for testing. For this very purpose, a manual job is provided in the GitLab CI/CD
pipeline that can be used to trigger a pipeline in the Omnibus GitLab repository
that will create:
- A deb package for Ubuntu 16.04, available as a build artifact, and
- A docker image, which is pushed to [Omnibus GitLab's container
- A Docker image, which is pushed to [Omnibus GitLab's container
registry](https://gitlab.com/gitlab-org/omnibus-gitlab/container_registry)
(images titled `gitlab-ce` and `gitlab-ee`, respectively, with the image tag set to the
commit that triggered the pipeline).
......
......@@ -390,7 +390,7 @@ builds](https://docs.docker.com/develop/develop-images/multistage-build/):
dependencies.
- They generate a small, self-contained image, derived from `Scratch`.
Generated docker images should have the program at their `Entrypoint` to create
Generated Docker images should have the program at their `Entrypoint` to create
portable commands. That way, anyone can run the image, and without parameters
it will display its help message (if `cli` has been used).
......
......@@ -139,7 +139,7 @@ might also help with keeping the image small.
As documented in the [Docker Official Images](https://github.com/docker-library/official-images#tags-and-aliases) project,
it is strongly encouraged that version number tags be given aliases, which allow the user to easily refer to the "most recent" release of a particular series.
See also [Docker Tagging: Best practices for tagging and versioning docker images](https://docs.microsoft.com/en-us/archive/blogs/stevelasker/docker-tagging-best-practices-for-tagging-and-versioning-docker-images).
See also [Docker Tagging: Best practices for tagging and versioning Docker images](https://docs.microsoft.com/en-us/archive/blogs/stevelasker/docker-tagging-best-practices-for-tagging-and-versioning-docker-images).
## Command line
......
......@@ -19,7 +19,7 @@ The current stages are:
<https://gitlab.com/gitlab-org/gitlab-foss>.
- `prepare`: This stage includes jobs that prepare artifacts that are needed by
jobs in subsequent stages.
- `build-images`: This stage includes jobs that prepare docker images
- `build-images`: This stage includes jobs that prepare Docker images
that are needed by jobs in subsequent stages or downstream pipelines.
- `fixtures`: This stage includes jobs that prepare fixtures needed by frontend tests.
- `test`: This stage includes most of the tests, DB/migration jobs, and static analysis jobs.
......
......@@ -336,7 +336,7 @@ Snowplow Inspector Chrome Extension is a browser extension for testing frontend
Snowplow Micro is a very small version of a full Snowplow data collection pipeline: small enough that it can be launched by a test suite. Events can be recorded into Snowplow Micro just as they can into a full Snowplow pipeline. Micro then exposes an API that can be queried.
Snowplow Micro is a docker-based solution for testing frontend and backend events in a local development environment. You need to modify GDK using the instructions below to set this up.
Snowplow Micro is a Docker-based solution for testing frontend and backend events in a local development environment. You need to modify GDK using the instructions below to set this up.
- Read [Introducing Snowplow Micro](https://snowplowanalytics.com/blog/2019/07/17/introducing-snowplow-micro/)
- Look at the [Snowplow Micro repo](https://github.com/snowplow-incubator/snowplow-micro)
......
......@@ -155,7 +155,7 @@ See [Review Apps](../review_apps.md) for more details about Review Apps.
If you are not [testing code in a merge request](#testing-code-in-merge-requests),
there are two main options for running the tests. If you simply want to run
the existing tests against a live GitLab instance or against a pre-built docker image
the existing tests against a live GitLab instance or against a pre-built Docker image
you can use the [GitLab QA orchestrator](https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md). See also [examples
of the test scenarios you can run via the orchestrator](https://gitlab.com/gitlab-org/gitlab-qa/blob/master/docs/what_tests_can_be_run.md#examples).
......
......@@ -9,11 +9,11 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
|-----|-------------|
| `:elasticsearch` | The test requires an Elasticsearch service. It is used by the [instance-level scenario](https://gitlab.com/gitlab-org/gitlab-qa#definitions) [`Test::Integration::Elasticsearch`](https://gitlab.com/gitlab-org/gitlab/-/blob/72b62b51bdf513e2936301cb6c7c91ec27c35b4d/qa/qa/ee/scenario/test/integration/elasticsearch.rb) to include only tests that require Elasticsearch. |
| `:kubernetes` | The test includes a GitLab instance that is configured to be run behind an SSH tunnel, allowing a TLS-accessible GitLab. This test will also include provisioning of at least one Kubernetes cluster to test against. *This tag is often paired with `:orchestrated`.* |
| `:orchestrated` | The GitLab instance under test may be [configured by `gitlab-qa`](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#orchestrated-tests) to be different to the default GitLab configuration, or `gitlab-qa` may launch additional services in separate docker containers, or both. Tests tagged with `:orchestrated` are excluded when testing environments where we can't dynamically modify GitLab's configuration (for example, Staging). |
| `:orchestrated` | The GitLab instance under test may be [configured by `gitlab-qa`](https://gitlab.com/gitlab-org/gitlab-qa/-/blob/master/docs/what_tests_can_be_run.md#orchestrated-tests) to be different to the default GitLab configuration, or `gitlab-qa` may launch additional services in separate Docker containers, or both. Tests tagged with `:orchestrated` are excluded when testing environments where we can't dynamically modify GitLab's configuration (for example, Staging). |
| `:quarantine` | The test has been [quarantined](https://about.gitlab.com/handbook/engineering/quality/guidelines/debugging-qa-test-failures/#quarantining-tests), will run in a separate job that only includes quarantined tests, and is allowed to fail. The test will be skipped in its regular job so that if it fails it will not hold up the pipeline. |
| `:reliable` | The test has been [promoted to a reliable test](https://about.gitlab.com/handbook/engineering/quality/guidelines/reliable-tests/#promoting-an-existing-test-to-reliable) meaning it passes consistently in all pipelines, including merge requests. |
| `:requires_admin` | The test requires an admin account. Tests with the tag are excluded when run against Canary and Production environments. |
| `:runner` | The test depends on and will set up a GitLab Runner instance, typically to run a pipeline. |
| `:gitaly_ha` | The test will run against a GitLab instance where repositories are stored on redundant Gitaly nodes behind a Praefect node. All nodes are [separate containers](../../../administration/gitaly/praefect.md#requirements-for-configuring-a-gitaly-cluster). Tests that use this tag have a longer setup time since there are three additional containers that need to be started. |
| `:skip_live_env` | The test will be excluded when run against live deployed environments such as Staging, Canary, and Production. |
| `:jira` | The test requires a Jira Server. [GitLab-QA](https://gitlab.com/gitlab-org/gitlab-qa) will provision the Jira Server in a docker container when the `Test::Integration::Jira` test scenario is run.
| `:jira` | The test requires a Jira Server. [GitLab-QA](https://gitlab.com/gitlab-org/gitlab-qa) will provision the Jira Server in a Docker container when the `Test::Integration::Jira` test scenario is run.
......@@ -2,13 +2,13 @@
## Jenkins spec
The [`jenkins_build_status_spec`](https://gitlab.com/gitlab-org/gitlab/blob/163c8a8c814db26d11e104d1cb2dcf02eb567dbe/qa/qa/specs/features/ee/browser_ui/3_create/jenkins/jenkins_build_status_spec.rb) spins up a Jenkins instance in a docker container based on an image stored in the [GitLab-QA container registry](https://gitlab.com/gitlab-org/gitlab-qa/container_registry).
The docker image it uses is preconfigured with some base data and plugins.
The [`jenkins_build_status_spec`](https://gitlab.com/gitlab-org/gitlab/blob/163c8a8c814db26d11e104d1cb2dcf02eb567dbe/qa/qa/specs/features/ee/browser_ui/3_create/jenkins/jenkins_build_status_spec.rb) spins up a Jenkins instance in a Docker container based on an image stored in the [GitLab-QA container registry](https://gitlab.com/gitlab-org/gitlab-qa/container_registry).
The Docker image it uses is preconfigured with some base data and plugins.
The test then configures the GitLab plugin in Jenkins with a URL of the GitLab instance that will be used
to run the tests. Unfortunately, the GitLab Jenkins plugin does not accept ports, so `http://localhost:3000` would
not be accepted. Therefore, this requires us to run GitLab on port 80 or inside a docker container.
not be accepted. Therefore, this requires us to run GitLab on port 80 or inside a Docker container.
To start a docker container for GitLab based on the nightly image:
To start a Docker container for GitLab based on the nightly image:
```shell
docker run \
......@@ -24,7 +24,7 @@ To run the tests from the `/qa` directory:
CHROME_HEADLESS=false bin/qa Test::Instance::All http://localhost -- qa/specs/features/ee/browser_ui/3_create/jenkins/jenkins_build_status_spec.rb
```
The test will automatically spinup a docker container for Jenkins and tear down once the test completes.
The test will automatically spin up a Docker container for Jenkins and tear it down once the test completes.
However, if you need to run Jenkins manually outside of the tests, use this command:
......@@ -46,5 +46,5 @@ only to prevent it from running in the pipelines for live environments such as S
### Troubleshooting
If Jenkins docker container exits without providing any information in the logs, try increasing the memory used by
If the Jenkins Docker container exits without providing any information in the logs, try increasing the memory used by
the Docker Engine.
......@@ -31,12 +31,12 @@ locally on either macOS or Linux.
NOTE: **Note:**
The rest of the steps are identical for macOS and Linux.
## Create new docker host
## Create new Docker host
1. Log in to DigitalOcean.
1. Generate a new API token at <https://cloud.digitalocean.com/settings/api/tokens>.
This command will create a new DO droplet called `gitlab-test-env-do` that will act as a docker host.
This command will create a new DO droplet called `gitlab-test-env-do` that will act as a Docker host.
NOTE: **Note:**
4GB is the minimum requirement for a Docker host that will run more than one GitLab instance.
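The droplet-creation command referenced above is elided by this hunk; it is roughly of this shape (a sketch — the access token is a placeholder and the size reflects the 4GB note):

```shell
# Create a DigitalOcean droplet that docker-machine will manage as a Docker host
docker-machine create \
  --driver digitalocean \
  --digitalocean-access-token "${DO_API_TOKEN}" \
  --digitalocean-size 4gb \
  gitlab-test-env-do
```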
......@@ -69,20 +69,20 @@ Resource: <https://docs.docker.com/machine/drivers/digital-ocean/>.
In this example we'll create a GitLab EE 8.10.8 instance.
First connect the docker client to the docker host you created previously.
First connect the Docker client to the Docker host you created previously.
```shell
eval "$(docker-machine env gitlab-test-env-do)"
```
You can add this to your `~/.bash_profile` file to ensure the `docker` client uses the `gitlab-test-env-do` docker host
You can add this to your `~/.bash_profile` file to ensure the `docker` client uses the `gitlab-test-env-do` Docker host.
### Create new GitLab container
- HTTP port: `8888`
- SSH port: `2222`
- Set `gitlab_shell_ssh_port` using `--env GITLAB_OMNIBUS_CONFIG`
- Hostname: IP of docker host
- Hostname: IP of Docker host
- Container name: `gitlab-test-8.10`
- GitLab version: **EE** `8.10.8-ee.0`
......@@ -108,7 +108,7 @@ gitlab/gitlab-ee:$VERSION
### Connect to the GitLab container
#### Retrieve the docker host IP
#### Retrieve the Docker host IP
```shell
docker-machine ip gitlab-test-env-do
......
......@@ -253,7 +253,7 @@ related object definitions to be created together, as well as a set of
parameters for those objects.
The template for GitLab resides in the Omnibus GitLab repository under the
docker directory. Let's download it locally with `wget`:
`docker` directory. Let's download it locally with `wget`:
```shell
wget https://gitlab.com/gitlab-org/omnibus-gitlab/raw/master/docker/openshift-template.json
......@@ -324,7 +324,7 @@ Now that we configured this, let's see how to manage and scale GitLab.
Setting up GitLab for the first time might take a while depending on your
internet connection and the resources you have attached to the all-in-one VM.
GitLab's docker image is quite big (~500MB), so you'll have to wait until
GitLab's Docker image is quite big (~500MB), so you'll have to wait until
it's downloaded and configured before you use it.
### Watch while GitLab gets deployed
......
......@@ -832,7 +832,7 @@ version of GitLab, the restore command will abort with an error. Install the
For GitLab installations using the Docker image or the GitLab Helm chart on
a Kubernetes cluster, the restore task expects the restore directories to be empty.
However, with docker and Kubernetes volume mounts, some system level directories
However, with Docker and Kubernetes volume mounts, some system level directories
may be created at the volume roots, like the `lost+found` directory found in Linux
operating systems. These directories are usually owned by `root`, which can
cause access permission errors since the restore Rake task runs as the `git` user.
......@@ -842,7 +842,7 @@ directories are empty.
For both these installation types, the backup tarball has to be available in the
backup location (default location is `/var/opt/gitlab/backups`).
For docker installations, the restore task can be run from host:
For Docker installations, the restore task can be run from the host:
```shell
docker exec -it <name of container> gitlab-backup restore
......
......@@ -89,7 +89,7 @@ template:
| `SECURE_BINARIES_DOWNLOAD_IMAGES` | Used to disable jobs | `"true"` |
| `SECURE_BINARIES_PUSH_IMAGES` | Push files to the project registry | `"true"` |
| `SECURE_BINARIES_SAVE_ARTIFACTS` | Also save image archives as artifacts | `"false"` |
| `SECURE_BINARIES_ANALYZER_VERSION` | Default analyzer version (docker tag) | `"2"` |
| `SECURE_BINARIES_ANALYZER_VERSION` | Default analyzer version (Docker tag) | `"2"` |
### Alternate way without the official template
......
......@@ -74,7 +74,7 @@ Follow these steps to enable the container registry. Note that these steps refle
sudo gitlab-ctl reconfigure
```
## Allow the docker daemon to trust the registry and GitLab Runner
## Allow the Docker daemon to trust the registry and GitLab Runner
Provide your Docker daemon with your certs by
[following the steps for using trusted certificates with your registry](../../administration/packages/container_registry.md#using-self-signed-certificates-with-container-registry):
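In practice those steps boil down to placing the CA certificate where the Docker daemon looks for per-registry certificates — a sketch (the registry host, port, and certificate file name are illustrative):

```shell
# Docker trusts certificates found under /etc/docker/certs.d/<registry-host:port>/
sudo mkdir -p /etc/docker/certs.d/registry.example.com:5000
sudo cp registry.crt /etc/docker/certs.d/registry.example.com:5000/ca.crt
```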
......@@ -125,7 +125,7 @@ Now we must add some additional configuration to our runner:
Make the following changes to `/etc/gitlab-runner/config.toml`:
- Add docker socket to volumes `volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]`
- Add the Docker socket to the volumes: `volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]`
- Add `pull_policy = "if-not-present"` to the executor configuration
Now we can start our Runner:
......
......@@ -462,7 +462,7 @@ Read more about the [solutions for vulnerabilities](../index.md#solutions-for-vu
## Troubleshooting
### docker: Error response from daemon: failed to copy xattrs
### `docker: Error response from daemon: failed to copy xattrs`
When the GitLab Runner uses the Docker executor and NFS is used
(for example, `/var/lib/docker` is on an NFS mount), Container Scanning might fail with
......
......@@ -546,7 +546,7 @@ As a workaround, remove the [`retire.js`](analyzers.md#selecting-specific-analyz
## Troubleshooting
### Error response from daemon: error processing tar file: docker-tar: relocation error
### `Error response from daemon: error processing tar file: docker-tar: relocation error`
This error occurs when the Docker version that runs the Dependency Scanning job is `19.03.00`.
Consider updating to Docker `19.03.1` or greater. Older versions are not
......
......@@ -70,7 +70,7 @@ The scanning tools and vulnerabilities database are updated regularly.
| Secure scanning tool | Vulnerabilities database updates |
|:-------------------------------------------------------------|-------------------------------------------|
| [Container Scanning](container_scanning/index.md) | Uses `clair`. The latest `clair-db` version is used for each job by running the [`latest` docker image tag](https://gitlab.com/gitlab-org/gitlab/blob/438a0a56dc0882f22bdd82e700554525f552d91b/lib/gitlab/ci/templates/Security/Container-Scanning.gitlab-ci.yml#L37). The `clair-db` database [is updated daily according to the author](https://github.com/arminc/clair-local-scan#clair-server-or-local). |
| [Container Scanning](container_scanning/index.md) | Uses `clair`. The latest `clair-db` version is used for each job by running the [`latest` Docker image tag](https://gitlab.com/gitlab-org/gitlab/blob/438a0a56dc0882f22bdd82e700554525f552d91b/lib/gitlab/ci/templates/Security/Container-Scanning.gitlab-ci.yml#L37). The `clair-db` database [is updated daily according to the author](https://github.com/arminc/clair-local-scan#clair-server-or-local). |
| [Dependency Scanning](dependency_scanning/index.md) | Relies on `bundler-audit` (for Rubygems), `retire.js` (for NPM packages), and `gemnasium` (GitLab's own tool for all libraries). Both `bundler-audit` and `retire.js` fetch their vulnerabilities data from GitHub repositories, so vulnerabilities added to `ruby-advisory-db` and `retire.js` are immediately available. The tools themselves are updated once per month if there's a new version. The [Gemnasium DB](https://gitlab.com/gitlab-org/security-products/gemnasium-db) is updated at least once a week. See our [current measurement of time from CVE being issued to our product being updated](https://about.gitlab.com/handbook/engineering/development/performance-indicators/#cve-issue-to-update). |
| [Dynamic Application Security Testing (DAST)](dast/index.md) | The scanning engine is updated on a periodic basis. See the [version of the underlying tool `zaproxy`](https://gitlab.com/gitlab-org/security-products/dast/blob/master/Dockerfile#L1). The scanning rules are downloaded at scan runtime. |
| [Static Application Security Testing (SAST)](sast/index.md) | Relies exclusively on [the tools GitLab wraps](sast/index.md#supported-languages-and-frameworks). The underlying analyzers are updated at least once per month if a relevant update is available. The vulnerabilities database is updated by the upstream tools. |
......
......@@ -557,7 +557,7 @@ security reports without requiring internet access.
## Troubleshooting
### Error response from daemon: error processing tar file: docker-tar: relocation error
### `Error response from daemon: error processing tar file: docker-tar: relocation error`
This error occurs when the Docker version that runs the SAST job is `19.03.0`.
Consider updating to Docker `19.03.1` or greater. Older versions are not
......
......@@ -243,7 +243,7 @@ For private and internal projects:
### Container Registry examples with GitLab CI/CD
If you're using docker-in-docker on your Runners, this is how your `.gitlab-ci.yml`
If you're using Docker-in-Docker on your Runners, your `.gitlab-ci.yml`
should look similar to this:
```yaml
......@@ -350,11 +350,11 @@ or [Kubernetes](https://docs.gitlab.com/runner/executors/kubernetes.html) execut
make sure that [`pull_policy`](https://docs.gitlab.com/runner/executors/docker.html#how-pull-policies-work)
is set to `always`.
### Using a docker-in-docker image from your Container Registry
### Using a Docker-in-Docker image from your Container Registry
If you want to use your own Docker images for docker-in-docker, there are a few
If you want to use your own Docker images for Docker-in-Docker, there are a few
things you need to do in addition to the steps in the
[docker-in-docker](../../../ci/docker/using_docker_build.md#use-docker-in-docker-workflow-with-docker-executor) section:
[Docker-in-Docker](../../../ci/docker/using_docker_build.md#use-docker-in-docker-workflow-with-docker-executor) section:
1. Update the `image` and `service` to point to your registry.
1. Add a service [alias](../../../ci/yaml/README.md#servicesalias).
......
......@@ -60,7 +60,7 @@ on your code by using GitLab CI/CD and [sitespeed.io](https://www.sitespeed.io)
using Docker-in-Docker.
1. First, set up GitLab Runner with a
[docker-in-docker build](../../../ci/docker/using_docker_build.md#use-docker-in-docker-workflow-with-docker-executor).
[Docker-in-Docker build](../../../ci/docker/using_docker_build.md#use-docker-in-docker-workflow-with-docker-executor).
1. After configuring the Runner, add a new job to `.gitlab-ci.yml` that generates
the expected report.
1. Define the `performance` job according to your version of GitLab:
......
......@@ -67,7 +67,7 @@ This example shows how to run Code Quality on your code by using GitLab CI/CD an
First, you need GitLab Runner configured:
- For the [docker-in-docker workflow](../../../ci/docker/using_docker_build.md#use-docker-in-docker-workflow-with-docker-executor).
- For the [Docker-in-Docker workflow](../../../ci/docker/using_docker_build.md#use-docker-in-docker-workflow-with-docker-executor).
- With enough disk space to handle generated Code Quality files. For example on the [GitLab project](https://gitlab.com/gitlab-org/gitlab) the files are approximately 7 GB.
Once you set up the Runner, include the CodeQuality template in your CI config:
......@@ -120,7 +120,7 @@ This information will be automatically extracted and shown right in the merge re
CAUTION: **Caution:**
On self-managed instances, if a malicious actor compromises the Code Quality job
definition they will be able to execute privileged docker commands on the Runner
definition they will be able to execute privileged Docker commands on the Runner
host. Having proper access control policies mitigates this attack vector by
allowing access only to trusted actors.
......
......@@ -178,7 +178,7 @@ git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.com/<user>/<mydependent
```
It can also be used for system-wide authentication
(only do this in a docker container, it will overwrite ~/.netrc):
(only do this in a Docker container, as it will overwrite `~/.netrc`):
```shell
echo -e "machine gitlab.com\nlogin gitlab-ci-token\npassword ${CI_JOB_TOKEN}" > ~/.netrc
......
......@@ -55,7 +55,7 @@ Use the switches to enable or disable the following features:
| **Merge Requests** | ✓ | Enables [merge request](../merge_requests/) functionality; also see [Merge request settings](#merge-request-settings) |
| **Forks** | ✓ | Enables [forking](../index.md#fork-a-project) functionality |
| **Pipelines** | ✓ | Enables [CI/CD](../../../ci/README.md) functionality |
| **Container Registry** | | Activates a [registry](../../packages/container_registry/) for your docker images |
| **Container Registry** | | Activates a [registry](../../packages/container_registry/) for your Docker images |
| **Git Large File Storage** | | Enables the use of [large files](../../../topics/git/lfs/index.md#git-large-file-storage-lfs) |
| **Packages** | | Supports configuration of a [package registry](../../../administration/packages/index.md#gitlab-package-registry-administration-premium-only) functionality |
| **Wiki** | ✓ | Enables a separate system for [documentation](../wiki/) |
......