Commit c60a1173 authored by GitLab Bot

Add latest changes from gitlab-org/gitlab@master

parent e1443690

@@ -197,19 +197,19 @@ For more fine tuning, read also about the

The most common use case of cache is to preserve contents between subsequent
runs of jobs for things like dependencies and commonly used libraries
(Node.js packages, PHP packages, rubygems, Python libraries, etc.),
so they don't have to be re-fetched from the public internet.

NOTE: **Note:**
For more examples, check out our [GitLab CI/CD
templates](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/lib/gitlab/ci/templates).

### Caching Node.js dependencies

Assuming your project is using [npm](https://www.npmjs.com/) or
[Yarn](https://classic.yarnpkg.com/en/) to install the Node.js dependencies, the
following example defines `cache` globally so that all jobs inherit it.
Node.js modules are installed in `node_modules/` and are cached per-branch:

```yaml
# ...
```
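
A minimal sketch of such a global, per-branch cache for `node_modules/` (the key, paths, and job name here are illustrative assumptions, not the excerpted file):

```yaml
# Hypothetical global cache definition, inherited by every job.
cache:
  key: ${CI_COMMIT_REF_SLUG}   # one cache per branch
  paths:
    - node_modules/

install_dependencies:
  script:
    - npm install              # or `yarn install` if the project uses Yarn
```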

@@ -6,7 +6,7 @@ type: tutorial

This guide covers the building of dependencies of a PHP project while compiling assets via an NPM script using [GitLab CI/CD](../../README.md).

While it is possible to create your own image with custom PHP and Node.js versions, for brevity, we will use an existing [Docker image](https://hub.docker.com/r/tetraweb/php/) that has both PHP and Node.js installed.

```yaml
image: tetraweb/php
```

@@ -23,7 +23,7 @@ before_script:

```yaml
  - php -r "unlink('composer-setup.php');"
```

This will make sure we have all requirements ready. Next, we want to run `composer install` to fetch all PHP dependencies and `npm install` to load Node.js packages, then run the `npm` script. We need to append them to the `before_script` section:

```yaml
before_script:
  # ...
```
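
A minimal sketch of the appended commands (the `npm run build` script name is an assumption for illustration):

```yaml
before_script:
  # ...PHP and Composer setup shown above...
  - composer install          # fetch PHP dependencies
  - npm install               # fetch Node.js packages
  - npm run build             # hypothetical npm script that compiles the assets
```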

@@ -74,7 +74,7 @@ gitlab-runner register \

```shell
  --description "ruby:2.6" \
  --executor "docker" \
  --docker-image ruby:2.6 \
  --docker-services latest
```

With the command above, you create a Runner that uses the [ruby:2.6](https://hub.docker.com/_/ruby) image and a [postgres](https://hub.docker.com/_/postgres) database.
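
In the project's `.gitlab-ci.yml`, such a database is typically consumed through the `services` keyword; a minimal sketch with illustrative values (job name, database name, and credentials are assumptions):

```yaml
# Hypothetical job that talks to the postgres service container.
test:
  image: ruby:2.6
  services:
    - postgres:latest                    # started alongside the job container
  variables:
    POSTGRES_DB: test_db                 # illustrative values; adjust per project
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: "example-password"
  script:
    # the service is reachable under the `postgres` host name
    - bundle exec rake db:create db:migrate
    - bundle exec rspec
```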

@@ -102,7 +102,7 @@ and is meant to be a mapping of concepts there to concepts in GitLab.

The agent section is used to define how a pipeline will be executed. For GitLab, we use the [GitLab Runner](../runners/README.md)
to provide this capability. You can configure your own runners in Kubernetes or on any host, or take advantage
of our shared runner fleet (note that the shared runner fleet is only available to GitLab.com users). The link above brings you to the documentation, which describes how to get
up and running quickly. We also support using [tags](../runners/README.md#using-tags) to direct different jobs
to different Runners (execution agents).
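
For example, a job can be pinned to Runners registered with a matching tag; a minimal sketch (tag and job names are illustrative):

```yaml
# Hypothetical jobs routed to differently tagged Runners.
unit-tests:
  tags:
    - docker          # picked up only by Runners registered with the `docker` tag
  script:
    - make test

windows-build:
  tags:
    - windows         # routed to a Runner registered with the `windows` tag
  script:
    - build.bat
```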

@@ -1641,7 +1641,7 @@ cache:

```yaml
    - node_modules
```

In this example we are creating a cache for Ruby and Node.js dependencies that
is tied to current versions of the `Gemfile.lock` and `package.json` files. Whenever one of
these files changes, a new cache key is computed and a new cache is created. Any future
job runs using the same `Gemfile.lock` and `package.json` with `cache:key:files` will use the new cache, instead of rebuilding the dependencies.
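
For reference, a `cache:key:files` definition of the kind described above might look like this minimal sketch (the cached paths are assumptions):

```yaml
# Hypothetical cache keyed on the two files named above.
cache:
  key:
    files:
      - Gemfile.lock
      - package.json
  paths:
    - vendor/ruby       # illustrative paths to cache
    - node_modules
```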

@@ -129,7 +129,7 @@ Component statuses are linked to configuration documentation for each component.

| [Unicorn (GitLab Rails)](#unicorn) | Handles requests for the web interface and API | [][unicorn-omnibus] | [][unicorn-charts] | [][unicorn-charts] | [](../user/gitlab_com/index.md#unicorn) | [][unicorn-source] | [][gitlab-yml] | CE & EE |
| [Sidekiq](#sidekiq) | Background jobs processor | [][sidekiq-omnibus] | [][sidekiq-charts] | [](https://docs.gitlab.com/charts/charts/gitlab/sidekiq/index.html) | [](../user/gitlab_com/index.md#sidekiq) | [][gitlab-yml] | [][gitlab-yml] | CE & EE |
| [Gitaly](#gitaly) | Git RPC service for handling all Git calls made by GitLab | [][gitaly-omnibus] | [][gitaly-charts] | [][gitaly-charts] | [](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#service-architecture) | [][gitaly-source] | ✅ | CE & EE |
| [Praefect](#praefect) | A transparent proxy between any Git client and Gitaly storage nodes. | [][gitaly-omnibus] | [][gitaly-charts] | [][gitaly-charts] | [](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#service-architecture) | [][praefect-source] | ✅ | CE & EE |
| [GitLab Workhorse](#gitlab-workhorse) | Smart reverse proxy, handles large HTTP requests | [][workhorse-omnibus] | [][workhorse-charts] | [][workhorse-charts] | [](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#service-architecture) | [][workhorse-source] | ✅ | CE & EE |
| [GitLab Shell](#gitlab-shell) | Handles `git` over SSH sessions | [][shell-omnibus] | [][shell-charts] | [][shell-charts] | [](https://about.gitlab.com/handbook/engineering/infrastructure/production-architecture/#service-architecture) | [][shell-source] | [][gitlab-yml] | CE & EE |
| [GitLab Pages](#gitlab-pages) | Hosts static websites | [][pages-omnibus] | [][pages-charts] | [][pages-charts] | [](../user/gitlab_com/index.md#gitlab-pages) | [][pages-source] | [][pages-gdk] | CE & EE |

@@ -159,7 +159,7 @@ the issue should be relabelled as ~"group::access" while keeping the original
~"devops::create" unchanged.

We also use stage and group labels to help quantify our [throughput](https://about.gitlab.com/handbook/engineering/management/throughput/).
Please read [Stage and Group labels in Throughput](https://about.gitlab.com/handbook/engineering/management/throughput/#stage-and-group-labels-in-throughput) for more information on how the labels are used in this context.

### Category labels

@@ -8,7 +8,7 @@ Currently we rely on different sources to present diffs, these include:

## Deep Dive

In January 2019, Oswaldo Ferreira hosted a [Deep Dive] on GitLab's Diffs and Commenting on Diffs functionality to share his domain-specific knowledge with anyone who may work in this part of the code base in the future. You can find the [recording on YouTube], and the slides on [Google Slides] and in [PDF]. Everything covered in this deep dive was accurate as of GitLab 11.7, and while specific details may have changed since then, it should still serve as a good introduction.

[Deep Dive]: https://gitlab.com/gitlab-org/create-stage/issues/1
[recording on YouTube]: https://www.youtube.com/watch?v=K6G3gMcFyek

@@ -37,7 +37,7 @@ For instance, it is common practice to use `before_script` to install system libraries
a particular project needs before performing SAST or Dependency Scanning.

Similarly, [`after_script`](../../ci/yaml/README.md#before_script-and-after_script)
should not be used in the job definition, because it may be overridden by users.
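
To illustrate why, here is a hedged sketch of how a project's own `.gitlab-ci.yml` can override a vendored job's `before_script` (template and package names are illustrative assumptions):

```yaml
# Hypothetical project configuration overriding a vendored scanning job.
include:
  - template: SAST.gitlab-ci.yml   # illustrative template name

sast:
  before_script:
    # this user-supplied block replaces any before_script the template relied on
    - apk add --no-cache libxml2-dev
```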

### Stage

@@ -47,7 +47,7 @@ POST /internal/allowed

| `protocol` | string | yes | SSH when called from GitLab Shell, HTTP or SSH when called from Gitaly |
| `action` | string | yes | Git command being run (`git-upload-pack`, `git-receive-pack`, `git-upload-archive`) |
| `changes` | string | yes | `<oldrev> <newrev> <refname>` when called from Gitaly, the magic string `_any` when called from GitLab Shell |
| `check_ip` | string | no | IP address from which the call to GitLab Shell was made |

Example request:

@@ -265,7 +265,7 @@ provides helper methods to track exceptions:

   and DOES NOT send the exception to Sentry,
1. `Gitlab::ErrorTracking.track_and_raise_for_dev_exception`: this method logs,
   sends the exception to Sentry (if configured) and re-raises the exception
   for development and test environments.

It is advised to only use `Gitlab::ErrorTracking.track_and_raise_exception`
and `Gitlab::ErrorTracking.track_exception` as presented in the examples below.

@@ -207,7 +207,7 @@ the default by adding the following to your service:

- `ReactiveCaching` uses `Gitlab::ExclusiveLease` to ensure that the cache calculation
  is never run concurrently by multiple workers.
- This attribute is the timeout for the `Gitlab::ExclusiveLease`.
- It defaults to 2 minutes, but can be overridden if a different timeout is required.

```ruby
self.reactive_cache_lease_timeout = 2.minutes
```

@@ -178,7 +178,7 @@ talking to the primary can mitigate this.

In the second case, existing connections to the newly-demoted replica
may execute a write query, which would fail. During a failover, it may
be advantageous to shut down the PgBouncer talking to the primary to
ensure no more traffic arrives for it. The alternative would be to make
the application aware of the failover event and terminate its
connections gracefully.

@@ -78,7 +78,7 @@ That's not possible if a test leaves the browser logged in when it finishes. Nor

For an example see: <https://gitlab.com/gitlab-org/gitlab/issues/34736>

Ideally, any actions performed in an `after(:context)` (or [`before(:context)`](#limit-the-use-of-beforeall-and-after-hooks)) block would be performed via the API. But if it's necessary to do so via the UI (e.g., if API functionality doesn't exist), make sure to log out at the end of the block.

```ruby
after(:all) do
  # ...
end
```

@@ -235,7 +235,7 @@ SELECT (START_EVENT_TIME-END_EVENT_TIME) as duration, END_EVENT.timestamp

- Services (`Analytics::CycleAnalytics` module): All `Stage` related actions will be delegated to respective service objects.
- Models (`Analytics::CycleAnalytics` module): Models are used to persist the `Stage` objects `ProjectStage` and `GroupStage`.
- Feature classes (`Gitlab::Analytics::CycleAnalytics` module):
  - Responsible for composing queries and defining feature-specific business logic.
  - `DataCollector`, `Event`, `StageEvents`, etc.

## Testing

@@ -805,7 +805,7 @@ commands to be wrapped as follows:

```shell
/bin/herokuish procfile exec $COMMAND
```

This might be necessary, for example, when:

- Attaching using `kubectl exec`.
- Using GitLab's [Web Terminal](../../ci/environments.md#web-terminals).

@@ -55,7 +55,7 @@ you can configure a cluster on GKE. Once this is set up, you can follow the step

NOTE: **Note:**
This guide shows how the WAF can be deployed using Auto DevOps. The WAF
is available by default to all applications no matter how they are deployed,
as long as they are using Ingress.

## Network firewall vs. Web Application Firewall

@@ -12,7 +12,7 @@ need to ensure your own [Runners are configured](../../ci/runners/README.md) and
[Google OAuth is enabled](../../integration/google.md).

**Note**: GitLab's Web Application Firewall is deployed with [Ingress](../../user/clusters/applications.md#Ingress),
so it will be available to your applications no matter how you deploy them to Kubernetes.

## Enable or disable ModSecurity

@@ -15,9 +15,7 @@ test:

```yaml
    - export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${DB_HOST}:5432/${POSTGRES_DB}"
    - cp -R . /tmp/app
    - /bin/herokuish buildpack test
  rules:
    - if: '$TEST_DISABLED'      # skip the job entirely when TEST_DISABLED is set
      when: never
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'   # otherwise run for tags and branches
```

```ruby
# frozen_string_literal: true

require 'spec_helper'

describe 'Jobs/Test.gitlab-ci.yml' do
  subject(:template) { Gitlab::Template::GitlabCiYmlTemplate.find('Jobs/Test') }

  describe 'the created pipeline' do
    let_it_be(:user) { create(:admin) }
    let_it_be(:project) { create(:project, :repository) }

    let(:default_branch) { 'master' }
    let(:pipeline_ref) { default_branch }
    let(:service) { Ci::CreatePipelineService.new(project, user, ref: pipeline_ref) }
    let(:pipeline) { service.execute!(:push) }
    let(:build_names) { pipeline.builds.pluck(:name) }

    before do
      stub_ci_pipeline_yaml_file(template.content)
      allow_any_instance_of(Ci::BuildScheduleWorker).to receive(:perform).and_return(true)
      allow(project).to receive(:default_branch).and_return(default_branch)
    end

    context 'on master' do
      it 'creates the test job' do
        expect(build_names).to contain_exactly('test')
      end
    end

    context 'on another branch' do
      let(:pipeline_ref) { 'feature' }

      it 'creates the test job' do
        expect(build_names).to contain_exactly('test')
      end
    end

    context 'on tag' do
      let(:pipeline_ref) { 'v1.0.0' }

      it 'creates the test job' do
        expect(pipeline).to be_tag
        expect(build_names).to contain_exactly('test')
      end
    end

    context 'on merge request' do
      let(:service) { MergeRequests::CreatePipelineService.new(project, user) }
      let(:merge_request) { create(:merge_request, :simple, source_project: project) }
      let(:pipeline) { service.execute(merge_request) }

      it 'has no jobs' do
        expect(pipeline).to be_merge_request_event
        expect(build_names).to be_empty
      end
    end

    context 'TEST_DISABLED is set' do
      before do
        create(:ci_variable, key: 'TEST_DISABLED', value: 'true', project: project)
      end

      context 'on master' do
        it 'has no jobs' do
          expect { pipeline }.to raise_error(Ci::CreatePipelineService::CreateError)
        end
      end

      context 'on another branch' do
        let(:pipeline_ref) { 'feature' }

        it 'has no jobs' do
          expect { pipeline }.to raise_error(Ci::CreatePipelineService::CreateError)
        end
      end

      context 'on tag' do
        let(:pipeline_ref) { 'v1.0.0' }

        it 'has no jobs' do
          expect { pipeline }.to raise_error(Ci::CreatePipelineService::CreateError)
        end
      end
    end
  end
end
```