Commit 0ab753fa authored by Nick Thomas

Merge remote-tracking branch 'upstream/master' into ce-to-ee-2017-06-15

parents 58aafaf7 62a80669
Please view this file on the master branch, on stable branches it's out of date.
## 9.2.6 (2017-06-16)
- Geo: backported fix from 9.3 for big repository sync issues. !2000
- Geo - Properly set tracking database connection and cron jobs on secondary nodes.
- Fix approvers dropdown when creating a merge request from a fork.
- Fixed header being over issue boards when in focus mode.
- Fix bug where files over 2 GB would not be saved in Geo tracking DB.
## 9.2.5 (2017-06-07)
- No changes.
...
...@@ -2,6 +2,21 @@
documentation](doc/development/changelog.md) for instructions on adding your own
entry.
## 9.2.6 (2017-06-16)
- Fix extraction of the last coverage value from the trace log. !11128 (dosuken123)
- Respect merge, instead of push, permissions for protected actions. !11648
- Fix pipeline_schedules pages throwing error 500. !11706 (dosuken123)
- Make the backup task continue on corrupt repositories. !11962
- Fix incorrect ETag cache key when relative instance URL is used. !11964
- Fix math rendering on blob pages.
- Invalidate cache for issue and MR counters more granularly.
- Fix terminals support for Kubernetes Service.
- Fix LFS timeouts when trying to save large files.
- Strip trailing whitespace in submodule URLs.
- Make sure reCAPTCHA configuration is loaded when spam checks are initiated.
- Remove foreign key on ci_trigger_schedules only if it exists.
## 9.2.5 (2017-06-07)
- No changes.
...
...@@ -236,11 +236,9 @@ class IssuableBaseService < BaseService
)
if old_assignees != issuable.assignees
## EE-specific
new_assignees = issuable.assignees.to_a
affected_assignees = (old_assignees + new_assignees) - (old_assignees & new_assignees)
invalidate_cache_counts(affected_assignees.compact, issuable)
## EE-specific
end
after_update(issuable)
...
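The hunk above derives the set of users whose cached counts must be refreshed as the symmetric difference of the old and new assignee lists. A minimal standalone sketch of that set arithmetic, using a stand-in `User` struct rather than the real `issuable.assignees` association:

```ruby
# Toy illustration of the symmetric-difference logic; not GitLab code.
User = Struct.new(:name)

alice = User.new('alice')
bob   = User.new('bob')
carol = User.new('carol')

old_assignees = [alice, bob]
new_assignees = [bob, carol]

# Users present in exactly one of the two lists need their counter caches
# invalidated: alice (removed) and carol (added), but not bob (unchanged).
affected_assignees = (old_assignees + new_assignees) - (old_assignees & new_assignees)

p affected_assignees.map(&:name) # => ["alice", "carol"]
```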
...@@ -3,10 +3,13 @@ ...@@ -3,10 +3,13 @@
- return unless issuable.is_a?(MergeRequest)
- return if issuable.closed_without_fork?
<<<<<<< HEAD
-# This check is duplicated below to avoid CE -> EE merge conflicts.
-# This comment and the following line should only exist in CE.
- return unless issuable.can_remove_source_branch?(current_user)
=======
>>>>>>> upstream/master
- if issuable.can_remove_source_branch?(current_user)
.form-group
.col-sm-10.col-sm-offset-2
...@@ -17,11 +20,4 @@
= check_box_tag 'merge_request[force_remove_source_branch]', '1', initial_checkbox_value
Remove source branch when merge request is accepted.
.form-group
.col-sm-10.col-sm-offset-2
.checkbox
= label_tag 'merge_request[squash]' do
= hidden_field_tag 'merge_request[squash]', '0', id: nil
= check_box_tag 'merge_request[squash]', '1', issuable.squash
Squash commits when merge request is accepted.
= link_to 'About this feature', help_page_path('user/project/merge_requests/squash_and_merge')
= render 'shared/issuable/form/ee/squash_merge_param', issuable: issuable
---
title: Geo - Properly set tracking database connection and cron jobs on secondary nodes
merge_request:
author:
---
title: Fix approvers dropdown when creating a merge request from a fork
merge_request:
author:
---
title: Fixed header being over issue boards when in focus mode
merge_request:
author:
---
title: Fix bug where files over 2 GB would not be saved in Geo tracking DB
merge_request:
author:
---
title: Fix extraction of the last coverage value from the trace log
merge_request: 11128
author: dosuken123
---
title: Fix pipeline_schedules pages throwing error 500
merge_request: 11706
author: dosuken123
---
title: Fix incorrect ETag cache key when relative instance URL is used
merge_request: 11964
author:
---
title: Invalidate cache for issue and MR counters more granularly
merge_request:
author:
---
title: Make the backup task continue on corrupt repositories
merge_request: 11962
author:
---
title: Respect merge, instead of push, permissions for protected actions
merge_request: 11648
author:
---
title: Fix terminals support for Kubernetes Service
merge_request:
author:
---
title: Fix LFS timeouts when trying to save large files
merge_request:
author:
---
title: Strip trailing whitespace in submodule URLs
merge_request:
author:
---
title: Remove foreign key on ci_trigger_schedules only if it exists
merge_request:
author:
...@@ -4,10 +4,17 @@
> [Amazon Elasticsearch][aws-elasticsearch] was [introduced][ee-1305] in GitLab
> EE 9.0.

[Elasticsearch] is a flexible, scalable and powerful search service.
If you want to keep GitLab's search fast when dealing with huge amounts of data,
you should consider [enabling Elasticsearch](#enable-elasticsearch).

## Why do you need this?

[Elasticsearch] is a flexible, scalable and powerful search service that saves developers time. Instead of writing duplicate code and wasting time, developers can now search for code from other teams that will help their own project.

## Who needs this?

1. Does your team use a plugin to find code from different teams?
2. Are developers from different teams creating the same code for their own projects?
3. Are you looking to enable innersourcing?
4. Do you want to keep GitLab's search fast when dealing with huge amounts of data?

If you answered yes to any of these, you should consider [enabling Elasticsearch](#enable-elasticsearch).

GitLab leverages the search capabilities of Elasticsearch and enables it when
searching in:
...
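Since the section ultimately points to [enabling Elasticsearch](#enable-elasticsearch), it can be worth confirming the cluster is reachable before wiring it up. A minimal sketch using the `elasticsearch` Ruby gem; the URL is a placeholder, not a value from this page:

```ruby
require 'elasticsearch'

# Placeholder endpoint; substitute your own cluster URL.
client = Elasticsearch::Client.new(url: 'http://localhost:9200')

# Basic reachability and version check before enabling the integration.
info = client.info
puts "Connected to Elasticsearch #{info['version']['number']}"
```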
...@@ -118,10 +118,13 @@ feature 'Jobs', :feature do
before do
visit namespace_project_job_path(project.namespace, project, job)
<<<<<<< HEAD
end
it 'shows status name', :js do
expect(page).to have_css('.ci-status.ci-success', text: 'passed')
=======
>>>>>>> upstream/master
end
it 'shows commit`s data' do
...@@ -353,12 +356,12 @@ feature 'Jobs', :feature do
end
end
context 'build project is over shared runners limit' do
context 'job project is over shared runners limit' do
let(:group) { create(:group, :with_used_build_minutes_limit) }
let(:project) { create(:project, namespace: group, shared_runners_enabled: true) }
it 'displays a warning message' do
visit namespace_project_build_path(project.namespace, project, build)
visit namespace_project_job_path(project.namespace, project, job)
expect(page).to have_content('You have used all your shared Runners pipeline minutes.')
end
...@@ -370,7 +373,11 @@ feature 'Jobs', :feature do
before do
job.run!
visit namespace_project_job_path(project.namespace, project, job)
<<<<<<< HEAD
find('.js-cancel-job').click()
=======
click_link "Cancel"
>>>>>>> upstream/master
end
it 'loads the page and shows all needed controls' do
...@@ -378,6 +385,19 @@ feature 'Jobs', :feature do
expect(page).to have_content 'Retry'
end
end
<<<<<<< HEAD
=======
context "Job from other project" do
before do
job.run!
visit namespace_project_job_path(project.namespace, project, job)
page.driver.post(cancel_namespace_project_job_path(project.namespace, project, job2))
end
it { expect(page.status_code).to eq(404) }
end
>>>>>>> upstream/master
end
describe "POST /:project/jobs/:id/retry" do
...@@ -385,8 +405,15 @@ feature 'Jobs', :feature do
before do
job.run!
visit namespace_project_job_path(project.namespace, project, job)
<<<<<<< HEAD
find('.js-cancel-job').click()
find('.js-retry-button').trigger('click')
=======
click_link 'Cancel'
page.within('.build-header') do
click_link 'Retry job'
end
>>>>>>> upstream/master
end
it 'shows the right status and buttons', :js do
...@@ -397,6 +424,20 @@ feature 'Jobs', :feature do
end
end
<<<<<<< HEAD
=======
context "Job from other project" do
before do
job.run!
visit namespace_project_job_path(project.namespace, project, job)
click_link 'Cancel'
page.driver.post(retry_namespace_project_job_path(project.namespace, project, job2))
end
it { expect(page).to have_http_status(404) }
end
>>>>>>> upstream/master
context "Job that current user is not allowed to retry" do
before do
job.run!
...@@ -470,9 +511,24 @@ feature 'Jobs', :feature do
Capybara.current_session.driver.headers = { 'X-Sendfile-Type' => 'X-Sendfile' }
job.run!
<<<<<<< HEAD
end
context 'when job has trace in file', :js do
=======
allow_any_instance_of(Gitlab::Ci::Trace).to receive(:paths)
.and_return(paths)
visit namespace_project_job_path(project.namespace, project, job)
end
context 'when job has trace in file', :js do
let(:paths) do
[existing_file]
end
>>>>>>> upstream/master
before do
allow_any_instance_of(Gitlab::Ci::Trace)
.to receive(:paths)
...
...@@ -11,7 +11,11 @@ describe API::Jobs, :api do
ref: project.default_branch)
end
<<<<<<< HEAD
let!(:job) { create(:ci_build, pipeline: pipeline) }
=======
let(:job) { create(:ci_build, pipeline: pipeline) }
>>>>>>> upstream/master
let(:user) { create(:user) }
let(:api_user) { user }
...@@ -26,6 +30,10 @@ describe API::Jobs, :api do
let(:query) { Hash.new }
before do
<<<<<<< HEAD
=======
job
>>>>>>> upstream/master
get api("/projects/#{project.id}/jobs", api_user), query
end
...@@ -89,6 +97,10 @@ describe API::Jobs, :api do
let(:query) { Hash.new }
before do
<<<<<<< HEAD
=======
job
>>>>>>> upstream/master
get api("/projects/#{project.id}/pipelines/#{pipeline.id}/jobs", api_user), query
end
...
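The upstream side of the conflicts above replaces the eager `let!(:job)` with a lazy `let(:job)` and then touches `job` explicitly inside the `before` blocks. A self-contained sketch of the difference between the two forms, using a hypothetical spec rather than the GitLab one:

```ruby
require 'rspec/autorun'

RSpec.describe 'let vs let!' do
  # Records each "creation" so we can observe when the blocks fire.
  let(:events) { [] }

  let(:lazy_job)   { events << :lazy;  :lazy_job  } # built only when first referenced
  let!(:eager_job) { events << :eager; :eager_job } # built before every example

  before do
    # Mirrors the explicit `job` reference added upstream: touching the lazy
    # let here forces its creation before the request is made.
    lazy_job
  end

  it 'has created both values by the time the example runs' do
    expect(events).to contain_exactly(:eager, :lazy)
  end
end
```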
...@@ -14,6 +14,9 @@ module FilteredSearchHelpers
filtered_search.set(search)
if submit
# Wait for the lazy author/assignee tokens that
# swap out the username with an avatar and name
wait_for_requests
filtered_search.send_keys(:enter)
end
end
...
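Taken together, the helper now waits for the lazy token requests before submitting. A hypothetical reconstruction of how the method reads after the change; the method name and the `submit:` keyword argument are assumptions based on the hunk, not the actual GitLab source:

```ruby
module FilteredSearchHelpers
  # Hypothetical reconstruction; only the inner lines come from the hunk above.
  def input_filtered_search(search, submit: true)
    filtered_search.set(search)

    if submit
      # Wait for the lazy author/assignee tokens that
      # swap out the username with an avatar and name
      wait_for_requests

      filtered_search.send_keys(:enter)
    end
  end
end
```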