Commit 0ab753fa authored by Nick Thomas's avatar Nick Thomas

Merge remote-tracking branch 'upstream/master' into ce-to-ee-2017-06-15

parents 58aafaf7 62a80669
Please view this file on the master branch, on stable branches it's out of date.
## 9.2.6 (2017-06-16)
- Geo: backported fix from 9.3 for big repository sync issues. !2000
- Geo - Properly set tracking database connection and cron jobs on secondary nodes.
- Fix approvers dropdown when creating a merge request from a fork.
- Fixed header being over issue boards when in focus mode.
- Fix bug where files over 2 GB would not be saved in Geo tracking DB.
## 9.2.5 (2017-06-07)
- No changes.
......
......@@ -2,6 +2,21 @@
documentation](doc/development/changelog.md) for instructions on adding your own
entry.
## 9.2.6 (2017-06-16)
- Fix the last coverage in trace log should be extracted. !11128 (dosuken123)
- Respect merge, instead of push, permissions for protected actions. !11648
- Fix pipeline_schedules pages throwing error 500. !11706 (dosuken123)
- Make backup task to continue on corrupt repositories. !11962
- Fix incorrect ETag cache key when relative instance URL is used. !11964
- Fix math rendering on blob pages.
- Invalidate cache for issue and MR counters more granularly.
- Fix terminals support for Kubernetes Service.
- Fix LFS timeouts when trying to save large files.
- Strip trailing whitespaces in submodule URLs.
- Make sure reCAPTCHA configuration is loaded when spam checks are initiated.
- Remove foreign key on ci_trigger_schedules only if it exists.
## 9.2.5 (2017-06-07)
- No changes.
......
......@@ -236,11 +236,9 @@ class IssuableBaseService < BaseService
)
if old_assignees != issuable.assignees
## EE-specific
new_assignees = issuable.assignees.to_a
affected_assignees = (old_assignees + new_assignees) - (old_assignees & new_assignees)
invalidate_cache_counts(affected_assignees.compact, issuable)
## EE-specific
end
after_update(issuable)
......
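The EE-specific block above invalidates cached issue/MR counters only for users whose assignment actually changed: `(old_assignees + new_assignees) - (old_assignees & new_assignees)` is the symmetric difference of the two assignee lists. A minimal sketch of that set logic, with symbols standing in for the `User` records GitLab would use:

```ruby
# Sketch of the symmetric-difference computation used in the hunk above.
# The names are hypothetical stand-ins for User records.
old_assignees = [:alice, :bob]
new_assignees = [:bob, :carol]

# Users present in exactly one of the two lists, i.e. added or removed.
affected = (old_assignees + new_assignees) - (old_assignees & new_assignees)

affected # => [:alice, :carol] -- only these need their cached counts refreshed
```

Users who stay assigned (`:bob` here) are untouched, which is what makes the invalidation "more granular" as described in the changelog entry above.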
......@@ -3,10 +3,13 @@
- return unless issuable.is_a?(MergeRequest)
- return if issuable.closed_without_fork?
<<<<<<< HEAD
-# This check is duplicated below to avoid CE -> EE merge conflicts.
-# This comment and the following line should only exist in CE.
- return unless issuable.can_remove_source_branch?(current_user)
=======
>>>>>>> upstream/master
- if issuable.can_remove_source_branch?(current_user)
.form-group
.col-sm-10.col-sm-offset-2
......@@ -17,11 +20,4 @@
= check_box_tag 'merge_request[force_remove_source_branch]', '1', initial_checkbox_value
Remove source branch when merge request is accepted.
.form-group
.col-sm-10.col-sm-offset-2
.checkbox
= label_tag 'merge_request[squash]' do
= hidden_field_tag 'merge_request[squash]', '0', id: nil
= check_box_tag 'merge_request[squash]', '1', issuable.squash
Squash commits when merge request is accepted.
= link_to 'About this feature', help_page_path('user/project/merge_requests/squash_and_merge')
= render 'shared/issuable/form/ee/squash_merge_param', issuable: issuable
.form-group
.col-sm-10.col-sm-offset-2
.checkbox
= label_tag 'merge_request[squash]' do
= hidden_field_tag 'merge_request[squash]', '0', id: nil
= check_box_tag 'merge_request[squash]', '1', issuable.squash
Squash commits when merge request is accepted.
= link_to 'About this feature', help_page_path('user/project/merge_requests/squash_and_merge')
---
title: Geo - Properly set tracking database connection and cron jobs on secondary nodes
merge_request:
author:
---
title: Fix approvers dropdown when creating a merge request from a fork
merge_request:
author:
---
title: Fixed header being over issue boards when in focus mode
merge_request:
author:
---
title: Fix bug where files over 2 GB would not be saved in Geo tracking DB
merge_request:
author:
---
title: Fix the last coverage in trace log should be extracted
merge_request: 11128
author: dosuken123
---
title: Fix pipeline_schedules pages throwing error 500
merge_request: 11706
author: dosuken123
---
title: Fix incorrect ETag cache key when relative instance URL is used
merge_request: 11964
author:
---
title: Invalidate cache for issue and MR counters more granularly
merge_request:
author:
---
title: Make backup task to continue on corrupt repositories
merge_request: 11962
author:
---
title: Respect merge, instead of push, permissions for protected actions
merge_request: 11648
author:
---
title: Fix terminals support for Kubernetes Service
merge_request:
author:
---
title: Fix LFS timeouts when trying to save large files
merge_request:
author:
---
title: Strip trailing whitespaces in submodule URLs
merge_request:
author:
---
title: Remove foreign key on ci_trigger_schedules only if it exists
merge_request:
author:
......@@ -4,10 +4,17 @@
> [Amazon Elasticsearch][aws-elasticsearch] was [introduced][ee-1305] in GitLab
> EE 9.0.
[Elasticsearch] is a flexible, scalable and powerful search service.
## Why do you need this?
If you want to keep GitLab's search fast when dealing with a huge amount of data,
you should consider [enabling Elasticsearch](#enable-elasticsearch).
[Elasticsearch] is a flexible, scalable and powerful search service that saves developers time. Instead of writing duplicate code, developers can search for code other teams have already written and reuse it in their own projects.
## Who needs this?
1. Does your team use a plugin to find code written by other teams?
2. Are developers from different teams creating the same code for their own projects?
3. Are you looking to enable innersourcing?
4. Do you want to keep GitLab's search fast when dealing with a huge amount of data?
If you answered yes to any of these, you should consider [enabling Elasticsearch](#enable-elasticsearch).
GitLab leverages the search capabilities of Elasticsearch and enables it when
searching in:
......
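The documentation above describes GitLab delegating search to Elasticsearch. Purely as an illustration of the kind of query such a backend answers — not GitLab's actual integration code — here is a hedged sketch using the official `elasticsearch` Ruby gem; the index name (`gitlab-code`) and field (`content`) are assumptions made up for the example:

```ruby
# Illustrative only -- not GitLab's indexing or search implementation.
# Assumes a local Elasticsearch node and the `elasticsearch` Ruby gem.
require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200')

# Full-text search for a snippet across whatever has been indexed.
results = client.search(
  index: 'gitlab-code',                                # assumed index name
  body: { query: { match: { content: 'can_remove_source_branch?' } } }
)

puts results['hits']['total']
```

The point of the doc section stands regardless of the exact schema: once code is indexed, a single query can surface matches across every team's projects instead of each team rediscovering the same code.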
......@@ -118,10 +118,13 @@ feature 'Jobs', :feature do
before do
visit namespace_project_job_path(project.namespace, project, job)
<<<<<<< HEAD
end
it 'shows status name', :js do
expect(page).to have_css('.ci-status.ci-success', text: 'passed')
=======
>>>>>>> upstream/master
end
it 'shows commit`s data' do
......@@ -353,12 +356,12 @@ feature 'Jobs', :feature do
end
end
context 'build project is over shared runners limit' do
context 'job project is over shared runners limit' do
let(:group) { create(:group, :with_used_build_minutes_limit) }
let(:project) { create(:project, namespace: group, shared_runners_enabled: true) }
it 'displays a warning message' do
visit namespace_project_build_path(project.namespace, project, build)
visit namespace_project_job_path(project.namespace, project, job)
expect(page).to have_content('You have used all your shared Runners pipeline minutes.')
end
......@@ -370,7 +373,11 @@ feature 'Jobs', :feature do
before do
job.run!
visit namespace_project_job_path(project.namespace, project, job)
<<<<<<< HEAD
find('.js-cancel-job').click()
=======
click_link "Cancel"
>>>>>>> upstream/master
end
it 'loads the page and shows all needed controls' do
......@@ -378,6 +385,19 @@ feature 'Jobs', :feature do
expect(page).to have_content 'Retry'
end
end
<<<<<<< HEAD
=======
context "Job from other project" do
before do
job.run!
visit namespace_project_job_path(project.namespace, project, job)
page.driver.post(cancel_namespace_project_job_path(project.namespace, project, job2))
end
it { expect(page.status_code).to eq(404) }
end
>>>>>>> upstream/master
end
describe "POST /:project/jobs/:id/retry" do
......@@ -385,8 +405,15 @@ feature 'Jobs', :feature do
before do
job.run!
visit namespace_project_job_path(project.namespace, project, job)
<<<<<<< HEAD
find('.js-cancel-job').click()
find('.js-retry-button').trigger('click')
=======
click_link 'Cancel'
page.within('.build-header') do
click_link 'Retry job'
end
>>>>>>> upstream/master
end
it 'shows the right status and buttons', :js do
......@@ -397,6 +424,20 @@ feature 'Jobs', :feature do
end
end
<<<<<<< HEAD
=======
context "Job from other project" do
before do
job.run!
visit namespace_project_job_path(project.namespace, project, job)
click_link 'Cancel'
page.driver.post(retry_namespace_project_job_path(project.namespace, project, job2))
end
it { expect(page).to have_http_status(404) }
end
>>>>>>> upstream/master
context "Job that current user is not allowed to retry" do
before do
job.run!
......@@ -470,9 +511,24 @@ feature 'Jobs', :feature do
Capybara.current_session.driver.headers = { 'X-Sendfile-Type' => 'X-Sendfile' }
job.run!
<<<<<<< HEAD
end
context 'when job has trace in file', :js do
=======
allow_any_instance_of(Gitlab::Ci::Trace).to receive(:paths)
.and_return(paths)
visit namespace_project_job_path(project.namespace, project, job)
end
context 'when job has trace in file', :js do
let(:paths) do
[existing_file]
end
>>>>>>> upstream/master
before do
allow_any_instance_of(Gitlab::Ci::Trace)
.to receive(:paths)
......
......@@ -11,7 +11,11 @@ describe API::Jobs, :api do
ref: project.default_branch)
end
<<<<<<< HEAD
let!(:job) { create(:ci_build, pipeline: pipeline) }
=======
let(:job) { create(:ci_build, pipeline: pipeline) }
>>>>>>> upstream/master
let(:user) { create(:user) }
let(:api_user) { user }
......@@ -26,6 +30,10 @@ describe API::Jobs, :api do
let(:query) { Hash.new }
before do
<<<<<<< HEAD
=======
job
>>>>>>> upstream/master
get api("/projects/#{project.id}/jobs", api_user), query
end
......@@ -89,6 +97,10 @@ describe API::Jobs, :api do
let(:query) { Hash.new }
before do
<<<<<<< HEAD
=======
job
>>>>>>> upstream/master
get api("/projects/#{project.id}/pipelines/#{pipeline.id}/jobs", api_user), query
end
......
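The API::Jobs hunks above swap `let!(:job)` for a plain `let(:job)` and then reference `job` inside the `before` blocks. The reason is RSpec's laziness rules: `let` builds its value only on first reference, so the bare `job` line is what guarantees the build exists before the API request is issued, while `let!` would have created it eagerly. A standalone sketch of the difference (not the real spec; it only needs the `rspec` gem):

```ruby
require 'rspec/autorun'

RSpec.describe 'lazy vs. eager fixtures' do
  let(:records) { [] }

  let(:job)  { records << :job; :job }   # lazy: evaluated only when referenced
  let!(:pin) { records << :pin; :pin }   # eager: evaluated before each example

  it 'creates the lazy fixture only after it is referenced' do
    expect(records).to eq([:pin])        # `job` has not been built yet
    job                                  # referencing it triggers the build
    expect(records).to include(:job)
  end
end
```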
......@@ -14,6 +14,9 @@ module FilteredSearchHelpers
filtered_search.set(search)
if submit
# Wait for the lazy author/assignee tokens that
# swap out the username with an avatar and name
wait_for_requests
filtered_search.send_keys(:enter)
end
end
......
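The new `wait_for_requests` call above pauses until the lazy author/assignee tokens finish their AJAX round-trip, so the Enter keypress is not sent while the input is being rewritten. As a hedged sketch of how such a helper is commonly written in a Capybara suite — the `jQuery.active` polling is an assumption, not necessarily GitLab's actual implementation:

```ruby
# Hypothetical wait_for_requests-style helper: poll until the browser
# reports no AJAX requests in flight, or give up after a timeout.
module WaitForRequests
  def wait_for_requests(timeout: Capybara.default_max_wait_time)
    deadline = Time.now + timeout
    loop do
      break if page.evaluate_script('jQuery.active').zero?
      raise 'AJAX requests still pending' if Time.now > deadline
      sleep 0.1
    end
  end
end
```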