Commit c5896b8d authored by Douwe Maan's avatar Douwe Maan

Merge branch 'master' into '3839-ci-cd-only-github-projects-fe'

# Conflicts:
#   config/sidekiq_queues.yml
parents 5c2592da f6a35a3c
Please view this file on the master branch, on stable branches it's out of date.
## 10.5.3 (2018-03-01)
### Security (2 changes)
- Project can no longer be shared between groups when both member and group locks are active.
- Prevent new push rules from using non-RE2 regexes.
### Fixed (1 change)
- Fix LDAP group sync no longer configurable for regular users.
## 10.5.2 (2018-02-25)
- No changes.
@@ -82,6 +94,18 @@ Please view this file on the master branch, on stable branches it's out of date.
- Remove unaproved typo check in sast:container report.
## 10.4.5 (2018-03-01)
### Security (2 changes)
- Project can no longer be shared between groups when both member and group locks are active.
- Prevent new push rules from using non-RE2 regexes.
### Fixed (1 change)
- Fix LDAP group sync no longer configurable for regular users.
## 10.4.4 (2018-02-16)
### Fixed (4 changes)
@@ -183,6 +207,18 @@ Please view this file on the master branch, on stable branches it's out of date.
- Make scoped issue board specs more reliable.
## 10.3.8 (2018-03-01)
### Security (2 changes)
- Project can no longer be shared between groups when both member and group locks are active.
- Prevent new push rules from using non-RE2 regexes.
### Fixed (1 change)
- Fix LDAP group sync no longer configurable for regular users.
## 10.3.7 (2018-02-05)
### Security (1 change)
...
@@ -2,6 +2,13 @@
documentation](doc/development/changelog.md) for instructions on adding your own
entry.
## 10.5.3 (2018-03-01)
### Security (1 change)
- Ensure that OTP backup codes are always invalidated.
## 10.5.2 (2018-02-25)
### Fixed (7 changes)
@@ -219,6 +226,13 @@ entry.
- Adds empty state illustration for pending job.
## 10.4.5 (2018-03-01)
### Security (1 change)
- Ensure that OTP backup codes are always invalidated.
## 10.4.4 (2018-02-16)
### Security (1 change)
@@ -443,6 +457,13 @@ entry.
- Use a background migration for issues.closed_at.
## 10.3.8 (2018-03-01)
### Security (1 change)
- Ensure that OTP backup codes are always invalidated.
## 10.3.7 (2018-02-05)
### Security (4 changes)
...
@@ -107,16 +107,16 @@ gem 'carrierwave', '~> 1.2'
gem 'dropzonejs-rails', '~> 0.7.1'
# for backups
-gem 'fog-aws', '~> 1.4'
+gem 'fog-aws', '~> 2.0'
gem 'fog-core', '~> 1.44'
-gem 'fog-google', '~> 0.5'
+gem 'fog-google', '~> 1.3'
gem 'fog-local', '~> 0.3'
gem 'fog-openstack', '~> 0.1'
gem 'fog-rackspace', '~> 0.1.1'
gem 'fog-aliyun', '~> 0.2.0'
# for Google storage
-gem 'google-api-client', '~> 0.13.6'
+gem 'google-api-client', '~> 0.19'
# for aws storage
gem 'unf', '~> 0.1.4'
...
@@ -213,7 +213,7 @@ GEM
et-orbi (1.0.3)
tzinfo
eventmachine (1.0.8)
-excon (0.57.1)
+excon (0.60.0)
execjs (2.6.0)
expression_parser (0.9.0)
factory_bot (4.8.2)
@@ -255,19 +255,20 @@ GEM
fog-json (~> 1.0)
ipaddress (~> 0.8)
xml-simple (~> 1.1)
-fog-aws (1.4.0)
+fog-aws (2.0.1)
fog-core (~> 1.38)
fog-json (~> 1.0)
fog-xml (~> 0.1)
ipaddress (~> 0.8)
-fog-core (1.44.3)
+fog-core (1.45.0)
builder
-excon (~> 0.49)
+excon (~> 0.58)
formatador (~> 0.2)
-fog-google (0.5.3)
+fog-google (1.3.0)
fog-core
fog-json
fog-xml
+google-api-client (~> 0.19.1)
fog-json (1.0.2)
fog-core (~> 1.0)
multi_json (~> 1.10)
@@ -358,9 +359,9 @@ GEM
json
multi_json
request_store (>= 1.0)
-google-api-client (0.13.6)
+google-api-client (0.19.8)
addressable (~> 2.5, >= 2.5.1)
-googleauth (~> 0.5)
+googleauth (>= 0.5, < 0.7.0)
httpclient (>= 2.8.1, < 3.0)
mime-types (~> 3.0)
representable (~> 3.0)
@@ -531,7 +532,7 @@ GEM
mini_portile2 (2.3.0)
minitest (5.7.0)
mousetrap-rails (1.4.6)
-multi_json (1.12.2)
+multi_json (1.13.1)
multi_xml (0.6.0)
multipart-post (2.0.0)
mustermann (1.0.0)
@@ -1077,9 +1078,9 @@ DEPENDENCIES
flipper-active_record (~> 0.11.0)
flipper-active_support_cache_store (~> 0.11.0)
fog-aliyun (~> 0.2.0)
-fog-aws (~> 1.4)
+fog-aws (~> 2.0)
fog-core (~> 1.44)
-fog-google (~> 0.5)
+fog-google (~> 1.3)
fog-local (~> 0.3)
fog-openstack (~> 0.1)
fog-rackspace (~> 0.1.1)
@@ -1101,7 +1102,7 @@ DEPENDENCIES
gollum-lib (~> 4.2)
gollum-rugged_adapter (~> 0.4.4)
gon (~> 6.1.0)
-google-api-client (~> 0.13.6)
+google-api-client (~> 0.19)
google-protobuf (= 3.5.1)
gpgme
grape (~> 1.0)
...
@@ -140,3 +140,4 @@ export default {
</div>
</div>
</template>
<script>
/* global ListIssue */
import _ from 'underscore';
import eventHub from '../eventhub';
import loadingIcon from '../../vue_shared/components/loading_icon.vue';
import Api from '../../api';
export default {
name: 'BoardProjectSelect',
components: {
loadingIcon,
},
props: {
groupId: {
type: Number,
required: true,
default: 0,
},
},
data() {
return {
loading: true,
selectedProject: {},
};
},
computed: {
selectedProjectName() {
return this.selectedProject.name || 'Select a project';
},
},
mounted() {
$(this.$refs.projectsDropdown).glDropdown({
filterable: true,
filterRemote: true,
search: {
fields: ['name_with_namespace'],
},
clicked: ({ $el, e }) => {
e.preventDefault();
this.selectedProject = {
id: $el.data('project-id'),
name: $el.data('project-name'),
};
eventHub.$emit('setSelectedProject', this.selectedProject);
},
selectable: true,
data: (term, callback) => {
this.loading = true;
return Api.groupProjects(this.groupId, term, (projects) => {
this.loading = false;
callback(projects);
});
},
renderRow(project) {
return `
<li>
<a href='#' class='dropdown-menu-link' data-project-id="${project.id}" data-project-name="${project.name}">
${_.escape(project.name)}
</a>
</li>
`;
},
text: project => project.name,
});
},
};
</script>
<template>
<div>
<label class="label-light prepend-top-10">
Project
</label>
<div
ref="projectsDropdown"
class="dropdown"
>
<button
class="dropdown-menu-toggle wide"
type="button"
data-toggle="dropdown"
aria-expanded="false"
>
{{ selectedProjectName }}
<i
class="fa fa-chevron-down"
aria-hidden="true"
>
</i>
</button>
<div class="dropdown-menu dropdown-menu-selectable dropdown-menu-full-width">
<div class="dropdown-title">
<span>Projects</span>
<button
aria-label="Close"
type="button"
class="dropdown-title-button dropdown-menu-close"
>
<i
aria-hidden="true"
data-hidden="true"
class="fa fa-times dropdown-menu-close-icon"
>
</i>
</button>
</div>
<div class="dropdown-input">
<input
class="dropdown-input-field"
type="search"
placeholder="Search projects"
/>
<i
aria-hidden="true"
data-hidden="true"
class="fa fa-search dropdown-input-search"
>
</i>
</div>
<div class="dropdown-content"></div>
<div class="dropdown-loading">
<loading-icon />
</div>
</div>
</div>
</div>
</template>
@@ -13,6 +13,7 @@ import sidebarEventHub from '~/sidebar/event_hub'; // eslint-disable-line import
import './models/issue';
import './models/list';
import './models/milestone';
+import './models/project';
import './models/assignee';
import './stores/boards_store';
import './stores/modal_store';
...
export default class IssueProject {
constructor(obj) {
this.id = obj.id;
this.path = obj.path;
}
}
@@ -117,7 +117,10 @@
</script>
<template>
-<section class="settings no-animate expanded">
+<section
+id="cluster-applications"
+class="settings no-animate expanded"
+>
<div class="settings-header">
<h4>
{{ s__('ClusterIntegration|Applications') }}
...
@@ -7,34 +7,82 @@
import EmptyState from './empty_state.vue';
import MonitoringStore from '../stores/monitoring_store';
import eventHub from '../event_hub';
-import { convertPermissionToBoolean } from '../../lib/utils/common_utils';
export default {
components: {
Graph,
GraphGroup,
EmptyState,
},
-data() {
-const metricsData = document.querySelector('#prometheus-graphs').dataset;
-const store = new MonitoringStore();
+props: {
+hasMetrics: {
+type: Boolean,
required: false,
default: true,
},
showLegend: {
type: Boolean,
required: false,
default: true,
},
showPanels: {
type: Boolean,
required: false,
default: true,
},
forceSmallGraph: {
type: Boolean,
required: false,
default: false,
},
documentationPath: {
type: String,
required: true,
},
settingsPath: {
type: String,
required: true,
},
clustersPath: {
type: String,
required: true,
},
tagsPath: {
type: String,
required: true,
},
projectPath: {
type: String,
required: true,
},
metricsEndpoint: {
type: String,
required: true,
},
deploymentEndpoint: {
type: String,
required: false,
default: null,
},
emptyGettingStartedSvgPath: {
type: String,
required: true,
},
emptyLoadingSvgPath: {
type: String,
required: true,
},
emptyUnableToConnectSvgPath: {
type: String,
required: true,
},
},
data() {
return {
-store,
+store: new MonitoringStore(),
state: 'gettingStarted',
-hasMetrics: convertPermissionToBoolean(metricsData.hasMetrics),
-documentationPath: metricsData.documentationPath,
-settingsPath: metricsData.settingsPath,
-clustersPath: metricsData.clustersPath,
-tagsPath: metricsData.tagsPath,
-projectPath: metricsData.projectPath,
-metricsEndpoint: metricsData.additionalMetrics,
-deploymentEndpoint: metricsData.deploymentEndpoint,
-emptyGettingStartedSvgPath: metricsData.emptyGettingStartedSvgPath,
-emptyLoadingSvgPath: metricsData.emptyLoadingSvgPath,
-emptyUnableToConnectSvgPath: metricsData.emptyUnableToConnectSvgPath,
showEmptyState: true,
updateAspectRatio: false,
updatedAspectRatios: 0,
@@ -67,6 +115,7 @@
window.addEventListener('resize', this.resizeThrottled, false);
}
},
methods: {
getGraphsData() {
this.state = 'loading';
@@ -115,6 +164,7 @@
v-for="(groupData, index) in store.groups"
:key="index"
:name="groupData.group"
+:show-panels="showPanels"
>
<graph
v-for="(graphData, index) in groupData.metrics"
@@ -125,6 +175,8 @@
:deployment-data="store.deploymentData"
:project-path="projectPath"
:tags-path="tagsPath"
+:show-legend="showLegend"
+:small-graph="forceSmallGraph"
/>
</graph-group>
</div>
...
@@ -52,6 +52,16 @@
type: String,
required: true,
},
showLegend: {
type: Boolean,
required: false,
default: true,
},
smallGraph: {
type: Boolean,
required: false,
default: false,
},
},
data() {
@@ -130,7 +140,7 @@
const breakpointSize = bp.getBreakpointSize();
const query = this.graphData.queries[0];
this.margin = measurements.large.margin;
-if (breakpointSize === 'xs' || breakpointSize === 'sm') {
+if (this.smallGraph || breakpointSize === 'xs' || breakpointSize === 'sm') {
this.graphHeight = 300;
this.margin = measurements.small.margin;
this.measurements = measurements.small;
@@ -182,7 +192,9 @@
this.graphHeightOffset,
);
-if (this.timeSeries.length > 3) {
+if (!this.showLegend) {
+this.baseGraphHeight -= 50;
+} else if (this.timeSeries.length > 3) {
this.baseGraphHeight = this.baseGraphHeight += (this.timeSeries.length - 3) * 20;
}
@@ -255,6 +267,7 @@
:time-series="timeSeries"
:unit-of-display="unitOfDisplay"
:current-data-index="currentDataIndex"
+:show-legend-group="showLegend"
/>
<svg
class="graph-data"
...
@@ -39,6 +39,11 @@
type: Number,
required: true,
},
showLegendGroup: {
type: Boolean,
required: false,
default: true,
},
},
data() {
return {
@@ -57,8 +62,9 @@
},
rectTransform() {
-const yCoordinate = ((this.graphHeight - this.margin.top) / 2)
-+ (this.yLabelWidth / 2) + 10 || 0;
+const yCoordinate = (((this.graphHeight - this.margin.top)
++ this.measurements.axisLabelLineOffset) / 2)
++ (this.yLabelWidth / 2) || 0;
return `translate(0, ${yCoordinate}) rotate(-90)`;
},
@@ -166,39 +172,41 @@
>
Time
</text>
+<template v-if="showLegendGroup">
<g
class="legend-group"
v-for="(series, index) in timeSeries"
:key="index"
:transform="translateLegendGroup(index)"
>
<line
:stroke="series.lineColor"
:stroke-width="measurements.legends.height"
:stroke-dasharray="strokeDashArray(series.lineStyle)"
:x1="measurements.legends.offsetX"
:x2="measurements.legends.offsetX + measurements.legends.width"
:y1="graphHeight - measurements.legends.offsetY"
:y2="graphHeight - measurements.legends.offsetY"
/>
<text
v-if="timeSeries.length > 1"
class="legend-metric-title"
ref="legendTitleSvg"
x="38"
:y="graphHeight - 30"
>
{{ createSeriesString(index, series) }}
</text>
<text
v-else
class="legend-metric-title"
ref="legendTitleSvg"
x="38"
:y="graphHeight - 30"
>
{{ legendTitle }} {{ formatMetricUsage(series) }}
</text>
</g>
+</template>
</g>
</template>
@@ -5,12 +5,20 @@
type: String,
required: true,
},
showPanels: {
type: Boolean,
required: false,
default: true,
},
},
};
</script>
<template>
-<div class="panel panel-default prometheus-panel">
+<div
+v-if="showPanels"
+class="panel panel-default prometheus-panel"
+>
<div class="panel-heading">
<h4>{{ name }}</h4>
</div>
@@ -18,4 +26,10 @@
<slot></slot>
</div>
</div>
<div
v-else
class="prometheus-graph-group"
>
<slot></slot>
</div>
</template>
import Vue from 'vue';
+import { convertPermissionToBoolean } from '~/lib/utils/common_utils';
import Dashboard from './components/dashboard.vue';
-export default () => new Vue({
-el: '#prometheus-graphs',
-render: createElement => createElement(Dashboard),
-});
+export default () => {
+const el = document.getElementById('prometheus-graphs');
+if (el && el.dataset) {
// eslint-disable-next-line no-new
new Vue({
el,
render(createElement) {
return createElement(Dashboard, {
props: {
...el.dataset,
hasMetrics: convertPermissionToBoolean(el.dataset.hasMetrics),
},
});
},
});
}
};
@@ -40,6 +40,9 @@ export default class MonitoringService {
}
getDeploymentData() {
+if (!this.deploymentEndpoint) {
+return Promise.resolve([]);
+}
return backOffRequest(() => axios.get(this.deploymentEndpoint))
.then(resp => resp.data)
.then((response) => {
...
export default {
animation: 200,
forceFallback: true,
fallbackClass: 'is-dragging',
fallbackOnBody: true,
ghostClass: 'is-ghost',
};
@@ -529,7 +529,8 @@
}
> text {
-font-size: 12px;
+fill: $theme-gray-600;
+font-size: 10px;
}
}
@@ -573,3 +574,17 @@
}
}
}
// EE-only
.cluster-health-graphs {
.prometheus-state {
.state-svg img {
max-height: 120px;
}
.state-description,
.state-button {
display: none;
}
}
}
module UploadsActions
include Gitlab::Utils::StrongMemoize
+include SendFileUpload
UPLOAD_MOUNTS = %w(avatar attachment file logo header_logo).freeze
@@ -26,14 +27,11 @@ module UploadsActions
def show
return render_404 unless uploader&.exists?
-if uploader.file_storage?
-disposition = uploader.image_or_video? ? 'inline' : 'attachment'
-expires_in 0.seconds, must_revalidate: true, private: true
-send_file uploader.file.path, disposition: disposition
-else
-redirect_to uploader.url
-end
+expires_in 0.seconds, must_revalidate: true, private: true
+disposition = uploader.image_or_video? ? 'inline' : 'attachment'
+send_upload(uploader, attachment: uploader.filename, disposition: disposition)
end
private
...
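The `UploadsActions#show` change above keeps one small rule worth calling out: images and videos get an `inline` content disposition so browsers render them, and everything else is forced to download as an `attachment`. A standalone sketch of that rule, assuming a simple extension-based check (the real uploader inspects content types, and the extension list here is hypothetical):

```ruby
# Hypothetical stand-in for uploader.image_or_video?: decide the
# Content-Disposition from the filename extension alone.
IMAGE_OR_VIDEO_EXT = %w[.png .jpg .jpeg .gif .mp4 .webm].freeze

def disposition_for(filename)
  ext = File.extname(filename).downcase
  IMAGE_OR_VIDEO_EXT.include?(ext) ? 'inline' : 'attachment'
end
```

Note the refactor sends both local and object-stored files through one `send_upload` call instead of branching on `file_storage?`, so the disposition decision now happens exactly once.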
@@ -64,6 +64,22 @@ class Projects::ClustersController < Projects::ApplicationController
end
end
def metrics
return render_404 unless prometheus_adapter&.can_query?
respond_to do |format|
format.json do
metrics = prometheus_adapter.query(:cluster) || {}
if metrics.any?
render json: metrics
else
head :no_content
end
end
end
end
private
def cluster def cluster
@@ -71,6 +87,12 @@ class Projects::ClustersController < Projects::ApplicationController
.present(current_user: current_user)
end
def prometheus_adapter
return unless cluster&.application_prometheus&.installed?
cluster.application_prometheus
end
def update_params
if cluster.managed?
params.require(:cluster).permit(
...
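The new `prometheus_adapter` helper above relies on Ruby's safe-navigation operator: the whole `cluster&.application_prometheus&.installed?` chain collapses to `nil` if any link is missing, instead of raising `NoMethodError`. A minimal self-contained sketch of that guard, using hypothetical `Struct` stand-ins for the cluster and its Prometheus application:

```ruby
# Stand-ins for the real ActiveRecord models.
Application = Struct.new(:installed) do
  def installed?
    installed
  end
end
Cluster = Struct.new(:application_prometheus)

# Return the adapter only when the full chain holds; `&.` makes every
# missing link yield nil rather than an exception.
def prometheus_adapter(cluster)
  return unless cluster&.application_prometheus&.installed?

  cluster.application_prometheus
end
```

With this shape, `#metrics` can simply `render_404 unless prometheus_adapter&.can_query?` and never worry about partially configured clusters.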
@@ -2,6 +2,7 @@ class Projects::GroupLinksController < Projects::ApplicationController
layout 'project_settings'
before_action :authorize_admin_project!
before_action :authorize_admin_project_member!, only: [:update]
+before_action :authorize_group_share!, only: [:create]
def index
redirect_to namespace_project_settings_members_path
@@ -42,6 +43,10 @@ class Projects::GroupLinksController < Projects::ApplicationController
protected
+def authorize_group_share!
+access_denied! unless project.allowed_to_share_with_group?
+end
def group_link_params
params.require(:group_link).permit(:group_access, :expires_at)
end
...
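The `authorize_group_share!` filter above rejects the `create` action outright when the project's settings forbid sharing with groups. Outside of Rails, the same guard pattern can be sketched with a plain class; `ShareGuard` and the `RuntimeError` stand in for the controller and its `access_denied!` handler (both names are hypothetical):

```ruby
# Minimal sketch of a before_action-style guard: check the policy,
# abort the request if it fails, proceed otherwise.
class ShareGuard
  def initialize(project)
    @project = project
  end

  def authorize_group_share!
    raise 'access denied' unless @project.allowed_to_share_with_group?

    :authorized
  end
end
```

Putting the check in a filter rather than inside `create` keeps the action body free of policy logic and makes the rule reusable for other actions via `only:`.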
@@ -17,20 +17,23 @@ class Projects::LfsStorageController < Projects::GitHttpClientController
def upload_authorize
set_workhorse_internal_api_content_type
-render json: Gitlab::Workhorse.lfs_upload_ok(oid, size)
+authorized = LfsObjectUploader.workhorse_authorize
+authorized.merge!(LfsOid: oid, LfsSize: size)
+render json: authorized
end
def upload_finalize
-unless tmp_filename
-render_lfs_forbidden
-return
-end
-if store_file(oid, size, tmp_filename)
+if store_file!(oid, size)
head 200
else
render plain: 'Unprocessable entity', status: 422
end
+rescue ActiveRecord::RecordInvalid
+render_400
+rescue ObjectStorage::RemoteStoreError
+render_lfs_forbidden
end
private
@@ -51,38 +54,28 @@ class Projects::LfsStorageController < Projects::GitHttpClientController
params[:size].to_i
end
-def tmp_filename
-name = request.headers['X-Gitlab-Lfs-Tmp']
-return if name.include?('/')
-return unless oid.present? && name.start_with?(oid)
-name
-end
-def store_file(oid, size, tmp_file)
-# Define tmp_file_path early because we use it in "ensure"
-tmp_file_path = File.join(LfsObjectUploader.workhorse_upload_path, tmp_file)
-object = LfsObject.find_or_create_by(oid: oid, size: size)
-file_exists = object.file.exists? || move_tmp_file_to_storage(object, tmp_file_path)
-file_exists && link_to_project(object)
-ensure
-FileUtils.rm_f(tmp_file_path)
-end
-def move_tmp_file_to_storage(object, path)
-File.open(path) do |f|
-object.file = f
-end
-object.file.store!
-object.save
-end
-def link_to_project(object)
+def store_file!(oid, size)
+object = LfsObject.find_by(oid: oid, size: size)
+unless object&.file&.exists?
+object = create_file!(oid, size)
+end
+return unless object
+link_to_project!(object)
+end
+def create_file!(oid, size)
+LfsObject.new(oid: oid, size: size).tap do |object|
+object.file.store_workhorse_file!(params, :file)
+object.save!
+end
+end
+def link_to_project!(object)
if object && !object.projects.exists?(storage_project.id)
object.projects << storage_project
-object.save
+object.save!
end
end
end
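The rewritten `store_file!` follows a reuse-or-create shape: look up an existing `LfsObject` by `oid` and `size`, and only build and persist a new one when no stored file exists yet. A self-contained sketch of that flow, where the in-memory `STORE` hash and `FakeLfsObject` struct are stand-ins for the database and the uploader:

```ruby
# In-memory stand-in for the lfs_objects table, keyed by [oid, size].
STORE = {}

FakeLfsObject = Struct.new(:oid, :size, :file)

# Reuse an existing object with a stored file, otherwise create one.
def store_file!(oid, size)
  object = STORE[[oid, size]]
  object = create_file!(oid, size) unless object&.file
  object
end

# Build, "store" the file, and persist — mirroring the .tap pattern above.
def create_file!(oid, size)
  FakeLfsObject.new(oid, size, "stored-#{oid}").tap do |object|
    STORE[[oid, size]] = object
  end
end
```

The `tap` block in the real `create_file!` serves the same purpose as here: perform side effects on the new record and still return the record itself.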
@@ -3,7 +3,8 @@ class Projects::ServicesController < Projects::ApplicationController
# Authorize
before_action :authorize_admin_project!
-before_action :service, only: [:edit, :update, :test]
+before_action :ensure_service_enabled
+before_action :service
respond_to :html
@@ -23,26 +24,30 @@ class Projects::ServicesController < Projects::ApplicationController
end
def test
-message = {}
-if @service.can_test? && @service.update_attributes(service_params[:service])
+if @service.can_test?
+render json: service_test_response, status: :ok
+else
+render json: {}, status: :not_found
+end
+end
+private
+def service_test_response
+if @service.update_attributes(service_params[:service])
data = @service.test_data(project, current_user)
outcome = @service.test(data)
-unless outcome[:success]
-message = { error: true, message: 'Test failed.', service_response: outcome[:result].to_s }
+if outcome[:success]
+{}
+else
+{ error: true, message: 'Test failed.', service_response: outcome[:result].to_s }
end
-status = :ok
else
-status = :not_found
+{ error: true, message: 'Validations failed.', service_response: @service.errors.full_messages.join(',') }
end
-render json: message, status: status
end
-private
def success_message
if @service.active?
"#{@service.title} activated."
@@ -54,4 +59,8 @@ class Projects::ServicesController < Projects::ApplicationController
def service
@service ||= @project.find_or_initialize_service(params[:id])
end
+def ensure_service_enabled
+render_404 unless service
+end
end
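The refactor above moves the outcome-to-JSON mapping into a private `service_test_response` helper that returns a plain hash, leaving `#test` to decide only the HTTP status. The hash-building half can be sketched on its own; `outcome` mirrors the `{ success:, result: }` shape the controller receives from `@service.test`:

```ruby
# Map a test outcome hash to the JSON payload the controller renders:
# an empty hash on success, an error hash otherwise.
def service_test_response(outcome)
  if outcome[:success]
    {}
  else
    { error: true, message: 'Test failed.', service_response: outcome[:result].to_s }
  end
end
```

Returning data instead of calling `render` inside the helper is what lets the controller keep a single `render json: service_test_response, status: :ok` call site.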
@@ -450,6 +450,10 @@ module ProjectsHelper
end
end
+def project_can_be_shared?
+!membership_locked? || @project.allowed_to_share_with_group?
+end
def membership_locked?
if @project.group && @project.group.membership_lock
true
@@ -458,6 +462,24 @@ module ProjectsHelper
end
end
+def share_project_description
+share_with_group = @project.allowed_to_share_with_group?
+share_with_members = !membership_locked?
+project_name = content_tag(:strong, @project.name)
+member_message = "You can add a new member to #{project_name}"
+description =
+if share_with_group && share_with_members
+"#{member_message} or share it with another group."
+elsif share_with_group
+"You can share #{project_name} with another group."
+elsif share_with_members
+"#{member_message}."
+end
+description.to_s.html_safe
+end
def readme_cache_key
sha = @project.commit.try(:sha) || 'nil'
[@project.full_path, sha, "readme"].join('-')
...
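`share_project_description` above picks one of three messages from two boolean flags, falling back to an empty string when neither form of sharing is allowed. The branching can be exercised in isolation with a plain-Ruby sketch (HTML tags and the `content_tag` helper dropped; the project name passed in directly):

```ruby
# Pure function version of the helper's message selection.
def share_description(project_name, share_with_group:, share_with_members:)
  member_message = "You can add a new member to #{project_name}"
  if share_with_group && share_with_members
    "#{member_message} or share it with another group."
  elsif share_with_group
    "You can share #{project_name} with another group."
  elsif share_with_members
    "#{member_message}."
  else
    '' # mirrors description.to_s when the if-chain yields nil
  end
end
```

The `description.to_s` in the real helper is the same nil-safety: when both locks are active, the `if` chain produces `nil` and `to_s` turns it into an empty string before `html_safe`.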
@@ -62,7 +62,8 @@ module Ci
schedule: 4,
api: 5,
external: 6,
-pipeline: 7
+pipeline: 7,
+chat: 8
}
enum config_source: {
...
@@ -9,6 +9,7 @@ class Commit
include Mentionable
include Referable
include StaticModel
+include ::Gitlab::Utils::StrongMemoize
attr_mentionable :safe_message, pipeline: :single_line
@@ -225,11 +226,13 @@ class Commit
end
def parents
-@parents ||= parent_ids.map { |id| project.commit(id) }
+@parents ||= parent_ids.map { |oid| Commit.lazy(project, oid) }
end
def parent
-@parent ||= project.commit(self.parent_id) if self.parent_id
+strong_memoize(:parent) do
+project.commit_by(oid: self.parent_id) if self.parent_id
+end
end
def notes
...
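The switch from `@parent ||= ...` to `strong_memoize(:parent)` above matters because `||=` cannot cache a falsy result: a root commit has no parent, so the lookup returns `nil` and `||=` re-runs it on every call. A `defined?`-based memoizer (the core idea behind `strong_memoize`) caches `nil` too. A minimal sketch with a hypothetical `ParentLookup` class counting how often the expensive lookup runs:

```ruby
# Demonstrates nil-safe memoization: the "expensive" lookup runs once
# even though its result is nil.
class ParentLookup
  attr_reader :lookups

  def initialize
    @lookups = 0
  end

  def parent
    # defined?(@parent) is true once the ivar has been assigned,
    # even when the assigned value is nil.
    return @parent if defined?(@parent)

    @lookups += 1
    @parent = nil # root commit: lookup found no parent
  end
end
```

With plain `@parent ||= lookup`, the `lookups` counter would increment on every call; the `defined?` check is what makes the memoization stick.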
@@ -9,6 +9,12 @@ class LfsObject < ActiveRecord::Base
mount_uploader :file, LfsObjectUploader
+before_save :update_file_store
+def update_file_store
+self.file_store = file.object_store
+end
def project_allowed_access?(project)
projects.exists?(project.lfs_storage_project.id)
end
...
@@ -281,7 +281,8 @@ class Project < ActiveRecord::Base
scope :without_storage_feature, ->(feature) { where('storage_version < :version OR storage_version IS NULL', version: HASHED_STORAGE_FEATURES[feature]) }
scope :with_unmigrated_storage, -> { where('storage_version < :version OR storage_version IS NULL', version: LATEST_STORAGE_VERSION) }
-scope :sorted_by_activity, -> { reorder(last_activity_at: :desc) }
+# last_activity_at is throttled every minute, but last_repository_updated_at is updated with every push
+scope :sorted_by_activity, -> { reorder("GREATEST(COALESCE(last_activity_at, '1970-01-01'), COALESCE(last_repository_updated_at, '1970-01-01')) DESC") }
scope :sorted_by_stars, -> { reorder('projects.star_count DESC') }
scope :in_namespace, ->(namespace_ids) { where(namespace_id: namespace_ids) }
@@ -789,7 +790,7 @@ class Project < ActiveRecord::Base
end
def last_activity_date
-last_repository_updated_at || last_activity_at || updated_at
+[last_activity_at, last_repository_updated_at, updated_at].compact.max
end
def project_id
...
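The new `last_activity_date` is the Ruby-side twin of the SQL `GREATEST(COALESCE(...))` in the scope above: take the most recent of the three timestamps rather than a fixed priority order, with `compact` playing the role of `COALESCE` by dropping `nil`s (e.g. a project that has never had a push). A self-contained sketch with the timestamps passed in as arguments:

```ruby
require 'time'

# Most recent of the available timestamps; nil entries are ignored.
def last_activity_date(last_activity_at, last_repository_updated_at, updated_at)
  [last_activity_at, last_repository_updated_at, updated_at].compact.max
end
```

The old `a || b || c` form returned the first non-nil value even when a later one was more recent, which is exactly the bug the merge fixes.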
class SlackSlashCommandsService < SlashCommandsService
+prepend EE::SlackSlashCommandsService
include TriggersHelper
def title
...
# To add new service you should build a class inherited from Service
# and implement a set of methods
class Service < ActiveRecord::Base
+prepend EE::Service
include Sortable
include Importable
@@ -129,6 +130,17 @@ class Service < ActiveRecord::Base
fields
end
+def configurable_events
+events = self.class.supported_events
+# No need to disable individual triggers when there is only one
+if events.count == 1
+[]
+else
+events
+end
+end
def supported_events
self.class.supported_events
end
@@ -242,8 +254,6 @@ class Service < ActiveRecord::Base
gemnasium
hipchat
irker
-jenkins
-jenkins_deprecated
jira
kubernetes
mattermost_slash_commands
...
@@ -12,6 +12,7 @@ class Upload < ActiveRecord::Base
  validates :uploader, presence: true

  scope :with_files_stored_locally, -> { where(store: [nil, ObjectStorage::Store::LOCAL]) }
  scope :with_files_stored_remotely, -> { where(store: ObjectStorage::Store::REMOTE) }

  before_save :calculate_checksum!, if: :foreground_checksummable?
  after_commit :schedule_checksum, if: :checksummable?
...
@@ -3,6 +3,7 @@ module Ci
  attr_reader :pipeline

  SEQUENCE = [Gitlab::Ci::Pipeline::Chain::Build,
              EE::Gitlab::Ci::Pipeline::Chain::RemoveUnwantedChatJobs,
              Gitlab::Ci::Pipeline::Chain::Validate::Abilities,
              Gitlab::Ci::Pipeline::Chain::Validate::Repository,
              Gitlab::Ci::Pipeline::Chain::Validate::Config,
@@ -29,7 +30,8 @@ module Ci
        current_user: current_user,
        # EE specific
        allow_mirror_update: mirror_update,
        chat_data: params[:chat_data]
      )

      sequence = Gitlab::Ci::Pipeline::Chain::Sequence
...
@@ -2,11 +2,6 @@ class LfsObjectUploader < GitlabUploader
  extend Workhorse::UploadPath
  include ObjectStorage::Concern

  # LfsObject are in `tmp/upload` instead of `tmp/uploads`
  def self.workhorse_upload_path
    File.join(root, 'tmp/upload')
  end

  storage_options Gitlab.config.lfs

  def filename
...
class UntrustedRegexpValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    return unless value

    Gitlab::UntrustedRegexp.new(value)
  rescue RegexpError => e
    record.errors.add(attribute, "not valid RE2 syntax: #{e.message}")
  end
end
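The validator above relies on `Gitlab::UntrustedRegexp` (a wrapper around RE2) raising `RegexpError` on an invalid pattern, and converts that exception into a model error. A plain-Ruby sketch of the same rescue-and-report pattern, using the standard-library `Regexp` as a stand-in for the RE2 wrapper (the `regexp_syntax_error` helper is hypothetical, introduced only for illustration):

```ruby
# Returns nil when the pattern compiles, or an error message otherwise.
# Regexp here stands in for Gitlab::UntrustedRegexp, which wraps RE2;
# both raise RegexpError for syntactically invalid patterns.
def regexp_syntax_error(value)
  return nil unless value

  Regexp.new(value)
  nil
rescue RegexpError => e
  "not valid syntax: #{e.message}"
end
```

In the real validator the message is attached to the record via `record.errors.add`; the control flow is the same.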
@@ -22,6 +22,10 @@
.js-cluster-application-notice
.flash-container

-# EE-specific
- if @cluster.project.feature_available?(:cluster_health)
  = render 'health'

%section.settings.no-animate.expanded#cluster-integration
= render 'banner'
= render 'integration_form'
...
@@ -47,7 +47,7 @@
#{ commit_text.html_safe }
- if show_project_name
  %span.project_namespace
    = project.full_name
.commit-actions.flex-row.hidden-xs
- if request.xhr?
...
@@ -15,7 +15,8 @@
"empty-getting-started-svg-path": image_path('illustrations/monitoring/getting_started.svg'),
"empty-loading-svg-path": image_path('illustrations/monitoring/loading.svg'),
"empty-unable-to-connect-svg-path": image_path('illustrations/monitoring/unable_to_connect.svg'),
"metrics-endpoint": additional_metrics_project_environment_path(@project, @environment, format: :json),
"deployment-endpoint": project_environment_deployments_path(@project, @environment, format: :json),
"project-path": project_path(@project),
"tags-path": project_tags_path(@project),
"has-metrics": "#{@environment.has_metrics?}" } }
- page_title "Members"
- can_admin_project_members = can?(current_user, :admin_project_member, @project)
.row.prepend-top-default
  .col-lg-12
    - if project_can_be_shared?
      %h4
        Project members
      - if can_admin_project_members
        %p= share_project_description
      - else
        %p
          Members can be added by project
          %i Masters
          or
          %i Owners
    .light
    - if can_admin_project_members && project_can_be_shared?
      - if !membership_locked? && @project.allowed_to_share_with_group?
        %ul.nav-links.gitlab-tabs{ role: 'tablist' }
          %li.active{ role: 'presentation' }
            %a{ href: '#add-member-pane', id: 'add-member-tab', data: { toggle: 'tab' }, role: 'tab' } Add member
          %li{ role: 'presentation', class: ('active' if membership_locked?) }
            %a{ href: '#share-with-group-pane', id: 'share-with-group-tab', data: { toggle: 'tab' }, role: 'tab' } Share with group
        .tab-content.gitlab-tab-content
          .tab-pane.active{ id: 'add-member-pane', role: 'tabpanel' }
            = render 'projects/project_members/new_project_member', tab_title: 'Add member'
          .tab-pane{ id: 'share-with-group-pane', role: 'tabpanel', class: ('active' if membership_locked?) }
            = render 'projects/project_members/new_shared_group', tab_title: 'Share with group'
      - elsif !membership_locked?
        .add-member= render 'projects/project_members/new_project_member', tab_title: 'Add member'
      - elsif @project.allowed_to_share_with_group?
        .share-with-group= render 'projects/project_members/new_shared_group', tab_title: 'Share with group'
    = render 'shared/members/requests', membership_source: @project, requesters: @requesters
    .clearfix
...
@@ -5,6 +5,9 @@
= boolean_to_icon @service.activated?
%p= @service.description

- if @service.respond_to?(:detailed_description)
  %p= @service.detailed_description

.col-lg-9
= form_for(@service, as: :service, url: project_service_path(@project, @service.to_param), method: :put, html: { class: 'gl-show-field-errors form-horizontal integration-settings-form js-integration-settings-form', data: { 'can-test' => @service.can_test?, 'test-url' => test_project_service_path(@project, @service) } }) do |form|
= render 'shared/service_settings', form: form, subject: @service
...
- @no_container = true
- @sort ||= sort_value_recently_updated
- page_title s_('TagsPage|Tags')
- add_to_breadcrumbs("Repository", project_tree_path(@project))

.flex-list{ class: container_class }
  .top-area.adjust
...
@@ -16,7 +16,7 @@
- if @project
  = file_name
- else
  #{project.full_name}:
  %i= file_name
- if blob.data
  .file-content.code.term
...
@@ -10,7 +10,7 @@
- if @project
  = wiki_blob.basename
- else
  #{project.full_name}:
  %i= wiki_blob.basename
.file-content.code.term
  = render 'shared/file_highlight', blob: wiki_blob, first_line_number: wiki_blob.startline
@@ -13,12 +13,12 @@
.col-sm-10
  = form.check_box :active, disabled: disable_fields_service?(@service)
- if @service.configurable_events.present?
  .form-group
    = form.label :url, "Trigger", class: 'control-label'
    .col-sm-10
      - @service.configurable_events.each do |event|
        %div
          = form.check_box service_event_field_name(event), class: 'pull-left'
          .prepend-left-20
...
@@ -48,7 +48,6 @@
- pipeline_default:build_trace_sections
- pipeline_default:pipeline_metrics
- pipeline_default:pipeline_notification
- pipeline_default:update_head_pipeline_for_merge_request
- pipeline_hooks:build_hooks
- pipeline_hooks:pipeline_hooks
- pipeline_processing:build_finished
@@ -58,6 +57,7 @@
- pipeline_processing:pipeline_success
- pipeline_processing:pipeline_update
- pipeline_processing:stage_update
- pipeline_processing:update_head_pipeline_for_merge_request
- repository_check:repository_check_clear
- repository_check:repository_check_single_repository
@@ -145,3 +145,4 @@
- rebase
- repository_update_mirror
- repository_update_remote_mirror
- chat_notification
class BuildFinishedWorker
  prepend EE::BuildFinishedWorker

  include ApplicationWorker
  include PipelineQueue
...
@@ -2,6 +2,8 @@ class UpdateHeadPipelineForMergeRequestWorker
  include ApplicationWorker
  include PipelineQueue

  queue_namespace :pipeline_processing

  def perform(merge_request_id)
    merge_request = MergeRequest.find(merge_request_id)
    pipeline = Ci::Pipeline.where(project: merge_request.source_project, ref: merge_request.source_branch).last
...
---
title: Project can no longer be shared between groups when both member and group locks
  are active
merge_request:
author:
type: security
---
title: Prevent new push rules from using non-RE2 regexes
merge_request:
author:
type: security
---
title: Fix LDAP group sync no longer configurable for regular users
merge_request:
author:
type: fixed
---
title: Remove extra breadcrumb on tags
merge_request: 17562
author: Takuya Noguchi
type: fixed
---
title: Started translation into Turkish, Indonesian and Filipino
merge_request: 17526
author:
type: other
---
title: Add one group board to Libre
merge_request:
author:
type: added
---
title: Fix project dashboard showing the wrong timestamps
merge_request:
author:
type: fixed
---
title: Upgrade GitLab Workhorse to 4.0.0
merge_request:
author:
type: added
@@ -149,6 +149,7 @@ production: &base
# enabled: false
# remote_directory: artifacts # The bucket name
# background_upload: false # Temporary option to limit automatic upload (Default: true)
# proxy_download: false # Passthrough all downloads via GitLab instead of using Redirects to Object Storage
# connection:
#   provider: AWS # Only AWS supported at the moment
#   aws_access_key_id: AWS_ACCESS_KEY_ID
@@ -164,6 +165,7 @@ production: &base
enabled: false
remote_directory: lfs-objects # Bucket name
# background_upload: false # Temporary option to limit automatic upload (Default: true)
# proxy_download: false # Passthrough all downloads via GitLab instead of using Redirects to Object Storage
connection:
  provider: AWS
  aws_access_key_id: AWS_ACCESS_KEY_ID
@@ -183,6 +185,7 @@ production: &base
enabled: true
remote_directory: uploads # Bucket name
# background_upload: false # Temporary option to limit automatic upload (Default: true)
# proxy_download: false # Passthrough all downloads via GitLab instead of using Redirects to Object Storage
connection:
  provider: AWS
  aws_access_key_id: AWS_ACCESS_KEY_ID
@@ -791,7 +794,7 @@ test:
  provider: AWS # Only AWS supported at the moment
  aws_access_key_id: AWS_ACCESS_KEY_ID
  aws_secret_access_key: AWS_SECRET_ACCESS_KEY
  region: us-east-1
artifacts:
  path: tmp/tests/artifacts
  enabled: true
@@ -805,7 +808,7 @@ test:
  provider: AWS # Only AWS supported at the moment
  aws_access_key_id: AWS_ACCESS_KEY_ID
  aws_secret_access_key: AWS_SECRET_ACCESS_KEY
  region: us-east-1
uploads:
  storage_path: tmp/tests/public
  enabled: true
@@ -815,7 +818,7 @@ test:
  provider: AWS # Only AWS supported at the moment
  aws_access_key_id: AWS_ACCESS_KEY_ID
  aws_secret_access_key: AWS_SECRET_ACCESS_KEY
  region: us-east-1
gitlab:
  host: localhost
  port: 80
...
@@ -351,6 +351,7 @@ Settings.artifacts['object_store'] ||= Settingslogic.new({})
Settings.artifacts['object_store']['enabled'] = false if Settings.artifacts['object_store']['enabled'].nil?
Settings.artifacts['object_store']['remote_directory'] ||= nil
Settings.artifacts['object_store']['background_upload'] = true if Settings.artifacts['object_store']['background_upload'].nil?
Settings.artifacts['object_store']['proxy_download'] = false if Settings.artifacts['object_store']['proxy_download'].nil?
# Convert upload connection settings to use string keys, to make Fog happy
Settings.artifacts['object_store']['connection']&.deep_stringify_keys!
@@ -396,7 +397,9 @@ Settings.lfs['storage_path'] = Settings.absolute(Settings.lfs['storage_path'] ||
Settings.lfs['object_store'] ||= Settingslogic.new({})
Settings.lfs['object_store']['enabled'] = false if Settings.lfs['object_store']['enabled'].nil?
Settings.lfs['object_store']['remote_directory'] ||= nil
Settings.lfs['object_store']['direct_upload'] = false if Settings.lfs['object_store']['direct_upload'].nil?
Settings.lfs['object_store']['background_upload'] = true if Settings.lfs['object_store']['background_upload'].nil?
Settings.lfs['object_store']['proxy_download'] = false if Settings.lfs['object_store']['proxy_download'].nil?
# Convert upload connection settings to use string keys, to make Fog happy
Settings.lfs['object_store']['connection']&.deep_stringify_keys!
@@ -410,6 +413,7 @@ Settings.uploads['object_store'] ||= Settingslogic.new({})
Settings.uploads['object_store']['enabled'] = false if Settings.uploads['object_store']['enabled'].nil?
Settings.uploads['object_store']['remote_directory'] ||= 'uploads'
Settings.uploads['object_store']['background_upload'] = true if Settings.uploads['object_store']['background_upload'].nil?
Settings.uploads['object_store']['proxy_download'] = false if Settings.uploads['object_store']['proxy_download'].nil?
# Convert upload connection settings to use string keys, to make Fog happy
Settings.uploads['object_store']['connection']&.deep_stringify_keys!
...
@@ -28,16 +28,4 @@ if File.exist?(aws_file)
    # when fog_public is false and provider is AWS or Google, defaults to 600
    config.fog_authenticated_url_expiration = 1 << 29
  end

  # Mocking Fog requests, based on: https://github.com/carrierwaveuploader/carrierwave/wiki/How-to%3A-Test-Fog-based-uploaders
  if Rails.env.test?
    Fog.mock!
    connection = ::Fog::Storage.new(
      aws_access_key_id: AWS_CONFIG['access_key_id'],
      aws_secret_access_key: AWS_CONFIG['secret_access_key'],
      provider: 'AWS',
      region: AWS_CONFIG['region']
    )
    connection.directories.create(key: AWS_CONFIG['bucket'])
  end
end
- group: Cluster Health
  priority: 1
  metrics:
    - title: "CPU Usage"
      y_label: "CPU"
      required_metrics: ['container_cpu_usage_seconds_total']
      weight: 1
      queries:
        - query_range: 'avg(sum(rate(container_cpu_usage_seconds_total{id="/"}[15m])) by (job)) without (job)'
          label: Usage
          unit: "cores"
        - query_range: 'sum(kube_node_status_capacity_cpu_cores{kubernetes_namespace="gitlab-managed-apps"})'
          label: Capacity
          unit: "cores"
    - title: "Memory usage"
      y_label: "Memory"
      required_metrics: ['container_memory_usage_bytes']
      weight: 1
      queries:
        - query_range: 'avg(sum(container_memory_usage_bytes{id="/"}) by (job)) without (job) / 2^30'
          label: Usage
          unit: "GiB"
        - query_range: 'sum(kube_node_status_capacity_memory_bytes{kubernetes_namespace="gitlab-managed-apps"})/2^30'
          label: Capacity
          unit: "GiB"
\ No newline at end of file
@@ -69,7 +69,7 @@ constraints(ProjectUrlConstrainer.new) do
    end
  end

  resources :services, constraints: { id: %r{[^/]+} }, only: [:edit, :update] do
    member do
      put :test
    end
@@ -244,6 +244,7 @@ constraints(ProjectUrlConstrainer.new) do
    member do
      get :status, format: :json
      get :metrics, format: :json

      scope :applications do
        post '/:application', to: 'clusters/applications#create', as: :install_applications
...
@@ -74,6 +74,7 @@
# EE-specific queues
- [ldap_group_sync, 2]
- [create_github_webhook, 2]
- [chat_notification, 2]
- [geo, 1]
- [repository_remove_remote, 1]
- [repository_update_mirror, 1]
...
class AddRegexpUsesRe2ToPushRules < ActiveRecord::Migration
  include Gitlab::Database::MigrationHelpers

  DOWNTIME = false

  def up
    # Default value to true for new values while keeping NULL for existing ones
    add_column :push_rules, :regexp_uses_re2, :boolean
    change_column_default :push_rules, :regexp_uses_re2, true
  end

  def down
    remove_column :push_rules, :regexp_uses_re2
  end
end
class AddGroupIdToBoardsCe < ActiveRecord::Migration
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  DOWNTIME = false

  def up
    return if group_id_exists?

    add_column :boards, :group_id, :integer
    add_foreign_key :boards, :namespaces, column: :group_id, on_delete: :cascade
    add_concurrent_index :boards, :group_id
    change_column_null :boards, :project_id, true
  end

  def down
    return unless group_id_exists?

    remove_foreign_key :boards, column: :group_id
    remove_index :boards, :group_id if index_exists? :boards, :group_id
    remove_column :boards, :group_id

    execute "DELETE from boards WHERE project_id IS NULL"
    change_column_null :boards, :project_id, false
  end

  private

  def group_id_exists?
    column_exists?(:boards, :group_id)
  end
end
class MigrateUpdateHeadPipelineForMergeRequestSidekiqQueue < ActiveRecord::Migration
  include Gitlab::Database::MigrationHelpers

  DOWNTIME = false

  def up
    sidekiq_queue_migrate 'pipeline_default:update_head_pipeline_for_merge_request',
                          to: 'pipeline_processing:update_head_pipeline_for_merge_request'
  end

  def down
    sidekiq_queue_migrate 'pipeline_processing:update_head_pipeline_for_merge_request',
                          to: 'pipeline_default:update_head_pipeline_for_merge_request'
  end
end
@@ -11,7 +11,7 @@
#
# It's strongly recommended that you check this file into your version control system.

ActiveRecord::Schema.define(version: 20180307012445) do

  # These are extensions that must be enabled in order to support this database
  enable_extension "plpgsql"
@@ -434,6 +434,14 @@ ActiveRecord::Schema.define(version: 20180306074045) do
  add_index "ci_job_artifacts", ["job_id", "file_type"], name: "index_ci_job_artifacts_on_job_id_and_file_type", unique: true, using: :btree
  add_index "ci_job_artifacts", ["project_id"], name: "index_ci_job_artifacts_on_project_id", using: :btree

  create_table "ci_pipeline_chat_data", id: :bigserial, force: :cascade do |t|
    t.integer "pipeline_id", null: false
    t.integer "chat_name_id", null: false
    t.text "response_url", null: false
  end

  add_index "ci_pipeline_chat_data", ["pipeline_id"], name: "index_ci_pipeline_chat_data_on_pipeline_id", unique: true, using: :btree

  create_table "ci_pipeline_schedule_variables", force: :cascade do |t|
    t.string "key", null: false
    t.text "value"
@@ -2061,6 +2069,7 @@ ActiveRecord::Schema.define(version: 20180306074045) do
    t.string "branch_name_regex"
    t.boolean "reject_unsigned_commits"
    t.boolean "commit_committer_check"
    t.boolean "regexp_uses_re2", default: true
  end

  add_index "push_rules", ["is_sample"], name: "index_push_rules_on_is_sample", where: "is_sample", using: :btree
@@ -2536,6 +2545,8 @@ ActiveRecord::Schema.define(version: 20180306074045) do
  add_foreign_key "ci_group_variables", "namespaces", column: "group_id", name: "fk_33ae4d58d8", on_delete: :cascade
  add_foreign_key "ci_job_artifacts", "ci_builds", column: "job_id", on_delete: :cascade
  add_foreign_key "ci_job_artifacts", "projects", on_delete: :cascade
  add_foreign_key "ci_pipeline_chat_data", "chat_names", on_delete: :cascade
  add_foreign_key "ci_pipeline_chat_data", "ci_pipelines", column: "pipeline_id", on_delete: :cascade
  add_foreign_key "ci_pipeline_schedule_variables", "ci_pipeline_schedules", column: "pipeline_schedule_id", name: "fk_41c35fda51", on_delete: :cascade
  add_foreign_key "ci_pipeline_schedules", "projects", name: "fk_8ead60fcc4", on_delete: :cascade
  add_foreign_key "ci_pipeline_schedules", "users", column: "owner_id", name: "fk_9ea99f58d2", on_delete: :nullify
...
@@ -340,7 +340,6 @@ data before running `pg_basebackup`.
echo Cleaning up old cluster directory
sudo -u postgres rm -rf /var/opt/gitlab/postgresql/data
rm -f /tmp/postgresql.trigger

echo Starting base backup as the replicator user
echo Enter the password for $USER@$HOST
@@ -350,7 +349,6 @@ data before running `pg_basebackup`.
sudo -u postgres bash -c "cat > /var/opt/gitlab/postgresql/data/recovery.conf <<- _EOF1_
standby_mode = 'on'
primary_conninfo = 'host=$HOST port=$PORT user=$USER password=$PASSWORD sslmode=$SSLMODE'
trigger_file = '/tmp/postgresql.trigger'
_EOF1_
"
...
...@@ -99,6 +99,32 @@ artifacts, you can use an object storage like AWS S3 instead. ...@@ -99,6 +99,32 @@ artifacts, you can use an object storage like AWS S3 instead.
This configuration relies on valid AWS credentials to be configured already. This configuration relies on valid AWS credentials to be configured already.
Use an [Object storage option][ee-os] like AWS S3 to store job artifacts. Use an [Object storage option][ee-os] like AWS S3 to store job artifacts.
### Object Storage Settings

For source installations the following settings are nested under `artifacts:` and then `object_store:`. On Omnibus installs they are prefixed by `artifacts_object_store_`.

| Setting | Description | Default |
|---------|-------------|---------|
| `enabled` | Enable/disable object storage | `false` |
| `remote_directory` | The bucket name where artifacts will be stored | |
| `background_upload` | Set to `false` to disable automatic upload. Option may be removed once upload is direct to S3 | `true` |
| `proxy_download` | Set to `false` to disable proxying of files served. Disabling proxying reduces egress traffic, because clients then download directly from remote storage instead of through GitLab | `false` |
| `connection` | Various connection options described below | |

#### S3 compatible connection settings

The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:

| Setting | Description | Default |
|---------|-------------|---------|
| `provider` | Always `AWS` for compatible hosts | `AWS` |
| `aws_access_key_id` | AWS credentials, or compatible | |
| `aws_secret_access_key` | AWS credentials, or compatible | |
| `region` | AWS region | `us-east-1` |
| `host` | S3-compatible host when not using AWS, e.g. `localhost` or `storage.example.com` | `s3.amazonaws.com` |
| `endpoint` | Can be used when configuring an S3-compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
| `path_style` | Set to `true` to use `host/bucket_name/object`-style paths instead of `bucket_name.host/object`. Leave as `false` for AWS S3 | `false` |
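As a sketch only, the nesting described above looks like this in `config/gitlab.yml` for a source installation (the bucket name and credentials here are placeholders, not defaults):

```yaml
# Illustrative only — key nesting follows the tables above; values are placeholders.
artifacts:
  enabled: true
  object_store:
    enabled: true
    remote_directory: "artifacts"   # placeholder bucket name
    background_upload: true
    proxy_download: false
    connection:
      provider: AWS
      region: "us-east-1"
      aws_access_key_id: "AWS_ACCESS_KEY_ID"
      aws_secret_access_key: "AWS_SECRET_ACCESS_KEY"
```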
**In Omnibus installations:**

_The artifacts are stored by default in
...
...@@ -57,6 +57,32 @@ If you don't want to use the local disk where GitLab is installed to store the
uploads, you can use an object storage provider like AWS S3 instead.
This configuration relies on valid AWS credentials to be configured already.
### Object Storage Settings

For source installations the following settings are nested under `uploads:` and then `object_store:`. On Omnibus installs they are prefixed by `uploads_object_store_`.

| Setting | Description | Default |
|---------|-------------|---------|
| `enabled` | Enable/disable object storage | `false` |
| `remote_directory` | The bucket name where uploads will be stored | |
| `background_upload` | Set to `false` to disable automatic upload. Option may be removed once upload is direct to S3 | `true` |
| `proxy_download` | Set to `false` to disable proxying of files served. Disabling proxying reduces egress traffic, because clients then download directly from remote storage instead of through GitLab | `false` |
| `connection` | Various connection options described below | |

#### S3 compatible connection settings

The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:

| Setting | Description | Default |
|---------|-------------|---------|
| `provider` | Always `AWS` for compatible hosts | `AWS` |
| `aws_access_key_id` | AWS credentials, or compatible | |
| `aws_secret_access_key` | AWS credentials, or compatible | |
| `region` | AWS region | `us-east-1` |
| `host` | S3-compatible host when not using AWS, e.g. `localhost` or `storage.example.com` | `s3.amazonaws.com` |
| `endpoint` | Can be used when configuring an S3-compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
| `path_style` | Set to `true` to use `host/bucket_name/object`-style paths instead of `bucket_name.host/object`. Leave as `false` for AWS S3 | `false` |
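A minimal Omnibus sketch of the same settings with the `uploads_object_store_` prefix, in `/etc/gitlab/gitlab.rb` (bucket name and credentials are placeholders; treat this as illustrative, not a definitive configuration):

```ruby
# Illustrative only — settings use the uploads_object_store_ prefix described above.
gitlab_rails['uploads_object_store_enabled'] = true
gitlab_rails['uploads_object_store_remote_directory'] = "uploads"  # placeholder bucket
gitlab_rails['uploads_object_store_background_upload'] = true
gitlab_rails['uploads_object_store_proxy_download'] = false
gitlab_rails['uploads_object_store_connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'aws_access_key_id' => 'AWS_ACCESS_KEY_ID',
  'aws_secret_access_key' => 'AWS_SECRET_ACCESS_KEY'
}
```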
**In Omnibus installations:**

_The uploads are stored by default in
...
...@@ -42,7 +42,7 @@ following locations:
- [Group milestones](group_milestones.md)
- [Namespaces](namespaces.md)
- [Notes](notes.md) (comments)
- [Discussions](discussions.md) (threaded comments)
- [Notification settings](notification_settings.md)
- [Open source license templates](templates/licenses.md)
- [Pages Domains](pages_domains.md)
...
# Notes API

Notes are comments on snippets, issues, merge requests, or epics.

## Issues
...
...@@ -139,6 +139,14 @@ CREATE EXTENSION pg_trgm;
On some systems you may need to install an additional package (e.g.
`postgresql-contrib`) for this extension to become available.
#### Additional requirements for GitLab Geo

If you are using [GitLab Geo](https://docs.gitlab.com/ee/development/geo.html), the [tracking database](https://docs.gitlab.com/ee/development/geo.html#geo-tracking-database) also requires the `postgres_fdw` extension.

```sql
CREATE EXTENSION postgres_fdw;
```
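To confirm the extension was created, you can query the `pg_extension` catalog in the tracking database (a quick check, not part of the official setup steps):

```sql
-- Returns one row if postgres_fdw is installed in this database.
SELECT extname FROM pg_extension WHERE extname = 'postgres_fdw';
```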
## Unicorn Workers

It's possible to increase the number of Unicorn workers, which usually helps to reduce the response time of the application and increases the ability to handle parallel requests.
...
...@@ -66,14 +66,14 @@ The following options are available.
| Check whether committer is the current authenticated user | **Premium** 10.2 | GitLab will reject any commit that was not committed by the current authenticated user |
| Check whether commit is signed through GPG | **Premium** 10.1 | Reject commit when it is not signed through GPG. Read [signing commits with GPG][signing-commits]. |
| Prevent committing secrets to Git | **Starter** 8.12 | GitLab will reject any files that are likely to contain secrets. Read [what files are forbidden](#prevent-pushing-secrets-to-the-repository). |
| Restrict by commit message | **Starter** 7.10 | Only commit messages that match this regular expression are allowed to be pushed. Leave empty to allow any commit message. Uses multiline mode, which can be disabled using `(?-m)`. |
| Restrict by branch name | **Starter** 9.3 | Only branch names that match this regular expression are allowed to be pushed. Leave empty to allow any branch name. |
| Restrict by commit author's email | **Starter** 7.10 | Only commit author emails that match this regular expression are allowed to be pushed. Leave empty to allow any email. |
| Prohibited file names | **Starter** 7.10 | Any committed filenames that match this regular expression are not allowed to be pushed. Leave empty to allow any filenames. |
| Maximum file size | **Starter** 7.12 | Pushes that contain added or updated files that exceed this file size (in MB) are rejected. Set to 0 to allow files of any size. |
>**Tip:**
GitLab uses [RE2 syntax](https://github.com/google/re2/wiki/Syntax) for regular expressions in push rules. You can check your regular expressions at <https://regex-golang.appspot.com>.
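As a quick illustration of the multiline behavior (a Ruby sketch: Ruby's `^` also anchors at each line start, though GitLab evaluates push rules with RE2, not Ruby's engine; the `JIRA-\d+` pattern is just an example, not a recommended rule):

```ruby
# A commit-message rule like ^JIRA-\d+ matches if ANY line of the message
# starts with a ticket reference, because ^ anchors at each line start.
rule = /^JIRA-\d+/
message = "Fix login bug\n\nJIRA-123"

puts rule.match?(message) # => true
```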
## Prevent pushing secrets to the repository
...
# GitHub Project Integration
GitLab provides an integration for updating pipeline statuses on GitHub. This is especially useful if you use GitLab for CI/CD only.
![Pipeline status update on GitHub](img/github_status_check_pipeline_update.png)
## Configuration
### Complete these steps on GitHub
This integration requires a [GitHub API token](https://github.com/settings/tokens) with `repo:status` access granted:
1. Go to your "Personal access tokens" page at <https://github.com/settings/tokens>.
1. Click "Generate New Token".
1. Ensure that `repo:status` is checked and click "Generate token".
1. Copy the generated token to use on GitLab.
### Complete these steps on GitLab
1. Navigate to the project you want to configure.
1. Navigate to the [Integrations page](project_services.md#accessing-the-project-services).
1. Click "GitHub".
1. Select the "Active" checkbox.
1. Paste the token you generated on GitHub.
1. Enter the path to your project on GitHub, such as `https://github.com/your-name/YourProject/`.
1. Save, or optionally click "Test Settings".
![Configure GitHub Project Integration](img/github_configuration.png)
...@@ -35,6 +35,7 @@ Click on the service links to see further configuration instructions and details
| External Wiki | Replaces the link to the internal wiki with a link to an external wiki |
| Flowdock | Flowdock is a collaboration web app for technical teams |
| Gemnasium | Gemnasium monitors your project dependencies and alerts you about updates and security vulnerabilities |
| [GitHub](github.md) | Sends pipeline notifications to GitHub |
| [HipChat](hipchat.md) | Private group chat and IM |
| [Irker (IRC gateway)](irker.md) | Send IRC messages, on update, to a list of recipients through an Irker gateway |
| [JIRA](jira.md) | JIRA issue tracker |
...
...@@ -329,6 +329,16 @@ Click the button at the top right to toggle focus mode on and off. In focus mode
[Developers and up](../permissions.md) can use all the functionality of the
Issue Board, that is create/delete lists and drag issues around.
## Group Issue Board

> Introduced in GitLab 10.6

A group-level issue board is analogous to a project-level issue board, and is accessible from the group
navigation level. It lets you view all issues from all projects in that group
(it does not currently include issues from projects in subgroups). Similarly, you can only filter by group labels for these
boards, and when updating milestones and labels for an issue through the sidebar, only
group-level objects are available.
## Tips

A few things to remember:
...
...@@ -63,7 +63,9 @@ For source installations the following settings are nested under `lfs:` and then
|---------|-------------|---------|
| `enabled` | Enable/disable object storage | `false` |
| `remote_directory` | The bucket name where LFS objects will be stored | |
| `direct_upload` | Set to `true` to enable direct upload of LFS objects without the need for local shared storage. Option may be removed once we decide to support only single storage for all files. | `false` |
| `background_upload` | Set to `false` to disable automatic upload. Option may be removed once upload is direct to S3 | `true` |
| `proxy_download` | Set to `false` to disable proxying of files served. Disabling proxying reduces egress traffic, because clients then download directly from remote storage instead of through GitLab | `false` |
| `connection` | Various connection options described below | |

#### S3 compatible connection settings
...
...@@ -40,8 +40,7 @@
    createEpic() {
      this.creating = true;
      this.service.createEpic(this.title)
        .then(({ data }) => {
          visitUrl(data.web_url);
        })
        .catch(() => {
...
import axios from '~/lib/utils/axios_utils';

export default class NewEpicService {
  constructor(endpoint) {
    this.endpoint = endpoint;
  }

  createEpic(title) {
    return axios.post(this.endpoint, {
      title,
    });
  }
...
import Vue from 'vue';
import Dashboard from '~/monitoring/components/dashboard.vue';
export default () => {
const el = document.getElementById('prometheus-graphs');
if (el && el.dataset) {
// eslint-disable-next-line no-new
new Vue({
el,
render(createElement) {
return createElement(Dashboard, {
props: {
...el.dataset,
showLegend: false,
showPanels: false,
forceSmallGraph: true,
},
});
},
});
}
};
import '~/pages/projects/clusters/show';
import initClusterHealth from './cluster_health';
document.addEventListener('DOMContentLoaded', initClusterHealth);
...@@ -4,7 +4,8 @@ module EE
      :jenkins_url,
      :multiproject_enabled,
      :pass_unstable,
      :project_name,
      :repository_url
    ].freeze

    def allowed_service_params
...
module SendFileUpload
  def send_upload(file_upload, send_params: {}, redirect_params: {}, attachment: nil, disposition: 'attachment')
    if attachment
      redirect_params[:query] = { "response-content-disposition" => "#{disposition};filename=#{attachment.inspect}" }
      send_params.merge!(filename: attachment, disposition: disposition)
    end

    if file_upload.file_storage?
      send_file file_upload.path, send_params
    elsif file_upload.class.proxy_download_enabled?
      Gitlab::Workhorse.send_url(file_upload.url(**redirect_params))
    else
      redirect_to file_upload.url(**redirect_params)
    end
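The header value `send_upload` builds can be sketched in isolation (illustrative values; `String#inspect` wraps the filename in double quotes, which is the quoting the `Content-Disposition` header expects):

```ruby
# Sketch of the response-content-disposition value send_upload constructs.
attachment  = "report.pdf"
disposition = "inline"  # the new parameter; defaults to "attachment"

header = "#{disposition};filename=#{attachment.inspect}"
puts header # => inline;filename="report.pdf"
```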
...
module Geo
  class AttachmentRegistryFinder < FileRegistryFinder
    def attachments
      if selective_sync?
        Upload.where(group_uploads.or(project_uploads).or(other_uploads))
      else
        Upload.all
      end
    end

    def local_attachments
      attachments.with_files_stored_locally
    end

    def count_local_attachments
      local_attachments.count
    end

    def count_synced_attachments
...@@ -49,20 +50,20 @@ module Geo
    # Find limited amount of non replicated attachments.
    #
    # You can pass a list with `except_file_ids:` so you can exclude items you
    # already scheduled but haven't finished and aren't persisted to the database yet
    #
    # TODO: Alternative here is to use some sort of window function with a cursor instead
    # of simply limiting the query and passing a list of items we don't want
    #
    # @param [Integer] batch_size used to limit the results returned
    # @param [Array<Integer>] except_file_ids ids that will be ignored from the query
    def find_unsynced_attachments(batch_size:, except_file_ids: [])
      relation =
        if use_legacy_queries?
          legacy_find_unsynced_attachments(except_file_ids: except_file_ids)
        else
          fdw_find_unsynced_attachments(except_file_ids: except_file_ids)
        end

      relation.limit(batch_size)
...@@ -106,31 +107,40 @@ module Geo
    #
    def fdw_find_synced_attachments
      fdw_find_local_attachments.merge(Geo::FileRegistry.synced)
    end

    def fdw_find_failed_attachments
      fdw_find_local_attachments.merge(Geo::FileRegistry.failed)
    end

    def fdw_find_local_attachments
      fdw_attachments.joins("INNER JOIN file_registry ON file_registry.file_id = #{fdw_attachments_table}.id")
        .with_files_stored_locally
        .merge(Geo::FileRegistry.attachments)
    end

    def fdw_find_unsynced_attachments(except_file_ids:)
      upload_types = Geo::FileService::DEFAULT_OBJECT_TYPES.map { |val| "'#{val}'" }.join(',')

      fdw_attachments.joins("LEFT OUTER JOIN file_registry
                              ON file_registry.file_id = #{fdw_attachments_table}.id
                              AND file_registry.file_type IN (#{upload_types})")
        .with_files_stored_locally
        .where(file_registry: { id: nil })
        .where.not(id: except_file_ids)
    end

    def fdw_attachments
      if selective_sync?
        Geo::Fdw::Upload.where(group_uploads.or(project_uploads).or(other_uploads))
      else
        Geo::Fdw::Upload.all
      end
    end

    def fdw_attachments_table
      Geo::Fdw::Upload.table_name
    end
    #
...@@ -139,7 +149,7 @@ module Geo
    def legacy_find_synced_attachments
      legacy_inner_join_registry_ids(
        local_attachments,
        Geo::FileRegistry.attachments.synced.pluck(:file_id),
        Upload
      )
...@@ -147,18 +157,18 @@ module Geo
    def legacy_find_failed_attachments
      legacy_inner_join_registry_ids(
        local_attachments,
        Geo::FileRegistry.attachments.failed.pluck(:file_id),
        Upload
      )
    end

    def legacy_find_unsynced_attachments(except_file_ids:)
      registry_file_ids = legacy_pluck_registry_file_ids(file_types: Geo::FileService::DEFAULT_OBJECT_TYPES) | except_file_ids

      legacy_left_outer_join_registry_ids(
        local_attachments,
        registry_file_ids,
        Upload
      )
    end
...
...@@ -6,9 +6,8 @@ module Geo
    protected

    def legacy_pluck_registry_file_ids(file_types:)
      Geo::FileRegistry.where(file_type: file_types).pluck(:file_id)
    end
  end
end
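The rewritten helper drops the explicit `uniq` because callers now combine the plucked ids with `Array#|`, which already returns a deduplicated union (a small sketch with made-up ids):

```ruby
# Array#| unions two arrays and removes duplicates, preserving order.
registry_file_ids = [1, 2, 3]
except_file_ids   = [3, 4]

union = registry_file_ids | except_file_ids
p union # => [1, 2, 3, 4]
```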
module Geo
  class JobArtifactRegistryFinder < FileRegistryFinder
    def count_job_artifacts
      local_job_artifacts.count
    end

    def count_synced_job_artifacts
      if aggregate_pushdown_supported?
        find_synced_job_artifacts.count
      else
        legacy_find_synced_job_artifacts.count
      end
    end

    def count_failed_job_artifacts
      if aggregate_pushdown_supported?
        find_failed_job_artifacts.count
      else
        legacy_find_failed_job_artifacts.count
      end
    end
    # Find limited amount of non replicated job artifacts.
    #
    # You can pass a list with `except_file_ids:` so you can exclude items you
    # already scheduled but haven't finished and aren't persisted to the database yet
    #
    # TODO: Alternative here is to use some sort of window function with a cursor instead
    # of simply limiting the query and passing a list of items we don't want
    #
    # @param [Integer] batch_size used to limit the results returned
    # @param [Array<Integer>] except_file_ids ids that will be ignored from the query
    def find_unsynced_job_artifacts(batch_size:, except_file_ids: [])
      relation =
        if use_legacy_queries?
          legacy_find_unsynced_job_artifacts(except_file_ids: except_file_ids)
        else
          fdw_find_unsynced_job_artifacts(except_file_ids: except_file_ids)
        end

      relation.limit(batch_size)
    end
    def job_artifacts
      if selective_sync?
        Ci::JobArtifact.joins(:project).where(projects: { id: current_node.projects })
      else
        Ci::JobArtifact.all
      end
    end

    def local_job_artifacts
      job_artifacts.with_files_stored_locally
    end
    private

    def find_synced_job_artifacts
      if use_legacy_queries?
        legacy_find_synced_job_artifacts
      else
        fdw_find_job_artifacts.merge(Geo::FileRegistry.synced)
      end
    end

    def find_failed_job_artifacts
      if use_legacy_queries?
        legacy_find_failed_job_artifacts
      else
        fdw_find_job_artifacts.merge(Geo::FileRegistry.failed)
      end
    end
    #
    # FDW accessors
    #

    def fdw_find_job_artifacts
      fdw_job_artifacts.joins("INNER JOIN file_registry ON file_registry.file_id = #{fdw_job_artifacts_table}.id")
        .with_files_stored_locally
        .merge(Geo::FileRegistry.job_artifacts)
    end

    def fdw_find_unsynced_job_artifacts(except_file_ids:)
      fdw_job_artifacts.joins("LEFT OUTER JOIN file_registry
                                ON file_registry.file_id = #{fdw_job_artifacts_table}.id
                                AND file_registry.file_type = 'job_artifact'")
        .with_files_stored_locally
        .where(file_registry: { id: nil })
        .where.not(id: except_file_ids)
    end

    def fdw_job_artifacts
      if selective_sync?
        Geo::Fdw::Ci::JobArtifact.joins(:project).where(projects: { id: current_node.projects })
      else
        Geo::Fdw::Ci::JobArtifact.all
      end
    end

    def fdw_job_artifacts_table
      Geo::Fdw::Ci::JobArtifact.table_name
    end
    #
...@@ -89,26 +108,26 @@ module Geo
    def legacy_find_synced_job_artifacts
      legacy_inner_join_registry_ids(
        local_job_artifacts,
        Geo::FileRegistry.job_artifacts.synced.pluck(:file_id),
        Ci::JobArtifact
      )
    end

    def legacy_find_failed_job_artifacts
      legacy_inner_join_registry_ids(
        local_job_artifacts,
        Geo::FileRegistry.job_artifacts.failed.pluck(:file_id),
        Ci::JobArtifact
      )
    end

    def legacy_find_unsynced_job_artifacts(except_file_ids:)
      registry_file_ids = legacy_pluck_registry_file_ids(file_types: :job_artifact) | except_file_ids

      legacy_left_outer_join_registry_ids(
        local_job_artifacts,
        registry_file_ids,
        Ci::JobArtifact
      )
    end
...
module Geo
  class LfsObjectRegistryFinder < FileRegistryFinder
    def count_lfs_objects
      local_lfs_objects.count
    end

    def count_synced_lfs_objects
      if aggregate_pushdown_supported?
        find_synced_lfs_objects.count
      else
        legacy_find_synced_lfs_objects.count
      end
    end

    def count_failed_lfs_objects
      if aggregate_pushdown_supported?
        find_failed_lfs_objects.count
      else
        legacy_find_failed_lfs_objects.count
      end
    end
    # Find limited amount of non replicated lfs objects.
    #
    # You can pass a list with `except_file_ids:` so you can exclude items you
    # already scheduled but haven't finished and aren't persisted to the database yet
    #
    # TODO: Alternative here is to use some sort of window function with a cursor instead
    # of simply limiting the query and passing a list of items we don't want
    #
    # @param [Integer] batch_size used to limit the results returned
    # @param [Array<Integer>] except_file_ids ids that will be ignored from the query
    def find_unsynced_lfs_objects(batch_size:, except_file_ids: [])
      relation =
        if use_legacy_queries?
          legacy_find_unsynced_lfs_objects(except_file_ids: except_file_ids)
        else
          fdw_find_unsynced_lfs_objects(except_file_ids: except_file_ids)
        end

      relation.limit(batch_size)
    end
    def lfs_objects
      if selective_sync?
        LfsObject.joins(:projects).where(projects: { id: current_node.projects })
      else
        LfsObject.all
      end
    end

    def local_lfs_objects
      lfs_objects.with_files_stored_locally
    end
    private

    def find_synced_lfs_objects
      if use_legacy_queries?
        legacy_find_synced_lfs_objects
      else
        fdw_find_lfs_objects.merge(Geo::FileRegistry.synced)
      end
    end

    def find_failed_lfs_objects
      if use_legacy_queries?
        legacy_find_failed_lfs_objects
      else
        fdw_find_lfs_objects.merge(Geo::FileRegistry.failed)
      end
    end
    #
    # FDW accessors
    #

    def fdw_find_lfs_objects
      fdw_lfs_objects.joins("INNER JOIN file_registry ON file_registry.file_id = #{fdw_lfs_objects_table}.id")
        .with_files_stored_locally
        .merge(Geo::FileRegistry.lfs_objects)
    end

    def fdw_find_unsynced_lfs_objects(except_file_ids:)
      fdw_lfs_objects.joins("LEFT OUTER JOIN file_registry
                              ON file_registry.file_id = #{fdw_lfs_objects_table}.id
                              AND file_registry.file_type = 'lfs'")
        .with_files_stored_locally
        .where(file_registry: { id: nil })
        .where.not(id: except_file_ids)
    end

    def fdw_lfs_objects
      if selective_sync?
        Geo::Fdw::LfsObject.joins(:project).where(projects: { id: current_node.projects })
      else
        Geo::Fdw::LfsObject.all
      end
    end

    def fdw_lfs_objects_table
      Geo::Fdw::LfsObject.table_name
    end
# #
...@@ -90,26 +108,26 @@ module Geo ...@@ -90,26 +108,26 @@ module Geo
    def legacy_find_synced_lfs_objects
      legacy_inner_join_registry_ids(
        local_lfs_objects,
        Geo::FileRegistry.lfs_objects.synced.pluck(:file_id),
        LfsObject
      )
    end

    def legacy_find_failed_lfs_objects
      legacy_inner_join_registry_ids(
        local_lfs_objects,
        Geo::FileRegistry.lfs_objects.failed.pluck(:file_id),
        LfsObject
      )
    end

    def legacy_find_unsynced_lfs_objects(except_file_ids:)
      registry_file_ids = legacy_pluck_registry_file_ids(file_types: :lfs) | except_file_ids

      legacy_left_outer_join_registry_ids(
        local_lfs_objects,
        registry_file_ids,
        LfsObject
      )
    end
...
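The unsynced lookup unions the file IDs already present in the registry with the caller-supplied exclusions before the outer join filters them out. A plain-Ruby sketch of that ID bookkeeping (the array contents are illustrative, not taken from the finder):

```ruby
# Sketch of legacy_find_unsynced_lfs_objects' ID handling: IDs already
# registered are set-unioned (|) with IDs the caller asked to skip, and
# anything in the combined list is excluded from the candidates.
registry_file_ids = [1, 2, 3]  # hypothetical: plucked from the registry
except_file_ids   = [3, 4]     # hypothetical: caller-supplied exclusions

excluded = registry_file_ids | except_file_ids  # set union, duplicates dropped

all_lfs_object_ids = [1, 2, 3, 4, 5, 6]
unsynced = all_lfs_object_ids - excluded

puts unsynced.inspect
```

Using `|` rather than `+` matters: it deduplicates, so an ID that is both registered and explicitly excluded appears only once in the generated NOT IN list.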
@@ -18,7 +18,7 @@ module AuditLogsHelper
  def admin_project_dropdown_label(default_label)
    if @entity
      @entity.full_name
    else
      default_label
    end
...
module Ci
  class PipelineChatData < ActiveRecord::Base
    self.table_name = 'ci_pipeline_chat_data'

    belongs_to :chat_name

    validates :pipeline_id, presence: true
    validates :chat_name_id, presence: true
    validates :response_url, presence: true
  end
end
module EE
  module Ci
    module Pipeline
      extend ActiveSupport::Concern

      EE_FAILURE_REASONS = {
        activity_limit_exceeded: 20,
        size_limit_exceeded: 21
      }.freeze

      included do
        has_one :chat_data, class_name: 'Ci::PipelineChatData'
      end

      def predefined_variables
        result = super
        result << { key: 'CI_PIPELINE_SOURCE', value: source.to_s, public: true }
...
@@ -12,6 +12,7 @@ module EE
      after_destroy :log_geo_event

      scope :with_files_stored_locally, -> { where(file_store: [nil, LfsObjectUploader::Store::LOCAL]) }
      scope :with_files_stored_remotely, -> { where(file_store: ObjectStorage::Store::REMOTE) }
    end

    def local_store?
...
@@ -29,6 +29,7 @@ module EE
      has_one :index_status
      has_one :jenkins_service
      has_one :jenkins_deprecated_service
      has_one :github_service

      has_many :approvers, as: :target, dependent: :destroy # rubocop:disable Cop/ActiveRecordDependent
      has_many :approver_groups, as: :target, dependent: :destroy # rubocop:disable Cop/ActiveRecordDependent
@@ -462,6 +463,10 @@ module EE
          disabled_services.push('jenkins', 'jenkins_deprecated')
        end

        unless feature_available?(:github_project_service_integration)
          disabled_services.push('github')
        end

        disabled_services
      end
    end
...
module EE
  module Service
    extend ActiveSupport::Concern

    module ClassMethods
      extend ::Gitlab::Utils::Override

      override :available_services_names
      def available_services_names
        ee_service_names = %w[
          github
          jenkins
          jenkins_deprecated
        ]

        (super + ee_service_names).sort_by(&:downcase)
      end
    end
  end
end
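The override appends the EE-only service names to whatever the CE implementation returns, then sorts the merged list case-insensitively. A minimal sketch of the same pattern without Rails or `Gitlab::Utils::Override`, using `Module#prepend` so that `super` reaches the base implementation (the CE service list here is invented for illustration):

```ruby
# Hypothetical CE-side list of service names.
module CeService
  def available_services_names
    %w[asana slack Teamcity]
  end
end

# EE-side override: prepended so `super` hits CeService's method,
# then the EE names are merged in and sorted case-insensitively.
module EeService
  def available_services_names
    ee_service_names = %w[github jenkins jenkins_deprecated]

    (super + ee_service_names).sort_by(&:downcase)
  end
end

class ServiceCatalog
  include CeService
  prepend EeService
end

puts ServiceCatalog.new.available_services_names.inspect
```

`sort_by(&:downcase)` keeps "Teamcity" ordered as if lowercase, so the dropdown ordering doesn't depend on how each service name happens to be capitalized.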
module EE
  module SlackSlashCommandsService
    def chat_responder
      ::Gitlab::Chat::Responder::Slack
    end
  end
end
@@ -4,6 +4,7 @@ module Geo
      self.table_name = Gitlab::Geo::Fdw.table('lfs_objects')

      scope :with_files_stored_locally, -> { where(file_store: [nil, LfsObjectUploader::Store::LOCAL]) }
      scope :with_files_stored_remotely, -> { where(file_store: LfsObjectUploader::Store::REMOTE) }
    end
  end
end
@@ -4,6 +4,7 @@ module Geo
      self.table_name = Gitlab::Geo::Fdw.table('uploads')

      scope :with_files_stored_locally, -> { where(store: [nil, ObjectStorage::Store::LOCAL]) }
      scope :with_files_stored_remotely, -> { where(store: ObjectStorage::Store::REMOTE) }
    end
  end
end
@@ -5,4 +5,5 @@ class Geo::FileRegistry < Geo::BaseRegistry
  scope :lfs_objects, -> { where(file_type: :lfs) }
  scope :job_artifacts, -> { where(file_type: :job_artifact) }
  scope :attachments, -> { where(file_type: Geo::FileService::DEFAULT_OBJECT_TYPES) }
  scope :stored_locally, -> { where(store: [nil, ObjectStorage::Store::LOCAL]) }
end
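The new `stored_locally` scope matches `store` values of both NULL and LOCAL, so registry rows written before the `store` column existed are still treated as local. A plain-Ruby sketch of that matching logic over in-memory hashes (the constants and rows are stand-ins, not the real model):

```ruby
# Stand-ins for ObjectStorage::Store::LOCAL / ::REMOTE.
LOCAL  = 1
REMOTE = 2

# Hypothetical registry rows; a nil store mimics a legacy row
# created before the column was added.
registries = [
  { id: 1, store: nil },
  { id: 2, store: LOCAL },
  { id: 3, store: REMOTE }
]

# Equivalent of `where(store: [nil, LOCAL])`: NULL and LOCAL both match.
stored_locally = registries.select { |r| [nil, LOCAL].include?(r[:store]) }

puts stored_locally.map { |r| r[:id] }.inspect
```

In SQL terms, passing an array containing `nil` to `where` generates `store IN (1) OR store IS NULL`, which is why the legacy NULL rows are not silently dropped.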
@@ -105,7 +105,7 @@ class GeoNodeStatus < ActiveRecord::Base
      self.wikis_count = projects_finder.count_wikis
      self.lfs_objects_count = lfs_objects_finder.count_lfs_objects
      self.job_artifacts_count = job_artifacts_finder.count_job_artifacts
      self.attachments_count = attachments_finder.count_local_attachments
      self.last_successful_status_check_at = Time.now
      self.storage_shards = StorageShard.all
...
@@ -42,6 +42,7 @@ class License < ActiveRecord::Base
      extended_audit_events
      file_locks
      geo
      github_project_service_integration
      jira_dev_panel_integration
      ldap_group_sync_filter
      multiple_clusters
@@ -60,9 +61,11 @@ class License < ActiveRecord::Base
    EEU_FEATURES = EEP_FEATURES + %i[
      sast
      sast_container
      cluster_health
      dast
      epics
      ide
      chatops
    ].freeze

    # List all features available for early adopters,
@@ -318,6 +321,7 @@ class License < ActiveRecord::Base
    def reset_current
      self.class.reset_current
      Gitlab::Chat.flush_available_cache
    end

    def reset_license
...