Commit c5896b8d authored by Douwe Maan

Merge branch 'master' into '3839-ci-cd-only-github-projects-fe'

# Conflicts:
#   config/sidekiq_queues.yml
parents 5c2592da f6a35a3c
Please view this file on the master branch; on stable branches it's out of date.
## 10.5.3 (2018-03-01)
### Security (2 changes)
- Project can no longer be shared between groups when both member and group locks are active.
- Prevent new push rules from using non-RE2 regexes.
### Fixed (1 change)
- Fix LDAP group sync no longer configurable for regular users.
## 10.5.2 (2018-02-25)
- No changes.
......@@ -82,6 +94,18 @@ Please view this file on the master branch, on stable branches it's out of date.
- Remove unapproved typo check in sast:container report.
## 10.4.5 (2018-03-01)
### Security (2 changes)
- Project can no longer be shared between groups when both member and group locks are active.
- Prevent new push rules from using non-RE2 regexes.
### Fixed (1 change)
- Fix LDAP group sync no longer configurable for regular users.
## 10.4.4 (2018-02-16)
### Fixed (4 changes)
......@@ -183,6 +207,18 @@ Please view this file on the master branch, on stable branches it's out of date.
- Make scoped issue board specs more reliable.
## 10.3.8 (2018-03-01)
### Security (2 changes)
- Project can no longer be shared between groups when both member and group locks are active.
- Prevent new push rules from using non-RE2 regexes.
### Fixed (1 change)
- Fix LDAP group sync no longer configurable for regular users.
## 10.3.7 (2018-02-05)
### Security (1 change)
......
......@@ -2,6 +2,13 @@
documentation](doc/development/changelog.md) for instructions on adding your own
entry.
## 10.5.3 (2018-03-01)
### Security (1 change)
- Ensure that OTP backup codes are always invalidated.
## 10.5.2 (2018-02-25)
### Fixed (7 changes)
......@@ -219,6 +226,13 @@ entry.
- Adds empty state illustration for pending job.
## 10.4.5 (2018-03-01)
### Security (1 change)
- Ensure that OTP backup codes are always invalidated.
## 10.4.4 (2018-02-16)
### Security (1 change)
......@@ -443,6 +457,13 @@ entry.
- Use a background migration for issues.closed_at.
## 10.3.8 (2018-03-01)
### Security (1 change)
- Ensure that OTP backup codes are always invalidated.
## 10.3.7 (2018-02-05)
### Security (4 changes)
......
......@@ -107,16 +107,16 @@ gem 'carrierwave', '~> 1.2'
gem 'dropzonejs-rails', '~> 0.7.1'
# for backups
gem 'fog-aws', '~> 1.4'
gem 'fog-aws', '~> 2.0'
gem 'fog-core', '~> 1.44'
gem 'fog-google', '~> 0.5'
gem 'fog-google', '~> 1.3'
gem 'fog-local', '~> 0.3'
gem 'fog-openstack', '~> 0.1'
gem 'fog-rackspace', '~> 0.1.1'
gem 'fog-aliyun', '~> 0.2.0'
# for Google storage
gem 'google-api-client', '~> 0.13.6'
gem 'google-api-client', '~> 0.19'
# for aws storage
gem 'unf', '~> 0.1.4'
......
......@@ -213,7 +213,7 @@ GEM
et-orbi (1.0.3)
tzinfo
eventmachine (1.0.8)
excon (0.57.1)
excon (0.60.0)
execjs (2.6.0)
expression_parser (0.9.0)
factory_bot (4.8.2)
......@@ -255,19 +255,20 @@ GEM
fog-json (~> 1.0)
ipaddress (~> 0.8)
xml-simple (~> 1.1)
fog-aws (1.4.0)
fog-aws (2.0.1)
fog-core (~> 1.38)
fog-json (~> 1.0)
fog-xml (~> 0.1)
ipaddress (~> 0.8)
fog-core (1.44.3)
fog-core (1.45.0)
builder
excon (~> 0.49)
excon (~> 0.58)
formatador (~> 0.2)
fog-google (0.5.3)
fog-google (1.3.0)
fog-core
fog-json
fog-xml
google-api-client (~> 0.19.1)
fog-json (1.0.2)
fog-core (~> 1.0)
multi_json (~> 1.10)
......@@ -358,9 +359,9 @@ GEM
json
multi_json
request_store (>= 1.0)
google-api-client (0.13.6)
google-api-client (0.19.8)
addressable (~> 2.5, >= 2.5.1)
googleauth (~> 0.5)
googleauth (>= 0.5, < 0.7.0)
httpclient (>= 2.8.1, < 3.0)
mime-types (~> 3.0)
representable (~> 3.0)
......@@ -531,7 +532,7 @@ GEM
mini_portile2 (2.3.0)
minitest (5.7.0)
mousetrap-rails (1.4.6)
multi_json (1.12.2)
multi_json (1.13.1)
multi_xml (0.6.0)
multipart-post (2.0.0)
mustermann (1.0.0)
......@@ -1077,9 +1078,9 @@ DEPENDENCIES
flipper-active_record (~> 0.11.0)
flipper-active_support_cache_store (~> 0.11.0)
fog-aliyun (~> 0.2.0)
fog-aws (~> 1.4)
fog-aws (~> 2.0)
fog-core (~> 1.44)
fog-google (~> 0.5)
fog-google (~> 1.3)
fog-local (~> 0.3)
fog-openstack (~> 0.1)
fog-rackspace (~> 0.1.1)
......@@ -1101,7 +1102,7 @@ DEPENDENCIES
gollum-lib (~> 4.2)
gollum-rugged_adapter (~> 0.4.4)
gon (~> 6.1.0)
google-api-client (~> 0.13.6)
google-api-client (~> 0.19)
google-protobuf (= 3.5.1)
gpgme
grape (~> 1.0)
......
......@@ -140,3 +140,4 @@ export default {
</div>
</div>
</template>
<script>
/* global ListIssue */
import _ from 'underscore';
import eventHub from '../eventhub';
import loadingIcon from '../../vue_shared/components/loading_icon.vue';
import Api from '../../api';
export default {
name: 'BoardProjectSelect',
components: {
loadingIcon,
},
props: {
groupId: {
type: Number,
required: true,
default: 0,
},
},
data() {
return {
loading: true,
selectedProject: {},
};
},
computed: {
selectedProjectName() {
return this.selectedProject.name || 'Select a project';
},
},
mounted() {
$(this.$refs.projectsDropdown).glDropdown({
filterable: true,
filterRemote: true,
search: {
fields: ['name_with_namespace'],
},
clicked: ({ $el, e }) => {
e.preventDefault();
this.selectedProject = {
id: $el.data('project-id'),
name: $el.data('project-name'),
};
eventHub.$emit('setSelectedProject', this.selectedProject);
},
selectable: true,
data: (term, callback) => {
this.loading = true;
return Api.groupProjects(this.groupId, term, (projects) => {
this.loading = false;
callback(projects);
});
},
renderRow(project) {
return `
<li>
<a href='#' class='dropdown-menu-link' data-project-id="${project.id}" data-project-name="${project.name}">
${_.escape(project.name)}
</a>
</li>
`;
},
text: project => project.name,
});
},
};
</script>
<template>
<div>
<label class="label-light prepend-top-10">
Project
</label>
<div
ref="projectsDropdown"
class="dropdown"
>
<button
class="dropdown-menu-toggle wide"
type="button"
data-toggle="dropdown"
aria-expanded="false"
>
{{ selectedProjectName }}
<i
class="fa fa-chevron-down"
aria-hidden="true"
>
</i>
</button>
<div class="dropdown-menu dropdown-menu-selectable dropdown-menu-full-width">
<div class="dropdown-title">
<span>Projects</span>
<button
aria-label="Close"
type="button"
class="dropdown-title-button dropdown-menu-close"
>
<i
aria-hidden="true"
data-hidden="true"
class="fa fa-times dropdown-menu-close-icon"
>
</i>
</button>
</div>
<div class="dropdown-input">
<input
class="dropdown-input-field"
type="search"
placeholder="Search projects"
/>
<i
aria-hidden="true"
data-hidden="true"
class="fa fa-search dropdown-input-search"
>
</i>
</div>
<div class="dropdown-content"></div>
<div class="dropdown-loading">
<loading-icon />
</div>
</div>
</div>
</div>
</template>
......@@ -13,6 +13,7 @@ import sidebarEventHub from '~/sidebar/event_hub'; // eslint-disable-line import
import './models/issue';
import './models/list';
import './models/milestone';
import './models/project';
import './models/assignee';
import './stores/boards_store';
import './stores/modal_store';
......
export default class IssueProject {
constructor(obj) {
this.id = obj.id;
this.path = obj.path;
}
}
......@@ -117,7 +117,10 @@
</script>
<template>
<section class="settings no-animate expanded">
<section
id="cluster-applications"
class="settings no-animate expanded"
>
<div class="settings-header">
<h4>
{{ s__('ClusterIntegration|Applications') }}
......
......@@ -7,34 +7,82 @@
import EmptyState from './empty_state.vue';
import MonitoringStore from '../stores/monitoring_store';
import eventHub from '../event_hub';
import { convertPermissionToBoolean } from '../../lib/utils/common_utils';
export default {
components: {
Graph,
GraphGroup,
EmptyState,
},
data() {
const metricsData = document.querySelector('#prometheus-graphs').dataset;
const store = new MonitoringStore();
props: {
hasMetrics: {
type: Boolean,
required: false,
default: true,
},
showLegend: {
type: Boolean,
required: false,
default: true,
},
showPanels: {
type: Boolean,
required: false,
default: true,
},
forceSmallGraph: {
type: Boolean,
required: false,
default: false,
},
documentationPath: {
type: String,
required: true,
},
settingsPath: {
type: String,
required: true,
},
clustersPath: {
type: String,
required: true,
},
tagsPath: {
type: String,
required: true,
},
projectPath: {
type: String,
required: true,
},
metricsEndpoint: {
type: String,
required: true,
},
deploymentEndpoint: {
type: String,
required: false,
default: null,
},
emptyGettingStartedSvgPath: {
type: String,
required: true,
},
emptyLoadingSvgPath: {
type: String,
required: true,
},
emptyUnableToConnectSvgPath: {
type: String,
required: true,
},
},
data() {
return {
store,
store: new MonitoringStore(),
state: 'gettingStarted',
hasMetrics: convertPermissionToBoolean(metricsData.hasMetrics),
documentationPath: metricsData.documentationPath,
settingsPath: metricsData.settingsPath,
clustersPath: metricsData.clustersPath,
tagsPath: metricsData.tagsPath,
projectPath: metricsData.projectPath,
metricsEndpoint: metricsData.additionalMetrics,
deploymentEndpoint: metricsData.deploymentEndpoint,
emptyGettingStartedSvgPath: metricsData.emptyGettingStartedSvgPath,
emptyLoadingSvgPath: metricsData.emptyLoadingSvgPath,
emptyUnableToConnectSvgPath: metricsData.emptyUnableToConnectSvgPath,
showEmptyState: true,
updateAspectRatio: false,
updatedAspectRatios: 0,
......@@ -67,6 +115,7 @@
window.addEventListener('resize', this.resizeThrottled, false);
}
},
methods: {
getGraphsData() {
this.state = 'loading';
......@@ -115,6 +164,7 @@
v-for="(groupData, index) in store.groups"
:key="index"
:name="groupData.group"
:show-panels="showPanels"
>
<graph
v-for="(graphData, index) in groupData.metrics"
......@@ -125,6 +175,8 @@
:deployment-data="store.deploymentData"
:project-path="projectPath"
:tags-path="tagsPath"
:show-legend="showLegend"
:small-graph="forceSmallGraph"
/>
</graph-group>
</div>
......
......@@ -52,6 +52,16 @@
type: String,
required: true,
},
showLegend: {
type: Boolean,
required: false,
default: true,
},
smallGraph: {
type: Boolean,
required: false,
default: false,
},
},
data() {
......@@ -130,7 +140,7 @@
const breakpointSize = bp.getBreakpointSize();
const query = this.graphData.queries[0];
this.margin = measurements.large.margin;
if (breakpointSize === 'xs' || breakpointSize === 'sm') {
if (this.smallGraph || breakpointSize === 'xs' || breakpointSize === 'sm') {
this.graphHeight = 300;
this.margin = measurements.small.margin;
this.measurements = measurements.small;
......@@ -182,7 +192,9 @@
this.graphHeightOffset,
);
if (this.timeSeries.length > 3) {
if (!this.showLegend) {
this.baseGraphHeight -= 50;
} else if (this.timeSeries.length > 3) {
this.baseGraphHeight += (this.timeSeries.length - 3) * 20;
}
......@@ -255,6 +267,7 @@
:time-series="timeSeries"
:unit-of-display="unitOfDisplay"
:current-data-index="currentDataIndex"
:show-legend-group="showLegend"
/>
<svg
class="graph-data"
......
......@@ -39,6 +39,11 @@
type: Number,
required: true,
},
showLegendGroup: {
type: Boolean,
required: false,
default: true,
},
},
data() {
return {
......@@ -57,8 +62,9 @@
},
rectTransform() {
const yCoordinate = ((this.graphHeight - this.margin.top) / 2)
+ (this.yLabelWidth / 2) + 10 || 0;
const yCoordinate = (((this.graphHeight - this.margin.top)
+ this.measurements.axisLabelLineOffset) / 2)
+ (this.yLabelWidth / 2) || 0;
return `translate(0, ${yCoordinate}) rotate(-90)`;
},
......@@ -166,6 +172,7 @@
>
Time
</text>
<template v-if="showLegendGroup">
<g
class="legend-group"
v-for="(series, index) in timeSeries"
......@@ -200,5 +207,6 @@
{{ legendTitle }} {{ formatMetricUsage(series) }}
</text>
</g>
</template>
</g>
</template>
......@@ -5,12 +5,20 @@
type: String,
required: true,
},
showPanels: {
type: Boolean,
required: false,
default: true,
},
},
};
</script>
<template>
<div class="panel panel-default prometheus-panel">
<div
v-if="showPanels"
class="panel panel-default prometheus-panel"
>
<div class="panel-heading">
<h4>{{ name }}</h4>
</div>
......@@ -18,4 +26,10 @@
<slot></slot>
</div>
</div>
<div
v-else
class="prometheus-graph-group"
>
<slot></slot>
</div>
</template>
import Vue from 'vue';
import { convertPermissionToBoolean } from '~/lib/utils/common_utils';
import Dashboard from './components/dashboard.vue';
export default () => new Vue({
el: '#prometheus-graphs',
render: createElement => createElement(Dashboard),
});
export default () => {
const el = document.getElementById('prometheus-graphs');
if (el && el.dataset) {
// eslint-disable-next-line no-new
new Vue({
el,
render(createElement) {
return createElement(Dashboard, {
props: {
...el.dataset,
hasMetrics: convertPermissionToBoolean(el.dataset.hasMetrics),
},
});
},
});
}
};
......@@ -40,6 +40,9 @@ export default class MonitoringService {
}
getDeploymentData() {
if (!this.deploymentEndpoint) {
return Promise.resolve([]);
}
return backOffRequest(() => axios.get(this.deploymentEndpoint))
.then(resp => resp.data)
.then((response) => {
......
export default {
animation: 200,
forceFallback: true,
fallbackClass: 'is-dragging',
fallbackOnBody: true,
ghostClass: 'is-ghost',
};
......@@ -529,7 +529,8 @@
}
> text {
font-size: 12px;
fill: $theme-gray-600;
font-size: 10px;
}
}
......@@ -573,3 +574,17 @@
}
}
}
// EE-only
.cluster-health-graphs {
.prometheus-state {
.state-svg img {
max-height: 120px;
}
.state-description,
.state-button {
display: none;
}
}
}
module UploadsActions
include Gitlab::Utils::StrongMemoize
include SendFileUpload
UPLOAD_MOUNTS = %w(avatar attachment file logo header_logo).freeze
......@@ -26,14 +27,11 @@ module UploadsActions
def show
return render_404 unless uploader&.exists?
if uploader.file_storage?
disposition = uploader.image_or_video? ? 'inline' : 'attachment'
expires_in 0.seconds, must_revalidate: true, private: true
send_file uploader.file.path, disposition: disposition
else
redirect_to uploader.url
end
disposition = uploader.image_or_video? ? 'inline' : 'attachment'
send_upload(uploader, attachment: uploader.filename, disposition: disposition)
end
private
......
......@@ -64,6 +64,22 @@ class Projects::ClustersController < Projects::ApplicationController
end
end
def metrics
return render_404 unless prometheus_adapter&.can_query?
respond_to do |format|
format.json do
metrics = prometheus_adapter.query(:cluster) || {}
if metrics.any?
render json: metrics
else
head :no_content
end
end
end
end
private
def cluster
......@@ -71,6 +87,12 @@ class Projects::ClustersController < Projects::ApplicationController
.present(current_user: current_user)
end
def prometheus_adapter
return unless cluster&.application_prometheus&.installed?
cluster.application_prometheus
end
def update_params
if cluster.managed?
params.require(:cluster).permit(
......
......@@ -2,6 +2,7 @@ class Projects::GroupLinksController < Projects::ApplicationController
layout 'project_settings'
before_action :authorize_admin_project!
before_action :authorize_admin_project_member!, only: [:update]
before_action :authorize_group_share!, only: [:create]
def index
redirect_to namespace_project_settings_members_path
......@@ -42,6 +43,10 @@ class Projects::GroupLinksController < Projects::ApplicationController
protected
def authorize_group_share!
access_denied! unless project.allowed_to_share_with_group?
end
def group_link_params
params.require(:group_link).permit(:group_access, :expires_at)
end
......
......@@ -17,20 +17,23 @@ class Projects::LfsStorageController < Projects::GitHttpClientController
def upload_authorize
set_workhorse_internal_api_content_type
render json: Gitlab::Workhorse.lfs_upload_ok(oid, size)
end
def upload_finalize
unless tmp_filename
render_lfs_forbidden
return
authorized = LfsObjectUploader.workhorse_authorize
authorized.merge!(LfsOid: oid, LfsSize: size)
render json: authorized
end
if store_file(oid, size, tmp_filename)
def upload_finalize
if store_file!(oid, size)
head 200
else
render plain: 'Unprocessable entity', status: 422
end
rescue ActiveRecord::RecordInvalid
render_400
rescue ObjectStorage::RemoteStoreError
render_lfs_forbidden
end
private
......@@ -51,38 +54,28 @@ class Projects::LfsStorageController < Projects::GitHttpClientController
params[:size].to_i
end
def tmp_filename
name = request.headers['X-Gitlab-Lfs-Tmp']
return if name.include?('/')
return unless oid.present? && name.start_with?(oid)
name
def store_file!(oid, size)
object = LfsObject.find_by(oid: oid, size: size)
unless object&.file&.exists?
object = create_file!(oid, size)
end
def store_file(oid, size, tmp_file)
# Define tmp_file_path early because we use it in "ensure"
tmp_file_path = File.join(LfsObjectUploader.workhorse_upload_path, tmp_file)
return unless object
object = LfsObject.find_or_create_by(oid: oid, size: size)
file_exists = object.file.exists? || move_tmp_file_to_storage(object, tmp_file_path)
file_exists && link_to_project(object)
ensure
FileUtils.rm_f(tmp_file_path)
link_to_project!(object)
end
def move_tmp_file_to_storage(object, path)
File.open(path) do |f|
object.file = f
def create_file!(oid, size)
LfsObject.new(oid: oid, size: size).tap do |object|
object.file.store_workhorse_file!(params, :file)
object.save!
end
object.file.store!
object.save
end
def link_to_project(object)
def link_to_project!(object)
if object && !object.projects.exists?(storage_project.id)
object.projects << storage_project
object.save
object.save!
end
end
end
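The rewritten `upload_finalize` path above replaces `find_or_create_by` plus a tmp-file move with a stricter flow: reuse an existing LFS object whose file exists, otherwise create one with bang methods, then link it to the project idempotently. A minimal in-memory sketch of that control flow, using hypothetical stand-in classes rather than GitLab's actual models:

```ruby
# In-memory sketch of the new store_file!/link_to_project! flow: find an
# existing LFS object for (oid, size), create one if missing, then link
# it to the project exactly once. Stand-in classes, not GitLab's models.
LfsObjectStub = Struct.new(:oid, :size, :project_ids)

class LfsStoreStub
  def initialize
    @objects = {}
  end

  def store_file!(oid, size, project_id)
    # Reuse the existing object if present, otherwise "create" it.
    object = (@objects[[oid, size]] ||= LfsObjectStub.new(oid, size, []))
    link_to_project!(object, project_id)
    object
  end

  private

  # Idempotent: pushing the same blob twice adds at most one link.
  def link_to_project!(object, project_id)
    object.project_ids << project_id unless object.project_ids.include?(project_id)
  end
end

store = LfsStoreStub.new
store.store_file!('abc', 10, 1)
store.store_file!('abc', 10, 1) # second push of the same blob
```

The idempotent link mirrors the controller's `object.projects.exists?` guard.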
......@@ -3,7 +3,8 @@ class Projects::ServicesController < Projects::ApplicationController
# Authorize
before_action :authorize_admin_project!
before_action :service, only: [:edit, :update, :test]
before_action :ensure_service_enabled
before_action :service
respond_to :html
......@@ -23,26 +24,30 @@ class Projects::ServicesController < Projects::ApplicationController
end
def test
message = {}
if @service.can_test?
render json: service_test_response, status: :ok
else
render json: {}, status: :not_found
end
end
if @service.can_test? && @service.update_attributes(service_params[:service])
private
def service_test_response
if @service.update_attributes(service_params[:service])
data = @service.test_data(project, current_user)
outcome = @service.test(data)
unless outcome[:success]
message = { error: true, message: 'Test failed.', service_response: outcome[:result].to_s }
if outcome[:success]
{}
else
{ error: true, message: 'Test failed.', service_response: outcome[:result].to_s }
end
status = :ok
else
status = :not_found
{ error: true, message: 'Validations failed.', service_response: @service.errors.full_messages.join(',') }
end
render json: message, status: status
end
private
def success_message
if @service.active?
"#{@service.title} activated."
......@@ -54,4 +59,8 @@ class Projects::ServicesController < Projects::ApplicationController
def service
@service ||= @project.find_or_initialize_service(params[:id])
end
def ensure_service_enabled
render_404 unless service
end
end
......@@ -450,6 +450,10 @@ module ProjectsHelper
end
end
def project_can_be_shared?
!membership_locked? || @project.allowed_to_share_with_group?
end
def membership_locked?
if @project.group && @project.group.membership_lock
true
......@@ -458,6 +462,24 @@ module ProjectsHelper
end
end
def share_project_description
share_with_group = @project.allowed_to_share_with_group?
share_with_members = !membership_locked?
project_name = content_tag(:strong, @project.name)
member_message = "You can add a new member to #{project_name}"
description =
if share_with_group && share_with_members
"#{member_message} or share it with another group."
elsif share_with_group
"You can share #{project_name} with another group."
elsif share_with_members
"#{member_message}."
end
description.to_s.html_safe
end
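The `share_project_description` helper above selects one of three messages from two booleans (group sharing allowed, membership not locked). A plain-Ruby sketch of that branching, minus the `content_tag`/`html_safe` helpers:

```ruby
# Sketch of the share_project_description branching: two booleans pick
# one of three messages, or an empty string when both are false (the
# helper's description.to_s on nil).
def share_description(project_name, share_with_group:, share_with_members:)
  member_message = "You can add a new member to #{project_name}"

  if share_with_group && share_with_members
    "#{member_message} or share it with another group."
  elsif share_with_group
    "You can share #{project_name} with another group."
  elsif share_with_members
    "#{member_message}."
  else
    ""
  end
end

share_description('gitlab-ce', share_with_group: true, share_with_members: false)
# => "You can share gitlab-ce with another group."
```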
def readme_cache_key
sha = @project.commit.try(:sha) || 'nil'
[@project.full_path, sha, "readme"].join('-')
......
......@@ -62,7 +62,8 @@ module Ci
schedule: 4,
api: 5,
external: 6,
pipeline: 7
pipeline: 7,
chat: 8
}
enum config_source: {
......
......@@ -9,6 +9,7 @@ class Commit
include Mentionable
include Referable
include StaticModel
include ::Gitlab::Utils::StrongMemoize
attr_mentionable :safe_message, pipeline: :single_line
......@@ -225,11 +226,13 @@ class Commit
end
def parents
@parents ||= parent_ids.map { |id| project.commit(id) }
@parents ||= parent_ids.map { |oid| Commit.lazy(project, oid) }
end
def parent
@parent ||= project.commit(self.parent_id) if self.parent_id
strong_memoize(:parent) do
project.commit_by(oid: self.parent_id) if self.parent_id
end
end
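The switch from `@parent ||=` to `strong_memoize(:parent)` above matters because `||=` re-runs the lookup whenever the cached value is falsy, e.g. a root commit whose parent is `nil`. A minimal plain-Ruby sketch of the pattern's semantics (not GitLab's actual `Gitlab::Utils::StrongMemoize` module):

```ruby
# Minimal sketch of the strong_memoize pattern: cache by "was this
# computed?" rather than by truthiness, so nil/false results are
# memoized too. Plain Ruby; names are illustrative.
module StrongMemoizeSketch
  def strong_memoize(name)
    @_memo ||= {}
    @_memo.key?(name) ? @_memo[name] : (@_memo[name] = yield)
  end
end

class CommitLike
  include StrongMemoizeSketch

  attr_reader :lookups

  def initialize
    @lookups = 0
  end

  # Returns nil (a root commit has no parent); with plain ||= this
  # lookup would re-run on every call.
  def parent
    strong_memoize(:parent) do
      @lookups += 1
      nil
    end
  end
end

commit = CommitLike.new
commit.parent
commit.parent
# lookups stays at 1: the nil result was cached.
```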
def notes
......
......@@ -9,6 +9,12 @@ class LfsObject < ActiveRecord::Base
mount_uploader :file, LfsObjectUploader
before_save :update_file_store
def update_file_store
self.file_store = file.object_store
end
def project_allowed_access?(project)
projects.exists?(project.lfs_storage_project.id)
end
......
......@@ -281,7 +281,8 @@ class Project < ActiveRecord::Base
scope :without_storage_feature, ->(feature) { where('storage_version < :version OR storage_version IS NULL', version: HASHED_STORAGE_FEATURES[feature]) }
scope :with_unmigrated_storage, -> { where('storage_version < :version OR storage_version IS NULL', version: LATEST_STORAGE_VERSION) }
scope :sorted_by_activity, -> { reorder(last_activity_at: :desc) }
# last_activity_at is throttled every minute, but last_repository_updated_at is updated with every push
scope :sorted_by_activity, -> { reorder("GREATEST(COALESCE(last_activity_at, '1970-01-01'), COALESCE(last_repository_updated_at, '1970-01-01')) DESC") }
scope :sorted_by_stars, -> { reorder('projects.star_count DESC') }
scope :in_namespace, ->(namespace_ids) { where(namespace_id: namespace_ids) }
......@@ -789,7 +790,7 @@ class Project < ActiveRecord::Base
end
def last_activity_date
last_repository_updated_at || last_activity_at || updated_at
[last_activity_at, last_repository_updated_at, updated_at].compact.max
end
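The `last_activity_date` change above replaces a fixed `||` precedence, where a stale `last_repository_updated_at` could shadow newer activity, with the most recent of whichever timestamps are set; `compact` drops the nils before `max`. A runnable sketch:

```ruby
require 'time'

# Sketch of the new last_activity_date logic: take the latest of the
# timestamps that are actually present, instead of a fixed || order.
def last_activity_date(last_activity_at:, last_repository_updated_at:, updated_at:)
  [last_activity_at, last_repository_updated_at, updated_at].compact.max
end

updated = Time.parse('2018-02-01 10:00:00 UTC')
pushed  = Time.parse('2018-02-20 09:30:00 UTC')

# nil entries are ignored; the newest remaining timestamp wins.
last_activity_date(last_activity_at: nil,
                   last_repository_updated_at: pushed,
                   updated_at: updated) # => the push time
```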
def project_id
......
class SlackSlashCommandsService < SlashCommandsService
prepend EE::SlackSlashCommandsService
include TriggersHelper
def title
......
# To add new service you should build a class inherited from Service
# and implement a set of methods
class Service < ActiveRecord::Base
prepend EE::Service
include Sortable
include Importable
......@@ -129,6 +130,17 @@ class Service < ActiveRecord::Base
fields
end
def configurable_events
events = self.class.supported_events
# No need to disable individual triggers when there is only one
if events.count == 1
[]
else
events
end
end
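`configurable_events` above collapses to an empty list when a service supports exactly one event, so the settings form (which checks `configurable_events.present?`) renders no trigger checkboxes there is nothing meaningful to toggle. A standalone sketch of that rule:

```ruby
# Sketch: with a single supported event there is nothing for the user
# to toggle individually, so the configurable list is empty.
def configurable_events(supported_events)
  supported_events.count == 1 ? [] : supported_events
end

configurable_events(%w[push])          # => []
configurable_events(%w[push tag_push]) # => ["push", "tag_push"]
```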
def supported_events
self.class.supported_events
end
......@@ -242,8 +254,6 @@ class Service < ActiveRecord::Base
gemnasium
hipchat
irker
jenkins
jenkins_deprecated
jira
kubernetes
mattermost_slash_commands
......
......@@ -12,6 +12,7 @@ class Upload < ActiveRecord::Base
validates :uploader, presence: true
scope :with_files_stored_locally, -> { where(store: [nil, ObjectStorage::Store::LOCAL]) }
scope :with_files_stored_remotely, -> { where(store: ObjectStorage::Store::REMOTE) }
before_save :calculate_checksum!, if: :foreground_checksummable?
after_commit :schedule_checksum, if: :checksummable?
......
......@@ -3,6 +3,7 @@ module Ci
attr_reader :pipeline
SEQUENCE = [Gitlab::Ci::Pipeline::Chain::Build,
EE::Gitlab::Ci::Pipeline::Chain::RemoveUnwantedChatJobs,
Gitlab::Ci::Pipeline::Chain::Validate::Abilities,
Gitlab::Ci::Pipeline::Chain::Validate::Repository,
Gitlab::Ci::Pipeline::Chain::Validate::Config,
......@@ -29,7 +30,8 @@ module Ci
current_user: current_user,
# EE specific
allow_mirror_update: mirror_update
allow_mirror_update: mirror_update,
chat_data: params[:chat_data]
)
sequence = Gitlab::Ci::Pipeline::Chain::Sequence
......
......@@ -2,11 +2,6 @@ class LfsObjectUploader < GitlabUploader
extend Workhorse::UploadPath
include ObjectStorage::Concern
# LfsObject are in `tmp/upload` instead of `tmp/uploads`
def self.workhorse_upload_path
File.join(root, 'tmp/upload')
end
storage_options Gitlab.config.lfs
def filename
......
class UntrustedRegexpValidator < ActiveModel::EachValidator
def validate_each(record, attribute, value)
return unless value
Gitlab::UntrustedRegexp.new(value)
rescue RegexpError => e
record.errors.add(attribute, "not valid RE2 syntax: #{e.message}")
end
end
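The validator above rejects push-rule patterns that RE2 cannot compile, at save time rather than at match time. GitLab compiles with RE2 via `Gitlab::UntrustedRegexp`; as a runnable stand-in here, plain Ruby's `Regexp.new` raises `RegexpError` in the same raise/rescue shape (RE2's accepted syntax differs, e.g. no backreferences, so this is only an illustration of the pattern):

```ruby
# Stand-in for the validator's rescue pattern using Ruby's own Regexp.
# GitLab actually compiles with RE2 (Gitlab::UntrustedRegexp); the
# accepted syntax differs, but the error-handling shape is the same.
def regexp_error_for(pattern)
  Regexp.new(pattern)
  nil
rescue RegexpError => e
  "not valid regexp syntax: #{e.message}"
end

regexp_error_for('feature/.*') # => nil (compiles fine)
regexp_error_for('feature/(*') # => "not valid regexp syntax: ..."
```

In the validator, a non-nil result becomes `record.errors.add(attribute, ...)`.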
......@@ -22,6 +22,10 @@
.js-cluster-application-notice
.flash-container
-# EE-specific
- if @cluster.project.feature_available?(:cluster_health)
= render 'health'
%section.settings.no-animate.expanded#cluster-integration
= render 'banner'
= render 'integration_form'
......
......@@ -47,7 +47,7 @@
#{ commit_text.html_safe }
- if show_project_name
%span.project_namespace
= project.name_with_namespace
= project.full_name
.commit-actions.flex-row.hidden-xs
- if request.xhr?
......
......@@ -15,7 +15,8 @@
"empty-getting-started-svg-path": image_path('illustrations/monitoring/getting_started.svg'),
"empty-loading-svg-path": image_path('illustrations/monitoring/loading.svg'),
"empty-unable-to-connect-svg-path": image_path('illustrations/monitoring/unable_to_connect.svg'),
"additional-metrics": additional_metrics_project_environment_path(@project, @environment, format: :json),
"metrics-endpoint": additional_metrics_project_environment_path(@project, @environment, format: :json),
"deployment-endpoint": project_environment_deployments_path(@project, @environment, format: :json),
"project-path": project_path(@project),
"tags-path": project_tags_path(@project),
"has-metrics": "#{@environment.has_metrics?}", deployment_endpoint: project_environment_deployments_path(@project, @environment, format: :json) } }
"has-metrics": "#{@environment.has_metrics?}" } }
- page_title "Members"
- can_admin_project_members = can?(current_user, :admin_project_member, @project)
.row.prepend-top-default
.col-lg-12
- if project_can_be_shared?
%h4
Project members
- if can?(current_user, :admin_project_member, @project)
%p
You can add a new member to
%strong= @project.name
or share it with another group.
- if can_admin_project_members
%p= share_project_description
- else
%p
Members can be added by project
%i Masters
or
%i Owners
.light
- if can?(current_user, :admin_project_member, @project)
- if can_admin_project_members && project_can_be_shared?
- if !membership_locked? && @project.allowed_to_share_with_group?
%ul.nav-links.gitlab-tabs{ role: 'tablist' }
- if !membership_locked?
%li.active{ role: 'presentation' }
%a{ href: '#add-member-pane', id: 'add-member-tab', data: { toggle: 'tab' }, role: 'tab' } Add member
- if @project.allowed_to_share_with_group?
%li{ role: 'presentation', class: ('active' if membership_locked?) }
%a{ href: '#share-with-group-pane', id: 'share-with-group-tab', data: { toggle: 'tab' }, role: 'tab' } Share with group
.tab-content.gitlab-tab-content
- if !membership_locked?
.tab-pane.active{ id: 'add-member-pane', role: 'tabpanel' }
= render 'projects/project_members/new_project_member', tab_title: 'Add member'
.tab-pane{ id: 'share-with-group-pane', role: 'tabpanel', class: ('active' if membership_locked?) }
= render 'projects/project_members/new_shared_group', tab_title: 'Share with group'
- elsif !membership_locked?
.add-member= render 'projects/project_members/new_project_member', tab_title: 'Add member'
- elsif @project.allowed_to_share_with_group?
.share-with-group= render 'projects/project_members/new_shared_group', tab_title: 'Share with group'
= render 'shared/members/requests', membership_source: @project, requesters: @requesters
.clearfix
......
......@@ -5,6 +5,9 @@
= boolean_to_icon @service.activated?
%p= @service.description
- if @service.respond_to?(:detailed_description)
%p= @service.detailed_description
.col-lg-9
= form_for(@service, as: :service, url: project_service_path(@project, @service.to_param), method: :put, html: { class: 'gl-show-field-errors form-horizontal integration-settings-form js-integration-settings-form', data: { 'can-test' => @service.can_test?, 'test-url' => test_project_service_path(@project, @service) } }) do |form|
= render 'shared/service_settings', form: form, subject: @service
......
- @no_container = true
- @sort ||= sort_value_recently_updated
- page_title s_('TagsPage|Tags')
- add_to_breadcrumbs("Repository", project_tree_path(@project))
.flex-list{ class: container_class }
.top-area.adjust
......
......@@ -16,7 +16,7 @@
- if @project
= file_name
- else
#{project.name_with_namespace}:
#{project.full_name}:
%i= file_name
- if blob.data
.file-content.code.term
......
......@@ -10,7 +10,7 @@
- if @project
= wiki_blob.basename
- else
#{project.name_with_namespace}:
#{project.full_name}:
%i= wiki_blob.basename
.file-content.code.term
= render 'shared/file_highlight', blob: wiki_blob, first_line_number: wiki_blob.startline
......@@ -13,12 +13,12 @@
.col-sm-10
= form.check_box :active, disabled: disable_fields_service?(@service)
- if @service.supported_events.present?
- if @service.configurable_events.present?
.form-group
= form.label :url, "Trigger", class: 'control-label'
.col-sm-10
- @service.supported_events.each do |event|
- @service.configurable_events.each do |event|
%div
= form.check_box service_event_field_name(event), class: 'pull-left'
.prepend-left-20
......
......@@ -48,7 +48,6 @@
- pipeline_default:build_trace_sections
- pipeline_default:pipeline_metrics
- pipeline_default:pipeline_notification
- pipeline_default:update_head_pipeline_for_merge_request
- pipeline_hooks:build_hooks
- pipeline_hooks:pipeline_hooks
- pipeline_processing:build_finished
......@@ -58,6 +57,7 @@
- pipeline_processing:pipeline_success
- pipeline_processing:pipeline_update
- pipeline_processing:stage_update
- pipeline_processing:update_head_pipeline_for_merge_request
- repository_check:repository_check_clear
- repository_check:repository_check_single_repository
......@@ -145,3 +145,4 @@
- rebase
- repository_update_mirror
- repository_update_remote_mirror
- chat_notification
class BuildFinishedWorker
prepend EE::BuildFinishedWorker
include ApplicationWorker
include PipelineQueue
......
......@@ -2,6 +2,8 @@ class UpdateHeadPipelineForMergeRequestWorker
include ApplicationWorker
include PipelineQueue
queue_namespace :pipeline_processing
def perform(merge_request_id)
merge_request = MergeRequest.find(merge_request_id)
pipeline = Ci::Pipeline.where(project: merge_request.source_project, ref: merge_request.source_branch).last
......
---
title: Project can no longer be shared between groups when both member and group locks
are active
merge_request:
author:
type: security
---
title: Prevent new push rules from using non-RE2 regexes
merge_request:
author:
type: security
---
title: Fix LDAP group sync no longer configurable for regular users
merge_request:
author:
type: fixed
---
title: Remove extra breadcrumb on tags
merge_request: 17562
author: Takuya Noguchi
type: fixed
---
title: Started translation into Turkish, Indonesian and Filipino
merge_request: 17526
author:
type: other
---
title: Add one group board to Libre
merge_request:
author:
type: added
---
title: Fix project dashboard showing the wrong timestamps
merge_request:
author:
type: fixed
---
title: Upgrade GitLab Workhorse to 4.0.0
merge_request:
author:
type: added
......@@ -149,6 +149,7 @@ production: &base
# enabled: false
# remote_directory: artifacts # The bucket name
# background_upload: false # Temporary option to limit automatic upload (Default: true)
# proxy_download: false # Pass all downloads through GitLab instead of redirecting to Object Storage
# connection:
# provider: AWS # Only AWS supported at the moment
# aws_access_key_id: AWS_ACCESS_KEY_ID
......@@ -164,6 +165,7 @@ production: &base
enabled: false
remote_directory: lfs-objects # Bucket name
# background_upload: false # Temporary option to limit automatic upload (Default: true)
# proxy_download: false # Pass all downloads through GitLab instead of redirecting to Object Storage
connection:
provider: AWS
aws_access_key_id: AWS_ACCESS_KEY_ID
......@@ -183,6 +185,7 @@ production: &base
enabled: true
remote_directory: uploads # Bucket name
# background_upload: false # Temporary option to limit automatic upload (Default: true)
# proxy_download: false # Pass all downloads through GitLab instead of redirecting to Object Storage
connection:
provider: AWS
aws_access_key_id: AWS_ACCESS_KEY_ID
......@@ -791,7 +794,7 @@ test:
provider: AWS # Only AWS supported at the moment
aws_access_key_id: AWS_ACCESS_KEY_ID
aws_secret_access_key: AWS_SECRET_ACCESS_KEY
region: eu-central-1
region: us-east-1
artifacts:
path: tmp/tests/artifacts
enabled: true
......@@ -805,7 +808,7 @@ test:
provider: AWS # Only AWS supported at the moment
aws_access_key_id: AWS_ACCESS_KEY_ID
aws_secret_access_key: AWS_SECRET_ACCESS_KEY
region: eu-central-1
region: us-east-1
uploads:
storage_path: tmp/tests/public
enabled: true
......@@ -815,7 +818,7 @@ test:
provider: AWS # Only AWS supported at the moment
aws_access_key_id: AWS_ACCESS_KEY_ID
aws_secret_access_key: AWS_SECRET_ACCESS_KEY
region: eu-central-1
region: us-east-1
gitlab:
host: localhost
port: 80
......
......@@ -351,6 +351,7 @@ Settings.artifacts['object_store'] ||= Settingslogic.new({})
Settings.artifacts['object_store']['enabled'] = false if Settings.artifacts['object_store']['enabled'].nil?
Settings.artifacts['object_store']['remote_directory'] ||= nil
Settings.artifacts['object_store']['background_upload'] = true if Settings.artifacts['object_store']['background_upload'].nil?
Settings.artifacts['object_store']['proxy_download'] = false if Settings.artifacts['object_store']['proxy_download'].nil?
# Convert upload connection settings to use string keys, to make Fog happy
Settings.artifacts['object_store']['connection']&.deep_stringify_keys!
......@@ -396,7 +397,9 @@ Settings.lfs['storage_path'] = Settings.absolute(Settings.lfs['storage_path'] ||
Settings.lfs['object_store'] ||= Settingslogic.new({})
Settings.lfs['object_store']['enabled'] = false if Settings.lfs['object_store']['enabled'].nil?
Settings.lfs['object_store']['remote_directory'] ||= nil
Settings.lfs['object_store']['direct_upload'] = false if Settings.lfs['object_store']['direct_upload'].nil?
Settings.lfs['object_store']['background_upload'] = true if Settings.lfs['object_store']['background_upload'].nil?
Settings.lfs['object_store']['proxy_download'] = false if Settings.lfs['object_store']['proxy_download'].nil?
# Convert upload connection settings to use string keys, to make Fog happy
Settings.lfs['object_store']['connection']&.deep_stringify_keys!
......@@ -410,6 +413,7 @@ Settings.uploads['object_store'] ||= Settingslogic.new({})
Settings.uploads['object_store']['enabled'] = false if Settings.uploads['object_store']['enabled'].nil?
Settings.uploads['object_store']['remote_directory'] ||= 'uploads'
Settings.uploads['object_store']['background_upload'] = true if Settings.uploads['object_store']['background_upload'].nil?
Settings.uploads['object_store']['proxy_download'] = false if Settings.uploads['object_store']['proxy_download'].nil?
# Convert upload connection settings to use string keys, to make Fog happy
Settings.uploads['object_store']['connection']&.deep_stringify_keys!
......
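The defaulting pattern in the initializer above uses `= default if …nil?` rather than `||=` deliberately: `||=` would overwrite an explicit `false`. A minimal standalone sketch (a plain hash stands in for Settingslogic):

```ruby
# Sketch of why the settings code writes `x = true if x.nil?` instead of
# `x ||= true`: `||=` treats an explicit false the same as nil.
settings = { 'background_upload' => false }

settings['background_upload'] ||= true
puts settings['background_upload'] # => true  (the explicit false was clobbered)

settings = { 'background_upload' => false }
settings['background_upload'] = true if settings['background_upload'].nil?
puts settings['background_upload'] # => false (the explicit false is preserved)
```

This is why every `proxy_download` and `background_upload` default above checks `.nil?` explicitly.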
......@@ -28,16 +28,4 @@ if File.exist?(aws_file)
# when fog_public is false and provider is AWS or Google, defaults to 600
config.fog_authenticated_url_expiration = 1 << 29
end
# Mocking Fog requests, based on: https://github.com/carrierwaveuploader/carrierwave/wiki/How-to%3A-Test-Fog-based-uploaders
if Rails.env.test?
Fog.mock!
connection = ::Fog::Storage.new(
aws_access_key_id: AWS_CONFIG['access_key_id'],
aws_secret_access_key: AWS_CONFIG['secret_access_key'],
provider: 'AWS',
region: AWS_CONFIG['region']
)
connection.directories.create(key: AWS_CONFIG['bucket'])
end
end
- group: Cluster Health
priority: 1
metrics:
- title: "CPU Usage"
y_label: "CPU"
required_metrics: ['container_cpu_usage_seconds_total']
weight: 1
queries:
- query_range: 'avg(sum(rate(container_cpu_usage_seconds_total{id="/"}[15m])) by (job)) without (job)'
label: Usage
unit: "cores"
- query_range: 'sum(kube_node_status_capacity_cpu_cores{kubernetes_namespace="gitlab-managed-apps"})'
label: Capacity
unit: "cores"
- title: "Memory usage"
y_label: "Memory"
required_metrics: ['container_memory_usage_bytes']
weight: 1
queries:
- query_range: 'avg(sum(container_memory_usage_bytes{id="/"}) by (job)) without (job) / 2^30'
label: Usage
unit: "GiB"
- query_range: 'sum(kube_node_status_capacity_memory_bytes{kubernetes_namespace="gitlab-managed-apps"})/2^30'
label: Capacity
unit: "GiB"
\ No newline at end of file
......@@ -69,7 +69,7 @@ constraints(ProjectUrlConstrainer.new) do
end
end
resources :services, constraints: { id: %r{[^/]+} }, only: [:index, :edit, :update] do
resources :services, constraints: { id: %r{[^/]+} }, only: [:edit, :update] do
member do
put :test
end
......@@ -244,6 +244,7 @@ constraints(ProjectUrlConstrainer.new) do
member do
get :status, format: :json
get :metrics, format: :json
scope :applications do
post '/:application', to: 'clusters/applications#create', as: :install_applications
......
......@@ -74,6 +74,7 @@
# EE-specific queues
- [ldap_group_sync, 2]
- [create_github_webhook, 2]
- [chat_notification, 2]
- [geo, 1]
- [repository_remove_remote, 1]
- [repository_update_mirror, 1]
......
class AddRegexpUsesRe2ToPushRules < ActiveRecord::Migration
include Gitlab::Database::MigrationHelpers
DOWNTIME = false
def up
# Default the value to true for new rows while keeping NULL for existing ones
add_column :push_rules, :regexp_uses_re2, :boolean
change_column_default :push_rules, :regexp_uses_re2, true
end
def down
remove_column :push_rules, :regexp_uses_re2
end
end
class AddGroupIdToBoardsCe < ActiveRecord::Migration
include Gitlab::Database::MigrationHelpers
disable_ddl_transaction!
DOWNTIME = false
def up
return if group_id_exists?
add_column :boards, :group_id, :integer
add_foreign_key :boards, :namespaces, column: :group_id, on_delete: :cascade
add_concurrent_index :boards, :group_id
change_column_null :boards, :project_id, true
end
def down
return unless group_id_exists?
remove_foreign_key :boards, column: :group_id
remove_index :boards, :group_id if index_exists? :boards, :group_id
remove_column :boards, :group_id
execute "DELETE from boards WHERE project_id IS NULL"
change_column_null :boards, :project_id, false
end
private
def group_id_exists?
column_exists?(:boards, :group_id)
end
end
class MigrateUpdateHeadPipelineForMergeRequestSidekiqQueue < ActiveRecord::Migration
include Gitlab::Database::MigrationHelpers
DOWNTIME = false
def up
sidekiq_queue_migrate 'pipeline_default:update_head_pipeline_for_merge_request',
to: 'pipeline_processing:update_head_pipeline_for_merge_request'
end
def down
sidekiq_queue_migrate 'pipeline_processing:update_head_pipeline_for_merge_request',
to: 'pipeline_default:update_head_pipeline_for_merge_request'
end
end
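Conceptually, `sidekiq_queue_migrate` drains every pending job from the old queue into the new one. A pure-Ruby simulation of that rename (illustrative only; the real helper moves entries between Redis lists):

```ruby
# Simulates renaming a Sidekiq queue: move all pending jobs from the old
# queue to the new one, preserving order, and drop the old queue key.
def migrate_queue(queues, from:, to:)
  queues[to] ||= []
  queues[to].concat(queues.delete(from) || [])
  queues
end

queues = { 'pipeline_default:update_head_pipeline_for_merge_request' => [:job_a, :job_b] }
migrate_queue(queues,
              from: 'pipeline_default:update_head_pipeline_for_merge_request',
              to: 'pipeline_processing:update_head_pipeline_for_merge_request')
# queues now holds both jobs under the new queue name only
```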
......@@ -11,7 +11,7 @@
#
# It's strongly recommended that you check this file into your version control system.
ActiveRecord::Schema.define(version: 20180306074045) do
ActiveRecord::Schema.define(version: 20180307012445) do
# These are extensions that must be enabled in order to support this database
enable_extension "plpgsql"
......@@ -434,6 +434,14 @@ ActiveRecord::Schema.define(version: 20180306074045) do
add_index "ci_job_artifacts", ["job_id", "file_type"], name: "index_ci_job_artifacts_on_job_id_and_file_type", unique: true, using: :btree
add_index "ci_job_artifacts", ["project_id"], name: "index_ci_job_artifacts_on_project_id", using: :btree
create_table "ci_pipeline_chat_data", id: :bigserial, force: :cascade do |t|
t.integer "pipeline_id", null: false
t.integer "chat_name_id", null: false
t.text "response_url", null: false
end
add_index "ci_pipeline_chat_data", ["pipeline_id"], name: "index_ci_pipeline_chat_data_on_pipeline_id", unique: true, using: :btree
create_table "ci_pipeline_schedule_variables", force: :cascade do |t|
t.string "key", null: false
t.text "value"
......@@ -2061,6 +2069,7 @@ ActiveRecord::Schema.define(version: 20180306074045) do
t.string "branch_name_regex"
t.boolean "reject_unsigned_commits"
t.boolean "commit_committer_check"
t.boolean "regexp_uses_re2", default: true
end
add_index "push_rules", ["is_sample"], name: "index_push_rules_on_is_sample", where: "is_sample", using: :btree
......@@ -2536,6 +2545,8 @@ ActiveRecord::Schema.define(version: 20180306074045) do
add_foreign_key "ci_group_variables", "namespaces", column: "group_id", name: "fk_33ae4d58d8", on_delete: :cascade
add_foreign_key "ci_job_artifacts", "ci_builds", column: "job_id", on_delete: :cascade
add_foreign_key "ci_job_artifacts", "projects", on_delete: :cascade
add_foreign_key "ci_pipeline_chat_data", "chat_names", on_delete: :cascade
add_foreign_key "ci_pipeline_chat_data", "ci_pipelines", column: "pipeline_id", on_delete: :cascade
add_foreign_key "ci_pipeline_schedule_variables", "ci_pipeline_schedules", column: "pipeline_schedule_id", name: "fk_41c35fda51", on_delete: :cascade
add_foreign_key "ci_pipeline_schedules", "projects", name: "fk_8ead60fcc4", on_delete: :cascade
add_foreign_key "ci_pipeline_schedules", "users", column: "owner_id", name: "fk_9ea99f58d2", on_delete: :nullify
......
......@@ -340,7 +340,6 @@ data before running `pg_basebackup`.
echo Cleaning up old cluster directory
sudo -u postgres rm -rf /var/opt/gitlab/postgresql/data
rm -f /tmp/postgresql.trigger
echo Starting base backup as the replicator user
echo Enter the password for $USER@$HOST
......@@ -350,7 +349,6 @@ data before running `pg_basebackup`.
sudo -u postgres bash -c "cat > /var/opt/gitlab/postgresql/data/recovery.conf <<- _EOF1_
standby_mode = 'on'
primary_conninfo = 'host=$HOST port=$PORT user=$USER password=$PASSWORD sslmode=$SSLMODE'
trigger_file = '/tmp/postgresql.trigger'
_EOF1_
"
......
......@@ -99,6 +99,32 @@ artifacts, you can use an object storage like AWS S3 instead.
This configuration relies on valid AWS credentials to be configured already.
Use an [Object storage option][ee-os] like AWS S3 to store job artifacts.
### Object Storage Settings
For source installations, the following settings are nested under `artifacts:` and then `object_store:`. In Omnibus installations, they are prefixed by `artifacts_object_store_`.
| Setting | Description | Default |
|---------|-------------|---------|
| `enabled` | Enable/disable object storage | `false` |
| `remote_directory` | The bucket name where artifacts will be stored | |
| `background_upload` | Set to `false` to disable automatic upload. This option may be removed once uploads go directly to S3 | `true` |
| `proxy_download` | Set to `true` to proxy all files served through GitLab. Leaving it `false` reduces egress traffic, because clients download directly from remote storage instead of having GitLab proxy the data | `false` |
| `connection` | Various connection options described below | |
#### S3 compatible connection settings
The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:
| Setting | Description | Default |
|---------|-------------|---------|
| `provider` | Always `AWS` for compatible hosts | AWS |
| `aws_access_key_id` | AWS credentials, or compatible | |
| `aws_secret_access_key` | AWS credentials, or compatible | |
| `region` | AWS region | us-east-1 |
| `host` | S3-compatible host to use when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
| `endpoint` | Can be used when configuring an S3 compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | false |
**In Omnibus installations:**
_The artifacts are stored by default in
......
......@@ -57,6 +57,32 @@ If you don't want to use the local disk where GitLab is installed to store the
uploads, you can use an object storage provider like AWS S3 instead.
This configuration relies on valid AWS credentials to be configured already.
### Object Storage Settings
For source installations, the following settings are nested under `uploads:` and then `object_store:`. In Omnibus installations, they are prefixed by `uploads_object_store_`.
| Setting | Description | Default |
|---------|-------------|---------|
| `enabled` | Enable/disable object storage | `false` |
| `remote_directory` | The bucket name where uploads will be stored | |
| `background_upload` | Set to `false` to disable automatic upload. This option may be removed once uploads go directly to S3 | `true` |
| `proxy_download` | Set to `true` to proxy all files served through GitLab. Leaving it `false` reduces egress traffic, because clients download directly from remote storage instead of having GitLab proxy the data | `false` |
| `connection` | Various connection options described below | |
#### S3 compatible connection settings
The connection settings match those provided by [Fog](https://github.com/fog), and are as follows:
| Setting | Description | Default |
|---------|-------------|---------|
| `provider` | Always `AWS` for compatible hosts | AWS |
| `aws_access_key_id` | AWS credentials, or compatible | |
| `aws_secret_access_key` | AWS credentials, or compatible | |
| `region` | AWS region | us-east-1 |
| `host` | S3-compatible host to use when not using AWS, e.g. `localhost` or `storage.example.com` | s3.amazonaws.com |
| `endpoint` | Can be used when configuring an S3 compatible service such as [Minio](https://www.minio.io), by entering a URL such as `http://127.0.0.1:9000` | (optional) |
| `path_style` | Set to true to use `host/bucket_name/object` style paths instead of `bucket_name.host/object`. Leave as false for AWS S3 | false |
**In Omnibus installations:**
_The uploads are stored by default in
......
......@@ -42,7 +42,7 @@ following locations:
- [Group milestones](group_milestones.md)
- [Namespaces](namespaces.md)
- [Notes](notes.md) (comments)
- [Threaded comments](discussions.md)
- [Discussions](discussions.md) (threaded comments)
- [Notification settings](notification_settings.md)
- [Open source license templates](templates/licenses.md)
- [Pages Domains](pages_domains.md)
......
# Notes API
Notes are comments on snippets, issues or merge requests.
Notes are comments on snippets, issues, merge requests or epics.
## Issues
......
......@@ -139,6 +139,14 @@ CREATE EXTENSION pg_trgm;
On some systems you may need to install an additional package (e.g.
`postgresql-contrib`) for this extension to become available.
#### Additional requirements for GitLab Geo
If you are using [GitLab Geo](https://docs.gitlab.com/ee/development/geo.html), the [tracking database](https://docs.gitlab.com/ee/development/geo.html#geo-tracking-database) also requires the `postgres_fdw` extension.
```sql
CREATE EXTENSION postgres_fdw;
```
## Unicorn Workers
It's possible to increase the amount of unicorn workers and this will usually help to reduce the response time of the applications and increase the ability to handle parallel requests.
......
......@@ -66,14 +66,14 @@ The following options are available.
| Check whether committer is the current authenticated user | **Premium** 10.2 | GitLab will reject any commit that was not committed by the current authenticated user |
| Check whether commit is signed through GPG | **Premium** 10.1 | Reject commit when it is not signed through GPG. Read [signing commits with GPG][signing-commits]. |
| Prevent committing secrets to Git | **Starter** 8.12 | GitLab will reject any files that are likely to contain secrets. Read [what files are forbidden](#prevent-pushing-secrets-to-the-repository). |
| Restrict by commit message | **Starter** 7.10 | Only commit messages that match this Ruby regular expression are allowed to be pushed. Leave empty to allow any commit message. |
| Restrict by branch name | **Starter** 9.3 | Only branch names that match this Ruby regular expression are allowed to be pushed. Leave empty to allow any branch name. |
| Restrict by commit author's email | **Starter** 7.10 | Only commit author's email that match this Ruby regular expression are allowed to be pushed. Leave empty to allow any email. |
| Prohibited file names | **Starter** 7.10 | Any committed filenames that match this Ruby regular expression are not allowed to be pushed. Leave empty to allow any filenames. |
| Restrict by commit message | **Starter** 7.10 | Only commit messages that match this regular expression are allowed to be pushed. Leave empty to allow any commit message. Uses multiline mode, which can be disabled using `(?-m)`. |
| Restrict by branch name | **Starter** 9.3 | Only branch names that match this regular expression are allowed to be pushed. Leave empty to allow any branch name. |
| Restrict by commit author's email | **Starter** 7.10 | Only commits whose author's email matches this regular expression are allowed to be pushed. Leave empty to allow any email. |
| Prohibited file names | **Starter** 7.10 | Any committed filenames that match this regular expression are not allowed to be pushed. Leave empty to allow any filenames. |
| Maximum file size | **Starter** 7.12 | Pushes that contain added or updated files that exceed this file size (in MB) are rejected. Set to 0 to allow files of any size. |
>**Tip:**
You can check your regular expressions at <http://rubular.com>.
GitLab uses [RE2 syntax](https://github.com/google/re2/wiki/Syntax) for regular expressions in push rules. You can check your regular expressions at <https://regex-golang.appspot.com>.
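Push rules match in multiline mode, so `^` and `$` anchor at line boundaries within a commit message. Ruby's own regex anchors behave the same way by default, which makes for a quick illustration (the message and pattern are made up):

```ruby
# In multiline mode, ^ matches at the start of each line, not only at
# the start of the whole commit message. Ruby's ^ behaves this way by
# default, so it can stand in for an RE2 push-rule pattern here.
message = "Refactor widgets\n\nJIRA-123: fix the flux capacitor"

puts message.match?(/^JIRA-\d+/)  # => true  (a line starts with JIRA-)
puts message.match?(/\AJIRA-\d+/) # => false (the message itself does not)
```

Disabling multiline mode with `(?-m)` in an RE2 pattern makes `^` behave like the whole-string anchor in the second example.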
## Prevent pushing secrets to the repository
......
# GitHub Project Integration
GitLab provides an integration for updating pipeline statuses on GitHub. This is especially useful if you use GitLab only for CI/CD.
![Pipeline status update on GitHub](img/github_status_check_pipeline_update.png)
## Configuration
### Complete these steps on GitHub
This integration requires a [GitHub API token](https://github.com/settings/tokens) with `repo:status` access granted:
1. Go to your "Personal access tokens" page at https://github.com/settings/tokens
1. Click "Generate New Token"
1. Ensure that `repo:status` is checked and click "Generate token"
1. Copy the generated token to use on GitLab
### Complete these steps on GitLab
1. Navigate to the project you want to configure.
1. Navigate to the [Integrations page](project_services.md#accessing-the-project-services)
1. Click "GitHub".
1. Select the "Active" checkbox.
1. Paste the token you've generated on GitHub
1. Enter the path to your project on GitHub, such as "https://github.com/your-name/YourProject/"
1. Save or optionally click "Test Settings".
![Configure GitHub Project Integration](img/github_configuration.png)
......@@ -35,6 +35,7 @@ Click on the service links to see further configuration instructions and details
| External Wiki | Replaces the link to the internal wiki with a link to an external wiki |
| Flowdock | Flowdock is a collaboration web app for technical teams |
| Gemnasium | Gemnasium monitors your project dependencies and alerts you about updates and security vulnerabilities |
| [GitHub](github.md) | Sends pipeline notifications to GitHub |
| [HipChat](hipchat.md) | Private group chat and IM |
| [Irker (IRC gateway)](irker.md) | Send IRC messages, on update, to a list of recipients through an Irker gateway |
| [JIRA](jira.md) | JIRA issue tracker |
......
......@@ -329,6 +329,16 @@ Click the button at the top right to toggle focus mode on and off. In focus mode
[Developers and up](../permissions.md) can use all the functionality of the
Issue Board, that is, create/delete lists and drag issues around.
## Group Issue Board
>Introduced in GitLab 10.6
A group issue board is analogous to a project-level issue board, and is accessible from the group
navigation level. It allows you to view all issues from all projects in that group
(it does not currently include issues from projects in subgroups). You can only filter
these boards by group labels. Similarly, when updating an issue's milestone or labels through
the sidebar, only group-level objects are available.
## Tips
A few things to remember:
......
......@@ -63,7 +63,9 @@ For source installations the following settings are nested under `lfs:` and then
|---------|-------------|---------|
| `enabled` | Enable/disable object storage | `false` |
| `remote_directory` | The bucket name where LFS objects will be stored | |
| `direct_upload` | Set to `true` to enable direct upload of LFS objects without needing local shared storage. This option may be removed once we decide to support only a single storage for all files. | `false` |
| `background_upload` | Set to `false` to disable automatic upload. This option may be removed once uploads go directly to S3 | `true` |
| `proxy_download` | Set to `true` to proxy all files served through GitLab. Leaving it `false` reduces egress traffic, because clients download directly from remote storage instead of having GitLab proxy the data | `false` |
| `connection` | Various connection options described below | |
#### S3 compatible connection settings
......
......@@ -40,8 +40,7 @@
createEpic() {
this.creating = true;
this.service.createEpic(this.title)
.then(res => res.json())
.then((data) => {
.then(({ data }) => {
visitUrl(data.web_url);
})
.catch(() => {
......
import Vue from 'vue';
import VueResource from 'vue-resource';
Vue.use(VueResource);
import axios from '~/lib/utils/axios_utils';
export default class NewEpicService {
constructor(endpoint) {
this.endpoint = endpoint;
this.resource = Vue.resource(this.endpoint, {});
}
createEpic(title) {
return this.resource.save({
return axios.post(this.endpoint, {
title,
});
}
......
import Vue from 'vue';
import Dashboard from '~/monitoring/components/dashboard.vue';
export default () => {
const el = document.getElementById('prometheus-graphs');
if (el && el.dataset) {
// eslint-disable-next-line no-new
new Vue({
el,
render(createElement) {
return createElement(Dashboard, {
props: {
...el.dataset,
showLegend: false,
showPanels: false,
forceSmallGraph: true,
},
});
},
});
}
};
import '~/pages/projects/clusters/show';
import initClusterHealth from './cluster_health';
document.addEventListener('DOMContentLoaded', initClusterHealth);
......@@ -4,7 +4,8 @@ module EE
:jenkins_url,
:multiproject_enabled,
:pass_unstable,
:project_name
:project_name,
:repository_url
].freeze
def allowed_service_params
......
module SendFileUpload
def send_upload(file_upload, send_params: {}, redirect_params: {}, attachment: nil)
def send_upload(file_upload, send_params: {}, redirect_params: {}, attachment: nil, disposition: 'attachment')
if attachment
redirect_params[:query] = { "response-content-disposition" => "attachment;filename=#{attachment.inspect}" }
send_params.merge!(filename: attachment, disposition: 'attachment')
redirect_params[:query] = { "response-content-disposition" => "#{disposition};filename=#{attachment.inspect}" }
send_params.merge!(filename: attachment, disposition: disposition)
end
if file_upload.file_storage?
send_file file_upload.path, send_params
elsif file_upload.class.proxy_download_enabled?
Gitlab::Workhorse.send_url(file_upload.url(**redirect_params))
else
redirect_to file_upload.url(**redirect_params)
end
......
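The `disposition:` parameter added to `send_upload` above ends up in the `response-content-disposition` query string that the object store honors on redirect. A standalone sketch of that query-building step (the method name is illustrative):

```ruby
# Builds the query hash that send_upload passes along to the object
# store URL. `attachment.inspect` quotes the filename, so build.zip
# becomes "build.zip" inside the header value.
def content_disposition_query(attachment, disposition: 'attachment')
  { "response-content-disposition" => "#{disposition};filename=#{attachment.inspect}" }
end

content_disposition_query("build.zip")
# => {"response-content-disposition"=>"attachment;filename=\"build.zip\""}
content_disposition_query("avatar.png", disposition: 'inline')
# => {"response-content-disposition"=>"inline;filename=\"avatar.png\""}
```

With `disposition: 'inline'`, browsers render the file (e.g. an image) instead of forcing a download.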
module Geo
class AttachmentRegistryFinder < FileRegistryFinder
def attachments
relation =
if selective_sync?
Upload.where(group_uploads.or(project_uploads).or(other_uploads))
else
Upload.all
end
end
relation.with_files_stored_locally
def local_attachments
attachments.with_files_stored_locally
end
def count_attachments
attachments.count
def count_local_attachments
local_attachments.count
end
def count_synced_attachments
......@@ -49,20 +50,20 @@ module Geo
# Find limited amount of non replicated attachments.
#
# You can pass a list with `except_registry_ids:` so you can exclude items you
# already scheduled but haven't finished and persisted to the database yet
# You can pass a list with `except_file_ids:` so you can exclude items you
# already scheduled but haven't finished and aren't persisted to the database yet
#
# TODO: Alternative here is to use some sort of window function with a cursor instead
# of simply limiting the query and passing a list of items we don't want
#
# @param [Integer] batch_size used to limit the results returned
# @param [Array<Integer>] except_registry_ids ids that will be ignored from the query
def find_unsynced_attachments(batch_size:, except_registry_ids: [])
# @param [Array<Integer>] except_file_ids ids that will be ignored from the query
def find_unsynced_attachments(batch_size:, except_file_ids: [])
relation =
if use_legacy_queries?
legacy_find_unsynced_attachments(except_registry_ids: except_registry_ids)
legacy_find_unsynced_attachments(except_file_ids: except_file_ids)
else
fdw_find_unsynced_attachments(except_registry_ids: except_registry_ids)
fdw_find_unsynced_attachments(except_file_ids: except_file_ids)
end
relation.limit(batch_size)
......@@ -106,31 +107,40 @@ module Geo
#
def fdw_find_synced_attachments
fdw_find_attachments.merge(Geo::FileRegistry.synced)
fdw_find_local_attachments.merge(Geo::FileRegistry.synced)
end
def fdw_find_failed_attachments
fdw_find_attachments.merge(Geo::FileRegistry.failed)
fdw_find_local_attachments.merge(Geo::FileRegistry.failed)
end
def fdw_find_attachments
fdw_table = Geo::Fdw::Upload.table_name
Geo::Fdw::Upload.joins("INNER JOIN file_registry ON file_registry.file_id = #{fdw_table}.id")
def fdw_find_local_attachments
fdw_attachments.joins("INNER JOIN file_registry ON file_registry.file_id = #{fdw_attachments_table}.id")
.with_files_stored_locally
.merge(Geo::FileRegistry.attachments)
end
def fdw_find_unsynced_attachments(except_registry_ids:)
fdw_table = Geo::Fdw::Upload.table_name
def fdw_find_unsynced_attachments(except_file_ids:)
upload_types = Geo::FileService::DEFAULT_OBJECT_TYPES.map { |val| "'#{val}'" }.join(',')
Geo::Fdw::Upload.joins("LEFT OUTER JOIN file_registry
ON file_registry.file_id = #{fdw_table}.id
fdw_attachments.joins("LEFT OUTER JOIN file_registry
ON file_registry.file_id = #{fdw_attachments_table}.id
AND file_registry.file_type IN (#{upload_types})")
.with_files_stored_locally
.where(file_registry: { id: nil })
.where.not(id: except_registry_ids)
.where.not(id: except_file_ids)
end
def fdw_attachments
if selective_sync?
Geo::Fdw::Upload.where(group_uploads.or(project_uploads).or(other_uploads))
else
Geo::Fdw::Upload.all
end
end
def fdw_attachments_table
Geo::Fdw::Upload.table_name
end
#
......@@ -139,7 +149,7 @@ module Geo
def legacy_find_synced_attachments
legacy_inner_join_registry_ids(
attachments,
local_attachments,
Geo::FileRegistry.attachments.synced.pluck(:file_id),
Upload
)
......@@ -147,18 +157,18 @@ module Geo
def legacy_find_failed_attachments
legacy_inner_join_registry_ids(
attachments,
local_attachments,
Geo::FileRegistry.attachments.failed.pluck(:file_id),
Upload
)
end
def legacy_find_unsynced_attachments(except_registry_ids:)
registry_ids = legacy_pluck_registry_ids(file_types: Geo::FileService::DEFAULT_OBJECT_TYPES, except_registry_ids: except_registry_ids)
def legacy_find_unsynced_attachments(except_file_ids:)
registry_file_ids = legacy_pluck_registry_file_ids(file_types: Geo::FileService::DEFAULT_OBJECT_TYPES) | except_file_ids
legacy_left_outer_join_registry_ids(
attachments,
registry_ids,
local_attachments,
registry_file_ids,
Upload
)
end
......
......@@ -6,9 +6,8 @@ module Geo
protected
def legacy_pluck_registry_ids(file_types:, except_registry_ids:)
ids = Geo::FileRegistry.where(file_type: file_types).pluck(:file_id)
(ids + except_registry_ids).uniq
def legacy_pluck_registry_file_ids(file_types:)
Geo::FileRegistry.where(file_type: file_types).pluck(:file_id)
end
end
end
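The callers of `legacy_pluck_registry_file_ids` now combine id lists with `|`, Ruby's Array union, which replaces the old `(ids + except_registry_ids).uniq`:

```ruby
# Array#| concatenates and removes duplicates, so for these id lists
# `a | b` is equivalent to `(a + b).uniq`.
registry_file_ids = [1, 2, 3]
except_file_ids   = [3, 4]

puts (registry_file_ids | except_file_ids).inspect
# => [1, 2, 3, 4]
```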
module Geo
class JobArtifactRegistryFinder < FileRegistryFinder
def count_job_artifacts
job_artifacts.count
local_job_artifacts.count
end
def count_synced_job_artifacts
relation =
if selective_sync?
legacy_find_synced_job_artifacts
if aggregate_pushdown_supported?
find_synced_job_artifacts.count
else
find_synced_job_artifacts_registries
legacy_find_synced_job_artifacts.count
end
relation.count
end
def count_failed_job_artifacts
relation =
if selective_sync?
legacy_find_failed_job_artifacts
if aggregate_pushdown_supported?
find_failed_job_artifacts.count
else
find_failed_job_artifacts_registries
legacy_find_failed_job_artifacts.count
end
relation.count
end
# Find limited amount of non replicated lfs objects.
#
# You can pass a list with `except_registry_ids:` so you can exclude items you
# already scheduled but haven't finished and persisted to the database yet
# You can pass a list with `except_file_ids:` so you can exclude items you
# already scheduled but haven't finished and aren't persisted to the database yet
#
# TODO: Alternative here is to use some sort of window function with a cursor instead
# of simply limiting the query and passing a list of items we don't want
#
# @param [Integer] batch_size used to limit the results returned
# @param [Array<Integer>] except_registry_ids ids that will be ignored from the query
def find_unsynced_job_artifacts(batch_size:, except_registry_ids: [])
# @param [Array<Integer>] except_file_ids ids that will be ignored from the query
def find_unsynced_job_artifacts(batch_size:, except_file_ids: [])
relation =
if use_legacy_queries?
legacy_find_unsynced_job_artifacts(except_registry_ids: except_registry_ids)
legacy_find_unsynced_job_artifacts(except_file_ids: except_file_ids)
else
fdw_find_unsynced_job_artifacts(except_registry_ids: except_registry_ids)
fdw_find_unsynced_job_artifacts(except_file_ids: except_file_ids)
end
relation.limit(batch_size)
end
def job_artifacts
relation =
if selective_sync?
Ci::JobArtifact.joins(:project).where(projects: { id: current_node.projects })
else
Ci::JobArtifact.all
end
end
relation.with_files_stored_locally
def local_job_artifacts
job_artifacts.with_files_stored_locally
end
private
def find_synced_job_artifacts_registries
Geo::FileRegistry.job_artifacts.synced
def find_synced_job_artifacts
if use_legacy_queries?
legacy_find_synced_job_artifacts
else
fdw_find_job_artifacts.merge(Geo::FileRegistry.synced)
end
end
def find_failed_job_artifacts_registries
Geo::FileRegistry.job_artifacts.failed
def find_failed_job_artifacts
if use_legacy_queries?
legacy_find_failed_job_artifacts
else
fdw_find_job_artifacts.merge(Geo::FileRegistry.failed)
end
end
#
# FDW accessors
#
def fdw_find_unsynced_job_artifacts(except_registry_ids:)
fdw_table = Geo::Fdw::Ci::JobArtifact.table_name
def fdw_find_job_artifacts
fdw_job_artifacts.joins("INNER JOIN file_registry ON file_registry.file_id = #{fdw_jobs_artifacts_table}.id")
.with_files_stored_locally
.merge(Geo::FileRegistry.job_artifacts)
end
Geo::Fdw::Ci::JobArtifact.joins("LEFT OUTER JOIN file_registry
ON file_registry.file_id = #{fdw_table}.id
def fdw_find_unsynced_job_artifacts(except_file_ids:)
fdw_job_artifacts.joins("LEFT OUTER JOIN file_registry
ON file_registry.file_id = #{fdw_job_artifacts_table}.id
AND file_registry.file_type = 'job_artifact'")
.with_files_stored_locally
.where(file_registry: { id: nil })
.where.not(id: except_registry_ids)
.where.not(id: except_file_ids)
end
def fdw_job_artifacts
if selective_sync?
Geo::Fdw::Ci::JobArtifact.joins(:project).where(projects: { id: current_node.projects })
else
Geo::Fdw::Ci::JobArtifact.all
end
end
def fdw_job_artifacts_table
Geo::Fdw::Ci::JobArtifact.table_name
end
#
......@@ -89,26 +108,26 @@ module Geo
def legacy_find_synced_job_artifacts
  legacy_inner_join_registry_ids(
    local_job_artifacts,
    Geo::FileRegistry.job_artifacts.synced.pluck(:file_id),
    Ci::JobArtifact
  )
end

def legacy_find_failed_job_artifacts
  legacy_inner_join_registry_ids(
    local_job_artifacts,
    Geo::FileRegistry.job_artifacts.failed.pluck(:file_id),
    Ci::JobArtifact
  )
end

def legacy_find_unsynced_job_artifacts(except_file_ids:)
  registry_file_ids = legacy_pluck_registry_file_ids(file_types: :job_artifact) | except_file_ids

  legacy_left_outer_join_registry_ids(
    local_job_artifacts,
    registry_file_ids,
    Ci::JobArtifact
  )
end
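The legacy (non-FDW) path in `legacy_find_unsynced_job_artifacts` builds the exclusion list in Ruby: file ids already in the registry, unioned with ids the caller has scheduled but not yet persisted. Ruby's `Array#|` deduplicates, so overlap between the two lists is harmless. A small sketch with illustrative ids:

```ruby
# Union of persisted registry ids and not-yet-persisted scheduled ids.
# Array#| removes duplicates, so id 12 appears only once in the result.
registry_file_ids = [10, 11, 12] # as if plucked from the registry table
except_file_ids   = [12, 13]     # scheduled this run, not yet written back

ids_to_exclude = registry_file_ids | except_file_ids
puts ids_to_exclude.inspect # => [10, 11, 12, 13]
```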
......
module Geo
  class LfsObjectRegistryFinder < FileRegistryFinder
    def count_lfs_objects
      local_lfs_objects.count
    end

    def count_synced_lfs_objects
      if aggregate_pushdown_supported?
        find_synced_lfs_objects.count
      else
        legacy_find_synced_lfs_objects.count
      end
    end

    def count_failed_lfs_objects
      if aggregate_pushdown_supported?
        find_failed_lfs_objects.count
      else
        legacy_find_failed_lfs_objects.count
      end
    end
    # Find a limited number of LFS objects that have not yet been replicated.
    #
    # You can pass a list with `except_file_ids:` to exclude items you have
    # already scheduled but that haven't finished and aren't persisted to the
    # database yet.
    #
    # TODO: An alternative here is to use some sort of window function with a
    # cursor instead of simply limiting the query and passing a list of items
    # we don't want.
    #
    # @param [Integer] batch_size used to limit the results returned
    # @param [Array<Integer>] except_file_ids ids that will be ignored from the query
    def find_unsynced_lfs_objects(batch_size:, except_file_ids: [])
      relation =
        if use_legacy_queries?
          legacy_find_unsynced_lfs_objects(except_file_ids: except_file_ids)
        else
          fdw_find_unsynced_lfs_objects(except_file_ids: except_file_ids)
        end

      relation.limit(batch_size)
    end
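The `except_file_ids:` parameter exists so a caller can loop over batches without re-selecting objects it scheduled moments ago, before the registry rows land. A hedged sketch of such a caller; `find_unsynced` is a stand-in for the real finder (which returns an ActiveRecord relation), and the ids are hypothetical:

```ruby
# Stand-in for the finder: a fixed pool of unsynced ids, filtered and limited
# the same way the real query is (exclusion list, then batch_size limit).
def find_unsynced(batch_size:, except_file_ids: [])
  all_unsynced = [5, 6, 7, 8, 9] # hypothetical unsynced object ids
  (all_unsynced - except_file_ids).first(batch_size)
end

scheduled = []
2.times do
  batch = find_unsynced(batch_size: 2, except_file_ids: scheduled)
  scheduled.concat(batch) # schedule downloads; registry rows appear later
end
puts scheduled.inspect # => [5, 6, 7, 8]
```

Without the exclusion list, the second pass would return `[5, 6]` again and the same objects would be scheduled twice.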
    def lfs_objects
      if selective_sync?
        LfsObject.joins(:projects).where(projects: { id: current_node.projects })
      else
        LfsObject.all
      end
    end

    def local_lfs_objects
      lfs_objects.with_files_stored_locally
    end
    private

    def find_synced_lfs_objects
      if use_legacy_queries?
        legacy_find_synced_lfs_objects
      else
        fdw_find_lfs_objects.merge(Geo::FileRegistry.synced)
      end
    end

    def find_failed_lfs_objects
      if use_legacy_queries?
        legacy_find_failed_lfs_objects
      else
        fdw_find_lfs_objects.merge(Geo::FileRegistry.failed)
      end
    end
    #
    # FDW accessors
    #

    def fdw_find_lfs_objects
      fdw_lfs_objects.joins("INNER JOIN file_registry ON file_registry.file_id = #{fdw_lfs_objects_table}.id")
        .with_files_stored_locally
        .merge(Geo::FileRegistry.lfs_objects)
    end

    def fdw_find_unsynced_lfs_objects(except_file_ids:)
      fdw_lfs_objects.joins("LEFT OUTER JOIN file_registry
                               ON file_registry.file_id = #{fdw_lfs_objects_table}.id
                              AND file_registry.file_type = 'lfs'")
        .with_files_stored_locally
        .where(file_registry: { id: nil })
        .where.not(id: except_file_ids)
    end

    def fdw_lfs_objects
      if selective_sync?
        Geo::Fdw::LfsObject.joins(:project).where(projects: { id: current_node.projects })
      else
        Geo::Fdw::LfsObject.all
      end
    end

    def fdw_lfs_objects_table
      Geo::Fdw::LfsObject.table_name
    end
@@ -90,26 +108,26 @@ module Geo
    def legacy_find_synced_lfs_objects
      legacy_inner_join_registry_ids(
        local_lfs_objects,
        Geo::FileRegistry.lfs_objects.synced.pluck(:file_id),
        LfsObject
      )
    end

    def legacy_find_failed_lfs_objects
      legacy_inner_join_registry_ids(
        local_lfs_objects,
        Geo::FileRegistry.lfs_objects.failed.pluck(:file_id),
        LfsObject
      )
    end

    def legacy_find_unsynced_lfs_objects(except_file_ids:)
      registry_file_ids = legacy_pluck_registry_file_ids(file_types: :lfs) | except_file_ids

      legacy_left_outer_join_registry_ids(
        local_lfs_objects,
        registry_file_ids,
        LfsObject
      )
    end
......
@@ -18,7 +18,7 @@ module AuditLogsHelper
def admin_project_dropdown_label(default_label)
if @entity
@entity.full_name
else
default_label
end
......
module Ci
class PipelineChatData < ActiveRecord::Base
self.table_name = 'ci_pipeline_chat_data'
belongs_to :chat_name
validates :pipeline_id, presence: true
validates :chat_name_id, presence: true
validates :response_url, presence: true
end
end
module EE
module Ci
module Pipeline
extend ActiveSupport::Concern
EE_FAILURE_REASONS = {
activity_limit_exceeded: 20,
size_limit_exceeded: 21
}.freeze
included do
has_one :chat_data, class_name: 'Ci::PipelineChatData'
end
def predefined_variables
result = super
result << { key: 'CI_PIPELINE_SOURCE', value: source.to_s, public: true }
......
@@ -12,6 +12,7 @@ module EE
after_destroy :log_geo_event
scope :with_files_stored_locally, -> { where(file_store: [nil, LfsObjectUploader::Store::LOCAL]) }
scope :with_files_stored_remotely, -> { where(file_store: ObjectStorage::Store::REMOTE) }
end
def local_store?
......
@@ -29,6 +29,7 @@ module EE
has_one :index_status
has_one :jenkins_service
has_one :jenkins_deprecated_service
has_one :github_service
has_many :approvers, as: :target, dependent: :destroy # rubocop:disable Cop/ActiveRecordDependent
has_many :approver_groups, as: :target, dependent: :destroy # rubocop:disable Cop/ActiveRecordDependent
@@ -462,6 +463,10 @@ module EE
disabled_services.push('jenkins', 'jenkins_deprecated')
end
unless feature_available?(:github_project_service_integration)
disabled_services.push('github')
end
disabled_services
end
end
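The `disabled_services` hunk above follows a simple pattern: push a service's name onto the list whenever its licensed feature is missing. A self-contained sketch of that pattern; `feature_available?` is replaced here by a plain list membership check, and the `:jenkins_integration` feature name is an assumption for illustration (only `:github_project_service_integration` appears in the hunk):

```ruby
# Accumulate disabled service names from missing licensed features.
# available_features stands in for the license check; feature names other
# than :github_project_service_integration are hypothetical.
def disabled_services(available_features)
  disabled = []
  disabled.push('jenkins', 'jenkins_deprecated') unless available_features.include?(:jenkins_integration)
  disabled.push('github') unless available_features.include?(:github_project_service_integration)
  disabled
end

puts disabled_services([:github_project_service_integration]).inspect
# => ["jenkins", "jenkins_deprecated"]
```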
......
module EE
module Service
extend ActiveSupport::Concern
module ClassMethods
extend ::Gitlab::Utils::Override
override :available_services_names
def available_services_names
ee_service_names = %w[
github
jenkins
jenkins_deprecated
]
(super + ee_service_names).sort_by(&:downcase)
end
end
end
end
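`available_services_names` appends the EE-only service names to the CE list and re-sorts case-insensitively via `sort_by(&:downcase)`. A self-contained sketch (the CE names here are illustrative, chosen to show why a plain `sort` would misplace capitalized entries):

```ruby
# Case-insensitive merge-and-sort, as in available_services_names.
# A plain .sort would put "Bamboo" before "asana" (ASCII order).
ce_service_names = %w[asana Bamboo emails_on_push] # illustrative CE list
ee_service_names = %w[github jenkins jenkins_deprecated]

all_names = (ce_service_names + ee_service_names).sort_by(&:downcase)
puts all_names.inspect
# => ["asana", "Bamboo", "emails_on_push", "github", "jenkins", "jenkins_deprecated"]
```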
module EE
module SlackSlashCommandsService
def chat_responder
::Gitlab::Chat::Responder::Slack
end
end
end
@@ -4,6 +4,7 @@ module Geo
self.table_name = Gitlab::Geo::Fdw.table('lfs_objects')
scope :with_files_stored_locally, -> { where(file_store: [nil, LfsObjectUploader::Store::LOCAL]) }
scope :with_files_stored_remotely, -> { where(file_store: LfsObjectUploader::Store::REMOTE) }
end
end
end
@@ -4,6 +4,7 @@ module Geo
self.table_name = Gitlab::Geo::Fdw.table('uploads')
scope :with_files_stored_locally, -> { where(store: [nil, ObjectStorage::Store::LOCAL]) }
scope :with_files_stored_remotely, -> { where(store: ObjectStorage::Store::REMOTE) }
end
end
end
@@ -5,4 +5,5 @@ class Geo::FileRegistry < Geo::BaseRegistry
scope :lfs_objects, -> { where(file_type: :lfs) }
scope :job_artifacts, -> { where(file_type: :job_artifact) }
scope :attachments, -> { where(file_type: Geo::FileService::DEFAULT_OBJECT_TYPES) }
scope :stored_locally, -> { where(store: [nil, ObjectStorage::Store::LOCAL]) }
end
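These scopes compose into a single WHERE clause when chained, e.g. `Geo::FileRegistry.job_artifacts.synced`. A plain-Ruby emulation of that composition using lambdas over hashes (no ActiveRecord; rows, the `success` column, and the `LOCAL` store value of `1` are all assumptions for illustration):

```ruby
# Emulate chained scopes as AND-ed predicates over illustrative rows.
rows = [
  { file_type: 'job_artifact', success: true,  store: nil }, # matches all three
  { file_type: 'job_artifact', success: false, store: 1 },   # not synced
  { file_type: 'lfs',          success: true,  store: nil }  # wrong file_type
]

job_artifacts  = ->(r) { r[:file_type] == 'job_artifact' }
synced         = ->(r) { r[:success] }                     # assumed success column
stored_locally = ->(r) { r[:store].nil? || r[:store] == 1 } # LOCAL == 1 assumed

result = rows.select { |r| job_artifacts.call(r) && synced.call(r) && stored_locally.call(r) }
puts result.size # => 1
```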
@@ -105,7 +105,7 @@ class GeoNodeStatus < ActiveRecord::Base
self.wikis_count = projects_finder.count_wikis
self.lfs_objects_count = lfs_objects_finder.count_lfs_objects
self.job_artifacts_count = job_artifacts_finder.count_job_artifacts
self.attachments_count = attachments_finder.count_local_attachments
self.last_successful_status_check_at = Time.now
self.storage_shards = StorageShard.all
......
@@ -42,6 +42,7 @@ class License < ActiveRecord::Base
extended_audit_events
file_locks
geo
github_project_service_integration
jira_dev_panel_integration
ldap_group_sync_filter
multiple_clusters
@@ -60,9 +61,11 @@ class License < ActiveRecord::Base
EEU_FEATURES = EEP_FEATURES + %i[
sast
sast_container
cluster_health
dast
epics
ide
chatops
].freeze
# List all features available for early adopters,
@@ -318,6 +321,7 @@ class License < ActiveRecord::Base
def reset_current
self.class.reset_current
Gitlab::Chat.flush_available_cache
end
def reset_license
......