Commit ac062237 authored by GitLab Bot's avatar GitLab Bot

Add latest changes from gitlab-org/gitlab@master

parent 0dfbcd8f
......@@ -7,6 +7,8 @@ module Noteable
# avoiding n+1 queries and improving performance.
NoteableMeta = Struct.new(:user_notes_count)
MAX_NOTES_LIMIT = 5_000
class_methods do
# `Noteable` class names that support replying to individual notes.
def replyable_types
......
......@@ -104,6 +104,8 @@ class Note < ApplicationRecord
end
end
validate :does_not_exceed_notes_limit?, on: :create, unless: [:system?, :importing?]
# @deprecated attachments are handled by the MarkdownUploader
mount_uploader :attachment, AttachmentUploader
......@@ -525,6 +527,12 @@ class Note < ApplicationRecord
system_note_metadata&.cross_reference_types&.include?(system_note_metadata&.action)
end
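# Runs on create only and is skipped for system notes and imports (see the
# `validate` call above): rejects a new note once its noteable already has
# Noteable::MAX_NOTES_LIMIT notes.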
def does_not_exceed_notes_limit?
return unless noteable
errors.add(:base, _('Maximum number of comments exceeded')) if noteable.notes.count >= Noteable::MAX_NOTES_LIMIT
end
end
Note.prepend_if_ee('EE::Note')
---
title: Limit the number of comments on an issue, MR, or commit
merge_request: 18111
author:
type: added
......@@ -211,7 +211,11 @@ environment that supports about 10,000 users. The specifications below are a
representation of the work so far. The specifications may be adjusted in the
future based on additional testing and iteration.
NOTE: **Note:** The specifications here were performance tested against a
specific coded workload. Your exact needs may be greater, depending on your
workload. Your workload is influenced by factors such as, but not limited to,
how active your users are, how much automation you use, mirroring, and
repo/change size.
- 3 PostgreSQL - 4 CPU, 16GiB memory per node
- 1 PgBouncer - 2 CPU, 4GiB memory
......@@ -229,13 +233,28 @@ NOTE: **Note:** The specifications here were performance tested against a specif
- **Status:** Work-in-progress
- **Related Issue:** See the [related issue](https://gitlab.com/gitlab-org/quality/performance/issues/57) for more information.
The Support and Quality teams are in the process of building and performance
testing an environment that will support around 25,000 users. The specifications
below are a work-in-progress representation of the work so far. The Quality team
will be certifying this environment in late 2019. The specifications may be
adjusted prior to certification based on performance testing.
| Service | Configuration | GCP type |
| ------------------------------|-------------------------|----------------|
| 7 GitLab Rails <br> - Puma workers on each node set to 90% of available CPUs with 16 threads | 32 vCPU, 28.8GB Memory | n1-highcpu-32 |
| 3 PostgreSQL | 8 vCPU, 30GB Memory | n1-standard-8 |
| 1 PgBouncer | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| 2 Gitaly <br> - Gitaly Ruby workers on each node set to 90% of available CPUs with 16 threads | 32 vCPU, 120GB Memory | n1-standard-32 |
| 3 Redis Cache + Sentinel <br> - Cache maxmemory set to 90% of available memory | 4 vCPU, 15GB Memory | n1-standard-4 |
| 3 Redis Persistent + Sentinel | 4 vCPU, 15GB Memory | n1-standard-4 |
| 4 Sidekiq | 4 vCPU, 15GB Memory | n1-standard-4 |
| 3 Consul | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| 1 NFS Server | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
| 1 Monitoring node             | 4 vCPU, 3.6GB Memory    | n1-highcpu-4   |
| 1 Load Balancing node | 2 vCPU, 1.8GB Memory | n1-highcpu-2 |
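For illustration, a minimal `gitlab.rb` sketch of the Puma sizing noted in the
table above, assuming the standard Omnibus Puma settings; the values are
derived from the table (90% of 32 vCPUs, 16 threads per worker) and are not
prescriptive:

```ruby
# Illustrative sizing for one 32 vCPU GitLab Rails node from the table above.
# 90% of 32 vCPUs is roughly 28 Puma workers, each running 16 threads.
puma['worker_processes'] = 28
puma['min_threads'] = 16
puma['max_threads'] = 16
```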
NOTE: **Note:** At this time, HAProxy is the only tested and recommended load
balancer. We may test and add additional options to this list in time.
### 50,000 User Configuration
......@@ -244,10 +263,10 @@ TBD: Add specs
- **Status:** Work-in-progress
- **Related Issue:** See the [related issue](https://gitlab.com/gitlab-org/quality/performance/issues/66) for more information.
The Support and Quality teams are in the process of building and performance
testing an environment that will support around 50,000 users. The specifications
below are a work-in-progress representation of the work so far. The Quality team
will be certifying this environment in late 2019. The specifications may be
adjusted prior to certification based on performance testing.
TBD: Add specs
......@@ -106,13 +106,15 @@ end
Using `any_instance` to stub a method (elasticsearch_indexing) that has been defined on a prepended module (EE::ApplicationSetting) is not supported.
```
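For context, a minimal sketch of the arrangement that triggers this error,
using simplified stand-ins for the real `ApplicationSetting` and
`EE::ApplicationSetting`:

```ruby
# Simplified stand-ins only. Because the method is defined on a prepended
# module, it sits ahead of the class in the ancestor chain, and RSpec's
# *_any_instance_of helpers raise the error shown above instead of stubbing it.
module EE
  module ApplicationSetting
    def elasticsearch_indexing
      true
    end
  end
end

class ApplicationSetting
  prepend EE::ApplicationSetting
end
```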
### Alternative: `expect_next_instance_of` or `allow_next_instance_of`
Instead of writing:
```ruby
# Don't do this:
expect_any_instance_of(Project).to receive(:add_import_job)
allow_any_instance_of(Project).to receive(:add_import_job)
```
We could write:
......@@ -122,10 +124,14 @@ We could write:
expect_next_instance_of(Project) do |project|
expect(project).to receive(:add_import_job)
end
allow_next_instance_of(Project) do |project|
allow(project).to receive(:add_import_job)
end
```
If we also want to initialize the instance with some particular arguments, we
could also pass it like:
```ruby
# Do this:
......
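As a minimal sketch of that pattern, assuming a hypothetical `Project` built
with `import_params`:

```ruby
# Hypothetical example: also constrain the arguments the next Project is
# initialized with.
expect_next_instance_of(Project, import_params) do |project|
  expect(project).to receive(:add_import_job)
end
```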
......@@ -24,6 +24,9 @@ You can also reply to a comment notification email to reply to the comment if
creates another standard comment. Replying to a threaded comment creates a reply in the thread. Email replies support
[Markdown] and [quick actions], just as if you replied from the web.
NOTE: **Note:**
There is a limit of 5,000 comments for every object.
## Resolvable comments and threads
> **Notes:**
......
......@@ -9941,6 +9941,9 @@ msgstr ""
msgid "Maximum job timeout has a value which could not be accepted"
msgstr ""
msgid "Maximum number of comments exceeded"
msgstr ""
msgid "Maximum number of mirrors that can be synchronizing at the same time."
msgstr ""
......
......@@ -70,6 +70,37 @@ describe Note do
is_expected.to be_valid
end
end
describe 'max notes limit' do
let_it_be(:noteable) { create(:issue) }
let_it_be(:existing_note) { create(:note, project: noteable.project, noteable: noteable) }
before do
stub_const('Noteable::MAX_NOTES_LIMIT', 1)
end
context 'when creating a system note' do
subject { build(:system_note, project: noteable.project, noteable: noteable) }
it { is_expected.to be_valid }
end
context 'when creating a user note' do
subject { build(:note, project: noteable.project, noteable: noteable) }
it { is_expected.not_to be_valid }
end
context 'when updating an existing note on a noteable that already exceeds the limit' do
subject { existing_note }
before do
create(:system_note, project: noteable.project, noteable: noteable)
end
it { is_expected.to be_valid }
end
end
end
describe "Commit notes" do
......
......@@ -90,7 +90,7 @@ RSpec.configure do |config|
config.include StubFeatureFlags
config.include StubGitlabCalls
config.include StubGitlabData
config.include ExpectNextInstanceOf
config.include NextInstanceOf
config.include TestEnv
config.include Devise::Test::ControllerHelpers, type: :controller
config.include Devise::Test::IntegrationHelpers, type: :feature
......
# frozen_string_literal: true
module ExpectNextInstanceOf
def expect_next_instance_of(klass, *new_args)
receive_new = receive(:new)
receive_new.with(*new_args) if new_args.any?
expect(klass).to receive_new
.and_wrap_original do |method, *original_args|
method.call(*original_args).tap do |instance|
yield(instance)
end
end
end
end
# frozen_string_literal: true
module NextInstanceOf
def expect_next_instance_of(klass, *new_args)
stub_new(expect(klass), *new_args) do |expectation|
yield(expectation)
end
end
def allow_next_instance_of(klass, *new_args)
stub_new(allow(klass), *new_args) do |allowance|
yield(allowance)
end
end
private
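# Wraps `klass.new` so that the caller's block can set stubs or expectations
# on the instance the code under test actually creates. When `new_args` are
# given, the underlying `receive(:new)` expectation also constrains the
# arguments passed to `.new`.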
def stub_new(target, *new_args)
receive_new = receive(:new)
receive_new.with(*new_args) if new_args.any?
target.to receive_new.and_wrap_original do |method, *original_args|
method.call(*original_args).tap do |instance|
yield(instance)
end
end
end
end