Consider the following when choosing a pagination strategy:
1. It is very inefficient to calculate the number of objects that pass the filtering.
This operation can usually take seconds and can time out.
1. It is very inefficient to get entries for a page at higher ordinals, like 1000.
The database has to sort and iterate through all previous items, and the cost of this
operation usually grows with the page number, putting significant load on the database.
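An alternative that avoids both problems is keyset (cursor-based) pagination: each page is fetched relative to the last seen value rather than by offset. A minimal sketch, using an in-memory array to stand in for an indexed table (in SQL this corresponds to `WHERE id > :cursor ORDER BY id LIMIT :per_page`, which an index can answer without scanning all previous rows):

```ruby
# Keyset (cursor-based) pagination sketch over in-memory records.
# The in-memory `select` stands in for an indexed `WHERE id > :cursor` clause.
RECORDS = (1..10_000).map { |i| { id: i, name: "pipeline-#{i}" } }

def keyset_page(records, after_id:, per_page: 20)
  # Only records past the cursor are returned; no offset scan is needed.
  page = records.select { |r| r[:id] > after_id }.first(per_page)
  next_cursor = page.last && page.last[:id]
  [page, next_cursor]
end

page, cursor = keyset_page(RECORDS, after_id: 0)
# The next page starts from the cursor, so the cost does not depend on
# how deep into the result set we are.
page2, _ = keyset_page(RECORDS, after_id: cursor)
```

The tradeoff is that keyset pagination cannot jump to an arbitrary page number, which is usually acceptable for infinite-scroll style lists.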
## Badge counters
Counters should always be truncated. It means that we do not want to present
the exact number over some threshold. The reason is that to calculate the exact
number of items, we effectively need to filter each of them just to know the
exact number of matching items.
From a UX perspective it is often acceptable to see that you have 1000+ pipelines,
instead of seeing that you have 40000+ pipelines, at the tradeoff of the page loading 2 seconds longer.
An example of this pattern is the list of pipelines and jobs. We truncate numbers to `1000+`,
but we show an accurate number of running pipelines, which is the most interesting information.
There's a helper method that can be used for this purpose - `NumbersHelper.limited_counter_with_delimiter` -
that accepts an upper limit for counting rows.
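A minimal sketch of how such a limited counter can work (the method below is illustrative, not the actual implementation of `NumbersHelper.limited_counter_with_delimiter`): count at most `limit + 1` rows, which is enough to know whether the real count exceeds the threshold without counting everything:

```ruby
# Hypothetical truncated ("limited") counter. Instead of counting every
# matching row, count at most `limit + 1` rows: one extra row is enough
# to prove the true count exceeds the limit.
def limited_count(relation, limit: 1000)
  # In ActiveRecord this would be `relation.limit(limit + 1).count`;
  # here `relation` is any Enumerable standing in for a query.
  counted = relation.first(limit + 1).count
  counted > limit ? "#{limit}+" : counted.to_s
end

limited_count((1..40_000), limit: 1000) # => "1000+"
limited_count((1..42), limit: 1000)     # => "42"
```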
In some cases it is desired that badge counters are loaded asynchronously.
This can speed up the initial page load and give a better user experience overall.
## Application/misuse limits
Every new feature should have safe usage quotas introduced.
The quota should be optimised to a level that we consider the feature to
be performant and usable for the user, but **not limiting**.
**We want the features to be fully usable for the users.**
**However, we want to ensure that the feature will continue to perform well if used at its limit**
**and it will not cause availability issues.**
The intent is to provide a safe usage pattern for the feature,
as our implementation decisions are optimised for the given data set.
Our feature limits should reflect the optimisations that we introduced.
...
...
The intent of quotas could be different:
1. We want to provide higher quotas for higher tiers of features:
we want to provide more capabilities for different tiers on GitLab.com,
1. We want to prevent misuse of the feature: someone accidentally creates
10000 deploy tokens because of a broken API script,
1. We want to prevent abuse of the feature: someone purposely creates
10000 pipelines to take advantage of the system.
Consider that it is always better to start with some kind of limitation,
instead of later introducing a breaking change that would result in some
workflows breaking.
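A minimal sketch of enforcing such a quota at write time (the names and numbers below are illustrative, not GitLab's actual implementation). Checking the limit before creating a new record keeps enforcement cheap, and the limit can be raised later without a breaking change:

```ruby
# Hypothetical write-time quota check. Existing data keeps working even if
# it already exceeds the quota; only new creations are rejected.
MAX_SCHEDULES_PER_PROJECT = 50

QuotaExceededError = Class.new(StandardError)

def create_schedule!(schedules, description:)
  if schedules.size >= MAX_SCHEDULES_PER_PROJECT
    raise QuotaExceededError,
          "Maximum number of schedules (#{MAX_SCHEDULES_PER_PROJECT}) reached"
  end
  schedules << { description: description }
  schedules.last
end
```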
Examples:
...
...
more than 50 schedules.
In such cases it is rather expected that this is either misuse
or abuse of the feature. Lack of an upper limit can result
in service degradation as the system will try to process all schedules
assigned to the project.
1. GitLab CI includes: We started with a limit of a maximum of 50 nested includes.
We understood that the performance of the feature was acceptable at that level.
We received a request from the community that the limit is too small.
We had time to understand the customer requirement, and implemented an additional
fail-safe mechanism (a time-based one) to increase the limit to 100, and if needed increase it
further without a negative impact on the availability of the feature and GitLab.
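The combination of a count limit with a time-based fail-safe could be sketched like this (a hypothetical illustration, not the actual GitLab CI implementation). Whichever budget is exhausted first stops the expansion:

```ruby
# Hypothetical expansion loop guarded by both a count limit and a time budget.
MAX_INCLUDES = 100
TIME_BUDGET_SECONDS = 5.0

def expand_includes(queue)
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + TIME_BUDGET_SECONDS
  expanded = []

  until queue.empty?
    # Count-based limit: protects against deeply or widely nested configs.
    raise "Maximum of #{MAX_INCLUDES} includes exceeded" if expanded.size >= MAX_INCLUDES
    # Time-based fail-safe: protects availability even if the count limit
    # is raised later.
    if Process.clock_gettime(Process::CLOCK_MONOTONIC) > deadline
      raise "Include expansion exceeded #{TIME_BUDGET_SECONDS}s time budget"
    end

    expanded << queue.shift
    # A real implementation would push the item's own nested includes
    # onto the queue here.
  end

  expanded
end
```

The time-based guard is what allows raising the count limit safely: even a pathological configuration cannot consume more than the fixed time budget.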
...
...
should come with a feature flag to disable it.
A feature flag makes our team happier, because they can monitor the system and
react quickly without our users noticing the problem.
Known performance deficiencies should be addressed right away after we merge the initial
changes.
Read more about when and how feature flags should be used in
[Feature flags in GitLab development](https://docs.gitlab.com/ee/development/feature_flags/process.html#feature-flags-in-gitlab-development).