@@ -272,7 +272,11 @@ To configure Vale in your editor, install one of the following as appropriate:
In the extension's settings:
- Select the **Use CLI** checkbox.
- In the <!-- vale gitlab.Spelling = NO -->**Config** setting, enter an absolute
path to [`.vale.ini`](https://gitlab.com/gitlab-org/gitlab/blob/master/.vale.ini)
in one of the cloned GitLab repositories on your computer.
<!-- vale gitlab.Spelling = YES -->
- In the **Path** setting, enter the absolute path to the Vale binary. In most
cases, `vale` should work. To find the location, run `which vale` in a terminal.
@@ -73,7 +73,7 @@ we simply follow the path we take to serve any ordinary upload.
### Workhorse
Assuming Rails decided the request is valid, Workhorse will take over. Upon receiving the `send-scaled-image`
instruction through the Rails response, a [special response injector](https://gitlab.com/gitlab-org/gitlab-workhorse/-/blob/master/internal/imageresizer/image_resizer.go)
will be invoked that knows how to rescale images. The only inputs it requires are the location of the image
(a path if the image resides in block storage, or a URL to remote storage otherwise) and the desired width.
Workhorse will handle the location transparently so Rails does not need to be concerned with where the image
@@ -140,7 +140,7 @@ Even though this approach would make aggregating much easier, it has some major
- We'd have to migrate **all namespaces** by adding and filling a new column. Because of the size of the table, the time and cost involved would be significant. The background migration would take approximately `153h`, see <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/29772>.
- The background migration has to be shipped one release earlier, delaying the functionality by another milestone.
### Attempt E (final): Update the namespace storage statistics asynchronously
This approach continues to use the incremental statistics updates we already have,
but refreshes them through Sidekiq jobs and in different transactions:
...
...
@@ -149,7 +149,7 @@ but we refresh them through Sidekiq jobs and in different transactions:
1. Whenever the statistics of a project change, insert a row into `namespace_aggregation_schedules`
- We don't insert a new row if there's already one related to the root namespace.
- Keeping in mind the length of the transaction that involves updating `project_statistics` (<https://gitlab.com/gitlab-org/gitlab/-/issues/29070>), the insertion should be done in a different transaction and through a Sidekiq job.
1. After inserting the row, we schedule another worker to be executed asynchronously at two different moments:
- One is enqueued for immediate execution and another one is scheduled to run in `1.5h`.
- We only schedule the jobs if we can obtain a `1.5h` lease on Redis, on a key based on the root namespace ID.
- If we can't obtain the lease, another aggregation is already in progress or is scheduled to run within `1.5h`.
...
...
@@ -161,7 +161,7 @@ but we refresh them through Sidekiq jobs and in different transactions:
This implementation has the following benefits:
- All the updates are done asynchronously, so we're not increasing the length of the transactions for `project_statistics`.
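
To make the flow above concrete, here is a simplified Ruby sketch. It is an illustration only, not the GitLab implementation: the worker and model names (`ScheduleNamespaceAggregationWorker`, `RootStatisticsWorker`, `NamespaceAggregationSchedule`) are stand-ins, and the `1.5h` lease is modelled with a plain Redis `SET NX`/`EX` instead of GitLab's own lease helpers.

```ruby
require 'sidekiq'
require 'redis'

# Simplified sketch of the scheduling described above.
class ScheduleNamespaceAggregationWorker
  include Sidekiq::Worker

  LEASE_TIMEOUT = (1.5 * 60 * 60).to_i # the 1.5h window described above

  def perform(root_namespace_id)
    redis = Redis.new

    # The lease: proceed only if no aggregation is already in progress or
    # scheduled for this root namespace within the next 1.5h.
    return unless redis.set("namespace_aggregation:#{root_namespace_id}", 1,
                            nx: true, ex: LEASE_TIMEOUT)

    # The schedule row is inserted in its own transaction, outside the
    # transaction that updates project_statistics.
    NamespaceAggregationSchedule.find_or_create_by!(namespace_id: root_namespace_id)

    # One refresh runs immediately, a second one after the lease window expires.
    RootStatisticsWorker.perform_async(root_namespace_id)
    RootStatisticsWorker.perform_in(LEASE_TIMEOUT, root_namespace_id)
  end
end
```

Because both the row insertion and the refresh happen in Sidekiq, a slow aggregation never extends the `project_statistics` transaction itself.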
Memory fragmentation could be reduced by tuning GC parameters [as described in this post](https://www.speedshop.co/2017/12/04/malloc-doubles-ruby-memory.html). This should be considered a tradeoff, as it may affect the overall performance of memory allocation and GC cycles.
@@ -25,7 +25,7 @@ When you are optimizing your SQL queries, there are two dimensions to pay attent
- When analyzing your query's performance, pay attention to whether the time you are seeing is on a [cold or warm cache](#cold-and-warm-cache). These guidelines apply to both cache types.
- When working with batched queries, change the range and batch size to see how they affect the query timing and caching (see the sketch after this list).
- If an existing query is not performing well, make an effort to improve it. If it is too complex or would stall development, create a follow-up so it can be addressed in a timely manner. You can always ask the database reviewer or maintainer for help and guidance.
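
As a rough illustration of the batching advice above, the sketch below times the same batched update with a few batch sizes. It is a sketch only: the model, scope, and column are placeholders, and you would run it on both a cold and a warm cache to compare.

```ruby
require 'benchmark'

# Illustrative only: re-run with the cache cold and then warm, and compare.
[100, 1_000, 10_000].each do |batch_size|
  elapsed = Benchmark.realtime do
    Project.where(archived: false).in_batches(of: batch_size) do |batch|
      batch.update_all(last_activity_at: Time.current) # placeholder update
    end
  end
  puts "batch size #{batch_size}: #{elapsed.round(2)}s"
end
```

Comparing the first (cold-cache) run against a repeated (warm-cache) run shows how much of the cost comes from I/O versus the query shape itself.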
- [The impact of regular expression denial of service (ReDoS) in practice: an empirical study at the ecosystem scale](https://people.cs.vt.edu/~davisjam/downloads/publications/DavisCoghlanServantLee-EcosystemREDOS-ESECFSE18.pdf). This research paper discusses approaches to automatically detect ReDoS vulnerabilities.
- [Freezing the web: A study of ReDoS vulnerabilities in JavaScript-based web servers](https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-staicu.pdf). Another research paper about detecting ReDoS vulnerabilities.
@@ -13,15 +13,15 @@ You should also reference the [OmniAuth documentation](omniauth.md) for general
## Common SAML Terms
| Term                            | Description |
|---------------------------------|-------------|
| Identity Provider (IdP)         | The service which manages your user identities, such as ADFS, Okta, OneLogin, or Ping Identity. |
| Service Provider (SP)           | SAML considers GitLab to be a service provider. |
| Assertion                       | A piece of information about a user's identity, such as their name or role. Also known as claims or attributes. |
| SSO                             | Single Sign-On. |
| Assertion consumer service URL  | The callback on GitLab where users will be redirected after successfully authenticating with the identity provider. |
| Issuer                          | How GitLab identifies itself to the identity provider. Also known as a "Relying party trust identifier". |
| Certificate fingerprint         | Used to confirm that communications over SAML are secure by checking that the server is signing communications with the correct certificate. Also known as a certificate thumbprint. |
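
To show where these terms appear in practice, here is a minimal Omnibus-style (`/etc/gitlab/gitlab.rb`) sketch; the URLs and fingerprint are placeholders, and the full walkthrough follows in the General Setup section below.

```ruby
gitlab_rails['omniauth_providers'] = [
  {
    name: 'saml',
    args: {
      # Issuer: how GitLab identifies itself to the identity provider.
      issuer: 'https://gitlab.example.com',
      # Assertion consumer service URL: where users land after authenticating.
      assertion_consumer_service_url: 'https://gitlab.example.com/users/auth/saml/callback',
      # Certificate fingerprint of the IdP's signing certificate.
      idp_cert_fingerprint: '43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8',
      idp_sso_target_url: 'https://login.example.com/idp',
      name_identifier_format: 'urn:oasis:names:tc:SAML:2.0:nameid-format:persistent'
    },
    label: 'Company SAML'
  }
]
```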
## General Setup
...
...
@@ -265,7 +265,7 @@ considered admin users.
### Auditor groups **(PREMIUM SELF)**
> Introduced in GitLab 11.4.
The requirements are the same as the previous settings: your IdP needs to pass Group information to GitLab, you need to tell
GitLab where to look for the groups in the SAML response, and which group(s) should be
...
...
@@ -454,8 +454,6 @@ args: {
### `uid_attribute`
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/17734) in GitLab 10.7.
By default, the `uid` is set as the `name_id` in the SAML response. If you'd like to designate a unique attribute for the `uid`, you can set the `uid_attribute`. In the example below, the value of the `uid` attribute in the SAML response is set as the `uid_attribute`.
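
A minimal Omnibus-style sketch of such a configuration (only the relevant key is shown; the SAML attribute name `uid` is the one used in this example):

```ruby
gitlab_rails['omniauth_providers'] = [
  {
    name: 'saml',
    args: {
      # ...the other SAML settings from the general setup...
      uid_attribute: 'uid' # take the user ID from the `uid` SAML attribute instead of `name_id`
    }
  }
]
```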
@@ -46,8 +46,12 @@ From [AlexJonesax](https://twitter.com/AlexJonesax) and [KaiPMDH](https://twitte
![Alex on Twitter: Auto DevOps in GitLab doesn't just lower the bar to entry, it removes the bar and holds your hand.](img/alexj_autodevops_min_v13_8.png)
<!-- vale gitlab.Spelling = NO -->
![Kai on Twitter: When I saw this on the Auto DevOps stuff, my mind was blown...](img/kai_autodevops_min_v13_8.png)
<!-- vale gitlab.Spelling = YES -->
We welcome everyone to [share your experience by tagging GitLab on Twitter](https://twitter.com/gitlab).