=s_("GitLabPages|Something went wrong while obtaining Let's Encrypt certificate for %{domain}. To retry visit your %{link_start}domain details%{link_end}.").html_safe%{domain: domain.domain,
=s_("GitLabPages|Something went wrong while obtaining the Let's Encrypt certificate for %{domain}. To retry visit your %{link_start}domain details%{link_end}.").html_safe%{domain: domain.domain,
You are strongly encouraged to get your code **reviewed** by a
[reviewer](https://about.gitlab.com/handbook/engineering/workflow/code-review/#reviewer) as soon as
there is any code to review, to get a second opinion on the chosen solution and
implementation, and an extra pair of eyes looking for bugs, logic problems, or
uncovered edge cases.

The default approach is to choose a reviewer from your group or team for the first review.
This is only a recommendation and the reviewer may be from a different team.
However, it is recommended to pick someone who is a [domain expert](#domain-experts).
You can read more about the importance of involving reviewer(s) in the section on the responsibility of the author below.
If you need some guidance (for example, it's your first merge request), feel free to ask
one of the [Merge request coaches](https://about.gitlab.com/company/team/).
Getting your merge request **merged** also requires a maintainer. If it requires
more than one approval, the last maintainer to review and approve it will also merge it.
### Domain experts
Domain experts are team members who have substantial experience with a specific technology, product feature, or area of the codebase. Team members are encouraged to self-identify as domain experts and add it to their [team profile](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/data/team.yml).

When self-identifying as a domain expert, it is recommended to assign the MR changing the `team.yml` to be merged by an already established domain expert or a corresponding Engineering Manager.
We make the following assumptions with regard to automatically being considered a domain expert:
- Team members working in a specific stage/group (e.g. create: source code) are considered domain experts for that area of the app they work on
- Team members working on a specific feature (e.g. search) are considered domain experts for that feature
We default to assigning reviews to team members with domain expertise.
When a suitable [domain expert](#domain-experts) isn't available, you can choose any team member to review the MR, or simply follow the [Reviewer roulette](#reviewer-roulette) recommendation.
Team members' domain expertise can be viewed on the [engineering projects](https://about.gitlab.com/handbook/engineering/projects/) page or on the [GitLab team page](https://about.gitlab.com/company/team/).
### Reviewer roulette
The [Danger bot](dangerbot.md) randomly picks a reviewer and a maintainer for
each area of the codebase that your merge request seems to touch. It only makes
**recommendations** and you should override it if you think someone else is a better
fit!
It picks reviewers and maintainers from the list at the [engineering projects](https://about.gitlab.com/handbook/engineering/projects/) page, with these behaviours:
1. It will not pick people whose [GitLab status](../user/profile/index.md#current-status)
contains the string 'OOO'.
As described in the section on the responsibility of the maintainer below, you
are recommended to get your merge request approved and merged by maintainer(s)
with [domain expertise](#domain-experts).
1. If your merge request includes backend changes [^1], it must be
**approved by a [backend maintainer](https://about.gitlab.com/handbook/engineering/projects/#gitlab_maintainers_backend)**.
To reach the required level of confidence in their solution, an author is expected
to involve other people in the investigation and implementation processes as
appropriate.
They are encouraged to reach out to [domain experts](#domain-experts) to discuss different solutions
or get an implementation reviewed, to product managers and UX designers to clear
up confusion or verify that the end result matches what they had in mind, to
database specialists to get input on the data model or specific queries, or to
any other developer to get an in-depth review of the solution.
If an author is unsure if a merge request needs a [domain expert's](#domain-experts) opinion, that's
usually a pretty good sign that it does, since without it the required level of
confidence in their solution will not have been reached.
that it meets all requirements, you should:
- Click the Approve button.
- Advise the author their merge request has been reviewed and approved.
- Assign the merge request to a maintainer. Default to assigning it to a maintainer with [domain expertise](#domain-experts). However, if one isn't available, or you think the merge request doesn't need a review by a [domain expert](#domain-experts), feel free to follow the [Reviewer roulette](#reviewer-roulette) suggestion.
### The responsibility of the maintainer
Since a maintainer's job only depends on their knowledge of the overall GitLab
codebase, and not that of any specific domain, they can review, approve, and merge
merge requests from any team and in any product area.
Maintainers will do their best to also review the specifics of the chosen solution
before merging, but as they are not necessarily [domain experts](#domain-experts), they may be poorly
placed to do so without an unreasonable investment of time. In those cases, they
will defer to the judgment of the author and earlier reviewers, in favor of focusing on their primary responsibilities.
If a maintainer feels that an MR is substantial enough that it warrants a review from a [domain expert](#domain-experts),
and it is unclear whether a domain expert has been involved in the reviews to date,
they may request a [domain expert's](#domain-experts) review before merging the MR.
If a developer who happens to also be a maintainer was involved in a merge request
as a reviewer, it is recommended that they are not also picked as the maintainer to ultimately approve and merge it.
Maintainers should check before merging if the merge request is approved by the
required approvers.
### Assigning a merge request for a review
When you are ready to have your merge request reviewed, you should default to assigning it to a reviewer from your group or team for the first review. However, you can also assign it to any reviewer. The list of reviewers can be found on the [Engineering projects](https://about.gitlab.com/handbook/engineering/projects/) page.
You can also use the `workflow::ready for review` label. That means that your merge request is ready to be reviewed and any reviewer can pick it. It is recommended to use that label only if there isn't time pressure, and to make sure the merge request is assigned to a reviewer.
When your merge request was reviewed and can be passed to a maintainer, you should default to choosing a maintainer with [domain expertise](#domain-experts), and otherwise follow the Reviewer Roulette recommendation or use the label `ready for merge`.
It is the responsibility of the author of a merge request to ensure that it is reviewed. If it stays in the `ready for review` state too long, it is recommended to assign it to a specific reviewer.
This is a tailored extension of the Best Practices [found in the testing guide](../best_practices.md).

## Prefer API over UI

The end-to-end testing framework has the ability to fabricate its resources on a case-by-case basis.
Resources should be fabricated via the API wherever possible.

We can save both time and money by fabricating resources that our test will need via the API.

[Learn more](resources.md) about resources.

## Avoid superfluous expectations

To keep tests lean, it is important that we only test what we need to test.

Ensure that you do not add any `expect()` statements that are unrelated to what needs to be tested.

For example:

```ruby
#=> Good
Flow::Login.sign_in
Page::Main::Menu.perform do |menu|
  expect(menu).to be_signed_in
end

#=> Bad
Flow::Login.sign_in(as: user)
Page::Main::Menu.perform do |menu|
  expect(menu).to be_signed_in
  expect(page).to have_content(user.name) #=> we already validated being signed in. redundant.
  expect(menu).to have_element(:nav_bar) #=> likely unnecessary. already validated in lower-level. test doesn't call for validating this.
end

#=> Good
issue = Resource::Issue.fabricate_via_api! do |issue|
  issue.name = 'issue-name'
end

Project::Issues::Index.perform do |index|
  expect(index).to have_issue(issue)
end

#=> Bad
issue = Resource::Issue.fabricate_via_api! do |issue|
  issue.name = 'issue-name'
end

Project::Issues::Index.perform do |index|
  expect(index).to have_issue(issue)
  expect(page).to have_content(issue.name) #=> page content check is redundant as the issue was already validated in the line above.
end
```
## Prefer to split tests across multiple files
In summary:
- **Do**: Split tests across separate files, unless the tests share expensive setup.
- **Don't**: Put new tests in an existing file without considering the impact on parallelization.
## Limit the use of the UI in `before(:context)` and `after` hooks
Limit the use of `before(:context)` hooks to perform setup tasks with only API calls,
non-UI operations, or basic UI operations such as login.
We use the [`capybara-screenshot`](https://github.com/mattheworiordan/capybara-screenshot) library to automatically save a screenshot on failure.

`capybara-screenshot` [saves the screenshot in the RSpec's `after` hook](https://github.com/mattheworiordan/capybara-screenshot/blob/master/lib/capybara-screenshot/rspec.rb#L97).
[If there is a failure in `before(:context)`, the `after` hook is not called](https://github.com/rspec/rspec-core/pull/2652/files#diff-5e04af96d5156e787f28d519a8c99615R148) and so the screenshot is not saved.

Given this fact, we should limit the use of `before(:context)` to only those operations where a screenshot is not needed.
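The hook-ordering pitfall above can be simulated in plain Ruby (an illustrative sketch, not RSpec's actual internals — `run_example` and the hook lambdas are hypothetical):

```ruby
# Simulates the relevant RSpec behaviour: when context-level setup raises,
# the example-level `after` hooks (where capybara-screenshot takes its
# screenshot) never run, so no screenshot is saved.
def run_example(before_context:, example:, after_hooks:)
  log = []
  begin
    before_context.call(log)
  rescue StandardError
    log << :setup_failed
    return log # `after` hooks are skipped entirely
  end
  begin
    example.call(log)
  rescue StandardError
    log << :example_failed
  ensure
    after_hooks.each { |hook| hook.call(log) }
  end
  log
end
```

A failure inside the example still reaches the screenshot hook, but a failure inside the context-level setup does not — which is why UI work that might need a screenshot belongs in the example body.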
Similarly, the `after` hook should only be used for non-UI operations. Any UI operations in `after` hook in a test file
would execute before the `after` hook that takes the screenshot. This would result in moving the UI status away from the
point of failure and so the screenshot would not be captured at the right moment.
## Ensure tests do not leave the browser logged in
All tests expect to be able to log in at the start of the test.
That's not possible if a test leaves the browser logged in when it finishes. Normally this isn't a
problem because [Capybara resets the session after each test](https://github.com/teamcapybara/capybara/blob/9ebc5033282d40c73b0286e60217515fd1bb0b5d/lib/capybara/rspec.rb#L18).
But Capybara does that in an `after` block, so when a test logs in within an `after(:context)` block,
the browser returns to a logged-in state *after* Capybara has logged it out, and so the next test will fail.
For an example see: <https://gitlab.com/gitlab-org/gitlab/issues/34736>
Ideally, any actions performed in an `after(:context)` (or [`before(:context)`](#limit-the-use-of-the-ui-in-beforecontext-and-after-hooks)) block would be performed via the API. But if it's necessary to do so via the UI (e.g., if API functionality doesn't exist), make sure to log out at the end of the block.
```ruby
after(:all) do
  # ...
end
```
We don't run tests that require Administrator access against our Production environment.
When you add a new test that requires Administrator access, apply the RSpec metadata `:requires_admin` so that the test will not be included in the test suites executed against Production and other environments on which we don't want to run those tests.
Note: When running tests locally or configuring a pipeline, the environment variable `QA_CAN_TEST_ADMIN_FEATURES` can be set to `false` to skip tests that have the `:requires_admin` tag.
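As an illustrative sketch of how the tag and the environment variable interact (plain Ruby, not the real QA runner — `run_admin_tagged_test?` is a hypothetical helper):

```ruby
# A test carrying the :requires_admin tag runs only when the environment
# has not opted out via QA_CAN_TEST_ADMIN_FEATURES=false.
def run_admin_tagged_test?(metadata, env)
  return true unless metadata.include?(:requires_admin)

  env.fetch('QA_CAN_TEST_ADMIN_FEATURES', 'true') != 'false'
end
```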
## Prefer `Commit` resource over `ProjectPush`
In line with [using the API](#prefer-api-over-ui), use a `Commit` resource whenever possible.
`ProjectPush` uses raw shell commands via the Git Command Line Interface (CLI) whereas the `Commit` resource makes an HTTP request.
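For illustration, the single HTTP request a `Commit` resource makes corresponds to GitLab's commit-creation API (`POST /projects/:id/repository/commits`). A sketch of building that request body (`commit_payload` is a hypothetical helper, not part of the framework):

```ruby
require 'json'

# Builds the JSON body for the Commits API — one HTTP call creates the
# branch content that ProjectPush would otherwise create via `git push`.
def commit_payload(branch:, message:, files:)
  {
    branch: branch,
    commit_message: message,
    actions: files.map do |path, content|
      { action: 'create', file_path: path, content: content }
    end
  }.to_json
end
```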
management project. Refer to the
[chart](https://github.com/crossplane/crossplane/tree/master/cluster/charts/crossplane#configuration) for the
available configuration options. Note that this link points to the docs for the current development release, which may differ from the version you have installed. You can check out a specific version in the branch/tag switcher.
### Install Fluentd using GitLab CI/CD
> [Introduced](https://gitlab.com/gitlab-org/cluster-integration/cluster-applications/-/merge_requests/76) in GitLab 12.10.
To install Fluentd into the `gitlab-managed-apps` namespace of your cluster using GitLab CI/CD, define the following configuration in `.gitlab/managed-apps/config.yaml`:
```yaml
Fluentd:
  installed: true
```
You can also review the default values set for this chart in the [values.yaml](https://github.com/helm/charts/blob/master/stable/fluentd/values.yaml) file.
You can customize the installation of Fluentd by defining a
`.gitlab/managed-apps/fluentd/values.yaml` file in your cluster management
project. Refer to the
[configuration chart for the current development release of Fluentd](https://github.com/helm/charts/tree/master/stable/fluentd#configuration)
for the available configuration options.
NOTE: **Note:**
The configuration chart link points to the current development release, which
may differ from the version you have installed. To ensure compatibility, switch
to the specific branch or tag you are using.
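As a sketch, a minimal override file could look like the following. The `resources` key shown is a standard Helm/Kubernetes value assumed to be supported; consult the chart's `values.yaml` for the options available in your installed version:

```yaml
# .gitlab/managed-apps/fluentd/values.yaml (illustrative only)
resources:
  limits:
    memory: 200Mi
```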
## Upgrading applications
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/24789) in GitLab 11.8.
associated Pages domain. It will also be renewed automatically by GitLab.
## Troubleshooting
### Error "Something went wrong while obtaining Let's Encrypt certificate"
### Error "Something went wrong while obtaining the Let's Encrypt certificate"
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/30146) in GitLab 13.0.
If you get an error **Something went wrong while obtaining the Let's Encrypt certificate**, you can try obtaining the certificate again by following these steps:
-if [ -z "$DAST_WEBSITE$DAST_API_SPECIFICATION" ]; then echo "Either DAST_WEBSITE or DAST_API_SPECIFICATION must be set. See https://docs.gitlab.com/ee/user/application_security/dast/#configuration for more details." && exit 1; fi
msgid "GitLabPages|%{domain} is not verified. To learn how to verify ownership, visit your %{link_start}domain details%{link_end}."
msgid "GitLabPages|%{domain} is not verified. To learn how to verify ownership, visit your %{link_start}domain details%{link_end}."
msgstr ""
msgstr ""
msgid "GitLabPages|Save"
msgid "GitLabPages|Save"
msgstr ""
msgstr ""
msgid "GitLabPages|Something went wrong while obtaining Let's Encrypt certificate for %{domain}. To retry visit your %{link_start}domain details%{link_end}."
msgid "GitLabPages|Something went wrong while obtaining the Let's Encrypt certificate for %{domain}. To retry visit your %{link_start}domain details%{link_end}."
msgstr ""
msgstr ""
msgid "GitLabPages|Support for domains and certificates is disabled. Ask your system's administrator to enable it."
msgid "GitLabPages|Support for domains and certificates is disabled. Ask your system's administrator to enable it."
msgid "Licenses|Detected in Project"
msgid "Licenses|Detected in Project"
msgstr ""
msgstr ""
msgid "Licenses|Detected licenses that are out-of-compliance with the project's assigned policies"
msgstr ""
msgid "Licenses|Displays licenses detected in the project, based on the %{linkStart}latest successful%{linkEnd} scan"
msgid "Licenses|Displays licenses detected in the project, based on the %{linkStart}latest successful%{linkEnd} scan"
msgstr ""
msgstr ""
msgid "Licenses|Policy"
msgid "Licenses|Policy"
msgstr ""
msgstr ""
msgid "Licenses|Policy violation: denied"
msgstr ""
msgid "Licenses|Specified policies in this project"
msgid "Licenses|Specified policies in this project"
expect(page).tohave_text("Something went wrong while obtaining the Let's Encrypt certificate.")
click_on('Retry')
expect(page).tohave_text("GitLab is obtaining a Let's Encrypt SSL certificate for this domain. This process can take some time. Please try again later.")
end
end
shared_examples 'user sees private keys only for user provided certificate' do