Note there are several options that you should consider using:
...
...
| Setting | Description |
| ------- | ----------- |
| `vers=4.1` | NFS v4.1 should be used instead of v4.0 because there is a Linux [NFS client bug in v4.0](https://gitlab.com/gitlab-org/gitaly/issues/1339) that can cause significant problems due to stale data. |
| `nofail` | Don't halt the boot process waiting for this mount to become available. |
| `lookupcache=positive` | Tells the NFS client to honor `positive` cache results but invalidates any `negative` cache results. Negative cache results cause problems with Git. Specifically, a `git push` can fail to register uniformly across all NFS clients. The negative cache causes the clients to 'remember' that the files did not exist previously. |
| `hard` | Instead of `soft`. [Further details](#soft-mount-option). |
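Taken together, these options might appear in an `/etc/fstab` entry like the following sketch; the server address and export path are hypothetical, so adjust them to your environment:

```plaintext
# Hypothetical NFS mount for GitLab data; vers=4.1, hard, nofail, and
# lookupcache=positive are the options discussed in the table above.
10.1.0.1:/var/opt/gitlab/git-data /var/opt/gitlab/git-data nfs4 defaults,vers=4.1,hard,nofail,lookupcache=positive 0 2
```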
### soft mount option
We recommend that you use `hard` in your mount options, unless you have a specific
reason to use `soft`.
On GitLab.com, we use `soft` because there were times when we had NFS servers
reboot and `soft` improved availability, but everyone's infrastructure is different.
If your NFS is provided by on-premises storage arrays with redundant controllers,
for example, you shouldn't need to worry about NFS server availability.
The NFS man page states:
> "soft" timeout can cause silent data corruption in certain cases
Read the [Linux man page](https://linux.die.net/man/5/nfs) to understand the difference,
and if you do use `soft`, ensure that you've taken steps to mitigate the risks.
If you experience behaviour that might have been caused by
writes to disk on the NFS server not occurring, such as commits going missing,
use the `hard` option, because (from the man page):
> use the soft option only when client responsiveness is more important than data integrity
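Whichever option you choose, you can verify what an existing mount actually uses; on most Linux systems, `nfsstat` prints the effective options per NFS mount:

```shell
# Print each NFS mount with its effective options (hard/soft, vers, lookupcache, ...)
nfsstat -m
```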
Other vendors make similar recommendations, including
[SAP](http://wiki.scn.sap.com/wiki/x/PARnFQ) and NetApp's
## Deep Dive
In June 2019, Mario de la Ossa hosted a [Deep Dive](https://gitlab.com/gitlab-org/create-stage/issues/1) on GitLab's [Elasticsearch integration](../integration/elasticsearch.md) to share his domain specific knowledge with anyone who may work in this part of the code base in the future. You can find the [recording on YouTube](https://www.youtube.com/watch?v=vrvl-tN2EaA), and the slides on [Google Slides](https://docs.google.com/presentation/d/1H-pCzI_LNrgrL5pJAIQgvLX8Ji0-jIKOg1QeJQzChug/edit) and in [PDF](https://gitlab.com/gitlab-org/create-stage/uploads/c5aa32b6b07476fa8b597004899ec538/Elasticsearch_Deep_Dive.pdf). Everything covered in this deep dive was accurate as of GitLab 12.0, and while specific details may have changed since then, it should still serve as a good introduction.
and waits for the resulting status. We call this a _status attribution_.
1. GitLab packages are being built in the [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab)
pipeline. Packages are then pushed to its Container Registry.
1. When packages are ready and available in the registry, a final step in the
[Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) pipeline, triggers a new
GitLab QA pipeline (those with access can view them at `https://gitlab.com/gitlab-org/gitlab-qa-mirror/pipelines`). It also waits for a resulting status.
1. GitLab QA pulls images from the registry, spins up containers, and runs tests (see the trigger sketch below).
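The cross-project trigger-and-wait step above can be sketched in `.gitlab-ci.yml`; the job name and downstream project here are illustrative rather than the actual Omnibus configuration:

```yaml
# Hypothetical sketch: trigger a downstream QA pipeline and adopt its status.
trigger-qa:
  stage: qa
  trigger:
    project: gitlab-org/gitlab-qa-mirror
    strategy: depend  # wait for the downstream pipeline and mirror its status
```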
...
...
On every pipeline during the `test` stage, the `review-qa-smoke` job is
automatically started: it runs the QA smoke suite against the
[Review App](../review_apps.md).
You can also manually start the `review-qa-all`: it runs the full QA suite
against the [Review App](../review_apps.md).
**This runs end-to-end tests against a Review App based on [the official GitLab
Helm chart](https://gitlab.com/gitlab-org/charts/gitlab/), itself deployed with custom
[Cloud Native components](https://gitlab.com/gitlab-org/build/CNG) built from your merge request's changes.**
See [Review Apps](../review_apps.md) for more details about Review Apps.
Once you decided where to put [test environment orchestration scenarios](https://gitlab.com/gitlab-org/gitlab-qa/tree/master/lib/gitlab/qa/scenario) and
[instance-level scenarios](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/qa/qa/specs/features), take a look at the [GitLab QA README](https://gitlab.com/gitlab-org/gitlab/tree/master/qa/README.md),
the [GitLab QA orchestrator README](https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md), and [the already existing
- [2.](#2-test-skeleton) Creating the skeleton of the test file (`*_spec.rb`)
- [3.](#3-test-cases-mvc) The [MVC](https://about.gitlab.com/handbook/values/#minimum-viable-change-mvc) of the test cases' logic
- [4.](#4-extracting-duplicated-code) Extracting duplicated code into methods
- [5.](#5-tests-pre-conditions-using-resources-and-page-objects) Tests' pre-conditions (`before :context` and `before`) using resources and [Page Objects](page_objects.md)
- [6.](#6-optimization) Optimizing the test suite
- [7.](#7-resources) Using and implementing resources
- [8.](#8-page-objects) Moving element definitions and methods to [Page Objects](page_objects.md)
### 0. Are end-to-end tests needed?
...
...
> Notice that the test itself is simple. The most challenging part is the creation of the application state, which will be covered later.
>
> The exemplified test case's MVC is not enough for the change to be merged, but it helps to build up the test logic. The reason is that we do not want to use locators directly in the tests, and tests **must** use [Page Objects](page_objects.md) before they can be merged. This way we better separate the responsibilities, where the Page Objects encapsulate elements and methods that allow us to interact with pages, while the spec files describe the test cases in more business-related language.
Below are the steps that the test covers:
...
...
In the `before` block we create all the application state needed for the tests to run.
> A project is created in the background by creating the `issue` resource.
>
> When creating the [Resources](resources.md), notice that when calling the `fabricate_via_api` method, we pass some attribute:value pairs, like `title` and `labels` for the `issue` resource, and `project` and `title` for the `label` resource.
>
> What's important to understand here is that by creating the application state mostly using the public APIs we save a lot of time in the test suite setup stage.
>
...
...
**Note:** When this document was written, some code that is now merged to master had not been implemented yet, but we left it here so readers can understand the whole process of end-to-end test creation.
You can think of [Resources](resources.md) as anything that can be created on GitLab CE or EE, either through the GUI, the API, or the CLI.
With that in mind, resources can be a project, an epic, an issue, a label, a commit, etc.
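For example, the issue and label pre-conditions described earlier can be fabricated through the API with a sketch like this (the attribute values are illustrative, and it assumes both resources support API fabrication):

```ruby
# Sketch: build test pre-conditions via the public API instead of the GUI.
issue = Resource::Issue.fabricate_via_api! do |issue|
  issue.title = 'Issue to test scoped labels'
  issue.labels = ['animal::fox']
end

Resource::Label.fabricate_via_api! do |label|
  label.project = issue.project # reuse the project created for the issue
  label.title = 'animal::dolphin'
end
```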
...
...
> Page Objects are auto-loaded in the [`qa/qa.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/qa/qa.rb) file and available in all the test files (`*_spec.rb`).
Take a look at the [Page Objects](page_objects.md) documentation.
Now, let's go back to our example.
...
...
#### Updates in the view (`*.html.haml`) and `dropdowns_helper.rb` files
Now let's change the view and the `dropdowns_helper` files to add the selectors that relate to the [Page Objects](page_objects.md).
In [`app/views/shared/issuable/_sidebar.html.haml:105`](https://gitlab.com/gitlab-org/gitlab/blob/7ca12defc7a965987b162a6ebef302f95dc8867f/app/views/shared/issuable/_sidebar.html.haml#L105), add a `data: { qa_selector: 'edit_link_labels' }` data attribute.
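On the test side, a Page Object can then declare the view it depends on and the elements backed by those `qa_selector` data attributes. A rough sketch (the exact class and method names are illustrative):

```ruby
module QA
  module Page
    module Project
      module Issue
        class Show < Page::Base
          # Ties the Page Object to the view and the qa_selector attributes above.
          view 'app/views/shared/issuable/_sidebar.html.haml' do
            element :edit_link_labels
            element :labels_block
          end

          def text_of_labels_block
            find_element(:labels_block).text
          end
        end
      end
    end
  end
end
```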
...
...
This method receives an element (`name`) and the `keys` that it will send to that element.
As you might remember, in the Issue Page Object we call this method like this: `send_keys_to_element(:dropdown_input_field, [label, :enter])`.
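A plausible implementation is a thin wrapper over the element finder and Capybara's `send_keys`; this is a sketch, not necessarily the exact merged code:

```ruby
# Finds the element by its QA name and sends the given keys to it.
def send_keys_to_element(name, keys)
  find_element(name).send_keys(keys)
end
```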
With that, you should be able to start writing end-to-end tests yourself. *Congratulations!*
- [`rspec-retry` is biting us when some API specs fail](https://gitlab.com/gitlab-org/gitlab-foss/issues/29242): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/9825>
1. On every [pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) during the `test` stage, the
[`gitlab:assets:compile`](https://gitlab.com/gitlab-org/gitlab/-/jobs/467724487) job is automatically started.
- Once it's done, it starts the [`review-build-cng`](https://gitlab.com/gitlab-org/gitlab/-/jobs/467724808)
manual job since the [`CNG-mirror`](https://gitlab.com/gitlab-org/build/CNG-mirror) pipeline triggered in the
following step depends on it.
1. The [`review-build-cng`](https://gitlab.com/gitlab-org/gitlab/-/jobs/467724808) job [triggers a pipeline](https://gitlab.com/gitlab-org/build/CNG-mirror/pipelines/44364657)
in the [`CNG-mirror`](https://gitlab.com/gitlab-org/build/CNG-mirror) project.
- The [`CNG-mirror`](https://gitlab.com/gitlab-org/build/CNG-mirror/pipelines/44364657) pipeline creates the Docker images of
each component (e.g. `gitlab-rails-ee`, `gitlab-shell`, `gitaly` etc.)
based on the commit from the [GitLab pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) and stores
them in its [registry](https://gitlab.com/gitlab-org/build/CNG-mirror/container_registry).
- We use the [`CNG-mirror`](https://gitlab.com/gitlab-org/build/CNG-mirror) project so that the `CNG` (**C**loud
**N**ative **G**itLab) project's registry is not overloaded with a
lot of transient Docker images.
- Note that the official CNG images are built by the `cloud-native-image`
job, which runs only for tags, and itself triggers a [`CNG`](https://gitlab.com/gitlab-org/build/CNG) pipeline.
1. Once the `test` stage is done, the [`review-deploy`](https://gitlab.com/gitlab-org/gitlab/-/jobs/467724810) job
deploys the Review App using [the official GitLab Helm chart](https://gitlab.com/gitlab-org/charts/gitlab/) to
the [`review-apps-ce`](https://console.cloud.google.com/kubernetes/clusters/details/us-central1-a/review-apps-ce?project=gitlab-review-apps) / [`review-apps-ee`](https://console.cloud.google.com/kubernetes/clusters/details/us-central1-b/review-apps-ee?project=gitlab-review-apps)
Kubernetes cluster on GCP.
- The actual scripts used to deploy the Review App can be found at
### Using K9s
[K9s](https://github.com/derailed/k9s) is a powerful command line dashboard which allows you to filter by labels. This can help identify trends with apps exceeding the [review-app resource requests](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/review_apps/base-config.yaml). Kubernetes will schedule pods to nodes based on resource requests and allow for CPU usage up to the limits.
- In K9s you can sort or add filters by typing the `/` character
- `-lrelease=<review-app-slug>` - filters down to all pods for a release. This aids in determining what is having issues in a single deployment
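The same label selector works directly with `kubectl` if you prefer it over the K9s UI (the release slug below is a placeholder):

```shell
# List every pod belonging to one Review App release
kubectl get pods -l release=<review-app-slug>
```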
...
...
### Helpful command line tools
- [K9s](https://github.com/derailed/k9s) - enables a CLI dashboard across pods and filtering by labels
- [Stern](https://github.com/wercker/stern) - enables cross-pod log tailing based on label/field selectors