Commit 202ecc8a authored by Uma Chandran, committed by Kati Paizee

Fixes Vale latin-terms errors by replacing instances of `e.g.` and `i.e.` with English equivalents.

parent 28383338
@@ -395,7 +395,7 @@ it('passes', () => {
To modify only the hash, use either the `setWindowLocation` helper, or assign
directly to `window.location.hash`, for example:
```javascript
it('passes', () => {
  window.location.hash = '#test'; // assign the hash directly
});
```
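For comparison, a minimal sketch using the `setWindowLocation` helper instead (the import path follows GitLab's Jest helper conventions and is an assumption here):

```javascript
import setWindowLocation from 'helpers/set_window_location_helper';

it('passes', () => {
  setWindowLocation('#test');
});
```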
@@ -423,9 +423,7 @@ it('passes', () => {
### Waiting in tests
Sometimes a test needs to wait for something to happen in the application before it continues.
Avoid using [`setTimeout`](https://developer.mozilla.org/en-US/docs/Web/API/setTimeout) because it makes the reason for waiting unclear. Instead use one of the following approaches.
#### Promises and Ajax calls
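A common approach is to flush pending promises before asserting, rather than waiting a fixed amount of time. A minimal sketch, assuming the `waitForPromises` helper available in GitLab's Jest environment (the import path, action, and finder are illustrative):

```javascript
import waitForPromises from 'helpers/wait_for_promises';

it('renders items once the request resolves', async () => {
  triggerFetch(); // hypothetical action that kicks off an Ajax call

  // Waits for all pending promises to settle; unlike an arbitrary
  // setTimeout, the reason for waiting is explicit.
  await waitForPromises();

  expect(findItems()).toHaveLength(3); // hypothetical finder
});
```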
@@ -633,8 +631,8 @@ The latter is useful when you have `setInterval` in the code. **Remember:** our
Non-determinism is the breeding ground for flaky and brittle specs. Such specs end up breaking the CI pipeline, interrupting the workflow of other contributors.
1. Make sure your test subject's collaborators (for example, Axios, Apollo, Lodash helpers) and test environment (for example, Date) behave consistently across systems and over time, as in the sketch after this list.
1. Make sure tests are focused and not doing "extra work" (for example, needlessly creating the test subject more than once in an individual test).
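A minimal sketch of pinning a collaborator for determinism, using `axios-mock-adapter` (already a dependency of the GitLab frontend test suite; the endpoint and payload are illustrative):

```javascript
import axios from 'axios';
import MockAdapter from 'axios-mock-adapter';

let mock;

beforeEach(() => {
  // Every axios request now receives a deterministic, canned response.
  mock = new MockAdapter(axios);
  mock.onGet('/api/v4/projects').reply(200, []);
});

afterEach(() => {
  mock.restore();
});
```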
### Faking `Date` for determinism
@@ -650,7 +648,7 @@ describe('cool/component', () => {
```javascript
// Default fake `Date`
const TODAY = new Date();

// NOTE: `useFakeDate` cannot be called during test execution (that is, inside `it`, `beforeEach`, `beforeAll`, etc.).
describe("on Ada Lovelace's Birthday", () => {
  useFakeDate(1815, 11, 10);
  // ...
});
```
@@ -670,7 +668,7 @@ Similarly, if you really need to use the real `Date` class, then you can import
```javascript
import { useRealDate } from 'helpers/fake_date';
// NOTE: `useRealDate` cannot be called during test execution (that is, inside `it`, `beforeEach`, `beforeAll`, etc.).
describe('with real date', () => {
  useRealDate();
});
```
@@ -702,16 +700,16 @@ The more challenging parts are mocks, which can be used for functions or even dep
### Manual module mocks
Manual mocks are used to mock modules across the entire Jest environment. This is a very powerful testing tool that helps simplify
unit testing by mocking out modules that cannot be easily consumed in our test environment.
> **WARNING:** Do not use manual mocks if a mock should not be consistently applied in every spec (that is, it's only needed by a few specs).
> Instead, consider using [`jest.mock(..)`](https://jestjs.io/docs/jest-object#jestmockmodulename-factory-options)
> (or a similar mocking function) in the relevant spec file.
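For instance, a spec-local mock might look like the following sketch (the module path and mocked function are illustrative, not a prescribed API):

```javascript
// Applies only to the current spec file, not the whole Jest environment.
jest.mock('~/lib/utils/url_utility', () => ({
  visitUrl: jest.fn(),
}));
```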
#### Where should I put manual mocks?
Jest supports [manual module mocks](https://jestjs.io/docs/manual-mocks) by placing a mock in a `__mocks__/` directory next to the source module
(for example, `app/assets/javascripts/ide/__mocks__`). **Don't do this.** We want to keep all of our test-related code in one place (the `spec/` folder).
If a manual mock is needed for a `node_modules` package, use the `spec/frontend/__mocks__` folder. Here's an example of
a [Jest mock for the package `monaco-editor`](https://gitlab.com/gitlab-org/gitlab/-/blob/b7f914cddec9fc5971238cdf12766e79fa1629d7/spec/frontend/__mocks__/monaco-editor/index.js#L1).
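As a rough sketch, a manual mock for a hypothetical `node_modules` package exports just the surface the specs rely on:

```javascript
// spec/frontend/__mocks__/some-editor/index.js
// `some-editor` is a hypothetical package name.
export const editor = {
  create: jest.fn(),
  dispose: jest.fn(),
};
```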
@@ -900,7 +898,7 @@ it.each([
NOTE:
Only use the template literal block if pretty print is not needed for spec output, for example with empty strings or nested objects.
For example, when testing the difference between an empty search string and a non-empty search string, the use of the array block syntax with the pretty print option would be preferred. That way the differences between an empty string (`''`) and a non-empty string (`'search string'`) are visible in the spec output. With a template literal block, by contrast, the empty string is shown as a space, which could lead to a confusing developer experience.
```javascript
// bad
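// A hedged sketch of the "bad" case (column names and counts are illustrative):
it.each`
  searchTerm  | expected
  ${''}       | ${9}
  ${'search'} | ${1}
`('returns $expected results for "$searchTerm"', ({ searchTerm, expected }) => {
  // With the template literal block, the empty searchTerm renders as a
  // blank in the spec name, which is hard to read.
});
```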
@@ -1193,7 +1191,7 @@ it('renders the component correctly', () => {
The above test will create two snapshots. It's important to decide which of the snapshots provides more value for codebase safety. That is, if one of these snapshots changes, does that highlight a possible break in the codebase? This can help catch unexpected changes if something in an underlying dependency changes without our knowledge.
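As a rough illustration (the component, its import path, and the setup are hypothetical, not the exact spec above):

```javascript
import { mount } from '@vue/test-utils';
import MyComponent from '~/my_component.vue'; // hypothetical path

it('renders the component correctly', () => {
  const wrapper = mount(MyComponent);

  // Two snapshots: keep whichever would actually flag an unwanted break.
  expect(wrapper.element).toMatchSnapshot();
  expect(wrapper.text()).toMatchSnapshot();
});
```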
### Pros and Cons
@@ -81,10 +81,9 @@ the GitLab handbook information for the [shared 1Password account](https://about
### Run a Rails console
1. Make sure you [have access to the cluster](#get-access-to-the-gcp-review-apps-cluster) and the `container.pods.exec` permission first.
1. [Filter Workloads by your Review App slug](https://console.cloud.google.com/kubernetes/workload?project=gitlab-review-apps). For example, `review-qa-raise-e-12chm0`.
1. Find and open the `toolbox` Deployment. For example, `review-qa-raise-e-12chm0-toolbox`.
1. Click on the Pod in the "Managed pods" section. For example, `review-qa-raise-e-12chm0-toolbox-d5455cc8-2lsvz`.
1. Click on the `KUBECTL` dropdown, then `Exec` -> `toolbox`.
1. Replace `-c toolbox -- ls` with `-it -- gitlab-rails console` from the
default command or
@@ -95,12 +94,9 @@ the GitLab handbook information for the [shared 1Password account](https://about
### Dig into a Pod's logs
1. Make sure you [have access to the cluster](#get-access-to-the-gcp-review-apps-cluster) and the `container.pods.getLogs` permission first.
1. [Filter Workloads by your Review App slug](https://console.cloud.google.com/kubernetes/workload?project=gitlab-review-apps). For example, `review-qa-raise-e-12chm0`.
1. Find and open the `migrations` Deployment. For example, `review-qa-raise-e-12chm0-migrations.1`.
1. Click on the Pod in the "Managed pods" section. For example, `review-qa-raise-e-12chm0-migrations.1-nqwtx`.
1. Click on the `Container logs` link.
Alternatively, you could use the [Logs Explorer](https://console.cloud.google.com/logs/query;query=?project=gitlab-review-apps), which provides more utility for searching logs. An example query for a pod name is as follows:
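A sketch of such a query in the Cloud Logging query language (the pod name is illustrative):

```plaintext
resource.type="k8s_container"
resource.labels.pod_name="review-qa-raise-e-12chm0-migrations.1-nqwtx"
```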
@@ -161,7 +157,7 @@ subgraph "CNG-mirror pipeline"
- The `review-build-cng` job automatically starts only if your MR includes
[CI or frontend changes](../pipelines.md#changes-patterns). In other cases, the job is manual.
- The [`CNG-mirror`](https://gitlab.com/gitlab-org/build/CNG-mirror/pipelines/44364657) pipeline creates the Docker images of
each component (for example, `gitlab-rails-ee`, `gitlab-shell`, and `gitaly`)
based on the commit from the [GitLab pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) and stores
them in its [registry](https://gitlab.com/gitlab-org/build/CNG-mirror/container_registry).
- We use the [`CNG-mirror`](https://gitlab.com/gitlab-org/build/CNG-mirror) project so that the `CNG` (Cloud
@@ -192,7 +188,7 @@ subgraph "CNG-mirror pipeline"
- If the `review-deploy` job keeps failing (and a manual retry didn't help),
please post a message in the `#g_qe_engineering_productivity` channel and/or create a `~"Engineering Productivity"` `~"ep::review apps"` `~"type::bug"`
issue with a link to your merge request. Note that the deployment failure can
reveal an actual problem introduced in your merge request (that is, this isn't
necessarily a transient failure)!
- If the `review-qa-smoke` job keeps failing (note that we already retry it twice),
please check the job's logs: you could discover an actual problem introduced in
@@ -251,7 +247,7 @@ aids in identifying load spikes on the cluster, and if nodes are problematic or
### Database-related errors in `review-deploy` or `review-qa-smoke`
Occasionally the state of a Review App's database could diverge from the database schema. This could be caused by
changes to migration files or schema, such as a migration being renamed or deleted. This typically manifests in migration errors such as:
- migration job failing with a column that already exists
- migration job failing with a column that does not exist
@@ -283,7 +279,7 @@ If the Docker image does not exist:
- Verify the `image.repository` and `image.tag` options in the `helm upgrade --install` command match the repository names used by CNG-mirror pipeline.
- Look further in the corresponding downstream CNG-mirror pipeline in `review-build-cng` job.
### Node count is always increasing (never stabilizing or decreasing)
**Potential cause:**
@@ -370,10 +366,10 @@ effectively preventing all the Review Apps from getting a DNS record assigned,
making them unreachable via domain name.
This in turn prevented other components of the Review App from starting properly
(for example, `gitlab-runner`).
After some digging, we found that new mounts fail when performed
with transient scopes (for example, pods) of `systemd-mount`:
```plaintext
MountVolume.SetUp failed for volume "dns-gitlab-review-app-external-dns-token-sj5jm" : mount failed: exit status 1
```
@@ -399,8 +395,8 @@ For the record, the debugging steps to find out this issue were:
instances** then click the "SSH" button for the node where the `dns-gitlab-review-app-external-dns` pod runs)
1. In the node: `systemctl --version` => `systemd 232`
1. Gather some more information:
- `mount | grep kube | wc -l` (returns a count, for example, 290)
- `systemctl list-units --all | grep -i var-lib-kube | wc -l` (returns a count, for example, 142)
1. Check how many pods are in a bad state:
- Get all pods running a given node: `kubectl get pods --field-selector=spec.nodeName=NODE_NAME`
- Get all the `Running` pods on a given node: `kubectl get pods --field-selector=spec.nodeName=NODE_NAME | grep Running`
@@ -223,7 +223,7 @@ graph RL
Formal definition: <https://en.wikipedia.org/wiki/Integration_testing>
These kinds of tests ensure that individual parts of the application work well
together, without the overhead of the actual app environment (such as the browser).
These tests should assert at the request/response level: status code, headers,
body.
They're useful to test permissions, redirections, which view is rendered, and so on.
@@ -502,9 +502,7 @@ These tests run against the UI and ensure that basic functionality is working.
### GitLab QA orchestrator
[GitLab QA orchestrator](https://gitlab.com/gitlab-org/gitlab-qa) is a tool that allows you to test that all these pieces integrate well together by building a Docker image for a given version of GitLab Rails and running end-to-end tests (using Capybara) against it.
Learn more in the [GitLab QA orchestrator README](https://gitlab.com/gitlab-org/gitlab-qa/tree/master/README.md).