Commit 5d72e8b8 authored by Amy Qualls, committed by Craig Norris

Add more words to spelling exceptions list

More words that should be allowed to pass through the spell checker.
parent dee867c1
......@@ -5,6 +5,7 @@ Alibaba
allowlist
allowlisting
allowlists
anonymized
Ansible
Anthos
API
......@@ -62,8 +63,11 @@ burndown
cacheable
CAS
CentOS
Certbot
chai
Chatops
chatbot
chatbots
ChatOps
checksummed
checksumming
Citrix
......@@ -96,6 +100,7 @@ deduplicated
deduplicates
deduplicating
deduplication
deliverables
denylist
denylisting
denylists
......@@ -105,6 +110,9 @@ deprovision
deprovisioned
deprovisioning
deprovisions
dequarantine
dequarantined
dequarantining
DevOps
discoverability
Disqus
......@@ -133,6 +141,7 @@ failovers
failsafe
fastlane
favicon
Filebeat
Fio
firewalled
Flawfinder
......@@ -199,12 +208,14 @@ jsdom
JupyterHub
kanban
kanbans
Kaniko
Karma
Kerberos
Kibana
Kinesis
Knative
Kramdown
kubectl
Kubernetes
Kubesec
Laravel
......@@ -226,6 +237,8 @@ Makefile
Makefiles
Markdown
markdownlint
matcher
matchers
Mattermost
mbox
memoization
......@@ -260,6 +273,8 @@ nameservers
namespace
namespaced
namespaces
namespacing
namespacings
Nanoc
NGINX
Nokogiri
......@@ -278,6 +293,7 @@ Packagist
parallelization
parallelizations
passwordless
Patroni
performant
phaser
phasers
......@@ -327,6 +343,8 @@ Redcarpet
Redis
Redmine
reCAPTCHA
redirection
redirections
refactorings
referer
referers
......@@ -343,6 +361,7 @@ requeue
requeued
requeues
reusability
Restlet
resynced
resyncing
resyncs
......@@ -368,6 +387,8 @@ Salesforce
SAML
sandboxing
sbt
scatterplot
scatterplots
Sendmail
Sentry
serverless
......@@ -421,6 +442,7 @@ substrings
syslog
tcpdump
Tiller
timecop
todos
tokenizer
Tokenizers
......@@ -432,13 +454,15 @@ tooltips
Trello
triaging
TypeScript
Twilio
Twitter
Ubuntu
unarchive
unarchived
unarchives
Unassign
Unassigns
unarchiving
unassign
unassigns
uncheck
unchecked
unchecking
......@@ -482,10 +506,12 @@ unresolve
unresolved
unresolving
unschedule
unscoped
unstage
unstaged
unstages
unstaging
unstarted
unstash
unstashed
unstashing
......@@ -504,6 +530,8 @@ validator
validators
vendored
versionless
viewport
viewports
virtualized
virtualizing
Vue
......
......@@ -147,7 +147,7 @@ To run several tests inside one directory:
### Speed up tests, Rake tasks, and migrations
[Spring](https://github.com/rails/spring) is a Rails application preloader. It
[Spring](https://github.com/rails/spring) is a Rails application pre-loader. It
speeds up development by keeping your application running in the background so
you don't need to boot it every time you run a test, Rake task or migration.
......@@ -203,9 +203,9 @@ To generate a sprite file containing all the Emoji, run:
bundle exec rake gemojione:sprite
```
If new emoji are added, the spritesheet may change size. To compensate for
such changes, first generate the `emoji.png` spritesheet with the above Rake
task, then check the dimensions of the new spritesheet and update the
If new emoji are added, the sprite sheet may change size. To compensate for
such changes, first generate the `emoji.png` sprite sheet with the above Rake
task, then check the dimensions of the new sprite sheet and update the
`SPRITESHEET_WIDTH` and `SPRITESHEET_HEIGHT` constants accordingly.
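For instance, the constant update might look like the following sketch (the numbers are placeholders, not the real dimensions; use the values reported for the regenerated sprite sheet):

```ruby
# Placeholder values only — replace with the actual dimensions of the new emoji.png.
SPRITESHEET_WIDTH = 860
SPRITESHEET_HEIGHT = 840
```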
## Update project templates
......
......@@ -76,6 +76,6 @@ To get started, see an [example merge request](https://gitlab.com/gitlab-org/git
## Useful links
- [Routing improvements masterplan](https://gitlab.com/gitlab-org/gitlab/-/issues/215362)
- [Routing improvements master plan](https://gitlab.com/gitlab-org/gitlab/-/issues/215362)
- [Scoped routing explained](https://gitlab.com/gitlab-org/gitlab/-/issues/214217)
- [Removal of deprecated routes](https://gitlab.com/gitlab-org/gitlab/-/issues/28848)
......@@ -8,8 +8,8 @@ The measuring module is a tool that allows to measure a service's execution, and
- Service class name
- Execution time
- Number of sql calls
- Detailed gc stats and diffs
- Number of SQL calls
- Detailed `gc` stats and diffs
- RSS memory usage
- Server worker ID
......@@ -74,7 +74,7 @@ In the following example, the `:gitlab_service_measuring_projects_import_service
[feature flag](feature_flags/development.md#enabling-a-feature-flag-in-development) is used to enable the measuring feature
for `Projects::ImportService`.
From chatops:
From ChatOps:
```shell
/chatops run feature set gitlab_service_measuring_projects_import_service true
......
......@@ -78,7 +78,7 @@ As a general rule, a worker can be considered idempotent if:
- It can safely run multiple times with the same arguments.
- Application side-effects are expected to happen only once
(or side-effects of a second run are not impactful).
(or the side-effects of a second run have no additional effect).
A good example of that would be a cache expiration worker.
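As an illustration, a minimal sketch of such a worker might look like this (the class name and cache key are hypothetical; `ApplicationWorker` and the `idempotent!` declaration are assumed from the worker conventions discussed on this page):

```ruby
# A minimal sketch of an idempotent cache expiration worker.
class ExampleCacheExpirationWorker
  include ApplicationWorker

  idempotent! # running twice with the same arguments has the same effect as running once

  def perform(project_id)
    project = Project.find_by(id: project_id)
    return unless project

    # Deleting a cache entry is naturally idempotent: a second run is a no-op.
    Rails.cache.delete(['project-metadata', project.id])
  end
end
```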
......@@ -156,7 +156,7 @@ named `disable_<queue name>_deduplication`. For example to disable
deduplication for the `AuthorizedProjectsWorker`, we would enable the
feature flag `disable_authorized_projects_deduplication`.
From chatops:
From ChatOps:
```shell
/chatops run feature set disable_authorized_projects_deduplication true
......@@ -272,10 +272,10 @@ annotated with the `worker_resource_boundary` method.
Most workers tend to spend most of their time blocked, waiting on network responses
from other services such as Redis, PostgreSQL, and Gitaly. Since Sidekiq is a
multithreaded environment, these jobs can be scheduled with high concurrency.
multi-threaded environment, these jobs can be scheduled with high concurrency.
Some workers, however, spend large amounts of time _on-CPU_ running logic in
Ruby. Ruby MRI does not support true multithreading - it relies on the
Ruby. Ruby MRI does not support true multi-threading - it relies on the
[GIL](https://thoughtbot.com/blog/untangling-ruby-threads#the-global-interpreter-lock)
to greatly simplify application development by only allowing one section of Ruby
code in a process to run at a time, no matter how many cores the machine
......@@ -427,7 +427,7 @@ isn't picked up by the cops. In any case, please leave a code-comment
pointing to which context will be used when disabling the cops.
When you do provide objects to the context, please make sure that the
route for namespaces and projects is preloaded. This can be done using
route for namespaces and projects is pre-loaded. This can be done using
the `.with_route` scope defined on all `Routable`s.
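As a rough illustration (the query, logger call, and IDs are hypothetical; only the `.with_route` scope comes from this page):

```ruby
# Hypothetical sketch: preload routes so that building the context does not
# trigger an extra routes query for every namespace or project.
projects = Project.where(id: [1, 2, 3]).with_route

projects.each do |project|
  # full_path reads the preloaded route, so no additional query is issued here.
  Gitlab::AppLogger.info("scheduling work for #{project.full_path}")
end
```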
### Cron-Workers
......@@ -591,7 +591,7 @@ to be merged and deployed before additional changes are merged.
1. In a further merge request, update `ExampleWorker.perform_async` calls to
use the new argument.
##### Parameter hash
##### Parameter hash
This approach will not require multiple deployments if an existing worker already
utilizes a parameter hash.
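For illustration, here is a sketch of a worker whose signature already takes a parameter hash, so a new key can be introduced without changing the method's arity (the worker and key names are hypothetical):

```ruby
# Hypothetical worker: because extra data travels in a single params hash,
# adding `new_argument` does not change the number of positional arguments,
# so jobs enqueued by old and new code can be processed during the same deploy.
class ExampleWorker
  include ApplicationWorker

  def perform(object_id, params = {})
    new_argument = params['new_argument'] # absent in jobs enqueued by older code
    # ... do the work, treating a missing key as the previous default ...
  end
end
```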
......
......@@ -113,9 +113,9 @@ sequenceDiagram
## How Usage Ping works
1. The Usage Ping [cron job](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/workers/gitlab_usage_ping_worker.rb#L30) is set in Sidekiq to run weekly.
1. When the cron job runs, it calls [GitLab::UsageData.to_json](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L22).
1. GitLab::UsageData.to_json [cascades down](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L22) to ~400+ other counter method calls.
1. The response of all methods calls are [merged together](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L14) into a single JSON payload in GitLab::UsageData.to_json.
1. When the cron job runs, it calls [`GitLab::UsageData.to_json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L22).
1. `GitLab::UsageData.to_json` [cascades down](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L22) to ~400+ other counter method calls.
1. The responses of all method calls are [merged together](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L14) into a single JSON payload in `GitLab::UsageData.to_json`.
1. The JSON payload is then [posted to the Versions application](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/submit_usage_ping_service.rb#L20).
## Implementing Usage Ping
......@@ -136,12 +136,12 @@ For large tables, PostgreSQL can take a long time to count rows due to MVCC [(Mu
For GitLab.com, there are extremely large tables with 15 second query timeouts, so we use batch counting to avoid encountering timeouts. Here are the sizes of some GitLab.com tables:
| Table | Row counts in millions |
|----------------------------|------------------------|
| merge_request_diff_commits | 2280 |
| ci_build_trace_sections | 1764 |
| merge_request_diff_files | 1082 |
| events | 514 |
| Table | Row counts in millions |
|------------------------------|------------------------|
| `merge_request_diff_commits` | 2280 |
| `ci_build_trace_sections` | 1764 |
| `merge_request_diff_files` | 1082 |
| `events` | 514 |
There are two batch counting methods provided, `Ordinary Batch Counters` and `Distinct Batch Counters`. Batch counting requires indexes on columns to calculate max, min, and range queries. In some cases, a specialized index may need to be added on the columns involved in a counter.
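As a rough sketch of how the two methods are used (the helper names follow the `count`/`distinct_count` convention used in `usage_data.rb`; treat the exact relations and signatures here as assumptions):

```ruby
# Inside Gitlab::UsageData — illustrative only.

# Ordinary batch counter: counts rows of a relation in batches.
projects_count = count(Project.where(archived: false))

# Distinct batch counter: counts distinct values of a column in batches.
distinct_note_authors = distinct_count(::Note.all, :author_id)
```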
......@@ -204,7 +204,7 @@ Method: `redis_usage_data(counter, &block)`
Arguments:
- `counter`: a counter from `Gitlab::UsageDataCounters` that has the `fallback_totals` method implemented
- or a `block`: wich is evaluated
- or a `block`: which is evaluated
Example of usage:
......
......@@ -330,7 +330,7 @@ Feature.enabled?(:ci_live_trace) # => false
If you wish to set up a test where a feature flag is enabled only
for some actors and not others, you can specify this in options
passed to the helper. For example, to enable the `ci_live_trace`
feature flag for a specifc project:
feature flag for a specific project:
```ruby
project1, project2 = build_list(:project, 2)
......@@ -347,7 +347,7 @@ This represents an actual behavior of FlipperGate:
1. You can enable an override for a specified actor
1. You can disable (remove) an override for a specified actor,
fallbacking to default state
falling back to default state
1. There's no way to model that you explicitly disable a specified actor
```ruby
......@@ -467,7 +467,7 @@ However, if a spec makes direct Redis calls, it should mark itself with the
#### Background jobs / Sidekiq
By default, Sidekiq jobs are enqueued into a jobs array and aren't processed.
If a test enqueues Sidekiq jobs and need them to be processed, the
If a test queues Sidekiq jobs and needs them to be processed, the
`:sidekiq_inline` trait can be used.
The `:sidekiq_might_not_need_inline` trait was added when [Sidekiq inline mode was
......@@ -723,7 +723,7 @@ module Spec
end
```
Helpers should not change the RSpec config. For instance, the helpers module
Helpers should not change the RSpec configuration. For instance, the helpers module
described above should not include:
```ruby
......@@ -784,9 +784,9 @@ end
This will create a repository containing two files, with default permissions and
the specified content.
### Config
### Configuration
RSpec config files are files that change the RSpec config (i.e.
RSpec configuration files are files that change the RSpec configuration (i.e.
`RSpec.configure do |config|` blocks). They should be placed under
`spec/support/`.
......@@ -805,7 +805,7 @@ RSpec.configure do |config|
end
```
If a config file only consists of `config.include`, you can add these
If a configuration file only consists of `config.include`, you can add these
`config.include` directly in `spec/spec_helper.rb`.
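For example, a sketch of what that might look like (the helper module name is made up for illustration):

```ruby
# spec/spec_helper.rb — illustrative only; Spec::Support::ExampleHelpers stands in
# for whatever module the configuration file was including.
RSpec.configure do |config|
  config.include Spec::Support::ExampleHelpers
end
```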
For very generic helpers, consider including them in the `spec/support/rspec.rb`
......
......@@ -89,7 +89,7 @@ end
### Defining Elements
The `view` DSL method will correspond to the rails View, partial, or vue component that renders the elements.
The `view` DSL method will correspond to the Rails view, partial, or Vue component that renders the elements.
The `element` DSL method in turn declares an element for which a corresponding
`data-qa-selector=element_name_snaked` data attribute will need to be added to the view file.
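To make the two DSL methods concrete, here is a hedged sketch of a page object (the class name is illustrative, and the `fill_element`/`click_element` helpers are assumed from the QA page object base class; the view path and element names match the example used below):

```ruby
# Illustrative page object only.
module QA
  module Page
    module Main
      class Login < Page::Base
        view 'app/views/my/view.html.haml' do
          element :login_field
          element :password_field
          element :sign_in_button
        end

        def sign_in(user)
          fill_element :login_field, user.username
          fill_element :password_field, user.password
          click_element :sign_in_button
        end
      end
    end
  end
end
```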
......@@ -134,7 +134,7 @@ view 'app/views/my/view.html.haml' do
end
```
To add these elements to the view, you must change the rails View, partial, or vue component by adding a `data-qa-selector` attribute
To add these elements to the view, you must change the Rails view, partial, or Vue component by adding a `data-qa-selector` attribute
for each element defined.
In our case, `data-qa-selector="login_field"`, `data-qa-selector="password_field"` and `data-qa-selector="sign_in_button"`
......@@ -149,7 +149,7 @@ In our case, `data-qa-selector="login_field"`, `data-qa-selector="password_field
Things to note:
- The name of the element and the qa_selector must match and be snake_cased
- The name of the element and the `qa_selector` must match and be snake_cased
- If the element appears on the page unconditionally, add `required: true` to the element. See
[Dynamic element validation](dynamic_element_validation.md)
- You may see `.qa-selector` classes in existing Page Objects. We should prefer the [`data-qa-selector`](#data-qa-selector-vs-qa-selector)
......@@ -255,7 +255,7 @@ These steps ensure the sanity selectors check will detect problems properly.
For example, `qa/qa/ee/page/merge_request/show.rb` adds EE-specific methods to `qa/qa/page/merge_request/show.rb` (with
`QA::Page::MergeRequest::Show.prepend_if_ee('QA::EE::Page::MergeRequest::Show')`) and the following is how it's implemented
(only showing the relevant part and refering to the 4 steps described above with inline comments):
(only showing the relevant part and referring to the 4 steps described above with inline comments):
```ruby
module QA
......
......@@ -24,7 +24,7 @@ To run the tests from the `/qa` directory:
CHROME_HEADLESS=false bin/qa Test::Instance::All http://localhost -- qa/specs/features/ee/browser_ui/3_create/jenkins/jenkins_build_status_spec.rb
```
The test will automatically spinup a Docker container for Jenkins and tear down once the test completes.
The test will automatically spin up a Docker container for Jenkins and tear it down once the test completes.
However, if you need to run Jenkins manually outside of the tests, use this command:
......
......@@ -49,7 +49,7 @@ Notice that in the above example, before clicking the `:operations_environments_
When adding new elements to a page, it's important that we have a uniform element naming convention.
We follow a simple formula roughly based on hungarian notation.
We follow a simple formula roughly based on Hungarian notation.
*Formula*: `element :<descriptor>_<type>`
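A few hypothetical element names that follow the formula (shown inside a `view` block for context; the view path is a placeholder):

```ruby
# Hypothetical examples of the <descriptor>_<type> formula.
view 'app/views/my/view.html.haml' do
  element :login_field                   # a text field
  element :sign_in_button                # a button
  element :operations_environments_link  # a link
end
```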
......@@ -109,7 +109,7 @@ we use the name of the page object in [snake_case](https://en.wikipedia.org/wiki
(all lowercase, with words separated by an underscore). See good and bad examples below.
While we prefer to follow the standard in most cases, it is also acceptable to
use common abbreviations (e.g., mr) or other alternatives, as long as
use common abbreviations (e.g., `mr`) or other alternatives, as long as
the name is not ambiguous. This can include appending `_page` if it helps to
avoid confusion or make the code more readable. For example, if a page object is
named `New`, it could be confusing to name the block argument `new` because that
......
......@@ -49,7 +49,7 @@ examples in a JSON report file on `master` (`retrieve-tests-metadata` and
This was originally implemented in: <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/13021>.
If you want to enable retries locally, you can use the `RETRIES` env variable.
If you want to enable retries locally, you can use the `RETRIES` environment variable.
For instance `RETRIES=1 bin/rspec ...` would retry the failing examples once.
## Problems we had in the past at GitLab
......@@ -79,11 +79,11 @@ For instance `RETRIES=1 bin/rspec ...` would retry the failing examples once.
- [Bis](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/34609#note_34048715): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12604>
- [Bis](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/34698#note_34276286): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/12664>
- [Assert against the underlying database state instead of against a page's content](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/31437): <https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/10934>
- In JS tests, shifting elements can cause Capybara to misclick when the element moves at the exact time Capybara sends the click
- In JS tests, shifting elements can cause Capybara to mis-click when the element moves at the exact time Capybara sends the click
- [Dropdowns rendering upward or downward due to window size and scroll position](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/17660)
- [Lazy loaded images can cause Capybara to misclick](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18713)
- [Lazy loaded images can cause Capybara to mis-click](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18713)
- [Triggering JS events before the event handlers are set up](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18742)
- [Wait for the image to be lazy-loaded when asserting on a Markdown image's src attribute](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25408)
- [Wait for the image to be lazy-loaded when asserting on a Markdown image's `src` attribute](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25408)
#### Capybara viewport size related issues
......
......@@ -55,8 +55,8 @@ subgraph "CNG-mirror pipeline"
each component (e.g. `gitlab-rails-ee`, `gitlab-shell`, `gitaly` etc.)
based on the commit from the [GitLab pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) and stores
them in its [registry](https://gitlab.com/gitlab-org/build/CNG-mirror/container_registry).
- We use the [`CNG-mirror`](https://gitlab.com/gitlab-org/build/CNG-mirror) project so that the `CNG`, (**C**loud
**N**ative **G**itLab), project's registry is not overloaded with a
- We use the [`CNG-mirror`](https://gitlab.com/gitlab-org/build/CNG-mirror) project so that the `CNG` (Cloud
Native GitLab) project's registry is not overloaded with a
lot of transient Docker images.
- Note that the official CNG images are built by the `cloud-native-image`
job, which runs only for tags, and itself triggers a [`CNG`](https://gitlab.com/gitlab-org/build/CNG) pipeline.
......@@ -139,8 +139,8 @@ browser performance testing using a
The `review-apps-ee` and `review-apps-ce` clusters are currently set up with
the following node pools:
- `review-apps-ee` of preemptible `e2-highcpu-16` (16 vCPU, 16 GB memory) nodes with autoscaling
- `review-apps-ce` of preemptible `n1-standard-8` (8 vCPU, 16 GB memory) nodes with autoscaling
- `review-apps-ee` of pre-emptible `e2-highcpu-16` (16 vCPU, 16 GB memory) nodes with autoscaling
- `review-apps-ce` of pre-emptible `n1-standard-8` (8 vCPU, 16 GB memory) nodes with autoscaling
### Helm
......@@ -278,14 +278,14 @@ kubectl top pods | sort --key 2 --numeric
**Potential cause:**
This could be a sign that there are too many stale secrets and/or config maps.
This could be a sign that there are too many stale secrets and/or configuration maps.
**Where to look for further debugging:**
Look at [the list of Configurations](https://console.cloud.google.com/kubernetes/config?project=gitlab-review-apps)
or `kubectl get secret,cm --sort-by='{.metadata.creationTimestamp}' | grep 'review-'`.
Any secrets or config maps older than 5 days are suspect and should be deleted.
Any secrets or configuration maps older than 5 days are suspect and should be deleted.
**Useful commands:**
......@@ -354,7 +354,7 @@ For the record, the debugging steps to find out this issue were:
1. Web search for exact error message, following rabbit hole to [a relevant Kubernetes bug report](https://github.com/kubernetes/kubernetes/issues/57345)
1. Access the node over SSH via the GCP console (**Compute Engine > VM
instances** then click the "SSH" button for the node where the `dns-gitlab-review-app-external-dns` pod runs)
1. In the node: `systemctl --version` => systemd 232
1. In the node: `systemctl --version` => `systemd 232`
1. Gather some more information:
- `mount | grep kube | wc -l` => e.g. 290
- `systemctl list-units --all | grep -i var-lib-kube | wc -l` => e.g. 142
......
......@@ -198,8 +198,7 @@ There are quite a few different types of nodes, so we only cover some of the
more common ones here.
A full list of all the available nodes and their descriptions can be found in
the [PostgreSQL source file
"plannodes.h"](https://gitlab.com/postgres/postgres/blob/master/src/include/nodes/plannodes.h)
the [PostgreSQL source file `plannodes.h`](https://gitlab.com/postgres/postgres/blob/master/src/include/nodes/plannodes.h)
### Seq Scan
......@@ -686,11 +685,11 @@ Planning time: 0.411 ms
Execution time: 0.113 ms
```
### Chatops
### ChatOps
[GitLab employees can also use our chatops solution, available in Slack using the
[GitLab employees can also use our ChatOps solution, available in Slack using the
`/chatops` slash command](chatops_on_gitlabcom.md).
You can use chatops to get a query plan by running the following:
You can use ChatOps to get a query plan by running the following:
```sql
/chatops run explain SELECT COUNT(*) FROM projects WHERE visibility_level IN (0, 20)
......@@ -719,7 +718,7 @@ with their own clone of the production database.
Joe is available in the
[`#database-lab`](https://gitlab.slack.com/archives/CLJMDRD8C) channel on Slack.
Unlike chatops, it gives you a way to execute DDL statements (like creating indexes and tables) and get query plan not only for `SELECT` but also `UPDATE` and `DELETE`.
Unlike ChatOps, it gives you a way to execute DDL statements (like creating indexes and tables) and get query plans not only for `SELECT` but also `UPDATE` and `DELETE`.
For example, in order to test a new index you can do the following:
......
......@@ -92,7 +92,7 @@ We can identify three major use-cases for an upload:
1. **storage:** if we are uploading for storing a file (for example, artifacts, packages, or discussion attachments). In this case, [direct upload](#direct-upload) is the proper level, as it's the least resource-intensive operation. Additional information can be found on [File Storage in GitLab](file_storage.md).
1. **in-controller/synchronous processing:** if we allow processing **small files** synchronously, using [disk buffered upload](#disk-buffered-upload) may speed up development.
1. **Sidekiq/asynchronous processing:** Async processing must implement [direct upload](#direct-upload), the reason being that it's the only way to support Cloud Native deployments without a shared NFS.
1. **Sidekiq/asynchronous processing:** Asynchronous processing must implement [direct upload](#direct-upload), the reason being that it's the only way to support Cloud Native deployments without a shared NFS.
For more details about the currently broken feature, see [epic &1802](https://gitlab.com/groups/gitlab-org/-/epics/1802).
......@@ -128,7 +128,7 @@ This is the default kind of upload, and it's most expensive in terms of resource
In this case, workhorse is unaware of files being uploaded and acts as a regular proxy.
When a multipart request reaches the rails application, `Rack::Multipart` leaves behind tempfiles in `/tmp` and uses valuable Ruby process time to copy files around.
When a multipart request reaches the Rails application, `Rack::Multipart` leaves behind temporary files in `/tmp` and uses valuable Ruby process time to copy files around.
```mermaid
sequenceDiagram
......
......@@ -49,7 +49,7 @@ Refer to: <https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/utils/mer
## `Override`
Refer to <https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/utils/override.rb>:
Refer to [`override.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/utils/override.rb):
- This utility can help you check if one method would override
another or not. It is the same concept as Java's `@Override` annotation
......@@ -153,7 +153,7 @@ Refer to <https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/utils/stro
## `RequestCache`
Refer to <https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/cache/request_cache.rb>.
Refer to [`request_cache.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/cache/request_cache.rb).
This module provides a simple way to cache values in RequestStore,
and the cache key would be based on the class name, method name,
......
......@@ -47,7 +47,7 @@ There is a chance that your Google Cloud group may already have an image
built. Search the available images before you do the work to build your
own.
Build a Google Cloud image with the above shared runners repo by doing the following:
Build a Google Cloud image with the above shared runners repository by doing the following:
1. Install [Packer](https://www.packer.io/) (tested to work with version 1.5.1).
1. Install Packer Windows Update Provisioner.
......@@ -55,7 +55,7 @@ Build a Google Cloud image with the above shared runners repo by doing the follo
1. Run the command `go build -o packer-provisioner-windows-update` (requires `go` to be installed).
1. Verify `packer-provisioner-windows-update` is in the `PATH` environment variable.
1. Add all [required environment variables](https://gitlab.com/gitlab-org/ci-cd/shared-runners/images/gcp/windows-containers/-/blob/master/packer.json#L2-10)
in the `packer.json` file to your environment (perhaps use [direnv](https://direnv.net/)).
in the `packer.json` file to your environment (perhaps use [`direnv`](https://direnv.net/)).
1. Build the image by running the command: `packer build packer.json`.
## How to use a Windows image in GCP
......@@ -96,7 +96,7 @@ Here are a few tips on GCP and Windows.
### GCP cost savings
To minimize the cost of your GCP VM instance, stop it when you're not using it.
If you do, you'll need to redownload the RDP file from the console as the IP
If you do, you'll need to re-download the RDP file from the console as the IP
address changes every time you stop and start it.
### chocolatey
......@@ -119,13 +119,13 @@ You can install .NET version 3 support with the following `DISM` command:
`DISM /Online /Enable-Feature /FeatureName:NetFx3 /All`
### nix -> Windows cmd tips
### nix -> Windows `cmd` tips
The first tip for using the Windows command shell is to open Powershell and use that instead.
The first tip for using the Windows command shell is to open PowerShell and use that instead.
Start Powershell: `start powershell`.
Start PowerShell: `start powershell`.
Powershell has aliases for all of the following commands so you don't have to learn the native commands:
PowerShell has aliases for all of the following commands so you don't have to learn the native commands:
- `ls` ---> `dir`
- `rm` ---> `del`
......
......@@ -33,7 +33,7 @@ of the box through its main components:
- [Serving](https://github.com/knative/serving): Request-driven compute that can scale to zero.
- [Eventing](https://github.com/knative/eventing): Management and delivery of events.
For more information on Knative, visit the [Knative docs repo](https://github.com/knative/docs).
For more information on Knative, visit the [Knative docs repository](https://github.com/knative/docs).
With GitLab Serverless, you can deploy both functions-as-a-service (FaaS) and serverless applications.
......@@ -61,14 +61,14 @@ To run Knative on GitLab, you will need:
wildcard domain where your applications will be served. Configure your DNS server to use the
external IP address or hostname for that domain.
1. **`.gitlab-ci.yml`:** GitLab uses [Kaniko](https://github.com/GoogleContainerTools/kaniko)
to build the application. We also use [gitlabktl](https://gitlab.com/gitlab-org/gitlabktl)
to build the application. We also use [GitLab Knative tool](https://gitlab.com/gitlab-org/gitlabktl)
CLI to simplify the deployment of services and functions to Knative.
1. **`serverless.yml`** (for [functions only](#deploying-functions)): When using serverless to deploy functions, the `serverless.yml` file
will contain the information for all the functions being hosted in the repository as well as a reference to the
runtime being used.
1. **`Dockerfile`** (for [applications only](#deploying-serverless-applications)): Knative requires a
`Dockerfile` in order to build your applications. It should be included at the root of your
project's repo and expose port `8080`. `Dockerfile` is not require if you plan to build serverless functions
project's repository and expose port `8080`. A `Dockerfile` is not required if you plan to build serverless functions
using our [runtimes](https://gitlab.com/gitlab-org/serverless/runtimes).
1. **Prometheus** (optional): Installing Prometheus allows you to monitor the scale and traffic of your serverless function/application.
See [Installing Applications](../index.md#installing-applications) for more information.
......@@ -97,9 +97,9 @@ The minimum recommended cluster size to run Knative is 3-nodes, 6 vCPUs, and 22.
1. The Ingress is now available at this address and will route incoming requests to the proper service based on the DNS
name in the request. To support this, a wildcard DNS record should be created for the desired domain name. For example,
if your Knative base domain is `knative.info` then you need to create an A record or CNAME record with domain `*.knative.info`
pointing the ip address or hostname of the Ingress.
pointing to the IP address or hostname of the Ingress.
![dns entry](img/dns-entry.png)
![DNS entry](img/dns-entry.png)
NOTE: **Note:**
You can deploy either [functions](#deploying-functions) or [serverless applications](#deploying-serverless-applications)
......@@ -318,7 +318,7 @@ Explanation of the fields used above:
|-----------|-------------|
| `name` | Indicates which provider is used to execute the `serverless.yml` file. In this case, the TriggerMesh middleware. |
| `envs` | Includes the environment variables to be passed as part of function execution for **all** functions in the file, where `FOO` is the variable name and `BAR` are the variable contents. You may replace this with your own variables. |
| `secrets` | Includes the contents of the Kubernetes secret as environment variables accessible to be passed as part of function execution for **all** functions in the file. The secrets are expected in ini format. |
| `secrets` | Includes the contents of the Kubernetes secret as environment variables accessible to be passed as part of function execution for **all** functions in the file. The secrets are expected in INI format. |
### `functions`
......@@ -332,7 +332,7 @@ subsequent lines contain the function attributes.
| `runtime` (optional)| The runtime to be used to execute the function. This can be a runtime alias (see [Runtime aliases](#runtime-aliases)), or it can be a full URL to a custom runtime repository. When the runtime is not specified, we assume that `Dockerfile` is present in the function directory specified by `source`. |
| `description` | A short description of the function. |
| `envs` | Sets an environment variable for the specific function only. |
| `secrets` | Includes the contents of the Kubernetes secret as environment variables accessible to be passed as part of function execution for the specific function only. The secrets are expected in ini format. |
| `secrets` | Includes the contents of the Kubernetes secret as environment variables accessible to be passed as part of function execution for the specific function only. The secrets are expected in INI format. |
### Deployment
......@@ -384,7 +384,7 @@ The sample function can now be triggered from any HTTP client using a simple `PO
http://functions-echo.functions-1.functions.example.com/
```
1. Using a web-based tool (ie. postman, restlet, etc)
1. Using a web-based tool (such as Postman or Restlet)
![function execution](img/function-execution.png)
......
......@@ -82,7 +82,7 @@ For more information regarding the SubGit configuration options, refer to
### Initial translation
Now that SubGit has configured the Git/SVN repos, run `subgit` to perform the
Now that SubGit has configured the Git/SVN repositories, run `subgit` to perform the
initial translation of existing SVN revisions into the Git repository:
```shell
......
......@@ -4,7 +4,7 @@ You can configure GitLab to send notifications to a Webex Teams space.
## Create a webhook for the space
1. Go to the [Incoming Webooks app page](https://apphub.webex.com/teams/applications/incoming-webhooks-cisco-systems).
1. Go to the [Incoming Webhooks app page](https://apphub.webex.com/teams/applications/incoming-webhooks-cisco-systems).
1. Click **Connect** and log in to Webex Teams, if required.
1. Enter a name for the webhook and select the space that will receive the notifications.
1. Click **ADD**.
......
......@@ -68,7 +68,7 @@ Navigate to the **Design Management** page from any issue by clicking the **Desi
To upload design images, click the **Upload Designs** button and select images to upload.
[Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/34353) in [GitLab Premium](https://about.gitlab.com/pricing/) 12.9,
you can drag and drop designs onto the dedicated dropzone to upload them.
you can drag and drop designs onto the dedicated drop zone to upload them.
![Drag and drop design uploads](img/design_drag_and_drop_uploads_v12_9.png)
......
......@@ -157,7 +157,7 @@ context, such as past work, dependencies, or duplicates.
### Crosslinking issues
You can [crosslink issues](crosslinking_issues.md) by referencing an issue from another
You can [cross-link issues](crosslinking_issues.md) by referencing an issue from another
issue or merge request by including its URL or ID. The referenced issue displays a
message in the Activity stream about the reference, with a link to the other issue or MR.
......
......@@ -112,7 +112,7 @@ in a Merge Request. To do so, click the **...** button in the gutter of the Merg
If you've set up [GitLab CI/CD](../../../ci/README.md) in your project,
you will be able to see:
- Both pre and post-merge pipelines and the environment information if any.
- Both pre-merge and post-merge pipelines and the environment information if any.
- Which deployments are in progress.
If there's an [environment](../../../ci/environments/index.md) and the application is
......
......@@ -52,5 +52,5 @@ You can take some **optional** further steps:
![Change repo's path](../img/change_path_v12_10.png)
- Now go to your SSG's config file and change the [base URL](../getting_started_part_one.md#urls-and-baseurls)
- Now go to your SSG's configuration file and change the [base URL](../getting_started_part_one.md#urls-and-baseurls)
from `"project-name"` to `""`. The project name setting varies by SSG and may not be in the config file.
......@@ -55,7 +55,7 @@ To update a GitLab Pages website:
| Document | Description |
| -------- | ----------- |
| [GitLab Pages domain names, URLs, and baseurls](getting_started_part_one.md) | Learn about GitLab Pages default domains. |
| [GitLab Pages domain names, URLs, and base URLs](getting_started_part_one.md) | Learn about GitLab Pages default domains. |
| [Explore GitLab Pages](introduction.md) | Requirements, technical aspects, specific GitLab CI/CD configuration options, Access Control, custom 404 pages, limitations, FAQ. |
| [Custom domains and SSL/TLS Certificates](custom_domains_ssl_tls_certification/index.md) | Custom domains and subdomains, DNS records, and SSL/TLS certificates. |
| [Let's Encrypt integration](custom_domains_ssl_tls_certification/lets_encrypt_integration.md) | Secure your Pages sites with Let's Encrypt certificates, which are automatically obtained and renewed by GitLab. |
......
......@@ -25,7 +25,7 @@ In brief, this is what you need to upload your website in GitLab Pages:
1. Domain of the instance: domain name that is used for GitLab Pages
(ask your administrator).
1. GitLab CI/CD: a `.gitlab-ci.yml` file with a specific job named [`pages`](../../../ci/yaml/README.md#pages) in the root directory of your repository.
1. A directory called `public` in your site's repo containing the content
1. A directory called `public` in your site's repository containing the content
to be published.
1. GitLab Runner enabled for the project.
......@@ -87,7 +87,7 @@ will be deleted.
When using Pages under the general domain of a GitLab instance (`*.example.io`),
you _cannot_ use HTTPS with sub-subdomains. That means that if your
username/groupname contains a dot, for example `foo.bar`, the domain
username or group name contains a dot, for example `foo.bar`, the domain
`https://foo.bar.example.io` will _not_ work. This is a limitation of the
[HTTP Over TLS protocol](https://tools.ietf.org/html/rfc2818#section-3.1). HTTP pages will continue to work provided you
don't redirect HTTP to HTTPS.
......
......@@ -105,7 +105,7 @@ operating systems the steps might be slightly different. Follow the
therefore, it needs to be part of the website content under the
repository's [`public`](index.md#how-it-works) folder.
1. Add, commit, and push the file into your repo in GitLab. Once the pipeline
1. Add, commit, and push the file into your repository in GitLab. Once the pipeline
passes, press **Enter** on your terminal to continue issuing your
certificate. Certbot will then prompt you with the following message:
......
......@@ -46,7 +46,7 @@ You can use [repository mirroring](repository_mirroring.md) to keep your fork sy
The main difference is that with repository mirroring your remote fork will be automatically kept up-to-date.
Without mirroring, to work locally you'll have to use `git pull` to update your local repo
Without mirroring, to work locally you'll have to use `git pull` to update your local repository
with the upstream project, then push the changes back to your fork to update it.
CAUTION: **Caution:**
......
......@@ -27,7 +27,7 @@ that you [connect with GitLab via SSH](../../../ssh/README.md).
## Files
Use a repository to store your files in GitLab. From [GitLab 12.10 onwards](https://gitlab.com/gitlab-org/gitlab/-/issues/33806),
Use a repository to store your files in GitLab. In [GitLab 12.10 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/33806),
you'll see an icon in the repository's file tree next to each file name,
according to its extension:
......@@ -84,9 +84,9 @@ according to the markup language.
| [AsciiDoc](../../asciidoc.md) | `adoc`, `ad`, `asciidoc` |
| [Textile](https://textile-lang.com/) | `textile` |
| [rdoc](http://rdoc.sourceforge.net/doc/index.html) | `rdoc` |
| [Orgmode](https://orgmode.org/) | `org` |
| [Org mode](https://orgmode.org/) | `org` |
| [creole](http://www.wikicreole.org/) | `creole` |
| [Mediawiki](https://www.mediawiki.org/wiki/MediaWiki) | `wiki`, `mediawiki` |
| [MediaWiki](https://www.mediawiki.org/wiki/MediaWiki) | `wiki`, `mediawiki` |
### Repository README and index files
......@@ -219,7 +219,9 @@ vendored code, and most markup languages are excluded. This behavior can be
adjusted by overriding the default. For example, to enable `.proto` files to be
detected, add the following to `.gitattributes` in the root of your repository.
> *.proto linguist-detectable=true
```plaintext
*.proto linguist-detectable=true
```
## Locked files **(PREMIUM)**
......
......@@ -27,8 +27,7 @@ project](../settings/import_export.html#exporting-a-project-and-its-data).
To make cloning your project faster, rewrite branches and tags to remove
unwanted files.
1. [Install `git
filter-repo`](https://github.com/newren/git-filter-repo/blob/master/INSTALL.md)
1. [Install `git filter-repo`](https://github.com/newren/git-filter-repo/blob/master/INSTALL.md)
using a supported package manager, or from source.
1. Clone a fresh copy of the repository using `--bare`.
......@@ -52,8 +51,7 @@ unwanted files.
git filter-repo --path path/to/big/file.m4v --invert-paths
```
See the [`git filter-repo`
documentation](https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html#EXAMPLES)
See the [`git filter-repo` documentation](https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html#EXAMPLES)
for more examples, and the complete documentation.
1. Force push your changes to overwrite all branches on GitLab.
......@@ -81,8 +79,7 @@ unwanted files.
To reduce the size of your repository in GitLab, you will need to remove GitLab
internal refs that reference commits containing large files. Before completing
these steps, first [purged files from your repository
history](#purging-files-from-your-repository-history).
these steps, first [purge files from your repository history](#purging-files-from-your-repository-history).
As well as branches and tags, which are a type of Git ref, GitLab automatically
creates other refs. These refs prevent dead links to commits, or missing diffs
......@@ -97,12 +94,10 @@ fetching faster. The hidden refs to prevent commits with discussion from being
deleted (`refs/keep-around/*`) cannot be fetched at all. These refs can,
however, be accessed from the Git bundle inside the project export.
1. [Install `git
filter-repo`](https://github.com/newren/git-filter-repo/blob/master/INSTALL.md)
1. [Install `git filter-repo`](https://github.com/newren/git-filter-repo/blob/master/INSTALL.md)
using a supported package manager, or from source.
1. Generate a fresh [export the
project](../settings/import_export.html#exporting-a-project-and-its-data) and
1. Generate a fresh [export from the project](../settings/import_export.md#exporting-a-project-and-its-data) and
download it to your computer.
1. Decompress the backup using `tar`
......@@ -111,8 +106,7 @@ however, be accessed from the Git bundle inside the project export.
tar xzf project-backup.tar.gz
```
This will contain a `project.bundle` file, which was created by [`git
bundle`](https://git-scm.com/docs/git-bundle)
This will contain a `project.bundle` file, which was created by [`git bundle`](https://git-scm.com/docs/git-bundle).
1. Clone a fresh copy of the repository from the bundle.
......@@ -142,8 +136,7 @@ however, be accessed from the Git bundle inside the project export.
git filter-repo --path path/to/big/file.m4v --invert-paths
```
See the [`git filter-repo`
documentation](https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html#EXAMPLES)
See the [`git filter-repo` documentation](https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html#EXAMPLES)
for more examples, and the complete documentation.
1. After running `git filter-repo`, the header and unchanged commits need to be
......
......@@ -65,11 +65,11 @@ git config --global gpg.format x509
### Windows and macOS
Install [smimesign](https://github.com/github/smimesign) by downloading the
Install [S/MIME Sign](https://github.com/github/smimesign) by downloading the
installer or via `brew install smimesign` on macOS.
Get the ID of your certificate with `smimesign --list-keys` and set your
signingkey `git config --global user.signingkey ID`, then configure X.509:
signing key with `git config --global user.signingkey ID`, then configure X.509:
```shell
git config --global gpg.x509.program smimesign
......