Commit 6600d8db authored by Craig Norris

Merge branch 'docs-aqualls-20200610-spelling' into 'master'

Docs: more fixes to usage and spelling

See merge request gitlab-org/gitlab!34304
parents 17fbf0eb 5e722681
......@@ -42,6 +42,7 @@ backtrace
backtraced
backtraces
backtracing
badging
Bamboo
Bitbucket
blockquote
......@@ -121,6 +122,8 @@ Ecto
Elasticsearch
enablement
enqueued
enum
enums
ETag
Excon
expirable
......@@ -139,6 +142,7 @@ Forgerock
Fugit
Gantt
Gemnasium
Gemojione
gettext
Git
Gitaly
......@@ -148,6 +152,7 @@ GitLab
gitlabsos
Gitleaks
Gitter
globals
Gmail
Google
Gosec
......@@ -297,6 +302,7 @@ preloading
preloads
prepend
prepended
prepending
prepends
Pritaly
profiler
......
......@@ -594,7 +594,7 @@ There are two ways to determine the value of `DOCKER_AUTH_CONFIG`:
```
- **Second way -** In some setups, it's possible that Docker client
will use the available system keystore to store the result of `docker
will use the available system key store to store the result of `docker
login`. In that case, it's impossible to read `~/.docker/config.json`,
so you will need to prepare the required base64-encoded version of
`${username}:${password}` and create the Docker configuration JSON manually.
......@@ -712,7 +712,7 @@ To configure credentials store, follow these steps:
```
- Or, if you are running self-managed Runners, add the above JSON to
`${GITLAB_RUNNER_HOME}/.docker/config.json`. GitLab Runner will read this config file
`${GITLAB_RUNNER_HOME}/.docker/config.json`. GitLab Runner will read this configuration file
and will use the needed helper for this specific repository.
NOTE: **Note:** `credsStore` is used to access ALL the registries.
......@@ -761,7 +761,7 @@ To configure access for `aws_account_id.dkr.ecr.region.amazonaws.com`, follow th
- Or, if you are running self-managed Runners,
add the above JSON to `${GITLAB_RUNNER_HOME}/.docker/config.json`.
GitLab Runner will read this config file and will use the needed helper for this
GitLab Runner will read this configuration file and will use the needed helper for this
specific repository.
1. You can now use any private image from `aws_account_id.dkr.ecr.region.amazonaws.com` defined in
......
......@@ -109,7 +109,7 @@ parameter in `.gitlab-ci.yml` to use the custom location instead of the default
Now it's time we set up [GitLab CI/CD](https://about.gitlab.com/stages-devops-lifecycle/continuous-integration/) to automatically build, test and deploy the dependency!
GitLab CI/CD uses a file in the root of the repo, named `.gitlab-ci.yml`, to read the definitions for jobs
GitLab CI/CD uses a file in the root of the repository, named `.gitlab-ci.yml`, to read the definitions for jobs
that will be executed by the configured GitLab Runners. You can read more about this file in the [GitLab Documentation](../../yaml/README.md).
First of all, remember to set up variables for your deployment. Navigate to your project's **Settings > CI/CD > Environment variables** page
......@@ -119,7 +119,7 @@ and add the following ones (replace them with your current values, of course):
- **MAVEN_REPO_USER**: `gitlab` (your Artifactory username)
- **MAVEN_REPO_PASS**: `AKCp2WXr3G61Xjz1PLmYa3arm3yfBozPxSta4taP3SeNu2HPXYa7FhNYosnndFNNgoEds8BCS` (your Artifactory Encrypted Password)
Now it's time to define jobs in `.gitlab-ci.yml` and push it to the repo:
Now it's time to define jobs in `.gitlab-ci.yml` and push it to the repository:
```yaml
image: maven:latest
......@@ -154,7 +154,7 @@ deploy:
GitLab Runner will use the latest [Maven Docker image](https://hub.docker.com/_/maven/), which already contains all the tools and the dependencies you need to manage the project,
in order to run the jobs.
Environment variables are set to instruct Maven to use the `homedir` of the repo instead of the user's home when searching for configuration and dependencies.
Environment variables are set to instruct Maven to use the `homedir` of the repository instead of the user's home when searching for configuration and dependencies.
Caching the `.m2/repository folder` (where all the Maven files are stored), and the `target` folder (where our application will be created), is useful for speeding up the process
by running all Maven phases in a sequential order, therefore, executing `mvn test` will automatically run `mvn compile` if necessary.
......@@ -164,7 +164,7 @@ Both `build` and `test` jobs leverage the `mvn` command to compile the applicati
Deploy to Artifactory is done as defined by the variables we have just set up.
The deployment occurs only if we're pushing or merging to `master` branch, so that the development versions are tested but not published.
Done! Now you have all the changes in the GitLab repo, and a pipeline has already been started for this commit. In the **Pipelines** tab you can see what's happening.
Done! Now you have all the changes in the GitLab repository, and a pipeline has already been started for this commit. In the **Pipelines** tab you can see what's happening.
If the deployment has been successful, the deploy job log will output:
```plaintext
......@@ -177,7 +177,7 @@ If the deployment has been successful, the deploy job log will output:
>**Note**:
the `mvn` command downloads a lot of files from the internet, so you'll see a lot of extra activity in the log the first time you run it.
Yay! You did it! Checking in Artifactory will confirm that you have a new artifact available in the `libs-release-local` repo.
Yay! You did it! Checking in Artifactory will confirm that you have a new artifact available in the `libs-release-local` repository.
## Create the main Maven application
......@@ -228,7 +228,7 @@ Here is how you can get the content of the file directly from Artifactory:
1. Click on **Generate Maven Settings**
1. Click on **Generate Settings**
1. Copy to clipboard the configuration file
1. Save the file as `.m2/settings.xml` in your repo
1. Save the file as `.m2/settings.xml` in your repository
Now you are ready to use the Artifactory repository to resolve dependencies and use `simple-maven-dep` in your main application!
......@@ -239,7 +239,7 @@ You need a last step to have everything in place: configure the `.gitlab-ci.yml`
You want to leverage [GitLab CI/CD](https://about.gitlab.com/stages-devops-lifecycle/continuous-integration/) to automatically build, test and run your awesome application,
and see if you can get the greeting as expected!
All you need to do is to add the following `.gitlab-ci.yml` to the repo:
All you need to do is to add the following `.gitlab-ci.yml` to the repository:
```yaml
image: maven:latest
......
......@@ -56,7 +56,7 @@ To use different provider take a look at long list of [Supported Providers](http
## Using Dpl with Docker
In most cases, you will have configured [GitLab Runner](https://docs.gitlab.com/runner/) to use your server's shell commands.
This means that all commands are run in the context of local user (e.g. gitlab_runner or gitlab_ci_multi_runner).
This means that all commands are run in the context of local user (e.g. `gitlab_runner` or `gitlab_ci_multi_runner`).
It also means that most probably in your Docker container you don't have the Ruby runtime installed.
You will have to install it:
......
......@@ -47,7 +47,7 @@ All these operations will put all files into a `build` folder, which is ready to
## How to transfer files to a live server
You have multiple options: rsync, scp, sftp, and so on. For now, we will use scp.
You have multiple options: rsync, SCP, SFTP, and so on. For now, we will use SCP.
To make this work, you need to add a GitLab CI/CD Variable (accessible on `gitlab.example/your-project-name/variables`). That variable will be called `STAGING_PRIVATE_KEY` and it's the **private** SSH key of your server.
......@@ -123,7 +123,7 @@ Therefore, for a production environment we use additional steps to ensure that a
Since this was a WordPress project, I gave real life code snippets. Some further ideas you can pursue:
- Having a slightly different script for `master` branch will allow you to deploy to a production server from that branch and to a stage server from any other branches.
- Instead of pushing it live, you can push it to WordPress official repo (with creating a SVN commit, etc.).
- Instead of pushing it live, you can push it to WordPress official repository (with creating a SVN commit, etc.).
- You could generate i18n text domains on the fly.
---
......
......@@ -65,7 +65,7 @@ docker-php-ext-install pdo_mysql
```
You might wonder what `docker-php-ext-install` is. In short, it is a script
provided by the official php Docker image that you can use to easily install
provided by the official PHP Docker image that you can use to easily install
extensions. For more information read the documentation at
<https://hub.docker.com/_/php>.
......@@ -174,7 +174,7 @@ Finally, push to GitLab and let the tests begin!
### Test against different PHP versions in Shell builds
The [phpenv](https://github.com/phpenv/phpenv) project allows you to easily manage different versions of PHP
each with its own config. This is especially useful when testing PHP projects
each with its own configuration. This is especially useful when testing PHP projects
with the Shell executor.
You will have to install it on your build machine under the `gitlab-runner`
......
......@@ -44,7 +44,7 @@ You can add a command to your `.gitlab-ci.yml` file to
| `CI_COMMIT_TITLE` | 10.8 | all | The title of the commit - the full first line of the message |
| `CI_CONCURRENT_ID` | all | 11.10 | Unique ID of build execution within a single executor. |
| `CI_CONCURRENT_PROJECT_ID` | all | 11.10 | Unique ID of build execution within a single executor and project. |
| `CI_CONFIG_PATH` | 9.4 | 0.5 | The path to CI config file. Defaults to `.gitlab-ci.yml` |
| `CI_CONFIG_PATH` | 9.4 | 0.5 | The path to CI configuration file. Defaults to `.gitlab-ci.yml` |
| `CI_DEBUG_TRACE` | all | 1.7 | Whether [debug logging (tracing)](README.md#debug-logging) is enabled |
| `CI_DEFAULT_BRANCH` | 12.4 | all | The name of the default branch for the project. |
| `CI_DEPLOY_PASSWORD` | 10.8 | all | Authentication password of the [GitLab Deploy Token](../../user/project/deploy_tokens/index.md#gitlab-deploy-token), only present if the Project has one related. |
......@@ -97,7 +97,7 @@ You can add a command to your `.gitlab-ci.yml` file to
| `CI_PROJECT_DIR` | all | all | The full path where the repository is cloned and where the job is run. If the GitLab Runner `builds_dir` parameter is set, this variable is set relative to the value of `builds_dir`. For more information, see [Advanced configuration](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section) for GitLab Runner. |
| `CI_PROJECT_ID` | all | all | The unique ID of the current project that GitLab CI/CD uses internally |
| `CI_PROJECT_NAME` | 8.10 | 0.5 | The name of the directory for the project that is currently being built. For example, if the project URL is `gitlab.example.com/group-name/project-1`, the `CI_PROJECT_NAME` would be `project-1`. |
| `CI_PROJECT_NAMESPACE` | 8.10 | 0.5 | The project namespace (username or groupname) that is currently being built |
| `CI_PROJECT_NAMESPACE` | 8.10 | 0.5 | The project namespace (username or group name) that is currently being built |
| `CI_PROJECT_PATH` | 8.10 | 0.5 | The namespace with project name |
| `CI_PROJECT_PATH_SLUG` | 9.3 | all | `$CI_PROJECT_PATH` lowercased and with everything except `0-9` and `a-z` replaced with `-`. Use in URLs and domain names. |
| `CI_PROJECT_REPOSITORY_LANGUAGES` | 12.3 | all | Comma-separated, lowercased list of the languages used in the repository (e.g. `ruby,javascript,html,css`) |
......
......@@ -120,7 +120,7 @@ For instance:
In order to validate some parameters in the API request, we validate them
before sending them further (say Gitaly). The following are the
[custom validators](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/api/validations/validators),
[custom validators](https://GitLab.com/gitlab-org/gitlab/-/tree/master/lib/api/validations/validators),
which we have added so far and how to use them. We also wrote a
guide on how you can add a new custom validator.
......
......@@ -83,7 +83,7 @@ module EE
end
```
This looks working as a workaround, however, this approach has some donwside that:
This looks working as a workaround, however, this approach has some downsides that:
- Features could move from EE to FOSS or vice versa. Therefore, the offset might be mixed between FOSS and EE in the future.
e.g. When you move `activity_limit_exceeded` to FOSS, you'll see `{ unknown_failure: 0, config_error: 1, activity_limit_exceeded: 1_000 }`.
......
......@@ -27,7 +27,7 @@ process boundaries, the correlation ID is injected into the outgoing request. Th
the propagation of the correlation ID to each downstream subsystem.
Correlation IDs are normally generated in the Rails application in response to
certain webrequests. Some user facing systems don't generate correlation IDs in
certain web requests. Some user facing systems don't generate correlation IDs in
response to user requests (for example, Git pushes over SSH).
### Developer guidelines for working with correlation IDs
......
......@@ -356,7 +356,7 @@ files.
```
This also allows the nav to be displayed on other
highest-level dirs (`/omnibus/`, `/runner/`, etc),
highest-level directories (`/omnibus/`, `/runner/`, etc),
linking them back to `/ee/`.
The same logic is applied to all sections (`sec[:section_url]`),
......
......@@ -107,13 +107,13 @@ The pipeline in the `gitlab-docs` project:
Once a week on Mondays, a scheduled pipeline runs and rebuilds the Docker images
used in various pipeline jobs, like `docs-lint`. The Docker image configuration files are
located at <https://gitlab.com/gitlab-org/gitlab-docs/-/tree/master/dockerfiles>.
located in the [Dockerfiles directory](https://gitlab.com/gitlab-org/gitlab-docs/-/tree/master/dockerfiles).
If you need to rebuild the Docker images immediately (must have maintainer level permissions):
CAUTION: **Caution**
If you change the dockerfile configuration and rebuild the images, you can break the master
pipeline in the main `gitlab` repo as well as in `gitlab-docs`. Create an image with
pipeline in the main `gitlab` repository as well as in `gitlab-docs`. Create an image with
a different name first and test it to ensure you do not break the pipelines.
1. In [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs), go to **{rocket}** **CI / CD > Pipelines**.
......@@ -207,22 +207,22 @@ If you don't specify `editor:`, the simple one is used by default.
## Algolia search engine
The docs site uses [Algolia docsearch](https://community.algolia.com/docsearch/)
The docs site uses [Algolia DocSearch](https://community.algolia.com/docsearch/)
for its search function. This is how it works:
1. GitLab is a member of the [docsearch program](https://community.algolia.com/docsearch/#join-docsearch-program),
1. GitLab is a member of the [DocSearch program](https://community.algolia.com/docsearch/#join-docsearch-program),
which is the free tier of [Algolia](https://www.algolia.com/).
1. Algolia hosts a [DocSearch configuration](https://github.com/algolia/docsearch-configs/blob/master/configs/gitlab.json)
for the GitLab docs site, and we've worked together to refine it.
1. That [config](https://community.algolia.com/docsearch/config-file.html) is
1. That [configuration](https://community.algolia.com/docsearch/config-file.html) is
parsed by their [crawler](https://community.algolia.com/docsearch/crawler-overview.html)
every 24h and [stores](https://community.algolia.com/docsearch/inside-the-engine.html)
the [docsearch index](https://community.algolia.com/docsearch/how-do-we-build-an-index.html)
the [DocSearch index](https://community.algolia.com/docsearch/how-do-we-build-an-index.html)
on [Algolia's servers](https://community.algolia.com/docsearch/faq.html#where-is-my-data-hosted%3F).
1. On the docs side, we use a [docsearch layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/layouts/docsearch.html) which
1. On the docs side, we use a [DocSearch layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/layouts/docsearch.html) which
is present on pretty much every page except <https://docs.gitlab.com/search/>,
which uses its [own layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/master/layouts/instantsearch.html). In those layouts,
there's a JavaScript snippet which initiates docsearch by using an API key
there's a JavaScript snippet which initiates DocSearch by using an API key
and an index name (`gitlab`) that are needed for Algolia to show the results.
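For illustration only, a classic DocSearch initialization of that kind looks roughly like the snippet below; the API key and input selector are placeholders, not the production values:

```javascript
import docsearch from 'docsearch.js';

docsearch({
  apiKey: 'PUBLIC_SEARCH_ONLY_KEY', // placeholder, not a real key
  indexName: 'gitlab',
  inputSelector: '#search-input',   // placeholder selector for the search field
});
```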
NOTE: **For GitLab employees:**
......
......@@ -34,7 +34,7 @@ For additional details on each, see the [template for new docs](#template-for-ne
below.
Note that you can include additional subsections, as appropriate, such as 'How it Works', 'Architecture',
and other logical divisions such as pre- and post-deployment steps.
and other logical divisions such as pre-deployment and post-deployment steps.
## Template for new docs
......
......@@ -24,19 +24,19 @@ See the [Elasticsearch GDK setup instructions](https://gitlab.com/gitlab-org/git
- `gitlab:elastic:test:index_size`: Tells you how much space the current index is using, as well as how many documents are in the index.
- `gitlab:elastic:test:index_size_change`: Outputs index size, reindexes, and outputs index size again. Useful when testing improvements to indexing size.
Additionally, if you need large repos or multiple forks for testing, please consider [following these instructions](rake_tasks.md#extra-project-seed-options)
Additionally, if you need large repositories or multiple forks for testing, please consider [following these instructions](rake_tasks.md#extra-project-seed-options)
## How does it work?
The Elasticsearch integration depends on an external indexer. We ship an [indexer written in Go](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer). The user must trigger the initial indexing via a Rake task but, after this is done, GitLab itself will trigger reindexing when required via `after_` callbacks on create, update, and destroy that are inherited from [/ee/app/models/concerns/elastic/application_versioned_search.rb](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/models/concerns/elastic/application_versioned_search.rb).
The Elasticsearch integration depends on an external indexer. We ship an [indexer written in Go](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer). The user must trigger the initial indexing via a Rake task but, after this is done, GitLab itself will trigger reindexing when required via `after_` callbacks on create, update, and destroy that are inherited from [`/ee/app/models/concerns/elastic/application_versioned_search.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/models/concerns/elastic/application_versioned_search.rb).
After initial indexing is complete, create, update, and delete operations for all models except projects (see [#207494](https://gitlab.com/gitlab-org/gitlab/-/issues/207494)) are tracked in a Redis [`ZSET`](https://redis.io/topics/data-types#sorted-sets). A regular `sidekiq-cron` `ElasticIndexBulkCronWorker` processes this queue, updating many Elasticsearch documents at a time with the [Bulk Request API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html).
Search queries are generated by the concerns found in [ee/app/models/concerns/elastic](https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/models/concerns/elastic). These concerns are also in charge of access control, and have been a historic source of security bugs so please pay close attention to them!
Search queries are generated by the concerns found in [`ee/app/models/concerns/elastic`](https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/models/concerns/elastic). These concerns are also in charge of access control, and have been a historic source of security bugs so please pay close attention to them!
## Existing Analyzers/Tokenizers/Filters
These are all defined in [ee/lib/elastic/latest/config.rb](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/elastic/latest/config.rb)
These are all defined in [`ee/lib/elastic/latest/config.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/elastic/latest/config.rb)
### Analyzers
......@@ -71,7 +71,7 @@ Not directly used for indexing, but rather used to transform a search input. Use
#### `sha_tokenizer`
This is a custom tokenizer that uses the [`edgeNGram` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-edgengram-tokenizer.html) to allow SHAs to be searcheable by any sub-set of it (minimum of 5 chars).
This is a custom tokenizer that uses the [`edgeNGram` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-edgengram-tokenizer.html) to allow SHAs to be searchable by any sub-set of it (minimum of 5 chars).
Example:
......@@ -149,7 +149,7 @@ These proxy objects would talk to Elasticsearch server directly (see top half of
![Elasticsearch Architecture](img/elasticsearch_architecture.svg)
In the planned new design, each model would have a pair of corresponding subclassed proxy objects, in which model-specific logic is located. For example, `Snippet` would have `SnippetClassProxy` and `SnippetInstanceProxy` (being subclass of `Elasticsearch::Model::Proxy::ClassMethodsProxy` and `Elasticsearch::Model::Proxy::InstanceMethodsProxy`, respectively).
In the planned new design, each model would have a pair of corresponding sub-classed proxy objects, in which model-specific logic is located. For example, `Snippet` would have `SnippetClassProxy` and `SnippetInstanceProxy` (being subclass of `Elasticsearch::Model::Proxy::ClassMethodsProxy` and `Elasticsearch::Model::Proxy::InstanceMethodsProxy`, respectively).
`__elasticsearch__` would represent another layer of proxy object, keeping track of multiple actual proxy objects. It would forward method calls to the appropriate index. For example:
......
# Axios
We use [axios](https://github.com/axios/axios) to communicate with the server in Vue applications and most new code.
We use [Axios](https://github.com/axios/axios) to communicate with the server in Vue applications and most new code.
In order to guarantee all defaults are set you *should not use `axios` directly*, you should import `axios` from `axios_utils`.
In order to guarantee all defaults are set you *should not use Axios directly*, you should import Axios from `axios_utils`.
## CSRF token
All our request require a CSRF token.
To guarantee this token is set, we are importing [axios](https://github.com/axios/axios), setting the token, and exporting `axios` .
All our requests require a CSRF token.
To guarantee this token is set, we are importing [Axios](https://github.com/axios/axios), setting the token, and exporting `axios` .
This exported module should be used instead of directly using `axios` to ensure the token is set.
This exported module should be used instead of directly using Axios to ensure the token is set.
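As a minimal sketch of that pattern (the `~/lib/utils/axios_utils` path is an assumption based on the surrounding text), a request made through the wrapper might look like:

```javascript
// Import the shared wrapper, not the axios package, so the CSRF token default applies.
import axios from '~/lib/utils/axios_utils';

axios
  .get('/api/v4/projects')
  .then(({ data }) => {
    // use the response payload
  })
  .catch(() => {
    // handle the failure
  });
```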
## Usage
......@@ -30,7 +30,7 @@ This exported module should be used instead of directly using `axios` to ensure
});
```
## Mock axios response in tests
## Mock Axios response in tests
To help us mock the responses we are using [axios-mock-adapter](https://github.com/ctimmerm/axios-mock-adapter).
......@@ -41,7 +41,7 @@ Advantages over [`spyOn()`](https://jasmine.github.io/api/edge/global.html#spyOn
- simple API to test error cases
- provides `replyOnce()` to allow for different responses
We have also decided against using [axios interceptors](https://github.com/axios/axios#interceptors) because they are not suitable for mocking.
We have also decided against using [Axios interceptors](https://github.com/axios/axios#interceptors) because they are not suitable for mocking.
### Example
......@@ -67,7 +67,7 @@ We have also decided against using [axios interceptors](https://github.com/axios
});
```
### Mock poll requests in tests with axios
### Mock poll requests in tests with Axios
Because polling function requires a header object, we need to always include an object as the third argument:
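A hedged sketch of such a call with [axios-mock-adapter](https://github.com/ctimmerm/axios-mock-adapter); the endpoint, payload, and import path are placeholders:

```javascript
import MockAdapter from 'axios-mock-adapter';
import axios from '~/lib/utils/axios_utils'; // assumed path to the shared wrapper

const mock = new MockAdapter(axios);

// reply(status, data, headers): the third argument is the headers object the
// polling utility reads, so pass one even when it is empty.
mock.onGet('/endpoint').reply(200, { results: [] }, {});
```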
......
......@@ -6,7 +6,7 @@
Add the `Ajax` object to the plugins array of a `DropLab.prototype.init` or `DropLab.prototype.addHook` call.
`Ajax` requires 2 config values, the `endpoint` and `method`.
`Ajax` requires 2 configuration values, the `endpoint` and `method`.
- `endpoint` should be a URL to the request endpoint.
- `method` should be `setData` or `addData`.
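For illustration, a hook set up with the `Ajax` plugin might look like the sketch below; the import paths and element IDs are assumptions, not a fixed API:

```javascript
import DropLab from '~/droplab/drop_lab';  // assumed location of DropLab
import Ajax from '~/droplab/plugins/ajax'; // assumed location of the plugin

const droplab = new DropLab();

const trigger = document.getElementById('trigger');
const list = document.getElementById('list');

droplab.init(trigger, list, [Ajax], {
  Ajax: {
    endpoint: '/some-endpoint', // URL the plugin requests for list data
    method: 'setData',          // 'setData' replaces the list data, 'addData' appends
  },
});
```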
......
......@@ -7,7 +7,7 @@ to the dropdown using a simple fuzzy string search of an input value.
Add the `Filter` object to the plugins array of a `DropLab.prototype.init` or `DropLab.prototype.addHook` call.
- `Filter` requires a config value for `template`.
- `Filter` requires a configuration value for `template`.
- `template` should be the key of the objects within your data array that you want to compare
to the user input string, for filtering.
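A comparable sketch for `Filter`, again with assumed import paths and element IDs, and with `text` standing in for whatever key your data objects actually use:

```javascript
import DropLab from '~/droplab/drop_lab';      // assumed location of DropLab
import Filter from '~/droplab/plugins/filter'; // assumed location of the plugin

const droplab = new DropLab();

const trigger = document.getElementById('trigger');
const list = document.getElementById('list');

droplab.init(trigger, list, [Filter], {
  Filter: {
    // Key on each data object whose value is fuzzy-matched against the input.
    template: 'text',
  },
});
```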
......
......@@ -6,12 +6,12 @@
Add the `InputSetter` object to the plugins array of a `DropLab.prototype.init` or `DropLab.prototype.addHook` call.
- `InputSetter` requires a config value for `input` and `valueAttribute`.
- `InputSetter` requires a configuration value for `input` and `valueAttribute`.
- `input` should be the DOM element that you want to manipulate.
- `valueAttribute` should be a string that is the name of an attribute on your list items that is used to get the value
to update the `input` element with.
You can also set the `InputSetter` config to an array of objects, which will allow you to update multiple elements.
You can also set the `InputSetter` configuration to an array of objects, which will allow you to update multiple elements.
```html
<input id="input" value="">
......
# Emojis
GitLab supports native unicode emojis and fallsback to image-based emojis selectively
GitLab supports native Unicode emojis and falls back to image-based emojis selectively
when your platform does not support it.
## How to update Emojis
......@@ -21,7 +21,7 @@ when your platform does not support it.
1. Ensure you see new individual images copied into `app/assets/images/emoji/`
1. Ensure you can see the new emojis and their aliases in the GFM Autocomplete
1. Ensure you can see the new emojis and their aliases in the award emoji menu
1. You might need to add new emoji unicode support checks and rules for platforms
1. You might need to add new emoji Unicode support checks and rules for platforms
that do not support a certain emoji and we need to fallback to an image.
See `app/assets/javascripts/emoji/support/is_emoji_unicode_supported.js`
and `app/assets/javascripts/emoji/support/unicode_support_map.js`
......@@ -89,7 +89,7 @@ Default client accepts two parameters: `resolvers` and `config`.
## GraphQL Queries
To save query compilation at runtime, webpack can directly import `.graphql`
files. This allows webpack to preprocess the query at compile time instead
files. This allows webpack to pre-process the query at compile time instead
of the client doing compilation of queries.
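As a hedged example (the file name and the vue-apollo usage are assumptions), an imported query can then be passed around like any other value:

```javascript
// webpack resolves the .graphql file into a parsed document at build time.
import getProjectsQuery from './queries/get_projects.query.graphql';

export default {
  // Assuming vue-apollo smart queries are in use.
  apollo: {
    projects: {
      query: getProjectsQuery,
    },
  },
};
```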
To distinguish queries from mutations and fragments, the following naming convention is recommended:
......
......@@ -24,7 +24,7 @@ sprite_icon(icon_name, size: nil, css_class: '')
- **icon_name** Use the icon_name that you can find in the SVG Sprite
([Overview is available here](https://gitlab-org.gitlab.io/gitlab-svgs)).
- **size (optional)** Use one of the following sizes : 16, 24, 32, 48, 72 (this will be translated into a `s16` class)
- **css_class (optional)** If you want to add additional css classes
- **css_class (optional)** If you want to add additional CSS classes
**Example**
......@@ -67,8 +67,8 @@ export default {
- **name** Name of the Icon in the SVG Sprite ([Overview is available here](https://gitlab-org.gitlab.io/gitlab-svgs)).
- **size (optional)** Number value for the size which is then mapped to a specific CSS class
(Available Sizes: 8, 12, 16, 18, 24, 32, 48, 72 are mapped to `sXX` css classes)
- **css-classes (optional)** Additional CSS Classes to add to the svg tag.
(Available Sizes: 8, 12, 16, 18, 24, 32, 48, 72 are mapped to `sXX` CSS classes)
- **css-classes (optional)** Additional CSS Classes to add to the SVG tag.
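A hedged sketch of wiring the component up; the import path, icon name, and class are assumptions for illustration:

```javascript
import Icon from '~/vue_shared/components/icon.vue'; // assumed component location

export default {
  components: {
    Icon,
  },
  // Rendered in the template as:
  // <icon name="commit" :size="16" css-classes="some-extra-class" />
};
```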
### Usage in HTML/JS
......@@ -91,7 +91,7 @@ Please use the class `svg-content` around it to ensure nice rendering.
### Usage in Vue
To use an SVG illustrations in a template provide the path as a property and display it through a standard img tag.
To use an SVG illustrations in a template provide the path as a property and display it through a standard `img` tag.
Component:
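A minimal sketch of such a component, with `svgIllustrationPath` as an assumed prop name:

```javascript
export default {
  props: {
    // Path to the SVG illustration, passed in from the mounting code.
    svgIllustrationPath: {
      type: String,
      required: true,
    },
  },
  // Rendered in the template as:
  // <img :src="svgIllustrationPath" />
};
```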
......
......@@ -6,9 +6,9 @@ To get started with Vue, read through [their documentation](https://vuejs.org/v2
What is described in the following sections can be found in these examples:
- web ide: <https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/ide/stores>
- security products: <https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/assets/javascripts/vue_shared/security_reports>
- registry: <https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/registry/stores>
- [Web IDE](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/ide/stores)
- [Security products](https://gitlab.com/gitlab-org/gitlab/tree/master/ee/app/assets/javascripts/vue_shared/security_reports)
- [Registry](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/app/assets/javascripts/registry/stores)
## Vue architecture
......@@ -16,7 +16,7 @@ All new features built with Vue.js must follow a [Flux architecture](https://fac
The main goal we are trying to achieve is to have only one data flow and only one data entry.
In order to achieve this goal we use [vuex](#vuex).
You can also read about this architecture in vue docs about [state management](https://vuejs.org/v2/guide/state-management.html#Simple-State-Management-from-Scratch)
You can also read about this architecture in Vue docs about [state management](https://vuejs.org/v2/guide/state-management.html#Simple-State-Management-from-Scratch)
and about [one way data flow](https://vuejs.org/v2/guide/components.html#One-Way-Data-Flow).
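Purely as an illustration of that single entry point for data (the names and state shape are made up), a feature store might start out like:

```javascript
import Vue from 'vue';
import Vuex from 'vuex';

Vue.use(Vuex);

// One store owns the feature's data; components read from it and
// commit mutations instead of mutating local copies.
export default new Vuex.Store({
  state: {
    items: [],
    isLoading: false,
  },
  mutations: {
    SET_ITEMS(state, items) {
      state.items = items;
    },
    SET_LOADING(state, isLoading) {
      state.isLoading = isLoading;
    },
  },
});
```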
### Components and Store
......@@ -59,7 +59,7 @@ To do that, provide the data through `data` attributes in the HTML element and q
_Note:_ You should only do this while initializing the application, because the mounted element will be replaced with Vue-generated DOM.
The advantage of providing data from the DOM to the Vue instance through `props` in the `render` function
instead of querying the DOM inside the main vue component is that makes tests easier by avoiding the need to
instead of querying the DOM inside the main Vue component is that makes tests easier by avoiding the need to
create a fixture or an HTML element in the unit test. See the following example:
```javascript
......