Commit f5379925 authored by Craig Norris

Merge branch 'aqualls-future-tense-1' into 'master'

Fix future tense issues in Monitor docset

See merge request gitlab-org/gitlab!48036
parents fb9c4fcc e9d2e0b8
...@@ -59,7 +59,7 @@ on non-Go GitLab subsystems.
GitLab uses the `GITLAB_TRACING` environment variable to configure distributed tracing. The same
configuration is used for all components (e.g., Workhorse, Rails, etc).
When `GITLAB_TRACING` is not set, the application isn't instrumented, meaning that there is
no overhead at all.
To enable `GITLAB_TRACING`, a valid _"configuration-string"_ value should be set, with a URL-like
...@@ -94,8 +94,8 @@ by typing `p` `b` in the browser window.
Once the performance bar is enabled, click on the **Trace** link in the performance bar to go to
the Jaeger UI.
The Jaeger search UI returns a query for the `Correlation-ID` of the current request. Normally,
this search should return a single trace result. Clicking this result shows the detail of the
trace in a hierarchical time-line.
![Jaeger Search UI](img/distributed_tracing_jaeger_ui.png)
...@@ -154,7 +154,7 @@ This should start the process with the default listening ports.
### 2. Configure the `GITLAB_TRACING` environment variable
Once you have Jaeger running, configure the `GITLAB_TRACING` variable with the
appropriate configuration string.
**TL;DR:** If you are running everything on the same host, use the following value:
...@@ -178,7 +178,7 @@ This configuration string uses the Jaeger driver `opentracing://jaeger` with the
| `udp_endpoint` | `localhost:6831` | This is the default. Configures Jaeger to send trace information to the UDP listener on port `6831` using compact thrift protocol. Note that we've experienced some issues with the [Jaeger Client for Ruby](https://github.com/salemove/jaeger-client-ruby) when using this protocol. |
| `sampler` | `probabalistic` | Configures Jaeger to use a probabilistic random sampler. The rate of samples is configured by the `sampler_param` value. |
| `sampler_param` | `0.01` | Use a ratio of `0.01` to configure the `probabalistic` sampler to randomly sample _1%_ of traces. |
| `service_name` | `api` | Override the service name used by the Jaeger backend. This parameter takes precedence over the application-supplied value. |
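For illustration, a configuration string combining the options above could look like the following sketch, assuming the options are passed as URL query parameters (the exact value you need depends on your environment):

```shell
# Example only: Jaeger driver with the options described in the table above.
export GITLAB_TRACING="opentracing://jaeger?udp_endpoint=localhost:6831&sampler=probabalistic&sampler_param=0.01&service_name=api"
```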
NOTE: **Note:**
The same `GITLAB_TRACING` value should be configured in the environment
...@@ -189,7 +189,7 @@ variables for all GitLab processes, including Workhorse, Gitaly, Rails, and Side
After the `GITLAB_TRACING` environment variable is exported to all GitLab services, start the
application.
When `GITLAB_TRACING` is configured properly, the application logs this on startup:
```shell
13:41:53 gitlab-workhorse.1 | 2019/02/12 13:41:53 Tracing enabled
...@@ -198,7 +198,7 @@ When `GITLAB_TRACING` is configured properly, the application will log this on s
...
```
If `GITLAB_TRACING` is not configured correctly, this issue is logged:
```shell
13:43:45 gitaly.1 | 2019/02/12 13:43:45 skipping tracing configuration step: tracer: unable to load driver mytracer
...@@ -216,5 +216,5 @@ not set.
By default, the Jaeger search UI is available at <http://localhost:16686/search>.
TIP: **Tip:**
Don't forget that you must generate traces by using the application before
they appear in the Jaeger UI.
...@@ -38,8 +38,8 @@ Documentation issues and merge requests are part of their respective repositorie
The [CI pipeline for the main GitLab project](../pipelines.md) is configured to automatically
run only the jobs that match the type of contribution. If your contribution contains
**only** documentation changes, then only documentation-related jobs run, and
the pipeline completes much faster than a code contribution.
If you are submitting documentation-only changes to Runner, Omnibus, or Charts,
the fast pipeline is not determined automatically. Instead, create branches for
...@@ -152,7 +152,7 @@ comments: false
Each page can have additional, optional metadata (set in the
[default.html](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/fc3577921343173d589dfa43d837b4307e4e620f/layouts/default.html#L30-52)
Nanoc layout), which is displayed at the top of the page if defined:
- `reading_time`: If you want to add an indication of the approximate reading
time of a page, you can set `reading_time` to `true`. This uses a simple
...@@ -225,9 +225,9 @@ Things to note:
the document might also be referenced in the views of GitLab (`app/`) which will
render when visiting `/help`, and sometimes in the testing suite (`spec/`).
You must search these paths for references to the doc and update them as well.
- The above `git grep` command searches recursively in the directory you run
it in for `workflow/lfs/lfs_administration` and `lfs/lfs_administration`
and prints the file and the line where this file is mentioned.
You may ask why the two greps. Since [we use relative paths to link to
documentation](styleguide/index.md#links), sometimes it might be useful to search a path deeper.
- The `*.md` extension is not used when a document is linked to GitLab's
...@@ -267,7 +267,7 @@ Before getting started, make sure you read the introductory section
- Label the MR `Documentation` (can only be done by people with `developer` access, for example, GitLab team members)
- Assign the correct milestone per note below (can only be done by people with `developer` access, for example, GitLab team members)
Documentation is merged if it is an improvement on existing content,
represents a good-faith effort to follow the template and style standards,
and is believed to be accurate.
...@@ -285,16 +285,16 @@ Every GitLab instance includes the documentation, which is available at `/help`
(`https://gitlab.example.com/help`). For example, <https://gitlab.com/help>.
The documentation available online on <https://docs.gitlab.com> is deployed every four hours from the `master` branch of GitLab, Omnibus, and Runner. Therefore,
after a merge request gets merged, it is available online on the same day.
However, it's shipped (and available on `/help`) within the milestone assigned
to the MR.
For example, let's say your merge request has a milestone set to 11.3, which
has a release date of 2018-09-22. If it gets merged on 2018-09-15, it is
available online on 2018-09-15, but, as the feature freeze date has passed, if
the MR does not have a `~"Pick into 11.3"` label, the milestone has to be changed
to 11.4 and it ships with all GitLab packages only on 2018-10-22,
with GitLab 11.4. Meaning, it's available only under `/help` from GitLab
11.4 onward, but available on <https://docs.gitlab.com/> on the same day it was merged.
### Linking to `/help`
...@@ -365,7 +365,7 @@ You can combine one or more of the following:
### GitLab `/help` tests
Several [RSpec tests](https://gitlab.com/gitlab-org/gitlab/blob/master/spec/features/help_pages_spec.rb)
are run to ensure GitLab documentation renders and works correctly. In particular, that the [main docs landing page](../../README.md) works correctly from `/help`.
For example, [GitLab.com's `/help`](https://gitlab.com/help).
## Docs site architecture
...@@ -392,20 +392,20 @@ The live preview is currently enabled for the following projects:
If your merge request has docs changes, you can use the manual `review-docs-deploy` job
to deploy the docs review app for your merge request.
You need at least Maintainer permissions to be able to run it.
![Manual trigger a docs build](img/manual_build_docs.png)
You must push a branch to those repositories, as it doesn't work for forks.
The `review-docs-deploy*` job:
1. Creates a new branch in the [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs)
project named after the scheme: `docs-preview-$DOCS_GITLAB_REPO_SUFFIX-$CI_MERGE_REQUEST_IID`,
where `DOCS_GITLAB_REPO_SUFFIX` is the suffix for each product (for example, `ee` for
EE, `omnibus` for Omnibus GitLab, and so on), and `CI_MERGE_REQUEST_IID` is the ID
of the respective merge request.
1. Triggers a cross-project pipeline and builds the docs site with your changes.
If the review app URL returns a 404 error, either the site is not
yet deployed, or something went wrong with the remote pipeline. Give it a few
...@@ -414,8 +414,8 @@ remote pipeline from the link in the merge request's job output.
If the pipeline failed or got stuck, drop a line in the `#docs` chat channel.
Make sure that you always delete the branch of the merge request you were
working on. If you don't, the remote docs branch isn't removed either,
and the server where the Review Apps are hosted can eventually run out of
disk space.
TIP: **Tip:**
...@@ -449,7 +449,7 @@ If you want to know the in-depth details, here's what's really happening:
- The number of the merge request is added so that you can know by the
`gitlab-docs` branch name the merge request it originated from.
1. The remote branch is then created if it doesn't exist (meaning you can
re-run the manual job as many times as you want and this step is skipped).
1. A new cross-project pipeline is triggered in the docs project.
1. The preview URL is shown both at the job output and in the merge request
widget. You also get the link to the remote pipeline.
...@@ -537,7 +537,8 @@ To have the screenshot focuses few more steps are needed:
- **wait for the content**: `expect(screenshot_area).to have_content 'Expiration interval'`
- **set the crop area**: `set_crop_data(screenshot_area, 20)`
In particular, `set_crop_data` accepts as arguments: a `DOM` element and a
padding. The padding is added around the element, enlarging the screenshot area.
#### Live example
...
...@@ -19,13 +19,14 @@ Instrumenting methods is done by using the `Gitlab::Metrics::Instrumentation`
module. This module offers a few different methods that can be used to
instrument code:
- `instrument_method`: Instruments a single class method.
- `instrument_instance_method`: Instruments a single instance method.
- `instrument_class_hierarchy`: Given a Class, this method recursively
instruments all sub-classes (both class and instance methods).
- `instrument_methods`: Instruments all public and private class methods of a
  Module.
- `instrument_instance_methods`: Instruments all public and private instance
  methods of a Module.
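For example, a minimal sketch of the first two calls might look like this (the instrumented classes are illustrative; `SomeService` is a hypothetical class, not one defined on this page):

```ruby
# Illustrative only: instrument one class method and one instance method.
Gitlab::Metrics::Instrumentation.instrument_method(Banzai::Renderer, :render)
Gitlab::Metrics::Instrumentation.instrument_instance_method(SomeService, :execute)
```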
To remove the need for typing the full `Gitlab::Metrics::Instrumentation`
namespace you can use the `configure` class method. This method simply yields
...@@ -91,7 +92,7 @@ Ruby code. In case of the above snippet you'd run the following:
- `$ Banzai::Renderer.render`
This prints a result similar to:
```plaintext
From: /path/to/your/gitlab/lib/gitlab/metrics/instrumentation.rb @ line 148:
...@@ -131,7 +132,7 @@ Three values are measured for a block:
Both the real and CPU timings are measured in milliseconds.
Multiple calls to the same block result in the final values being the sum
of all individual values. Take this code for example:
```ruby
...@@ -142,7 +143,7 @@ of all individual values. Take this code for example:
end
```
Here, the final value of `sleep_real_time` is `3`, and not `1`.
## Tracking Custom Events
...
...@@ -35,14 +35,14 @@ Completed 200 OK in 166ms (Views: 117.4ms | ActiveRecord: 27.2ms)
These logs suffer from a number of problems:
1. They often lack timestamps or other contextual information (for example, project ID or user)
1. They may span multiple lines, which makes them hard to find via Elasticsearch.
1. They lack a common structure, which makes them hard to parse by log
forwarders, such as Logstash or Fluentd. This also makes them hard to
search.
Note that currently on GitLab.com, any messages in `production.log` aren't
indexed by Elasticsearch due to the sheer volume and noise. They
do end up in Google Stackdriver, but it is still harder to search for
logs there. See the [GitLab.com logging
documentation](https://gitlab.com/gitlab-com/runbooks/blob/master/logging/doc/README.md)
...@@ -73,7 +73,7 @@ importer progresses. Here's what to do:
make it easy for people to search pertinent logs in one place. For
example, `geo.log` contains all logs pertaining to GitLab Geo.
To create a new file:
1. Choose a filename (for example, `importer_json.log`).
1. Create a new subclass of `Gitlab::JsonLogger`:
```ruby
...@@ -99,7 +99,7 @@ importer progresses. Here's what to do:
```
Note that it's useful to memoize this because creating a new logger
each time you log opens a file, adding unnecessary overhead.
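A sketch of that memoization pattern might look like the following (the logger class name and `build` helper are assumptions for illustration, not definitions from this page):

```ruby
# Illustrative only: reuse one logger instance so the log file is opened once.
def logger
  @logger ||= ImporterJsonLogger.build
end
```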
1. Now insert log messages into your code. When adding logs,
make sure to include all the context as key-value pairs:
...@@ -129,7 +129,7 @@ an Elasticsearch-specific way, the concepts should translate to many systems you
might use to index structured logs. GitLab.com uses Elasticsearch to index log
data.
Unless a field type is explicitly mapped, Elasticsearch infers the type from
the first instance of that field value it sees. Subsequent instances of that
field value with different types will either fail to be indexed, or in some
cases (scalar/object conflict), the whole log line will be dropped.
...@@ -138,7 +138,7 @@ GitLab.com's logging Elasticsearch sets
[`ignore_malformed`](https://www.elastic.co/guide/en/elasticsearch/reference/current/ignore-malformed.html),
which allows documents to be indexed even when there are simpler sorts of
mapping conflict (for example, number / string), although indexing on the affected fields
breaks.
Examples:
...@@ -177,17 +177,24 @@ challenged to choose between seconds, milliseconds or any other unit, lean towar
(with microseconds precision, i.e. `Gitlab::InstrumentationHelper::DURATION_PRECISION`).
In order to make it easier to track timings in the logs, make sure the log key has `_s` as
suffix and `duration` within its name (for example, `view_duration_s`).
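For instance, a hypothetical log call following that naming convention might look like:

```ruby
# Illustrative only: a duration logged in seconds with an `_s` suffix.
logger.info(message: "Rendered view", view_duration_s: 0.146)
```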
## Multi-destination Logging
GitLab is transitioning from unstructured/plaintext logs to structured/JSON logs. During this transition period some logs are recorded in multiple formats through multi-destination logging.
### How to use multi-destination logging
Create a new logger class, inheriting from `MultiDestinationLogger` and add an
array of loggers to a `LOGGERS` constant. The loggers should be classes that
descend from `Gitlab::Logger`. For example, the user-defined loggers in the
following examples could be inheriting from `Gitlab::Logger` and
`Gitlab::JsonLogger`, respectively.
You must specify one of the loggers as the `primary_logger`. The
`primary_logger` is used when information about this multi-destination logger is
displayed in the application (for example, using the `Gitlab::Logger.read_latest`
method).
The following example sets one of the defined `LOGGERS` as a `primary_logger`.
...@@ -207,19 +214,19 @@ module Gitlab
end
```
You can now call the usual logging methods on this multi-logger. For example:
```ruby
FancyMultiLogger.info(message: "Information")
```
This message is logged by each logger registered in `FancyMultiLogger.loggers`.
### Passing a string or hash for logging
When passing a string or hash to a `MultiDestinationLogger`, the log lines could be formatted differently, depending on the kinds of `LOGGERS` set.
For example, let's partially define the loggers from the previous example:
```ruby
module Gitlab
...@@ -356,7 +363,7 @@ end
## Additional steps with new log files
1. Consider log retention settings. By default, Omnibus rotates any
logs in `/var/log/gitlab/gitlab-rails/*.log` every hour and [keeps at
most 30 compressed files](https://docs.gitlab.com/omnibus/settings/logs.html#logrotate).
On GitLab.com, that setting is only 6 compressed files. These settings should suffice
...
...@@ -31,7 +31,7 @@ The requirement for adding a new metric is to make each query to have an unique
### Update existing metrics
After you add or change an existing common metric, you must [re-run the import script](../administration/raketasks/maintenance.md#import-common-metrics) that queries and updates all existing metrics.
Or, you can create a database migration:
...@@ -51,7 +51,7 @@ class ImportCommonMetrics < ActiveRecord::Migration[4.2]
end
```
If a query metric (which is identified by `id:`) is removed, it isn't removed from the database by default.
You might want to add an additional database migration that decides what to do with the removed metric.
For example: you might be interested in migrating all dependent data to a different metric.
...@@ -75,5 +75,5 @@ This section describes how to add new metrics for self-monitoring
1. Select the appropriate name for your metric. Refer to the guidelines
for [Prometheus metric names](https://prometheus.io/docs/practices/naming/#metric-names).
1. Update the list of [GitLab Prometheus metrics](../administration/monitoring/prometheus/gitlab_metrics.md).
1. Trigger the relevant page or code that records the new metric.
1. Check that the new metric appears at `/-/metrics`.
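For illustration, a new metric might be defined and updated with a sketch like the following (the metric name is an example only, and the helper shown is an assumption based on GitLab's Prometheus client wrappers rather than a definition from this page):

```ruby
# Illustrative only: define a counter and increment it where the event occurs.
EXAMPLE_COUNTER = Gitlab::Metrics.counter(
  :gitlab_example_events_total,
  'Total count of example events (illustrative)'
)
EXAMPLE_COUNTER.increment
```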
...@@ -79,8 +79,8 @@ project. That way you can have different clusters for different environments,
like dev, staging, production, and so on.
Simply add another cluster, like you did the first time, and make sure to
[set an environment scope](#setting-the-environment-scope) that
differentiates the new cluster from the rest.
#### Setting the environment scope
...@@ -89,9 +89,9 @@ them with an environment scope. The environment scope associates clusters with [
[environment-specific variables](../../../ci/variables/README.md#limit-the-environment-scopes-of-environment-variables) work.
The default environment scope is `*`, which means all jobs, regardless of their
environment, use that cluster. Each scope can be used only by a single cluster
in a project; otherwise, a validation error occurs. Also, jobs that don't
have an environment keyword set can't access any cluster.
For example, let's say the following Kubernetes clusters exist in a project:
...@@ -127,13 +127,13 @@ deploy to production:
url: https://example.com/
```
The results:
- The Development cluster details are available in the `deploy to staging`
job.
- The production cluster details are available in the `deploy to production`
job.
- No cluster details are available in the `test` job because it doesn't
define any environment.
## Configuring your Kubernetes cluster
...@@ -157,15 +157,15 @@ applications running on the cluster.
> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22011) in GitLab 11.5.
> - Became [optional](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/26565) in GitLab 11.11.
You can choose to allow GitLab to manage your cluster for you. If your cluster
is managed by GitLab, resources for your projects are automatically created. See
the [Access controls](add_remove_clusters.md#access-controls) section for
details about the created resources.
If you choose to manage your own cluster, project-specific resources aren't created
automatically. If you are using [Auto DevOps](../../../topics/autodevops/index.md), you must
explicitly provide the `KUBE_NAMESPACE` [deployment variable](#deployment-variables)
for your deployment jobs to use; otherwise a namespace is created for you.
#### Important notes
...@@ -198,10 +198,10 @@ To clear the cache:
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/24580) in GitLab 11.8.
You do not need to specify a base domain on cluster settings when using GitLab Serverless. The domain in that case
is specified as part of the Knative installation. See [Installing Applications](#installing-applications).
Specifying a base domain automatically sets `KUBE_INGRESS_BASE_DOMAIN` as an environment variable.
If you are using [Auto DevOps](../../../topics/autodevops/index.md), this domain is used for the different
stages. For example, Auto Review Apps and Auto Deploy.
The domain should have a wildcard DNS configured to the Ingress IP address. After Ingress has been installed (see [Installing Applications](#installing-applications)),
...@@ -224,7 +224,7 @@ Auto DevOps automatically detects, builds, tests, deploys, and monitors your
applications.
To make full use of Auto DevOps (Auto Deploy, Auto Review Apps, and
Auto Monitoring), the Kubernetes project integration must be enabled, but
Kubernetes clusters can be used without Auto DevOps.
[Read more about Auto DevOps](../../../topics/autodevops/index.md)
...@@ -238,7 +238,7 @@ A Kubernetes cluster can be the destination for a deployment job. If
and configuration is not required. You can immediately begin interacting with
the cluster from your jobs using tools such as `kubectl` or `helm`.
- You don't use GitLab's cluster integration, you can still deploy to your
cluster. However, you must configure Kubernetes tools yourself
using [environment variables](../../../ci/variables/README.md#custom-environment-variables)
before you can interact with the cluster from your jobs.
...@@ -257,14 +257,14 @@ The Kubernetes cluster integration exposes the following
GitLab CI/CD build environment to deployment jobs, which are jobs that have
[defined a target environment](../../../ci/environments/index.md#defining-environments).
| Variable | Description |
|----------------------------|-------------|
| `KUBE_URL` | Equal to the API URL. |
| `KUBE_TOKEN` | The Kubernetes token of the [environment service account](add_remove_clusters.md#access-controls). Prior to GitLab 11.5, `KUBE_TOKEN` was the Kubernetes token of the main service account of the cluster integration. |
| `KUBE_NAMESPACE` | The namespace associated with the project's deployment service account. In the format `<project_name>-<project_id>-<environment>`. For GitLab-managed clusters, a matching namespace is automatically created by GitLab in the cluster. If your cluster was created before GitLab 12.2, the default `KUBE_NAMESPACE` is set to `<project_name>-<project_id>`. |
| `KUBE_CA_PEM_FILE` | Path to a file containing PEM data. Only present if a custom CA bundle was specified. |
| `KUBE_CA_PEM` | (**deprecated**) Raw PEM data. Only if a custom CA bundle was specified. |
| `KUBECONFIG` | Path to a file containing `kubeconfig` for this deployment. CA bundle would be embedded if specified. This configuration also embeds the same token defined in `KUBE_TOKEN` so you likely need only this variable. This variable name is also automatically picked up by `kubectl` so you don't need to reference it explicitly if using `kubectl`. |
| `KUBE_INGRESS_BASE_DOMAIN` | From GitLab 11.8, this variable can be used to set a domain per cluster. See [cluster domains](#base-domain) for more information. |
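For example, because `KUBECONFIG` is picked up automatically by `kubectl`, a deployment job can call it directly. A hypothetical job script:

```shell
# Illustrative only: kubectl reads $KUBECONFIG automatically in the deployment job.
kubectl get pods --namespace "$KUBE_NAMESPACE"
```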
### Custom namespace
...@@ -362,7 +362,7 @@ the deployment job:
- A namespace.
- A service account.
However, sometimes GitLab cannot create them. In such instances, your job can fail with the message:
```plaintext
This job failed because the necessary resources were not successfully created.
...@@ -376,7 +376,7 @@ Reasons for failure include:
privileges required by GitLab.
- Missing `KUBECONFIG` or `KUBE_TOKEN` variables. To be passed to your job, they must have a matching
[`environment:name`](../../../ci/environments/index.md#defining-environments). If your job has no
`environment:name` set, the Kubernetes credentials are not passed to it.
NOTE: **Note:**
Project-level clusters upgraded from GitLab 12.0 or older may be configured
...@@ -396,6 +396,6 @@ Automatically detect and monitor Kubernetes metrics. Automatic monitoring of
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/4701) in [GitLab Ultimate](https://about.gitlab.com/pricing/) 10.6.
> - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/208224) to GitLab Core in 13.2.
When [Prometheus is deployed](#installing-applications), GitLab monitors the cluster's health. At the top of the cluster settings page, CPU and Memory utilization is displayed, along with the total amount available. Keeping an eye on cluster resources can be important: if the cluster runs out of memory, pods may be shut down or fail to start.
![Cluster Monitoring](img/k8s_cluster_monitoring.png)
...@@ -19,7 +19,7 @@ There are two ways to set up Prometheus integration, depending on where your app
- For deployments on Kubernetes, GitLab can automatically [deploy and manage Prometheus](#managed-prometheus-on-kubernetes).
- For other deployment targets, simply [specify the Prometheus server](#manual-configuration-of-prometheus).
Once enabled, GitLab detects metrics from known services in the [metric library](prometheus_library/index.md). You can also [add your own metrics](../../../operations/metrics/index.md#adding-custom-metrics) and create
[custom dashboards](../../../operations/metrics/dashboards/index.md).
## Enabling Prometheus Integration
...@@ -48,7 +48,7 @@ Once you have a connected Kubernetes cluster, deploying a managed Prometheus is
Prometheus is deployed into the `gitlab-managed-apps` namespace, using the [official Helm chart](https://github.com/helm/charts/tree/master/stable/prometheus). Prometheus is only accessible within the cluster, with GitLab communicating through the [Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
The Prometheus server [automatically detects and monitors](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config) nodes, pods, and endpoints. To configure a resource to be monitored by Prometheus, simply set the following [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/):
- `prometheus.io/scrape` to `true` to enable monitoring of the resource.
- `prometheus.io/port` to define the port of the metrics endpoint.
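For instance, the annotations could be applied to an existing resource with a hypothetical command like the following (the service name and port are placeholders only):

```shell
# Illustrative only: annotate a service so the managed Prometheus scrapes it on port 9090.
kubectl annotate service my-app prometheus.io/scrape="true" prometheus.io/port="9090"
```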
...@@ -165,8 +165,8 @@ Installing and configuring Prometheus to monitor applications is fairly straight
#### Configuration in GitLab
The actual configuration of Prometheus integration within GitLab
requires the domain name or IP address of the Prometheus server you'd like
to integrate with. If the Prometheus resource is secured with Google's Identity-Aware Proxy (IAP),
additional information like Client ID and Service Account credentials can be passed which
GitLab can use to access the resource. More information about authentication from a
...@@ -189,7 +189,7 @@ service account can be found at Google's documentation for
#### Thanos configuration in GitLab
You can configure [Thanos](https://thanos.io/) as a drop-in replacement for Prometheus
with GitLab, using the domain name or IP address of the Thanos server you'd like
to integrate with.
1. Navigate to the [Integrations page](overview.md#accessing-integrations).
...@@ -199,9 +199,10 @@ to integrate with.
### Precedence with multiple Prometheus configurations
Although you can enable both a [manual configuration](#manual-configuration-of-prometheus)
and [auto configuration](#managed-prometheus-on-kubernetes) of Prometheus, you
can use only one:
- If you have enabled a
[Prometheus manual configuration](#manual-configuration-of-prometheus)
...@@ -225,16 +226,16 @@ Developers can view the performance impact of their changes within the merge
request workflow. This feature requires [Kubernetes](prometheus_library/kubernetes.md) metrics.
When a source branch has been deployed to an environment, a sparkline and
numeric comparison of the average memory consumption displays. On the
sparkline, a dot indicates when the current changes were deployed, with up to 30 minutes of
performance data displayed before and after. The comparison shows the difference
between the 30 minute average before and after the deployment. This information
is updated after each commit has been deployed.
Once merged and the target branch has been redeployed, the metrics switch
to show the new environments this revision has been deployed to. to show the new environments this revision has been deployed to.
Performance data will be available for the duration it is persisted on the Performance data is available for the duration it is persisted on the
Prometheus server. Prometheus server.
![Merge Request with Performance Impact](img/merge_request_performance.png) ![Merge Request with Performance Impact](img/merge_request_performance.png)
...@@ -33,4 +33,4 @@ A sample Cloudwatch Exporter configuration file, configured for basic AWS ELB mo ...@@ -33,4 +33,4 @@ A sample Cloudwatch Exporter configuration file, configured for basic AWS ELB mo
## Specifying the Environment label ## Specifying the Environment label
In order to isolate and only display relevant metrics for a given environment In order to isolate and only display relevant metrics for a given environment
however, GitLab needs a method to detect which labels are associated. To do this, GitLab will [look for an `environment` label](index.md#identifying-environments). however, GitLab needs a method to detect which labels are associated. To do this, GitLab [looks for an `environment` label](index.md#identifying-environments).
...@@ -28,4 +28,4 @@ To get started with NGINX monitoring, you should install and configure the [HAPr ...@@ -28,4 +28,4 @@ To get started with NGINX monitoring, you should install and configure the [HAPr
## Specifying the Environment label ## Specifying the Environment label
In order to isolate and only display relevant metrics for a given environment In order to isolate and only display relevant metrics for a given environment
however, GitLab needs a method to detect which labels are associated. To do this, GitLab will [look for an `environment` label](index.md#identifying-environments). however, GitLab needs a method to detect which labels are associated. To do this, GitLab [looks for an `environment` label](index.md#identifying-environments).
...@@ -21,8 +21,8 @@ Currently supported exporters are: ...@@ -21,8 +21,8 @@ Currently supported exporters are:
- [HAProxy](haproxy.md) - [HAProxy](haproxy.md)
- [Amazon Cloud Watch](cloudwatch.md) - [Amazon Cloud Watch](cloudwatch.md)
We have tried to surface the most important metrics for each exporter, and will We have tried to surface the most important metrics for each exporter, and
be continuing to add support for additional exporters in future releases. If you continue to add support for additional exporters in future releases. If you
would like to add support for other official exporters, contributions are welcome. would like to add support for other official exporters, contributions are welcome.
## Identifying Environments ## Identifying Environments
......
...@@ -29,11 +29,11 @@ NGINX server metrics are detected, which tracks the pages and content directly s ...@@ -29,11 +29,11 @@ NGINX server metrics are detected, which tracks the pages and content directly s
## Configuring Prometheus to monitor for NGINX metrics ## Configuring Prometheus to monitor for NGINX metrics
To get started with NGINX monitoring, you should first enable the [VTS statistics](https://github.com/vozlt/nginx-module-vts) module for your NGINX server. This will capture and display statistics in an HTML readable form. Next, you should install and configure the [NGINX VTS exporter](https://github.com/hnlq715/nginx-vts-exporter) which parses these statistics and translates them into a Prometheus monitoring endpoint. To get started with NGINX monitoring, you should first enable the [VTS statistics](https://github.com/vozlt/nginx-module-vts) module for your NGINX server. This captures and displays statistics in an HTML readable form. Next, you should install and configure the [NGINX VTS exporter](https://github.com/hnlq715/nginx-vts-exporter) which parses these statistics and translates them into a Prometheus monitoring endpoint.
If you are using NGINX as your Kubernetes Ingress, GitLab will [automatically detect](nginx_ingress.md) the metrics once enabled in 0.9.0 and later releases. If you are using NGINX as your Kubernetes Ingress, GitLab [automatically detects](nginx_ingress.md) the metrics once enabled in 0.9.0 and later releases.
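Outside of the Kubernetes auto-detection mentioned above, you also need Prometheus to scrape the exporter yourself. A minimal sketch, assuming the NGINX VTS exporter listens on its default port on the same host as Prometheus:

```yaml
# prometheus.yml fragment (sketch): scrape the NGINX VTS exporter
scrape_configs:
  - job_name: nginx
    static_configs:
      - targets: ['localhost:9913']   # assumed nginx-vts-exporter listen address
```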
## Specifying the Environment label ## Specifying the Environment label
In order to isolate and only display relevant metrics for a given environment In order to isolate and only display relevant metrics for a given environment
however, GitLab needs a method to detect which labels are associated. To do this, GitLab will [look for an `environment` label](index.md#identifying-environments). however, GitLab needs a method to detect which labels are associated. To do this, GitLab [looks for an `environment` label](index.md#identifying-environments).
...@@ -27,7 +27,7 @@ NGINX Ingress versions prior to 0.16.0 offer an included [VTS Prometheus metrics ...@@ -27,7 +27,7 @@ NGINX Ingress versions prior to 0.16.0 offer an included [VTS Prometheus metrics
## Configuring NGINX Ingress monitoring ## Configuring NGINX Ingress monitoring
If you have deployed NGINX Ingress using GitLab's [Kubernetes cluster integration](../../clusters/index.md#installing-applications), it will [automatically be monitored](#about-managed-nginx-ingress-deployments) by Prometheus. If you have deployed NGINX Ingress using GitLab's [Kubernetes cluster integration](../../clusters/index.md#installing-applications), Prometheus [automatically monitors it](#about-managed-nginx-ingress-deployments).
For other deployments, there is [some configuration](#manually-setting-up-nginx-ingress-for-prometheus-monitoring) required depending on your installation: For other deployments, there is [some configuration](#manually-setting-up-nginx-ingress-for-prometheus-monitoring) required depending on your installation:
...@@ -37,7 +37,7 @@ For other deployments, there is [some configuration](#manually-setting-up-nginx- ...@@ -37,7 +37,7 @@ For other deployments, there is [some configuration](#manually-setting-up-nginx-
### About managed NGINX Ingress deployments ### About managed NGINX Ingress deployments
NGINX Ingress is deployed into the `gitlab-managed-apps` namespace, using the [official Helm chart](https://github.com/helm/charts/tree/master/stable/nginx-ingress). NGINX Ingress will be [externally reachable via the Load Balancer's Endpoint](../../../clusters/applications.md#ingress). NGINX Ingress is deployed into the `gitlab-managed-apps` namespace, using the [official Helm chart](https://github.com/helm/charts/tree/master/stable/nginx-ingress). NGINX Ingress is [externally reachable via the Load Balancer's Endpoint](../../../clusters/applications.md#ingress).
NGINX is configured for Prometheus monitoring, by setting: NGINX is configured for Prometheus monitoring, by setting:
...@@ -45,11 +45,11 @@ NGINX is configured for Prometheus monitoring, by setting: ...@@ -45,11 +45,11 @@ NGINX is configured for Prometheus monitoring, by setting:
- `prometheus.io/scrape: "true"`, to enable automatic discovery. - `prometheus.io/scrape: "true"`, to enable automatic discovery.
- `prometheus.io/port: "10254"`, to specify the metrics port. - `prometheus.io/port: "10254"`, to specify the metrics port.
When used in conjunction with the GitLab deployed Prometheus service, response metrics will be automatically collected. When used in conjunction with the GitLab deployed Prometheus service, response metrics are automatically collected.
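For reference, these settings correspond roughly to the following Helm chart values. This is a sketch that assumes the chart exposes a `controller.podAnnotations` key; verify the exact structure against your chart version:

```yaml
# values.yaml fragment (sketch) for the stable/nginx-ingress chart
controller:
  podAnnotations:
    prometheus.io/scrape: "true"   # enable automatic discovery
    prometheus.io/port: "10254"    # metrics port exposed by the controller
```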
### Manually setting up NGINX Ingress for Prometheus monitoring ### Manually setting up NGINX Ingress for Prometheus monitoring
Version 0.9.0 and above of [NGINX Ingress](https://github.com/kubernetes/ingress-nginx) have built-in support for exporting Prometheus metrics. To enable, a ConfigMap setting must be passed: `enable-vts-status: "true"`. Once enabled, a Prometheus metrics endpoint will start running on port 10254. Version 0.9.0 and above of [NGINX Ingress](https://github.com/kubernetes/ingress-nginx) have built-in support for exporting Prometheus metrics. To enable, a ConfigMap setting must be passed: `enable-vts-status: "true"`. Once enabled, a Prometheus metrics endpoint starts running on port 10254.
Next, the Ingress needs to be annotated for Prometheus monitoring. Two new annotations need to be added: Next, the Ingress needs to be annotated for Prometheus monitoring. Two new annotations need to be added:
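Pulled together, the two changes might look like the following fragments. Resource names, namespaces, and API versions are omitted here, so adapt the fragments to however your controller is deployed:

```yaml
# ConfigMap data fragment (sketch): enable the VTS status module
data:
  enable-vts-status: "true"
---
# metadata fragment (sketch): annotate for Prometheus discovery
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
```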
...@@ -60,6 +60,6 @@ Managing these settings depends on how NGINX Ingress has been deployed. If you h ...@@ -60,6 +60,6 @@ Managing these settings depends on how NGINX Ingress has been deployed. If you h
## Specifying the Environment label ## Specifying the Environment label
In order to isolate and only display relevant metrics for a given environment, GitLab needs a method to detect which labels are associated. To do this, GitLab will search for metrics with appropriate labels. In this case, the `ingress` label must `<CI_ENVIRONMENT_SLUG>`. In order to isolate and only display relevant metrics for a given environment, GitLab needs a method to detect which labels are associated. To do this, GitLab searches for metrics with appropriate labels. In this case, the `ingress` label must match `<CI_ENVIRONMENT_SLUG>`.
If you have used [Auto Deploy](../../../../topics/autodevops/stages.md#auto-deploy) to deploy your app, this format will be used automatically and metrics will be detected with no action on your part. If you have used [Auto Deploy](../../../../topics/autodevops/stages.md#auto-deploy) to deploy your app, this format is used automatically and metrics are detected with no action on your part.