Commit e824a2de authored by Kati Paizee

Merge branch 'docs-eol-whitespace' into 'master'

Remove trailing whitespace in docs

See merge request gitlab-org/gitlab!81477
parents 8de4617a c8d4ec09
......@@ -18,7 +18,7 @@ To host the GitLab product documentation, you can use:
- Your own web server
After you create a website by using one of these methods, you redirect the UI links
in the product to point to your website.
NOTE:
The website you create must be hosted under a subdirectory that matches
......
......@@ -616,7 +616,7 @@ Note the following:
- IP address, you must add it as a Subject Alternative Name to the certificate.
- When running Praefect sub-commands such as `dial-nodes` and `list-untracked-repositories` from the command line with
[Gitaly TLS enabled](configure_gitaly.md#enable-tls-support), you must set the `SSL_CERT_DIR` or `SSL_CERT_FILE`
environment variable so that the Gitaly certificate is trusted. For example:
```shell
sudo SSL_CERT_DIR=/etc/gitlab/trusted_certs /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml dial-nodes
......
......@@ -79,7 +79,7 @@ The Authorization code with PKCE flow, PKCE for short, makes it possible to secu
the OAuth exchange of client credentials for access tokens on public clients without
requiring access to the _Client Secret_ at all. This makes the PKCE flow advantageous
for single page JavaScript applications or other client side apps where keeping secrets
from the user is a technical impossibility.
Before starting the flow, generate the `STATE`, the `CODE_VERIFIER` and the `CODE_CHALLENGE`.
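For example, a minimal Ruby sketch of generating these values (per RFC 7636, the S256 `CODE_CHALLENGE` is the Base64url-encoded SHA-256 digest of the `CODE_VERIFIER`):

```ruby
require "securerandom"
require "digest"
require "base64"

# Opaque value used to maintain state between the request and the callback.
state = SecureRandom.hex(16)

# High-entropy, URL-safe code verifier (43-128 characters per RFC 7636).
code_verifier = SecureRandom.urlsafe_base64(64)

# S256 transform: Base64url-encoded SHA-256 digest, without padding.
code_challenge = Base64.urlsafe_encode64(Digest::SHA256.digest(code_verifier), padding: false)
```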
......
......@@ -128,7 +128,7 @@ and you should make sure your version matches the version used by GitLab.
## Update linter configuration
[Vale configuration](#vale) and [markdownlint configuration](#markdownlint) are under source control in each
project, so updates must be committed to each project individually.
We consider the configuration in the `gitlab` project as the source of truth and that's where all updates should
......
......@@ -18,7 +18,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
## Act as SaaS
When developing locally, there are times when you need your instance to act like the SaaS version of the product.
In those instances, you can simulate SaaS by exporting an environment variable as seen below:
`export GITLAB_SIMULATE_SAAS=1`
......
......@@ -262,7 +262,7 @@ module Gitlab
end
```
A worker that is only defined in the EE codebase can subscribe to an event in the same way by
declaring the subscription in `ee/lib/ee/gitlab/event_store.rb`.
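A hedged sketch of what that declaration might look like (the worker and event names below are illustrative placeholders, not necessarily real GitLab classes):

```ruby
# ee/lib/ee/gitlab/event_store.rb (sketch)
module EE
  module Gitlab
    module EventStore
      extend ActiveSupport::Concern

      class_methods do
        extend ::Gitlab::Utils::Override

        override :configure!
        def configure!(store)
          super

          # Subscribe an EE-only worker to an event (illustrative names).
          store.subscribe ::Security::ExampleEeOnlyWorker, to: ::Ci::PipelineCreatedEvent
        end
      end
    end
  end
end
```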
Subscriptions are stored in memory when the Rails app is loaded and they are immediately frozen.
......
......@@ -448,7 +448,7 @@ The first way is simply by running the experiment. Assuming the experiment has b
The second way doesn't run the experiment and is intended to be used if the experiment only needs to surface in the client layer. To accomplish this we can simply `.publish` the experiment. This won't run any logic, but does surface the experiment details in the client layer so they can be utilized there.
An example might be to publish an experiment in a `before_action` in a controller. Assuming we've defined the `PillColorExperiment` class, like we have above, we can surface it to the client by publishing it instead of running it:
```ruby
before_action -> { experiment(:pill_color).publish }, only: [:show]
......
......@@ -136,7 +136,7 @@ the class name with `js-`.
## ES Module Syntax
For most JavaScript files, use ES module syntax to import or export from modules.
Prefer named exports, as they improve name consistency.
```javascript
......
......@@ -127,11 +127,11 @@ A project with that name already exists.
##### `Exception: Error importing repository into (namespace) - No space left on device`
The disk has insufficient space to complete the import.
During import, the tarball is cached in your configured `shared_path` directory. Verify the
disk has enough free space to accommodate both the cached tarball and the unpacked
project files on disk.
### Importing via the Rails console
......
......@@ -8,7 +8,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
GitLab, like most large applications, enforces limits within certain features.
The absence of limits can affect security, performance, or data, and could even
exhaust the allocated resources for the application.
Every new feature should have safe usage limits included in its implementation.
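For example, a hedged sketch of enforcing a limit with GitLab's plan-limits pattern (the `project_hooks` limit name and the model shown are illustrative):

```ruby
# Hedged sketch: validating a plan limit on create. The limit name
# `project_hooks` and this model are illustrative.
class ProjectHook < ApplicationRecord
  belongs_to :project

  validate :validate_plan_limit_not_exceeded, on: :create

  private

  def validate_plan_limit_not_exceeded
    return unless project.actual_limits.exceeded?(:project_hooks, project.hooks.count)

    errors.add(:base, 'Maximum number of project hooks reached')
  end
end
```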
Limits are applicable for:
......@@ -23,6 +23,6 @@ Limits are applicable for:
## Additional reading
- Existing [GitLab application limits](../administration/instance_limits.md)
- Product processes: [introducing application limits](https://about.gitlab.com/handbook/product/product-processes/#introducing-application-limits)
- Development docs: [guide for adding application limits](application_limits.md)
......@@ -112,7 +112,7 @@ while and there are no issues, we can proceed.
### Proposed solution: Migrate data by using MultiStore with the fallback strategy
We need a way to migrate users to a new Redis store without causing any inconvenience from a UX perspective.
We also want the ability to fall back to the "old" Redis instance if something goes wrong with the new instance.
Migration Requirements:
......@@ -129,13 +129,13 @@ We need to write data into both Redis instances (old + new).
We read from the new instance, and fall back to the old instance if reads from the new dedicated Redis instance fail.
We need to log any issues or exceptions with a new instance, but still fall back to the old instance.
The proposed migration strategy is to implement and use the [MultiStore](https://gitlab.com/gitlab-org/gitlab/-/blob/fcc42e80ed261a862ee6ca46b182eee293ae60b6/lib/gitlab/redis/multi_store.rb).
We used this approach with [adding new dedicated Redis instance for session keys](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/579).
Also MultiStore comes with corresponding [specs](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/lib/gitlab/redis/multi_store_spec.rb).
The MultiStore looks like a `redis-rb ::Redis` instance.
In the new Redis instance class you added in [Step 1](#step-1-support-configuring-the-new-instance),
override the [Redis](https://gitlab.com/gitlab-org/gitlab/-/blob/fcc42e80ed261a862ee6ca46b182eee293ae60b6/lib/gitlab/redis/sessions.rb#L20-28) method from the `::Gitlab::Redis::Wrapper`
```ruby
......@@ -177,7 +177,7 @@ bin/feature-flag use_primary_store_as_default_for_foo
```
By enabling the `use_primary_and_secondary_stores_for_foo` feature flag, our `Gitlab::Redis::Foo` will use `MultiStore` to write to both the new Redis instance
and the [old (fallback-instance)](#fallback-instance).
If we fail to fetch data from the new instance, we fall back and read from the old Redis instance.
We can monitor logs for `Gitlab::Redis::MultiStore::ReadFromPrimaryError`, and also the Prometheus counter `gitlab_redis_multi_store_read_fallback_total`.
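For example, to turn the flags on from the Rails console while testing (a minimal sketch):

```ruby
# Write to both stores, reading from the new instance with fallback:
Feature.enable(:use_primary_and_secondary_stores_for_foo)

# Later, once confident, make the new store the default:
Feature.enable(:use_primary_store_as_default_for_foo)
```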
......@@ -218,7 +218,7 @@ When a command outside of the supported list is used, `method_missing` will pass
This ensures that anything unexpected behaves like it would before.
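Conceptually, the pass-through looks something like this hedged sketch (not the actual `MultiStore` source; `increment_method_missing_count` and `fallback_store` are illustrative names, see the linked `multi_store.rb` for the real implementation):

```ruby
# Hedged sketch of the pass-through described above.
def method_missing(command_name, *args, &block)
  # Feeds the gitlab_redis_multi_store_method_missing_total counter and
  # logs Gitlab::Redis::MultiStore::MethodMissingError.
  increment_method_missing_count(command_name)

  # Unsupported commands go to the old (fallback) Redis instance, so
  # anything unexpected behaves as it did before the migration.
  fallback_store.send(command_name, *args, &block)
end
```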
NOTE:
By tracking the `gitlab_redis_multi_store_method_missing_total` counter and `Gitlab::Redis::MultiStore::MethodMissingError`,
a developer can identify missing Redis commands and add implementations for them before proceeding with the migration.
##### Errors
......
......@@ -209,17 +209,17 @@ sequenceDiagram
- `uuid` - GitLab instance unique identifier
- `hostname` - GitLab instance hostname
- `version` - GitLab instance current version
- `elapsed` - Amount of time that elapsed between the start of the Service Ping reporting process and the moment the error occurred
- `message` - Error message
<pre>
<code>
{
"uuid"=>"02333324-1cd7-4c3b-a45b-a4993f05fb1d",
"hostname"=>"127.0.0.1",
"version"=>"14.7.0-pre",
"elapsed"=>0.006946,
"uuid"=>"02333324-1cd7-4c3b-a45b-a4993f05fb1d",
"hostname"=>"127.0.0.1",
"version"=>"14.7.0-pre",
"elapsed"=>0.006946,
"message"=>'PG::UndefinedColumn: ERROR: column \"non_existent_attribute\" does not exist\nLINE 1: SELECT COUNT(non_existent_attribute) FROM \"issues\" /*applica...'
}
</code>
......@@ -576,7 +576,7 @@ skip_db_write:
ServicePing::SubmitService.new(skip_db_write: true).execute
```
## Manually upload Service Ping payload
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/7388) in GitLab 14.8 with a flag named `admin_application_settings_service_usage_data_center`. Disabled by default.
......@@ -596,7 +596,7 @@ To upload payload manually:
## Monitoring
Service Ping reporting process state is monitored with an [internal SiSense dashboard](https://app.periscopedata.com/app/gitlab/968489/Product-Intelligence---Service-Ping-Health).
## Troubleshooting
......
......@@ -259,7 +259,7 @@ these scenarios, since `:always` should be considered the exception, not the rul
To allow for reads to be served from replicas, we added two additional consistency modes: `:sticky` and `:delayed`.
When you declare either `:sticky` or `:delayed` consistency, workers become eligible for database
load-balancing.
In both cases, if the replica is not up-to-date and the time from scheduling the job was less than the minimum delay interval,
the jobs sleep up to the minimum delay interval (0.8 seconds). This gives the replication process time to finish.
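For example, declaring a consistency mode in a worker (a minimal sketch with an illustrative class name):

```ruby
# Illustrative worker opting into load balancing with :delayed consistency.
class ExampleDelayedWorker
  include ApplicationWorker

  data_consistency :delayed

  def perform(project_id)
    # Reads here may be served from an up-to-date replica.
  end
end
```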
......
......@@ -130,7 +130,7 @@ You can set this variable inside the `fabricate_via_api` call. For a consistent
- Add the word `activated` to the end of a variable's name.
- Inside the `initialize` method, set the variable's default value.
For example:
```ruby
def initialize
......
......@@ -20,7 +20,7 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
| `:github` | The test requires a GitHub personal access token. |
| `:group_saml` | The test requires a GitLab instance that has SAML SSO enabled at the group level. Interacts with an external SAML identity provider. Paired with the `:orchestrated` tag. |
| `:instance_saml` | The test requires a GitLab instance that has SAML SSO enabled at the instance level. Interacts with an external SAML identity provider. Paired with the `:orchestrated` tag. |
| `:integrations` | This aims to test the available [integrations](../../../user/project/integrations/overview.md#integrations-listing). The test requires Docker to be installed in the run context. It will provision the containers and can be run against a local instance or using the `gitlab-qa` scenario `Test::Integration::Integrations` |
| `:service_ping_disabled` | The test interacts with the GitLab configuration service ping at the instance level to turn admin setting service ping checkbox on or off. This tag will have the test run only in the `service_ping_disabled` job and must be paired with the `:orchestrated` and `:requires_admin` tags. |
| `:jira` | The test requires a Jira Server. [GitLab-QA](https://gitlab.com/gitlab-org/gitlab-qa) provisions the Jira Server in a Docker container when the `Test::Integration::Jira` test scenario is run.
| `:kubernetes` | The test includes a GitLab instance that is configured to be run behind an SSH tunnel, allowing a TLS-accessible GitLab. This test also includes provisioning of at least one Kubernetes cluster to test against. _This tag is often paired with `:orchestrated`._ |
......
......@@ -122,7 +122,7 @@ signed in.
FLAG:
On self-managed GitLab, by default this feature is not available. To make it available, ask an administrator to [enable the feature flag](../administration/feature_flags.md) named `omniauth_login_minimal_scopes`. On GitLab.com, this feature is not available.
If you use a GitLab instance for authentication, you can reduce access rights when an OAuth application is used for sign in.
Any OAuth application can advertise the purpose of the application with the
authorization parameter: `gl_auth_type=login`. If the application is
......
......@@ -1461,7 +1461,7 @@ To prepare the new server:
1. On the top bar, select **Menu > Admin**.
1. On the left sidebar, select **Monitoring > Background Jobs**.
1. Under the Sidekiq dashboard, verify that the numbers
match with what was shown on the old server.
1. While still under the Sidekiq dashboard, select **Cron** and then **Enable All**
to re-enable periodic background jobs.
1. Test that read-only operations on the GitLab instance work as expected. For example, browse through project repository files, merge requests, and issues.
......
......@@ -93,7 +93,7 @@ The **rate limit** is 5 requests per minute per user.
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/339151) in GitLab 14.7.
There is a rate limit per IP address on the `/users/sign_up` endpoint. This is to mitigate attempts to misuse the endpoint, for example, mass
discovery of usernames or email addresses in use.
The **rate limit** is 20 calls per minute per IP address.
......@@ -113,7 +113,7 @@ The **rate limit** is 10 calls per minute per signed-in user.
There is a rate limit for the internal endpoint `/users/:username/exists`, used upon sign up to check if a chosen username has already been taken.
This is to mitigate the risk of misuse, such as mass discovery of usernames in use.
The **rate limit** is 20 calls per minute per IP address.
## Troubleshooting
......
......@@ -154,7 +154,7 @@ To change the namespace linked to a subscription:
for that group.
1. Select **Proceed to checkout**.
Subscription charges are calculated based on the total number of users in a group, including its subgroups and nested projects. If the [total number of users](gitlab_com/index.md#view-seat-usage) exceeds the number of seats in your subscription, your account is charged for the additional users and you need to pay for the overage before you can change the linked namespace.
Only one namespace can be linked to a subscription.
......
......@@ -190,7 +190,7 @@ You can override the default values in the `values.yaml` file in the
`HELM_UPGRADE_VALUES_FILE` [CI/CD variable](#cicd-variables) with
the path and name.
Some values cannot be overridden with the options above. Settings like `replicaCount` should instead be overridden with the `REPLICAS`
[build and deployment](#build-and-deployment) CI/CD variable. Follow [this issue](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/issues/31) for more information.
NOTE:
......
......@@ -232,7 +232,7 @@ cannot guarantee that upgrading between major versions is seamless.
A *major* upgrade requires the following steps:
1. Start by identifying a [supported upgrade path](#upgrade-paths). This is essential for a successful *major* version upgrade.
1. Upgrade to the latest minor version of the preceding major version.
1. Upgrade to the "dot zero" release of the next major version (`X.0.Z`).
1. Optional. Follow the [upgrade path](#upgrade-paths), and proceed with upgrading to newer releases of that major version.
......
......@@ -39,7 +39,7 @@ release if the patch release is not the latest. For example, upgrading from
14.1.1 to 14.2.0 should be safe even if 14.1.2 has been released. We do recommend
you check the release posts of any releases between your current and target
version just in case they include any migrations that may require you to upgrade
one release at a time.
We also recommend you verify the [version specific upgrading instructions](index.md#version-specific-upgrading-instructions) relevant to your [upgrade path](index.md#upgrade-paths).
......
......@@ -40,7 +40,7 @@ To view the number of merge requests merged per month:
1. On the top bar, select **Menu > Projects** and find your project.
1. On the left sidebar, select **Analytics > Merge request**.
1. Optional. Filter results:
1. Select the filter bar.
1. Select a parameter.
1. Select a value or enter text to refine the results.
......
......@@ -68,7 +68,7 @@ To view the median time spent in each stage:
- In the **From** field, select a start date.
- In the **To** field, select an end date.
1. To view the median time for each stage, above the **Filter results** text box, point to a stage.
## View the lead time and cycle time for issues
Value stream analytics shows the lead time and cycle time for issues in your project:
......@@ -116,7 +116,7 @@ To view deployment metrics, you must have a
[production environment configured](../../ci/environments/index.md#deployment-tier-of-environments).
Value stream analytics shows the following deployment metrics for your project:
- Deploys: The number of successful deployments in the date range.
- Deployment Frequency: The average number of successful deployments per day in the date range.
......@@ -174,14 +174,14 @@ This example shows a workflow through all seven stages in one day. In this
example, milestones have been created and CI for testing and setting environments is configured.
- 09:00: Create issue. **Issue** stage starts.
- 11:00: Add issue to a milestone, start work on the issue, and create a branch locally.
**Issue** stage stops and **Plan** stage starts.
- 12:00: Make the first commit.
- 12:30: Make the second commit to the branch that mentions the issue number. **Plan** stage stops and **Code** stage starts.
- 14:00: Push branch and create a merge request that contains the [issue closing pattern](../project/issues/managing_issues.md#closing-issues-automatically). **Code** stage stops and **Test** and **Review** stages start.
- The CI takes 5 minutes to run scripts defined in [`.gitlab-ci.yml`](../../ci/yaml/index.md).
**Test** stage stops.
- Review merge request.
- 19:00: Merge the merge request. **Review** stage stops and **Staging** stage starts.
- 19:30: Deployment to the `production` environment starts and finishes. **Staging** stops.
......@@ -191,7 +191,7 @@ Value stream analytics records the following times for each stage:
- **Plan**: 11:00 to 12:00: 1 hr
- **Code**: 12:00 to 14:00: 2 hrs
- **Test**: 5 minutes
- **Review**: 14:00 to 19:00: 5 hrs
- **Staging**: 19:00 to 19:30: 30 minutes
There are some additional considerations for this example:
......@@ -202,5 +202,5 @@ still collects analytics data for the issue.
as every merge request should be tested.
- This example illustrates only one cycle of multiple stages. The value
stream analytics dashboard shows the calculated median elapsed time for these issues.
- Value stream analytics identifies production environments based on the
[deployment tier of environments](../../ci/environments/index.md#deployment-tier-of-environments).
......@@ -804,7 +804,7 @@ variables:
If the value must be generated or regenerated on expiration, you can provide a program or script for
the API fuzzer to execute on a specified interval. The provided script runs in an Alpine Linux
container that has Python 3 and Bash installed.
You have to set the environment variable `FUZZAPI_OVERRIDES_CMD` to the program or script you would like
to execute. The provided command creates the overrides JSON file as defined previously.
......@@ -813,7 +813,7 @@ You might want to install other scripting runtimes like NodeJS or Ruby, or maybe
your overrides command. In this case, we recommend setting the `FUZZAPI_PRE_SCRIPT` to the file path of a script which
provides those prerequisites. The script provided by `FUZZAPI_PRE_SCRIPT` is executed once, before the analyzer starts.
See the [Alpine Linux package management](https://wiki.alpinelinux.org/wiki/Alpine_Linux_package_management)
page for information about installing Alpine Linux packages.
You must provide three CI/CD variables, each set for correct operation:
......
......@@ -8,13 +8,13 @@ info: To determine the technical writer assigned to the Stage/Group associated w
## Description
A private RFC 1918 address was identified in the target application. Public-facing websites should not issue
requests to private IP addresses. Attackers attempting to execute subsequent attacks, such as Server-Side
Request Forgery (SSRF), may be able to use this information to identify additional internal targets.
## Remediation
Identify the resource that is incorrectly specifying an internal IP address and replace it with its public-facing
version, or remove the reference from the target application.
## Details
......
......@@ -8,8 +8,8 @@ info: To determine the technical writer assigned to the Stage/Group associated w
## Description
The target web server is configured to list the contents of directories that do not contain an index file
such as `index.html`. This could lead to accidental exposure of sensitive information, or give an attacker
details on how filenames and directories are structured and stored.
## Remediation
......@@ -17,11 +17,11 @@ details on how filenames and directories are structured and stored.
Directory indexing should be disabled.
Apache:
For Apache-based websites, ensure all `<Directory>` definitions have `Options -Indexes` configured in the
`apache2.conf` or `httpd.conf` configuration file.
NGINX:
For NGINX-based websites, ensure all `location` definitions have the `autoindex off` directive set in the
`nginx.conf` file.
IIS:
......
......@@ -479,8 +479,8 @@ Follow these steps to provide the bearer token with `DAST_API_OVERRIDES_ENV`:
`{"headers":{"Authorization":"Bearer dXNlcm5hbWU6cGFzc3dvcmQ="}}` (substitute your token). You
can create CI/CD variables from the GitLab projects page at **Settings > CI/CD**, in the
**Variables** section.
Due to the format of `TEST_API_BEARERAUTH`, it's not possible to mask the variable.
To mask the token's value, you can create a second variable with the token's value, and define
`TEST_API_BEARERAUTH` with the value `{"headers":{"Authorization":"Bearer $MASKED_VARIABLE"}}`.
1. In your `.gitlab-ci.yml` file, set `DAST_API_OVERRIDES_ENV` to the variable you just created:
......@@ -876,7 +876,7 @@ variables:
If the value must be generated or regenerated on expiration, you can provide a program or script for
the DAST API scanner to execute on a specified interval. The provided command runs in an Alpine Linux
container that has Python 3 and Bash installed.
You have to set the environment variable `DAST_API_OVERRIDES_CMD` to the program or script you would like
to execute. The provided command creates the overrides JSON file as defined previously.
......@@ -885,7 +885,7 @@ You might want to install other scripting runtimes like NodeJS or Ruby, or maybe
your overrides command. In this case, we recommend setting the `DAST_API_PRE_SCRIPT` to the file path of a script which
provides those prerequisites. The script provided by `DAST_API_PRE_SCRIPT` is executed once, before the analyzer starts.
See the [Alpine Linux package management](https://wiki.alpinelinux.org/wiki/Alpine_Linux_package_management)
page for information about installing Alpine Linux packages.
You must provide three CI/CD variables, each set for correct operation:
......
......@@ -878,12 +878,12 @@ variables:
## Reports JSON format
SAST outputs a report file in JSON format. The report file contains details of all found vulnerabilities.
To download the report file, you can either:
- Download the file from the CI/CD pipelines page.
- In the pipelines tab on merge requests, set [`artifacts: paths`](../../../ci/yaml/index.md#artifactspaths) to `gl-sast-report.json`.
For information, see [Download job artifacts](../../../ci/pipelines/job_artifacts.md#download-job-artifacts).
For details of the report file's schema, see
......
......@@ -442,9 +442,9 @@ secret_detection:
### `secret-detection` job fails with `ERR fatal: ambiguous argument` message
Your `secret-detection` job can fail with an `ERR fatal: ambiguous argument` error if your
repository's default branch is unrelated to the branch the job was triggered for.
See issue [#352014](https://gitlab.com/gitlab-org/gitlab/-/issues/352014) for more details.
To resolve the issue, make sure to correctly [set your default branch](../../project/repository/branches/default.md#change-the-default-branch-name-for-a-project) on your repository. You should set it to a branch
that has related history with the branch you run the `secret-detection` job on.
......@@ -56,7 +56,7 @@ A vendor revocation receiver service integrates with a GitLab instance to receiv
a web notification and respond to leaked token requests.
To implement a receiver service to revoke leaked tokens:
1. Create a publicly accessible HTTP service matching the corresponding API contract
below. Your service should be idempotent and rate-limited.
1. When a pipeline corresponding to its revocable token type (in the example, `my_api_token`)
......
......@@ -7,7 +7,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Vulnerability Report **(ULTIMATE)**
The Vulnerability Report provides information about vulnerabilities from scans of the default branch. It contains cumulative results of all successful jobs, regardless of whether the pipeline was successful.
The scan results from a pipeline are only ingested after all the jobs in the pipeline complete. Partial results for a pipeline with jobs in progress can be seen in the pipeline security tab.
......
......@@ -21,7 +21,7 @@ Then you can run Kubernetes API commands as part of your GitLab CI/CD pipeline.
To ensure access to your cluster is safe:
- Each agent has a separate context (`kubecontext`).
- Only the project where the agent is configured, and any additional projects you authorize, can access the agent in your cluster.
You do not need to have a runner in the cluster with the agent.
......@@ -208,7 +208,7 @@ SPDY protocol.
[An issue exists](https://gitlab.com/gitlab-org/gitlab/-/issues/346248) to add support for these commands.
### Grant write permissions to `~/.kube/cache`
Tools like `kubectl`, Helm, `kpt`, and `kustomize` cache information about
the cluster in `~/.kube/cache`. If this directory is not writable, the tool fetches information on each invocation,
making interactions slower and creating unnecessary load on the cluster. For the best experience, in the
......
......@@ -226,7 +226,7 @@ To change the SAML app used for sign in:
### Migrate to a different SAML provider
You can migrate to a different SAML provider. During the migration process users will not be able to access any of the SAML groups.
To mitigate this, you can disable [SSO enforcement](#sso-enforcement).
To migrate SAML providers:
......
......@@ -51,7 +51,7 @@ Once [Group Single Sign-On](index.md) has been configured, we can:
The SAML application that was created during [Single sign-on](index.md) setup for [Azure](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/view-applications-portal) now needs to be set up for SCIM. You can refer to [Azure SCIM setup documentation](https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups#getting-started).
1. In your app, go to the Provisioning tab, and set the **Provisioning Mode** to **Automatic**.
Then fill in the **Admin Credentials**, and save. The **Tenant URL** and **secret token** are the items
retrieved in the [previous step](#gitlab-configuration).
......@@ -60,7 +60,7 @@ The SAML application that was created during [Single sign-on](index.md) setup fo
- **Settings**: We recommend setting a notification email and selecting the **Send an email notification when a failure occurs** checkbox.
You also control what is actually synced by selecting the **Scope**. For example, **Sync only assigned users and groups** only syncs the users and groups assigned to the application. Otherwise, it syncs the whole Active Directory.
- **Mappings**: We recommend keeping **Provision Azure Active Directory Users** enabled, and disabling **Provision Azure Active Directory Groups**.
Leaving **Provision Azure Active Directory Groups** enabled does not break the SCIM user provisioning, but it causes errors in Azure AD that may be confusing and misleading.
1. You can then test the connection by selecting **Test Connection**. If the connection is successful, save your configuration before moving on. See below for [troubleshooting](#troubleshooting).
......
......@@ -16,7 +16,7 @@ We recommend deleting unnecessary packages and files. This page offers examples
## Check Package Registry Storage Use
The Usage Quotas page (**Settings > Usage Quotas > Storage**) displays storage usage for Packages.
## Delete a package
......
......@@ -24,7 +24,7 @@ Project access tokens are similar to [group access tokens](../../group/settings/
and [personal access tokens](../../profile/personal_access_tokens.md), except they are
associated with a project rather than a group or user.
In self-managed instances, project access tokens are subject to the same [maximum lifetime limits](../../admin_area/settings/account_and_limit_settings.md#limit-the-lifetime-of-personal-access-tokens) as personal access tokens if the limit is set.
You can use project access tokens:
......