- [Repository storage paths](repository_storage_paths.md): Manage the paths used to store repositories.
- [Repository storage types](repository_storage_types.md): Information about the different repository storage types.
- [Repository storage Rake tasks](raketasks/storage.md): A collection of Rake tasks to list existing projects and the attachments associated with them, and to migrate them from Legacy storage to Hashed storage.
- [Limit repository size](../user/admin_area/settings/account_and_limit_settings.md): Set a hard limit for your repositories' size. **(STARTER ONLY)**
- [Static objects external storage](static_objects_external_storage.md): Set external storage for static objects in a repository.
@@ -15,7 +15,7 @@ To see all available time zones, run `bundle exec rake time:zones:all`.
For Omnibus installations, run `gitlab-rake time:zones:all`.
NOTE: **Note:**
Currently, this Rake task does not list timezones in the TZInfo format required by GitLab Omnibus during a reconfigure: [#58672](https://gitlab.com/gitlab-org/gitlab-foss/issues/58672).
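For reference, a minimal sketch of the usual Omnibus workflow: list the available zone names, set the zone in `/etc/gitlab/gitlab.rb` (the `gitlab_rails['time_zone']` setting is assumed here; verify it against your version's configuration template), and reconfigure.

```shell
# List the time zone names known to the application
sudo gitlab-rake time:zones:all

# Set the desired zone in /etc/gitlab/gitlab.rb, for example:
#   gitlab_rails['time_zone'] = 'UTC'
# then apply the change:
sudo gitlab-ctl reconfigure
```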
If you still encounter issues, try creating an index manually on the Elasticsearch
instance. The details of the index aren't important here, as we want to test if indices
...
...
@@ -220,7 +220,7 @@ during the indexing of projects. If errors do occur, they will either stem from
something you are familiar with, contact GitLab support for guidance.
- Within the Elasticsearch instance itself. See if the error is [documented and has a fix](../../integration/elasticsearch.md#troubleshooting). If not, speak with your Elasticsearch admin.
If the indexing process does not present errors, you will want to check the status of the indexed projects. You can do this via the following Rake tasks:
- [`sudo gitlab-rake gitlab:elastic:index_projects_status`](../../integration/elasticsearch.md#gitlab-elasticsearch-rake-tasks) (shows the overall status)
- [`sudo gitlab-rake gitlab:elastic:projects_not_indexed`](../../integration/elasticsearch.md#gitlab-elasticsearch-rake-tasks) (shows specific projects that are not indexed)
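For example, on an Omnibus installation both checks can be run straight from the command line (a sketch using the task names linked above):

```shell
# Overall indexing status
sudo gitlab-rake gitlab:elastic:index_projects_status

# Projects that have not been indexed
sudo gitlab-rake gitlab:elastic:projects_not_indexed
```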
@@ -725,7 +725,7 @@ GitLab Shell has a configuration file at `/home/git/gitlab-shell/config.yml`.
### Maintenance Tasks
[GitLab](https://gitlab.com/gitlab-org/gitlab/tree/master) provides Rake tasks with which you see version information and run a quick check on your configuration to ensure it is configured properly within the application. See [maintenance Rake tasks](../raketasks/maintenance.md).
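For example, two tasks commonly used for this on an Omnibus installation are (a sketch; see the linked page for the full list and for source-installation equivalents):

```shell
# Show version information for GitLab and its environment
sudo gitlab-rake gitlab:env:info

# Run a quick check of the application's configuration
sudo gitlab-rake gitlab:check
```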
@@ -25,7 +25,7 @@ Developers making significant changes to Elasticsearch queries should test their
See the [Elasticsearch GDK setup instructions](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/master/doc/howto/elasticsearch.md)
## Helpful Rake tasks
- `gitlab:elastic:test:index_size`: Tells you how much space the current index is using, as well as how many documents are in the index.
- `gitlab:elastic:test:index_size_change`: Outputs index size, reindexes, and outputs index size again. Useful when testing improvements to indexing size.
...
...
@@ -34,7 +34,7 @@ Additionally, if you need large repos or multiple forks for testing, please cons
## How does it work?
The Elasticsearch integration depends on an external indexer. We ship an [indexer written in Go](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer). The user must trigger the initial indexing via a Rake task but, after this is done, GitLab itself will trigger reindexing when required via `after_` callbacks on create, update, and destroy that are inherited from [/ee/app/models/concerns/elastic/application_versioned_search.rb](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/models/concerns/elastic/application_versioned_search.rb).
After initial indexing is complete, create, update, and delete operations for all models except projects (see [#207494](https://gitlab.com/gitlab-org/gitlab/issues/207494)) are tracked in a Redis [`ZSET`](https://redis.io/topics/data-types#sorted-sets). A regular `sidekiq-cron` `ElasticIndexBulkCronWorker` processes this queue, updating many Elasticsearch documents at a time with the [Bulk Request API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html).
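If you want to check whether that queue is accumulating entries, one hedged approach is to scan Redis for Elasticsearch-related keys rather than assuming the exact (internal) key name; this assumes `redis-cli` can reach the Redis instance GitLab uses:

```shell
# Scan for Elasticsearch-related keys (the key-name pattern is a guess)
redis-cli --scan --pattern 'elastic*'
```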
@@ -45,7 +45,7 @@ There is also an option to [import the project via GitHub](../user/project/impor
This method takes longer to import than the other methods, and the time depends on several factors. It's recommended to use the other methods instead.
### Importing via a Rake task
[`import.rake`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/tasks/gitlab/import_export/import.rake) was introduced for importing large GitLab project exports.
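A hedged example of invoking it from the Rails root (the argument order — username, namespace path, project name, path to the export archive — is how the task has typically been called; check `import.rake` for your version):

```shell
# All values below are illustrative placeholders
bundle exec rake "gitlab:import_export:import[root,some-group,imported-project,/path/to/export.tar.gz]"
```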
@@ -129,11 +129,11 @@ To run several tests inside one directory:
- `bin/rspec spec/requests/api/` for the RSpec tests if you want to test only the API
### Speed-up tests, Rake tasks, and migrations
[Spring](https://github.com/rails/spring) is a Rails application preloader. It
speeds up development by keeping your application running in the background so
you don't need to boot it every time you run a test, Rake task or migration.
If you want to use it, you'll need to export the `ENABLE_SPRING` environment
variable to `1`:
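For example (a minimal sketch; the spec path is only an illustrative placeholder):

```shell
export ENABLE_SPRING=1
bin/rspec spec/models/some_model_spec.rb
```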
...
...
@@ -247,7 +247,7 @@ To generate GraphQL documentation based on the GitLab schema, run:
```shell
bundle exec rake gitlab:graphql:compile_docs
```
In its current state, the Rake task:
- Generates output for GraphQL objects.
- Places the output at `doc/api/graphql/reference/index.md`.
...
...
@@ -270,4 +270,4 @@ To generate GraphQL schema files based on the GitLab schema, run:
```shell
bundle exec rake gitlab:graphql:schema:dump
```
This uses graphql-ruby's built-in Rake tasks to generate files in both [IDL](https://www.prisma.io/blog/graphql-sdl-schema-definition-language-6755bcb9ce51) and JSON formats.
| `Elasticsearch indexing` | Enables/disables Elasticsearch indexing. You may want to enable indexing but disable search in order to give the index time to be fully completed, for example. Also, keep in mind that this option doesn't have any impact on existing data; it only enables/disables the background indexer that tracks data changes. Enabling it does not index your existing data; use the dedicated Rake task for that, as explained in [Adding GitLab's data to the Elasticsearch index](#adding-gitlabs-data-to-the-elasticsearch-index). |
| `Search with Elasticsearch enabled` | Enables/disables using Elasticsearch in search. |
| `URL` | The URL to use for connecting to Elasticsearch. Use a comma-separated list to support clustering (e.g., `http://host1, https://host2:9200`). If your Elasticsearch instance is password protected, pass the `username:password` in the URL (e.g., `http://<username>:<password>@<elastic_host>:9200/`). |
| `Number of Elasticsearch shards` | Elasticsearch indexes are split into multiple shards for performance reasons. In general, larger indexes need to have more shards. Changes to this value do not take effect until the index is recreated. You can read more about tradeoffs in the [Elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-create-index.html#create-index-settings) |
...
...
@@ -174,7 +174,7 @@ If no namespaces or projects are selected, no Elasticsearch indexing will take p
CAUTION: **Warning**:
If you have already indexed your instance, you will have to regenerate the index in order to delete all existing data
for filtering to work correctly. To do this, run the Rake tasks `gitlab:elastic:create_empty_index` and
`gitlab:elastic:clear_index_status`. Afterwards, removing a namespace or a project from the list will delete the data
from the Elasticsearch index as expected.
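On an Omnibus installation that typically looks like the following (a sketch using the two task names above):

```shell
sudo gitlab-rake gitlab:elastic:create_empty_index
sudo gitlab-rake gitlab:elastic:clear_index_status
```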
...
...
@@ -298,7 +298,7 @@ or creating [extra Sidekiq processes](../administration/operations/extra_sidekiq
This enqueues a Sidekiq job for each project that needs to be indexed.
You can view the jobs in **Admin Area > Monitoring > Background Jobs > Queues Tab**
and click `elastic_indexer`, or you can query indexing status using a Rake task:
```shell
# Omnibus installations
...
...
@@ -402,7 +402,7 @@ For repository and snippet files, GitLab will only index up to 1 MiB of content,
## GitLab Elasticsearch Rake Tasks
There are several Rake tasks available to you via the command line:
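As a non-exhaustive sketch, these include the tasks referenced elsewhere on this page:

```shell
sudo gitlab-rake gitlab:elastic:index_projects_status   # overall indexing status
sudo gitlab-rake gitlab:elastic:projects_not_indexed    # projects that are not indexed
sudo gitlab-rake gitlab:elastic:create_empty_index      # create an empty index
sudo gitlab-rake gitlab:elastic:clear_index_status      # reset the stored indexing status
sudo gitlab-rake gitlab:elastic:clear_locked_projects   # unlock projects stuck mid-indexing
```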
@@ -550,7 +550,7 @@ Here are some common pitfalls and how to overcome them:
- **I indexed all the repositories but then switched Elasticsearch servers and now I can't find anything**
You will need to re-run all the Rake tasks to re-index the database, repositories, and wikis.
- **The indexing process is taking a very long time**
...
...
@@ -564,7 +564,7 @@ Here are some common pitfalls and how to overcome them:
When performing the initial indexing of blobs, we lock all projects until the project finishes indexing. It could
happen that an error during the process causes one or multiple projects to remain locked. In order to unlock them,
run the `gitlab:elastic:clear_locked_projects` Rake task.
- **"Can't specify parent if no parent field has been configured"**
...
...
@@ -632,7 +632,7 @@ Here are some common pitfalls and how to overcome them:
```
You probably have not used either `http://` or `https://` as part of your value in the **"URL"** field of the Elasticsearch Integration Menu. Please make sure you are using either `http://` or `https://` in this field, as the [Elasticsearch client for Go](https://github.com/olivere/elastic) that we are using [needs the prefix for the URL to be accepted as valid](https://github.com/olivere/elastic/commit/a80af35aa41856dc2c986204e2b64eab81ccac3a).
Once you have corrected the formatting of the URL, delete the index (via the [dedicated Rake task](#gitlab-elasticsearch-rake-tasks)) and [reindex the content of your instance](#adding-gitlabs-data-to-the-elasticsearch-index).
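On an Omnibus installation, that sequence is roughly the following (a sketch; `gitlab:elastic:delete_index` and `gitlab:elastic:index` are the task names commonly used for these steps, so confirm them against the Rake task list above):

```shell
# Delete the existing index, then rebuild everything
sudo gitlab-rake gitlab:elastic:delete_index
sudo gitlab-rake gitlab:elastic:index
```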
Similarly to the Kubernetes case, if you have scaled out your GitLab
cluster to use multiple application servers, you should pick a
designated node (that won't be auto-scaled away) for running the
backup Rake task. Because the backup Rake task is tightly coupled to
the main Rails application, this is typically a node on which you're
also running Unicorn/Puma and/or Sidekiq.
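For reference, the backup itself is then typically created on that designated node with (a sketch for Omnibus installations):

```shell
sudo gitlab-rake gitlab:backup:create
```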
...
...
@@ -171,7 +171,7 @@ the GitLab container according to the documentation, it should be under
`/srv/gitlab/config`.
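For a Docker-based installation, a hedged sketch of capturing that directory (adjust the path if you mounted the configuration elsewhere):

```shell
# Archive the mounted configuration directory
tar -czf gitlab-config-backup.tar.gz -C /srv/gitlab config
```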
For [GitLab Helm chart Installations](https://gitlab.com/gitlab-org/charts/gitlab) on a
Kubernetes cluster, you must follow the [Backup the secrets](https://docs.gitlab.com/charts/backup-restore/backup.html#backup-the-secrets) instructions.
You may also want to back up any TLS keys and certificates, and your
@@ -819,7 +819,7 @@ a Kubernetes cluster, the restore task expects the restore directories to be emp
However, with Docker and Kubernetes volume mounts, some system-level directories
may be created at the volume roots, like the `lost+found` directory found in Linux
operating systems. These directories are usually owned by `root`, which can
cause access permission errors since the restore Rake task runs as the `git` user.
So, to restore a GitLab installation, users have to confirm the restore target
directories are empty.
...
...
@@ -875,8 +875,8 @@ to export / backup your data yourself from GitLab.com.
Issues are stored in the database. They can't be stored in Git itself.
To migrate your repositories from one server to another with an up-to-date version of
GitLab, you can use the [import Rake task](import.md) to do a mass import of the
repository. Note that if you do an import Rake task, rather than a backup restore, you
will have all your repositories, but not any other data.
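A hedged sketch of that on an Omnibus installation (the `gitlab:import:repos` task name and the path convention come from the linked import documentation; double-check both for your version):

```shell
# The path below is an illustrative placeholder for the directory of bare repositories
sudo gitlab-rake "gitlab:import:repos['/var/opt/gitlab/git-data/repository-import']"
```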
## Troubleshooting
...
...
@@ -893,7 +893,7 @@ psql:/var/opt/gitlab/backups/db/database.sql:2933: WARNING: no privileges were
Be advised that the backup is successfully restored in spite of these warnings.
The Rake task runs this as the `gitlab` user, which does not have superuser access to the database. When the restore is initiated, it also runs as the `gitlab` user, but it also tries to alter the objects it does not have access to.
Those objects have no influence on the database backup/restore but they give this annoying warning.
For more information, see similar questions on the PostgreSQL issue tracker [here](https://www.postgresql.org/message-id/201110220712.30886.adrian.klaver@gmail.com) and [here](https://www.postgresql.org/message-id/2039.1177339749@sss.pgh.pa.us), as well as on [Stack Overflow](https://stackoverflow.com/questions/4368789/error-must-be-owner-of-language-plpgsql).
@@ -9,7 +9,7 @@ GitLab provides Rake tasks for cleaning up GitLab instances.
DANGER: **Danger:**
Do not run this within 12 hours of a GitLab upgrade. This ensures that all background migrations have finished; running the task earlier may lead to data loss.
When you remove LFS files from a repository's history, they become orphaned and continue to consume disk space. With this Rake task, you can remove invalid references from the database, which
will allow garbage collection of LFS files.
For example:
...
...
@@ -36,7 +36,7 @@ By default, this task does not delete anything but shows how many file reference
delete. Run the command with `DRY_RUN=false` if you actually want to
delete the references. You can also use the `LIMIT={number}` parameter to limit the number of deleted references.
Note that this Rake task only removes the references to LFS files. Unreferenced LFS files will be garbage-collected
later (once a day). If you need to garbage collect them immediately, run
`rake gitlab:cleanup:orphan_lfs_files` described below.
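For example, to trigger that immediate cleanup on an Omnibus installation (a sketch using the task named above):

```shell
sudo gitlab-rake gitlab:cleanup:orphan_lfs_files
```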
@@ -878,7 +878,7 @@ If the "No data found" screen continues to appear, it could be due to:
are not labeled correctly. To test this, connect to the Prometheus server and
[run a query](prometheus_library/kubernetes.md#metrics-supported), replacing `$CI_ENVIRONMENT_SLUG`
with the name of your environment.
- You may need to re-add the GitLab predefined common metrics. This can be done by running the [import common metrics Rake task](../../../administration/raketasks/maintenance.md#import-common-metrics).
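If that last step is needed, the task is typically run as follows on an Omnibus installation (a sketch; `metrics:setup_common_metrics` is the name the linked maintenance page has used for it, so confirm it for your version):

```shell
sudo gitlab-rake metrics:setup_common_metrics
```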