> - Geo is part of [GitLab Premium][ee]
> - Introduced in GitLab Enterprise Edition 8.9
> We recommend you use it with at least GitLab Enterprise Edition 10.0 for
> basic Geo features, or the latest version for a better experience
> - You should make sure that all nodes run the same GitLab version
> - Geo requires PostgreSQL 9.6 and Git 2.9 in addition to GitLab's usual
> [minimum requirements][install-requirements]
> - Using Geo in combination with High Availability (HA) is considered **Generally Available** (GA) in GitLab Enterprise Edition 10.4
>
>**Note:**
> Geo changes significantly from release to release. Upgrades **are**
> supported and [documented](#updating-the-geo-nodes), but you should ensure that
> you're following the right version of the documentation for your installation!
> The best way to do this is to follow the documentation from the `/help` endpoint
> on your **primary** node, but you can also navigate to [this page on GitLab.com](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/doc/gitlab-geo/README.md)
> and choose the appropriate release from the `tags` dropdown, e.g., `v10.0.0-ee`.
Geo allows you to replicate your GitLab instance to other geographical
locations as a read-only fully operational version.
...
...
Read how to [replicate the Container Registry][docker-registry].
extra limitations may be in place.
- Pushing code to a secondary redirects the request to the primary instead of handling it directly [gitlab-ee#1381](https://gitlab.com/gitlab-org/gitlab-ee/issues/1381):
  * Pushing via HTTP and SSH is supported
  * Git LFS is also supported
- The primary node has to be online for OAuth login to happen (existing sessions and Git are not affected)
- The installation takes multiple manual steps that together can take about an hour depending on circumstances; we are
working on improving this experience, see [gitlab-org/omnibus-gitlab#2978] for details.
PostgreSQL. This is the database that will be installed if you use the
Omnibus package to manage your database.
> Important notes:
> - This document will focus only on configuration supported with [GitLab Premium](https://about.gitlab.com/pricing/), using the Omnibus GitLab package.
> - If you are a Community Edition or Starter user, consider using a cloud hosted solution.
> - This document will not cover installations from source.
>
> - If HA setup is not what you were looking for, see the [database configuration document](http://docs.gitlab.com/omnibus/settings/database.html)
> for the Omnibus GitLab packages.
## Configure your own database server
...
...
If you use a cloud-managed service, or provide your own PostgreSQL:
## Configure using Omnibus for High Availability
> Please read this document fully before attempting to configure PostgreSQL HA
> for GitLab.
>
> This configuration is GA in EE 10.2.
The recommended configuration for a PostgreSQL HA requires:
- A minimum of three database nodes
  - Each node will run the following services:
    - `PostgreSQL` - The database itself
    - `repmgrd` - A service to monitor, and handle failover in case of a failure
    - `Consul` agent - Used for service discovery, to alert other nodes when failover occurs
- A minimum of three `Consul` server nodes
- A minimum of one `pgbouncer` service node
...
...
@@ -59,12 +56,12 @@ otherwise the networks will become a single point of failure.
Database nodes run two services besides PostgreSQL:
1. Repmgrd -- monitors the cluster and handles failover in case of an issue with the master
   The failover consists of
   * Selecting a new master for the cluster
   * Promoting the new node to master
   * Instructing remaining servers to follow the new master node
   On failure, the old master node is automatically evicted from the cluster, and should be rejoined manually once recovered.
1. Consul -- Monitors the status of each node in the database cluster, and tracks its health in a service definition on the Consul cluster.
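For reference, a quick way to check which node repmgr currently considers the master is the `gitlab-ctl repmgr` wrapper shipped with the Omnibus package; a minimal sketch (the exact subcommand output may vary between versions):

```sh
# Run on any database node to display the current cluster topology
# (master and standby roles) as seen by repmgr.
sudo gitlab-ctl repmgr cluster show
```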
...
...
Similarly, PostgreSQL access is controlled based on the network source.
This is why you will need:
> IP address of each node's network interface
> - This can be set to `0.0.0.0` to listen on all interfaces. It cannot
>   be set to the loopback address `127.0.0.1`
>
> Network Address
> - This can be in subnet (i.e. `192.168.0.0/255.255.255.0`) or CIDR (i.e.
> `192.168.0.0/24`) form.
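As an illustration only, these two pieces of network information typically end up in `/etc/gitlab/gitlab.rb` along the following lines; the addresses are placeholders for your own interface and subnet:

```
# Illustrative sketch only -- not a complete HA configuration.
# Listen on all interfaces; this cannot be the loopback address 127.0.0.1.
postgresql['listen_address'] = '0.0.0.0'

# Replace with the network address of your database nodes, in CIDR form.
postgresql['trust_auth_cidr_addresses'] = %w(192.168.0.0/24 127.0.0.1/32)
```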
#### User information
...
...
When using the default setup, the minimum configuration requires:
- `CONSUL_USERNAME`. Defaults to `gitlab-consul`
- `CONSUL_DATABASE_PASSWORD`. Password for the database user.
- `CONSUL_PASSWORD_HASH`. This is a hash generated out of the Consul username/password pair.
Can be generated with:
```sh
sudo gitlab-ctl pg-password-md5 CONSUL_USERNAME
```
- `CONSUL_SERVER_NODES`. The IP addresses or DNS records of the Consul server nodes.
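As a sketch, the Consul agent on each database node is pointed at those server nodes in `/etc/gitlab/gitlab.rb`; the addresses below are placeholders for your `CONSUL_SERVER_NODES`:

```
# Illustrative sketch only -- substitute the IP addresses or DNS records
# of your own Consul server nodes.
consul['configuration'] = {
  retry_join: %w(10.10.10.1 10.10.10.2 10.10.10.3)
}
```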
A few notes on the service itself:
...
...
This is used to prevent replication from using up all of the
available database connections.
> Note:
> - In this document we are assuming 3 database nodes, which makes this configuration:
```
postgresql['max_wal_senders'] = 4
...
...
We will need the following password information for the application's database user:
- `POSTGRESQL_USERNAME`. Defaults to `gitlab`
- `POSTGRESQL_USER_PASSWORD`. The password for the database user
- `POSTGRESQL_PASSWORD_HASH`. This is a hash generated out of the username/password pair.
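As with the Consul user above, this hash can be generated with `gitlab-ctl pg-password-md5`, passing the database username (which defaults to `gitlab`):

```sh
sudo gitlab-ctl pg-password-md5 POSTGRESQL_USERNAME
```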
> **Note:** It is best to set the `uid` and `gid`s prior to the initial reconfigure of GitLab. Omnibus will not recursively `chown` directories if set after the initial reconfigure.
See our [HA documentation for PostgreSQL](database.md) for information on running
1. Generate SQL_USER_PASSWORD_HASH with the command `gitlab-ctl pg-password-md5 gitlab`. We'll also need to enter the plaintext SQL_USER_PASSWORD later
1. On your database node, ensure the following is set in your `/etc/gitlab/gitlab.rb`
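   As an illustration only, the kind of settings involved might look like the following sketch (placeholder values; key names assume a standard Omnibus installation):

```
# Illustrative sketch only -- adjust the values to your environment.
postgresql['listen_address'] = '0.0.0.0'
postgresql['sql_user_password'] = 'SQL_USER_PASSWORD_HASH'
# Replace with the address(es) of your application servers, in CIDR form.
postgresql['md5_auth_cidr_addresses'] = %w(APPLICATION_SERVER_IP/32)
gitlab_rails['auto_migrate'] = false
```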
> * [Introduced](https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/4466) in [GitLab Ultimate](https://about.gitlab.com/pricing/) 10.6.
>
> * ChatOps is currently in alpha, with some important features missing like access control.
GitLab ChatOps provides a method to interact with CI/CD jobs through chat services like Slack. Many organizations' discussion, collaboration, and troubleshooting take place in chat services these days, and having a method to run CI/CD jobs with output posted back to the channel can significantly augment a team's workflow.
you may need to enable pipeline triggering in your project's
## Pipelines
A pipeline is a group of [jobs] that get executed in [stages].
All of the jobs in a stage are executed in parallel (if there are enough
concurrent [Runners]), and if they all succeed, the pipeline moves on to the
next stage. If one of the jobs fails, the next stage is not (usually)
...
...
There are three types of pipelines that often use the single shorthand of "pipeline":
![Types of Pipelines](img/types-of-pipelines.svg)
1. **CI Pipeline**: Build and test stages defined in `.gitlab-ci.yml`.
1. **Deploy Pipeline**: Deploy stage(s) defined in `.gitlab-ci.yml`. The flow of deploying code to servers through various stages: e.g. development to staging to production.
1. **Project Pipeline**: Cross-project CI dependencies [triggered via API][triggers], particularly for micro-services, but also for complicated build dependencies: e.g. api -> front-end, ce/ee -> omnibus.
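For example, a minimal `.gitlab-ci.yml` combining CI and deploy stages might look like the following sketch (job names, scripts, and the environment are placeholders):

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compiling the application"

test-job:
  stage: test
  script:
    - echo "Running the test suite"

deploy-job:
  stage: deploy
  script:
    - echo "Deploying to production"
  environment:
    name: production
  only:
    - master
```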
## Development workflows
Pipelines accommodate several development workflows:
1. **Branch Flow** (e.g. different branch for dev, qa, staging, production).
1. **Trunk-based Flow** (e.g. feature branches and single master branch, possibly with tags for releases).
1. **Fork-based Flow** (e.g. merge requests come from forks).
Example continuous delivery flow:
...
...
Pipelines are defined in `.gitlab-ci.yml` by specifying [jobs] that run in
See the reference [documentation for jobs](yaml/README.md#jobs).
## Manually executing pipelines
Pipelines can be manually executed, with predefined or manually-specified [variables](variables/README.md).
To execute a pipeline manually:
1. Navigate to your project's **CI/CD > Pipelines**.
1. Click on the **Run Pipeline** button.
1. Select the branch to run the pipeline for and enter any environment variables required for the pipeline run.
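The same can be done through the Pipelines API; as a sketch, assuming a personal access token and `1` as a placeholder project ID:

```sh
# Create a new pipeline for the given ref.
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/1/pipeline?ref=master"
```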
## Seeing pipeline status
You can find the current and historical pipeline runs under your project's
...
...
Then, there is the pipeline mini graph which takes less space and can give you a
quick glance if all jobs pass or something failed. The pipeline mini graph can
be found when you visit:
- The pipelines index page.
- A single commit page.
- A merge request page.
That way, you can see all related jobs for a single commit and the net result
of each stage of your pipeline. This allows you to quickly see what failed and
...
...
jobs. Click to expand them.
The basic requirement is that there are two numbers separated with one of
the following (you can even use them interchangeably):
- A space (` `)
- A slash (`/`)
- A colon (`:`)
>**Note:**
More specifically, [it uses][regexp] this regular expression: `\d+[\s:\/\\]+\d+\s*`.
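For instance, jobs named as in the following sketch would be grouped together in the pipeline graph (the names and scripts are arbitrary examples):

```yaml
test 1/3:
  stage: test
  script: rspec spec/models

test 2/3:
  stage: test
  script: rspec spec/controllers

test 3/3:
  stage: test
  script: rspec spec/features
```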
...
...
A strict security model is enforced when pipelines are executed on protected branches.
The following actions are allowed on protected branches only if the user is
[allowed to merge or push](../user/project/protected_branches.md#using-the-allowed-to-merge-and-allowed-to-push-settings)
on that specific branch:
- Run **manual pipelines** (using [Web UI](#manually-executing-pipelines) or Pipelines API).
- Run **scheduled pipelines**.
- Run pipelines using **triggers**.
- Trigger **manual actions** on existing pipelines.
- **Retry/cancel** existing jobs (using Web UI or Pipelines API).
**Variables** marked as **protected** are accessible only to jobs that
run on protected branches, preventing untrusted users from getting unintended access to
expect(json_response['error']).to eql('secret_token is missing, data is missing, data[gl_id] is missing, data[primary_repo] is missing, output is missing')