author     GitLab Bot <gitlab-bot@gitlab.com>  2019-10-16 03:06:12 +0000
committer  GitLab Bot <gitlab-bot@gitlab.com>  2019-10-16 03:06:12 +0000
commit     f155cc9034f2247c5d368f9b0212ad44248b0c5e (patch)
tree       902480293b665d74a337aeae6a0521104f561988 /doc
parent     c920712fab6abdc37de9444e6bbcd170c295b21a (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc')
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md  18
-rw-r--r--  doc/administration/geo/replication/index.md  1
-rw-r--r--  doc/administration/geo/replication/object_storage.md  49
-rw-r--r--  doc/administration/troubleshooting/postgresql.md  24
-rw-r--r--  doc/ci/yaml/README.md  52
-rw-r--r--  doc/user/analytics/productivity_analytics.md  3
6 files changed, 82 insertions, 65 deletions
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index 75e07bcf863..8fee172ec64 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -43,23 +43,14 @@ will go smoothly.
### Object storage
-Some classes of non-repository data can use object storage in preference to
-file storage. Geo [does not replicate data in object storage](../replication/object_storage.md),
-leaving that task up to the object store itself. For a planned failover, this
-means you can decouple the replication of this data from the failover of the
-GitLab service.
-
-If you're already using object storage, simply verify that your **secondary**
-node has access to the same data as the **primary** node - they must either share the
-same object storage configuration, or the **secondary** node should be configured to
-access a [geographically-replicated][os-repl] copy provided by the object store
-itself.
-
If you have a large GitLab installation or cannot tolerate downtime, consider
[migrating to Object Storage][os-conf] **before** scheduling a planned failover.
Doing so reduces both the length of the maintenance window, and the risk of data
loss as a result of a poorly executed planned failover.
+In GitLab 12.4, you can optionally allow GitLab to manage replication of Object Storage for
+**secondary** nodes. For more information, see [Object Storage replication][os-conf].
+
### Review the configuration of each **secondary** node
Database settings are automatically replicated to the **secondary** node, but the
@@ -224,5 +215,4 @@ Don't forget to remove the broadcast message after failover is complete.
[background-verification]: background_verification.md
[limitations]: ../replication/index.md#current-limitations
[moving-repositories]: ../../operations/moving_repositories.md
-[os-conf]: ../replication/object_storage.md#configuration
-[os-repl]: ../replication/object_storage.md#replication
+[os-conf]: ../replication/object_storage.md
diff --git a/doc/administration/geo/replication/index.md b/doc/administration/geo/replication/index.md
index bcad820531c..0e1ae2a2628 100644
--- a/doc/administration/geo/replication/index.md
+++ b/doc/administration/geo/replication/index.md
@@ -283,7 +283,6 @@ You can keep track of the progress to include the missing items in:
| [Maven Packages](../../../user/packages/maven_repository/index.md) | No | No |
| [Conan Packages](../../../user/packages/conan_repository/index.md) | No | No |
| [External merge request diffs](../../merge_request_diffs.md) | No, if they are on-disk | No |
-| Content in object storage ([track progress](https://gitlab.com/groups/gitlab-org/-/epics/1526)) | No | No |
1. The integrity can be verified manually using [Integrity Check Rake Task](../../raketasks/check.md) on both nodes and comparing the output between them.
diff --git a/doc/administration/geo/replication/object_storage.md b/doc/administration/geo/replication/object_storage.md
index 878b67a8f8e..c85005a3f91 100644
--- a/doc/administration/geo/replication/object_storage.md
+++ b/doc/administration/geo/replication/object_storage.md
@@ -1,16 +1,30 @@
# Geo with Object storage **(PREMIUM ONLY)**
-Geo can be used in combination with Object Storage (AWS S3, or
-other compatible object storage).
+Geo can be used in combination with Object Storage (AWS S3, or other compatible object storage).
-## Configuration
+Currently, **secondary** nodes can use either:
-At this time it is required that if object storage is enabled on the
-**primary** node, it must also be enabled on each **secondary** node.
+- The same storage bucket as the **primary** node.
+- A replicated storage bucket.
-**Secondary** nodes can use the same storage bucket as the **primary** node, or
-they can use a replicated storage bucket. At this time GitLab does not
-take care of content replication in object storage.
+To have:
+
+- GitLab manage replication, follow [Enabling GitLab replication](#enabling-gitlab-managed-object-storage-replication).
+- Third-party services manage replication, follow [Third-party replication services](#third-party-replication-services).
+
+## Enabling GitLab managed object storage replication
+
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/issues/10586) in GitLab 12.4.
+
+**Secondary** nodes can replicate files stored on the **primary** node regardless of
+whether they are stored on the local filesystem or in object storage.
+
+To enable GitLab replication, you must:
+
+1. Go to **Admin Area > Geo**.
+1. Press **Edit** on the **secondary** node.
+1. Enable the **Allow this secondary node to replicate content on Object Storage**
+ checkbox.
For LFS, follow the documentation to
[set up LFS object storage](../../../workflow/lfs/lfs_administration.md#storing-lfs-objects-in-remote-object-storage).
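
For source installations, the LFS settings referenced above live in `config/gitlab.yml`. A minimal sketch, assuming an S3 bucket named `lfs-objects` and placeholder credentials (all values here are illustrative, not from this commit):

```yaml
# config/gitlab.yml (source installs) -- placeholder values throughout
lfs:
  enabled: true
  object_store:
    enabled: true
    remote_directory: lfs-objects                  # bucket name (assumption)
    connection:
      provider: AWS
      aws_access_key_id: AWS_ACCESS_KEY_ID         # placeholder
      aws_secret_access_key: AWS_SECRET_ACCESS_KEY # placeholder
      region: eu-central-1
```

Omnibus installs set the equivalent keys in `/etc/gitlab/gitlab.rb`; see the linked LFS documentation for the authoritative settings.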
@@ -20,12 +34,21 @@ For CI job artifacts, there is similar documentation to configure
For user uploads, there is similar documentation to configure [upload object storage](../../uploads.md#using-object-storage-core-only)
-You should enable and configure object storage on both **primary** and **secondary**
-nodes. Migrating existing data to object storage should be performed on the
-**primary** node only. **Secondary** nodes will automatically notice that the migrated
-files are now in object storage.
+If you want to migrate the **primary** node's files to object storage, you can
+configure the **secondary** node in a few ways:
+
+- Use the exact same object storage.
+- Use a separate object store but leverage your object storage solution's built-in
+ replication.
+- Use a separate object store and enable the **Allow this secondary node to replicate
+ content on Object Storage** setting.
+
+GitLab does not currently support the case where both:
+
+- The **primary** node uses local storage.
+- A **secondary** node uses object storage.
-## Replication
+## Third-party replication services
When using Amazon S3, you can use
[CRR](https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html) to
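
As an illustrative sketch only (not part of this commit), a CRR rule can be declared in CloudFormation. The bucket names and role ARN below are hypothetical, and the destination bucket must already exist in another region with versioning enabled:

```yaml
# Hypothetical CloudFormation sketch: replicate a primary GitLab bucket cross-region
Resources:
  PrimaryObjectsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: gitlab-primary-objects                   # placeholder
      VersioningConfiguration:
        Status: Enabled                                    # CRR requires versioning on both buckets
      ReplicationConfiguration:
        Role: arn:aws:iam::123456789012:role/s3-crr-role   # placeholder role ARN
        Rules:
          - Id: replicate-to-secondary
            Status: Enabled
            Prefix: ''                                     # replicate all objects
            Destination:
              Bucket: arn:aws:s3:::gitlab-secondary-objects # pre-existing, versioned
```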
diff --git a/doc/administration/troubleshooting/postgresql.md b/doc/administration/troubleshooting/postgresql.md
index 3bbc3f23d83..f427cd88ce0 100644
--- a/doc/administration/troubleshooting/postgresql.md
+++ b/doc/administration/troubleshooting/postgresql.md
@@ -31,30 +31,30 @@ This section is for links to information elsewhere in the GitLab documentation.
- Destructively reseeding the GitLab database.
- Guidance around updating packaged PostgreSQL, including how to stop it happening automatically.
-- [More about external PostgreSQL](/ee/administration/external_database.html)
+- [More about external PostgreSQL](../external_database.md)
-- [Running GEO with external PostgreSQL](/ee/administration/geo/replication/external_database.html)
+- [Running GEO with external PostgreSQL](../geo/replication/external_database.md)
- [Upgrades when running PostgreSQL configured for HA.](https://docs.gitlab.com/omnibus/settings/database.html#upgrading-a-gitlab-ha-cluster)
-- Consuming PostgreSQL from [within CI runners](/ee/ci/services/postgres.html)
+- Consuming PostgreSQL from [within CI runners](../../ci/services/postgres.md)
-- [Using Slony to update PostgreSQL](/ee/update/upgrading_postgresql_using_slony.html)
+- [Using Slony to update PostgreSQL](../../update/upgrading_postgresql_using_slony.md)
- Uses replication to handle PostgreSQL upgrades - provided the schemas are the same.
- Reduces downtime to a short window for swinging over to the newer version.
- Managing Omnibus PostgreSQL versions [from the development docs](https://docs.gitlab.com/omnibus/development/managing-postgresql-versions.html)
-- [PostgreSQL scaling and HA](/ee/administration/high_availability/database.html)
- - including [troubleshooting](/ee/administration/high_availability/database.html#troubleshooting) gitlab-ctl repmgr-check-master and pgbouncer errors
+- [PostgreSQL scaling and HA](../high_availability/database.md)
+ - including [troubleshooting](../high_availability/database.md#troubleshooting) `gitlab-ctl repmgr-check-master` and `pgbouncer` errors
-- [Developer database documentation](/ee/development/README.html#database-guides) - some of which is absolutely not for production use. Including:
+- [Developer database documentation](../../development/README.md#database-guides) - some of which is absolutely not for production use. Including:
- understanding EXPLAIN plans
### Troubleshooting/Fixes
-- [GitLab database requirements](/ee/install/requirements.html#database) including
- - Support for MySQL was removed in GitLab 12.1; [migrate to PostgreSQL](/ee/update/mysql_to_postgresql.html)
+- [GitLab database requirements](../../install/requirements.md#database) including
+ - Support for MySQL was removed in GitLab 12.1; [migrate to PostgreSQL](../../update/mysql_to_postgresql.md)
- required extension `pg_trgm`
- required extension `postgres_fdw` for Geo
@@ -71,7 +71,7 @@ pg_basebackup: could not create temporary replication slot "pg_basebackup_12345"
HINT: Free one or increase max_replication_slots.
```
-- GEO [replication errors](/ee/administration/geo/replication/troubleshooting.html#fixing-replication-errors) including:
+- GEO [replication errors](../geo/replication/troubleshooting.md#fixing-replication-errors) including:
```
ERROR: replication slots can only be used if max_replication_slots > 0
@@ -83,11 +83,11 @@ Command exceeded allowed execution time
PANIC: could not write to file "pg_xlog/xlogtemp.123": No space left on device
```
-- [Checking GEO configuration](/ee/administration/geo/replication/troubleshooting.html#checking-configuration) including
+- [Checking GEO configuration](../geo/replication/troubleshooting.md#checking-configuration) including
- reconfiguring hosts/ports
- checking and fixing user/password mappings
-- [Common GEO errors](/ee/administration/geo/replication/troubleshooting.html#fixing-common-errors)
+- [Common GEO errors](../geo/replication/troubleshooting.md#fixing-common-errors)
## Support topics
diff --git a/doc/ci/yaml/README.md b/doc/ci/yaml/README.md
index 114581e1e5d..628c3f01043 100644
--- a/doc/ci/yaml/README.md
+++ b/doc/ci/yaml/README.md
@@ -1087,7 +1087,7 @@ Manual actions are considered to be write actions, so permissions for
[protected branches](../../user/project/protected_branches.md) are used when
a user wants to trigger an action. In other words, in order to trigger a manual
action assigned to a branch that the pipeline is running for, the user needs to
have the ability to merge to this branch. It is possible to use protected environments
-to more strictly [protect manual deployments](#protecting-manual-jobs) from being
+to more strictly [protect manual deployments](#protecting-manual-jobs-premium) from being
run by unauthorized users.
NOTE: **Note:**
@@ -1095,36 +1095,38 @@ Using `when:manual` and `trigger` together results in the error `jobs:#{job-name} when
should be on_success, on_failure or always`, because `when:manual` prevents triggers
being used.
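
For illustration (not from this commit), the rejected combination looks like this, with a hypothetical downstream project path:

```yaml
# Invalid: `when: manual` cannot be combined with `trigger` in this GitLab version
staging:
  stage: deploy
  trigger: my-group/my-deployment-project   # hypothetical downstream project
  when: manual   # rejected with the `when should be on_success, on_failure or always` error
```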
-##### Protecting manual jobs
+##### Protecting manual jobs **(PREMIUM)**
It's possible to use [protected environments](../environments/protected_environments.md)
to define a precise list of users authorized to run a manual job. By allowing only
users associated with a protected environment to trigger manual jobs, it is possible
to implement some special use cases, such as:
-- more precisely limiting who can deploy to an environment.
-- enabling a pipeline to be blocked until an approved user "approves" it.
-
-To do this, you must add an environment to the job. For example:
-
-```yaml
-deploy_prod:
- stage: deploy
- script:
- - echo "Deploy to production server"
- environment:
- name: production
- url: https://example.com
- when: manual
- only:
- - master
-```
-
-Then, in the [protected environments settings](../environments/protected_environments.md#protecting-environments),
-select the environment (`production` in the example above) and add the users, roles or groups
-that are authorized to trigger the manual job to the **Allowed to Deploy** list. Only those in
-this list will be able to trigger this manual job, as well as GitLab admins who are always able
-to use protected environments.
+- More precisely limiting who can deploy to an environment.
+- Enabling a pipeline to be blocked until an approved user "approves" it.
+
+To do this, you must:
+
+1. Add an `environment` to the job. For example:
+
+ ```yaml
+ deploy_prod:
+ stage: deploy
+ script:
+ - echo "Deploy to production server"
+ environment:
+ name: production
+ url: https://example.com
+ when: manual
+ only:
+ - master
+ ```
+
+1. In the [protected environments settings](../environments/protected_environments.md#protecting-environments),
+ select the environment (`production` in the example above) and add the users, roles or groups
+ that are authorized to trigger the manual job to the **Allowed to Deploy** list. Only those in
+ this list will be able to trigger this manual job, as well as GitLab administrators
+ who are always able to use protected environments.
Additionally, if a manual job is defined as blocking by adding `allow_failure: false`,
the next stages of the pipeline will not run until the manual job is triggered. This
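
A minimal sketch of such a blocking manual job (the job name, script, and URL are illustrative):

```yaml
# Blocking manual gate: later stages wait until someone triggers this job
deploy_prod:
  stage: deploy
  script:
    - echo "Deploy to production server"
  environment:
    name: production
    url: https://example.com    # illustrative URL
  when: manual
  allow_failure: false          # makes the manual job blocking
```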
diff --git a/doc/user/analytics/productivity_analytics.md b/doc/user/analytics/productivity_analytics.md
index bd3cbd62ba0..87a92fcbf2a 100644
--- a/doc/user/analytics/productivity_analytics.md
+++ b/doc/user/analytics/productivity_analytics.md
@@ -21,6 +21,7 @@ Productivity Analytics allows GitLab users to:
- Visualize typical merge request (MR) lifetime and statistics. Use a histogram that shows the distribution of the time elapsed between creating and merging merge requests.
- Drill down into the most time consuming merge requests, select a number of outliers, and filter down all subsequent charts to investigate potential causes.
- Filter by group, project, author, label, milestone, or a specific date range. Filter down, for example, to the merge requests of a specific author in a group or project during a milestone or specific date range.
+- Measure velocity over time. Visualize the trends of each metric from the charts above in order to observe progress. Zoom in on a particular date range if you notice outliers.
## Accessing metrics and visualizations
@@ -40,6 +41,8 @@ The following metrics and visualizations are available on a project or group lev
- Number of commits per merge request.
- Number of lines of code per commit.
- Number of files touched.
+- Scatterplot showing all MRs merged on a certain date, together with the days it took to complete the action and a 30-day rolling median.
+ - Users can zoom in and out on specific days of interest.
- Table showing the list of merge requests with their respective time duration metrics.
- Users can sort by any of the above metrics.