Diffstat (limited to 'doc')
-rw-r--r--  doc/administration/audit_events.md              |   1
-rw-r--r--  doc/administration/high_availability/README.md  |  49
-rw-r--r--  doc/administration/scaling/index.md             |  73
-rw-r--r--  doc/administration/troubleshooting/ssl.md       |  33
-rw-r--r--  doc/development/documentation/styleguide.md     |   8
-rw-r--r--  doc/install/aws/index.md                        | 295
-rw-r--r--  doc/user/project/integrations/prometheus.md     |  15
7 files changed, 300 insertions, 174 deletions
diff --git a/doc/administration/audit_events.md b/doc/administration/audit_events.md
index aa70890d3cd..26b4434de77 100644
--- a/doc/administration/audit_events.md
+++ b/doc/administration/audit_events.md
@@ -41,6 +41,7 @@ From there, you can see the following actions:
- Group created or deleted
- Group changed visibility
- User was added to group and with which [permissions]
+- User sign-in via [Group SAML](../user/group/saml_sso/index.md)
- Permissions changes of a user assigned to a group
- Removed user from group
- Project added to group and with which visibility level
diff --git a/doc/administration/high_availability/README.md b/doc/administration/high_availability/README.md
index b5258a66e59..4734df324e0 100644
--- a/doc/administration/high_availability/README.md
+++ b/doc/administration/high_availability/README.md
@@ -2,18 +2,15 @@
type: reference, concepts
---
-# Scaling and High Availability
+# High Availability
-GitLab supports a number of scaling options to ensure that your self-managed
-instance is able to scale out to meet your organization's needs when scaling up
-is no longer practical or feasible.
-
-GitLab also offers high availability options for organizations that require
+GitLab offers high availability options for organizations that require
the fault tolerance and redundancy necessary to maintain high-uptime operations.
-Scaling and high availability can be tackled separately as GitLab comprises
-modular components which can be individually scaled or made highly available
-depending on your organization's needs and resources.
+Please consult our [scaling documentation](../scaling) if you want to resolve
+performance bottlenecks you encounter in individual GitLab components without
+incurring the additional complexity costs associated with maintaining a
+highly-available architecture.
On this page, we present examples of self-managed instances which demonstrate
how GitLab can be scaled out and made highly available. These examples progress
@@ -29,39 +26,7 @@ watch [this 1 hour Q&A](https://www.youtube.com/watch?v=uCU8jdYzpac)
with [John Northrup](https://gitlab.com/northrup), and live questions coming
in from some of our customers.
-## Scaling examples
-
-### Single-node Omnibus installation
-
-This solution is appropriate for many teams that have a single server at their disposal. With automatic backup of the GitLab repositories, configuration, and the database, this can be an optimal solution if you don't have strict availability requirements.
-
-You can also optionally configure GitLab to use an [external PostgreSQL service](../external_database.md)
-or an [external object storage service](object_storage.md) for added
-performance and reliability at a relatively low complexity cost.
-
-References:
-
-- [Installation Docs](../../install/README.md)
-- [Backup/Restore Docs](https://docs.gitlab.com/omnibus/settings/backups.html#backup-and-restore-omnibus-gitlab-configuration)
-
-### Omnibus installation with multiple application servers
-
-This solution is appropriate for teams that are starting to scale out when
-scaling up is no longer meeting their needs. In this configuration, additional application nodes will handle frontend traffic, with a load balancer in front to distribute traffic across those nodes. Meanwhile, each application node connects to a shared file server and PostgreSQL and Redis services on the back end.
-
-The additional application servers adds limited fault tolerance to your GitLab
-instance. As long as one application node is online and capable of handling the
-instance's usage load, your team's productivity will not be interrupted. Having
-multiple application nodes also enables [zero-downtime updates](https://docs.gitlab.com/omnibus/update/#zero-downtime-updates).
-
-References:
-
-- [Configure your load balancer for GitLab](load_balancer.md)
-- [Configure your NFS server to work with GitLab](nfs.md)
-- [Configure packaged PostgreSQL server to listen on TCP/IP](https://docs.gitlab.com/omnibus/settings/database.html#configure-packaged-postgresql-server-to-listen-on-tcpip)
-- [Setting up a Redis-only server](https://docs.gitlab.com/omnibus/settings/redis.html#setting-up-a-redis-only-server)
-
-## High-availability examples
+## Examples
### Omnibus installation with automatic database failover
diff --git a/doc/administration/scaling/index.md b/doc/administration/scaling/index.md
new file mode 100644
index 00000000000..99e8ca9a65f
--- /dev/null
+++ b/doc/administration/scaling/index.md
@@ -0,0 +1,73 @@
+---
+type: reference, concepts
+---
+
+# Scaling
+
+GitLab supports a number of scaling options to ensure that your self-managed
+instance is able to scale out to meet your organization's needs when scaling up
+a single-box GitLab installation is no longer practical or feasible.
+
+Please consult our [high availability documentation](../high_availability/README.md)
+if your organization requires fault tolerance and redundancy features, such as
+automatic database system failover.
+
+## GitLab components and scaling instructions
+
+Here's a list of components directly provided by Omnibus GitLab or installed as
+part of a source installation and their configuration instructions for scaling.
+
+| Component | Description | Configuration instructions |
+|-----------|-------------|----------------------------|
+| [PostgreSQL](../../development/architecture.md#postgresql) | Database | [PostgreSQL configuration](https://docs.gitlab.com/omnibus/settings/database.html) |
+| [Redis](../../development/architecture.md#redis) | Key/value store for fast data lookup and caching | [Redis configuration](../high_availability/redis.md) |
+| [GitLab application services](../../development/architecture.md#unicorn) | Unicorn/Puma, Workhorse, GitLab Shell - serves front-end requests (UI, API, Git over HTTP/SSH) | [GitLab app scaling configuration](../high_availability/gitlab.md) |
+| [PgBouncer](../../development/architecture.md#pgbouncer) | Database connection pooler | [PgBouncer configuration](../high_availability/pgbouncer.md#running-pgbouncer-as-part-of-a-non-ha-gitlab-installation) **(PREMIUM ONLY)** |
+| [Sidekiq](../../development/architecture.md#sidekiq) | Asynchronous/background jobs | [Sidekiq configuration](../high_availability/sidekiq.md) |
+| [Gitaly](../../development/architecture.md#gitaly) | Provides access to Git repositories | [Gitaly configuration](../gitaly/index.md#running-gitaly-on-its-own-server) |
+| [Prometheus](../../development/architecture.md#prometheus) and [Grafana](../../development/architecture.md#grafana) | GitLab environment monitoring | [Monitoring node for scaling](../high_availability/monitoring_node.md) |
+
+## Third-party services used for scaling
+
+Here's a list of third-party services you may require as part of scaling GitLab.
+These services can be provided by numerous applications or vendors; the linked
+configuration instructions offer further advice on selecting the right option
+for your organization's needs.
+
+| Component | Description | Configuration instructions |
+|-----------|-------------|----------------------------|
+| Load balancer(s) | Handles load balancing, typically when you have multiple GitLab application services nodes | [Load balancer configuration](../high_availability/load_balancer.md) |
+| Object storage service | Recommended store for shared data objects | [Cloud Object Storage configuration](../high_availability/object_storage.md) |
+| NFS | Shared disk storage service. Can be used as an alternative for Gitaly or Object Storage. Required for GitLab Pages | [NFS configuration](../high_availability/nfs.md) |
+
+## Examples
+
+### Single-node Omnibus installation
+
+This solution is appropriate for many teams that have a single server at their disposal. With automatic backup of the GitLab repositories, configuration, and the database, this can be an optimal solution if you don't have strict availability requirements.
+
+You can also optionally configure GitLab to use an [external PostgreSQL service](../external_database.md)
+or an [external object storage service](../high_availability/object_storage.md) for added
+performance and reliability at a relatively low complexity cost.
+
+References:
+
+- [Installation Docs](../../install/README.md)
+- [Backup/Restore Docs](https://docs.gitlab.com/omnibus/settings/backups.html#backup-and-restore-omnibus-gitlab-configuration)
+
+### Omnibus installation with multiple application servers
+
+This solution is appropriate for teams that are starting to scale out when
+scaling up is no longer meeting their needs. In this configuration, additional application nodes will handle frontend traffic, with a load balancer in front to distribute traffic across those nodes. Meanwhile, each application node connects to a shared file server and PostgreSQL and Redis services on the back end.
+
+The additional application servers add limited fault tolerance to your GitLab
+instance. As long as one application node is online and capable of handling the
+instance's usage load, your team's productivity will not be interrupted. Having
+multiple application nodes also enables [zero-downtime updates](https://docs.gitlab.com/omnibus/update/#zero-downtime-updates).
+
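+As a rough sketch (not a complete configuration), an application node in this
+layout typically disables its bundled PostgreSQL and Redis and points at the
+shared back-end services instead; the hostnames below are placeholders:
+
+```ruby
+# /etc/gitlab/gitlab.rb on an application node (illustrative only)
+postgresql['enable'] = false
+redis['enable'] = false
+
+gitlab_rails['db_host'] = "postgres.example.internal"
+gitlab_rails['redis_host'] = "redis.example.internal"
+```
+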
+References:
+
+- [Configure your load balancer for GitLab](../high_availability/load_balancer.md)
+- [Configure your NFS server to work with GitLab](../high_availability/nfs.md)
+- [Configure packaged PostgreSQL server to listen on TCP/IP](https://docs.gitlab.com/omnibus/settings/database.html#configure-packaged-postgresql-server-to-listen-on-tcpip)
+- [Setting up a Redis-only server](https://docs.gitlab.com/omnibus/settings/redis.html#setting-up-a-redis-only-server)
diff --git a/doc/administration/troubleshooting/ssl.md b/doc/administration/troubleshooting/ssl.md
index 475b7d44eac..b66b6e8c90a 100644
--- a/doc/administration/troubleshooting/ssl.md
+++ b/doc/administration/troubleshooting/ssl.md
@@ -137,3 +137,36 @@ To fix this problem:
```shell
git config --global http.sslVerify false
```
+
+## SSL_connect wrong version number
+
+A misconfiguration may result in:
+
+- `gitlab-rails/exceptions_json.log` entries containing:
+
+ ```plaintext
+ "exception.class":"Excon::Error::Socket","exception.message":"SSL_connect returned=1 errno=0 state=error: wrong version number (OpenSSL::SSL::SSLError)",
+ "exception.class":"Excon::Error::Socket","exception.message":"SSL_connect returned=1 errno=0 state=error: wrong version number (OpenSSL::SSL::SSLError)",
+ ```
+
+- `gitlab-workhorse/current` containing:
+
+ ```plaintext
+ http: server gave HTTP response to HTTPS client
+ http: server gave HTTP response to HTTPS client
+ ```
+
+- `gitlab-rails/sidekiq.log` or `sidekiq/current` containing:
+
+ ```plaintext
+ message: SSL_connect returned=1 errno=0 state=error: wrong version number (OpenSSL::SSL::SSLError)
+ message: SSL_connect returned=1 errno=0 state=error: wrong version number (OpenSSL::SSL::SSLError)
+ ```
+
+Some of these errors come from the Excon Ruby gem, and could be generated in circumstances
+where GitLab is configured to initiate an HTTPS session to a remote server
+that is serving just HTTP.
+
+One scenario is that you're using [object storage](../high_availability/object_storage.md)
+which is not served under HTTPS. GitLab is misconfigured and attempts a TLS handshake,
+but the object storage will respond with plain HTTP.
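+
+As a quick check (assuming you can reach the remote endpoint from the GitLab
+server), you can confirm whether the configured endpoint really speaks TLS;
+the host and port below are placeholders:
+
+```shell
+# A TLS handshake against a plain-HTTP listener typically fails with a
+# "wrong version number" error, matching the log entries above.
+echo | openssl s_client -connect object-storage.example.com:9000
+```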
diff --git a/doc/development/documentation/styleguide.md b/doc/development/documentation/styleguide.md
index 370effc940c..596ba01f5e0 100644
--- a/doc/development/documentation/styleguide.md
+++ b/doc/development/documentation/styleguide.md
@@ -1233,6 +1233,14 @@ a helpful link back to how the feature was developed.
> - Enabled by default in GitLab 11.4.
```
+- If a feature is moved to another tier:
+
+ ```md
+ > - [Introduced](<link-to-issue>) in [GitLab Premium](https://about.gitlab.com/pricing/) 11.5.
+ > - [Moved](<link-to-issue>) to [GitLab Starter](https://about.gitlab.com/pricing/) in 11.8.
+ > - [Moved](<link-to-issue>) to GitLab Core in 12.0.
+ ```
+
NOTE: **Note:**
Version text must be on its own line and surrounded by blank lines to render correctly.
diff --git a/doc/install/aws/index.md b/doc/install/aws/index.md
index c2b1198940b..128d0746df2 100644
--- a/doc/install/aws/index.md
+++ b/doc/install/aws/index.md
@@ -381,7 +381,103 @@ EC2 instances running Linux use private key files for SSH authentication. You'll
Storing private key files on your bastion host is a bad idea. To get around this, use SSH agent forwarding on your client. See [Securely Connect to Linux Instances Running in a Private Amazon VPC](https://aws.amazon.com/blogs/security/securely-connect-to-linux-instances-running-in-a-private-amazon-vpc/) for a step-by-step guide on how to use SSH agent forwarding.
-## Setting up Gitaly
+## Install GitLab and create custom AMI
+
+We will need a preconfigured, custom GitLab AMI to use in our launch configuration later. As a starting point, we will use the official GitLab AMI to create a GitLab instance. Then, we'll add our custom configuration for PostgreSQL, Redis, and Gitaly. If you prefer, instead of using the official GitLab AMI, you can also spin up an EC2 instance of your choosing and [manually install GitLab](https://about.gitlab.com/install/).
+
+### Install GitLab
+
+From the EC2 dashboard:
+
+1. Click **Launch Instance** and select **Community AMIs** from the left menu.
+1. In the search bar, search for `GitLab EE <version>` where `<version>` is the latest version as seen on the [releases page](https://about.gitlab.com/releases/). Select the latest patch release, for example `GitLab EE 12.9.2`.
+1. Select an instance type based on your workload. Consult the [hardware requirements](../../install/requirements.md#hardware-requirements) to choose one that fits your needs (at least `c5.xlarge`, which is sufficient to accommodate 100 users).
+1. Click **Configure Instance Details**:
+ 1. In the **Network** dropdown, select `gitlab-vpc`, the VPC we created earlier.
+ 1. In the **Subnet** dropdown, select `gitlab-private-10.0.1.0` from the list of subnets we created earlier.
+ 1. Double-check that **Auto-assign Public IP** is set to `Use subnet setting (Disable)`.
+ 1. Click **Add Storage**.
+ 1. The root volume is 8GiB by default and should be enough given that we won’t store any data there.
+1. Click **Add Tags** and add any tags you may need. In our case, we'll only set `Key: Name` and `Value: GitLab`.
+1. Click **Configure Security Group**. Check **Select an existing security group** and select the `gitlab-loadbalancer-sec-group` we created earlier.
+1. Click **Review and launch** followed by **Launch** if you’re happy with your settings.
+1. Finally, acknowledge that you have access to the selected private key file or create a new one. Click **Launch Instances**.
+
+### Add custom configuration
+
+Connect to your GitLab instance via **Bastion Host A** using [SSH Agent Forwarding](#use-ssh-agent-forwarding). Once connected, add the following custom configuration:
+
+#### Install the `pg_trgm` extension for PostgreSQL
+
+From your GitLab instance, connect to the RDS instance to verify access and to install the required `pg_trgm` extension.
+
+To find the host or endpoint, navigate to **Amazon RDS > Databases** and click on the database you created earlier. Look for the endpoint under the **Connectivity & security** tab.
+
+Do not include the colon and port number:
+
+```shell
+sudo /opt/gitlab/embedded/bin/psql -U gitlab -h <rds-endpoint> -d gitlabhq_production
+```
+
+At the `psql` prompt create the extension and then quit the session:
+
+```shell
+psql (10.9)
+Type "help" for help.
+
+gitlab=# CREATE EXTENSION pg_trgm;
+gitlab=# \q
+```
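+
+If you'd like to confirm the extension is installed before quitting, you can
+list the installed extensions with the `\dx` meta-command (output will vary):
+
+```shell
+gitlab=# \dx
+```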
+
+#### Configure GitLab to connect to PostgreSQL and Redis
+
+1. Edit `/etc/gitlab/gitlab.rb`, find the `external_url 'http://<domain>'` option
+ and change it to the `https` domain you will be using.
+
+1. Look for the GitLab database settings and uncomment as necessary. In
+ our current case we'll specify the database adapter, encoding, host, name,
+ username, and password:
+
+ ```ruby
+ # Disable the built-in Postgres
+ postgresql['enable'] = false
+
+ # Fill in the connection details
+ gitlab_rails['db_adapter'] = "postgresql"
+ gitlab_rails['db_encoding'] = "unicode"
+ gitlab_rails['db_database'] = "gitlabhq_production"
+ gitlab_rails['db_username'] = "gitlab"
+ gitlab_rails['db_password'] = "mypassword"
+ gitlab_rails['db_host'] = "<rds-endpoint>"
+ ```
+
+1. Next, we need to configure the Redis section by adding the host and
+ uncommenting the port:
+
+ ```ruby
+ # Disable the built-in Redis
+ redis['enable'] = false
+
+ # Fill in the connection details
+ gitlab_rails['redis_host'] = "<redis-endpoint>"
+ gitlab_rails['redis_port'] = 6379
+ ```
+
+1. Finally, reconfigure GitLab for the changes to take effect:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+1. You might also find it useful to run a check and a service status to make sure
+ everything has been set up correctly:
+
+ ```shell
+ sudo gitlab-rake gitlab:check
+ sudo gitlab-ctl status
+ ```
+
+#### Set up Gitaly
CAUTION: **Caution:** In this architecture, having a single Gitaly server creates a single point of failure. This limitation will be removed once [Gitaly HA](https://gitlab.com/groups/gitlab-org/-/epics/842) is released.
@@ -410,7 +506,79 @@ Let's create an EC2 instance where we'll install Gitaly:
> **Optional:** Instead of storing configuration _and_ repository data on the root volume, you can also choose to add an additional EBS volume for repository storage. Follow the same guidance as above. See the [Amazon EBS pricing](https://aws.amazon.com/ebs/pricing/).
-Now that we have our EC2 instance ready, follow the [documentation to install GitLab and set up Gitaly on its own server](../../administration/gitaly/index.md#running-gitaly-on-its-own-server).
+Now that we have our EC2 instance ready, follow the [documentation to install GitLab and set up Gitaly on its own server](../../administration/gitaly/index.md#running-gitaly-on-its-own-server). Perform the client setup steps from that document on the [GitLab instance we created](#install-gitlab) above.
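+
+As a rough, non-authoritative sketch, the client-side settings in
+`/etc/gitlab/gitlab.rb` on the GitLab instance usually end up looking something
+like the following; the hostname, port, and token are placeholders, and the
+linked document remains the source of truth:
+
+```ruby
+# Point the default storage at the Gitaly server over TCP (8075 is Gitaly's default port)
+git_data_dirs({
+  "default" => { "gitaly_address" => "tcp://gitaly.example.internal:8075" }
+})
+
+# Must match the auth token configured on the Gitaly server
+gitlab_rails['gitaly_token'] = 'abc123secret'
+```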
+
+#### Add support for proxied SSL
+
+As we are terminating SSL at our [load balancer](#load-balancer), follow the steps at [Supporting proxied SSL](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl) to configure this in `/etc/gitlab/gitlab.rb`.
+
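+In practice this usually amounts to something like the following in
+`/etc/gitlab/gitlab.rb` (a sketch based on the linked Omnibus documentation;
+substitute your own domain):
+
+```ruby
+# The load balancer terminates SSL, so NGINX serves plain HTTP internally
+external_url 'https://gitlab.example.com'
+nginx['listen_port'] = 80
+nginx['listen_https'] = false
+```
+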
+Remember to run `sudo gitlab-ctl reconfigure` after saving the changes to the `gitlab.rb` file.
+
+#### Disable Let's Encrypt
+
+Since we're adding our SSL certificate at the load balancer, we do not need GitLab's built-in support for Let's Encrypt. Let's Encrypt [is enabled by default](https://docs.gitlab.com/omnibus/settings/ssl.html#lets-encrypt-integration) when using an `https` domain since GitLab 10.7, so we need to explicitly disable it:
+
+1. Open `/etc/gitlab/gitlab.rb` and disable it:
+
+ ```ruby
+ letsencrypt['enable'] = false
+ ```
+
+1. Save the file and reconfigure for the changes to take effect:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+#### Configure host keys
+
+Ordinarily we would manually copy the contents (private and public keys) of `/etc/ssh/` on the primary application server to `/etc/ssh` on all secondary servers. This prevents false man-in-the-middle attack alerts when accessing servers in your High Availability cluster behind a load balancer.
+
+We'll automate this by baking static host keys into our custom AMI. Because the keys in `/etc/ssh/` are regenerated every time an EC2 instance boots, copying them to a static location and pointing `sshd` at that copy serves as a handy workaround.
+
+On your GitLab instance run the following:
+
+```shell
+sudo mkdir /etc/ssh_static
+sudo cp -R /etc/ssh/* /etc/ssh_static
+```
+
+In `/etc/ssh/sshd_config` update the following:
+
+```bash
+ # HostKeys for protocol version 2
+ HostKey /etc/ssh_static/ssh_host_rsa_key
+ HostKey /etc/ssh_static/ssh_host_dsa_key
+ HostKey /etc/ssh_static/ssh_host_ecdsa_key
+ HostKey /etc/ssh_static/ssh_host_ed25519_key
+```
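+
+If `sshd` is already running, reload it so the static host keys take effect
+(assuming a systemd-based distribution; on some distributions the unit is named
+`ssh` rather than `sshd`):
+
+```shell
+sudo systemctl reload sshd
+```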
+
+#### Amazon S3 object storage
+
+Since we're not using NFS for shared storage, we will use [Amazon S3](https://aws.amazon.com/s3/) buckets to store backups, artifacts, LFS objects, uploads, merge request diffs, container registry images, and more. For instructions on how to configure each of these, please see [Cloud Object Storage](../../administration/high_availability/object_storage.md).
+
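+As a rough sketch, the job artifacts portion of `/etc/gitlab/gitlab.rb` might
+look like the following; the bucket name and region are placeholders, and the
+linked documentation covers each object type as well as credential options
+other than an IAM instance profile:
+
+```ruby
+gitlab_rails['artifacts_object_store_enabled'] = true
+gitlab_rails['artifacts_object_store_remote_directory'] = "gitlab-artifacts-bucket"
+gitlab_rails['artifacts_object_store_connection'] = {
+  'provider' => 'AWS',
+  'region' => 'us-east-1',
+  # Use the EC2 instance profile instead of static access keys
+  'use_iam_profile' => true
+}
+```
+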
+Remember to run `sudo gitlab-ctl reconfigure` after saving the changes to the `gitlab.rb` file.
+
+NOTE: **Note:**
+One current feature of GitLab that still requires a shared directory (NFS) is
+[GitLab Pages](../../user/project/pages/index.md).
+There is [work in progress](https://gitlab.com/gitlab-org/gitlab-pages/issues/196)
+to eliminate the need for NFS to support GitLab Pages.
+
+---
+
+That concludes the configuration changes for our GitLab instance. Next, we'll create a custom AMI based on this instance to use for our launch configuration and auto scaling group.
+
+### Create custom AMI
+
+On the EC2 dashboard:
+
+1. Select the `GitLab` instance we [created earlier](#install-gitlab).
+1. Click on **Actions**, scroll down to **Image** and click **Create Image**.
+1. Give your image a name and description (we'll use `GitLab-Source` for both).
+1. Leave everything else as default and click **Create Image**.
+
+Now we have a custom AMI that we'll use to create our launch configuration in the next step.
## Deploying GitLab inside an auto scaling group
@@ -497,129 +665,6 @@ You'll notice that after we save the configuration, AWS starts launching our two
instances in different AZs and without a public IP which is exactly what
we intended.
-## After deployment
-
-After a few minutes, the instances should be up and accessible via the internet.
-Let's connect to the primary and configure some things before logging in.
-
-### Installing the `pg_trgm` extension for PostgreSQL
-
-Connect to the RDS instance to verify access and to install the required `pg_trgm` extension.
-
-To find the host or endpoint, naviagate to **Amazon RDS > Databases** and click on the database you created earlier. Look for the endpoint under the **Connectivity & security** tab.
-
-Do not to include the colon and port number:
-
-```shell
-sudo /opt/gitlab/embedded/bin/psql -U gitlab -h <rds-endpoint> -d gitlabhq_production
-```
-
-At the psql prompt create the extension and then quit the session:
-
-```shell
-psql (10.9)
-Type "help" for help.
-
-gitlab=# CREATE EXTENSION pg_trgm;
-gitlab=# \q
-```
-
----
-
-### Configuring GitLab to connect with PostgreSQL and Redis
-
-Edit the `gitlab.rb` file at `/etc/gitlab/gitlab.rb`
-find the `external_url 'http://gitlab.example.com'` option and change it
-to the domain you will be using or the public IP address of the current
-instance to test the configuration.
-
-For a more detailed description about configuring GitLab, see [Configuring GitLab for HA](../../administration/high_availability/gitlab.md)
-
-Now look for the GitLab database settings and uncomment as necessary. In
-our current case we'll specify the database adapter, encoding, host, name,
-username, and password:
-
-```ruby
-# Disable the built-in Postgres
-postgresql['enable'] = false
-
-# Fill in the connection details
-gitlab_rails['db_adapter'] = "postgresql"
-gitlab_rails['db_encoding'] = "unicode"
-gitlab_rails['db_database'] = "gitlabhq_production"
-gitlab_rails['db_username'] = "gitlab"
-gitlab_rails['db_password'] = "mypassword"
-gitlab_rails['db_host'] = "<rds-endpoint>"
-```
-
-Next, we need to configure the Redis section by adding the host and
-uncommenting the port:
-
-```ruby
-# Disable the built-in Redis
-redis['enable'] = false
-
-# Fill in the connection details
-gitlab_rails['redis_host'] = "<redis-endpoint>"
-gitlab_rails['redis_port'] = 6379
-```
-
-Finally, reconfigure GitLab for the change to take effect:
-
-```shell
-sudo gitlab-ctl reconfigure
-```
-
-You might also find it useful to run a check and a service status to make sure
-everything has been setup correctly:
-
-```shell
-sudo gitlab-rake gitlab:check
-sudo gitlab-ctl status
-```
-
-If everything looks good, you should be able to reach GitLab in your browser.
-
-### Using Amazon S3 object storage
-
-GitLab stores many objects outside the Git repository, many of which can be
-uploaded to S3. That way, you can offload the root disk volume of these objects
-which would otherwise take much space.
-
-In particular, you can store in S3:
-
-- [The Git LFS objects](../../administration/lfs/lfs_administration.md#s3-for-omnibus-installations) ((Omnibus GitLab installations))
-- [The Container Registry images](../../administration/packages/container_registry.md#container-registry-storage-driver) (Omnibus GitLab installations)
-- [The GitLab CI/CD job artifacts](../../administration/job_artifacts.md#using-object-storage) (Omnibus GitLab installations)
-
-### Setting up a domain name
-
-After you SSH into the instance, configure the domain name:
-
-1. Open `/etc/gitlab/gitlab.rb` with your preferred editor.
-1. Edit the `external_url` value:
-
- ```ruby
- external_url 'http://example.com'
- ```
-
-1. Reconfigure GitLab:
-
- ```shell
- sudo gitlab-ctl reconfigure
- ```
-
-You should now be able to reach GitLab at the URL you defined. To use HTTPS
-(recommended), see the [HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https).
-
-### Logging in for the first time
-
-If you followed the previous section, you should be now able to visit GitLab
-in your browser. The very first time, you will be asked to set up a password
-for the `root` user which has admin privileges on the GitLab instance.
-
-After you set it up, login with username `root` and the newly created password.
-
## Health check and monitoring with Prometheus
Apart from Amazon's Cloudwatch which you can enable on various services,
diff --git a/doc/user/project/integrations/prometheus.md b/doc/user/project/integrations/prometheus.md
index eb0013ab6e5..6d848f73cb6 100644
--- a/doc/user/project/integrations/prometheus.md
+++ b/doc/user/project/integrations/prometheus.md
@@ -13,7 +13,7 @@ There are two ways to set up Prometheus integration, depending on where your app
- For deployments on Kubernetes, GitLab can automatically [deploy and manage Prometheus](#managed-prometheus-on-kubernetes).
- For other deployment targets, simply [specify the Prometheus server](#manual-configuration-of-prometheus).
-Once enabled, GitLab will automatically detect metrics from known services in the [metric library](#monitoring-cicd-environments). You are also able to [add your own metrics](#adding-additional-metrics-premium) as well.
+Once enabled, GitLab will automatically detect metrics from known services in the [metric library](#monitoring-cicd-environments). You can also [add your own metrics](#adding-custom-metrics).
## Enabling Prometheus Integration
@@ -132,9 +132,10 @@ GitLab will automatically scan the Prometheus server for metrics from known serv
You can view the performance dashboard for an environment by [clicking on the monitoring button](../../../ci/environments.md#monitoring-environments).
-### Adding additional metrics **(PREMIUM)**
+### Adding custom metrics
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/3799) in [GitLab Premium](https://about.gitlab.com/pricing/) 10.6.
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/3799) in [GitLab Premium](https://about.gitlab.com/pricing/) 10.6.
+> - [Moved](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/28527) to [GitLab Core](https://about.gitlab.com/pricing/) 12.10.
Custom metrics can be monitored by adding them on the monitoring dashboard page. Once saved, they will be displayed on the environment performance dashboard provided that either:
@@ -191,8 +192,8 @@ You may create a new file from scratch or duplicate a GitLab-defined Prometheus
dashboard.
NOTE: **Note:**
-The custom metrics as defined below do not support alerts, unlike
-[additional metrics](#adding-additional-metrics-premium).
+The metrics as defined below do not support alerts, unlike
+[custom metrics](#adding-custom-metrics).
#### Adding a new dashboard to your project
@@ -654,9 +655,9 @@ Data from Prometheus charts on the metrics dashboard can be downloaded as CSV.
#### Managed Prometheus instances
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/6590) in [GitLab Ultimate](https://about.gitlab.com/pricing/) 11.2 for [custom metrics](#adding-additional-metrics-premium), and 11.3 for [library metrics](prometheus_library/metrics.md).
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/6590) in [GitLab Ultimate](https://about.gitlab.com/pricing/) 11.2 for [custom metrics](#adding-custom-metrics), and 11.3 for [library metrics](prometheus_library/metrics.md).
-For managed Prometheus instances using auto configuration, alerts for metrics [can be configured](#adding-additional-metrics-premium) directly in the performance dashboard.
+For managed Prometheus instances using auto configuration, alerts for metrics [can be configured](#adding-custom-metrics) directly in the performance dashboard.
To set an alert: