Diffstat (limited to 'doc/topics/autodevops')
-rw-r--r-- | doc/topics/autodevops/customize.md            |  48 |
-rw-r--r-- | doc/topics/autodevops/index.md                | 406 |
-rw-r--r-- | doc/topics/autodevops/quick_start_guide.md    |   8 |
-rw-r--r-- | doc/topics/autodevops/stages.md               | 404 |
-rw-r--r-- | doc/topics/autodevops/upgrading_postgresql.md |  44 |
5 files changed, 503 insertions, 407 deletions
diff --git a/doc/topics/autodevops/customize.md b/doc/topics/autodevops/customize.md index 7c587ad3444..056b4c1caf4 100644 --- a/doc/topics/autodevops/customize.md +++ b/doc/topics/autodevops/customize.md @@ -2,7 +2,7 @@ While [Auto DevOps](index.md) provides great defaults to get you started, you can customize almost everything to fit your needs. Auto DevOps offers everything from custom -[buildpacks](#custom-buildpacks), to [`Dockerfiles](#custom-dockerfile), and +[buildpacks](#custom-buildpacks), to [Dockerfiles](#custom-dockerfile), and [Helm charts](#custom-helm-chart). You can even copy the complete [CI/CD configuration](#customizing-gitlab-ciyml) into your project to enable staging and canary deployments, and more. @@ -146,7 +146,7 @@ to override the default chart values by setting `HELM_UPGRADE_EXTRA_ARGS` to `-- ## Custom Helm chart per environment You can specify the use of a custom Helm chart per environment by scoping the environment variable -to the desired environment. See [Limiting environment scopes of variables](../../ci/variables/README.md#limiting-environment-scopes-of-environment-variables). +to the desired environment. See [Limiting environment scopes of variables](../../ci/variables/README.md#limit-the-environment-scopes-of-environment-variables). ## Customizing `.gitlab-ci.yml` @@ -179,7 +179,7 @@ into your project and edit it as needed. For clusters not managed by GitLab, you can customize the namespace in `.gitlab-ci.yml` by specifying -[`environment:kubernetes:namespace`](../../ci/environments.md#configuring-kubernetes-deployments). +[`environment:kubernetes:namespace`](../../ci/environments/index.md#configuring-kubernetes-deployments). 
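As a sketch of such an override (the `environment:kubernetes:namespace` syntax is taken from the linked page; the job and namespace names are illustrative), a `production` deployment can pin its namespace like this:

```yaml
# Illustrative .gitlab-ci.yml fragment: deploy the production
# environment into a custom Kubernetes namespace.
deploy:
  environment:
    name: production
    kubernetes:
      namespace: production-apps  # hypothetical namespace name
```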
For example, the following configuration overrides the namespace used for `production` deployments: @@ -227,6 +227,8 @@ If your `.gitlab-ci.yml` extends these Auto DevOps templates and override the `o `except` keywords, you must migrate your templates to use the [`rules`](../../ci/yaml/README.md#rules) syntax after the base template is migrated to use the `rules` syntax. +For users who cannot migrate just yet, you can alternatively pin your templates to +the [GitLab 12.10 based templates](https://gitlab.com/gitlab-org/auto-devops-v12-10). ## PostgreSQL database support @@ -243,23 +245,19 @@ postgres://user:password@postgres-host:postgres-port/postgres-database CAUTION: **Deprecation** The variable `AUTO_DEVOPS_POSTGRES_CHANNEL` that controls default provisioned -PostgreSQL currently defaults to `1`. This value is scheduled to change to `2` in -[GitLab 13.0](https://gitlab.com/gitlab-org/gitlab/-/issues/210499). +PostgreSQL was changed to `2` in [GitLab 13.0](https://gitlab.com/gitlab-org/gitlab/-/issues/210499). +To keep using the old PostgreSQL, set the `AUTO_DEVOPS_POSTGRES_CHANNEL` variable to +`1`. The version of the chart used to provision PostgreSQL: +- Is 8.2.1 in GitLab 13.0 and later, but can be set back to 0.7.1 if needed. +- Can be set to from 0.7.1 to 8.2.1 in GitLab 12.9 and 12.10. - Is 0.7.1 in GitLab 12.8 and earlier. -- Can be set to from 0.7.1 to 8.2.1 in GitLab 12.9 and later. GitLab encourages users to [migrate their database](upgrading_postgresql.md) to the newer PostgreSQL. -To use the new PostgreSQL: - -- New projects can set the `AUTO_DEVOPS_POSTGRES_CHANNEL` variable to `2`. -- Old projects can be upgraded by following the guide to - [upgrading PostgresSQL](upgrading_postgresql.md). 
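The channel pin described above can be expressed as a CI variable, a sketch assuming it is set in `.gitlab-ci.yml` (it can equally be set in the project's CI/CD settings):

```yaml
# Illustrative: keep the pre-13.0 PostgreSQL channel (chart 0.7.1,
# PostgreSQL 9.6.2) instead of the new default channel 2.
variables:
  AUTO_DEVOPS_POSTGRES_CHANNEL: "1"
```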
- ### Using external PostgreSQL database providers While Auto DevOps provides out-of-the-box support for a PostgreSQL container for @@ -271,10 +269,9 @@ You must define environment-scoped variables for `POSTGRES_ENABLED` and `DATABASE_URL` in your project's CI/CD settings: 1. Disable the built-in PostgreSQL installation for the required environments using - scoped [environment variables](../../ci/environments.md#scoping-environments-with-specs). + scoped [environment variables](../../ci/environments/index.md#scoping-environments-with-specs). For this use case, it's likely that only `production` will need to be added to this - list. The built-in PostgreSQL setup for Review Apps and staging is sufficient, - because a high availability setup is not required. + list. The built-in PostgreSQL setup for Review Apps and staging is sufficient. ![Auto Metrics](img/disable_postgres.png) @@ -303,6 +300,7 @@ applications. |-----------------------------------------|------------------------------------| | `ADDITIONAL_HOSTS` | Fully qualified domain names specified as a comma-separated list that are added to the Ingress hosts. | | `<ENVIRONMENT>_ADDITIONAL_HOSTS` | For a specific environment, the fully qualified domain names specified as a comma-separated list that are added to the Ingress hosts. This takes precedence over `ADDITIONAL_HOSTS`. | +| `AUTO_DEVOPS_ATOMIC_RELEASE` | As of GitLab 13.0, Auto DevOps uses [`--atomic`](https://v2.helm.sh/docs/helm/#options-43) for Helm deployments by default. Set this variable to `false` to disable the use of `--atomic` | | `AUTO_DEVOPS_BUILD_IMAGE_CNB_ENABLED` | When set to a non-empty value and no `Dockerfile` is present, Auto Build builds your application using Cloud Native Buildpacks instead of Herokuish. [More details](stages.md#auto-build-using-cloud-native-buildpacks-beta). | | `AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS` | Extra arguments to be passed to the `docker build` command. Note that using quotes won't prevent word splitting. 
[More details](#passing-arguments-to-docker-build). | | `AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES` | A [comma-separated list of CI variable names](#passing-secrets-to-docker-build) to be passed to the `docker build` command as secrets. | @@ -318,7 +316,7 @@ applications. | `CANARY_REPLICAS` | Number of canary replicas to deploy for [Canary Deployments](../../user/project/canary_deployments.md). Defaults to 1. | | `HELM_RELEASE_NAME` | From GitLab 12.1, allows the `helm` release name to be overridden. Can be used to assign unique release names when deploying multiple projects to a single namespace. | | `HELM_UPGRADE_VALUES_FILE` | From GitLab 12.6, allows the `helm upgrade` values file to be overridden. Defaults to `.gitlab/auto-deploy-values.yaml`. | -| `HELM_UPGRADE_EXTRA_ARGS` | From GitLab 11.11, allows extra arguments in `helm` commands when deploying the application. Note that using quotes won't prevent word splitting. **Tip:** you can use this variable to [customize the Auto Deploy Helm chart](#custom-helm-chart) by applying custom override values with `--values my-values.yaml`. | +| `HELM_UPGRADE_EXTRA_ARGS` | From GitLab 11.11, allows extra arguments in `helm` commands when deploying the application. Note that using quotes won't prevent word splitting. | | `INCREMENTAL_ROLLOUT_MODE` | From GitLab 11.4, if present, can be used to enable an [incremental rollout](#incremental-rollout-to-production-premium) of your application for the production environment. Set to `manual` for manual deployment jobs or `timed` for automatic rollout deployments with a 5 minute delay each one. | | `K8S_SECRET_*` | From GitLab 11.7, any variable prefixed with [`K8S_SECRET_`](#application-secret-variables) will be made available by Auto DevOps as environment variables to the deployed application. | | `KUBE_INGRESS_BASE_DOMAIN` | From GitLab 11.8, can be used to set a domain per cluster. 
See [cluster domains](../../user/project/clusters/index.md#base-domain) for more information. | @@ -329,9 +327,9 @@ applications. | `STAGING_ENABLED` | From GitLab 10.8, used to define a [deploy policy for staging and production environments](#deploy-policy-for-staging-and-production-environments). | TIP: **Tip:** -Set up the replica variables using a -[project variable](../../ci/variables/README.md#gitlab-cicd-environment-variables) -and scale your application by only redeploying it. +After you set up your replica variables using a +[project variable](../../ci/variables/README.md#gitlab-cicd-environment-variables), +you can scale your application by redeploying it. CAUTION: **Caution:** You should *not* scale your application using Kubernetes directly. This can @@ -350,15 +348,7 @@ The following table lists variables related to the database. | `POSTGRES_USER` | The PostgreSQL user. Defaults to `user`. Set it to use a custom username. | | `POSTGRES_PASSWORD` | The PostgreSQL password. Defaults to `testing-password`. Set it to use a custom password. | | `POSTGRES_DB` | The PostgreSQL database name. Defaults to the value of [`$CI_ENVIRONMENT_SLUG`](../../ci/variables/README.md#predefined-environment-variables). Set it to use a custom database name. | -| `POSTGRES_VERSION` | Tag for the [`postgres` Docker image](https://hub.docker.com/_/postgres) to use. Defaults to `9.6.2`. | - -### Security tools - -The following table lists variables related to security tools. - -| **Variable** | **Description** | -|-----------------------------------------|------------------------------------| -| `SAST_CONFIDENCE_LEVEL` | Minimum confidence level of security issues you want to be reported; `1` for Low, `2` for Medium, `3` for High. Defaults to `3`. | +| `POSTGRES_VERSION` | Tag for the [`postgres` Docker image](https://hub.docker.com/_/postgres) to use. Defaults to `9.6.16` for tests and deployments as of GitLab 13.0 (previously `9.6.2`). 
If `AUTO_DEVOPS_POSTGRES_CHANNEL` is set to `1`, deployments will use the default version `9.6.2`. | ### Disable jobs @@ -544,7 +534,7 @@ required to go from `10%` to `100%`, you can jump to whatever job you want. You can also scale down by running a lower percentage job, just before hitting `100%`. Once you get to `100%`, you can't scale down, and you'd have to roll back by redeploying the old version using the -[rollback button](../../ci/environments.md#retrying-and-rolling-back) in the +[rollback button](../../ci/environments/index.md#retrying-and-rolling-back) in the environment page. Below, you can see how the pipeline will look if the rollout or staging diff --git a/doc/topics/autodevops/index.md b/doc/topics/autodevops/index.md index 7ed6625bea3..e7165136cf0 100644 --- a/doc/topics/autodevops/index.md +++ b/doc/topics/autodevops/index.md @@ -3,17 +3,25 @@ > - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/issues/37115) in GitLab 10.0. > - Generally available on GitLab 11.0. -Auto DevOps provides pre-defined CI/CD configuration which allows you to automatically detect, build, test, -deploy, and monitor your applications. Leveraging CI/CD best practices and tools, Auto DevOps aims -to simplify the setup and execution of a mature & modern software development lifecycle. +Auto DevOps provides pre-defined CI/CD configuration allowing you to automatically +detect, build, test, deploy, and monitor your applications. Leveraging CI/CD +best practices and tools, Auto DevOps aims to simplify the setup and execution +of a mature and modern software development lifecycle. ## Overview -With Auto DevOps, the software development process becomes easier to set up -as every project can have a complete workflow from verification to monitoring -with minimal configuration. Just push your code and GitLab takes -care of everything else. This makes it easier to start new projects and brings -consistency to how applications are set up throughout a company. 
+You can spend a lot of effort to set up the workflow and processes required to +build, deploy, and monitor your project. It gets worse when your company has +hundreds, if not thousands, of projects to maintain. With new projects +constantly starting up, the entire software development process becomes +impossibly complex to manage. + +Auto DevOps provides you a seamless software development process by +automatically detecting all dependencies and language technologies required to +test, build, package, deploy, and monitor every project with minimal +configuration. Automation enables consistency across your projects, seamless +management of processes, and faster creation of new projects: push your code, +and GitLab does the rest, improving your productivity and efficiency. For an introduction to Auto DevOps, watch [AutoDevOps in GitLab 11.0](https://youtu.be/0Tc0YYBxqi4). @@ -21,14 +29,14 @@ For an introduction to Auto DevOps, watch [AutoDevOps in GitLab 11.0](https://yo > [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/issues/41729) in GitLab 11.3. -Auto DevOps is enabled by default for all projects and will attempt to run on all pipelines -in each project. This default can be enabled or disabled by an instance administrator in the +Auto DevOps is enabled by default for all projects and attempts to run on all pipelines +in each project. An instance administrator can enable or disable this default in the [Auto DevOps settings](../../user/admin_area/settings/continuous_integration.md#auto-devops-core-only). -It will be automatically disabled in individual projects on their first pipeline failure, +Auto DevOps automatically disables in individual projects on their first pipeline failure, if it has not been explicitly enabled for the project. 
Since [GitLab 12.7](https://gitlab.com/gitlab-org/gitlab/issues/26655), Auto DevOps -will run on pipelines automatically only if a [`Dockerfile` or matching buildpack](stages.md#auto-build) +runs on pipelines automatically only if a [`Dockerfile` or matching buildpack](stages.md#auto-build) exists. If a [CI/CD configuration file](../../ci/yaml/README.md) is present in the project, @@ -36,18 +44,21 @@ it will continue to be used, whether or not Auto DevOps is enabled. ## Quick start -If you are using GitLab.com, see the [quick start guide](quick_start_guide.md) -for how to use Auto DevOps with GitLab.com and a Kubernetes cluster on Google Kubernetes +If you're using GitLab.com, see the [quick start guide](quick_start_guide.md) +for setting up Auto DevOps with GitLab.com and a Kubernetes cluster on Google Kubernetes Engine (GKE). -If you are using a self-managed instance of GitLab, you will need to configure the +If you use a self-managed instance of GitLab, you must configure the [Google OAuth2 OmniAuth Provider](../../integration/google.md) before -you can configure a cluster on GKE. Once this is set up, you can follow the steps on the -[quick start guide](quick_start_guide.md) to get started. +configuring a cluster on GKE. After configuring the provider, you can follow +the steps in the [quick start guide](quick_start_guide.md) to get started. + +In [GitLab 13.0](https://gitlab.com/gitlab-org/gitlab/-/issues/208132) and later, it is +possible to leverage Auto DevOps to deploy to [AWS ECS](#aws-ecs). ## Comparison to application platforms and PaaS -Auto DevOps provides functionality that is often included in an application +Auto DevOps provides features often included in an application platform or a Platform as a Service (PaaS). 
It takes inspiration from the innovative work done by [Heroku](https://www.heroku.com/) and goes beyond it in multiple ways: @@ -60,7 +71,7 @@ in multiple ways: - Auto DevOps has more features including security testing, performance testing, and code quality testing. - Auto DevOps offers an incremental graduation path. If you need advanced customizations, - you can start modifying the templates without having to start over on a + you can start modifying the templates without starting over on a completely different platform. Review the [customizing](customize.md) documentation for more information. ## Features @@ -81,7 +92,7 @@ project in a simple and automatic way: 1. [Auto Browser Performance Testing](stages.md#auto-browser-performance-testing-premium) **(PREMIUM)** 1. [Auto Monitoring](stages.md#auto-monitoring) -As Auto DevOps relies on many different components, it's good to have a basic +As Auto DevOps relies on many different components, you should have a basic knowledge of the following: - [Kubernetes](https://kubernetes.io/docs/home/) @@ -102,101 +113,137 @@ Auto DevOps. ## Requirements -To make full use of Auto DevOps, you will need: +### Kubernetes + +To make full use of Auto DevOps with Kubernetes, you need: -- **Kubernetes** (for Auto Review Apps, Auto Deploy, and Auto Monitoring) +- **Kubernetes** (for [Auto Review Apps](stages.md#auto-review-apps), + [Auto Deploy](stages.md#auto-deploy), and [Auto Monitoring](stages.md#auto-monitoring)) - To enable deployments, you will need: + To enable deployments, you need: - 1. A [Kubernetes 1.12+ cluster](../../user/project/clusters/index.md) for the project. The easiest - way is to create a [new cluster using the GitLab UI](../../user/project/clusters/add_remove_clusters.md#create-new-cluster). - For Kubernetes 1.16+ clusters, there is some additional configuration for [Auto Deploy for Kubernetes 1.16+](stages.md#kubernetes-116). + 1. 
A [Kubernetes 1.12+ cluster](../../user/project/clusters/index.md) for your + project. The easiest way is to create a + [new cluster using the GitLab UI](../../user/project/clusters/add_remove_clusters.md#create-new-cluster). + For Kubernetes 1.16+ clusters, you must perform additional configuration for + [Auto Deploy for Kubernetes 1.16+](stages.md#kubernetes-116). 1. NGINX Ingress. You can deploy it to your Kubernetes cluster by installing the [GitLab-managed app for Ingress](../../user/clusters/applications.md#ingress), - once you have configured GitLab's Kubernetes integration in the previous step. + after configuring GitLab's Kubernetes integration in the previous step. Alternatively, you can use the [`nginx-ingress`](https://github.com/helm/charts/tree/master/stable/nginx-ingress) Helm chart to install Ingress manually. NOTE: **Note:** - If you are using your own Ingress instead of the one provided by GitLab's managed - apps, ensure you are running at least version 0.9.0 of NGINX Ingress and + If you use your own Ingress instead of the one provided by GitLab's managed + apps, ensure you're running at least version 0.9.0 of NGINX Ingress and [enable Prometheus metrics](https://github.com/helm/charts/tree/master/stable/nginx-ingress#prometheus-metrics) - in order for the response metrics to appear. You will also have to + for the response metrics to appear. You must also [annotate](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) the NGINX Ingress deployment to be scraped by Prometheus using `prometheus.io/scrape: "true"` and `prometheus.io/port: "10254"`. -- **Base domain** (for Auto Review Apps, Auto Deploy, and Auto Monitoring) +- **Base domain** (for [Auto Review Apps](stages.md#auto-review-apps), + [Auto Deploy](stages.md#auto-deploy), and [Auto Monitoring](stages.md#auto-monitoring)) - You will need a domain configured with wildcard DNS which is going to be used - by all of your Auto DevOps applications. 
If you're using the + You need a domain configured with wildcard DNS, which all of your Auto DevOps + applications will use. If you're using the [GitLab-managed app for Ingress](../../user/clusters/applications.md#ingress), - the URL endpoint will be automatically configured for you. + the URL endpoint is automatically configured for you. - You will then need to [specify the Auto DevOps base domain](#auto-devops-base-domain). + You must also [specify the Auto DevOps base domain](#auto-devops-base-domain). - **GitLab Runner** (for all stages) - Your Runner needs to be configured to be able to run Docker. Generally this - means using either the [Docker](https://docs.gitlab.com/runner/executors/docker.html) + Your Runner must be configured to run Docker, usually with either the + [Docker](https://docs.gitlab.com/runner/executors/docker.html) or [Kubernetes](https://docs.gitlab.com/runner/executors/kubernetes.html) executors, with [privileged mode enabled](https://docs.gitlab.com/runner/executors/docker.html#use-docker-in-docker-with-privileged-mode). - The Runners do not need to be installed in the Kubernetes cluster, but the - Kubernetes executor is easy to use and is automatically autoscaling. - Docker-based Runners can be configured to autoscale as well, using [Docker - Machine](https://docs.gitlab.com/runner/install/autoscaling.html). + The Runners don't need to be installed in the Kubernetes cluster, but the + Kubernetes executor is easy to use and automatically autoscales. + You can configure Docker-based Runners to autoscale as well, using + [Docker Machine](https://docs.gitlab.com/runner/install/autoscaling.html). - If you have configured GitLab's Kubernetes integration in the first step, you + If you've configured GitLab's Kubernetes integration in the first step, you can deploy it to your cluster by installing the [GitLab-managed app for GitLab Runner](../../user/clusters/applications.md#gitlab-runner). 
Runners should be registered as [shared Runners](../../ci/runners/README.md#registering-a-shared-runner) for the entire GitLab instance, or [specific Runners](../../ci/runners/README.md#registering-a-specific-runner) - that are assigned to specific projects (the default if you have installed the + that are assigned to specific projects (the default if you've installed the GitLab Runner managed application). -- **Prometheus** (for Auto Monitoring) +- **Prometheus** (for [Auto Monitoring](stages.md#auto-monitoring)) - To enable Auto Monitoring, you will need Prometheus installed somewhere - (inside or outside your cluster) and configured to scrape your Kubernetes cluster. - If you have configured GitLab's Kubernetes integration, you can deploy it to + To enable Auto Monitoring, you need Prometheus installed either inside or + outside your cluster, and configured to scrape your Kubernetes cluster. + If you've configured GitLab's Kubernetes integration, you can deploy it to your cluster by installing the [GitLab-managed app for Prometheus](../../user/clusters/applications.md#prometheus). The [Prometheus service](../../user/project/integrations/prometheus.md) - integration needs to be enabled for the project (or enabled as a + integration must be enabled for the project, or enabled as a [default service template](../../user/project/integrations/services_templates.md) - for the entire GitLab installation). + for the entire GitLab installation. - To get response metrics (in addition to system metrics), you need to + To get response metrics (in addition to system metrics), you must [configure Prometheus to monitor NGINX](../../user/project/integrations/prometheus_library/nginx_ingress.md#configuring-nginx-ingress-monitoring). - **cert-manager** (optional, for TLS/HTTPS) - To enable HTTPS endpoints for your application, you need to install cert-manager, - a native Kubernetes certificate management controller that helps with issuing certificates. 
- Installing cert-manager on your cluster will issue a certificate by - [Let’s Encrypt](https://letsencrypt.org/) and ensure that certificates are valid and up-to-date. - If you have configured GitLab's Kubernetes integration, you can deploy it to - your cluster by installing the + To enable HTTPS endpoints for your application, you must install cert-manager, + a native Kubernetes certificate management controller that helps with issuing + certificates. Installing cert-manager on your cluster issues a + [Let’s Encrypt](https://letsencrypt.org/) certificate and ensures the + certificates are valid and up-to-date. If you've configured GitLab's Kubernetes + integration, you can deploy it to your cluster by installing the [GitLab-managed app for cert-manager](../../user/clusters/applications.md#cert-manager). -If you do not have Kubernetes or Prometheus installed, then Auto Review Apps, -Auto Deploy, and Auto Monitoring will be silently skipped. +If you don't have Kubernetes or Prometheus installed, then +[Auto Review Apps](stages.md#auto-review-apps), +[Auto Deploy](stages.md#auto-deploy), and [Auto Monitoring](stages.md#auto-monitoring) +are skipped. + +After all requirements are met, you can [enable Auto DevOps](#enablingdisabling-auto-devops). + +### AWS ECS + +> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/208132) in GitLab 13.0. + +You can choose to target [AWS ECS](../../ci/cloud_deployment/index.md) as a deployment platform instead of using Kubernetes. + +To get started on Auto DevOps to ECS, you'll have to add a specific Environment +Variable. To do so, follow these steps: -One all requirements are met, you can go ahead and [enable Auto DevOps](#enablingdisabling-auto-devops). +1. In your project, go to **Settings > CI / CD** and expand the **Variables** + section. + +1. Specify which AWS platform to target during the Auto DevOps deployment + by adding the `AUTO_DEVOPS_PLATFORM_TARGET` variable. + +1. 
Give this variable the value `ECS` before saving it. + +When you trigger a pipeline, if you have Auto DevOps enabled and if you have correctly +[entered AWS credentials as environment variables](../../ci/cloud_deployment/index.md#deploy-your-application-to-aws-elastic-container-service-ecs), +your application will be deployed to AWS ECS. + +NOTE: **Note:** +If you have both a valid `AUTO_DEVOPS_PLATFORM_TARGET` variable and a Kubernetes cluster tied to your project, +only the deployment to Kubernetes will run. ## Auto DevOps base domain -The Auto DevOps base domain is required if you want to make use of +The Auto DevOps base domain is required to use [Auto Review Apps](stages.md#auto-review-apps), [Auto Deploy](stages.md#auto-deploy), and -[Auto Monitoring](stages.md#auto-monitoring). It can be defined in any of the following -places: - -- either under the cluster's settings, whether for [projects](../../user/project/clusters/index.md#base-domain) or [groups](../../user/group/clusters/index.md#base-domain) -- or in instance-wide settings in the **Admin Area > Settings** under the "Continuous Integration and Delivery" section +[Auto Monitoring](stages.md#auto-monitoring). You can define the base domain in +any of the following places: + +- either under the cluster's settings, whether for + [projects](../../user/project/clusters/index.md#base-domain) or + [groups](../../user/group/clusters/index.md#base-domain) +- or in instance-wide settings in **{admin}** **Admin Area > Settings** under the + **Continuous Integration and Delivery** section - or at the project level as a variable: `KUBE_INGRESS_BASE_DOMAIN` - or at the group level as a variable: `KUBE_INGRESS_BASE_DOMAIN`. @@ -204,55 +251,57 @@ The base domain variable `KUBE_INGRESS_BASE_DOMAIN` follows the same order of precedence as other environment [variables](../../ci/variables/README.md#priority-of-environment-variables).
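As a sketch of the project-level option (the domain is illustrative), the base domain can be supplied as a variable in `.gitlab-ci.yml` or in the project's CI/CD settings:

```yaml
# Illustrative: project-level base domain for Auto Review Apps,
# Auto Deploy, and Auto Monitoring.
variables:
  KUBE_INGRESS_BASE_DOMAIN: example.com  # hypothetical domain
```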
TIP: **Tip:** -If you're using the [GitLab managed app for Ingress](../../user/clusters/applications.md#ingress), -the URL endpoint should be automatically configured for you. All you have to do +If you use the [GitLab managed app for Ingress](../../user/clusters/applications.md#ingress), +the URL endpoint should be automatically configured for you. All you must do is use its value for the `KUBE_INGRESS_BASE_DOMAIN` variable. NOTE: **Note:** `AUTO_DEVOPS_DOMAIN` was [deprecated in GitLab 11.8](https://gitlab.com/gitlab-org/gitlab-foss/issues/52363) -and replaced with `KUBE_INGRESS_BASE_DOMAIN`. It was removed in +and replaced with `KUBE_INGRESS_BASE_DOMAIN`, and removed in [GitLab 12.0](https://gitlab.com/gitlab-org/gitlab-foss/issues/56959). -A wildcard DNS A record matching the base domain(s) is required, for example, -given a base domain of `example.com`, you'd need a DNS entry like: +Auto DevOps requires a wildcard DNS A record matching the base domain(s). For +a base domain of `example.com`, you'd need a DNS entry like: -```text +```plaintext *.example.com 3600 A 1.2.3.4 ``` -In this case, `example.com` is the domain name under which the deployed apps will be served, -and `1.2.3.4` is the IP address of your load balancer; generally NGINX -([see requirements](#requirements)). How to set up the DNS record is beyond -the scope of this document; you should check with your DNS provider. +In this case, the deployed applications are served from `example.com`, and `1.2.3.4` +is the IP address of your load balancer; generally NGINX ([see requirements](#requirements)). +Setting up the DNS record is beyond the scope of this document; check with your +DNS provider for information. -Alternatively you can use free public services like [nip.io](https://nip.io) -which provide automatic wildcard DNS without any configuration. Just set the -Auto DevOps base domain to `1.2.3.4.nip.io`. 
+Alternatively, you can use free public services like [nip.io](https://nip.io) +which provide automatic wildcard DNS without any configuration. For [nip.io](https://nip.io), +set the Auto DevOps base domain to `1.2.3.4.nip.io`. -Once set up, all requests will hit the load balancer, which in turn will route -them to the Kubernetes pods that run your application(s). +After completing setup, all requests hit the load balancer, which routes requests +to the Kubernetes pods running your application. ## Enabling/Disabling Auto DevOps -When first using Auto DevOps, review the [requirements](#requirements) to ensure all necessary components to make -full use of Auto DevOps are available. If this is your fist time, we recommend you follow the -[quick start guide](quick_start_guide.md). +When first using Auto DevOps, review the [requirements](#requirements) to ensure +all the necessary components to make full use of Auto DevOps are available. First-time +users should follow the [quick start guide](quick_start_guide.md). -GitLab.com users can enable/disable Auto DevOps at the project-level only. Self-managed users -can enable/disable Auto DevOps at the project-level, group-level or instance-level. +GitLab.com users can enable or disable Auto DevOps only at the project level. +Self-managed users can enable or disable Auto DevOps at the project, group, or +instance level. ### At the project level -If enabling, check that your project doesn't have a `.gitlab-ci.yml`, or if one exists, remove it. +If enabling, check that your project does not have a `.gitlab-ci.yml`, or if one exists, remove it. -1. Go to your project's **Settings > CI/CD > Auto DevOps**. -1. Toggle the **Default to Auto DevOps pipeline** checkbox (checked to enable, unchecked to disable) -1. When enabling, it's optional but recommended to add in the [base domain](#auto-devops-base-domain) - that will be used by Auto DevOps to [deploy your application](stages.md#auto-deploy) +1. 
Go to your project's **{settings}** **Settings > CI/CD > Auto DevOps**. +1. Select the **Default to Auto DevOps pipeline** checkbox to enable it. +1. (Optional, but recommended) When enabling, you can add in the + [base domain](#auto-devops-base-domain) Auto DevOps uses to + [deploy your application](stages.md#auto-deploy), and choose the [deployment strategy](#deployment-strategy). 1. Click **Save changes** for the changes to take effect. -When the feature has been enabled, an Auto DevOps pipeline is triggered on the default branch. +After enabling the feature, an Auto DevOps pipeline is triggered on the default branch. ### At the group level @@ -260,48 +309,50 @@ When the feature has been enabled, an Auto DevOps pipeline is triggered on the d Only administrators and group owners can enable or disable Auto DevOps at the group level. -To enable or disable Auto DevOps at the group-level: +When enabling or disabling Auto DevOps at group level, group configuration is +implicitly used for the subgroups and projects inside that group, unless Auto DevOps +is specifically enabled or disabled on the subgroup or project. -1. Go to group's **Settings > CI/CD > Auto DevOps** page. -1. Toggle the **Default to Auto DevOps pipeline** checkbox (checked to enable, unchecked to disable). -1. Click **Save changes** button for the changes to take effect. +To enable or disable Auto DevOps at the group level: -When enabling or disabling Auto DevOps at group-level, group configuration will be implicitly used for -the subgroups and projects inside that group, unless Auto DevOps is specifically enabled or disabled on -the subgroup or project. +1. Go to your group's **{settings}** **Settings > CI/CD > Auto DevOps** page. +1. Select the **Default to Auto DevOps pipeline** checkbox to enable it. +1. Click **Save changes** for the changes to take effect. 
### At the instance level (Administrators only) Even when disabled at the instance level, group owners and project maintainers can still enable Auto DevOps at the group and project level, respectively. -1. Go to **Admin Area > Settings > Continuous Integration and Deployment**. -1. Toggle the checkbox labeled **Default to Auto DevOps pipeline for all projects**. -1. If enabling, optionally set up the Auto DevOps [base domain](#auto-devops-base-domain) which will be used for Auto Deploy and Auto Review Apps. +1. Go to **{admin}** **Admin Area > Settings > Continuous Integration and Deployment**. +1. Select **Default to Auto DevOps pipeline for all projects** to enable it. +1. (Optional) You can set up the Auto DevOps [base domain](#auto-devops-base-domain), + for Auto Deploy and Auto Review Apps to use. 1. Click **Save changes** for the changes to take effect. ### Enable for a percentage of projects -There is also a feature flag to enable Auto DevOps by default to your chosen percentage of projects. +You can use a feature flag to enable Auto DevOps by default to your desired percentage +of projects. From the console, enter the following command, replacing `10` with +your desired percentage: -This can be enabled from the console with the following, which uses the example of 10%: - -`Feature.get(:force_autodevops_on_by_default).enable_percentage_of_actors(10)` +```ruby +Feature.get(:force_autodevops_on_by_default).enable_percentage_of_actors(10) +``` ### Deployment strategy > [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/issues/38542) in GitLab 11.0. You can change the deployment strategy used by Auto DevOps by going to your -project's **Settings > CI/CD > Auto DevOps**. - -The available options are: +project's **{settings}** **Settings > CI/CD > Auto DevOps**. The following options +are available: - **Continuous deployment to production**: Enables [Auto Deploy](stages.md#auto-deploy) with `master` branch directly deployed to production. 
- **Continuous deployment to production using timed incremental rollout**: Sets the [`INCREMENTAL_ROLLOUT_MODE`](customize.md#timed-incremental-rollout-to-production-premium) variable - to `timed`, and production deployment will be executed with a 5 minute delay between + to `timed`. Production deployments execute with a 5 minute delay between each increment in rollout. - **Automatic deployment to staging, manual deployment to production**: Sets the [`STAGING_ENABLED`](customize.md#deploy-policy-for-staging-and-production-environments) and @@ -313,63 +364,61 @@ The available options are: ## Using multiple Kubernetes clusters **(PREMIUM)** -When using Auto DevOps, you may want to deploy different environments to -different Kubernetes clusters. This is possible due to the 1:1 connection that -[exists between them](../../user/project/clusters/index.md#multiple-kubernetes-clusters-premium). +When using Auto DevOps, you can deploy different environments to +different Kubernetes clusters, due to the 1:1 connection +[existing between them](../../user/project/clusters/index.md#multiple-kubernetes-clusters-premium). -In the [Auto DevOps template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml) (used behind the scenes by Auto DevOps), there -are currently 3 defined environment names that you need to know: +The [template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml) +used by Auto DevOps currently defines 3 environment names: - `review/` (every environment starting with `review/`) - `staging` - `production` -Those environments are tied to jobs that use [Auto Deploy](stages.md#auto-deploy), so -except for the environment scope, they would also need to have a different -domain they would be deployed to. 
This is why you need to define a separate -`KUBE_INGRESS_BASE_DOMAIN` variable for all the above -[based on the environment](../../ci/variables/README.md#limiting-environment-scopes-of-environment-variables). +Those environments are tied to jobs using [Auto Deploy](stages.md#auto-deploy), so +except for the environment scope, they must have a different deployment domain. +You must define a separate `KUBE_INGRESS_BASE_DOMAIN` variable for each of the above +[based on the environment](../../ci/variables/README.md#limit-the-environment-scopes-of-environment-variables). -The following table is an example of how the three different clusters would -be configured. +The following table is an example of how to configure the three different clusters: | Cluster name | Cluster environment scope | `KUBE_INGRESS_BASE_DOMAIN` variable value | Variable environment scope | Notes | |--------------|---------------------------|-------------------------------------------|----------------------------|---| -| review | `review/*` | `review.example.com` | `review/*` | The review cluster which will run all [Review Apps](../../ci/review_apps/index.md). `*` is a wildcard, which means it will be used by every environment name starting with `review/`. | -| staging | `staging` | `staging.example.com` | `staging` | (Optional) The staging cluster which will run the deployments of the staging environments. You need to [enable it first](customize.md#deploy-policy-for-staging-and-production-environments). | -| production | `production` | `example.com` | `production` | The production cluster which will run the deployments of the production environment. You can use [incremental rollouts](customize.md#incremental-rollout-to-production-premium). | +| review | `review/*` | `review.example.com` | `review/*` | The review cluster which runs all [Review Apps](../../ci/review_apps/index.md). `*` is a wildcard, used by every environment name starting with `review/`. 
| +| staging | `staging` | `staging.example.com` | `staging` | (Optional) The staging cluster which runs the deployments of the staging environments. You must [enable it first](customize.md#deploy-policy-for-staging-and-production-environments). | +| production | `production` | `example.com` | `production` | The production cluster which runs the production environment deployments. You can use [incremental rollouts](customize.md#incremental-rollout-to-production-premium). | To add a different cluster for each environment: -1. Navigate to your project's **Operations > Kubernetes** and create the Kubernetes clusters - with their respective environment scope as described from the table above. +1. Navigate to your project's **{cloud-gear}** **Operations > Kubernetes**. +1. Create the Kubernetes clusters with their respective environment scope, as + described from the table above. ![Auto DevOps multiple clusters](img/autodevops_multiple_clusters.png) -1. After the clusters are created, navigate to each one and install Helm Tiller +1. After creating the clusters, navigate to each cluster and install Helm Tiller and Ingress. Wait for the Ingress IP address to be assigned. -1. Make sure you have [configured your DNS](#auto-devops-base-domain) with the +1. Make sure you've [configured your DNS](#auto-devops-base-domain) with the specified Auto DevOps domains. -1. Navigate to each cluster's page, through **Operations > Kubernetes**, +1. Navigate to each cluster's page, through **{cloud-gear}** **Operations > Kubernetes**, and add the domain based on its Ingress IP address. -Now that all is configured, you can test your setup by creating a merge request -and verifying that your app is deployed as a review app in the Kubernetes +After completing configuration, you can test your setup by creating a merge request +and verifying your application is deployed as a Review App in the Kubernetes cluster with the `review/*` environment scope. 
Similarly, you can check the other environments. ## Currently supported languages Note that not all buildpacks support Auto Test yet, as it's a relatively new -enhancement. All of Heroku's [officially supported -languages](https://devcenter.heroku.com/articles/heroku-ci#currently-supported-languages) -support it, and some third-party buildpacks as well e.g., Go, Node, Java, PHP, -Python, Ruby, Gradle, Scala, and Elixir all support Auto Test, but notably the -multi-buildpack does not. +enhancement. All of Heroku's +[officially supported languages](https://devcenter.heroku.com/articles/heroku-ci#supported-languages) +support Auto Test. The languages supported by Heroku's Herokuish buildpacks all +support Auto Test, but notably the multi-buildpack does not. As of GitLab 10.0, the supported buildpacks are: -```text +```plaintext - heroku-buildpack-multi v1.0.0 - heroku-buildpack-ruby v168 - heroku-buildpack-nodejs v99 @@ -385,24 +434,27 @@ As of GitLab 10.0, the supported buildpacks are: - buildpack-nginx v8 ``` +If your application needs a buildpack that is not in the above list, you +might want to use a [custom buildpack](customize.md#custom-buildpacks). + ## Limitations The following restrictions apply. ### Private registry support -There is no documented way of using private container registry with Auto DevOps. -We strongly advise using GitLab Container Registry with Auto DevOps in order to +No documented way of using private container registry with Auto DevOps exists. +We strongly advise using GitLab Container Registry with Auto DevOps to simplify configuration and prevent any unforeseen issues. ### Installing Helm behind a proxy -GitLab does not yet support installing [Helm as a GitLab-managed App](../../user/clusters/applications.md#helm) when -behind a proxy. 
Users who wish to do so must inject their proxy settings -into the installation pods at runtime, for example by using a +GitLab does not support installing [Helm as a GitLab-managed App](../../user/clusters/applications.md#helm) when +behind a proxy. Users who want to do so must inject their proxy settings +into the installation pods at runtime, such as by using a [`PodPreset`](https://kubernetes.io/docs/concepts/workloads/pods/podpreset/): -```yml +```yaml apiVersion: settings.k8s.io/v1alpha1 kind: PodPreset metadata: @@ -418,28 +470,52 @@ spec: ## Troubleshooting -- Auto Build and Auto Test may fail to detect your language or framework with the - following error: - - ```plaintext - Step 5/11 : RUN /bin/herokuish buildpack build - ---> Running in eb468cd46085 - -----> Unable to select a buildpack - The command '/bin/sh -c /bin/herokuish buildpack build' returned a non-zero code: 1 - ``` - - The following are possible reasons: - - - Your application may be missing the key files the buildpack is looking for. For - example, for Ruby applications you must have a `Gemfile` to be properly detected, - even though it is possible to write a Ruby app without a `Gemfile`. - - There may be no buildpack for your application. Try specifying a - [custom buildpack](customize.md#custom-buildpacks). -- Auto Test may fail because of a mismatch between testing frameworks. In this - case, you may need to customize your `.gitlab-ci.yml` with your test commands. -- Auto Deploy will fail if GitLab can not create a Kubernetes namespace and - service account for your project. For help debugging this issue, see - [Troubleshooting failed deployment jobs](../../user/project/clusters/index.md#troubleshooting). 
+### Unable to select a buildpack + +Auto Build and Auto Test may fail to detect your language or framework with the +following error: + +```plaintext +Step 5/11 : RUN /bin/herokuish buildpack build + ---> Running in eb468cd46085 + -----> Unable to select a buildpack +The command '/bin/sh -c /bin/herokuish buildpack build' returned a non-zero code: 1 +``` + +The following are possible reasons: + +- Your application may be missing the key files the buildpack is looking for. + Ruby applications require a `Gemfile` to be properly detected, + even though it's possible to write a Ruby app without a `Gemfile`. +- No buildpack may exist for your application. Try specifying a + [custom buildpack](customize.md#custom-buildpacks). + +### Mismatch between testing frameworks + +Auto Test may fail because of a mismatch between testing frameworks. In this +case, you may need to customize your `.gitlab-ci.yml` with your test commands. + +### Pipeline that extends Auto DevOps with only / except fails + +If your pipeline fails with the following message: + +```plaintext +Found errors in your .gitlab-ci.yml: + + jobs:test config key may not be used with `rules`: only +``` + +This error appears when the included job’s rules configuration has been overridden with the `only` or `except` syntax. +To fix this issue, you must either: + +- Transition your `only/except` syntax to rules. +- (Temporarily) Pin your templates to the [GitLab 12.10 based templates](https://gitlab.com/gitlab-org/auto-devops-v12-10). + +### Failure to create a Kubernetes namespace + +Auto Deploy will fail if GitLab can't create a Kubernetes namespace and +service account for your project. For help debugging this issue, see +[Troubleshooting failed deployment jobs](../../user/project/clusters/index.md#troubleshooting). 
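The `only`/`except` to `rules` transition described in the troubleshooting entry above might look like the following sketch in `.gitlab-ci.yml`. The job name `test` matches the error message shown, and the rule is an illustrative condition, not the template's exact logic:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

# Override the included job with `rules:` instead of `only:`/`except:`,
# because a single job may not mix the two syntaxes.
test:
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
```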
## Development guides diff --git a/doc/topics/autodevops/quick_start_guide.md b/doc/topics/autodevops/quick_start_guide.md index 53d5e664bc1..859219689f9 100644 --- a/doc/topics/autodevops/quick_start_guide.md +++ b/doc/topics/autodevops/quick_start_guide.md @@ -88,7 +88,7 @@ to deploy this project to. [Cloud Run](../../user/project/clusters/add_gke_clusters.md#cloud-run-for-anthos), Istio, and HTTP Load Balancing add-ons for this cluster. - **GitLab-managed cluster** - Select this checkbox to - [allow GitLab to manage namespace and service accounts](../..//user/project/clusters/index.md#gitlab-managed-clusters) for this cluster. + [allow GitLab to manage namespace and service accounts](../../user/project/clusters/index.md#gitlab-managed-clusters) for this cluster. 1. Click **Create Kubernetes cluster**. @@ -215,12 +215,12 @@ you to common environment tasks: about the Kubernetes cluster and how the application affects it in terms of memory usage, CPU usage, and latency - **Deploy to** (**{play}** **{angle-down}**) - Displays a list of environments you can deploy to -- **Terminal** (**{terminal}**) - Opens a [web terminal](../../ci/environments.md#web-terminals) +- **Terminal** (**{terminal}**) - Opens a [web terminal](../../ci/environments/index.md#web-terminals) session inside the container where the application is running - **Re-deploy to environment** (**{repeat}**) - For more information, see - [Retrying and rolling back](../../ci/environments.md#retrying-and-rolling-back) + [Retrying and rolling back](../../ci/environments/index.md#retrying-and-rolling-back) - **Stop environment** (**{stop}**) - For more information, see - [Stopping an environment](../../ci/environments.md#stopping-an-environment) + [Stopping an environment](../../ci/environments/index.md#stopping-an-environment) GitLab displays the [Deploy Board](../../user/project/deploy_boards.md) below the environment's information, with squares representing pods in your diff --git 
a/doc/topics/autodevops/stages.md b/doc/topics/autodevops/stages.md index 66b76dcc05a..8c56a87ba30 100644 --- a/doc/topics/autodevops/stages.md +++ b/doc/topics/autodevops/stages.md @@ -1,47 +1,50 @@ # Stages of Auto DevOps -The following sections describe the stages of Auto DevOps. Read them carefully -to understand how each one works. +The following sections describe the stages of [Auto DevOps](index.md). +Read them carefully to understand how each one works. ## Auto Build Auto Build creates a build of the application using an existing `Dockerfile` or -Heroku buildpacks. - -Either way, the resulting Docker image is automatically pushed to the -[Container Registry](../../user/packages/container_registry/index.md) and tagged with the commit SHA or tag. +Heroku buildpacks. The resulting Docker image is pushed to the +[Container Registry](../../user/packages/container_registry/index.md), and tagged +with the commit SHA or tag. ### Auto Build using a Dockerfile -If a project's repository contains a `Dockerfile` at its root, Auto Build will use +If a project's repository contains a `Dockerfile` at its root, Auto Build uses `docker build` to create a Docker image. -If you are also using Auto Review Apps and Auto Deploy and choose to provide -your own `Dockerfile`, make sure you expose your application to port -`5000` as this is the port assumed by the -[default Helm chart](https://gitlab.com/gitlab-org/charts/auto-deploy-app). Alternatively you can override the default values by [customizing the Auto Deploy Helm chart](customize.md#custom-helm-chart) +If you're also using Auto Review Apps and Auto Deploy, and you choose to provide +your own `Dockerfile`, you must either: + +- Expose your application to port `5000`, as the + [default Helm chart](https://gitlab.com/gitlab-org/charts/auto-deploy-app) + assumes this port is available. +- Override the default values by + [customizing the Auto Deploy Helm chart](customize.md#custom-helm-chart). 
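For the first option, a minimal `Dockerfile` might look like the following sketch, assuming a Rack-based Ruby application. The base image and commands are illustrative; the key point is listening on port 5000:

```dockerfile
# Sketch only: the default auto-deploy-app Helm chart assumes the
# application listens on port 5000.
FROM ruby:2.6-alpine
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 5000
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "--port", "5000"]
```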
### Auto Build using Heroku buildpacks -Auto Build builds an application using a project's `Dockerfile` if present, or -otherwise it will use [Herokuish](https://github.com/gliderlabs/herokuish) +Auto Build builds an application using a project's `Dockerfile` if present. If no +`Dockerfile` is present, it uses [Herokuish](https://github.com/gliderlabs/herokuish) and [Heroku buildpacks](https://devcenter.heroku.com/articles/buildpacks) -to automatically detect and build the application into a Docker image. +to detect and build the application into a Docker image. -Each buildpack requires certain files to be in your project's repository for -Auto Build to successfully build your application. For example, the following -files are required at the root of your application's repository, depending on -the language: +Each buildpack requires your project's repository to contain certain files for +Auto Build to build your application successfully. For example, your application's +root directory must contain the appropriate file for your application's +language: -- A `Pipfile` or `requirements.txt` file for Python projects. -- A `Gemfile` or `Gemfile.lock` file for Ruby projects. +- For Python projects, a `Pipfile` or `requirements.txt` file. +- For Ruby projects, a `Gemfile` or `Gemfile.lock` file. For the requirements of other languages and frameworks, read the -[buildpacks docs](https://devcenter.heroku.com/articles/buildpacks#officially-supported-buildpacks). +[Heroku buildpacks documentation](https://devcenter.heroku.com/articles/buildpacks#officially-supported-buildpacks). TIP: **Tip:** If Auto Build fails despite the project meeting the buildpack requirements, set -a project variable `TRACE=true` to enable verbose logging, which may help to +a project variable `TRACE=true` to enable verbose logging, which may help you troubleshoot. ### Auto Build using Cloud Native Buildpacks (beta) @@ -73,13 +76,13 @@ yet part of the Cloud Native Buildpack specification. 
For more information, see ## Auto Test -Auto Test automatically runs the appropriate tests for your application using -[Herokuish](https://github.com/gliderlabs/herokuish) and [Heroku -buildpacks](https://devcenter.heroku.com/articles/buildpacks) by analyzing +Auto Test runs the appropriate tests for your application using +[Herokuish](https://github.com/gliderlabs/herokuish) and +[Heroku buildpacks](https://devcenter.heroku.com/articles/buildpacks) by analyzing your project to detect the language and framework. Several languages and frameworks are detected automatically, but if your language is not detected, -you may succeed with a [custom buildpack](customize.md#custom-buildpacks). Check the -[currently supported languages](index.md#currently-supported-languages). +you may be able to create a [custom buildpack](customize.md#custom-buildpacks). +Check the [currently supported languages](index.md#currently-supported-languages). Auto Test uses tests you already have in your application. If there are no tests, it's up to you to add them. @@ -88,12 +91,10 @@ tests, it's up to you to add them. Auto Code Quality uses the [Code Quality image](https://gitlab.com/gitlab-org/ci-cd/codequality) to run -static analysis and other code checks on the current code. The report is -created, and is uploaded as an artifact which you can later download and check -out. - -Any differences between the source and target branches are also -[shown in the merge request widget](../../user/project/merge_requests/code_quality.md). +static analysis and other code checks on the current code. After creating the +report, it's uploaded as an artifact which you can later download and check +out. The merge request widget also displays any +[differences between the source and target branches](../../user/project/merge_requests/code_quality.md). 
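The uploaded report can also be retrieved with the job artifacts API. The following is a sketch, assuming a hypothetical project ID `42` and the template's `code_quality` job name:

```shell
# Sketch only: download the latest code quality artifacts from `master`.
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/42/jobs/artifacts/master/download?job=code_quality"
```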
## Auto SAST **(ULTIMATE)** @@ -101,14 +102,17 @@ Any differences between the source and target branches are also Static Application Security Testing (SAST) uses the [SAST Docker image](https://gitlab.com/gitlab-org/security-products/sast) to run static -analysis on the current code and checks for potential security issues. The -Auto SAST stage will be skipped on licenses other than Ultimate and requires GitLab Runner 11.5 or above. +analysis on the current code, and checks for potential security issues. The +Auto SAST stage will be skipped on licenses other than +[Ultimate](https://about.gitlab.com/pricing/), and requires +[GitLab Runner](https://docs.gitlab.com/runner/) 11.5 or above. -Once the report is created, it's uploaded as an artifact which you can later download and -check out. +After creating the report, it's uploaded as an artifact which you can later +download and check out. The merge request widget also displays any security +warnings. -Any security warnings are also shown in the merge request widget. Read more how -[SAST works](../../user/application_security/sast/index.md). +To learn more about [how SAST works](../../user/application_security/sast/index.md), +see the documentation. ## Auto Dependency Scanning **(ULTIMATE)** @@ -116,16 +120,17 @@ Any security warnings are also shown in the merge request widget. Read more how Dependency Scanning uses the [Dependency Scanning Docker image](https://gitlab.com/gitlab-org/security-products/dependency-scanning) -to run analysis on the project dependencies and checks for potential security issues. -The Auto Dependency Scanning stage will be skipped on licenses other than Ultimate -and requires GitLab Runner 11.5 or above. +to run analysis on the project dependencies and check for potential security issues. +The Auto Dependency Scanning stage is skipped on licenses other than +[Ultimate](https://about.gitlab.com/pricing/) and requires +[GitLab Runner](https://docs.gitlab.com/runner/) 11.5 or above. 
-Once the
-report is created, it's uploaded as an artifact which you can later download and
-check out.
+After creating the report, it's uploaded as an artifact which you can later download and
+check out. The merge request widget displays any security warnings detected.

-Any security warnings are also shown in the merge request widget. Read more about
-[Dependency Scanning](../../user/application_security/dependency_scanning/index.md).
+To learn more about
+[Dependency Scanning](../../user/application_security/dependency_scanning/index.md),
+see the documentation.

## Auto License Compliance **(ULTIMATE)**

@@ -134,60 +139,57 @@ Any security warnings are also shown in the merge request widget. Read more abou

License Compliance uses the
[License Compliance Docker image](https://gitlab.com/gitlab-org/security-products/license-management)
to search the project dependencies for their license. The Auto License Compliance stage
-will be skipped on licenses other than Ultimate.
+is skipped on licenses other than [Ultimate](https://about.gitlab.com/pricing/).

-Once the
-report is created, it's uploaded as an artifact which you can later download and
-check out.
+After creating the report, it's uploaded as an artifact which you can later download and
+check out. The merge request displays any detected licenses.

-Any licenses are also shown in the merge request widget. Read more how
-[License Compliance works](../../user/compliance/license_compliance/index.md).
+To learn more about
+[License Compliance](../../user/compliance/license_compliance/index.md), see the
+documentation.

## Auto Container Scanning **(ULTIMATE)**

> Introduced in GitLab 10.4.

-Vulnerability Static Analysis for containers uses
-[Clair](https://github.com/quay/clair) to run static analysis on a
-Docker image and checks for potential security issues. The Auto Container Scanning stage
-will be skipped on licenses other than Ultimate.
+Vulnerability Static Analysis for containers uses [Clair](https://github.com/quay/clair) +to check for potential security issues on Docker images. The Auto Container Scanning +stage is skipped on licenses other than [Ultimate](https://about.gitlab.com/pricing/). -Once the report is -created, it's uploaded as an artifact which you can later download and -check out. +After creating the report, it's uploaded as an artifact which you can later download and +check out. The merge request displays any detected security issues. -Any security warnings are also shown in the merge request widget. Read more how -[Container Scanning works](../../user/application_security/container_scanning/index.md). +To learn more about +[Container Scanning](../../user/application_security/container_scanning/index.md), +see the documentation. ## Auto Review Apps -This is an optional step, since many projects do not have a Kubernetes cluster -available. If the [requirements](index.md#requirements) are not met, the job will -silently be skipped. +This is an optional step, since many projects don't have a Kubernetes cluster +available. If the [requirements](index.md#requirements) are not met, the job is +silently skipped. [Review Apps](../../ci/review_apps/index.md) are temporary application environments based on the branch's code so developers, designers, QA, product managers, and other reviewers can actually see and interact with code changes as part of the review process. Auto Review Apps create a Review App for each branch. -Auto Review Apps will deploy your app to your Kubernetes cluster only. When no cluster -is available, no deployment will occur. +Auto Review Apps deploy your application to your Kubernetes cluster only. If no cluster +is available, no deployment occurs. -The Review App will have a unique URL based on the project ID, the branch or tag -name, and a unique number, combined with the Auto DevOps base domain. For -example, `13083-review-project-branch-123456.example.com`. 
A link to the Review App shows -up in the merge request widget for easy discovery. When the branch or tag is deleted, -for example after the merge request is merged, the Review App will automatically -be deleted. +The Review App has a unique URL based on a combination of the project ID, the branch +or tag name, a unique number, and the Auto DevOps base domain, such as +`13083-review-project-branch-123456.example.com`. The merge request widget displays +a link to the Review App for easy discovery. When the branch or tag is deleted, +such as after merging a merge request, the Review App is also deleted. Review apps are deployed using the [auto-deploy-app](https://gitlab.com/gitlab-org/charts/auto-deploy-app) chart with -Helm, which can be [customized](customize.md#custom-helm-chart). The app will be deployed into the [Kubernetes -namespace](../../user/project/clusters/index.md#deployment-variables) +Helm, which you can [customize](customize.md#custom-helm-chart). The application deploys +into the [Kubernetes namespace](../../user/project/clusters/index.md#deployment-variables) for the environment. -Since GitLab 11.4, a [local -Tiller](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22036) is +Since GitLab 11.4, [local Tiller](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22036) is used. Previous versions of GitLab had a Tiller installed in the project namespace. @@ -196,52 +198,64 @@ Your apps should *not* be manipulated outside of Helm (using Kubernetes directly This can cause confusion with Helm not detecting the change and subsequent deploys with Auto DevOps can undo your changes. Also, if you change something and want to undo it by deploying again, Helm may not detect that anything changed -in the first place, and thus not realize that it needs to re-apply the old config. +in the first place, and thus not realize that it needs to re-apply the old configuration. 
## Auto DAST **(ULTIMATE)**

> Introduced in [GitLab Ultimate](https://about.gitlab.com/pricing/) 10.4.

-Dynamic Application Security Testing (DAST) uses the
-popular open source tool [OWASP ZAProxy](https://github.com/zaproxy/zaproxy)
-to perform an analysis on the current code and checks for potential security
-issues. The Auto DAST stage will be skipped on licenses other than Ultimate.
+Dynamic Application Security Testing (DAST) uses the popular open source tool
+[OWASP ZAProxy](https://github.com/zaproxy/zaproxy) to analyze the current code
+and check for potential security issues. The Auto DAST stage is skipped on
+licenses other than [Ultimate](https://about.gitlab.com/pricing/).

-Once the DAST scan is complete, any security warnings are shown
-on the [Security Dashboard](../../user/application_security/security_dashboard/index.md)
-and the Merge Request Widget. Read how
-[DAST works](../../user/application_security/dast/index.md).
+- On your default branch, DAST scans an application deployed specifically for that purpose
+ unless you [override the DAST target](#overriding-the-dast-target).
+ The app is deleted after DAST has run.
+- On feature branches, DAST scans the [review app](#auto-review-apps).

-On your default branch, DAST scans an app deployed specifically for that purpose.
-The app is deleted after DAST has run.
+After the DAST scan completes, any security warnings are displayed
+on the [Security Dashboard](../../user/application_security/security_dashboard/index.md)
+and the merge request widget.

-On feature branches, DAST scans the [review app](#auto-review-apps).
+To learn more about
+[Dynamic Application Security Testing](../../user/application_security/dast/index.md),
+see the documentation.

### Overriding the DAST target

To use a custom target instead of the auto-deployed review apps,
set a `DAST_WEBSITE` environment variable to the URL for DAST to scan.
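The `DAST_WEBSITE` override can be set in the UI as a CI/CD variable, or sketched in `.gitlab-ci.yml` as follows. The URL is a placeholder for a disposable test deployment of your own:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Point Auto DAST at a custom target instead of the review app.
  # Do not point this at a real staging or production environment.
  DAST_WEBSITE: "https://dast-target.example.com"
```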
-NOTE: **Note:**
-If [DAST Full Scan](../../user/application_security/dast/index.md#full-scan) is enabled, it is strongly advised **not**
+DANGER: **Danger:**
+If [DAST Full Scan](../../user/application_security/dast/index.md#full-scan) is
+enabled, GitLab strongly advises **not**
to set `DAST_WEBSITE` to any staging or production environment. DAST Full Scan
-actively attacks the target, which can take down the application and lead to
+actively attacks the target, which can take down your application and lead to
data loss or corruption.

### Disabling Auto DAST

-DAST can be disabled:
+You can disable DAST:

- On all branches by setting the `DAST_DISABLED` environment variable to `"true"`.
-- Only on the default branch by setting the `DAST_DISABLED_FOR_DEFAULT_BRANCH` environment variable to `"true"`.
+- Only on the default branch by setting the `DAST_DISABLED_FOR_DEFAULT_BRANCH`
+  environment variable to `"true"`.
+- Only on feature branches by setting the `REVIEW_DISABLED` environment variable to
+  `"true"`. This also disables the Review App.

## Auto Browser Performance Testing **(PREMIUM)**

> Introduced in [GitLab Premium](https://about.gitlab.com/pricing/) 10.4.

-Auto Browser Performance Testing utilizes the [Sitespeed.io container](https://hub.docker.com/r/sitespeedio/sitespeed.io/) to measure the performance of a web page. A JSON report is created and uploaded as an artifact, which includes the overall performance score for each page. By default, the root page of Review and Production environments will be tested. If you would like to add additional URL's to test, simply add the paths to a file named `.gitlab-urls.txt` in the root directory, one per line. For example:
+Auto Browser Performance Testing measures the performance of a web page with the
+[Sitespeed.io container](https://hub.docker.com/r/sitespeedio/sitespeed.io/),
+creates a JSON report including the overall performance score for each page, and
+uploads the report as an artifact.
By default, it tests the root page of your Review and
+Production environments. If you want to test additional URLs, add the paths to a
+file named `.gitlab-urls.txt` in the root directory, one path per line. For example:

-```text
+```plaintext
/
/features
/direction
@@ -252,30 +266,31 @@ Any performance differences between the source and target branches are also

## Auto Deploy

-This is an optional step, since many projects do not have a Kubernetes cluster
-available. If the [requirements](index.md#requirements) are not met, the job will
-silently be skipped.
+This is an optional step, since many projects don't have a Kubernetes cluster
+available. If the [requirements](index.md#requirements) are not met, the job is skipped.

After a branch or merge request is merged into the project's default branch (usually
`master`), Auto Deploy deploys the application to a `production`
environment in the Kubernetes cluster, with a namespace based on the project name and unique
-project ID, for example `project-4321`.
+project ID, such as `project-4321`.

-Auto Deploy doesn't include deployments to staging or canary by default, but the
-[Auto DevOps template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml) contains job definitions for these tasks if you want to
-enable them.
+Auto Deploy does not include deployments to staging or canary environments by
+default, but the
+[Auto DevOps template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml)
+contains job definitions for these tasks if you want to enable them.

-You can make use of [environment variables](customize.md#environment-variables) to automatically
-scale your pod replicas and to apply custom arguments to the Auto DevOps `helm upgrade` commands. This is an easy way to [customize the Auto Deploy Helm chart](customize.md#custom-helm-chart).
+You can use [environment variables](customize.md#environment-variables) to automatically +scale your pod replicas, and to apply custom arguments to the Auto DevOps `helm upgrade` +commands. This is an easy way to +[customize the Auto Deploy Helm chart](customize.md#custom-helm-chart). -Apps are deployed using the -[auto-deploy-app](https://gitlab.com/gitlab-org/charts/auto-deploy-app) chart with -Helm. The app will be deployed into the [Kubernetes -namespace](../../user/project/clusters/index.md#deployment-variables) +Helm uses the [auto-deploy-app](https://gitlab.com/gitlab-org/charts/auto-deploy-app) +chart to deploy the application into the +[Kubernetes namespace](../../user/project/clusters/index.md#deployment-variables) for the environment. -Since GitLab 11.4, a [local -Tiller](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22036) is +Since GitLab 11.4, a +[local Tiller](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22036) is used. Previous versions of GitLab had a Tiller installed in the project namespace. @@ -284,76 +299,85 @@ Your apps should *not* be manipulated outside of Helm (using Kubernetes directly This can cause confusion with Helm not detecting the change and subsequent deploys with Auto DevOps can undo your changes. Also, if you change something and want to undo it by deploying again, Helm may not detect that anything changed -in the first place, and thus not realize that it needs to re-apply the old config. +in the first place, and thus not realize that it needs to re-apply the old configuration. + +### GitLab deploy tokens > [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/19507) in GitLab 11.0. -For internal and private projects a [GitLab Deploy Token](../../user/project/deploy_tokens/index.md#gitlab-deploy-token) -will be automatically created, when Auto DevOps is enabled and the Auto DevOps settings are saved. This Deploy Token -can be used for permanent access to the registry. 
When the GitLab Deploy Token has been manually revoked, it won't be automatically created. +[GitLab Deploy Tokens](../../user/project/deploy_tokens/index.md#gitlab-deploy-token) +are created for internal and private projects when Auto DevOps is enabled, and the +Auto DevOps settings are saved. You can use a Deploy Token for permanent access to +the registry. After you manually revoke the GitLab Deploy Token, it won't be +automatically created. + +If the GitLab Deploy Token can't be found, `CI_REGISTRY_PASSWORD` is +used. -If the GitLab Deploy Token cannot be found, `CI_REGISTRY_PASSWORD` is -used. Note that `CI_REGISTRY_PASSWORD` is only valid during deployment. -This means that Kubernetes will be able to successfully pull the -container image during deployment but in cases where the image needs to -be pulled again, e.g. after pod eviction, Kubernetes will fail to do so -as it will be attempting to fetch the image using -`CI_REGISTRY_PASSWORD`. +NOTE: **Note:** +`CI_REGISTRY_PASSWORD` is only valid during deployment. Kubernetes will be able +to successfully pull the container image during deployment, but if the image must +be pulled again, such as after pod eviction, Kubernetes will fail to do so +as it attempts to fetch the image using `CI_REGISTRY_PASSWORD`. ### Kubernetes 1.16+ > - [Introduced](https://gitlab.com/gitlab-org/charts/auto-deploy-app/-/merge_requests/51) in GitLab 12.8. > - Support for deploying a PostgreSQL version that supports Kubernetes 1.16+ was [introduced](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/merge_requests/49) in GitLab 12.9. +> - Supported out of the box for new deployments as of GitLab 13.0. CAUTION: **Deprecation** -The default value of `extensions/v1beta1` for the `deploymentApiVersion` setting is -deprecated, and is scheduled to be changed to a new default of `apps/v1` in -[GitLab 13.0](https://gitlab.com/gitlab-org/charts/auto-deploy-app/issues/47). 
+The default value for the `deploymentApiVersion` setting was changed from
+`extensions/v1beta1` to `apps/v1` in [GitLab 13.0](https://gitlab.com/gitlab-org/charts/auto-deploy-app/issues/47).

-In Kubernetes 1.16 onwards, a number of [APIs were removed](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/),
+In Kubernetes 1.16 and later, a number of
+[APIs were removed](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/),
including support for `Deployment` in the `extensions/v1beta1` version.

-To use Auto Deploy on a Kubernetes 1.16+ cluster, you must:
+To use Auto Deploy on a Kubernetes 1.16+ cluster:

-1. Set the following in the [`.gitlab/auto-deploy-values.yaml` file](customize.md#customize-values-for-helm-chart):
+1. If you are deploying your application for the first time on GitLab 13.0 or
+   newer, no configuration should be required.

-   ```yml
+1. On GitLab 12.10 or older, set the following in the [`.gitlab/auto-deploy-values.yaml` file](customize.md#customize-values-for-helm-chart):
+
+   ```yaml
   deploymentApiVersion: apps/v1
   ```

-1. Set the:
-
-   - `AUTO_DEVOPS_POSTGRES_CHANNEL` variable to `2`.
-   - `POSTGRES_VERSION` variable to `9.6.16` or higher.
+1. If you have an in-cluster PostgreSQL database installed with
+   `AUTO_DEVOPS_POSTGRES_CHANNEL` set to `1`, follow the [guide to upgrade
+   PostgreSQL](upgrading_postgresql.md).

-   This will opt-in to using a version of the PostgreSQL chart that supports Kubernetes
-   1.16 and higher.
+1. If you are deploying your application for the first time and are using
+   GitLab 12.9 or 12.10, set `AUTO_DEVOPS_POSTGRES_CHANNEL` to `2`.

-DANGER: **Danger:** Opting into `AUTO_DEVOPS_POSTGRES_CHANNEL` version
-`2` will delete the version `1` PostgreSQL database. Please follow the
-guide on [upgrading PostgreSQL](upgrading_postgresql.md) to backup and
-restore your database before opting into version `2`.
+DANGER: **Danger:** On GitLab 12.9 and 12.10, opting into
+`AUTO_DEVOPS_POSTGRES_CHANNEL` version `2` deletes the version `1` PostgreSQL
+database. Follow the [guide to upgrading PostgreSQL](upgrading_postgresql.md)
+to back up and restore your database before opting into version `2`. On
+GitLab 13.0, an additional variable is required to trigger the database
+deletion.

### Migrations

> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/21955) in GitLab 11.4.

-Database initialization and migrations for PostgreSQL can be configured to run
+You can configure database initialization and migrations for PostgreSQL to run
within the application pod by setting the project variables `DB_INITIALIZE` and
`DB_MIGRATE`, respectively.

-If present, `DB_INITIALIZE` will be run as a shell command within an
-application pod as a Helm post-install hook. As some applications will
-not run without a successful database initialization step, GitLab will
-deploy the first release without the application deployment and only the
-database initialization step. After the database initialization completes,
-GitLab will deploy a second release with the application deployment as
-normal.
+If present, `DB_INITIALIZE` is run as a shell command within an application pod
+as a Helm post-install hook. As some applications can't run without a successful
+database initialization step, GitLab deploys the first release without the
+application deployment, and only the database initialization step. After the database
+initialization completes, GitLab deploys a second release with the application
+deployment as normal.

Note that a post-install hook means that if any deploy succeeds,
-`DB_INITIALIZE` will not be processed thereafter.
+`DB_INITIALIZE` won't be processed thereafter.

-If present, `DB_MIGRATE` will be run as a shell command within an application pod as
+If present, `DB_MIGRATE` is run as a shell command within an application pod as
a Helm pre-upgrade hook.
For example, in a Rails application in an image built with @@ -362,38 +386,39 @@ For example, in a Rails application in an image built with - `DB_INITIALIZE` can be set to `RAILS_ENV=production /bin/herokuish procfile exec bin/rails db:setup` - `DB_MIGRATE` can be set to `RAILS_ENV=production /bin/herokuish procfile exec bin/rails db:migrate` -Unless you have a `Dockerfile` in your repo, your image is built with -Herokuish, and you must prefix commands run in these images with `/bin/herokuish -procfile exec` to replicate the environment where your application will run. +Unless your repository contains a `Dockerfile`, your image is built with +Herokuish, and you must prefix commands run in these images with +`/bin/herokuish procfile exec` to replicate the environment where your application +will run. ### Workers -Some web applications need to run extra deployments for "worker processes". For -example, it is common in a Rails application to have a separate worker process +Some web applications must run extra deployments for "worker processes". For +example, Rails applications commonly use separate worker processes to run background tasks like sending emails. The [default Helm chart](https://gitlab.com/gitlab-org/charts/auto-deploy-app) -used in Auto Deploy [has support for running worker -processes](https://gitlab.com/gitlab-org/charts/auto-deploy-app/-/merge_requests/9). +used in Auto Deploy +[has support for running worker processes](https://gitlab.com/gitlab-org/charts/auto-deploy-app/-/merge_requests/9). -In order to run a worker, you'll need to ensure that it is able to respond to +To run a worker, you must ensure the worker can respond to the standard health checks, which expect a successful HTTP response on port -`5000`. For [Sidekiq](https://github.com/mperham/sidekiq), you could make use of -the [`sidekiq_alive` gem](https://rubygems.org/gems/sidekiq_alive) to do this. +`5000`. 
For [Sidekiq](https://github.com/mperham/sidekiq), you can use +the [`sidekiq_alive` gem](https://rubygems.org/gems/sidekiq_alive). -In order to work with Sidekiq, you'll also need to ensure your deployments have -access to a Redis instance. Auto DevOps won't deploy this for you so you'll -need to: +To work with Sidekiq, you must also ensure your deployments have +access to a Redis instance. Auto DevOps won't deploy this instance for you, so +you must: - Maintain your own Redis instance. -- Set a CI variable `K8S_SECRET_REDIS_URL`, which the URL of this instance to - ensure it's passed into your deployments. +- Set a CI variable `K8S_SECRET_REDIS_URL`, which is the URL of this instance, + to ensure it's passed into your deployments. -Once you have configured your worker to respond to health checks, run a Sidekiq +After configuring your worker to respond to health checks, run a Sidekiq worker for your Rails application. You can enable workers by setting the following in the [`.gitlab/auto-deploy-values.yaml` file](customize.md#customize-values-for-helm-chart): -```yml +```yaml workers: sidekiq: replicaCount: 1 @@ -417,7 +442,7 @@ workers: By default, all Kubernetes pods are [non-isolated](https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods), -meaning that they will accept traffic to and from any source. You can use +and accept traffic to and from any source. You can use [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) to restrict connections to and from selected pods, namespaces, and the Internet. @@ -437,13 +462,13 @@ networkPolicy: enabled: true ``` -The default policy deployed by the auto deploy pipeline will allow -traffic within a local namespace and from the `gitlab-managed-apps` -namespace. All other inbound connection will be blocked. 
Outbound +The default policy deployed by the Auto Deploy pipeline allows +traffic within a local namespace, and from the `gitlab-managed-apps` +namespace. All other inbound connections are blocked. Outbound traffic (for example, to the Internet) is not affected by the default policy. You can also provide a custom [policy specification](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#networkpolicyspec-v1-networking-k8s-io) -via the `.gitlab/auto-deploy-values.yaml` file, for example: +in the `.gitlab/auto-deploy-values.yaml` file, for example: ```yaml networkPolicy: @@ -461,16 +486,19 @@ networkPolicy: app.gitlab.com/managed_by: gitlab ``` -For more information on how to install Network Policies, see +For more information on installing Network Policies, see [Install Cilium using GitLab CI/CD](../../user/clusters/applications.md#install-cilium-using-gitlab-cicd). ### Web Application Firewall (ModSecurity) customization > [Introduced](https://gitlab.com/gitlab-org/charts/auto-deploy-app/-/merge_requests/44) in GitLab 12.8. -Customization on an [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) or on a deployment base is available for clusters with [ModSecurity installed](../../user/clusters/applications.md#web-application-firewall-modsecurity). +Customization on an [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) +or on a deployment base is available for clusters with +[ModSecurity installed](../../user/clusters/applications.md#web-application-firewall-modsecurity). -To enable ModSecurity with Auto Deploy, you need to create a `.gitlab/auto-deploy-values.yaml` file in your project with the following attributes. +To enable ModSecurity with Auto Deploy, you must create a `.gitlab/auto-deploy-values.yaml` +file in your project with the following attributes. 
|Attribute | Description | Default | -----------|-------------|---------| @@ -481,7 +509,7 @@ To enable ModSecurity with Auto Deploy, you need to create a `.gitlab/auto-deplo In the following `auto-deploy-values.yaml` example, some custom settings are enabled for ModSecurity. Those include setting its engine to process rules instead of only logging them, while adding two specific -rules which are header-based: +header-based rules: ```yaml ingress: @@ -500,17 +528,17 @@ ingress: ### Running commands in the container Applications built with [Auto Build](#auto-build) using Herokuish, the default -unless you have [a custom Dockerfile](#auto-build-using-a-dockerfile), may require -commands to be wrapped as follows: +unless your repository contains [a custom Dockerfile](#auto-build-using-a-dockerfile), +may require commands to be wrapped as follows: ```shell /bin/herokuish procfile exec $COMMAND ``` -This might be necessary, for example, when: +Some of the reasons you may need to wrap commands: - Attaching using `kubectl exec`. -- Using GitLab's [Web Terminal](../../ci/environments.md#web-terminals). +- Using GitLab's [Web Terminal](../../ci/environments/index.md#web-terminals). For example, to start a Rails console from the application root directory, run: @@ -520,12 +548,12 @@ For example, to start a Rails console from the application root directory, run: ## Auto Monitoring -Once your application is deployed, Auto Monitoring makes it possible to monitor +After your application deploys, Auto Monitoring helps you monitor your application's server and response metrics right out of the box. 
Auto Monitoring uses [Prometheus](../../user/project/integrations/prometheus.md) to
-get system metrics such as CPU and memory usage directly from
+retrieve system metrics, such as CPU and memory usage, directly from
[Kubernetes](../../user/project/integrations/prometheus_library/kubernetes.md),
-and response metrics such as HTTP error rates, latency, and throughput from the
+and response metrics, such as HTTP error rates, latency, and throughput, from the
[NGINX server](../../user/project/integrations/prometheus_library/nginx_ingress.md).

The metrics include:

@@ -538,14 +566,14 @@ GitLab provides some initial alerts for you after you install Prometheus:

- Ingress status code `500` > 0.1%
- NGINX status code `500` > 0.1%

-To make use of Auto Monitoring:
+To use Auto Monitoring:

1. [Install and configure the requirements](index.md#requirements).
-1. [Enable Auto DevOps](index.md#enablingdisabling-auto-devops) if you haven't done already.
-1. Finally, go to your project's **CI/CD > Pipelines** and run a pipeline.
-1. Once the pipeline finishes successfully, open the
-   [monitoring dashboard for a deployed environment](../../ci/environments.md#monitoring-environments)
+1. [Enable Auto DevOps](index.md#enablingdisabling-auto-devops), if you haven't already done so.
+1. Navigate to your project's **{rocket}** **CI/CD > Pipelines** and click **Run Pipeline**.
+1. After the pipeline finishes successfully, open the
+   [monitoring dashboard for a deployed environment](../../ci/environments/index.md#monitoring-environments)
   to view the metrics of your deployed application. To view the metrics of the
-   whole Kubernetes cluster, navigate to **Operations > Metrics**.
+   whole Kubernetes cluster, navigate to **{cloud-gear}** **Operations > Metrics**.
![Auto Metrics](img/auto_monitoring.png) diff --git a/doc/topics/autodevops/upgrading_postgresql.md b/doc/topics/autodevops/upgrading_postgresql.md index ccb009905eb..893f7ba7cde 100644 --- a/doc/topics/autodevops/upgrading_postgresql.md +++ b/doc/topics/autodevops/upgrading_postgresql.md @@ -39,7 +39,7 @@ being modified after the database dump is created. 1. Get the Kubernetes namespace for the environment. It typically looks like `<project-name>-<project-id>-<environment>`. In our example, the namespace is called `minimal-ruby-app-4349298-production`. - ```sh + ```shell $ kubectl get ns NAME STATUS AGE @@ -48,13 +48,13 @@ being modified after the database dump is created. 1. For ease of use, export the namespace name: - ```sh + ```shell export APP_NAMESPACE=minimal-ruby-app-4349298-production ``` 1. Get the deployment name for your application with the following command. In our example, the deployment name is `production`. - ```sh + ```shell $ kubectl get deployment --namespace "$APP_NAMESPACE" NAME READY UP-TO-DATE AVAILABLE AGE production 2/2 2 2 7d21h @@ -64,7 +64,7 @@ being modified after the database dump is created. 1. To prevent the database from being modified, set replicas to 0 for the deployment with the following command. We use the deployment name from the previous step (`deployments/<DEPLOYMENT_NAME>`). - ```sh + ```shell $ kubectl scale --replicas=0 deployments/production --namespace "$APP_NAMESPACE" deployment.extensions/production scaled ``` @@ -75,7 +75,7 @@ being modified after the database dump is created. 1. Get the service name for PostgreSQL. The name of the service should end with `-postgres`. In our example the service name is `production-postgres`. - ```sh + ```shell $ kubectl get svc --namespace "$APP_NAMESPACE" NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE production-auto-deploy ClusterIP 10.30.13.90 <none> 5000/TCP 7d14h @@ -84,7 +84,7 @@ being modified after the database dump is created. 1. 
Get the pod name for PostgreSQL with the following command. In our example, the pod name is `production-postgres-5db86568d7-qxlxv`. - ```sh + ```shell $ kubectl get pod --namespace "$APP_NAMESPACE" -l app=production-postgres NAME READY STATUS RESTARTS AGE production-postgres-5db86568d7-qxlxv 1/1 Running 0 7d14h @@ -92,7 +92,7 @@ being modified after the database dump is created. 1. Connect to the pod with: - ```sh + ```shell kubectl exec -it production-postgres-5db86568d7-qxlxv --namespace "$APP_NAMESPACE" bash ``` @@ -104,7 +104,7 @@ being modified after the database dump is created. - You will be asked for the database password, the default is `testing-password`. - ```sh + ```shell ## Format is: # pg_dump -h SERVICE_NAME -U USERNAME DATABASE_NAME > /tmp/backup.sql @@ -115,7 +115,7 @@ being modified after the database dump is created. 1. Download the dump file with the following command: - ```sh + ```shell kubectl cp --namespace "$APP_NAMESPACE" production-postgres-5db86568d7-qxlxv:/tmp/backup.sql backup.sql ``` @@ -126,12 +126,12 @@ volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) used to store the underlying data for PostgreSQL is marked as `Delete` when the pods and pod claims that use the volume is deleted. -This is signficant as, when you opt into the newer 8.2.1 PostgreSQL, the older 0.7.1 PostgreSQL is +This is significant as, when you opt into the newer 8.2.1 PostgreSQL, the older 0.7.1 PostgreSQL is deleted causing the persistent volumes to be deleted as well. 
You can verify this by using the following command:

-```sh
+```shell
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                               STORAGECLASS   REASON   AGE
pvc-0da80c08-5239-11ea-9c8d-42010a8e0096   8Gi        RWO            Delete           Bound    minimal-ruby-app-4349298-staging/staging-postgres   standard                7d22h
@@ -145,7 +145,7 @@ interested in keeping the volumes for the staging and production of the
`minimal-ruby-app-4349298` application, the volume names here are
`pvc-0da80c08-5239-11ea-9c8d-42010a8e0096` and `pvc-9085e3d3-5239-11ea-9c8d-42010a8e0096`:

-```sh
+```shell
$ kubectl patch pv pvc-0da80c08-5239-11ea-9c8d-42010a8e0096 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
persistentvolume/pvc-0da80c08-5239-11ea-9c8d-42010a8e0096 patched
$ kubectl patch pv pvc-9085e3d3-5239-11ea-9c8d-42010a8e0096 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
@@ -164,17 +164,19 @@ deleted, you can choose to retain the
[persistent volume](#retain-persistent-volumes).

TIP: **Tip:** You can also
-[scope](../../ci/environments.md#scoping-environments-with-specs) the
-`AUTO_DEVOPS_POSTGRES_CHANNEL` and `POSTGRES_VERSION` variables to
-specific environments, e.g. `staging`.
+[scope](../../ci/environments/index.md#scoping-environments-with-specs) the
+`AUTO_DEVOPS_POSTGRES_CHANNEL`, `AUTO_DEVOPS_POSTGRES_DELETE_V1`, and
+`POSTGRES_VERSION` variables to specific environments, for example `staging`.

1. Set `AUTO_DEVOPS_POSTGRES_CHANNEL` to `2`. This opts into using the
   newer 8.2.1-based PostgreSQL, and removes the older 0.7.1-based PostgreSQL.
-1. Set `POSTGRES_VERSION` to `9.6.16`. This is the minimum PostgreSQL
+1. Set `AUTO_DEVOPS_POSTGRES_DELETE_V1` to a non-empty value. This flag is a
+   safeguard to prevent accidental deletion of databases.
+1. Set `POSTGRES_VERSION` to `11.7`. This is the minimum PostgreSQL
   version supported.
1. Set `PRODUCTION_REPLICAS` to `0`. For other environments, use
-   `REPLICAS` with an [environment scope](../../ci/environments.md#scoping-environments-with-specs).
+ `REPLICAS` with an [environment scope](../../ci/environments/index.md#scoping-environments-with-specs). 1. If you have set the `DB_INITIALIZE` or `DB_MIGRATE` variables, either remove the variables, or rename the variables temporarily to `XDB_INITIALIZE` or the `XDB_MIGRATE` to effectively disable them. @@ -190,7 +192,7 @@ specific environments, e.g. `staging`. 1. Get the pod name for the new PostgreSQL, in our example, the pod name is `production-postgresql-0`: - ```sh + ```shell $ kubectl get pod --namespace "$APP_NAMESPACE" -l app=postgresql NAME READY STATUS RESTARTS AGE production-postgresql-0 1/1 Running 0 19m @@ -198,13 +200,13 @@ specific environments, e.g. `staging`. 1. Copy the dump file from the backup steps to the pod: - ```sh + ```shell kubectl cp --namespace "$APP_NAMESPACE" backup.sql production-postgresql-0:/tmp/backup.sql ``` 1. Connect to the pod: - ```sh + ```shell kubectl exec -it production-postgresql-0 --namespace "$APP_NAMESPACE" bash ``` @@ -214,7 +216,7 @@ specific environments, e.g. `staging`. - `USERNAME` is the username you have configured for PostgreSQL. The default is `user`. - `DATABASE_NAME` is usually the environment name. - ```sh + ```shell ## Format is: # psql -U USERNAME -d DATABASE_NAME < /tmp/backup.sql |