# Kubernetes clusters

> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/issues/35954) in GitLab 10.1 for projects.
> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/issues/34758) in
>   GitLab 11.6 for [groups](../../group/clusters/index.md).
> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/issues/39840) in
>   GitLab 11.11 for [instances](../../instance/clusters/index.md).

GitLab provides many features with a Kubernetes integration. Kubernetes can be
integrated with projects, but also:

- [Groups](../../group/clusters/index.md).
- [Instances](../../instance/clusters/index.md).

NOTE: **Scalable app deployment with GitLab and Google Cloud Platform**
[Watch the webcast](https://about.gitlab.com/webcast/scalable-app-deploy/) and learn how to spin up a Kubernetes cluster managed by Google Cloud Platform (GCP) in a few clicks.

## Overview

Using the GitLab project Kubernetes integration, you can:

- Use [Review Apps](../../../ci/review_apps/index.md).
- Run [pipelines](../../../ci/pipelines/index.md).
- [Deploy](#deploying-to-a-kubernetes-cluster) your applications.
- Detect and [monitor Kubernetes](#kubernetes-monitoring).
- Use it with [Auto DevOps](#auto-devops).
- Use [Web terminals](#web-terminals).
- Use [Deploy Boards](#deploy-boards-premium). **(PREMIUM)**
- Use [Canary Deployments](#canary-deployments-premium). **(PREMIUM)**
- View [Logs](#logs).
- Run serverless workloads on [Kubernetes with Knative](serverless/index.md).

### Deploy Boards **(PREMIUM)**

GitLab's Deploy Boards offer a consolidated view of the current health and
status of each CI [environment](../../../ci/environments.md) running on Kubernetes,
displaying the status of the pods in the deployment. Developers and other
teammates can view the progress and status of a rollout, pod by pod, in the
workflow they already use without any need to access Kubernetes.

[Read more about Deploy Boards](../deploy_boards.md)

### Canary Deployments **(PREMIUM)**

Leverage [Kubernetes' Canary deployments](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments)
and visualize your canary deployments right inside the Deploy Board, without
the need to leave GitLab.

[Read more about Canary Deployments](../canary_deployments.md)

### Logs

GitLab makes it easy to view the logs of running pods in connected Kubernetes clusters. By displaying the logs directly in GitLab, developers can avoid having to manage console tools or jump to a different interface.

[Read more about Kubernetes logs](kubernetes_pod_logs.md)

### Kubernetes monitoring

Automatically detect and monitor Kubernetes metrics. Automatic monitoring of
[NGINX Ingress](../integrations/prometheus_library/nginx.md) is also supported.

[Read more about Kubernetes monitoring](../integrations/prometheus_library/kubernetes.md)

### Auto DevOps

Auto DevOps automatically detects, builds, tests, deploys, and monitors your
applications.

To make full use of Auto DevOps (Auto Deploy, Auto Review Apps, and Auto Monitoring),
you will need the Kubernetes project integration enabled.

[Read more about Auto DevOps](../../../topics/autodevops/index.md)

NOTE: **Note:**
Kubernetes clusters can be used without Auto DevOps.

### Web terminals

> Introduced in GitLab 8.15.

When enabled, the Kubernetes integration adds [web terminal](../../../ci/environments.md#web-terminals)
support to your [environments](../../../ci/environments.md). This is based on the `exec` functionality found in
Docker and Kubernetes, so you get a new shell session within your existing
containers. To use this integration, you should deploy to Kubernetes using
the [deployment variables](#deployment-variables), ensuring any deployments, replica sets, and
pods are annotated with:

- `app.gitlab.com/env: $CI_ENVIRONMENT_SLUG`
- `app.gitlab.com/app: $CI_PROJECT_PATH_SLUG`

`$CI_ENVIRONMENT_SLUG` and `$CI_PROJECT_PATH_SLUG` are the values of
the CI variables. A manifest sketch carrying these annotations follows below.

You must be the project owner or have `maintainer` permissions to use terminals. Support is limited
to the first container in the first pod of your environment.
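
As a rough illustration only, a Deployment manifest carrying these annotations could look
like the following sketch. The resource names and image are placeholders, and the `$CI_*`
values are assumed to be filled in by your own templating or substitution step at deploy time:

```yaml
# Hypothetical Deployment manifest annotated for GitLab web terminals.
# All names and the image are placeholders; the $CI_* values must be
# substituted (for example with envsubst) before the manifest is applied.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    app.gitlab.com/env: $CI_ENVIRONMENT_SLUG
    app.gitlab.com/app: $CI_PROJECT_PATH_SLUG
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        app.gitlab.com/env: $CI_ENVIRONMENT_SLUG
        app.gitlab.com/app: $CI_PROJECT_PATH_SLUG
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest  # placeholder image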

## Adding and removing clusters

See [Adding and removing Kubernetes clusters](add_remove_clusters.md) for details on how
to:

- Create a cluster in Google Cloud Platform (GCP) or Amazon Elastic Kubernetes Service
  (EKS) using GitLab's UI.
- Add an integration to an existing cluster from any Kubernetes platform.

## Cluster configuration

After [adding a Kubernetes cluster](add_remove_clusters.md) to GitLab, read this section that covers
important considerations for configuring Kubernetes clusters with GitLab.

### Security implications

CAUTION: **Important:**
The whole cluster security is based on a model where [developers](../../permissions.md)
are trusted, so **only trusted users should be allowed to control your clusters**.

The default cluster configuration grants access to a wide set of
functionalities needed to successfully build and deploy a containerized
application. Bear in mind that the same credentials are used for all the
applications running on the cluster.

### GitLab-managed clusters

> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22011) in GitLab 11.5.
> - Became [optional](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/26565) in GitLab 11.11.

You can choose to allow GitLab to manage your cluster for you. If your cluster is
managed by GitLab, resources for your projects will be automatically created. See the
[Access controls](add_remove_clusters.md#access-controls) section for details on which resources will
be created.

If you choose to manage your own cluster, project-specific resources will not be created
automatically. If you are using [Auto DevOps](../../../topics/autodevops/index.md), you will
need to explicitly provide the `KUBE_NAMESPACE` [deployment variable](#deployment-variables)
that will be used by your deployment jobs, otherwise a namespace will be created for you.
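
One way to provide that variable is sketched below. This is an illustration only, assuming
a namespace you have already created in your self-managed cluster; you could equally set it
as a project CI/CD variable in the UI instead of in `.gitlab-ci.yml`:

```yaml
# Hypothetical snippet for a cluster you manage yourself: supply the
# namespace your deployment jobs should use. The namespace name is a
# placeholder and must already exist in the cluster.
variables:
  KUBE_NAMESPACE: my-existing-namespace
```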

#### Important notes

Note the following with GitLab and clusters:

- If you [install applications](#installing-applications) on your cluster, GitLab will
  create the resources required to run these even if you have chosen to manage your own
  cluster.
- Be aware that manually managing resources that have been created by GitLab, like
  namespaces and service accounts, can cause unexpected errors. If this occurs, try
  [clearing the cluster cache](#clearing-the-cluster-cache).

#### Clearing the cluster cache

> [Introduced](https://gitlab.com/gitlab-org/gitlab/issues/31759) in GitLab 12.6.

If you choose to allow GitLab to manage your cluster for you, GitLab stores a cached
version of the namespaces and service accounts it creates for your projects. If you
modify these resources in your cluster manually, this cache can fall out of sync with
your cluster, which can cause deployment jobs to fail.

To clear the cache:

1. Navigate to your project's **Operations > Kubernetes** page, and select your cluster.
1. Expand the **Advanced settings** section.
1. Click **Clear cluster cache**.

### Base domain

> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/24580) in GitLab 11.8.

NOTE: **Note:**
You do not need to specify a base domain on cluster settings when using GitLab Serverless. The domain in that case
will be specified as part of the Knative installation. See [Installing Applications](#installing-applications).

Specifying a base domain will automatically set `KUBE_INGRESS_BASE_DOMAIN` as an environment variable.
If you are using [Auto DevOps](../../../topics/autodevops/index.md), this domain will be used for the different
stages. For example, Auto Review Apps and Auto Deploy.

The domain should have a wildcard DNS configured to the Ingress IP address. After Ingress has been installed (see [Installing Applications](#installing-applications)),
you can either:

- Create an `A` record that points to the Ingress IP address with your domain provider.
- Enter a wildcard DNS address using a service such as nip.io or xip.io. For example, `192.168.1.1.xip.io`.
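
As an illustration of how the variable might then be consumed, the sketch below builds a
job's environment URL from it. This is not configuration taken from this page: the job
name, deployment command, and subdomain scheme are all placeholder assumptions:

```yaml
# Hypothetical job using the base domain exposed by the cluster integration
# to construct a per-environment URL.
deploy review:
  stage: deploy
  script: make deploy   # placeholder deployment command
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_ENVIRONMENT_SLUG.$KUBE_INGRESS_BASE_DOMAIN
```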

### Setting the environment scope **(PREMIUM)**

When adding more than one Kubernetes cluster to your project, you need to differentiate
them with an environment scope. The environment scope associates clusters with [environments](../../../ci/environments.md) similar to how the
[environment-specific variables](../../../ci/variables/README.md#limiting-environment-scopes-of-environment-variables) work.

The default environment scope is `*`, which means all jobs, regardless of their
environment, will use that cluster. Each scope can only be used by a single
cluster in a project, and a validation error will occur otherwise.
Also, jobs that don't have an environment keyword set will not be able to access any cluster.

For example, let's say the following Kubernetes clusters exist in a project:

| Cluster     | Environment scope |
| ----------- | ----------------- |
| Development | `*`               |
| Production  | `production`      |

And the following environments are set in
[`.gitlab-ci.yml`](../../../ci/yaml/README.md):

```yaml
stages:
  - test
  - deploy

test:
  stage: test
  script: sh test

deploy to staging:
  stage: deploy
  script: make deploy
  environment:
    name: staging
    url: https://staging.example.com/

deploy to production:
  stage: deploy
  script: make deploy
  environment:
    name: production
    url: https://example.com/
```

The result will then be:

- The Development cluster details will be available in the `deploy to staging`
  job.
- The Production cluster details will be available in the `deploy to production`
  job.
- No cluster details will be available in the `test` job because it doesn't
  define any environment.

### Multiple Kubernetes clusters **(PREMIUM)**

> Introduced in [GitLab Premium](https://about.gitlab.com/pricing/) 10.3.

With GitLab Premium, you can associate more than one Kubernetes cluster with your
project. That way you can have different clusters for different environments,
like dev, staging, production, and so on.

Simply add another cluster, like you did the first time, and make sure to
[set an environment scope](#setting-the-environment-scope-premium) that will
differentiate the new cluster from the rest.

## Installing applications

GitLab can install and manage some applications like Helm, GitLab Runner, Ingress,
Prometheus, and so on, in your project-level cluster. For more information on
installing, upgrading, uninstalling, and troubleshooting applications for
your project cluster, see
[GitLab Managed Apps](../../clusters/applications.md).

## Deploying to a Kubernetes cluster

A Kubernetes cluster can be the destination for a deployment job. If:

- The cluster is integrated with GitLab, special
  [deployment variables](#deployment-variables) are made available to your job
  and configuration is not required. You can immediately begin interacting with
  the cluster from your jobs using tools such as `kubectl` or `helm`, as in the
  sketch after this list.
- You don't use GitLab's cluster integration, you can still deploy to your
  cluster. However, you will need to configure Kubernetes tools yourself
  using [environment variables](../../../ci/variables/README.md#creating-a-custom-environment-variable)
  before you can interact with the cluster from your jobs.
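
The following is a minimal, assumption-heavy sketch of such a job on an integrated cluster.
The image, manifest path, and deployment name are placeholders; credentials are only
injected because the job defines an `environment`:

```yaml
# Hypothetical deployment job. Because the cluster is integrated with GitLab
# and the job defines an environment, KUBECONFIG is injected automatically
# and kubectl picks it up without further configuration.
deploy to production:
  stage: deploy
  image: bitnami/kubectl:latest                # placeholder image providing kubectl
  script:
    - kubectl apply -f manifests/              # placeholder path to your manifests
    - kubectl rollout status deployment/my-app --namespace "$KUBE_NAMESPACE"
  environment:
    name: production
```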

### Deployment variables

The Kubernetes cluster integration exposes the following
[deployment variables](../../../ci/variables/README.md#deployment-environment-variables) in the
GitLab CI/CD build environment.

| Variable | Description |
| -------- | ----------- |
| `KUBE_URL` | Equal to the API URL. |
| `KUBE_TOKEN` | The Kubernetes token of the [environment service account](add_remove_clusters.md#access-controls). |
| `KUBE_NAMESPACE` | The namespace associated with the project's deployment service account. In the format `<project_name>-<project_id>-<environment>`. For GitLab-managed clusters, a matching namespace is automatically created by GitLab in the cluster. |
| `KUBE_CA_PEM_FILE` | Path to a file containing PEM data. Only present if a custom CA bundle was specified. |
| `KUBE_CA_PEM` | (**deprecated**) Raw PEM data. Only if a custom CA bundle was specified. |
| `KUBECONFIG` | Path to a file containing `kubeconfig` for this deployment. The CA bundle is embedded if specified. This config also embeds the same token defined in `KUBE_TOKEN`, so you likely will need only this variable. This variable name is also automatically picked up by `kubectl`, so you won't need to reference it explicitly if using `kubectl`. |
| `KUBE_INGRESS_BASE_DOMAIN` | From GitLab 11.8, this variable can be used to set a domain per cluster. See [cluster domains](#base-domain) for more information. |

NOTE: **Note:**
Prior to GitLab 11.5, `KUBE_TOKEN` was the Kubernetes token of the main
service account of the cluster integration.

NOTE: **Note:**
If your cluster was created before GitLab 12.2, the default `KUBE_NAMESPACE` will be set to `<project_name>-<project_id>`.

### Custom namespace

> [Introduced](https://gitlab.com/gitlab-org/gitlab/issues/27630) in GitLab 12.6.

The Kubernetes integration defaults to project-environment-specific namespaces
of the form `<project_name>-<project_id>-<environment>` (see [Deployment
variables](#deployment-variables)).

For **non**-GitLab-managed clusters, the namespace can be customized using
[`environment:kubernetes:namespace`](../../../ci/environments.md#configuring-kubernetes-deployments)
in `.gitlab-ci.yml`, for example as sketched below.
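
A brief sketch of what that might look like in a job follows. The job name, script, and
namespace are placeholders, not values from this page:

```yaml
# Hypothetical job setting a custom namespace on a non-GitLab-managed cluster.
deploy:
  stage: deploy
  script: make deploy                  # placeholder deployment command
  environment:
    name: production
    kubernetes:
      namespace: my-custom-namespace   # placeholder namespace
```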

NOTE: **Note:** When using a [GitLab-managed cluster](#gitlab-managed-clusters), the
namespaces are created automatically prior to deployment and [cannot be
customized](https://gitlab.com/gitlab-org/gitlab/issues/38054).

### Troubleshooting

Before the deployment job starts, GitLab creates the following specifically for
the deployment job:

- A namespace.
- A service account.

However, sometimes GitLab cannot create them. In such instances, your job will fail with the message:

```text
This job failed because the necessary resources were not successfully created.
```

To find the cause of this error when creating a namespace and service account, check the [logs](../../../administration/logs.md#kuberneteslog).

Reasons for failure include:

- The token you gave GitLab does not have [`cluster-admin`](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles)
  privileges required by GitLab.
- Missing `KUBECONFIG` or `KUBE_TOKEN` variables. To be passed to your job, they must have a matching
  [`environment:name`](../../../ci/environments.md#defining-environments). If your job has no
  `environment:name` set, it will not be passed the Kubernetes credentials.

NOTE: **Note:**
Project-level clusters upgraded from GitLab 12.0 or older may be configured
in a way that causes this error. Ensure you deselect the
[GitLab-managed cluster](#gitlab-managed-clusters) option if you want to manage
namespaces and service accounts yourself.

## Monitoring your Kubernetes cluster **(ULTIMATE)**

> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/4701) in [GitLab Ultimate](https://about.gitlab.com/pricing/) 10.6.

When [Prometheus is deployed](#installing-applications), GitLab will automatically monitor the cluster's health. At the top of the cluster settings page, CPU and memory utilization is displayed, along with the total amount available. Keeping an eye on cluster resources can be important; if the cluster runs out of memory, pods may be shut down or fail to start.

![Cluster Monitoring](img/k8s_cluster_monitoring.png)