---
stage: Configure
group: Configure
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---

# Kubernetes integration - development guidelines **(FREE)**

This document provides various guidelines when developing for the GitLab
[Kubernetes integration](../user/infrastructure/clusters/index.md).

## Development

### Architecture

Some Kubernetes operations, such as creating restricted project
namespaces, are performed on the GitLab Rails application. These
operations are performed using a [client library](#client-library),
and carry an element of risk. The operations are
run as the same user running the GitLab Rails application. For more information,
read the [security](#security) section below.

Some Kubernetes operations, such as installing cluster applications, are
performed on one-off pods on the Kubernetes cluster itself. These
installation pods are named `install-<application_name>` and
are created within the `gitlab-managed-apps` namespace.

In terms of code organization, we generally add objects that represent
Kubernetes resources in
[`lib/gitlab/kubernetes`](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/lib/gitlab/kubernetes).
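
For example, a wrapper for a Kubernetes resource might look roughly like the
following sketch. The class name and interface here are hypothetical and not
taken from the codebase:

```ruby
# Illustrative sketch only: a hypothetical value object in the style of the
# classes under lib/gitlab/kubernetes.
module Gitlab
  module Kubernetes
    class ExampleConfigMap
      def initialize(name, namespace, data = {})
        @name = name
        @namespace = namespace
        @data = data
      end

      # Builds a Kubeclient::Resource that can be passed to the client library
      def generate
        ::Kubeclient::Resource.new(
          metadata: { name: @name, namespace: @namespace },
          data: @data
        )
      end
    end
  end
end
```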

### Client library

We use the [`kubeclient`](https://rubygems.org/gems/kubeclient) gem to
perform Kubernetes API calls. As the `kubeclient` gem does not support
different API Groups (such as `apis/rbac.authorization.k8s.io`) from a
single client, we have created a wrapper class,
[`Gitlab::Kubernetes::KubeClient`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/kubernetes/kube_client.rb),
that enables you to achieve this.

Only selected Kubernetes API groups are supported. If you need to use an
unsupported API group or method, add support for it to
[`Gitlab::Kubernetes::KubeClient`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/kubernetes/kube_client.rb).
New API groups or API group versions can be added to
`SUPPORTED_API_GROUPS`, which creates an internal client for that group.
New methods can be added as a delegation to the relevant internal client.
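
As a rough, non-authoritative sketch of what this looks like (the real
`SUPPORTED_API_GROUPS` contents and delegations differ):

```ruby
# Illustrative sketch only: the real class defines many more groups and methods.
require 'active_support/core_ext/module/delegation'

module Gitlab
  module Kubernetes
    class KubeClient
      SUPPORTED_API_GROUPS = {
        core: { group: 'api', version: 'v1' },
        rbac: { group: 'apis/rbac.authorization.k8s.io', version: 'v1' }
        # New API groups or versions are added here; each entry results in an
        # internal client for that group.
      }.freeze

      # New methods are added as delegations to the relevant internal client.
      delegate :get_pods, to: :core_client
      delegate :create_cluster_role_binding, to: :rbac_client
    end
  end
end
```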

### Performance considerations

All calls to the Kubernetes API must be made in a background process. Don't
perform Kubernetes API calls within a web request. This blocks the
web server, and can lead to a denial-of-service (DoS) in GitLab, as
the Kubernetes cluster's response times are outside of our control.

The easiest way to ensure your calls happen in a background process is to
delegate any such work to a [Sidekiq worker](sidekiq_style_guide.md).
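
For example, a minimal sketch of such a worker, using a hypothetical
`ExampleClusterPodsWorker` (real workers carry additional attributes required
by the Sidekiq style guide):

```ruby
# Illustrative sketch: performs the Kubernetes API call in the background
# instead of in the web request. The worker name is hypothetical.
class ExampleClusterPodsWorker
  include ApplicationWorker

  idempotent!

  def perform(cluster_id)
    cluster = Clusters::Cluster.find_by_id(cluster_id)
    return unless cluster

    # The Kubernetes API call happens here, outside of any web request.
    cluster.platform_kubernetes.kubeclient.get_pods
  end
end
```

The web request then only enqueues the job, for example with
`ExampleClusterPodsWorker.perform_async(cluster.id)`.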

If you need to make calls to Kubernetes and return the response, but a
background worker isn't a good fit, consider using
[reactive caching](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/concerns/reactive_caching.rb).
For example:

```ruby
# Calculated in a background job by the ReactiveCaching concern
def calculate_reactive_cache!
  { pods: cluster.platform_kubernetes.kubeclient.get_pods }
end

# Returns the cached pods once the cache has been populated
def pods
  with_reactive_cache do |data|
    data[:pods]
  end
end
```
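
Keep in mind that `with_reactive_cache` only yields data once a background
worker has calculated and cached the value; until then it returns `nil`, so
callers of `pods` in this example should handle a `nil` (not yet loaded)
result.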

### Testing

We have some WebMock stubs in
[`KubernetesHelpers`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/helpers/kubernetes_helpers.rb)
which can help with mocking out calls to the Kubernetes API in your tests.
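
For instance, a spec might use these helpers along the following lines. The
stub helper names below are indicative only; check `KubernetesHelpers` for the
stubs that actually exist:

```ruby
# Illustrative sketch: the stub helper names are indicative only.
RSpec.describe 'fetching pods from a cluster' do
  include KubernetesHelpers

  let(:cluster) { create(:cluster, :provided_by_gcp) }
  let(:platform) { cluster.platform_kubernetes }

  before do
    # WebMock stubs for the Kubernetes discovery and pods endpoints
    stub_kubeclient_discover(platform.api_url)
    stub_kubeclient_pods
  end

  it 'returns the stubbed pods' do
    expect(platform.kubeclient.get_pods).to be_present
  end
end
```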

### Amazon EKS integration

This section outlines the process for allowing a GitLab instance to create EKS clusters.

The following prerequisites must be in place:

A `Customer` AWS account. The EKS cluster is created in this account. The following
resources must be present:

- A provisioning role that has permissions to create the cluster
  and associated resources. It must list the `GitLab` AWS account
  as a trusted entity.
- A VPC, management role, security group, and subnets for use by the cluster.

A `GitLab` AWS account. This is the account that performs
the provisioning actions. The following resources must be present:

- A service account with permissions to assume the provisioning
  role in the `Customer` account above.
- Credentials for this service account, configured in GitLab in
  the `kubernetes` section of `gitlab.yml`.

The process for creating a cluster is as follows:

1. Using the `:provision_role_external_id`, GitLab assumes the role provided
   by `:provision_role_arn` and stores a set of temporary credentials on the
   provider record. By default these credentials are valid for one hour (see
   the sketch after this list).
1. A CloudFormation stack is created, based on the
   [`AWS CloudFormation EKS template`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/vendor/aws/cloudformation/eks_cluster.yaml).
   This triggers creation of all resources required for an EKS cluster.
1. GitLab polls the status of the stack until all resources are ready,
   which takes somewhere between 10 and 15 minutes in most cases.
1. When the stack is ready, GitLab stores the cluster details and generates
   another set of temporary credentials, this time to allow connecting to
   the cluster via `kubeclient`. These credentials are valid for one minute.
1. GitLab configures the worker nodes so that they are able to authenticate
   to the cluster, and creates a service account for itself for future operations.
1. Credentials that are no longer required are removed. This deletes the following
   attributes:

   - `access_key_id`
   - `secret_access_key`
   - `session_token`
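
As a rough illustration of the first step only, assuming the provisioning role
with the AWS SDK for Ruby could look something like the sketch below. The
`provider` attribute names are hypothetical:

```ruby
# Illustrative sketch of step 1: assume the Customer account's provisioning
# role and keep the temporary credentials. Attribute names are hypothetical.
require 'aws-sdk-core'

sts = Aws::STS::Client.new(region: 'us-east-1') # region chosen for illustration

response = sts.assume_role(
  role_arn: provider.role_arn,            # corresponds to :provision_role_arn
  external_id: provider.role_external_id, # corresponds to :provision_role_external_id
  role_session_name: 'gitlab-eks-provisioning',
  duration_seconds: 3600                  # temporary credentials valid for one hour
)

# These values are stored on the provider record and removed again once they
# are no longer required.
credentials = {
  access_key_id: response.credentials.access_key_id,
  secret_access_key: response.credentials.secret_access_key,
  session_token: response.credentials.session_token
}
```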

## Security

### Server Side Request Forgery (SSRF) attacks

As URLs for Kubernetes clusters are user-controlled, the Kubernetes
integration is susceptible to Server-Side Request Forgery (SSRF) attacks. You
should understand the mitigation strategies if you are adding more API calls
to a cluster.

Mitigation strategies include:

1. Not allowing redirects to attacker-controlled resources:
   [`Gitlab::Kubernetes::KubeClient`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/kubernetes/kube_client.rb)
   can be configured to prevent any redirects by passing in
   `http_max_redirects: 0` as an option (see the sketch after this list).
1. Not exposing error messages: by doing so, we
   prevent attackers from triggering errors to expose results from
   attacker-controlled requests. For example, we do not expose (or store)
   raw error messages:

   ```ruby
   begin
     # Perform the Kubernetes API call here.
   rescue Kubernetes::HttpError => e
     # Bad: the raw message may contain attacker-controlled content.
     # app.make_errored!("Kubernetes error: #{e.message}")

     # Good: expose only the error code.
     app.make_errored!("Kubernetes error: #{e.error_code}")
   end
   ```
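
As a sketch of the first mitigation, constructing a raw `Kubeclient::Client`
that refuses to follow redirects could look like the following. The URL and
token are placeholders; in GitLab this option would be passed through the
wrapper class rather than by constructing a client directly:

```ruby
# Illustrative sketch: http_max_redirects: 0 prevents the client from
# following redirects to attacker-controlled locations.
require 'kubeclient'

client = Kubeclient::Client.new(
  'https://kubernetes.example.com/api', # placeholder cluster URL
  'v1',
  auth_options: { bearer_token: 'REDACTED' }, # placeholder credentials
  http_max_redirects: 0
)
```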

## Debugging Kubernetes integrations

Logs related to the Kubernetes integration can be found in
[`kubernetes.log`](../administration/logs.md#kuberneteslog). On a local
GDK install, these logs are present in `log/kubernetes.log`.

Some services, such as
[`Clusters::Applications::InstallService`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/clusters/applications/install_service.rb#L18),
rescue `StandardError`, which can make it harder to debug issues in a
development environment. The current workaround is to temporarily
comment out the `rescue` in your local development source.

You can also follow the installation logs to debug issues related to
installation. After the installation or upgrade is underway, wait for the
pod to be created. Then run the following to obtain the pod's logs as
they are written:

```shell
kubectl logs <pod_name> --follow -n gitlab-managed-apps
```